Posted to commits@doris.apache.org by mo...@apache.org on 2020/02/28 12:10:32 UTC

[incubator-doris-website] branch asf-site updated: update doc

This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-doris-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 699b8a4  update doc
699b8a4 is described below

commit 699b8a4b0bf311c37bd41a97356231481947d625
Author: morningman <mo...@163.com>
AuthorDate: Fri Feb 28 20:10:09 2020 +0800

    update doc
---
 .../alter-table/alter-table-bitmap-index.md.txt    |  80 ++
 .../alter-table/alter-table-schema-change.md.txt   |   3 +-
 .../cn/administrator-guide/broker.md.txt           |  19 +-
 .../cn/administrator-guide/config/fe_config.md.txt |   8 +
 .../administrator-guide/dynamic-partition.md.txt   | 179 ++++
 .../http-actions/compaction-action.md.txt          |  79 ++
 .../http-actions/restore-tablet.md.txt             |   2 +-
 .../load-data/broker-load-manual.md.txt            |  10 +-
 .../load-data/insert-into-manual.md.txt            | 134 ++-
 .../load-data/load-manual.md.txt                   |  14 +-
 .../load-data/stream-load-manual.md.txt            |   7 +-
 .../operation/metadata-operation.md.txt            |   8 +-
 .../operation/multi-tenant.md.txt                  |   8 +-
 .../operation/tablet-meta-tool.md.txt              |   2 +-
 .../operation/tablet-repair-and-balance.md.txt     |   8 +-
 .../cn/administrator-guide/privilege.md.txt        |  16 +-
 .../cn/administrator-guide/variables.md.txt        |  18 +-
 .../documentation/cn/community/pull-request.md.txt |  34 +-
 .../cn/developer-guide/debug-tool.md.txt           |   2 +-
 .../cn/developer-guide/format-code.md.txt          |  66 ++
 .../extending-doris/user-defined-function.md.txt   |   2 +-
 .../cn/getting-started/basic-usage.md.txt          |   3 +-
 .../cn/getting-started/best-practice.md.txt        |  26 +-
 .../cn/getting-started/data-model-rollup.md.txt    |  64 +-
 .../cn/getting-started/data-partition.md.txt       |   6 +-
 .../cn/getting-started/hit-the-rollup.md.txt       |   8 +-
 .../documentation/cn/installing/compilation.md.txt |   3 +-
 .../cn/installing/install-deploy.md.txt            |   6 +-
 .../cn/internal/grouping_sets_design.md.txt        | 510 ++++++++++++
 .../cn/internal/metadata-design.md.txt             |   4 +-
 .../documentation/cn/internal/spark_load.md.txt    | 205 +++++
 .../aggregate-functions/bitmap.md.txt              | 141 ++--
 .../bitmap_and.md.txt}                             |  32 +-
 .../bitmap_contains.md.txt}                        |  35 +-
 .../bitmap_empty.md.txt}                           |  27 +-
 .../bitmap-functions/bitmap_from_string.md.txt     |  56 ++
 .../bitmap_has_any.md.txt}                         |  32 +-
 .../bitmap_hash.md.txt}                            |  27 +-
 .../bitmap_or.md.txt}                              |  32 +-
 .../bitmap-functions/bitmap_to_string.md.txt       |  62 ++
 .../sql-functions/bitmap-functions/index.rst.txt   |   8 +
 .../to_bitmap.md.txt}                              |  28 +-
 .../{day.md.txt => curdate.md.txt}                 |  35 +-
 .../date-time-functions/date_format.md.txt         |   2 +-
 .../sql-functions/date-time-functions/day.md.txt   |   1 +
 .../date-time-functions/from_unixtime.md.txt       |   2 +-
 .../hour.md.txt}                                   |  22 +-
 .../{day.md.txt => minute.md.txt}                  |  21 +-
 .../{day.md.txt => second.md.txt}                  |  21 +-
 .../date-time-functions/timestampadd.md.txt        |  52 ++
 .../date-time-functions/timestampdiff.md.txt       |  60 ++
 .../sql-functions/hash-functions/index.rst.txt     |   8 +
 .../murmur_hash3_32.md.txt}                        |  42 +-
 .../cn/sql-reference/sql-functions/index.rst.txt   |   2 +
 .../ends_with.md.txt}                              |  28 +-
 .../starts_with.md.txt}                            |  30 +-
 .../Account Management/DROP USER.md.txt            |  13 +-
 .../Administration/SHOW INDEX.md.txt}              |  26 +-
 .../Data Definition/ALTER TABLE.md.txt             |  74 +-
 .../ALTER VIEW.md.txt}                             |  38 +-
 .../Data Definition/CANCEL ALTER.md.txt            |  15 +-
 .../CREATE INDEX.md.txt}                           |  23 +-
 .../Data Definition/CREATE TABLE.md.txt            | 844 +++++++++++--------
 .../Data Definition/DROP INDEX.md.txt}             |  25 +-
 .../sql-statements/Data Definition/RECOVER.md.txt  |   2 +-
 ...{show-function.md.txt => show-functions.md.txt} |  38 +-
 .../Data Manipulation/BROKER LOAD.md.txt           |   4 +-
 .../Data Manipulation/GROUP BY.md.txt              | 163 ++++
 .../sql-statements/Data Manipulation/LOAD.md.txt   |   2 +-
 .../Data Manipulation/SHOW ALTER.md.txt            |  10 +-
 .../SHOW DYNAMIC PARTITION TABLES.md.txt}          |  29 +-
 .../Data Manipulation/SHOW PARTITIONS.md.txt       |  15 +-
 .../Data Manipulation/SHOW TRANSACTION.md.txt      |  80 ++
 .../alter-table/alter-table-bitmap-index_EN.md.txt |  77 ++
 .../alter-table/alter-table-rollup_EN.md.txt       | 181 ++++
 .../alter-table-schema-change_EN.md.txt            | 224 +++++
 .../administrator-guide/alter-table/index.rst.txt  |   9 +
 .../en/administrator-guide/broker_EN.md.txt        | 286 +++++++
 .../administrator-guide/config/fe_config_en.md.txt |   8 +
 .../dynamic-partition_EN.md.txt                    | 185 +++++
 .../http-actions/compaction-action_EN.md.txt       |  78 ++
 .../en/administrator-guide/index.rst.txt           |   1 +
 .../load-data/insert-into-manual_EN.md.txt         | 122 ++-
 .../load-data/load-manual_EN.md.txt                |   2 +-
 .../load-data/stream-load-manual_EN.md.txt         |   7 +-
 .../operation/metadata-operation_EN.md.txt         |   2 +-
 .../en/administrator-guide/privilege_EN.md.txt     |  12 +-
 .../en/administrator-guide/variables_EN.md.txt     |  10 +-
 .../en/developer-guide/format-code.md.txt          |  78 ++
 .../user-defined-function_EN.md.txt                |   2 +-
 .../en/getting-started/basic-usage_EN.md.txt       |   5 +-
 .../en/getting-started/best-practice_EN.md.txt     |  27 +-
 .../en/getting-started/data-model-rollup_EN.md.txt | 120 ++-
 .../en/getting-started/data-partition_EN.md.txt    | 236 +++---
 .../en/installing/install-deploy_EN.md.txt         |   4 +-
 .../en/internal/grouping_sets_design_EN.md.txt     | 494 +++++++++++
 .../aggregate-functions/bitmap_EN.md.txt           | 143 +++-
 .../bitmap-functions/bitmap_and_EN.md.txt}         |  32 +-
 .../bitmap-functions/bitmap_contains_EN.md.txt}    |  32 +-
 .../bitmap-functions/bitmap_empty_EN.md.txt}       |  27 +-
 .../bitmap-functions/bitmap_from_string.md.txt     |  56 ++
 .../bitmap-functions/bitmap_has_any_EN.md.txt}     |  32 +-
 .../bitmap-functions/bitmap_hash_EN.md.txt}        |  27 +-
 .../bitmap-functions/bitmap_or_EN.md.txt}          |  32 +-
 .../bitmap-functions/bitmap_to_string.md.txt       |  63 ++
 .../sql-functions/bitmap-functions/index.rst.txt   |   8 +
 .../bitmap-functions/to_bitmap_EN.md.txt}          |  27 +-
 .../curdate_EN.md.txt}                             |  28 +-
 .../date-time-functions/hour_EN.md.txt}            |  24 +-
 .../date-time-functions/minute_EN.md.txt}          |  24 +-
 .../date-time-functions/second_EN.md.txt}          |  24 +-
 .../date-time-functions/timestampadd_EN.md.txt     |  51 ++
 .../date-time-functions/timestampdiff_EN.md.txt    |  60 ++
 .../sql-functions/hash-functions/index.rst.txt     |   8 +
 .../hash-functions/murmur_hash3_32.md.txt}         |  42 +-
 .../en/sql-reference/sql-functions/index.rst.txt   |   2 +
 .../ends_with_EN.md.txt}                           |  29 +-
 .../starts_with_EN.md.txt}                         |  31 +-
 .../Account Management/DROP USER_EN.md.txt         |  16 +-
 .../Administration/SHOW INDEX_EN.md.txt}           |  26 +-
 .../Data Definition/ALTER TABLE_EN.md.txt          | 574 +++++++------
 .../Data Definition/ALTER VIEW_EN.md.txt}          |  35 +-
 .../Data Definition/CANCEL ALTER_EN.md.txt         |  12 +
 .../Data Definition/CREATE INDEX_EN.md.txt}        |  25 +-
 .../Data Definition/CREATE TABLE_EN.md.txt         | 667 ++++++++-------
 .../Data Definition/DROP INDEX_EN.md.txt}          |  25 +-
 .../Data Definition/RECOVER_EN.md.txt              |   2 +-
 .../Data Definition/show-function_EN.md.txt        |  56 --
 .../Data Definition/show-functions_EN.md.txt}      |  46 +-
 .../Data Manipulation/BROKER LOAD_EN.md.txt        |   4 +-
 .../Data Manipulation/GROUP BY_EN.md.txt           | 161 ++++
 .../Data Manipulation/LOAD_EN.md.txt               |   2 +-
 .../Data Manipulation/SHOW ALTER_EN.md.txt         |  10 +-
 .../SHOW DYNAMIC PARTITION TABLES_EN.md.txt}       |  34 +-
 .../Data Manipulation/SHOW PARTITIONS_EN.md.txt    |  12 +-
 .../Data Manipulation/SHOW TRANSACTION_EN.md.txt   |  79 ++
 .../alter-table-bitmap-index.html}                 | 133 +--
 .../alter-table/alter-table-rollup.html            |   7 +-
 .../alter-table/alter-table-schema-change.html     |   6 +-
 .../cn/administrator-guide/alter-table/index.html  |  19 +-
 .../cn/administrator-guide/backup-restore.html     |   1 +
 .../cn/administrator-guide/broker.html             |  20 +-
 .../cn/administrator-guide/colocation-join.html    |   5 +-
 .../cn/administrator-guide/config/fe_config.html   |   7 +
 .../cn/administrator-guide/config/index.html       |   1 +
 .../cn/administrator-guide/dynamic-partition.html  | 431 ++++++++++
 .../cn/administrator-guide/export-manual.html      |   5 +-
 .../http-actions/cancel-label.html                 |   6 +-
 ...fe-get-log-file.html => compaction-action.html} |  96 ++-
 .../http-actions/fe-get-log-file.html              |   6 +-
 .../http-actions/get-label-state.html              |   2 +
 .../cn/administrator-guide/http-actions/index.html |   3 +
 .../http-actions/restore-tablet.html               |   6 +-
 .../cn/administrator-guide/index.html              |   2 +
 .../load-data/broker-load-manual.html              |  11 +-
 .../cn/administrator-guide/load-data/index.html    |   1 +
 .../load-data/insert-into-manual.html              | 100 ++-
 .../administrator-guide/load-data/load-manual.html |  14 +-
 .../load-data/routine-load-manual.html             |   1 +
 .../load-data/stream-load-manual.html              |   8 +-
 .../operation/disk-capacity.html                   |   1 +
 .../cn/administrator-guide/operation/index.html    |   1 +
 .../operation/metadata-operation.html              |   9 +-
 .../operation/monitor-alert.html                   |   1 +
 .../operation/multi-tenant.html                    |   9 +-
 .../operation/tablet-meta-tool.html                |   3 +-
 .../operation/tablet-repair-and-balance.html       |   9 +-
 .../operation/tablet-restore-tool.html             |   1 +
 .../cn/administrator-guide/privilege.html          |  15 +-
 .../cn/administrator-guide/small-file-mgr.html     |   1 +
 .../cn/administrator-guide/sql-mode.html           |   1 +
 .../cn/administrator-guide/time-zone.html          |   1 +
 .../cn/administrator-guide/variables.html          |  17 +-
 content/documentation/cn/community/index.html      |  39 +-
 .../documentation/cn/community/pull-request.html   |  92 ++-
 .../cn/developer-guide/debug-tool.html             |   7 +-
 .../format-code.html}                              | 152 ++--
 .../documentation/cn/developer-guide/index.html    |  19 +
 .../cn/extending-doris/user-defined-function.html  |   2 +-
 .../cn/getting-started/basic-usage.html            |   3 +-
 .../cn/getting-started/best-practice.html          | 163 ++--
 .../cn/getting-started/data-model-rollup.html      |  62 +-
 .../cn/getting-started/data-partition.html         |   6 +-
 .../cn/getting-started/hit-the-rollup.html         |   6 +-
 .../documentation/cn/getting-started/index.html    |   6 +-
 content/documentation/cn/index.html                | 127 ++-
 .../documentation/cn/installing/compilation.html   |   9 +-
 .../cn/installing/install-deploy.html              |  12 +-
 .../cn/internal/doris_storage_optimization.html    |   6 +-
 .../cn/internal/grouping_sets_design.html          | 687 ++++++++++++++++
 content/documentation/cn/internal/index.html       |  52 ++
 .../documentation/cn/internal/metadata-design.html |  14 +-
 content/documentation/cn/internal/spark_load.html  | 449 ++++++++++
 content/documentation/cn/sql-reference/index.html  |   4 +-
 .../sql-functions/aggregate-functions/avg.html     |   6 +-
 .../sql-functions/aggregate-functions/bitmap.html  | 150 ++--
 .../sql-functions/aggregate-functions/count.html   |  10 +-
 .../aggregate-functions/count_distinct.html        | 285 -------
 .../aggregate-functions/hll_union_agg.html         |   6 +-
 .../sql-functions/aggregate-functions/index.html   |  27 +-
 .../sql-functions/aggregate-functions/max.html     |   2 +
 .../sql-functions/aggregate-functions/min.html     |   2 +
 .../sql-functions/aggregate-functions/ndv.html     |   2 +
 .../aggregate-functions/percentile_approx.html     |   2 +
 .../sql-functions/aggregate-functions/stddev.html  |   2 +
 .../aggregate-functions/stddev_samp.html           |   2 +
 .../sql-functions/aggregate-functions/sum.html     |   2 +
 .../aggregate-functions/var_samp.html              |   2 +
 .../aggregate-functions/variance.html              |   6 +-
 .../bitmap_and.html}                               |  52 +-
 .../bitmap_contains.html}                          |  49 +-
 .../bitmap_empty.html}                             |  45 +-
 .../bitmap_from_string.html}                       |  71 +-
 .../bitmap_has_any.html}                           |  52 +-
 .../bitmap_hash.html}                              |  48 +-
 .../bitmap_or.html}                                |  52 +-
 .../bitmap_to_string.html}                         |  76 +-
 .../index.html                                     | 141 ++--
 .../month.html => bitmap-functions/to_bitmap.html} |  43 +-
 .../cn/sql-reference/sql-functions/cast.html       |   2 +
 .../date-time-functions/convert_tz.html            |   6 +-
 .../{current_timestamp.html => curdate.html}       |  49 +-
 .../date-time-functions/current_timestamp.html     |   6 +-
 .../sql-functions/date-time-functions/curtime.html |   2 +
 .../date-time-functions/date_add.html              |   2 +
 .../date-time-functions/date_format.html           |   4 +-
 .../date-time-functions/date_sub.html              |   2 +
 .../date-time-functions/datediff.html              |   2 +
 .../sql-functions/date-time-functions/day.html     |   6 +-
 .../sql-functions/date-time-functions/dayname.html |   2 +
 .../date-time-functions/dayofmonth.html            |   2 +
 .../date-time-functions/dayofweek.html             |   2 +
 .../date-time-functions/dayofyear.html             |   2 +
 .../date-time-functions/from_days.html             |   2 +
 .../date-time-functions/from_unixtime.html         |   8 +-
 .../date-time-functions/{month.html => hour.html}  |  32 +-
 .../sql-functions/date-time-functions/index.html   |  50 ++
 .../{month.html => minute.html}                    |  36 +-
 .../sql-functions/date-time-functions/month.html   |   6 +-
 .../date-time-functions/monthname.html             |   2 +
 .../sql-functions/date-time-functions/now.html     |   6 +-
 .../date-time-functions/{now.html => second.html}  |  39 +-
 .../date-time-functions/str_to_date.html           |   6 +-
 .../date-time-functions/timediff.html              |   6 +-
 .../{to_days.html => timestampadd.html}            |  49 +-
 .../{datediff.html => timestampdiff.html}          |  64 +-
 .../sql-functions/date-time-functions/to_days.html |   6 +-
 .../date-time-functions/unix_timestamp.html        |   2 +
 .../date-time-functions/utc_timestamp.html         |   2 +
 .../date-time-functions/workofyear.html            |   2 +
 .../sql-functions/date-time-functions/year.html    |   2 +
 .../lcase.html => hash-functions/index.html}       |  62 +-
 .../murmur_hash3_32.html}                          |  64 +-
 .../cn/sql-reference/sql-functions/index.html      |   2 +
 .../sql-functions/spatial-functions/index.html     |   2 +
 .../sql-functions/spatial-functions/st_astext.html |   2 +
 .../sql-functions/spatial-functions/st_circle.html |   2 +
 .../spatial-functions/st_contains.html             |   2 +
 .../spatial-functions/st_distance_sphere.html      |   2 +
 .../spatial-functions/st_geometryfromtext.html     |   2 +
 .../spatial-functions/st_linefromtext.html         |   2 +
 .../sql-functions/spatial-functions/st_point.html  |   2 +
 .../spatial-functions/st_polygon.html              |   2 +
 .../sql-functions/spatial-functions/st_x.html      |   2 +
 .../sql-functions/spatial-functions/st_y.html      |   2 +
 .../sql-functions/string-functions/ascii.html      |   2 +
 .../sql-functions/string-functions/concat.html     |   2 +
 .../sql-functions/string-functions/concat_ws.html  |   6 +-
 .../{find_in_set.html => ends_with.html}           |  41 +-
 .../string-functions/find_in_set.html              |   6 +-
 .../string-functions/get_json_double.html          |   2 +
 .../string-functions/get_json_int.html             |   2 +
 .../string-functions/get_json_string.html          |   2 +
 .../string-functions/group_concat.html             |   2 +
 .../sql-functions/string-functions/index.html      |  18 +
 .../sql-functions/string-functions/instr.html      |   2 +
 .../sql-functions/string-functions/lcase.html      |   2 +
 .../sql-functions/string-functions/left.html       |   2 +
 .../sql-functions/string-functions/length.html     |   2 +
 .../sql-functions/string-functions/locate.html     |   2 +
 .../sql-functions/string-functions/lower.html      |   2 +
 .../sql-functions/string-functions/lpad.html       |   2 +
 .../sql-functions/string-functions/ltrim.html      |   2 +
 .../string-functions/money_format.html             |   2 +
 .../string-functions/regexp_extract.html           |   2 +
 .../string-functions/regexp_replace.html           |   2 +
 .../sql-functions/string-functions/repeat.html     |   2 +
 .../sql-functions/string-functions/right.html      |   2 +
 .../sql-functions/string-functions/split_part.html |   6 +-
 .../{strleft.html => starts_with.html}             |  43 +-
 .../sql-functions/string-functions/strleft.html    |   6 +-
 .../sql-functions/string-functions/strright.html   |   2 +
 .../Account Management/DROP USER.html              |  13 +-
 .../Administration/SHOW FULL COLUMNS.html          |   4 +-
 .../{SHOW FULL COLUMNS.html => SHOW INDEX.html}    |  25 +-
 .../Administration/SHOW MIGRATIONS.html            |   4 +-
 .../sql-statements/Administration/index.html       |   6 +
 .../Data Definition/ALTER TABLE.html               |  67 +-
 .../{CREATE DATABASE.html => ALTER VIEW.html}      |  48 +-
 .../sql-statements/Data Definition/BACKUP.html     |   4 +-
 .../Data Definition/CANCEL ALTER.html              |  15 +-
 .../Data Definition/CREATE DATABASE.html           |   4 +-
 .../{CREATE DATABASE.html => CREATE INDEX.html}    |  27 +-
 .../Data Definition/CREATE REPOSITORY.html         |   4 +-
 .../Data Definition/CREATE TABLE.html              | 907 ++++++++++++---------
 .../Data Definition/DROP DATABASE.html             |   4 +-
 .../{DROP DATABASE.html => DROP INDEX.html}        |  30 +-
 .../Data Definition/DROP REPOSITORY.html           |   4 +-
 .../sql-statements/Data Definition/RECOVER.html    |   2 +-
 .../Data Definition/drop-function.html             |   4 +-
 .../sql-statements/Data Definition/index.html      |  29 +-
 .../{show-function.html => show-functions.html}    |  46 +-
 .../Data Manipulation/BROKER LOAD.html             |   4 +-
 .../sql-statements/Data Manipulation/EXPORT.html   |   4 +-
 .../sql-statements/Data Manipulation/GROUP BY.html | 395 +++++++++
 .../sql-statements/Data Manipulation/LOAD.html     |   6 +-
 .../Data Manipulation/SHOW ALTER.html              |  10 +-
 .../Data Manipulation/SHOW DELETE.html             |   4 +-
 ...ETE.html => SHOW DYNAMIC PARTITION TABLES.html} |  24 +-
 .../Data Manipulation/SHOW EXPORT.html             |   4 +-
 .../Data Manipulation/SHOW PARTITIONS.html         |  13 +-
 .../Data Manipulation/SHOW TABLET.html             |   4 +-
 .../{SHOW EXPORT.html => SHOW TRANSACTION.html}    |  98 ++-
 .../Data Manipulation/STOP ROUTINE LOAD.html       |   4 +-
 .../sql-statements/Data Manipulation/index.html    |  27 +-
 .../cn/sql-reference/sql-statements/index.html     |   4 +-
 .../alter-table-bitmap-index_EN.html}              | 134 +--
 .../alter-table/alter-table-rollup_EN.html         | 437 ++++++++++
 .../alter-table/alter-table-schema-change_EN.html} | 259 +++---
 .../administrator-guide/alter-table/index.html     | 122 +--
 .../en/administrator-guide/backup-restore_EN.html  |   7 +-
 .../administrator-guide/broker_EN.html}            | 235 +++---
 .../en/administrator-guide/colocation-join_EN.html |  11 +-
 .../administrator-guide/config/fe_config_en.html   |   9 +
 .../en/administrator-guide/config/index.html       |   3 +
 .../administrator-guide/dynamic-partition_EN.html  | 433 ++++++++++
 .../en/administrator-guide/export_manual_EN.html   |   7 +-
 .../http-actions/cancel-label_EN.html              |   8 +-
 ...-log-file_EN.html => compaction-action_EN.html} | 120 +--
 .../http-actions/fe-get-log-file_EN.html           |   8 +-
 .../http-actions/get-label-state_EN.html           |   4 +
 .../en/administrator-guide/http-actions/index.html |   9 +-
 .../http-actions/restore-tablet_EN.html            |   4 +
 .../en/administrator-guide/index.html              |   5 +
 .../load-data/broker-load-manual_EN.html           |   3 +
 .../en/administrator-guide/load-data/index.html    |   3 +
 .../load-data/insert-into-manual_EN.html           | 106 ++-
 .../load-data/load-manual_EN.html                  |   5 +-
 .../load-data/routine-load-manual_EN.html          |   3 +
 .../load-data/stream-load-manual_EN.html           |  16 +-
 .../en/administrator-guide/operation/index.html    |   3 +
 .../operation/metadata-operation_EN.html           |   5 +-
 .../operation/monitor-alert_EN.html                |   3 +
 .../operation/multi-tenant_EN.html                 |   3 +
 .../operation/tablet-meta-tool_EN.html             |   3 +
 .../operation/tablet-repair-and-balance_EN.html    |   3 +
 .../en/administrator-guide/privilege_EN.html       |  15 +-
 .../en/administrator-guide/small-file-mgr_EN.html  |   3 +
 .../en/administrator-guide/sql-mode_EN.html        |   3 +
 .../en/administrator-guide/time-zone_EN.html       |   3 +
 .../en/administrator-guide/variables_EN.html       |  11 +-
 content/documentation/en/community/index.html      |   4 +-
 .../en/developer-guide/debug-tool.html             |   5 +-
 .../en/developer-guide/format-code.html            | 317 +++++++
 .../documentation/en/developer-guide/index.html    |  19 +
 .../extending-doris/user-defined-function_EN.html  |   2 +-
 .../en/getting-started/basic-usage_EN.html         |   3 +-
 .../en/getting-started/best-practice_EN.html       |  47 +-
 .../en/getting-started/data-model-rollup_EN.html   | 810 ++++++++++++++----
 .../en/getting-started/data-partition_EN.html      | 262 +++---
 .../documentation/en/getting-started/index.html    |   6 +-
 content/documentation/en/index.html                |  89 +-
 .../en/installing/install-deploy_EN.html           |  10 +-
 .../en/internal/doris_storage_optimization_EN.html |   5 +-
 .../en/internal/grouping_sets_design_EN.html       | 690 ++++++++++++++++
 content/documentation/en/internal/index.html       |  33 +
 .../en/internal/metadata-design_EN.html            |   5 +-
 .../sql-functions/aggregate-functions/avg_EN.html  |   6 +-
 .../aggregate-functions/bitmap_EN.html             | 148 ++--
 .../aggregate-functions/count_EN.html              |  10 +-
 .../aggregate-functions/count_distinct_EN.html     | 285 -------
 .../aggregate-functions/hll_union_agg_EN.html      |   6 +-
 .../sql-functions/aggregate-functions/index.html   |  27 +-
 .../sql-functions/aggregate-functions/max_EN.html  |   2 +
 .../sql-functions/aggregate-functions/min_EN.html  |   2 +
 .../sql-functions/aggregate-functions/ndv_EN.html  |   2 +
 .../aggregate-functions/percentile_approx_EN.html  |   2 +
 .../aggregate-functions/stddev_EN.html             |   2 +
 .../aggregate-functions/stddev_samp_EN.html        |   2 +
 .../sql-functions/aggregate-functions/sum_EN.html  |   2 +
 .../aggregate-functions/var_samp_EN.html           |   2 +
 .../aggregate-functions/variance_EN.html           |   6 +-
 .../bitmap_and_EN.html}                            |  64 +-
 .../bitmap_contains_EN.html}                       |  64 +-
 .../bitmap_empty_EN.html}                          |  67 +-
 .../bitmap_from_string.html}                       |  78 +-
 .../bitmap_has_any_EN.html}                        |  64 +-
 .../bitmap_hash_EN.html}                           |  52 +-
 .../bitmap_or_EN.html}                             |  64 +-
 .../bitmap_to_string.html}                         |  80 +-
 .../index.html                                     | 111 +--
 .../to_bitmap_EN.html}                             |  46 +-
 .../en/sql-reference/sql-functions/cast_EN.html    |   2 +
 .../{current_timestamp_EN.html => curdate_EN.html} |  41 +-
 .../date-time-functions/current_timestamp_EN.html  |   6 +-
 .../date-time-functions/date_add_EN.html           |   2 +
 .../date-time-functions/date_format_EN.html        |   2 +
 .../date-time-functions/date_sub_EN.html           |   2 +
 .../date-time-functions/datediff_EN.html           |   2 +
 .../sql-functions/date-time-functions/day_EN.html  |   2 +
 .../date-time-functions/dayname_EN.html            |   2 +
 .../date-time-functions/dayofmonth_EN.html         |   2 +
 .../date-time-functions/dayofweek_EN.html          |   2 +
 .../date-time-functions/dayofyear_EN.html          |   2 +
 .../date-time-functions/from_days_EN.html          |   2 +
 .../date-time-functions/from_unixtime_EN.html      |   6 +-
 .../{month_EN.html => hour_EN.html}                |  30 +-
 .../sql-functions/date-time-functions/index.html   |  54 +-
 .../{year_EN.html => minute_EN.html}               |  34 +-
 .../date-time-functions/month_EN.html              |   6 +-
 .../date-time-functions/monthname_EN.html          |   2 +
 .../sql-functions/date-time-functions/now_EN.html  |   6 +-
 .../{year_EN.html => second_EN.html}               |  34 +-
 .../date-time-functions/str_to_date_EN.html        |   6 +-
 .../date-time-functions/timediff_EN.html           |   6 +-
 .../{str_to_date_EN.html => timestampadd_EN.html}  |  58 +-
 .../timestampdiff_EN.html}                         |  84 +-
 .../date-time-functions/to_days_EN.html            |   6 +-
 .../date-time-functions/unix_timestamp_EN.html     |   2 +
 .../date-time-functions/utc_timestamp_EN.html      |   2 +
 .../date-time-functions/workofyear_EN.html         |   2 +
 .../sql-functions/date-time-functions/year_EN.html |   2 +
 .../lcase_EN.html => hash-functions/index.html}    |  62 +-
 .../murmur_hash3_32.html}                          |  78 +-
 .../en/sql-reference/sql-functions/index.html      |   2 +
 .../sql-functions/spatial-functions/index.html     |   2 +
 .../spatial-functions/st_astext_EN.html            |   2 +
 .../spatial-functions/st_circle_EN.html            |   2 +
 .../spatial-functions/st_contains_EN.html          |   2 +
 .../spatial-functions/st_distance_sphere_EN.html   |   2 +
 .../spatial-functions/st_geometryfromtext_EN.html  |   2 +
 .../spatial-functions/st_linefromtext_EN.html      |   2 +
 .../spatial-functions/st_point_EN.html             |   2 +
 .../spatial-functions/st_polygon_EN.html           |   2 +
 .../sql-functions/spatial-functions/st_x_EN.html   |   2 +
 .../sql-functions/spatial-functions/st_y_EN.html   |   2 +
 .../sql-functions/string-functions/ascii_EN.html   |   2 +
 .../sql-functions/string-functions/concat_EN.html  |   2 +
 .../string-functions/concat_ws_EN.html             |   6 +-
 .../{find_in_set_EN.html => ends_with_EN.html}     |  45 +-
 .../string-functions/find_in_set_EN.html           |   6 +-
 .../string-functions/get_json_double_EN.html       |   2 +
 .../string-functions/get_json_int_EN.html          |   2 +
 .../string-functions/get_json_string_EN.html       |   2 +
 .../string-functions/group_concat_EN.html          |   2 +
 .../sql-functions/string-functions/index.html      |  18 +
 .../sql-functions/string-functions/instr_EN.html   |   2 +
 .../sql-functions/string-functions/lcase_EN.html   |   2 +
 .../sql-functions/string-functions/left_EN.html    |   2 +
 .../sql-functions/string-functions/length_EN.html  |   2 +
 .../sql-functions/string-functions/locate_EN.html  |   2 +
 .../sql-functions/string-functions/lower_EN.html   |   2 +
 .../sql-functions/string-functions/lpad_EN.html    |   2 +
 .../sql-functions/string-functions/ltrim_EN.html   |   2 +
 .../string-functions/money_format_EN.html          |   2 +
 .../string-functions/regexp_extract_EN.html        |   2 +
 .../string-functions/regexp_replace_EN.html        |   2 +
 .../sql-functions/string-functions/repeat_EN.html  |   2 +
 .../sql-functions/string-functions/right_EN.html   |   2 +
 .../string-functions/split_part_EN.html            |   6 +-
 .../{strleft_EN.html => starts_with_EN.html}       |  44 +-
 .../sql-functions/string-functions/strleft_EN.html |   6 +-
 .../string-functions/strright_EN.html              |   2 +
 .../Account Management/DROP USER_EN.html           |  23 +-
 .../Administration/SHOW FULL COLUMNS_EN.html       |   4 +-
 ...HOW FULL COLUMNS_EN.html => SHOW INDEX_EN.html} |  28 +-
 .../Administration/SHOW MIGRATIONS_EN.html         |   4 +-
 .../sql-statements/Administration/index.html       |   6 +
 .../Data Definition/ALTER TABLE_EN.html            | 553 +++++++------
 .../{BACKUP_EN.html => ALTER VIEW_EN.html}         |  75 +-
 .../sql-statements/Data Definition/BACKUP_EN.html  |   4 +-
 .../Data Definition/CANCEL ALTER_EN.html           |  12 +
 .../Data Definition/CREATE DATABASE_EN.html        |   4 +-
 ...REATE DATABASE_EN.html => CREATE INDEX_EN.html} |  39 +-
 .../Data Definition/CREATE REPOSITORY_EN.html      |   4 +-
 .../Data Definition/CREATE TABLE_EN.html           | 802 ++++++++++--------
 .../Data Definition/DROP DATABASE_EN.html          |   4 +-
 ...{DROP REPOSITORY_EN.html => DROP INDEX_EN.html} |  37 +-
 .../Data Definition/DROP REPOSITORY_EN.html        |   4 +-
 .../sql-statements/Data Definition/RECOVER_EN.html |   2 +-
 .../Data Definition/drop-function_EN.html          |   4 +-
 .../sql-statements/Data Definition/index.html      |  31 +-
 ...how-function_EN.html => show-functions_EN.html} |  54 +-
 .../Data Manipulation/BROKER LOAD_EN.html          |   4 +-
 .../Data Manipulation/GET LABEL STATE_EN.html      |   4 +-
 .../Data Manipulation/GROUP BY_EN.html             | 394 +++++++++
 .../sql-statements/Data Manipulation/LOAD_EN.html  |   6 +-
 .../Data Manipulation/SHOW ALTER_EN.html           |   5 +-
 .../Data Manipulation/SHOW DELETE_EN.html          |   4 +-
 ....html => SHOW DYNAMIC PARTITION TABLES_EN.html} |  63 +-
 .../Data Manipulation/SHOW EXPORT_EN.html          |   4 +-
 .../Data Manipulation/SHOW PARTITIONS_EN.html      |   9 +-
 .../Data Manipulation/SHOW TABLET_EN.html          |   4 +-
 ...HOW TABLET_EN.html => SHOW TRANSACTION_EN.html} |  97 ++-
 .../Data Manipulation/STOP ROUTINE LOAD_EN.html    |   4 +-
 .../sql-statements/Data Manipulation/index.html    |  27 +-
 .../en/sql-reference/sql-statements/index.html     |   4 +-
 content/nohup.out                                  |  31 +
 content/objects.inv                                | Bin 8779 -> 9829 bytes
 content/searchindex.js                             |   2 +-
 510 files changed, 17253 insertions(+), 6438 deletions(-)

diff --git a/content/_sources/documentation/cn/administrator-guide/alter-table/alter-table-bitmap-index.md.txt b/content/_sources/documentation/cn/administrator-guide/alter-table/alter-table-bitmap-index.md.txt
new file mode 100644
index 0000000..5eb6877
--- /dev/null
+++ b/content/_sources/documentation/cn/administrator-guide/alter-table/alter-table-bitmap-index.md.txt
@@ -0,0 +1,80 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Bitmap Index
+Users can speed up queries by creating a bitmap index.
+This document mainly introduces how to create an index job, along with some notes and FAQs about creating an index.
+
+## Glossary
+* bitmap index: a bitmap index, a fast data structure that speeds up queries
+
+## How It Works
+Creating or dropping an index is essentially a schema change job; see [Schema Change](alter-table-schema-change) for details.
+
+## Syntax
+There are two forms of syntax for creating and modifying an index: one is integrated into the ALTER TABLE statement, and the other uses separate
+CREATE/DROP INDEX statements. A minimal sketch of both forms follows the list below.
+1. Create an index
+
+    For the syntax of creating an index, see the bitmap index related operations in [CREATE INDEX](../../sql-reference/sql-statements/Data%20Definition/CREATE%20INDEX.html) 
+    or [ALTER TABLE](../../sql-reference/sql-statements/Data%20Definition/ALTER%20TABLE.html).
+    You can also specify a bitmap index when creating a table; see [CREATE TABLE](../../sql-reference/sql-statements/Data%20Definition/CREATE%20TABLE.html).
+
+2. Show indexes
+
+    See [SHOW INDEX](../../sql-reference/sql-statements/Administration/SHOW%20INDEX.html).
+
+3. Drop an index
+
+    See [DROP INDEX](../../sql-reference/sql-statements/Data%20Definition/DROP%20INDEX.html)
+    or the bitmap index related operations in [ALTER TABLE](../../sql-reference/sql-statements/Data%20Definition/ALTER%20TABLE.html).
+
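+A minimal sketch of these statements (the table and column names `example_db.tbl1` / `siteid` are hypothetical placeholders, not taken from this commit):
+
+```
+CREATE INDEX idx_siteid ON example_db.tbl1 (siteid) USING BITMAP COMMENT 'bitmap index on siteid';
+ALTER TABLE example_db.tbl1 ADD INDEX idx_siteid (siteid) USING BITMAP COMMENT 'bitmap index on siteid';
+SHOW INDEX FROM example_db.tbl1;
+DROP INDEX idx_siteid ON example_db.tbl1;
+```
+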
+## Create Job
+See the schema change document [Schema Change](alter-table-schema-change.html).
+
+## Show Job
+See the schema change document [Schema Change](alter-table-schema-change.html).
+
+## Cancel Job
+See the schema change document [Schema Change](alter-table-schema-change.html).
+
+## Notes
+* Currently only indexes of the bitmap type are supported. 
+* A bitmap index can only be created on a single column.
+* A bitmap index can be applied to all columns of the `Duplicate` data model and to the key columns of the `Aggregate` and `Uniq` models.
+* The data types supported by bitmap indexes are:
+    * `TINYINT`
+    * `SMALLINT`
+    * `INT`
+    * `UNSIGNEDINT`
+    * `BIGINT`
+    * `CHAR`
+    * `VARCHAR`
+    * `DATE`
+    * `DATETIME`
+    * `LARGEINT`
+    * `DECIMAL`
+    * `BOOL`
+
+* Bitmap indexes only take effect under segmentV2. The following settings need to be added to the BE configuration file:
+
+    ```
+    default_rowset_type=BETA
+    compaction_rowset_type=BETA
+    ```
diff --git a/content/_sources/documentation/cn/administrator-guide/alter-table/alter-table-schema-change.md.txt b/content/_sources/documentation/cn/administrator-guide/alter-table/alter-table-schema-change.md.txt
index 5e98433..401afea 100644
--- a/content/_sources/documentation/cn/administrator-guide/alter-table/alter-table-schema-change.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/alter-table/alter-table-schema-change.md.txt
@@ -25,6 +25,7 @@ under the License.
 * Change the type of a column
 * Reorder columns
 * Add or modify a Bloom Filter
+* Add or drop a bitmap index
 
 This document mainly introduces how to create a Schema Change job, as well as notes on and FAQs about Schema Change.
 
@@ -166,7 +167,7 @@ ADD COLUMN k5 INT default "1" to rollup2;
 
 As you can see, columns k4 and k5 are automatically added to the Base table tbl1 as well. That is, a column added to any rollup is automatically added to the Base table.
 
-Also, it is not allowed to add a column that already exists in the Base table to a Rollup. If users need to do so, they can create a new Rollup containing the new column and then delete the original Rollup.
+Also, it is not allowed to add a column that already exists in the Base table to a Rollup. If users need to do so, they can create a new Rollup containing the new column and then drop the original Rollup afterwards.
 
 ## Notes
 
diff --git a/content/_sources/documentation/cn/administrator-guide/broker.md.txt b/content/_sources/documentation/cn/administrator-guide/broker.md.txt
index d86074d..0fdcff0 100644
--- a/content/_sources/documentation/cn/administrator-guide/broker.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/broker.md.txt
@@ -186,13 +186,29 @@ WITH BROKER "broker_name"
         "kerberos_keytab_content" = "ASDOWHDLAWIDJHWLDKSALDJSDIWALD"
     )
     ```
+    If Kerberos authentication is used, the [krb5.conf](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html) file is required when deploying the Broker process.
+    The krb5.conf file contains the Kerberos configuration information. Normally, you should install the krb5.conf file in the /etc directory. You can override the default location by setting the environment variable KRB5_CONFIG.
+    An example of the contents of the krb5.conf file is as follows:
+    ```
+    [libdefaults]
+        default_realm = DORIS.HADOOP
+        default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
+        default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
+        dns_lookup_kdc = true
+        dns_lookup_realm = false
+    
+    [realms]
+        DORIS.HADOOP = {
+            kdc = kerberos-doris.hadoop.service:7005
+        }
+    ```
     
 3. HDFS HA mode
 
     This configuration is used to access an HDFS cluster deployed in HA mode.
     
     * `dfs.nameservices`: specifies the name of the hdfs service, user defined, e.g.: "dfs.nameservices" = "my_ha".
-    * `dfs.ha.namenodes.xxx`: user-defined namenode names, separated by commas, where xxx is the name defined in `dfs.nameservices`, e.g. "dfs.ha.namenodes.my_ha" = "my_nn".
+    * `dfs.ha.namenodes.xxx`: user-defined namenode names, separated by commas, where xxx is the name defined in `dfs.nameservices`, e.g.: "dfs.ha.namenodes.my_ha" = "my_nn".
     * `dfs.namenode.rpc-address.xxx.nn`: specifies the rpc address of the namenode, where nn is the namenode name configured in `dfs.ha.namenodes.xxx`, e.g.: "dfs.namenode.rpc-address.my_ha.my_nn" = "host:port".
     * `dfs.client.failover.proxy.provider`: specifies the provider the client uses to connect to the namenode, default: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.
 
@@ -221,6 +237,7 @@ WITH BROKER "broker_name"
         "dfs.client.failover.proxy.provider" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
     )
     ```
+   The configuration of the HDFS cluster can be written into the hdfs-site.xml file. When using the Broker process to read information from the HDFS cluster, users then only need to provide the cluster's file path and authentication information.
     
 #### Baidu Object Storage (BOS)
 
diff --git a/content/_sources/documentation/cn/administrator-guide/config/fe_config.md.txt b/content/_sources/documentation/cn/administrator-guide/config/fe_config.md.txt
index 071bdb8..241dbe1 100644
--- a/content/_sources/documentation/cn/administrator-guide/config/fe_config.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/config/fe_config.md.txt
@@ -24,3 +24,11 @@ under the License.
   This configuration is mainly used to modify the brpc parameter max_body_size; the default is 64M. The issue generally occurs with multi distinct + no group by + more than 1T of data, especially when queries appear to hang and the BE reports messages like "body_size is too large".
 
   Since this is a brpc configuration, users can also modify the parameter directly at runtime by visiting http://host:brpc_port/flags.
+
+## max_running_txn_num_per_db
+
+  This configuration is mainly used to control the number of concurrent load jobs in the same db; the default is 100. When the number of concurrently executing load jobs exceeds this value, synchronously executed loads, such as stream load, will fail, and asynchronously executed loads, such as broker load, will stay in the pending state.
+
+  In general, changing this concurrency limit is not recommended. If the current load concurrency exceeds this value, first check whether individual load jobs are too slow, or whether many small files are being loaded without being merged first.
+
+  Error messages such as: "current running txns on db xxx is xx, larger than limit xx" belong to this category of problem.
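+
+  As a hedged sketch (assuming your FE version allows this item to be changed at runtime; otherwise it has to be set in fe.conf and the FE restarted), the current value can be inspected and raised with:
+
+  ```
+  ADMIN SHOW FRONTEND CONFIG LIKE "max_running_txn_num_per_db";
+  ADMIN SET FRONTEND CONFIG ("max_running_txn_num_per_db" = "200");
+  ```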
diff --git a/content/_sources/documentation/cn/administrator-guide/dynamic-partition.md.txt b/content/_sources/documentation/cn/administrator-guide/dynamic-partition.md.txt
new file mode 100644
index 0000000..9c60118
--- /dev/null
+++ b/content/_sources/documentation/cn/administrator-guide/dynamic-partition.md.txt
@@ -0,0 +1,179 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Dynamic Partition
+
+Dynamic partitioning is a new feature introduced in Doris 0.12. It is designed to implement lifecycle management (TTL) of partitions at the table level, reducing the maintenance burden on users.
+
+The original design, implementation and results can be found in [ISSUE 2262](https://github.com/apache/incubator-doris/issues/2262).
+
+Currently only dynamically adding partitions is implemented; dynamically dropping partitions will be supported in the next version.
+
+## Glossary
+
+* FE: Frontend, the frontend node of Doris. Responsible for metadata management and request handling.
+* BE: Backend, the backend node of Doris. Responsible for query execution and data storage.
+
+## How It Works
+
+In some scenarios, users partition their tables by day and run routine tasks on a daily schedule. The user then has to manage the partitions manually; otherwise data loads may fail because a partition was not created, which adds extra maintenance cost for the user.
+
+In terms of implementation, the FE starts a background thread. The parameters `dynamic_partition_enable` and `dynamic_partition_check_interval_seconds` in fe.conf determine whether this thread is started and how frequently it is scheduled.
+
+When creating a table, the dynamic_partition properties are specified in properties. The FE first parses the dynamic partition properties and validates the input parameters, then persists the properties into the FE metadata and registers the table in the dynamic partition list. The background thread periodically scans this list according to the configured parameters, reads each table's dynamic partition properties and performs the partition creation task. The scheduling information of each run is kept in FE memory, and `SHOW DYNAMIC PARTITION TABLES` can be used to check whether the scheduled task succeeded; if any partition failed to be created, the failure message is reported.
+
+## Usage
+
+### Creating a table
+
+When creating a table, you can specify the following `dynamic_partition` properties in `PROPERTIES` to declare the table as a dynamic partition table.
+    
+Example:
+
+```
+CREATE TABLE example_db.dynamic_partition
+(
+k1 DATE,
+k2 INT,
+k3 SMALLINT,
+v1 VARCHAR(2048),
+v2 DATETIME DEFAULT "2014-02-04 15:36:00"
+)
+ENGINE=olap
+DUPLICATE KEY(k1, k2, k3)
+PARTITION BY RANGE (k1)
+(
+PARTITION p1 VALUES LESS THAN ("2014-01-01"),
+PARTITION p2 VALUES LESS THAN ("2014-06-01"),
+PARTITION p3 VALUES LESS THAN ("2014-12-01")
+)
+DISTRIBUTED BY HASH(k2) BUCKETS 32
+PROPERTIES(
+"storage_medium" = "SSD",
+"dynamic_partition.enable" = "true",
+"dynamic_partition.time_unit" = "DAY",
+"dynamic_partition.end" = "3",
+"dynamic_partition.prefix" = "p",
+"dynamic_partition.buckets" = "32"
+ );
+```
+This creates a dynamic partition table with the dynamic partition feature enabled. Taking today as 2020-01-08 as an example, at each scheduling run, 4 partitions covering today and the following 3 days are created in advance (existing partitions are skipped). With the specified prefix, the partitions are named `p20200108` `p20200109` `p20200110` `p20200111`, each with 32 buckets, and the range of each partition is as follows:
+```
+[types: [DATE]; keys: [2020-01-08]; ..types: [DATE]; keys: [2020-01-09]; )
+[types: [DATE]; keys: [2020-01-09]; ..types: [DATE]; keys: [2020-01-10]; )
+[types: [DATE]; keys: [2020-01-10]; ..types: [DATE]; keys: [2020-01-11]; )
+[types: [DATE]; keys: [2020-01-11]; ..types: [DATE]; keys: [2020-01-12]; )
+```
+    
+### Enabling dynamic partitioning
+1. First, set `dynamic_partition_enable=true` in fe.conf. This can be specified by modifying the configuration file when the cluster starts, or changed dynamically at runtime through the HTTP interface; see the Advanced Operations section for how to do this.
+
+2. To add dynamic partition properties to a table created before version 0.12, modify the table properties with the following command:
+```
+ALTER TABLE dynamic_partition set ("dynamic_partition.enable" = "true", "dynamic_partition.time_unit" = "DAY", "dynamic_partition.end" = "3", "dynamic_partition.prefix" = "p", "dynamic_partition.buckets" = "32");
+```
+
+### Disabling dynamic partitioning
+
+To disable dynamic partitioning for all dynamic partition tables in the cluster, set `dynamic_partition_enable=false` in fe.conf.
+
+To disable dynamic partitioning for a specific table, modify the table properties with the following command:
+```
+ALTER TABLE dynamic_partition set ("dynamic_partition.enable" = "false")
+```
+
+### Modifying dynamic partition properties
+
+The dynamic partition properties can be modified with the following command:
+```
+ALTER TABLE dynamic_partition set("key" = "value")
+```
+
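+For example (a sketch using the property names listed above), to pre-create partitions for the next 7 days instead of 3:
+
+```
+ALTER TABLE dynamic_partition set ("dynamic_partition.end" = "7");
+```
+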
+### Checking the scheduling status of dynamic partition tables
+
+The scheduling status of dynamic partition tables can be further inspected with the following command:
+
+```    
+SHOW DYNAMIC PARTITION TABLES;
+
++-------------------+--------+----------+------+--------+---------+---------------------+---------------------+--------+------+
+| TableName         | Enable | TimeUnit | End  | Prefix | Buckets | LastUpdateTime      | LastSchedulerTime   | State  | Msg  |
++-------------------+--------+----------+------+--------+---------+---------------------+---------------------+--------+------+
+| dynamic_partition | true   | DAY      | 3    | p      | 32      | 2020-01-08 20:19:09 | 2020-01-08 20:19:34 | NORMAL | N/A  |
++-------------------+--------+----------+------+--------+---------+---------------------+---------------------+--------+------+
+1 row in set (0.00 sec)
+
+```
+    
+* LastUpdateTime: the last time the dynamic partition properties were modified
+* LastSchedulerTime: the last time dynamic partition scheduling was executed
+* State: the state of the last dynamic partition scheduling run
+* Msg: the error message of the last dynamic partition scheduling run
+
+## Advanced Operations
+
+### FE configuration items
+
+* dynamic\_partition\_enable
+
+    Whether the dynamic partition feature of Doris is enabled. Defaults to false, i.e. disabled. This parameter only affects the partition operations of dynamic partition tables, not ordinary tables.
+    
+* dynamic\_partition\_check\_interval\_seconds
+
+    The execution interval of the dynamic partition thread. Defaults to 3600 (1 hour), i.e. scheduling runs once per hour.
+    
+### HTTP Restful API
+
+Doris provides an HTTP Restful API for modifying the dynamic partition configuration parameters at runtime.
+
+The API is implemented on the FE side and is accessed via `fe_host:fe_http_port`. ADMIN privilege is required.
+
+1. Set dynamic_partition_enable to true or false
+    
+    * Set to true
+    
+        ```
+        GET /api/_set_config?dynamic_partition_enable=true
+        
+        e.g.: curl --location-trusted -u username:password -XGET http://fe_host:fe_http_port/api/_set_config?dynamic_partition_enable=true
+        
+        Returns: 200
+        ```
+        
+    * Set to false
+    
+        ```
+        GET /api/_set_config?dynamic_partition_enable=false
+        
+        e.g.: curl --location-trusted -u username:password -XGET http://fe_host:fe_http_port/api/_set_config?dynamic_partition_enable=false
+        
+        Returns: 200
+        ```
+    
+2. Set the scheduling frequency of dynamic partitioning
+    
+    * Set the check interval to once every 12 hours (43200 seconds)
+        
+        ```
+        GET /api/_set_config?dynamic_partition_check_interval_seconds=43200
+        
+        e.g.: curl --location-trusted -u username:password -XGET http://fe_host:fe_http_port/api/_set_config?dynamic_partition_check_interval_seconds=43200
+        
+        Returns: 200
+        ```
diff --git a/content/_sources/documentation/cn/administrator-guide/http-actions/compaction-action.md.txt b/content/_sources/documentation/cn/administrator-guide/http-actions/compaction-action.md.txt
new file mode 100644
index 0000000..d4cadbd
--- /dev/null
+++ b/content/_sources/documentation/cn/administrator-guide/http-actions/compaction-action.md.txt
@@ -0,0 +1,79 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Compaction Action
+
+This API is used to view the overall compaction status of a BE node, or the compaction status of a specified tablet. It can also be used to trigger compaction manually.
+
+## Viewing compaction status
+
+### Overall compaction status of the node
+
+(TODO)
+
+### Compaction status of a specified tablet
+
+```
+curl -X GET http://be_host:webserver_port/api/compaction/show?tablet_id=xxxx\&schema_hash=yyyy
+```
+
+If the tablet does not exist, an error in JSON format is returned:
+
+```
+{
+    "status": "Fail",
+    "msg": "Tablet not found"
+}
+```
+
+If the tablet exists, the result is returned in JSON format:
+
+```
+{
+    "cumulative point": 50,
+    "last cumulative failure time": "2019-12-16 18:13:43.224",
+    "last base failure time": "2019-12-16 18:13:23.320",
+    "last cumu success time": "2019-12-16 18:12:15.110",
+    "last base success time": "2019-12-16 18:11:50.780",
+    "rowsets": [
+        "[0-48] 10 DATA OVERLAPPING",
+        "[49-49] 2 DATA OVERLAPPING",
+        "[50-50] 0 DELETE NONOVERLAPPING",
+        "[51-51] 5 DATA OVERLAPPING"
+    ]
+}
+```
+
+Explanation of the result:
+
+* cumulative point: the version boundary between base and cumulative compaction. Versions before the point (exclusive) are handled by base compaction, and versions from the point (inclusive) onward are handled by cumulative compaction.
+* last cumulative failure time: the time of the last failed cumulative compaction attempt. By default, cumulative compaction is not attempted on this tablet again until 10 minutes later.
+* last base failure time: the time of the last failed base compaction attempt. By default, base compaction is not attempted on this tablet again until 10 minutes later.
+* rowsets: the current set of rowsets of this tablet. For example, [0-48] means versions 0-48, and the second number is the number of segments in that version. `DELETE` indicates a delete version, `DATA` indicates a data version, and `OVERLAPPING` / `NONOVERLAPPING` indicate whether the segment data overlaps.
+
+### Example
+
+```
+curl -X GET http://192.168.10.24:8040/api/compaction/show?tablet_id=10015\&schema_hash=1294206575
+```
+
+## Triggering compaction manually
+
+(TODO)
+
diff --git a/content/_sources/documentation/cn/administrator-guide/http-actions/restore-tablet.md.txt b/content/_sources/documentation/cn/administrator-guide/http-actions/restore-tablet.md.txt
index 8175625..a6eff81 100644
--- a/content/_sources/documentation/cn/administrator-guide/http-actions/restore-tablet.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/http-actions/restore-tablet.md.txt
@@ -31,6 +31,6 @@ under the License.
 
     curl -X POST "http://hostname:8088/api/restore_tablet?tablet_id=123456\&schema_hash=1111111"
 
-##keyword
+## keyword
 
     RESTORE,TABLET,RESTORE,TABLET
diff --git a/content/_sources/documentation/cn/administrator-guide/load-data/broker-load-manual.md.txt b/content/_sources/documentation/cn/administrator-guide/load-data/broker-load-manual.md.txt
index 5616f47..a739231 100644
--- a/content/_sources/documentation/cn/administrator-guide/load-data/broker-load-manual.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/load-data/broker-load-manual.md.txt
@@ -157,7 +157,7 @@ Label 的另一个作用,是防止用户重复导入相同的数据。**强烈
 
 + negative
 
-    ```data_desc``` can also specify negated loading of data. This feature is mainly used when all aggregate columns in the table are of SUM type. If you want to revoke a previously loaded batch of data, the `negative` parameter can be used as the same batch of data. Doris will automatically negate this batch of data on the aggregate columns, so as to cancel out the same batch of data.
+    ```data_desc``` can also specify negated loading of data. This feature is mainly used when all aggregate columns in the table are of SUM type. If you want to revoke a previously loaded batch of data, you can load the same batch of data again with the `negative` parameter. Doris will automatically negate this batch of data on the aggregate columns, so as to cancel out the same batch of data.
     
 + partition
 
@@ -201,7 +201,7 @@ Label 的另一个作用,是防止用户重复导入相同的数据。**强烈
     
     The calculation formula is:
     
-    ``` (dpp.abnorm.ALL / (dpp.abnorm.ALL + dpp.norm.ALL ) ) > max_filter_ratio ```
+    ``` max_filter_ratio = (dpp.abnorm.ALL / (dpp.abnorm.ALL + dpp.norm.ALL ) ) ```
     
     ```dpp.abnorm.ALL``` is the number of rows with unqualified data quality, e.g. type mismatch, column count mismatch, length mismatch, and so on.
     
@@ -285,7 +285,7 @@ LoadFinishTime: 2019-07-27 11:50:16
 
 + JobId
 
-    The unique ID of the load job. Each load job's JobId is different and generated automatically by the system. Unlike the Label, the JobId is never repeated, whereas a Label can be reused after a load job fails.
+    The unique ID of the load job. Each load job's JobId is different and is automatically generated by the system. Unlike the Label, the JobId is never repeated, whereas a Label can be reused after a load job fails.
     
 + Label
 
@@ -315,7 +315,7 @@ LoadFinishTime: 2019-07-27 11:50:16
 
 + EtlInfo
 
-    Mainly shows the load data volume metrics ```unselected.rows``` , ```dpp.norm.ALL and dpp.abnorm.ALL```. From the first value users can tell how many rows were filtered out by the where condition, and the last two metrics verify whether the error rate of the current load job exceeds max\_filter\_ratio.
+    Mainly shows the load data volume metrics ```unselected.rows```, ```dpp.norm.ALL``` and ```dpp.abnorm.ALL```. From the first value users can tell how many rows were filtered out by the where condition, and the last two metrics verify whether the error rate of the current load job exceeds ```max_filter_ratio```.
 
     The sum of these three metrics is the total number of rows in the original data.
     
@@ -448,7 +448,7 @@ LoadFinishTime: 2019-07-27 11:50:16
         ```
         Expected maximum load file size = 14400s * 10M/s * number of BEs
         For example: the cluster has 10 BEs
-        Expected maximum load file size = 14400 * 10M/s * 10 = 1440000M ≈ 1440G
+        Expected maximum load file size = 14400s * 10M/s * 10 = 1440000M ≈ 1440G
         
         Note: a typical user environment may not reach 10M/s, so it is recommended to split files larger than 500G before loading them.
         
diff --git a/content/_sources/documentation/cn/administrator-guide/load-data/insert-into-manual.md.txt b/content/_sources/documentation/cn/administrator-guide/load-data/insert-into-manual.md.txt
index eab3618..c00f135 100644
--- a/content/_sources/documentation/cn/administrator-guide/load-data/insert-into-manual.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/load-data/insert-into-manual.md.txt
@@ -47,6 +47,21 @@ INSERT INTO tbl2 WITH LABEL label1 SELECT * FROM tbl3;
 INSERT INTO tbl1 VALUES ("qweasdzxcqweasdzxc"), ("a");
 ```
 
+**Note**
+
+When a `CTE (Common Table Expression)` is used as the query part of an insert operation, the `WITH LABEL` and column list parts must be specified. Examples:
+
+```
+INSERT INTO tbl1 WITH LABEL label1
+WITH cte1 AS (SELECT * FROM tbl1), cte2 AS (SELECT * FROM tbl2)
+SELECT k1 FROM cte1 JOIN cte2 WHERE cte1.k1 = 1;
+
+
+INSERT INTO tbl1 (k1)
+WITH cte1 AS (SELECT * FROM tbl1), cte2 AS (SELECT * FROM tbl2)
+SELECT k1 FROM cte1 JOIN cte2 WHERE cte1.k1 = 1;
+```
+
 The following mainly describes the parameters used in the load statement:
 
 + partition\_info
@@ -79,44 +94,101 @@ INSERT INTO tbl1 VALUES ("qweasdzxcqweasdzxc"), ("a");
     
 ### Load results
 
-Insert Into itself is a SQL command, so its return behavior is the same as that of a SQL command.
-
-If the load fails, the statement returns an execution failure. An example is as follows:
-
-```ERROR 1064 (HY000): all partitions have no load data. url: http://ip:port/api/_load_error_log?file=__shard_14/error_log_insert_stmt_f435264d82f342e4-a33764f5f0dfbf00_f435264d82f342e4_a33764f5f0dfbf00```
-
-The url can be used to look up the erroneous data; see the **Viewing error rows** section below.
-
-If the load succeeds, the statement returns an execution success. An example is as follows:
-
-```
-Query OK, 100 row affected, 0 warning (0.22 sec)
-```
+Insert Into itself is a SQL command, and its return result falls into the following cases depending on the outcome of the execution:
 
-If the user specified a Label, the Label is also returned:
-```
-Query OK, 100 row affected, 0 warning (0.22 sec)
-{'label':'user_specified_label'}
-```
+1. The result set is empty
 
-The load may be partially successful, in which case the Label field is appended. An example is as follows:
+    If the result set of the select statement corresponding to the insert is empty, the following is returned:
+    
+    ```
+    mysql> insert into tbl1 select * from empty_tbl;
+    Query OK, 0 rows affected (0.02 sec)
+    ```
+    
+    `Query OK` means the execution succeeded. `0 rows affected` means no data was loaded.
+    
+2. The result set is not empty
 
-```
-Query OK, 100 row affected, 1 warning (0.23 sec)
-{'label':'7d66c457-658b-4a3e-bdcf-8beee872ef2c'}
-```
+    When the result set is not empty, the returned result falls into the following cases:
+    
+    1. The insert succeeds and the data is visible:
 
-```
-Query OK, 100 row affected, 1 warning (0.23 sec)
-{'label':'user_specified_label'}
-```
+        ```
+        mysql> insert into tbl1 select * from tbl2;
+        Query OK, 4 rows affected (0.38 sec)
+        {'label':'insert_8510c568-9eda-4173-9e36-6adc7d35291c', 'status':'visible', 'txnId':'4005'}
+        
+        mysql> insert into tbl1 with label my_label1 select * from tbl2;
+        Query OK, 4 rows affected (0.38 sec)
+        {'label':'my_label1', 'status':'visible', 'txnId':'4005'}
+        
+        mysql> insert into tbl1 select * from tbl2;
+        Query OK, 2 rows affected, 2 warnings (0.31 sec)
+        {'label':'insert_f0747f0e-7a35-46e2-affa-13a235f4020d', 'status':'visible', 'txnId':'4005'}
+        
+        mysql> insert into tbl1 select * from tbl2;
+        Query OK, 2 rows affected, 2 warnings (0.31 sec)
+        {'label':'insert_f0747f0e-7a35-46e2-affa-13a235f4020d', 'status':'committed', 'txnId':'4005'}
+        ```
+        
+        `Query OK` means the execution succeeded. `4 rows affected` means a total of 4 rows were loaded. `2 warnings` is the number of rows that were filtered out.
+        
+        A json string is also returned:
+        
+        ```
+        {'label':'my_label1', 'status':'visible', 'txnId':'4005'}
+        {'label':'insert_f0747f0e-7a35-46e2-affa-13a235f4020d', 'status':'committed', 'txnId':'4005'}
+        {'label':'my_label1', 'status':'visible', 'txnId':'4005', 'err':'some other error'}
+        ```
+        
+        `label` is the label specified by the user or generated automatically. The Label identifies this Insert Into load job. Each load job has a Label that is unique within a single database.
+        
+        `status` indicates whether the loaded data is visible. If it is visible, `visible` is shown; if not, `committed` is shown.
+        
+        `txnId` is the id of the load transaction corresponding to this insert.
+        
+        The `err` field shows some other unexpected errors.
+        
+        To view the rows that were filtered out, users can run the following statement:
+        
+        ```
+        show load where label="xxx";
+        ```
+        
+        The URL in the returned result can be used to look up the erroneous data; see the **Viewing error rows** section below.
+                
+        **Data being invisible is a temporary state; this batch of data will eventually become visible.**
+        
+        The visibility state of this batch of data can be checked with the following statement:
+        
+        ```
+        show transaction where id=4005;
+        ```
+        
+        If the `TransactionStatus` column in the returned result is `visible`, the data is visible.
+        
+    2. The insert fails
 
-The affected value is the number of loaded rows, and warning is the number of failed rows. Users need to run the `SHOW LOAD WHERE LABEL="xxx";` command to obtain the url for viewing the error rows.
+        An execution failure means that no data was loaded successfully, and the following is returned:
+        
+        ```
+        mysql> insert into tbl1 select * from tbl2 where k1 = "a";
+        ERROR 1064 (HY000): all partitions have no load data. url: http://10.74.167.16:8042/api/_load_error_log?file=__shard_2/error_log_insert_stmt_ba8bb9e158e4879-ae8de8507c0bf8a2_ba8bb9e158e4879_ae8de8507c0bf8a2
+        ```
+        
+        `ERROR 1064 (HY000): all partitions have no load data` shows the reason for the failure. The url that follows can be used to look up the erroneous data; see the **Viewing error rows** section below.
+        
 
-If there is no data at all, success is still returned, and both affected and warning are 0.
+**In summary, the correct logic for handling the result of an insert operation is:**
 
-The Label identifies the Insert Into load job. Each load job has a Label that is unique within a single database. The Label of an Insert Into is generated by the system, and the user can use this Label to query the load status asynchronously via the show load command.
-    
+1. If the result is `ERROR 1064 (HY000)`, the load failed.
+2. If the result is `Query OK`, the execution succeeded.
+    1. If `rows affected` is 0, the result set was empty and no data was loaded.
+    2. If `rows affected` is greater than 0:
+        1. If `status` is `committed`, the data is not yet visible. Check the status with the `show transaction` statement until it becomes `visible`.
+        2. If `status` is `visible`, the data was loaded successfully.
+    3. If `warnings` is greater than 0, some rows were filtered out; the url for viewing the filtered rows can be obtained with the `show load` statement.
+        
 ## Related system configuration
 
 ### FE configuration
diff --git a/content/_sources/documentation/cn/administrator-guide/load-data/load-manual.md.txt b/content/_sources/documentation/cn/administrator-guide/load-data/load-manual.md.txt
index 7354467..b261687 100644
--- a/content/_sources/documentation/cn/administrator-guide/load-data/load-manual.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/load-data/load-manual.md.txt
@@ -36,7 +36,7 @@ Doris 支持多种导入方式。建议先完整阅读本文档,再根据所
 
 To suit different data loading needs, the Doris system provides 5 different load methods. Each load method supports different data sources and has its own usage mode (asynchronous or synchronous).
 
-All load methods support the csv data format. In addition, Broker load also supports the parquet data format.
+All load methods support the csv data format. In addition, Broker load also supports the parquet and orc data formats.
 
 For details of each load method, please refer to the operation manual of the individual load method.
 
@@ -99,10 +99,9 @@ Doris 支持多种导入方式。建议先完整阅读本文档,再根据所
 
 Doris 对所有导入方式提供原子性保证。既保证同一个导入作业内的数据,原子生效。不会出现仅导入部分数据的情况。
 
-同时,每一个导入作业都有一个由用户指定或者系统自动生成的 Label。Label 在一个 Database 内唯一。当一个 Label 对应的导入作业成功够,不可在重复使用该 Label 提交导入作业。如果 Label 对应的导入作业失败,则可以重复使用。
-
-用户可以通过 Label 机制,来保证 Label 对应的数据最多被导入一次,级 At-Most-Once 语义。
+同时,每一个导入作业都有一个由用户指定或者系统自动生成的 Label。Label 在一个 Database 内唯一。当一个 Label 对应的导入作业成功后,不可再重复使用该 Label 提交导入作业。如果 Label 对应的导入作业失败,则可以重复使用。
 
+用户可以通过 Label 机制,来保证 Label 对应的数据最多被导入一次,即At-Most-Once 语义。
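+
+例如,下面是一个 Label 生成策略的简单示意(Python 草图,按 表名_数据日期_批次号 的规则命名,该规则仅为假设):
+
+```
+# 示意:为每一批次数据生成唯一且固定的 Label
+def make_label(table, dt, batch_id):
+    return "{}_{}_{}".format(table, dt, batch_id)
+
+# 同一批次数据无论重试多少次,Label 都保持不变,从而借助 Label 机制实现 At-Most-Once
+print(make_label("tbl1", "20200228", 1))  # 输出: tbl1_20200228_1
+```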
 
 ## 同步和异步
 
@@ -110,7 +109,7 @@ Doris 目前的导入方式分为两类,同步和异步。如果是外部程
 
 ### 同步
 
-同步导入方式既用户创建导入任务,Doris 同步执行导入,执行完成后返回用户导入结果。用户可直接根据创建导入任务命令返回的结果同步判断导入是否成功。
+同步导入方式即用户创建导入任务,Doris 同步执行导入,执行完成后返回用户导入结果。用户可直接根据创建导入任务命令返回的结果同步判断导入是否成功。
 
 同步类型的导入方式有: **Stream load**,**Insert**。
 
@@ -123,7 +122,7 @@ Doris 目前的导入方式分为两类,同步和异步。如果是外部程
 *注意:如果用户使用的导入方式是同步返回的,且导入的数据量过大,则创建导入请求可能会花很长时间才能返回结果。*
 
 ### 异步
-异步导入方式既用户创建导入任务后,Doris 直接返回创建成功。**创建成功不代表数据已经导入**。导入任务会被异步执行,用户在创建成功后,需要通过轮询的方式发送查看命令查看导入作业的状态。如果创建失败,则可以根据失败信息,判断是否需要再次创建。
+异步导入方式即用户创建导入任务后,Doris 直接返回创建成功。**创建成功不代表数据已经导入**。导入任务会被异步执行,用户在创建成功后,需要通过轮询的方式发送查看命令查看导入作业的状态。如果创建失败,则可以根据失败信息,判断是否需要再次创建。
 
 异步类型的导入方式有:**Broker load**,**Multi load**。
 
@@ -157,7 +156,6 @@ Doris 目前的导入方式分为两类,同步和异步。如果是外部程
 3. 确定导入方式的类型:导入方式为同步或异步。比如 Broker load 为异步导入方式,则外部系统在提交创建导入后,必须调用查看导入命令,根据查看导入命令的结果来判断导入是否成功。
 4. 制定 Label 生成策略:Label 生成策略需满足,每一批次数据唯一且固定的原则。这样 Doris 就可以保证 At-Most-Once。
 5. 程序自身保证 At-Least-Once:外部系统需要保证自身的 At-Least-Once,这样就可以保证导入流程的 Exactly-Once。
-6. 
 
 ## 通用系统配置
 
@@ -203,7 +201,7 @@ Doris 目前的导入方式分为两类,同步和异步。如果是外部程
     
 * load\_process\_max\_memory\_limit\_bytes 和 load\_process\_max\_memory\_limit\_percent
 
-    这两个参数,限制了单个 Backend 上,可用于导入任务的内存上限。分别是最大内存和最大内存百分比。`load_process_max_memory_limit_percent` 默认为 80%,该值为 `mem_limit` 配置的 80%。即假设物理内存为 M,则默认导入内存限制为 M * 80% * 80%。
+    这两个参数,限制了单个 Backend 上,可用于导入任务的内存上限。分别是最大内存和最大内存百分比。`load_process_max_memory_limit_percent` 默认为 80,表示对 Backend 总内存限制的百分比(总内存限制 `mem_limit` 默认为 80%,表示对物理内存的百分比)。即假设物理内存为 M,则默认导入内存限制为 M * 80% * 80%。
 
     `load_process_max_memory_limit_bytes` 默认为 100GB。系统会在两个参数中取较小者,作为最终的 Backend 导入内存使用上限。
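+
+    例如,假设 Backend 所在机器的物理内存为 100GB,则默认的导入内存使用上限为 min(100GB * 80% * 80%, 100GB) = 64GB。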
 
diff --git a/content/_sources/documentation/cn/administrator-guide/load-data/stream-load-manual.md.txt b/content/_sources/documentation/cn/administrator-guide/load-data/stream-load-manual.md.txt
index 5ab9188..3010236 100644
--- a/content/_sources/documentation/cn/administrator-guide/load-data/stream-load-manual.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/load-data/stream-load-manual.md.txt
@@ -66,8 +66,7 @@ Stream load 通过 HTTP 协议提交和传输数据。这里通过 `curl` 命令
 ```
 curl --location-trusted -u user:passwd [-H ""...] -T data.file -XPUT http://fe_host:http_port/api/{db}/{table}/_stream_load
 
-Header 中支持如下属性:
-label, column_separator, columns, where, max_filter_ratio, partitions
+Header 中支持的属性见下面的 ‘导入任务参数’ 说明。
 格式为: -H "key1:value1"
 ```
 
@@ -129,7 +128,7 @@ Stream load 由于使用的是 HTTP 协议,所以所有导入任务有关的
     columns: c2,c1
     
     表达式变换例子:原始文件有两列,目标表也有两列(c1,c2)但是原始文件的两列均需要经过函数变换才能对应目标表的两列,则写法如下:
-    columns: tmp_c1, tmp_c2, c1 = year(tmp_c1), c2 = mouth(tmp_c2)
+    columns: tmp_c1, tmp_c2, c1 = year(tmp_c1), c2 = month(tmp_c2)
     其中 tmp_*是一个占位符,代表的是原始文件中的两个原始列。
     ```
 
@@ -249,7 +248,7 @@ Stream load 由于使用的是 HTTP 协议,所以所有导入任务有关的
 
     导入任务的超时时间(以秒为单位),导入任务在设定的 timeout 时间内未完成则会被系统取消,变成 CANCELLED。
     
-    默认的 timeout 时间为 600 秒。如果导入的源文件无法再规定时间内完成导入,用户可以在 stream load 请求中设置单独的超时时间。
+    默认的 timeout 时间为 600 秒。如果导入的源文件无法在规定时间内完成导入,用户可以在 stream load 请求中设置单独的超时时间。
 
     或者调整 FE 的参数```stream_load_default_timeout_second``` 来设置全局的默认超时时间。
 
diff --git a/content/_sources/documentation/cn/administrator-guide/operation/metadata-operation.md.txt b/content/_sources/documentation/cn/administrator-guide/operation/metadata-operation.md.txt
index f4e8dbc..29d59d2 100644
--- a/content/_sources/documentation/cn/administrator-guide/operation/metadata-operation.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/operation/metadata-operation.md.txt
@@ -25,7 +25,7 @@ under the License.
 
 ## 重要提示
 
-* 当前元数据的设计是无法向后兼容的。即如果新版本有新增的元数据结构变动(可以查看 FE 代码中的 `FeMetaVersion.java` 文件中是否有新增的 VERSION),那么在升级到新版本后,通常是无法在回滚到旧版本的。所以,在升级 FE 之前,请务必按照 [升级文档](../../installing/upgrade.md) 中的操作,测试元数据兼容性。
+* 当前元数据的设计是无法向后兼容的。即如果新版本有新增的元数据结构变动(可以查看 FE 代码中的 `FeMetaVersion.java` 文件中是否有新增的 VERSION),那么在升级到新版本后,通常是无法再回滚到旧版本的。所以,在升级 FE 之前,请务必按照 [升级文档](../../installing/upgrade.md) 中的操作,测试元数据兼容性。
 
 ## 元数据目录结构
 
@@ -60,7 +60,7 @@ under the License.
     
     你也可能会看到一个 `image.ckpt` 文件。这是一个正在生成的元数据镜像。通过 `du -sh` 命令应该可以看到这个文件大小在不断变大,说明镜像内容正在写入这个文件。当镜像写完后,会自动重名为一个新的 `image.xxxxx` 并替换旧的 image 文件。
     
-    只有角色为 Master 的 FE 才会主动定期生成 image 文件。每次生成完后,都会推送给其他非 Master 角色的 FE。当确认其他所有 FE 都收到这个 image 后,Master FE 会删除 bdbje 中就的元数据 journal。所以,如果 image 生成失败,或者 image 推送给其他 FE 失败时,都会导致 bdbje 中的数据不断累积。
+    只有角色为 Master 的 FE 才会主动定期生成 image 文件。每次生成完后,都会推送给其他非 Master 角色的 FE。当确认其他所有 FE 都收到这个 image 后,Master FE 会删除 bdbje 中旧的元数据 journal。所以,如果 image 生成失败,或者 image 推送给其他 FE 失败时,都会导致 bdbje 中的数据不断累积。
     
     `ROLE` 文件记录了 FE 的类型(FOLLOWER 或 OBSERVER),是一个文本文件。
     
@@ -168,7 +168,7 @@ under the License.
 
 ### 故障恢复
 
-FE 有可能因为某些原因出现无法启动 bdbje、FE 之间无法同步等问题。现象包括无法进行元数据写操作、没有 MASTER 等等。这时,我们需要手动操作来恢复 FE。手动恢复 FE 的大致原理,是先通过当前 `meta_dir` 中的元数据,启动一个新的 MASTER,然后在逐台添加其他 FE。请严格按照如下步骤操作:
+FE 有可能因为某些原因出现无法启动 bdbje、FE 之间无法同步等问题。现象包括无法进行元数据写操作、没有 MASTER 等等。这时,我们需要手动操作来恢复 FE。手动恢复 FE 的大致原理,是先通过当前 `meta_dir` 中的元数据,启动一个新的 MASTER,然后再逐台添加其他 FE。请严格按照如下步骤操作:
 
 1. 首先,停止所有 FE 进程,同时停止一切业务访问。保证在元数据恢复期间,不会因为外部访问导致其他不可预期的问题。
 
@@ -258,7 +258,7 @@ FE 目前有以下几个端口
 1. 集群停止所有 Load,Create,Alter 操作
 2. 执行以下命令,从 Master FE 内存中 dump 出元数据:(下面称为 image_mem)
 ```
-curl -u $root_user:$password http://$master_hostname:8410/dump
+curl -u $root_user:$password http://$master_hostname:8030/dump
 ```
 3. 用 image_mem 文件替换掉 OBSERVER FE 节点上`meta_dir/image`目录下的 image 文件,重启 OBSERVER FE 节点,
 验证 image_mem 文件的完整性和正确性(可以在 FE Web 页面查看 DB 和 Table 的元数据是否正常,查看fe.log 是否有异常,是否在正常 replayed journal)
diff --git a/content/_sources/documentation/cn/administrator-guide/operation/multi-tenant.md.txt b/content/_sources/documentation/cn/administrator-guide/operation/multi-tenant.md.txt
index e00dcc6..83379d4 100644
--- a/content/_sources/documentation/cn/administrator-guide/operation/multi-tenant.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/operation/multi-tenant.md.txt
@@ -28,7 +28,7 @@ Doris 作为一款 PB 级别的在线报表与多维分析数据库,对外通
 - 一个用户的查询或者查询引起的bug经常会影响其他用户。
 - 实际生产环境单机只能部署一个BE进程。而多个BE可以更好的解决胖节点问题。并且对于join、聚合操作可以提供更高的并发度。
 
-综合以上三点,Doris需要新的多租户方案,既能做到较好的资源隔离和故障隔离,同时也能减少维护的代价,满足共有云和私有云的需求。
+综合以上三点,Doris需要新的多租户方案,既能做到较好的资源隔离和故障隔离,同时也能减少维护的代价,满足公有云和私有云的需求。
 
 ## 设计原则
 
@@ -41,7 +41,7 @@ Doris 作为一款 PB 级别的在线报表与多维分析数据库,对外通
 - FE: Frontend,即 Doris 中用于元数据管理即查询规划的模块。
 - BE: Backend,即 Doris 中用于存储和查询数据的模块。
 - Master: FE 的一种角色。一个Doris集群只有一个Master,其他的FE为Observer或者Follower。
-- instance:一个 BE 进程及时一个 instance。
+- instance:一个 BE 进程即是一个 instance。
 - host:单个物理机
 - cluster:即一个集群,由多个instance组成。
 - 租户:一个cluster属于一个租户。cluster和租户之间是一对一关系。
@@ -61,7 +61,7 @@ Doris 作为一款 PB 级别的在线报表与多维分析数据库,对外通
 1. cluster表示一个虚拟的集群,由多个BE的instance组成。多个cluster共享FE。
 2. 一个host上可以启动多个instance。cluster创建时,选取任意指定数量的instance,组成一个cluster。
 3. 创建cluster的同时,会创建一个名为superuser的账户,隶属于该cluster。superuser可以对cluster进行管理、创建数据库、分配权限等。
-4. Doris启动后,汇创建一个默认的cluster:default_cluster。如果用户不希望使用多cluster的功能,则会提供这个默认的cluster,并隐藏多cluster的其他操作细节。
+4. Doris启动后,会创建一个默认的cluster:default_cluster。如果用户不希望使用多cluster的功能,则会提供这个默认的cluster,并隐藏多cluster的其他操作细节。
 
 具体架构如下图:
 ![](../../../../resources/images/multi_tenant_arch.png)
@@ -198,7 +198,7 @@ Doris 作为一款 PB 级别的在线报表与多维分析数据库,对外通
 
     为了保证高可用,每个分片的副本必需在不同的机器上。所以建表时,选择副本所在be的策略为在每个host上随机选取一个be。然后从这些be中随机选取所需副本数量的be。总体上做到每个机器上分片分布均匀。
     
-    因此,加入需要创建一个3副本的分片,即使cluster包含3个或以上的instance,但是只有2个或以下的host,依然不能创建该分片。
+    因此,假如需要创建一个3副本的分片,即使cluster包含3个或以上的instance,但是只有2个或以下的host,依然不能创建该分片。
     
 7. 负载均衡
 
diff --git a/content/_sources/documentation/cn/administrator-guide/operation/tablet-meta-tool.md.txt b/content/_sources/documentation/cn/administrator-guide/operation/tablet-meta-tool.md.txt
index 0f1de88..7cbe002 100644
--- a/content/_sources/documentation/cn/administrator-guide/operation/tablet-meta-tool.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/operation/tablet-meta-tool.md.txt
@@ -27,7 +27,7 @@ under the License.
 
 其中 http 接口仅用于在线的查看 tablet 的元数据,可以在 BE 进程运行的状态下使用。
 
-而 meta\_tool 工具则仅用于离线的各类元数据管理操作,必须先停止BE进城后,才可使用。
+而 meta\_tool 工具则仅用于离线的各类元数据管理操作,必须先停止BE进程后,才可使用。
 
 meta\_tool 工具存放在 BE 的 lib/ 目录下。
 
diff --git a/content/_sources/documentation/cn/administrator-guide/operation/tablet-repair-and-balance.md.txt b/content/_sources/documentation/cn/administrator-guide/operation/tablet-repair-and-balance.md.txt
index 1c47845..17f2a8c 100644
--- a/content/_sources/documentation/cn/administrator-guide/operation/tablet-repair-and-balance.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/operation/tablet-repair-and-balance.md.txt
@@ -208,7 +208,7 @@ TabletScheduler 里等待被调度的分片会根据状态不同,赋予不同
 
 ## 副本均衡
 
-Doris 会自动进行集群内的副本均衡。均衡的主要思想,是对某些分片,先在低负载的节点上创建一个副本,然后再删除这些分片在高负载节点上的副本。同时,因为不同存储介质的存在,在同一个集群内的不同 BE 节点上,可能存在或不存在一种或两种存储介质。我们要求存储介质为 A 的分片在均衡后,尽量依然存储在存储介质 A 中。所以我们根据存储介质,对集群的 BE 节点进行划分。然后针对不同的存储介质的 BE 节点集合,进行负载均衡调度。
+Doris 会自动进行集群内的副本均衡。均衡的主要思想,是对某些分片,先在低负载的节点上创建一个副本,然后再删除这些分片在高负载节点上的副本。同时,因为不同存储介质的存在,在同一个集群内的不同 BE 节点上,可能存在一种或两种存储介质。我们要求存储介质为 A 的分片在均衡后,尽量依然存储在存储介质 A 中。所以我们根据存储介质,对集群的 BE 节点进行划分。然后针对不同的存储介质的 BE 节点集合,进行负载均衡调度。
 
 同样,副本均衡会保证不会将同一个 Tablet 的副本部署在同一个 host 的 BE 上。
 
@@ -504,7 +504,7 @@ TabletScheduler 在每轮调度时,都会通过 LoadBalancer 来选择一定
 
     以下命令,可以查看通过 `ADMIN REPAIR TABLE` 命令设置的优先修复的表或分区。
     
-    `SHOW PROC '/cluster_balance/priority_repair'`;
+    `SHOW PROC '/cluster_balance/priority_repair';`
     
     其中 `RemainingTimeMs` 表示,这些优先修复的内容,将在这个时间后,被自动移出优先修复队列。以防止优先修复一直失败导致资源被占用。
     
@@ -512,7 +512,7 @@ TabletScheduler 在每轮调度时,都会通过 LoadBalancer 来选择一定
 
 我们收集了 TabletChecker 和 TabletScheduler 在运行过程中的一些统计信息,可以通过以下命令查看:
 
-`SHOW PROC '/cluster_balance/sched_stat'`;
+`SHOW PROC '/cluster_balance/sched_stat';`
 
 ```
 +---------------------------------------------------+-------------+
@@ -598,7 +598,7 @@ TabletScheduler 在每轮调度时,都会通过 LoadBalancer 来选择一定
 
 * balance\_load\_score\_threshold
 
-    * 说明:集群均衡的阈值。默认为 0.1,即 10%。当一个 BE 节点的 load core,不高于或不低于平均 load core 的 10% 时,我们认为这个节点是均衡的。如果想让集群负载更加平均,可以适当调低这个参数。
+    * 说明:集群均衡的阈值。默认为 0.1,即 10%。当一个 BE 节点的 load score,不高于或不低于平均 load score 的 10% 时,我们认为这个节点是均衡的。如果想让集群负载更加平均,可以适当调低这个参数。
     * 默认值:0.1
     * 重要性:中
 
diff --git a/content/_sources/documentation/cn/administrator-guide/privilege.md.txt b/content/_sources/documentation/cn/administrator-guide/privilege.md.txt
index 040c199..8c08ac2 100644
--- a/content/_sources/documentation/cn/administrator-guide/privilege.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/privilege.md.txt
@@ -27,7 +27,7 @@ Doris 新的权限管理系统参照了 Mysql 的权限管理机制,做到了
 
     在权限系统中,一个用户被识别为一个 User Identity(用户标识)。用户标识由两部分组成:username 和 userhost。其中 username 为用户名,由英文大小写组成。userhost 表示该用户链接来自的 IP。user_identity 以 username@'userhost' 的方式呈现,表示来自 userhost 的 username。
     
-    user_identity 的另一种表现方式为 username@['domain'],其中 domain 为域名,可以通过 DNS 会 BNS(百度名字服务)解析为一组 ip。最终表现为一组 username@'userhost',所以后面我们统一使用 username@'userhost' 来表示。
+    user_identity 的另一种表现方式为 username@['domain'],其中 domain 为域名,可以通过 DNS 或 BNS(百度名字服务)解析为一组 ip。最终表现为一组 username@'userhost',所以后面我们统一使用 username@'userhost' 来表示。
     
 2. 权限 Privilege
 
@@ -53,7 +53,7 @@ Doris 新的权限管理系统参照了 Mysql 的权限管理机制,做到了
 6. 删除角色:DROP ROLE
 7. 查看当前用户权限:SHOW GRANTS
 8. 查看所有用户权限:SHOW ALL GRANTS
-9. 查看已创建的角色:SHOW ROELS
+9. 查看已创建的角色:SHOW ROLES
 10. 查看用户属性:SHOW PROPERTY
 
 关于以上命令的详细帮助,可以通过 mysql 客户端连接 Doris 后,使用 help + command 获取帮助。如 `HELP CREATE USER`。
@@ -101,7 +101,7 @@ Doris 目前支持以下几种权限
     
 ## ADMIN/GRANT 权限说明
 
-ADMIN\_PRIV 和 GRANT\_PRIV 权限同时拥有**“授予权限”**的权限,较为特殊。这里对和这两个权限相关的操作逐一说明。
+ADMIN\_PRIV 和 GRANT\_PRIV 权限同时拥有**授予权限**的权限,较为特殊。这里对和这两个权限相关的操作逐一说明。
 
 1. CREATE USER
 
@@ -188,6 +188,14 @@ ADMIN\_PRIV 和 GRANT\_PRIV 权限同时拥有**“授予权限”**的权限,
 
 8. 拥有 GLOBAL 层级 GRANT_PRIV 其实等同于拥有 ADMIN\_PRIV,因为该层级的 GRANT\_PRIV 有授予任意权限的权限,请谨慎使用。
 
+9. `current_user()` 和 `user()`
+
+    用户可以通过 `SELECT current_user();` 和 `SELECT user();` 分别查看 `current_user` 和 `user`。其中 `current_user` 表示当前用户是以哪种身份通过认证系统的,而 `user` 则是用户当前实际的 `user_identity`。举例说明:
+
+    假设创建了 `user1@'192.%'` 这个用户,然后一个来自 192.168.10.1 的用户 user1 登录了系统,则此时的 `current_user` 为 `user1@'192.%'`,而 `user` 为 `user1@'192.168.10.1'`。
+
+    所有的权限都是赋予某一个 `current_user` 的,真实用户拥有对应的 `current_user` 的所有权限。
+
 ## 最佳实践
 
 这里举例一些 Doris 权限系统的使用场景。
@@ -202,6 +210,8 @@ ADMIN\_PRIV 和 GRANT\_PRIV 权限同时拥有**“授予权限”**的权限,
 
     一个集群内有多个业务,每个业务可能使用一个或多个数据。每个业务需要管理自己的用户。在这种场景下。管理员用户可以为每个数据库创建一个拥有 DATABASE 层级 GRANT 权限的用户。该用户仅可以对用户进行指定的数据库的授权。
 
+3. 黑名单
 
+    Doris 本身不支持黑名单,只有白名单功能,但我们可以通过某些方式来模拟黑名单。假设先创建了名为 `user@'192.%'` 的用户,表示允许来自 `192.*` 的用户登录。此时如果想禁止来自 `192.168.10.1` 的用户登录,则可以再创建一个 `user@'192.168.10.1'` 的用户,并设置一个新的密码。因为 `192.168.10.1` 的优先级高于 `192.%`,所以来自 `192.168.10.1` 的用户将不能再使用旧密码进行登录。
 
     
diff --git a/content/_sources/documentation/cn/administrator-guide/variables.md.txt b/content/_sources/documentation/cn/administrator-guide/variables.md.txt
index 5b25a53..fae4666 100644
--- a/content/_sources/documentation/cn/administrator-guide/variables.md.txt
+++ b/content/_sources/documentation/cn/administrator-guide/variables.md.txt
@@ -92,7 +92,7 @@ SET forward_to_master = concat('tr', 'u', 'e');
 
     用于指定在查询执行过程中,各个节点传输的单个数据包的行数。默认一个数据包的行数为 1024 行,即源端节点每产生 1024 行数据后,打包发给目的节点。
     
-    较大的行数,会在扫描大数据量场景下提升查询的吞吐,但在可能会在小查询场景下增加查询延迟。同时,也会增加查询的内存开销。建议设置范围 1024 至 4096。
+    较大的行数,会在扫描大数据量场景下提升查询的吞吐,但可能会在小查询场景下增加查询延迟。同时,也会增加查询的内存开销。建议设置范围 1024 至 4096。
     
 * `character_set_client`
 
@@ -164,7 +164,7 @@ SET forward_to_master = concat('tr', 'u', 'e');
     
     当前受该参数影响的命令如下:
     
-    1. `SHOW FRONTEND;`
+    1. `SHOW FRONTENDS;`
 
         转发到 Master 可以查看最后一次心跳信息。
     
@@ -172,7 +172,7 @@ SET forward_to_master = concat('tr', 'u', 'e');
 
         转发到 Master 可以查看启动时间、最后一次心跳信息、磁盘容量信息。
         
-    3. `SHOW BROKERS;`
+    3. `SHOW BROKER;`
 
         转发到 Master 可以查看启动时间、最后一次心跳信息。
         
@@ -199,7 +199,7 @@ SET forward_to_master = concat('tr', 'u', 'e');
     默认情况下,只有在查询发生错误时,BE 才会发送 profile 给 FE,用于查看错误。正常结束的查询不会发送 profile。发送 profile 会产生一定的网络开销,对高并发查询场景不利。
     当用户希望对一个查询的 profile 进行分析时,可以将这个变量设为 true 后,发送查询。查询结束后,可以通过在当前连接的 FE 的 web 页面查看到 profile:
     
-    `fe_host:fe_http:port/query`
+    `fe_host:fe_http_port/query`
     
     其中会显示最近100条,开启 `is_report_success` 的查询的 profile。
     
@@ -253,7 +253,7 @@ SET forward_to_master = concat('tr', 'u', 'e');
     
     一个查询计划通常会产生一组 scan range,即需要扫描的数据范围。这些数据分布在多个 BE 节点上。一个 BE 节点会有一个或多个 scan range。默认情况下,每个 BE 节点的一组 scan range 只由一个执行实例处理。当机器资源比较充裕时,可以将增加该变量,让更多的执行实例同时处理一组 scan range,从而提升查询效率。
     
-    修改该参数仅对扫描节点的效率提升有帮助。较大数值可能会消耗更多的机器资源,如CPU、内存、磁盘IO。
+    而 scan 实例的数量决定了上层其他执行节点,如聚合节点,join 节点的数量。因此相当于增加了整个查询计划执行的并发度。修改该参数会对大查询效率提升有帮助,但较大数值会消耗更多的机器资源,如CPU、内存、磁盘IO。
     
 * `query_cache_size`
 
@@ -306,3 +306,11 @@ SET forward_to_master = concat('tr', 'u', 'e');
 * `wait_timeout`
 
     用于设置空闲连接的连接时长。当一个空闲连接在该时长内与 Doris 没有任何交互,则 Doris 会主动断开这个链接。默认为 8 小时,单位为秒。
+
+* `default_rowset_type`
+
+    用于设置计算节点存储引擎默认的存储格式。当前支持的存储格式包括:alpha/beta。
+
+* `use_v2_rollup`
+
+    用于控制查询使用segment v2存储格式的rollup索引获取数据。该变量用于上线segment v2的时候,进行验证使用;其他情况,不建议使用。
diff --git a/content/_sources/documentation/cn/community/pull-request.md.txt b/content/_sources/documentation/cn/community/pull-request.md.txt
index eb4cc73..e9f79dc 100644
--- a/content/_sources/documentation/cn/community/pull-request.md.txt
+++ b/content/_sources/documentation/cn/community/pull-request.md.txt
@@ -29,7 +29,7 @@ under the License.
 
 ### 2. 配置git和提交修改
 
-####(1)将代码克隆到本地:
+#### (1)将代码克隆到本地:
 
 ```
 git clone https://github.com/<your_github_name>/incubator-doris.git
@@ -39,14 +39,14 @@ git clone https://github.com/<your_github_name>/incubator-doris.git
   
 clone 完成后,origin 会默认指向 github 上的远程 fork 地址。
 
-####(2)将 apache/incubator-doris 添加为本地仓库的远程分支 upstream:
+#### (2)将 apache/incubator-doris 添加为本地仓库的远程分支 upstream:
 
 ```
 cd  incubator-doris
 git remote add upstream https://github.com/apache/incubator-doris.git
 ```
 
-####(3)检查远程仓库设置:
+#### (3)检查远程仓库设置:
 
 ```
 git remote -v
@@ -56,7 +56,7 @@ upstream  https://github.com/apache/incubator-doris.git (fetch)
 upstream  https://github.com/apache/incubator-doris.git (push)
 ```
 
-####(4)新建分支以便在分支上做修改:
+#### (4)新建分支以便在分支上做修改:
 
 ```
 git checkout -b <your_branch_name>
@@ -66,7 +66,7 @@ git checkout -b <your_branch_name>
 
 创建完成后可进行代码更改。
 
-####(5)提交代码到远程分支:
+#### (5)提交代码到远程分支:
 
 ```
 git commit -a -m "<you_commit_message>"
@@ -115,25 +115,25 @@ git push origin <your_branch_name>
 
 提交PR时的代码冲突一般是由于多人编辑同一个文件引起的,解决冲突主要通过以下步骤即可:
 
-####(1)切换至主分支
+#### (1)切换至主分支
 
 ``` 
 git checkout master
 ```
    
-####(2)同步远端主分支至本地
+#### (2)同步远端主分支至本地
 
 ``` 
 git pull upstream master
 ```
    
-####(3)切换回刚才的分支(假设分支名为fix)
+#### (3)切换回刚才的分支(假设分支名为fix)
 
 ``` 
 git checkout fix
 ```
    
-####(4)进行rebase
+#### (4)进行rebase
    
 ``` 
 git rebase -i master
@@ -154,7 +154,7 @@ git push -f origin fix
    
 ### 5. 一个例子
 
-####(1)对于已经配置好 upstream 的本地分支 fetch 到最新代码
+#### (1)对于已经配置好 upstream 的本地分支 fetch 到最新代码
 
 ```
 $ git branch
@@ -170,7 +170,7 @@ From https://github.com/apache/incubator-doris
    9c36200..0c4edc2  master     -> upstream/master
 ```
 
-####(2)进行rebase
+#### (2)进行rebase
 
 ```
 $ git rebase upstream/master  
@@ -178,7 +178,7 @@ First, rewinding head to replay your work on top of it...
 Fast-forwarded master to upstream/master.
 ```
 
-####(3)检查看是否有别人提交未同步到自己 repo 的提交
+#### (3)检查看是否有别人提交未同步到自己 repo 的提交
 
 ```
 $ git status
@@ -192,7 +192,7 @@ $ git status
 nothing added to commit but untracked files present (use "git add" to track)
 ```
 
-####(4)合并其他人提交的代码到自己的 repo
+#### (4)合并其他人提交的代码到自己的 repo
 
 ```
 $ git push origin master
@@ -206,7 +206,7 @@ To https://lide-reed:fc35ff925bd8fd6629be3f6412bacee99d4e5f97@github.com/lide-re
    9c36200..0c4edc2  master -> master
 ```
 
-####(5)新建分支,准备开发
+#### (5)新建分支,准备开发
 
 ```
 $ git checkout -b my_branch
@@ -217,13 +217,13 @@ $ git branch
 * my_branch
 ```
 
-####(6)代码修改完成后,准备提交
+#### (6)代码修改完成后,准备提交
 
 ```
 $ git add -u
 ```
 
-####(7)填写 message 并提交到本地的新建分支上
+#### (7)填写 message 并提交到本地的新建分支上
 
 ```
 $ git commit -m "Fix a typo"
@@ -231,7 +231,7 @@ $ git commit -m "Fix a typo"
  1 files changed, 2 insertions(+), 2 deletions(-)
 ```
 
-####(8)将分支推到 GitHub 远端自己的 repo 中
+#### (8)将分支推到 GitHub 远端自己的 repo 中
 
 ```
 $ git push origin my_branch
diff --git a/content/_sources/documentation/cn/developer-guide/debug-tool.md.txt b/content/_sources/documentation/cn/developer-guide/debug-tool.md.txt
index b0f44f7..c761e78 100644
--- a/content/_sources/documentation/cn/developer-guide/debug-tool.md.txt
+++ b/content/_sources/documentation/cn/developer-guide/debug-tool.md.txt
@@ -239,7 +239,7 @@ pprof --svg --seconds=60 http://be_host:be_webport/pprof/profile > be.svg
 
 ### perf + flamegragh
 
-这个是相当通用的一种CPU分析方式,相比于`pprof`,这中方式必须要求能够登陆到分析对象的物理机上。但是相比于pprof只能定时采点,perf是能够通过不同的事件来完成堆栈信息采集的。具体的的使用方式如下:
+这个是相当通用的一种CPU分析方式,相比于`pprof`,这种方式必须要求能够登录到分析对象的物理机上。但是相比于pprof只能定时采点,perf是能够通过不同的事件来完成堆栈信息采集的。具体的使用方式如下:
 
 ```
 perf record -g -p be_pid -- sleep 60
diff --git a/content/_sources/documentation/cn/developer-guide/format-code.md.txt b/content/_sources/documentation/cn/developer-guide/format-code.md.txt
new file mode 100644
index 0000000..bfa3a37
--- /dev/null
+++ b/content/_sources/documentation/cn/developer-guide/format-code.md.txt
@@ -0,0 +1,66 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# 代码格式化
+为了自动格式化代码,推荐使用clang-format进行代码格式化。
+
+## 代码风格定制
+Doris的代码风格在Google Style的基础上稍有改动,定制为.clang-format文件,位于Doris根目录。
+
+目前,.clang-format配置文件适配clang-format-8.0.1以上的版本。
+
+## 环境准备
+需要下载安装clang-format,也可使用IDE或Editor提供的clang-format插件,下面分别介绍。
+
+### 下载安装clang-format
+Ubuntu: `apt-get install clang-format` 
+
+当前版本为10.0,也可指定旧版本,例如: `apt-get install clang-format-9`
+
+Centos 7: 
+
+centos yum安装的clang-format版本过老,支持的StyleOption太少,建议源码编译10.0版本。
+
+### clang-format插件
+Clion IDE可使用插件"ClangFormat",`File->Setting->Plugins`搜索下载。但版本无法和
+clang-format程序的版本匹配,从支持的StyleOption上看,应该是低于clang-format-9.0。
+
+## 使用方式
+
+### 命令行运行
+`clang-format --style=file -i $File$` 
+
+`--style=file` 就会自动找到 .clang-format 文件,根据文件中的 Option 配置来格式化代码。
+
+批量文件clang-format时,需注意过滤不应该格式化的文件。例如,只格式化*.h/*.cpp,并排除某些文件夹:
+
+`find . -type f -not \( -wholename ./env/* \) -regextype posix-egrep -regex ".*\.(cpp|h)" | xargs clang-format -i -style=file`
+
+### 在IDE或Editor中使用clang-format
+#### Clion
+Clion如果使用插件,点击`Reformat Code`即可。
+#### VS Code
+VS Code需安装扩展程序Clang-Format,但需要自行提供clang-format执行程序的位置。
+
+```
+"clang-format.executable":  "$clang-format path$",
+"clang-format.style": "file"
+```
+然后,点击`Format Document`即可。
\ No newline at end of file
diff --git a/content/_sources/documentation/cn/extending-doris/user-defined-function.md.txt b/content/_sources/documentation/cn/extending-doris/user-defined-function.md.txt
index 654f716..eb489dd 100644
--- a/content/_sources/documentation/cn/extending-doris/user-defined-function.md.txt
+++ b/content/_sources/documentation/cn/extending-doris/user-defined-function.md.txt
@@ -75,7 +75,7 @@ under the License.
 
 ### 编写CMakeLists.txt
 
-基于上一步生成的`headers|libs`,用户可以使用`CMakeLists`等工具引入该依赖;在`CMakeLists`中,可以通过向`CMAKE_CXX_FLAGS`添加`-I|L`分别指定`headers|libs`路径;然后使用`add_library`添加动态库。例如,在`be/src/udf_samples/CMakeLists.txt`中,使用`add_library(udfsample SHARED udf_sample.cpp)`增加了一个`udfsample`动态库。后面需要写上涉及的所有源文件(不包含头文件)。
+基于上一步生成的`headers|libs`,用户可以使用`CMakeLists`等工具引入该依赖;在`CMakeLists`中,可以通过向`CMAKE_CXX_FLAGS`添加`-I|L`分别指定`headers|libs`路径;然后使用`add_library`添加动态库。例如,在`be/src/udf_samples/CMakeLists.txt`中,使用`add_library(udfsample SHARED udf_sample.cpp)`和`target_link_libraries(udfsample -static-libstdc++ -static-libgcc)`增加了一个`udfsample`动态库。后面需要写上涉及的所有源文件(不包含头文件)。
 
 ### 执行编译
 
diff --git a/content/_sources/documentation/cn/getting-started/basic-usage.md.txt b/content/_sources/documentation/cn/getting-started/basic-usage.md.txt
index de83c4b..56fc72e 100644
--- a/content/_sources/documentation/cn/getting-started/basic-usage.md.txt
+++ b/content/_sources/documentation/cn/getting-started/basic-usage.md.txt
@@ -228,6 +228,7 @@ MySQL> DESC table2;
 > 3. 数据导入可以导入指定的 Partition。详见 `HELP LOAD`。
 > 4. 可以动态修改表的 Schema。
 > 5. 可以对 Table 增加上卷表(Rollup)以提高查询性能,这部分可以参见高级使用指南关于 Rollup 的描述。
+> 6. 表的列的Null属性默认为true,会对查询性能有一定的影响。
 
 ### 2.4 导入数据
 
@@ -244,7 +245,7 @@ curl --location-trusted -u test:test -H "label:table1_20170707" -H "column_separ
 ```
 
 > 1. FE_HOST 是任一 FE 所在节点 IP,8030 为 fe.conf 中的 http_port。
-> 2. 可以使用任一 BE 的 IP,以及 be.conf 中的 webserver_port 左右连接目标进行导入。如:`BE_HOST:8040`
+> 2. 可以使用任一 BE 的 IP,以及 be.conf 中的 webserver_port 进行导入。如:`BE_HOST:8040`
 
 本地文件 `table1_data` 以 `,` 作为数据之间的分隔,具体内容如下:
 
diff --git a/content/_sources/documentation/cn/getting-started/best-practice.md.txt b/content/_sources/documentation/cn/getting-started/best-practice.md.txt
index 96f624e..3b58699 100644
--- a/content/_sources/documentation/cn/getting-started/best-practice.md.txt
+++ b/content/_sources/documentation/cn/getting-started/best-practice.md.txt
@@ -25,7 +25,7 @@ under the License.
 
 Doris 数据模型上目前分为三类: AGGREGATE KEY, UNIQUE KEY, DUPLICATE KEY。三种模型中数据都是按KEY进行排序。
 
-1. AGGREGATE KEY
+1.1.1. AGGREGATE KEY
 
     AGGREGATE KEY相同时,新旧记录进行聚合,目前支持的聚合函数有SUM, MIN, MAX, REPLACE。
     
@@ -43,7 +43,7 @@ Doris 数据模型上目前分为三类: AGGREGATE KEY, UNIQUE KEY, DUPLICATE KE
     DISTRIBUTED BY HASH(siteid) BUCKETS 10;
     ```
     
-2. UNIQUE KEY
+1.1.2. UNIQUE KEY
 
     UNIQUE KEY 相同时,新记录覆盖旧记录。目前 UNIQUE KEY 实现上和 AGGREGATE KEY 的 REPLACE 聚合方法一样,二者本质上相同。适用于有更新需求的分析业务。
     
@@ -59,7 +59,7 @@ Doris 数据模型上目前分为三类: AGGREGATE KEY, UNIQUE KEY, DUPLICATE KE
     DISTRIBUTED BY HASH(orderid) BUCKETS 10;
     ```
     
-3. DUPLICATE KEY
+1.1.3. DUPLICATE KEY
 
     只指定排序列,相同的行不会合并。适用于数据无需提前聚合的分析业务。
     
@@ -88,11 +88,11 @@ Doris 数据模型上目前分为三类: AGGREGATE KEY, UNIQUE KEY, DUPLICATE KE
 
 使用过程中,建议用户尽量使用 Star Schema 区分维度表和指标表。频繁更新的维度表也可以放在 MySQL 外部表中。而如果只有少量更新, 可以直接放在 Doris 中。在 Doris 中存储维度表时,可对维度表设置更多的副本,提升 Join 的性能。
  
-### 1.4 分区和分桶
+### 1.3 分区和分桶
 
 Doris 支持两级分区存储, 第一层为 RANGE 分区(partition), 第二层为 HASH 分桶(bucket)。
 
-1. RANGE分区(partition)
+1.3.1. RANGE分区(partition)
 
     RANGE分区用于将数据划分成不同区间, 逻辑上可以理解为将原始表划分成了多个子表。业务上,多数用户会选择采用按时间进行partition, 让时间进行partition有以下好处:
     
@@ -100,14 +100,14 @@ Doris 支持两级分区存储, 第一层为 RANGE 分区(partition), 第二层
     * 可用上Doris分级存储(SSD + SATA)的功能
     * 按分区删除数据时,更加迅速
 
-2. HASH分桶(bucket)
+1.3.2. HASH分桶(bucket)
 
     根据hash值将数据划分成不同的 bucket。
     
     * 建议采用区分度大的列做分桶, 避免出现数据倾斜
     * 为方便数据恢复, 建议单个 bucket 的 size 不要太大, 保持在 10GB 以内, 所以建表或增加 partition 时请合理考虑 bucket 数目, 其中不同 partition 可指定不同的 buckets 数。
 
-### 1.5 稀疏索引和 Bloom Filter
+### 1.4 稀疏索引和 Bloom Filter
 
 Doris对数据进行有序存储, 在数据有序的基础上为其建立稀疏索引,索引粒度为 block(1024行)。
 
@@ -117,13 +117,13 @@ Doris对数据进行有序存储, 在数据有序的基础上为其建立稀疏
 * 这其中有一个特殊的地方,就是 varchar 类型的字段。varchar 类型字段只能作为稀疏索引的最后一个字段。索引会在 varchar 处截断, 因此 varchar 如果出现在前面,可能索引的长度可能不足 36 个字节。具体可以参阅 [数据模型、ROLLUP 及前缀索引](./data-model-rollup.md)。
 * 除稀疏索引之外, Doris还提供bloomfilter索引, bloomfilter索引对区分度比较大的列过滤效果明显。 如果考虑到varchar不能放在稀疏索引中, 可以建立bloomfilter索引。
 
-### 1.6 物化视图(rollup)
+### 1.5 物化视图(rollup)
 
 Rollup 本质上可以理解为原始表(Base Table)的一个物化索引。建立 Rollup 时可只选取 Base Table 中的部分列作为 Schema。Schema 中的字段顺序也可与 Base Table 不同。
 
 下列情形可以考虑建立 Rollup:
 
-1. Base Table 中数据聚合度不高。
+1.5.1. Base Table 中数据聚合度不高。
 
 这一般是因 Base Table 有区分度比较大的字段而导致。此时可以考虑选取部分列,建立 Rollup。
     
@@ -139,7 +139,7 @@ siteid 可能导致数据聚合度不高,如果业务方经常根据城市统
 ALTER TABLE site_visit ADD ROLLUP rollup_city(city, pv);
 ```
     
-2. Base Table 中的前缀索引无法命中
+1.5.2. Base Table 中的前缀索引无法命中
 
 这一般是 Base Table 的建表方式无法覆盖所有的查询模式。此时可以考虑调整列顺序,建立 Rollup。
 
@@ -159,7 +159,7 @@ ALTER TABLE session_data ADD ROLLUP rollup_brower(brower,province,ip,url) DUPLIC
 
 Doris中目前进行 Schema Change 的方式有三种:Sorted Schema Change,Direct Schema Change, Linked Schema Change。
 
-1. Sorted Schema Change
+2.1. Sorted Schema Change
 
     改变了列的排序方式,需对数据进行重新排序。例如删除排序列中的一列, 字段重排序。
     
@@ -167,13 +167,13 @@ Doris中目前进行 Schema Change 的方式有三种:Sorted Schema Change,D
     ALTER TABLE site_visit DROP COLUMN city;
     ```
     
-2. Direct Schema Change: 无需重新排序,但是需要对数据做一次转换。例如修改列的类型,在稀疏索引中加一列等。
+2.2. Direct Schema Change: 无需重新排序,但是需要对数据做一次转换。例如修改列的类型,在稀疏索引中加一列等。
 
     ```
     ALTER TABLE site_visit MODIFY COLUMN username varchar(64);
     ```
     
-3. Linked Schema Change: 无需转换数据,直接完成。例如加列操作。
+2.3. Linked Schema Change: 无需转换数据,直接完成。例如加列操作。
     
     ```
     ALTER TABLE site_visit ADD COLUMN click bigint SUM default '0';
diff --git a/content/_sources/documentation/cn/getting-started/data-model-rollup.md.txt b/content/_sources/documentation/cn/getting-started/data-model-rollup.md.txt
index 6620393..733634d 100644
--- a/content/_sources/documentation/cn/getting-started/data-model-rollup.md.txt
+++ b/content/_sources/documentation/cn/getting-started/data-model-rollup.md.txt
@@ -61,15 +61,15 @@ Doris 的数据模型主要分为3类:
 ```
 CREATE TABLE IF NOT EXISTS example_db.expamle_tbl
 (
-	`user_id` LARGEINT NOT NULL COMMENT "用户id",
-	`date` DATE NOT NULL COMMENT "数据灌入日期时间",
-	`city` VARCHAR(20) COMMENT "用户所在城市",
-	`age` SMALLINT COMMENT "用户年龄",
-	`sex` TINYINT COMMENT "用户性别",
-	`last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "用户最后一次访问时间",
-	`cost` BIGINT SUM DEFAULT "0" COMMENT "用户总消费",
-	`max_dwell_time` INT MAX DEFAULT "0" COMMENT "用户最大停留时间",
-	`min_dwell_time` INT MIN DEFAULT "99999" COMMENT "用户最小停留时间",
+    `user_id` LARGEINT NOT NULL COMMENT "用户id",
+    `date` DATE NOT NULL COMMENT "数据灌入日期时间",
+    `city` VARCHAR(20) COMMENT "用户所在城市",
+    `age` SMALLINT COMMENT "用户年龄",
+    `sex` TINYINT COMMENT "用户性别",
+    `last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "用户最后一次访问时间",
+    `cost` BIGINT SUM DEFAULT "0" COMMENT "用户总消费",
+    `max_dwell_time` INT MAX DEFAULT "0" COMMENT "用户最大停留时间",
+    `min_dwell_time` INT MIN DEFAULT "99999" COMMENT "用户最小停留时间"
 )
 AGGREGATE KEY(`user_id`, `date`, `timestamp`, `city`, `age`, `sex`)
 ... /* 省略 Partition 和 Distribution 信息 */
@@ -130,7 +130,7 @@ AGGREGATE KEY(`user_id`, `date`, `timestamp`, `city`, `age`, `sex`)
 前5列没有变化,从第6列 `last_visit_date` 开始:
 
 * `2017-10-01 07:00:00`:因为 `last_visit_date` 列的聚合方式为 REPLACE,所以 `2017-10-01 07:00:00` 替换了 `2017-10-01 06:00:00` 保存了下来。
-	> 注:在同一个导入批次中的数据,对于 REPLACE 这种聚合方式,替换顺序不做保证。如在这个例子中,最终保存下来的,也有可能是 `2017-10-01 06:00:00`。而对于不同导入批次中的数据,可以保证,后一批次的数据会替换前一批次。
+    > 注:在同一个导入批次中的数据,对于 REPLACE 这种聚合方式,替换顺序不做保证。如在这个例子中,最终保存下来的,也有可能是 `2017-10-01 06:00:00`。而对于不同导入批次中的数据,可以保证,后一批次的数据会替换前一批次。
 
 * `35`:因为 `cost` 列的聚合类型为 SUM,所以由 20 + 15 累加获得 35。
 * `10`:因为 `max_dwell_time` 列的聚合类型为 MAX,所以 10 和 2 取最大值,获得 10。
@@ -245,14 +245,14 @@ AGGREGATE KEY(`user_id`, `date`, `timestamp`, `city`, `age`, `sex`)
 ```
 CREATE TABLE IF NOT EXISTS example_db.expamle_tbl
 (
-	`user_id` LARGEINT NOT NULL COMMENT "用户id",
-	`username` VARCHAR(50) NOT NULL COMMENT "用户昵称",
-	`city` VARCHAR(20) COMMENT "用户所在城市",
-	`age` SMALLINT COMMENT "用户年龄",
-	`sex` TINYINT COMMENT "用户性别",
-	`phone` LARGEINT COMMENT "用户电话",
-	`address` VARCHAR(500) COMMENT "用户地址",
-	`register_time` DATETIME COMMENT "用户注册时间"
+    `user_id` LARGEINT NOT NULL COMMENT "用户id",
+    `username` VARCHAR(50) NOT NULL COMMENT "用户昵称",
+    `city` VARCHAR(20) COMMENT "用户所在城市",
+    `age` SMALLINT COMMENT "用户年龄",
+    `sex` TINYINT COMMENT "用户性别",
+    `phone` LARGEINT COMMENT "用户电话",
+    `address` VARCHAR(500) COMMENT "用户地址",
+    `register_time` DATETIME COMMENT "用户注册时间"
 )
 UNIQUE KEY(`user_id`, `user_name`)
 ... /* 省略 Partition 和 Distribution 信息 */
@@ -277,14 +277,14 @@ UNIQUE KEY(`user_id`, `user_name`)
 ```
 CREATE TABLE IF NOT EXISTS example_db.expamle_tbl
 (
-	`user_id` LARGEINT NOT NULL COMMENT "用户id",
-	`username` VARCHAR(50) NOT NULL COMMENT "用户昵称",
-	`city` VARCHAR(20) REPLACE COMMENT "用户所在城市",
-	`age` SMALLINT REPLACE COMMENT "用户年龄",
-	`sex` TINYINT REPLACE COMMENT "用户性别",
-	`phone` LARGEINT REPLACE COMMENT "用户电话",
-	`address` VARCHAR(500) REPLACE COMMENT "用户地址",
-	`register_time` DATETIME REPLACE COMMENT "用户注册时间"
+    `user_id` LARGEINT NOT NULL COMMENT "用户id",
+    `username` VARCHAR(50) NOT NULL COMMENT "用户昵称",
+    `city` VARCHAR(20) REPLACE COMMENT "用户所在城市",
+    `age` SMALLINT REPLACE COMMENT "用户年龄",
+    `sex` TINYINT REPLACE COMMENT "用户性别",
+    `phone` LARGEINT REPLACE COMMENT "用户电话",
+    `address` VARCHAR(500) REPLACE COMMENT "用户地址",
+    `register_time` DATETIME REPLACE COMMENT "用户注册时间"
 )
 AGGREGATE KEY(`user_id`, `user_name`)
 ... /* 省略 Partition 和 Distribution 信息 */
@@ -311,12 +311,12 @@ AGGREGATE KEY(`user_id`, `user_name`)
 ```
 CREATE TABLE IF NOT EXISTS example_db.expamle_tbl
 (
-	`timestamp` DATETIME NOT NULL COMMENT "日志时间",
-	`type` INT NOT NULL COMMENT "日志类型",
-	`error_code` INT COMMENT "错误码",
-	`error_msg` VARCHAR(1024) COMMENT "错误详细信息",
-	`op_id` BIGINT COMMENT "负责人id",
-	`op_time` DATETIME COMMENT "处理时间"
+    `timestamp` DATETIME NOT NULL COMMENT "日志时间",
+    `type` INT NOT NULL COMMENT "日志类型",
+    `error_code` INT COMMENT "错误码",
+    `error_msg` VARCHAR(1024) COMMENT "错误详细信息",
+    `op_id` BIGINT COMMENT "负责人id",
+    `op_time` DATETIME COMMENT "处理时间"
 )
 DUPLICATE KEY(`timestamp`, `type`)
 ... /* 省略 Partition 和 Distribution 信息 */
diff --git a/content/_sources/documentation/cn/getting-started/data-partition.md.txt b/content/_sources/documentation/cn/getting-started/data-partition.md.txt
index af5815d..951d148 100644
--- a/content/_sources/documentation/cn/getting-started/data-partition.md.txt
+++ b/content/_sources/documentation/cn/getting-started/data-partition.md.txt
@@ -112,7 +112,7 @@ Doris 支持两层的数据划分。第一层是 Partition,仅支持 Range 的
     * 当不使用 Partition 建表时,系统会自动生成一个和表名同名的,全值范围的 Partition。该 Partition 对用户不可见,并且不可删改。
     * Partition 支持通过 `VALUES LESS THAN (...)` 仅指定上界,系统会将前一个分区的上界作为该分区的下界,生成一个左闭右开的区间。同时,也支持通过 `VALUES [...)` 同时指定上下界,生成一个左闭右开的区间。
 
-    * 通过 `VALUES [...)` 同是指定上下界比较容易理解。这里举例说明,当使用 `VALUES LESS THAN (...)` 语句进行分区的增删操作时,分区范围的变化情况:
+    * 通过 `VALUES [...)` 同时指定上下界比较容易理解。这里举例说明,当使用 `VALUES LESS THAN (...)` 语句进行分区的增删操作时,分区范围的变化情况:
     
         * 如上示例,当建表完成后,会自动生成如下3个分区:
 
@@ -274,14 +274,14 @@ PARTITION BY RANGE(`date`, `id`)
     
     当遇到这个错误是,通常是 BE 在创建数据分片时遇到了问题。可以参照以下步骤排查:
     
-    1. 在 fe.log 中,查找对应时间点的 `Failed to create partition` 日志。在该日志中,会出现一系列类似 `{10001-10010}` 字样的数字对儿。数字对儿的第一个数字表示 Backend ID,第二个数字表示 Tablet ID。如上这个数字对,表示 ID 为 10001 的 Backend 上,创建 ID 为 10010 的 Tablet 失败了。
+    1. 在 fe.log 中,查找对应时间点的 `Failed to create partition` 日志。在该日志中,会出现一系列类似 `{10001-10010}` 字样的数字对。数字对的第一个数字表示 Backend ID,第二个数字表示 Tablet ID。如上这个数字对,表示 ID 为 10001 的 Backend 上,创建 ID 为 10010 的 Tablet 失败了。
     2. 前往对应 Backend 的 be.INFO 日志,查找对应时间段内,tablet id 相关的日志,可以找到错误信息。
     3. 以下罗列一些常见的 tablet 创建失败错误,包括但不限于:
         * BE 没有收到相关 task,此时无法在 be.INFO 中找到 tablet id 相关日志。或者 BE 创建成功,但汇报失败。以上问题,请参阅 [部署与升级文档] 检查 FE 和 BE 的连通性。
         * 预分配内存失败。可能是表中一行的字节长度超过了 100KB。
         * `Too many open files`。打开的文件句柄数超过了 Linux 系统限制。需修改 Linux 系统的句柄数限制。
 
-    也可以通过在 fe.conf 中设置 `tablet_create_timeout_second=xxx` 来延长超时时间。默认是2秒。
+    如果创建数据分片时超时,也可以通过在 fe.conf 中设置 `tablet_create_timeout_second=xxx` 以及 `max_create_table_timeout_second=xxx` 来延长超时时间。其中 `tablet_create_timeout_second` 默认是1秒, `max_create_table_timeout_second` 默认是60秒,总体的超时时间为 min(tablet_create_timeout_second * replication_num, max_create_table_timeout_second)。
 
 3. 建表命令长时间不返回结果。
 
diff --git a/content/_sources/documentation/cn/getting-started/hit-the-rollup.md.txt b/content/_sources/documentation/cn/getting-started/hit-the-rollup.md.txt
index 211e597..ea9075c 100644
--- a/content/_sources/documentation/cn/getting-started/hit-the-rollup.md.txt
+++ b/content/_sources/documentation/cn/getting-started/hit-the-rollup.md.txt
@@ -124,9 +124,9 @@ rollup_index4(k4, k6, k5, k1, k2, k3, k7)
 
 能用的上前缀索引的列上的条件需要是 `=` `<` `>` `<=` `>=` `in` `between` 这些并且这些条件是并列的且关系使用 `and` 连接,对于`or`、`!=` 等这些不能命中,然后看以下查询:
 
-```
-SELECT * FROM test WHERE k1 = 1 AND k2 > 3;
-```
+
+`SELECT * FROM test WHERE k1 = 1 AND k2 > 3;`
+
 	
 有 k1 以及 k2 上的条件,检查只有 Base 的第一列含有条件里的 k1,所以匹配最长的前缀索引即 test,explain一下:
 
@@ -146,7 +146,7 @@ SELECT * FROM test WHERE k1 = 1 AND k2 > 3;
 
 再看以下查询:
 
-`SELECT * FROM test WHERE k4 =1 AND k5 > 3;`
+`SELECT * FROM test WHERE k4 = 1 AND k5 > 3;`
 	
 有 k4 以及 k5 的条件,检查 rollup_index3、rollup_index4 的第一列含有 k4,但是 rollup_index3 的第二列含有k5,所以匹配的前缀索引最长。
 
diff --git a/content/_sources/documentation/cn/installing/compilation.md.txt b/content/_sources/documentation/cn/installing/compilation.md.txt
index 13e6e8d..27535cb 100644
--- a/content/_sources/documentation/cn/installing/compilation.md.txt
+++ b/content/_sources/documentation/cn/installing/compilation.md.txt
@@ -42,7 +42,8 @@ under the License.
 | image version | commit id | release version |
 |---|---|---|
 | apachedoris/doris-dev:build-env | before [ff0dd0d](https://github.com/apache/incubator-doris/commit/ff0dd0d2daa588f18b6db56f947e813a56d8ec81) | 0.8.x, 0.9.x |
-| apachedoris/doris-dev:build-env-1.1 | [ff0dd0d](https://github.com/apache/incubator-doris/commit/ff0dd0d2daa588f18b6db56f947e813a56d8ec81) or later | 0.10.x or later |
+| apachedoris/doris-dev:build-env-1.1 | [ff0dd0d](https://github.com/apache/incubator-doris/commit/ff0dd0d2daa588f18b6db56f947e813a56d8ec81) ~ [4ef5a8c](https://github.com/apache/incubator-doris/commit/4ef5a8c8560351d7fff7ff8fd51c4c7a75e006a8) | 0.10.x, 0.11.x |
+| apachedoris/doris-dev:build-env-1.2 | [4ef5a8c](https://github.com/apache/incubator-doris/commit/4ef5a8c8560351d7fff7ff8fd51c4c7a75e006a8) or later | 0.12.x or later |
 
 2. 运行镜像
 
diff --git a/content/_sources/documentation/cn/installing/install-deploy.md.txt b/content/_sources/documentation/cn/installing/install-deploy.md.txt
index 87d7094..3317b53 100644
--- a/content/_sources/documentation/cn/installing/install-deploy.md.txt
+++ b/content/_sources/documentation/cn/installing/install-deploy.md.txt
@@ -84,7 +84,6 @@ Doris 各个实例直接通过网络进行通讯。以下表格展示了所有
 | 实例名称 | 端口名称 | 默认端口 | 通讯方向 | 说明 | 
 |---|---|---|---| ---|
 | BE | be_port | 9060 | FE --> BE | BE 上 thrift server 的端口,用于接收来自 FE 的请求 |
-| BE | be\_rpc_port | 9070 | BE <--> BE | BE 之间 rpc 使用的端口 |
 | BE | webserver_port | 8040 | BE <--> BE | BE 上的 http server 的端口 |
 | BE | heartbeat\_service_port | 9050 | FE --> BE | BE 上心跳服务端口(thrift),用于接收来自 FE 的心跳 |
 | BE | brpc\_port* | 8060 | FE<-->BE, BE <--> BE | BE 上的 brpc 端口,用于 BE 之间通讯 |
@@ -97,7 +96,6 @@ Doris 各个实例直接通过网络进行通讯。以下表格展示了所有
 > 注:  
 > 1. 当部署多个 FE 实例时,要保证 FE 的 http\_port 配置相同。  
 > 2. 部署前请确保各个端口在应有方向上的访问权限。
-> 3. brpc\_port 在 0.8.2 版本后替代了 be\_rpc_port
 
 #### IP 绑定
 
@@ -136,7 +134,7 @@ BROKER 当前没有,也不需要 priority\_networks 这个选项。Broker 的
 * 配置 FE
 
     1. 配置文件为 conf/fe.conf。其中注意:`meta_dir`:元数据存放位置。默认在 fe/palo-meta/ 下。需**手动创建**该目录。
-    2. fe.conf 中 JAVA_OPTS 默认 java 最大堆内存为 2GB,建议生产环境调整至 8G 以上。
+    2. fe.conf 中 JAVA_OPTS 默认 java 最大堆内存为 4GB,建议生产环境调整至 8G 以上。
 
 * 启动FE
 
@@ -313,7 +311,7 @@ DECOMMISSION 语句如下:
 > 3. 该命令**不一定执行成功**。比如剩余 BE 存储空间不足以容纳下线 BE 上的数据,或者剩余机器数量不满足最小副本数时,该命令都无法完成,并且 BE 会一直处于 isDecommission 为 true 的状态。  
 > 4. DECOMMISSION 的进度,可以通过 ```SHOW PROC '/backends';``` 中的 TabletNum 查看,如果正在进行,TabletNum 将不断减少。  
 > 5. 该操作可以通过:  
-> 		```CANCEL ALTER SYSTEM DECOMMISSION BACKEND "be_host:be_heartbeat_service_port";```  
+> 		```CANCEL DECOMMISSION BACKEND "be_host:be_heartbeat_service_port";```  
 > 	命令取消。取消后,该 BE 上的数据将维持当前剩余的数据量。后续 Doris 重新进行负载均衡
 
 **对于多租户部署环境下,BE 节点的扩容和缩容,请参阅 [多租户设计文档](../administrator-guide/operation/multi-tenant.md)。**
diff --git a/content/_sources/documentation/cn/internal/grouping_sets_design.md.txt b/content/_sources/documentation/cn/internal/grouping_sets_design.md.txt
new file mode 100644
index 0000000..9298b70
--- /dev/null
+++ b/content/_sources/documentation/cn/internal/grouping_sets_design.md.txt
@@ -0,0 +1,510 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# GROUPING SETS 设计文档
+
+## 1. GROUPING SETS 相关背景知识
+
+### 1.1 GROUPING SETS 子句
+
+GROUP BY GROUPING SETS 是对 GROUP BY 子句的扩展,它能够在一个 GROUP BY 子句中一次实现多个集合的分组。其结果等价于将多个相应 GROUP BY 子句进行 UNION 操作。
+
+特别地,一个空的子集意味着将所有的行聚集到一个分组。
+GROUP BY 子句是只含有一个元素的 GROUP BY GROUPING SETS 的特例。
+
+例如,GROUPING SETS 语句:
+
+```
+SELECT k1, k2, SUM( k3 ) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k1), (k2), ( ) );
+```
+
+其查询结果等价于:
+
+```
+SELECT k1, k2, SUM( k3 ) FROM t GROUP BY k1, k2
+UNION
+SELECT k1, null, SUM( k3 ) FROM t GROUP BY k1
+UNION
+SELECT null, k2, SUM( k3 ) FROM t GROUP BY k2
+UNION
+SELECT null, null, SUM( k3 ) FROM t
+```
+
+下面是一个实际数据的例子:
+
+```
+mysql> SELECT * FROM t;
++------+------+------+
+| k1   | k2   | k3   |
++------+------+------+
+| a    | A    |    1 |
+| a    | A    |    2 |
+| a    | B    |    1 |
+| a    | B    |    3 |
+| b    | A    |    1 |
+| b    | A    |    4 |
+| b    | B    |    1 |
+| b    | B    |    5 |
++------+------+------+
+8 rows in set (0.01 sec)
+
+mysql> SELECT k1, k2, SUM(k3) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k2), (k1), ( ) );
++------+------+-----------+
+| k1   | k2   | sum(`k3`) |
++------+------+-----------+
+| b    | B    |         6 |
+| a    | B    |         4 |
+| a    | A    |         3 |
+| b    | A    |         5 |
+| NULL | B    |        10 |
+| NULL | A    |         8 |
+| a    | NULL |         7 |
+| b    | NULL |        11 |
+| NULL | NULL |        18 |
++------+------+-----------+
+9 rows in set (0.06 sec)
+```
+
+### 1.2 ROLLUP 子句
+
+ROLLUP 是对 GROUPING SETS 的扩展。
+
+```
+SELECT a, b,c, SUM( d ) FROM tab1 GROUP BY ROLLUP(a,b,c)
+```
+
+这个 ROLLUP 等价于下面的 GROUPING SETS:
+
+```
+GROUPING SETS (
+(a,b,c),
+( a, b ),
+( a),
+( )
+)
+```
+
+### 1.3 CUBE 子句
+
+CUBE 也是对 GROUPING SETS 的扩展。
+
+```
+CUBE ( e1, e2, e3, ... )
+```
+
+其含义是 GROUPING SETS 后面列表中的所有子集。
+
+例如,CUBE ( a, b, c ) 等价于下面的 GROUPING SETS:
+
+```
+GROUPING SETS (
+( a, b, c ),
+( a, b ),
+( a,    c ),
+( a       ),
+(    b, c ),
+(    b    ),
+(       c ),
+(         )
+)
+```
+
+### 1.4 GROUPING 和 GROUPING_ID 函数
+当某一列没有参与分组统计时,它的值会显示为 NULL;但列本身也可能存在 NULL 值,因此需要一种方法来区分该 NULL 是由于没有参与分组统计,还是值本来就是 NULL。为此引入 GROUPING 和 GROUPING_ID 函数。
+GROUPING(column:Column) 函数用于区分分组后的某一列是普通分组列还是聚合列:如果是聚合列(即该列未包含在当前分组集合中),则返回 1;反之,返回 0。GROUPING() 只能有一个参数列。
+
+GROUPING_ID(column1, column2) 则按照参数中指定的列顺序(未指定顺序时按聚合时给定的集合元素顺序),计算出一个列列表的 bitmap 值:某一列如果包含在当前分组集合中,对应位为 0,否则为 1。GROUPING_ID() 函数返回该位向量对应的十进制值,
+比如 [0 1 0] -> 2。从下面第三个查询可以看到这种对应关系。
+
+例如,对于下面的表:
+
+```
+mysql> select * from t;
++------+------+------+
+| k1   | k2   | k3   |
++------+------+------+
+| a    | A    |    1 |
+| a    | A    |    2 |
+| a    | B    |    1 |
+| a    | B    |    3 |
+| b    | A    |    1 |
+| b    | A    |    4 |
+| b    | B    |    1 |
+| b    | B    |    5 |
++------+------+------+
+```
+
+grouping sets 的结果如下:
+
+```
+mysql> SELECT k1, k2, GROUPING(k1), GROUPING(k2), SUM(k3) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k2), (k1), ( ) );
++------+------+----------------+----------------+-----------+
+| k1   | k2   | grouping(`k1`) | grouping(`k2`) | sum(`k3`) |
++------+------+----------------+----------------+-----------+
+| a    | A    |              0 |              0 |         3 |
+| a    | B    |              0 |              0 |         4 |
+| a    | NULL |              0 |              1 |         7 |
+| b    | A    |              0 |              0 |         5 |
+| b    | B    |              0 |              0 |         6 |
+| b    | NULL |              0 |              1 |        11 |
+| NULL | A    |              1 |              0 |         8 |
+| NULL | B    |              1 |              0 |        10 |
+| NULL | NULL |              1 |              1 |        18 |
++------+------+----------------+----------------+-----------+
+9 rows in set (0.02 sec)
+
+mysql> SELECT k1, k2, GROUPING_ID(k1,k2), SUM(k3) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k2), (k1), ( ) );
++------+------+-------------------------+-----------+
+| k1   | k2   | grouping_id(`k1`, `k2`) | sum(`k3`) |
++------+------+-------------------------+-----------+
+| a    | A    |                       0 |         3 |
+| a    | B    |                       0 |         4 |
+| a    | NULL |                       1 |         7 |
+| b    | A    |                       0 |         5 |
+| b    | B    |                       0 |         6 |
+| b    | NULL |                       1 |        11 |
+| NULL | A    |                       2 |         8 |
+| NULL | B    |                       2 |        10 |
+| NULL | NULL |                       3 |        18 |
++------+------+-------------------------+-----------+
+9 rows in set (0.02 sec)
+
+mysql> SELECT k1, k2, grouping(k1), grouping(k2), GROUPING_ID(k1,k2), SUM(k4) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k2), (k1), ( ) ) order by k1, k2;
++------+------+----------------+----------------+-------------------------+-----------+
+| k1   | k2   | grouping(`k1`) | grouping(`k2`) | grouping_id(`k1`, `k2`) | sum(`k4`) |
++------+------+----------------+----------------+-------------------------+-----------+
+| a    | A    |              0 |              0 |                       0 |         3 |
+| a    | B    |              0 |              0 |                       0 |         4 |
+| a    | NULL |              0 |              1 |                       1 |         7 |
+| b    | A    |              0 |              0 |                       0 |         5 |
+| b    | B    |              0 |              0 |                       0 |         6 |
+| b    | NULL |              0 |              1 |                       1 |        11 |
+| NULL | A    |              1 |              0 |                       2 |         8 |
+| NULL | B    |              1 |              0 |                       2 |        10 |
+| NULL | NULL |              1 |              1 |                       3 |        18 |
++------+------+----------------+----------------+-------------------------+-----------+
+9 rows in set (0.02 sec)
+
+```
+
+### 1.5 GROUPING SETS 的组合与嵌套
+
+首先,一个 GROUP BY 子句本质上是一个 GROUPING SETS 的特例, 例如:
+
+```
+   GROUP BY a
+等同于
+   GROUP BY GROUPING SETS((a))
+同样地,
+   GROUP BY a,b,c
+等同于
+   GROUP BY GROUPING SETS((a,b,c))
+```
+
+同样的,CUBE 和 ROLLUP 也可以展开成 GROUPING SETS,因此 GROUP BY, CUBE, ROLLUP, GROUPING SETS 的各种组合和嵌套本质上就是 GROUPING SETS 的组合与嵌套。
+
+对于 GROUPING SETS 的嵌套,语义上等价于将嵌套内的语句直接写到外面。(参考:<https://www.brytlyt.com/documentation/data-manipulation-dml/grouping-sets-rollup-cube/>),其中写道:
+
+```
+The CUBE and ROLLUP constructs can be used either directly in the GROUP BY clause, or nested inside a GROUPING SETS clause. If one GROUPING SETS clause is nested inside another, the effect is the same as if all the elements of the inner clause had been written directly in the outer clause.
+```
+
+对于多个 GROUPING SETS 的组合列表,很多数据库认为是叉乘(cross product)的关系。
+
+例如:
+
+```
+GROUP BY a, CUBE (b, c), GROUPING SETS ((d), (e))
+
+等同于:
+
+GROUP BY GROUPING SETS (
+(a, b, c, d), (a, b, c, e),
+(a, b, d),    (a, b, e),
+(a, c, d),    (a, c, e),
+(a, d),       (a, e)
+)
+```
+
+对于 GROUPING SETS 的组合与嵌套,各个数据库支持不太一样。例如 snowflake 不支持任何的组合和嵌套。
+(<https://docs.snowflake.net/manuals/sql-reference/constructs/group-by.html>)
+
+Oracle 既支持组合,也支持嵌套。
+(<https://docs.oracle.com/cd/B19306_01/server.102/b14223/aggreg.htm#i1006842>)
+
+Presto 支持组合,但不支持嵌套。
+(<https://prestodb.github.io/docs/current/sql/select.html>)
+
+## 2. 设计目标
+
+从语法上支持 GROUPING SETS, ROLLUP 和 CUBE,实现上述 1.1、1.2、1.3、1.4 节所述的功能。
+
+对于 1.5 节 GROUPING SETS 的组合与嵌套,先不实现。
+
+具体语法列出如下:
+
+### 2.1 GROUPING SETS 语法
+
+```
+SELECT ...
+FROM ...
+[ ... ]
+GROUP BY GROUPING SETS ( groupSet [ , groupSet [ , ... ] ] )
+[ ... ]
+
+groupSet ::= { ( expr  [ , expr [ , ... ] ] )}
+
+<expr>
+各种表达式,包括列名.
+
+```
+
+### 2.2 ROLLUP 语法
+
+```
+SELECT ...
+FROM ...
+[ ... ]
+GROUP BY ROLLUP ( expr  [ , expr [ , ... ] ] )
+[ ... ]
+
+<expr>
+各种表达式,包括列名.
+
+```
+
+### 2.3 CUBE 语法
+
+```
+SELECT ...
+FROM ...
+[ ... ]
+GROUP BY CUBE ( expr  [ , expr [ , ... ] ] )
+[ ... ]
+
+<expr>
+各种表达式,包括列名.
+
+```
+
+## 3. 实现方案
+
+### 3.1 整体思路
+
+既然 GROUPING SETS 子句逻辑上等价于多个相应 GROUP BY 子句的 UNION,可以通过扩展输入行(此输入行已经是通过下推条件过滤和投影后的),在此基础上进行一个单一的 GROUP BY 操作来达到目的。
+
+关键是怎样扩展输入行呢?下面举例说明:
+
+例如,对应下面的语句:
+
+```
+SELECT a, b FROM src GROUP BY a, b GROUPING SETS ((a, b), (a), (b), ());
+
+```
+
+假定 src 表的数据如下:
+
+```
+1, 2
+3, 4
+
+```
+
+根据 GROUPING SETS 子句给出的列表,可以将输入行扩展为下面的 8 行(GROUPING SETS 集合数 * 行数),同时为每行生成对应的全列 GROUPING_ID 以及其他 grouping 函数的值:
+
+```
+1, 2       (GROUPING_ID: a, b -> 00->0)
+1, null    (GROUPING_ID: a, null -> 01 -> 1)
+null, 2    (GROUPING_ID: null, b -> 10 -> 2)
+null, null (GROUPING_ID: null, null -> 11 -> 3)
+
+3, 4       (GROUPING_ID: a, b -> 00 -> 0)
+3, null    (GROUPING_ID: a, null -> 01 -> 1)
+null, 4    (GROUPING_ID: null, b -> 10 -> 2)
+null, null (GROUPING_ID: null, null -> 11 -> 3)
+
+```
+
+然后,将上面的 8 行数据作为输入,对 a, b, GROUPING_ID 进行 GROUP BY 操作即可。
+
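+下面用一段简化的 Python 代码示意上述扩展与聚合的过程(仅为说明思路的草图,与 Doris 的实际实现无关,这里以计数作为聚合函数的例子):
+
+```
+from collections import defaultdict
+
+rows = [(1, 2), (3, 4)]                          # 原始输入行 (a, b)
+all_cols = ["a", "b"]
+grouping_sets = [("a", "b"), ("a",), ("b",), ()]
+
+# 1. 扩展输入行:每行按 GROUPING SETS 集合数复制,不在集合中的列置为 NULL,并计算 GROUPING_ID
+expanded = []
+for row in rows:
+    for gs in grouping_sets:
+        new_row = tuple(v if c in gs else None for c, v in zip(all_cols, row))
+        gid = int("".join("0" if c in gs else "1" for c in all_cols), 2)
+        expanded.append(new_row + (gid,))
+
+# 2. 对 (a, b, GROUPING_ID) 做一次普通的 GROUP BY
+agg = defaultdict(int)
+for a, b, gid in expanded:
+    agg[(a, b, gid)] += 1
+
+for key in sorted(agg, key=str):
+    print(key, agg[key])
+```
+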
+### 3.2 具体例子验证说明
+
+假设有一个 t 表,包含如下列和数据:
+
+```
+mysql> select * from t;
++------+------+------+
+| k1   | k2   | k3   |
++------+------+------+
+| a    | A    |    1 |
+| a    | A    |    2 |
+| a    | B    |    1 |
+| a    | B    |    3 |
+| b    | A    |    1 |
+| b    | A    |    4 |
+| b    | B    |    1 |
+| b    | B    |    5 |
++------+------+------+
+8 rows in set (0.01 sec)
+
+```
+
+对于如下的查询:
+
+```
+SELECT k1, k2, GROUPING_ID(k1,k2), SUM(k3) FROM t GROUP BY GROUPING SETS ((k1, k2), (k1), (k2), ());
+
+```
+
+首先,对输入行进行扩展,每行数据扩展成 4 行 (GROUPING SETS子句的集合数目),同时增加 GROUPING_ID() 列 :
+
+例如 a, A, 1 扩展后变成下面的 4 行:
+
+```
++------+------+------+-------------------------+
+| k1   | k2   | k3   | GROUPING_ID(`k1`, `k2`) |
++------+------+------+-------------------------+
+| a    | A    |    1 |                       0 |
+| a    | NULL |    1 |                       1 |
+| NULL | A    |    1 |                       2 |
+| NULL | NULL |    1 |                       3 |
++------+------+------+-------------------------+
+
+```
+
+最终, 全部扩展后的输入行如下(总共 32 行):
+
+```
++------+------+------+-------------------------+
+| k1   | k2   | k3   | GROUPING_ID(`k1`, `k2`) |
++------+------+------+-------------------------+
+| a    | A    |    1 |                       0 |
+| a    | A    |    2 |                       0 |
+| a    | B    |    1 |                       0 |
+| a    | B    |    3 |                       0 |
+| b    | A    |    1 |                       0 |
+| b    | A    |    4 |                       0 |
+| b    | B    |    1 |                       0 |
+| b    | B    |    5 |                       0 |
+| a    | NULL |    1 |                       1 |
+| a    | NULL |    1 |                       1 |
+| a    | NULL |    2 |                       1 |
+| a    | NULL |    3 |                       1 |
+| b    | NULL |    1 |                       1 |
+| b    | NULL |    1 |                       1 |
+| b    | NULL |    4 |                       1 |
+| b    | NULL |    5 |                       1 |
+| NULL | A    |    1 |                       2 |
+| NULL | A    |    1 |                       2 |
+| NULL | A    |    2 |                       2 |
+| NULL | A    |    4 |                       2 |
+| NULL | B    |    1 |                       2 |
+| NULL | B    |    1 |                       2 |
+| NULL | B    |    3 |                       2 |
+| NULL | B    |    5 |                       2 |
+| NULL | NULL |    1 |                       3 |
+| NULL | NULL |    1 |                       3 |
+| NULL | NULL |    1 |                       3 |
+| NULL | NULL |    1 |                       3 |
+| NULL | NULL |    2 |                       3 |
+| NULL | NULL |    3 |                       3 |
+| NULL | NULL |    4 |                       3 |
+| NULL | NULL |    5 |                       3 |
++------+------+------+-------------------------+
+32 rows in set.
+
+```
+
+现在对k1, k2, GROUPING_ID(`k1`, `k2`) 进行 GROUP BY:
+
+```
++------+------+-------------------------+-----------+
+| k1   | k2   | grouping_id(`k1`, `k2`) | sum(`k3`) |
++------+------+-------------------------+-----------+
+| a    | A    |                       0 |         3 |
+| a    | B    |                       0 |         4 |
+| a    | NULL |                       1 |         7 |
+| b    | A    |                       0 |         5 |
+| b    | B    |                       0 |         6 |
+| b    | NULL |                       1 |        11 |
+| NULL | A    |                       2 |         8 |
+| NULL | B    |                       2 |        10 |
+| NULL | NULL |                       3 |        18 |
++------+------+-------------------------+-----------+
+9 rows in set (0.02 sec)
+
+```
+
+可以看到,其结果与对 GROUPING SETS 子句后每个子集进行 GROUP BY 后再进行 UNION 的结果一致。
+
+```
+select k1, k2, sum(k3) from t group by k1, k2
+UNION ALL
+select NULL, k2, sum(k3) from t group by k2
+UNION ALL
+select k1, NULL, sum(k3) from t group by k1
+UNION ALL
+select NULL, NULL, sum(k3) from t;
+
++------+------+-----------+
+| k1   | k2   | sum(`k3`) |
++------+------+-----------+
+| b    | B    |         6 |
+| b    | A    |         5 |
+| a    | A    |         3 |
+| a    | B    |         4 |
+| a    | NULL |         7 |
+| b    | NULL |        11 |
+| NULL | B    |        10 |
+| NULL | A    |         8 |
+| NULL | NULL |        18 |
++------+------+-----------+
+9 rows in set (0.06 sec)
+
+```
+
+### 3.3 FE 规划阶段
+
+#### 3.3.1 主要任务
+
+1. 引入 GroupByClause 类,封装 Group By 相关信息,替换原有的 groupingExprs.
+2. 增加 Grouping Sets, Cube 和 RollUp 的语法支持和语法检查、错误处理和错误信息;
+3. 在 SelectStmt 类中增加 GroupByClause 成员;
+4. 引入 GroupingFunctionCallExpr 类,封装grouping 和grouping_id 函数调用
+5. 引入 VirtualSlot 类,封装grouping,grouping_id  生成的虚拟列和实际列的对应关系
+6. 增加虚拟列 GROUPING_ID 和其他grouping,grouping_id 函数对应的虚拟列,并将此列加入到原有的 groupingExprs 表达式列表中;
+7. 增加一个 PlanNode,考虑更通用的功能,命名为 RepeatNode。对于 GroupingSets 的聚合,在执行计划中插入 RepeatNode。
+
+#### 3.3.2 Tuple
+
+在 GroupByClause 类中为了将 GROUPING_ID 加到 groupingExprs 表达式列表中,需要创建 virtual SlotRef, 相应的,需要对这个 slot 创建一个 tuple, 叫 GROUPING_ID Tuple。
+
+对于 RepeatNode 这个执行计划,其输入是子节点的所有 tuple,输出的 tuple 除了 repeat 子节点的数据外,还需要填写 GROUPING_ID 和其他 grouping、grouping_id 函数对应的虚拟列。
+
+
+### 3.4 BE 查询执行阶段
+
+主要任务:
+
+1. 通过 RepeatNode 的执行类,增加扩展输入行的逻辑,其功能是在聚合之前将原有数据进行 repeat:对每行增加一列 GROUPING_ID, 然后按照 GroupingSets 中的集合数进行 repeat,并对对应列置为 null。根据grouping list设置新增虚拟列的值
+2. 实现 grouping_id() 和grouping() 函数。
+
+
+
+
diff --git a/content/_sources/documentation/cn/internal/metadata-design.md.txt b/content/_sources/documentation/cn/internal/metadata-design.md.txt
index a6a9f7e..c6cf52a 100644
--- a/content/_sources/documentation/cn/internal/metadata-design.md.txt
+++ b/content/_sources/documentation/cn/internal/metadata-design.md.txt
@@ -82,7 +82,7 @@ Doris 的元数据是全内存的。每个 FE 内存中,都维护一个完整
 3. `image/` 目录下为 image 文件的存放目录。
 
 	* 	`image.[logid]` 是最新的 image 文件。后缀 `logid` 表明 image 所包含的最后一条日志的 id。
-	*  `image.ckpt` 是正在写入的 image 文件,如果写入成功,会重命名为 `image.[logid]`,并替换掉就的 image 文件。
+	*  `image.ckpt` 是正在写入的 image 文件,如果写入成功,会重命名为 `image.[logid]`,并替换掉旧的 image 文件。
 	*  `VERSION` 文件中记录着 `cluster_id`。`cluster_id` 唯一标识一个 Doris 集群。是在 leader 第一次启动时随机生成的一个 32 位整型。也可以通过 fe 配置项 `cluster_id` 来指定一个 cluster id。
 	*  `ROLE` 文件中记录的 FE 自身的角色。只有 `FOLLOWER` 和 `OBSERVER` 两种。其中 `FOLLOWER` 表示 FE 为一个可选举的节点。(注意:即使是 leader 节点,其角色也为 `FOLLOWER`)
 
@@ -106,7 +106,7 @@ Doris 的元数据是全内存的。每个 FE 内存中,都维护一个完整
 
 1. 用户可以使用 mysql 连接任意一个 FE 节点进行元数据的读写访问。如果连接的是 non-leader 节点,则该节点会将写操作转发给 leader 节点。leader 写成功后,会返回一个 leader 当前最新的 log id。之后,non-leader 节点会等待自身回放的 log id 大于回传的 log id 后,才将命令成功的消息返回给客户端。这种方式保证了任意 FE 节点的 Read-Your-Write 语义。
 
-	> 注:一些非写操作,也会转发给 leader 执行。比如 `SHOW LOAD` 操作。因为这些命令通常需要读取一些作业的中间状态,而这些中间状态是不写 bdbje 的,因此 non-leader 节点的内存中,是没有这些中间状态的。(FE 直接的元数据同步完全依赖 bdbje 的日志回放,如果一个元数据修改操作不写 bdbje 日志,则在其他 non-leader 节点中是看不到该操作修改后的结果的。)
+	> 注:一些非写操作,也会转发给 leader 执行。比如 `SHOW LOAD` 操作。因为这些命令通常需要读取一些作业的中间状态,而这些中间状态是不写 bdbje 的,因此 non-leader 节点的内存中,是没有这些中间状态的。(FE 之间的元数据同步完全依赖 bdbje 的日志回放,如果一个元数据修改操作不写 bdbje 日志,则在其他 non-leader 节点中是看不到该操作修改后的结果的。)
 
 2. leader 节点会启动一个 TimePrinter 线程。该线程会定期向 bdbje 中写入一个当前时间的 key-value 条目。其余 non-leader 节点通过回放这条日志,读取日志中记录的时间,和本地时间进行比较,如果发现和本地时间的落后大于指定的阈值(配置项:`meta_delay_toleration_second`。写入间隔为该配置项的一半),则该节点会处于**不可读**的状态。此机制解决了 non-leader 节点在长时间和 leader 失联后,仍然提供过期的元数据服务的问题。
 
diff --git a/content/_sources/documentation/cn/internal/spark_load.md.txt b/content/_sources/documentation/cn/internal/spark_load.md.txt
new file mode 100644
index 0000000..021b02b
--- /dev/null
+++ b/content/_sources/documentation/cn/internal/spark_load.md.txt
@@ -0,0 +1,205 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Doris支持spark导入设计文档
+
+## 背景
+
+Doris现在支持Broker load/routine load/stream load/mini batch load等多种导入方式。
+spark load 主要用于解决初次迁移、大量数据迁移到 Doris 的场景,用于提升数据导入的速度。
+
+## 名词解释
+
+* FE:Frontend,即 Palo 的前端节点。主要负责接收和返回客户端请求、元数据以及集群管理、查询计划生成等工作。
+* BE:Backend,即 Palo 的后端节点。主要负责数据存储与管理、查询计划执行等工作。
+* Tablet: 一个palo table的水平分片称为tablet。
+* Dpp:Data preprocessing,数据预处理模块,通过外部计算资源(Hadoop、Spark)完成对导入数据预处理,包括转化、清洗、分区、排序和聚合等。
+
+## 设计
+
+### 目标
+
+Doris中现有的导入方式中,针对百G级别以上的数据的批量导入支持不是很好,功能上需要修改很多配置,而且可能无法完成导入,性能上会比较慢,并且由于没有读写分离,需要占用较多的cpu等资源。而这种大数据量导入会在用户迁移的时候遇到,所以需要实现基于spark集群的导入功能,利用spark集群的并发能力,完成导入时的ETL计算,排序、聚合等等,满足用户大数据量导入需求,降低用户导入时间和迁移成本。
+
+在Spark导入中,需要考虑支持多种spark部署模式,设计上需要兼容多种部署方式,可以考虑先实现yarn集群的部署模式;同时,由于用户数据格式多种多样,需要支持包括csv、parquet、orc等多种格式的数据文件。
+
+### 实现方案
+
+在设计实现 spark 导入之前,有必要先了解一下现有的导入框架,可以参考《Doris Broker导入实现解析》。
+
+#### 方案1
+
+参考现有的导入框架和原有适用于百度内部hadoop集群的hadoop导入方式的实现,为了最大程度复用现有的导入框架,降低开发的难度,整体的方案如下:
+
+用户的导入语句经过语法和语意分析之后,生成LoadStmt,LoadStmt中增加一个isSparkLoad标识字段,如果为true,就会创建出SparkLoadJob,跟BrokerLoadJob类似,会通过状态机机制,实现Job的执行,在PENDING,会创建SparkLoadPendingTask,然后在LOADING阶段还是创建LoadLoadingTask,进行数据导入。在BE中,复用现有的计划执行框架,执行导入计划。
+
+实现Spark导入主要需要考虑以下几点:
+
+##### 语法
+
+这块主要考虑用户习惯,导入语句格式上尽量保持跟broker导入语句相似。下面是一个方案:
+
+```
+		LOAD LABEL example_db.label1
+        (
+        DATA INFILE("hdfs://hdfs_host:hdfs_port/user/palo/data/input/file")
+		NEGATIVE
+        INTO TABLE `my_table`
+		PARTITION (p1, p2)
+		COLUMNS TERMINATED BY ","
+		columns(k1,k2,k3,v1,v2)
+		set (
+			v3 = v1 + v2,
+			k4 = hll_hash(k2)
+		)
+		where k1 > 20
+        )
+		with spark.cluster_name
+        PROPERTIES
+        (
+        "spark.master" = "yarn",
+		"spark.executor.cores" = "5",
+		"spark.executor.memory" = "10g",
+		"yarn.resourcemanager.address" = "xxx.tc:8032",
+        "max_filter_ratio" = "0.1"
+        );
+```
+其中spark.cluster_name为用户导入使用的Spark集群名,可以通过SET PROPERTY来设置,可参考原来Hadoop集群的设置。
+property中的Spark集群设置会覆盖spark.cluster_name中对应的内容。
+各个property的含义如下:
+- spark.master:表示spark集群部署模式,支持包括yarn/standalone/local/k8s,预计先实现yarn的支持,并且使用yarn-cluster模式(yarn-client模式一般用于交互式的场景)。
+- spark.executor.cores: executor的cpu个数
+- spark.executor.memory: executor的内存大小
+- yarn.resourcemanager.address:指定yarn的resourcemanager地址
+- max_filter_ratio:指定最大过滤比例阈值
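+
+上文提到 spark.cluster_name 可以通过 SET PROPERTY 预先配置。下面是一个假设性的示意写法,其中 `load_cluster.spark0.*` 这类属性名并非已有实现,仅用来说明配置思路,实际键名以最终实现为准:
+
+```
+-- 假设示例:为用户 jack 预先配置名为 spark0 的 Spark 集群,属性名仅作示意
+SET PROPERTY FOR 'jack' 'load_cluster.spark0.spark.master' = 'yarn';
+SET PROPERTY FOR 'jack' 'load_cluster.spark0.yarn.resourcemanager.address' = 'xxx.tc:8032';
+```
+
+这样,导入语句中即可通过 with spark.spark0 引用该集群,PROPERTIES 中再按需覆盖个别参数。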
+
+##### SparkLoadJob
+
+用户发送spark load语句,经过parse之后,会创建SparkLoadJob。其执行流程如下:
+
+```
+SparkLoadJob:
+         +-------+-------+
+         |    PENDING    |-----------------|
+         +-------+-------+                 |
+				 | SparkLoadPendingTask    |
+                 v                         |
+         +-------+-------+                 |
+         |    LOADING    |-----------------|
+         +-------+-------+                 |
+				 | LoadLoadingTask         |
+                 v                         |
+         +-------+-------+                 |
+         |  COMMITTED    |-----------------|
+         +-------+-------+                 |
+				 |                         |
+                 v                         v  
+         +-------+-------+         +-------+-------+     
+         |   FINISHED    |         |   CANCELLED   |
+         +-------+-------+         +-------+-------+
+				 |                         Λ
+                 +-------------------------+
+```
+上图为SparkLoadJob的执行流程。
+
+##### SparkLoadPendingTask
+SparkLoadPendingTask主要用来提交spark etl作业到spark集群中。由于spark支持不同部署模型(localhost, standalone, yarn, k8s),所以需要抽象一个通用的接口SparkEtlJob,实现SparkEtl的功能,主要接口包括:
+- 提交spark etl任务
+- 取消spark etl的任务
+- 获取spark etl任务状态的接口
+
+大体接口如下:
+```
+class SparkEtlJob {
+	// 提交spark etl作业
+	// 返回JobId
+	String submitJob(TBrokerScanRangeParams params);
+
+	// 取消作业,用于支持用户cancel导入作业
+	boolean cancelJob(String jobId);
+
+	// 获取作业状态,用于判断是否已经完成
+	JobStatus getJobStatus(String jobId);
+
+	private List<DataDescription> dataDescriptions;
+}
+```
+可以实现不同的子类,来实现对不同集群部署模式的支持。可以实现SparkEtlJobForYarn用于支持yarn集群的spark导入作业。具体来说上述接口中JobId就是Yarn集群的appid,如何获取appid?一个方案是通过spark-submit客户端提交spark job,然后分析标准错误中的输出,通过文本匹配获取appid。
+
+这里需要参考hadoop dpp作业的经验,就是需要考虑任务运行可能因为数据量、集群队列等原因,会达到并发导入作业个数限制,导致后续任务提交失败,这块需要考虑一下任务堆积的问题。一个方案是可以单独设置spark load job并发数限制,并且针对每个用户提供一个并发数的限制,这样各个用户之间的作业可以不用相互干扰,提升用户体验。
+
+spark任务执行的事情,包括以下几个关键点:
+1. 类型转化(extraction/Transformation)
+
+	将源文件字段转成具体列类型(判断字段是否合法,进行函数计算等等)
+2. 函数计算(Transformation),包括negative计算
+	
+	完成用户指定的列函数的计算。函数列表:"strftime","time_format","alignment_timestamp","default_value","md5sum","replace_value","now","hll_hash","substitute"
+3. Columns from path的提取
+4. 进行where条件的过滤
+5. 进行分区和分桶
+6. 排序和预聚合
+
+	因为在OlapTableSink过程中会进行排序和聚合,逻辑上可以不需要进行排序和聚合,但是排序和预聚合可以提升在BE端执行导入的效率。**如果在spark etl作业中进行排序和聚合,那么在BE执行导入的时候可以省略这个步骤。**这块可以依据后续测试的情况进行调整。目前看,可以先在etl作业中进行排序。
+	还有一个需要考虑的就是如何支持bitmap类型中的全局字典,string类型的bitmap列需要依赖全局字典。
+	为了告诉下游etl作业是否已经完成排序和聚合,可以在作业完成的时候生成一个job.json的描述文件,里面包含如下属性:
+
+	```
+	{
+		"is_segment_file" : "false",
+		"is_sort" : "true",
+		"is_agg" : "true",
+		"is_agg" : "true"
+	```
+	其中:
+		is_sort表示是否排序
+		is_agg表示是否聚合
+		is_segment_file表示是否生成的是segment文件
+
+7. 现在rollup数据的计算都是基于base表,需要考虑能够根据index之间的层级关系,优化rollup数据的生成。
+
+这里面相对比较复杂一点就是列的表达式计算的支持。
+
+最后,spark load作业完成之后,产出的文件存储格式可以支持csv、parquet、orc,从存储效率上来说,建议默认为parquet。
+
+##### LoadLoadingTask
+	
+LoadLoadingTask可以复用现有的逻辑,但是有一个地方跟BrokerLoadJob不一样:经过SparkEtlTask处理过后的数据文件已经完成列映射、函数计算、负导入、过滤、聚合等操作,这个时候LoadLoadingTask就不用再进行这些操作,只需要进行简单的列映射和类型转化。
+
+##### BE导入任务执行
+
+这块可以完全复用现有的导入框架,应该不需要做改动。
+
+#### 方案2
+
+方案1可以最大限度的复用现有的导入框架,能够快速实现支持大数据量导入的功能。但是存在以下问题,就是经过spark etl处理之后的数据其实已经按照tablet划分好了,但是现有的Broker导入框架还是会对流式读取的数据进行分区和bucket计算,然后经过序列化通过rpc发送到对应的目标BE的机器,有一次序列化和网络IO的开销。 方案2是在SparkEtlJob生成数据的时候,直接生成doris的存储格式Segment文件,然后三个副本需要通过类似clone机制的方式,通过add_rowset接口,进行文件的导入。这种方案具体不一样的地方如下:
+
+1. 需要在生成的文件中添加tabletid后缀
+2. 在SparkLoadPendingTask类中增加一个接口 `protected Map<Long, Pair<String, Long>> getFilePathMap()`,用于返回tabletid和文件之间的映射关系。
+3. 在BE rpc服务中增加一个spark_push接口,实现拉取源端etl转化之后的文件到本地(可以通过broker读取),然后通过add_rowset接口完成数据的导入,类似克隆的逻辑
+4. 生成新的导入任务SparkLoadLoadingTask,该SparkLoadLoadingTask主要功能就是读取job.json文件,解析其中的属性,并将属性作为rpc参数,调用spark_push接口,向tablet所在的后端BE发送导入请求,进行数据的导入。BE中spark_push根据is_segment_file来决定如何处理,如果为true,则直接下载segment文件,进行add rowset;如果为false,则走pusher逻辑,实现数据导入。
+
+该方案将segment文件的生成也统一放到了spark集群中进行,能够极大的降低doris集群的负载,效率应该会比较高。但是方案2需要依赖于将底层rowset和segment v2的接口打包成独立的so文件,并且通过spark调用该接口来将数据转化成segment文件。
+
+## 总结
+
+综合以上两种方案,第一种方案的改动量比较小,但是BE做了重复的工作。第二种方案可以参考原有的Hadoop导入框架。所以,计划分两步完成spark load的工作。
+
+第一步,按照方案2,实现通过Spark完成导入数据的分区排序聚合,生成parquet格式文件。然后走Hadoop pusher的流程由BE转化格式。
+
+第二步,封装segment写入的库,直接生成Doris底层的格式,并且增加一个rpc接口,实现类似clone的导入逻辑。
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/bitmap.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/bitmap.md.txt
index be6a3b0..8d42886 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/bitmap.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/bitmap.md.txt
@@ -17,74 +17,123 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-#BITMAP
+# BITMAP
 
-## description
-### Syntax
+## Create table
 
-`TO_BITMAP(expr)` : 将TINYINT,SMALLINT和INT类型的列转为Bitmap
+建表时需要使用聚合模型,数据类型是 bitmap , 聚合函数是 bitmap_union
 
-`BITMAP_UNION(expr)` : 计算两个Bitmap的并集,返回值是序列化后的Bitmap值
+```
+CREATE TABLE `pv_bitmap` (
+  `dt` int(11) NULL COMMENT "",
+  `page` varchar(10) NULL COMMENT "",
+  `user_id` bitmap BITMAP_UNION NULL COMMENT ""
+) ENGINE=OLAP
+AGGREGATE KEY(`dt`, `page`)
+COMMENT "OLAP"
+DISTRIBUTED BY HASH(`dt`) BUCKETS 2;
+```
+注:当数据量很大时,最好为高频率的 bitmap_union 查询建立对应的 rollup 表
 
-`BITMAP_COUNT(expr)` : 计算Bitmap中不同值的个数
+```
+ALTER TABLE pv_bitmap ADD ROLLUP pv (page, user_id);
+```
 
-`BITMAP_UNION_INT(expr)` : 计算TINYINT,SMALLINT和INT类型的列中不同值的个数,返回值和
-COUNT(DISTINCT expr)相同
+## Data Load
 
-`BITMAP_EMPTY()`: 生成空Bitmap列,用于insert或导入的时填充默认值
+`TO_BITMAP(expr)` : 将 0 ~ 18446744073709551615 的 unsigned bigint 转为 bitmap
 
+`BITMAP_EMPTY()`: 生成空 bitmap 列,用于 insert 或导入时填充默认值
 
-注意:
+`BITMAP_HASH(expr)`: 将任意类型的列通过 Hash 的方式转为 bitmap
 
-	1. TO_BITMAP 函数输入的类型必须是TINYINT,SMALLINT,INT
-	2. BITMAP_UNION函数的参数目前仅支持: 
-		- 聚合模型中聚合类型为BITMAP_UNION的列
-		- TO_BITMAP 函数
+### Stream Load
 
-## example
+``` 
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,user_id, user_id=to_bitmap(user_id)"   http://host:8410/api/test/testDb/_stream_load
+```
 
+``` 
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,user_id, user_id=bitmap_hash(user_id)"   http://host:8410/api/test/testDb/_stream_load
 ```
-CREATE TABLE `bitmap_udaf` (
-  `id` int(11) NULL COMMENT "",
-  `id2` int(11)
-) ENGINE=OLAP
-DUPLICATE KEY(`id`)
-DISTRIBUTED BY HASH(`id`) BUCKETS 10;
 
+``` 
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,user_id, user_id=bitmap_empty()"   http://host:8410/api/test/testDb/_stream_load
+```
 
-mysql> select bitmap_count(bitmap_union(to_bitmap(id2))) from bitmap_udaf;
-+----------------------------------------------+
-| bitmap_count(bitmap_union(to_bitmap(`id2`))) |
-+----------------------------------------------+
-|                                            6 |
-+----------------------------------------------+
+### Insert Into
 
-mysql> select bitmap_union_int (id2) from bitmap_udaf;
-+-------------------------+
-| bitmap_union_int(`id2`) |
-+-------------------------+
-|                       6 |
-+-------------------------+
+id2 的列类型是 bitmap
+```
+insert into bitmap_table1 select id, id2 from bitmap_table2;
+```
 
+id2 的列类型是 bitmap
+```
+INSERT INTO bitmap_table1 (id, id2) VALUES (1001, to_bitmap(1000)), (1001, to_bitmap(2000));
+```
 
+id2 的列类型是 bitmap
+```
+insert into bitmap_table1 select id, bitmap_union(id2) from bitmap_table2 group by id;
+```
 
-CREATE TABLE `bitmap_test` (
-  `id` int(11) NULL COMMENT "",
-  `id2` bitmap bitmap_union NULL
-) ENGINE=OLAP
-AGGREGATE KEY(`id`)
-DISTRIBUTED BY HASH(`id`) BUCKETS 10;
+id2 的列类型是 int
+```
+insert into bitmap_table1 select id, to_bitmap(id2) from table;
+```
+
+id_string 的列类型是 String
+```
+insert into bitmap_table1 select id, bitmap_hash(id_string) from table;
+```
 
 
-mysql> select bitmap_count(bitmap_union(id2)) from bitmap_test;
-+-----------------------------------+
-| bitmap_count(bitmap_union(`id2`)) |
-+-----------------------------------+
-|                                 8 |
-+-----------------------------------+
+## Data Query
+### Syntax
+
+
+`BITMAP_UNION(expr)` : 计算输入 Bitmap 的并集,返回新的bitmap
+
+`BITMAP_UNION_COUNT(expr)`: 计算输入 Bitmap 的并集,返回其基数,和 BITMAP_COUNT(BITMAP_UNION(expr)) 等价。目前推荐优先使用 BITMAP_UNION_COUNT ,其性能优于 BITMAP_COUNT(BITMAP_UNION(expr))
+
+`BITMAP_UNION_INT(expr)` : 计算 TINYINT,SMALLINT 和 INT 类型的列中不同值的个数,返回值和
+COUNT(DISTINCT expr) 相同
+
+`INTERSECT_COUNT(bitmap_column_to_count, filter_column, filter_values ...)` : 计算满足
+filter_column 过滤条件的多个 bitmap 的交集的基数值。
+bitmap_column_to_count 是 bitmap 类型的列,filter_column 是变化的维度列,filter_values 是维度取值列表
+
+
+### Example
+
+下面的 SQL 以上面的 pv_bitmap table 为例:
+
+计算 user_id 的去重值:
 
 ```
+select bitmap_union_count(user_id) from pv_bitmap;
+
+select bitmap_count(bitmap_union(user_id)) from pv_bitmap;
+```
+
+计算 id 的去重值:
+
+```
+select bitmap_union_int(id) from pv_bitmap;
+```
+
+计算 user_id 的留存:
+
+```
+select intersect_count(user_id, page, 'meituan') as meituan_uv,
+intersect_count(user_id, page, 'waimai') as waimai_uv,
+intersect_count(user_id, page, 'meituan', 'waimai') as retention -- 在 'meituan' 和 'waimai' 两个页面都出现的用户数
+from pv_bitmap
+where page in ('meituan', 'waimai');
+```
+
 
 ## keyword
 
-BITMAP,BITMAP_COUNT,BITMAP_EMPTY,BITMAP_UNION,BITMAP_UNION_INT,TO_BITMAP
+BITMAP,BITMAP_COUNT,BITMAP_EMPTY,BITMAP_UNION,BITMAP_UNION_INT,TO_BITMAP,BITMAP_UNION_COUNT,INTERSECT_COUNT
\ No newline at end of file
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_and.md.txt
similarity index 68%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_and.md.txt
index 7c889d4..bafe178 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_and.md.txt
@@ -17,24 +17,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_and
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BITMAP BITMAP_AND(BITMAP lhs, BITMAP rhs)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+计算两个输入bitmap的交集,返回新的bitmap.
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_count(bitmap_and(to_bitmap(1), to_bitmap(2))) cnt;
++------+
+| cnt  |
++------+
+|    0 |
++------+
+
+mysql> select bitmap_count(bitmap_and(to_bitmap(1), to_bitmap(1))) cnt;
++------+
+| cnt  |
++------+
+|    1 |
++------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_AND,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_contains.md.txt
similarity index 68%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_contains.md.txt
index 2da13c8..ed10b6b 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_contains.md.txt
@@ -17,25 +17,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
+# bitmap_contains
 ## description
 ### Syntax
 
-`INT DAY(DATETIME date)`
+`BOOLEAN BITMAP_CONTAINS(BITMAP bitmap, BIGINT input)`
 
-
-获得日期中的天信息,返回值范围从1-31。
-
-参数为Date或者Datetime类型
+计算输入值是否在Bitmap列中,返回值是Boolean值.
 
 ## example
 
 ```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
-##keyword
-DAY
+mysql> select bitmap_contains(to_bitmap(1),2) cnt;
++------+
+| cnt  |
++------+
+|    0 |
++------+
+
+mysql> select bitmap_contains(to_bitmap(1),1) cnt;
++------+
+| cnt  |
++------+
+|    1 |
++------+
+```
+
+## keyword
+
+    BITMAP_CONTAINS,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_empty.md.txt
similarity index 60%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_empty.md.txt
index 7c889d4..3d329a5 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_empty.md.txt
@@ -17,24 +17,29 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_empty
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BITMAP BITMAP_EMPTY()`
 
+返回一个空bitmap。主要用于 insert 或 stream load 时填充默认值。例如
 
-用于返回满足要求的行的数目,或者非NULL行的数目
+```
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,v1,v2=bitmap_empty()"   http://host:8410/api/test/testDb/_stream_load
+```
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_count(bitmap_empty());
++------------------------------+
+| bitmap_count(bitmap_empty()) |
++------------------------------+
+|                            0 |
++------------------------------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_EMPTY,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_from_string.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_from_string.md.txt
new file mode 100644
index 0000000..261da29
--- /dev/null
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_from_string.md.txt
@@ -0,0 +1,56 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# bitmap_from_string
+
+## description
+### Syntax
+
+`BITMAP BITMAP_FROM_STRING(VARCHAR input)`
+
+将一个字符串转化为一个BITMAP,字符串是由逗号分隔的一组UINT32数字组成.
+比如"0, 1, 2"字符串会转化为一个Bitmap,其中的第0, 1, 2位被设置.
+当输入字段不合法时,返回NULL
+
+## example
+
+```
+mysql> select bitmap_to_string(bitmap_empty());
++----------------------------------+
+| bitmap_to_string(bitmap_empty()) |
++----------------------------------+
+|                                  |
++----------------------------------+
+
+mysql> select bitmap_to_string(bitmap_from_string("0, 1, 2"));
++-------------------------------------------------+
+| bitmap_to_string(bitmap_from_string('0, 1, 2')) |
++-------------------------------------------------+
+| 0,1,2                                           |
++-------------------------------------------------+
+
+mysql> select bitmap_from_string("-1, 0, 1, 2");
++-----------------------------------+
+| bitmap_from_string('-1, 0, 1, 2') |
++-----------------------------------+
+| NULL                              |
++-----------------------------------+
+```
+
+## keyword
+
+    BITMAP_FROM_STRING,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_has_any.md.txt
similarity index 67%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_has_any.md.txt
index 7c889d4..7482107 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_has_any.md.txt
@@ -17,24 +17,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_has_any
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BOOLEAN BITMAP_HAS_ANY(BITMAP lhs, BITMAP rhs)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+计算两个Bitmap列是否存在相交元素,返回值是Boolean值. 
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_has_any(to_bitmap(1),to_bitmap(2)) cnt;
++------+
+| cnt  |
++------+
+|    0 |
++------+
+
+mysql> select bitmap_has_any(to_bitmap(1),to_bitmap(1)) cnt;
++------+
+| cnt  |
++------+
+|    1 |
++------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_HAS_ANY,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_hash.md.txt
similarity index 54%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_hash.md.txt
index 7c889d4..87efc11 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_hash.md.txt
@@ -17,24 +17,29 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_hash
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BITMAP BITMAP_HASH(expr)`
 
+对任意类型的输入计算32位的哈希值,返回包含该哈希值的bitmap。主要用于stream load任务将非整型字段导入Doris表的bitmap字段。例如
 
-用于返回满足要求的行的数目,或者非NULL行的数目
+```
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,device_id, device_id=bitmap_hash(device_id)"   http://host:8410/api/test/testDb/_stream_load
+```
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_count(bitmap_hash('hello'));
++------------------------------------+
+| bitmap_count(bitmap_hash('hello')) |
++------------------------------------+
+|                                  1 |
++------------------------------------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_HASH,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_or.md.txt
similarity index 68%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_or.md.txt
index 7c889d4..1c0bca7 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_or.md.txt
@@ -17,24 +17,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_or
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BITMAP BITMAP_OR(BITMAP lhs, BITMAP rhs)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+计算两个输入bitmap的并集,返回新的bitmap.
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_count(bitmap_or(to_bitmap(1), to_bitmap(2))) cnt;
++------+
+| cnt  |
++------+
+|    2 |
++------+
+
+mysql> select bitmap_count(bitmap_or(to_bitmap(1), to_bitmap(1))) cnt;
++------+
+| cnt  |
++------+
+|    1 |
++------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_OR,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_to_string.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_to_string.md.txt
new file mode 100644
index 0000000..e7a25eb
--- /dev/null
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/bitmap_to_string.md.txt
@@ -0,0 +1,62 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# bitmap_to_string
+
+## description
+### Syntax
+
+`VARCHAR BITMAP_TO_STRING(BITMAP input)`
+
+将一个bitmap转化成一个逗号分隔的字符串,字符串中包含所有设置的BIT位。输入是null的话会返回null。
+
+## example
+
+```
+mysql> select bitmap_to_string(null);
++------------------------+
+| bitmap_to_string(NULL) |
++------------------------+
+| NULL                   |
++------------------------+
+
+mysql> select bitmap_to_string(bitmap_empty());
++----------------------------------+
+| bitmap_to_string(bitmap_empty()) |
++----------------------------------+
+|                                  |
++----------------------------------+
+
+mysql> select bitmap_to_string(to_bitmap(1));
++--------------------------------+
+| bitmap_to_string(to_bitmap(1)) |
++--------------------------------+
+| 1                              |
++--------------------------------+
+
+mysql> select bitmap_to_string(bitmap_or(to_bitmap(1), to_bitmap(2)));
++---------------------------------------------------------+
+| bitmap_to_string(bitmap_or(to_bitmap(1), to_bitmap(2))) |
++---------------------------------------------------------+
+| 1,2                                                     |
++---------------------------------------------------------+
+
+```
+
+## keyword
+
+    BITMAP_TO_STRING,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/index.rst.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/index.rst.txt
new file mode 100644
index 0000000..0a7ce16
--- /dev/null
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/index.rst.txt
@@ -0,0 +1,8 @@
+=============
+bitmap函数
+=============
+
+.. toctree::
+    :glob:
+
+    *
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/to_bitmap.md.txt
similarity index 55%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/to_bitmap.md.txt
index 7c889d4..e0236be 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/bitmap-functions/to_bitmap.md.txt
@@ -17,24 +17,30 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# to_bitmap
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BITMAP TO_BITMAP(expr)`
 
+输入为取值在 0 ~ 18446744073709551615 区间的 unsigned bigint ,输出为包含该元素的bitmap。
+该函数主要用于stream load任务将整型字段导入Doris表的bitmap字段。例如
 
-用于返回满足要求的行的数目,或者非NULL行的数目
+```
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,user_id, user_id=to_bitmap(user_id)"   http://host:8410/api/test/testDb/_stream_load
+```
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_count(to_bitmap(10));
++-----------------------------+
+| bitmap_count(to_bitmap(10)) |
++-----------------------------+
+|                           1 |
++-----------------------------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    TO_BITMAP,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/curdate.md.txt
similarity index 71%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/curdate.md.txt
index 2da13c8..74decd99 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/curdate.md.txt
@@ -17,25 +17,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
+# curdate
 ## description
 ### Syntax
 
-`INT DAY(DATETIME date)`
+`DATE CURDATE()`
 
+获取当前的日期,以DATE类型返回
 
-获得日期中的天信息,返回值范围从1-31。
-
-参数为Date或者Datetime类型
-
-## example
+## Examples
 
 ```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
+mysql> SELECT CURDATE();
++------------+
+| CURDATE()  |
++------------+
+| 2019-12-20 |
++------------+
+
+mysql> SELECT CURDATE() + 0;
++---------------+
+| CURDATE() + 0 |
++---------------+
+|      20191220 |
++---------------+
+```
+
 ##keyword
-DAY
+
+    CURDATE
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/date_format.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/date_format.md.txt
index b99c575..a4afa60 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/date_format.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/date_format.md.txt
@@ -24,7 +24,7 @@ under the License.
 `VARCHAR DATE_FORMAT(DATETIME date, VARCHAR format)`
 
 
-将日期类型按照format的类型转化位字符串,
+将日期类型按照format的类型转化为字符串,
 当前支持最大128字节的字符串,如果返回值长度超过128,则返回NULL
 
 date 参数是合法的日期。format 规定日期/时间的输出格式。
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
index 2da13c8..f5b33c8 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
@@ -37,5 +37,6 @@ mysql> select day('1987-01-31');
 +----------------------------+
 |                         31 |
 +----------------------------+
+```
 ##keyword
 DAY
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/from_unixtime.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/from_unixtime.md.txt
index 9b48a22..397ceaa 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/from_unixtime.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/from_unixtime.md.txt
@@ -24,7 +24,7 @@ under the License.
 `DATETIME FROM_UNIXTIME(INT unix_timestamp[, VARCHAR string_format])`
 
 
-将 unix 时间戳转化位对应的 time 格式,返回的格式由 `string_format` 指定
+将 unix 时间戳转化为对应的 time 格式,返回的格式由 `string_format` 指定
 
 默认为 yyyy-MM-dd HH:mm:ss ,也支持date_format中的format格式
 
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/hour.md.txt
similarity index 69%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/hour.md.txt
index 7c889d4..c935697 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/hour.md.txt
@@ -17,24 +17,26 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# hour
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`INT HOUR(DATETIME date)`
 
 
-用于返回满足要求的行的数目,或者非NULL行的数目
+获得日期中的小时的信息,返回值范围从0-23。
+
+参数为Date或者Datetime类型
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select hour('2018-12-31 23:59:59');
++-----------------------------+
+| hour('2018-12-31 23:59:59') |
++-----------------------------+
+|                          23 |
++-----------------------------+
 ```
 ##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+HOUR
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/minute.md.txt
similarity index 72%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/minute.md.txt
index 2da13c8..84c3c9e 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/minute.md.txt
@@ -17,25 +17,26 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
+# minute
 ## description
 ### Syntax
 
-`INT DAY(DATETIME date)`
+`INT MINUTE(DATETIME date)`
 
 
-获得日期中的天信息,返回值范围从1-31。
+获得日期中的分钟的信息,返回值范围从0-59。
 
 参数为Date或者Datetime类型
 
 ## example
 
 ```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
+mysql> select minute('2018-12-31 23:59:59');
++-------------------------------+
+| minute('2018-12-31 23:59:59') |
++-------------------------------+
+|                            59 |
++-------------------------------+
+```
 ##keyword
-DAY
+MINUTE
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/second.md.txt
similarity index 73%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/second.md.txt
index 2da13c8..d9226d3 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/second.md.txt
@@ -17,25 +17,26 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
+# second
 ## description
 ### Syntax
 
-`INT DAY(DATETIME date)`
+`INT SECOND(DATETIME date)`
 
 
-获得日期中的天信息,返回值范围从1-31。
+获得日期中的秒的信息,返回值范围从0-59。
 
 参数为Date或者Datetime类型
 
 ## example
 
 ```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
+mysql> select second('2018-12-31 23:59:59');
++-------------------------------+
+| second('2018-12-31 23:59:59') |
++-------------------------------+
+|                            59 |
++-------------------------------+
+```
 ##keyword
-DAY
+SECOND
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/timestampadd.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/timestampadd.md.txt
new file mode 100644
index 0000000..34848e6
--- /dev/null
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/timestampadd.md.txt
@@ -0,0 +1,52 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# timestampadd
+## description
+### Syntax
+
+`DATETIME TIMESTAMPADD(unit, interval, DATETIME datetime_expr)`
+
+
+将整数表达式间隔添加到日期或日期时间表达式datetime_expr中。
+
+interval的单位由unit参数给出,它应该是下列值之一: 
+
+SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, or YEAR。
+
+## example
+
+```
+
+mysql> SELECT TIMESTAMPADD(MINUTE,1,'2019-01-02');
++------------------------------------------------+
+| timestampadd(MINUTE, 1, '2019-01-02 00:00:00') |
++------------------------------------------------+
+| 2019-01-02 00:01:00                            |
++------------------------------------------------+
+
+mysql> SELECT TIMESTAMPADD(WEEK,1,'2019-01-02');
++----------------------------------------------+
+| timestampadd(WEEK, 1, '2019-01-02 00:00:00') |
++----------------------------------------------+
+| 2019-01-09 00:00:00                          |
++----------------------------------------------+
+```
+##keyword
+TIMESTAMPADD
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/timestampdiff.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/timestampdiff.md.txt
new file mode 100644
index 0000000..86a3557
--- /dev/null
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/timestampdiff.md.txt
@@ -0,0 +1,60 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# timestampdiff
+## description
+### Syntax
+
+`INT TIMESTAMPDIFF(unit,DATETIME datetime_expr1, DATETIME datetime_expr2)`
+
+返回datetime_expr2−datetime_expr1,其中datetime_expr1和datetime_expr2是日期或日期时间表达式。
+
+结果(整数)的单位由unit参数给出,它应该是下列值之一: 
+
+SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, or YEAR。
+
+## example
+
+```
+
+MySQL> SELECT TIMESTAMPDIFF(MONTH,'2003-02-01','2003-05-01');
++--------------------------------------------------------------------+
+| timestampdiff(MONTH, '2003-02-01 00:00:00', '2003-05-01 00:00:00') |
++--------------------------------------------------------------------+
+|                                                                  3 |
++--------------------------------------------------------------------+
+
+MySQL> SELECT TIMESTAMPDIFF(YEAR,'2002-05-01','2001-01-01');
++-------------------------------------------------------------------+
+| timestampdiff(YEAR, '2002-05-01 00:00:00', '2001-01-01 00:00:00') |
++-------------------------------------------------------------------+
+|                                                                -1 |
++-------------------------------------------------------------------+
+
+
+MySQL> SELECT TIMESTAMPDIFF(MINUTE,'2003-02-01','2003-05-01 12:05:55');
++---------------------------------------------------------------------+
+| timestampdiff(MINUTE, '2003-02-01 00:00:00', '2003-05-01 12:05:55') |
++---------------------------------------------------------------------+
+|                                                              128885 |
++---------------------------------------------------------------------+
+
+```
+##keyword
+TIMESTAMPDIFF
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/hash-functions/index.rst.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/hash-functions/index.rst.txt
new file mode 100644
index 0000000..b0556ff
--- /dev/null
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/hash-functions/index.rst.txt
@@ -0,0 +1,8 @@
+=============
+Hash函数
+=============
+
+.. toctree::
+    :glob:
+
+    *
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/hash-functions/murmur_hash3_32.md.txt
similarity index 52%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/hash-functions/murmur_hash3_32.md.txt
index 7c889d4..cf0cc98 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/hash-functions/murmur_hash3_32.md.txt
@@ -6,9 +6,7 @@ regarding copyright ownership.  The ASF licenses this file
 to you under the Apache License, Version 2.0 (the
 "License"); you may not use this file except in compliance
 with the License.  You may obtain a copy of the License at
-
   http://www.apache.org/licenses/LICENSE-2.0
-
 Unless required by applicable law or agreed to in writing,
 software distributed under the License is distributed on an
 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@@ -17,24 +15,40 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# murmur_hash3_32
+
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`INT MURMUR_HASH3_32(VARCHAR input, ...)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+返回输入字符串的32位murmur3 hash值
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select murmur_hash3_32(null);
++-----------------------+
+| murmur_hash3_32(NULL) |
++-----------------------+
+|                  NULL |
++-----------------------+
+
+mysql> select murmur_hash3_32("hello");
++--------------------------+
+| murmur_hash3_32('hello') |
++--------------------------+
+|               1321743225 |
++--------------------------+
+
+mysql> select murmur_hash3_32("hello", "world");
++-----------------------------------+
+| murmur_hash3_32('hello', 'world') |
++-----------------------------------+
+|                         984713481 |
++-----------------------------------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    MURMUR_HASH3_32,HASH
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/index.rst.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/index.rst.txt
index 4f929c4..281d1a8 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/index.rst.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/index.rst.txt
@@ -14,3 +14,5 @@ SQL 函数
     spatial-functions/index
     string-functions/index
     aggregate-functions/index
+    bitmap-functions/index
+    hash-functions/index
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/string-functions/ends_with.md.txt
similarity index 55%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/string-functions/ends_with.md.txt
index 7c889d4..c2eecca 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/string-functions/ends_with.md.txt
@@ -17,24 +17,30 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# ends_with
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BOOLEAN ENDS_WITH (VARCHAR str, VARCHAR suffix)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+如果字符串以指定后缀结尾,返回true。否则,返回false。任意参数为NULL,返回NULL。
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select ends_with("Hello doris", "doris");
++-----------------------------------+
+| ends_with('Hello doris', 'doris') |
++-----------------------------------+
+|                                 1 | 
++-----------------------------------+
+
+mysql> select ends_with("Hello doris", "Hello");
++-----------------------------------+
+| ends_with('Hello doris', 'Hello') |
++-----------------------------------+
+|                                 0 | 
++-----------------------------------+
 ```
 ##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+ENDS_WITH
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/cn/sql-reference/sql-functions/string-functions/starts_with.md.txt
similarity index 53%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-functions/string-functions/starts_with.md.txt
index 7c889d4..5997abd 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-functions/string-functions/starts_with.md.txt
@@ -1,4 +1,4 @@
-<!-- 
+<!--
 Licensed to the Apache Software Foundation (ASF) under one
 or more contributor license agreements.  See the NOTICE file
 distributed with this work for additional information
@@ -17,24 +17,30 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# starts_with
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BOOLEAN STARTS_WITH (VARCHAR str, VARCHAR prefix)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+如果字符串以指定前缀开头,返回true。否则,返回false。任意参数为NULL,返回NULL。
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+MySQL [(none)]> select starts_with("hello world","hello");
++-------------------------------------+
+| starts_with('hello world', 'hello') |
++-------------------------------------+
+|                                   1 |
++-------------------------------------+
+
+MySQL [(none)]> select starts_with("hello world","world");
++-------------------------------------+
+| starts_with('hello world', 'world') |
++-------------------------------------+
+|                                   0 |
++-------------------------------------+
 ```
 ##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+STARTS_WITH
\ No newline at end of file
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Account Management/DROP USER.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Account Management/DROP USER.md.txt
index 5d90390..cbdc212 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Account Management/DROP USER.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Account Management/DROP USER.md.txt	
@@ -22,15 +22,20 @@ under the License.
 
 Syntax:
 
-    DROP USER 'user_name'
+    DROP USER 'user_identity'
 
-    DROP USER 命令会删除一个 palo 用户。这里 Doris 不支持删除指定的 user_identity。当删除一个指定用户后,该用户所对应的所有 user_identity 都会被删除。比如之前通过 CREATE USER 语句创建了 jack@'192.%' 以及 jack@['domain'] 两个用户,则在执行 DROP USER 'jack' 后,jack@'192.%' 以及 jack@['domain'] 都将被删除。
+    `user_identity`:
+
+        user@'host'
+        user@['domain']
+
+    删除指定的 user identity.
 
 ## example
 
-1. 删除用户 jack
+1. 删除用户 jack@'192.%'
    
-    DROP USER 'jack'
+    DROP USER 'jack'@'192.%'
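+
+2. 删除域名形式的用户 jack@['domain'](对应上文语法中的 user@['domain'] 形式,仅为示意)
+
+    DROP USER 'jack'@['domain']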
 
 ## keyword
 
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Administration/SHOW INDEX.md.txt
similarity index 70%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-statements/Administration/SHOW INDEX.md.txt
index 2da13c8..8bad66b 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Administration/SHOW INDEX.md.txt	
@@ -17,25 +17,19 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
-## description
-### Syntax
+# SHOW INDEX
 
-`INT DAY(DATETIME date)`
+## description
 
+    该语句用于展示一个表中索引的相关信息,目前只支持bitmap 索引
+    语法:
+        SHOW INDEX[ES] FROM [db_name.]table_name;
 
-获得日期中的天信息,返回值范围从1-31。
+## example
 
-参数为Date或者Datetime类型
+    1. 展示指定 table_name 下的索引
+        SHOW INDEX FROM example_db.table_name;
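+
+    2. 使用复数形式 INDEXES 展示索引(与上文语法 SHOW INDEX[ES] 对应,仅为等价写法示意)
+        SHOW INDEXES FROM example_db.table_name;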
 
-## example
+## keyword
 
-```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
-##keyword
-DAY
+    SHOW,INDEX
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/ALTER TABLE.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/ALTER TABLE.md.txt
index 3758aa7..b8974f6 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/ALTER TABLE.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/ALTER TABLE.md.txt	
@@ -18,7 +18,9 @@ under the License.
 -->
 
 # ALTER TABLE
+
 ## description
+
     该语句用于对已有的 table 进行修改。如果没有指定 rollup index,默认操作 base index。
     该语句分为三种操作类型: schema change 、rollup 、partition
     这三种操作类型不能同时出现在一条 ALTER TABLE 语句中。
@@ -29,7 +31,7 @@ under the License.
         ALTER TABLE [database.]table
         alter_clause1[, alter_clause2, ...];
 
-    alter_clause 分为 partition 、rollup、schema change 和 rename 四种。
+    alter_clause 分为 partition、rollup、schema change、rename 和 index 五种。
 
     partition 支持如下几种修改方式
     1. 增加分区
@@ -57,7 +59,11 @@ under the License.
         语法:
             MODIFY PARTITION partition_name SET ("key" = "value", ...)
         说明:
-            1) 当前支持修改分区的 storage_medium、storage_cooldown_time 和 replication_num 三个属性。
+            1) 当前支持修改分区的下列属性:
+                - storage_medium
+                - storage_cooldown_time
+                - replication_num
+                - in_memory
             2) 对于单分区表,partition_name 同表名。
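+        例子(仅为示意,假设分区 p1 已存在):
+            MODIFY PARTITION p1 SET ("in_memory" = "true")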
         
     rollup 支持如下几种创建方式:
@@ -66,19 +72,32 @@ under the License.
             ADD ROLLUP rollup_name (column_name1, column_name2, ...)
             [FROM from_index_name]
             [PROPERTIES ("key"="value", ...)]
-        注意:
+        例子:
+            ADD ROLLUP r1(col1,col2) from r0
+    1.2 批量创建 rollup index
+        语法:
+            ADD ROLLUP [rollup_name (column_name1, column_name2, ...)
+                        [FROM from_index_name]
+                        [PROPERTIES ("key"="value", ...)],...]
+        例子:
+            ADD ROLLUP r1(col1,col2) from r0, r2(col3,col4) from r0
+    1.3 注意:
             1) 如果没有指定 from_index_name,则默认从 base index 创建
             2) rollup 表中的列必须是 from_index 中已有的列
             3) 在 properties 中,可以指定存储格式。具体请参阅 CREATE TABLE
             
     2. 删除 rollup index
         语法:
-            DROP ROLLUP rollup_name
-            [PROPERTIES ("key"="value", ...)]
-        注意:
+            DROP ROLLUP rollup_name [PROPERTIES ("key"="value", ...)]
+        例子:
+            DROP ROLLUP r1
+    2.1 批量删除 rollup index
+        语法:DROP ROLLUP [rollup_name [PROPERTIES ("key"="value", ...)],...]
+        例子:DROP ROLLUP r1,r2
+    2.2 注意:
             1) 不能删除 base index
             2) 执行 DROP ROLLUP 一段时间内,可以通过 RECOVER 语句恢复被删除的 rollup index。详见 RECOVER 语句
-    
+
             
     schema change 支持如下几种修改方式:
     1. 向指定 index 的指定位置添加一列
@@ -125,8 +144,15 @@ under the License.
             4) 分区列不能做任何修改
             5) 目前支持以下类型的转换(精度损失由用户保证)
                 TINYINT/SMALLINT/INT/BIGINT 转换成 TINYINT/SMALLINT/INT/BIGINT/DOUBLE。
+                TINYINT/SMALLINT/INT/BIGINT/LARGEINT/FLOAT/DOUBLE/DECIMAL 转换成 VARCHAR
                 LARGEINT 转换成 DOUBLE
                 VARCHAR 支持修改最大长度
+                VARCHAR 转换成 TINYINT/SMALLINT/INT/BIGINT/LARGEINT/FLOAT/DOUBLE
+                VARCHAR 转换成 DATE (目前支持"%Y-%m-%d", "%y-%m-%d", "%Y%m%d", "%y%m%d", "%Y/%m/%d", "%y/%m/%d"六种格式化格式)
+                DATETIME 转换成 DATE(仅保留年-月-日信息, 例如: `2019-12-09 21:47:05` <--> `2019-12-09`)
+                DATE 转换成 DATETIME(时分秒自动补零, 例如: `2019-12-09` <--> `2019-12-09 00:00:00`)
+                FLOAT 转换成 DOUBLE
+                INT 转换成 DATE (如果INT类型数据不合法则转换失败,原始数据不变)
             6) 不支持从NULL转为NOT NULL
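+        例子(仅为示意,假设 v3 是明细模型表中一个 VARCHAR 类型的非 key 列):
+            MODIFY COLUMN v3 DATE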
                 
     5. 对指定 index 的列进行重新排序
@@ -138,7 +164,7 @@ under the License.
             1) index 中的所有列都要写出来
             2) value 列在 key 列之后
             
-    6. 修改table的属性,目前支持修改bloom filter列和colocate_with 属性
+    6. 修改table的属性,目前支持修改bloom filter列, colocate_with 属性和dynamic_partition属性
         语法:
             PROPERTIES ("key"="value")
         注意:
@@ -157,8 +183,19 @@ under the License.
     3. 修改 partition 名称
         语法:
             RENAME PARTITION old_partition_name new_partition_name;
-      
+    bitmap index 支持如下几种修改方式
+    1. 创建bitmap 索引
+        语法:
+            ADD INDEX index_name (column [, ...],) [USING BITMAP] [COMMENT 'balabala'];
+        注意:
+            1. 目前仅支持bitmap 索引
+            2. BITMAP 索引仅在单列上创建
+    2. 删除索引
+        语法:
+            DROP INDEX index_name;
+
 ## example
+
     [partition]
     1. 增加分区, 现有分区 [MIN, 2013-01-01),增加分区 [2013-01-01, 2014-01-01),使用默认分桶方式
         ALTER TABLE example_db.my_table
@@ -266,6 +303,16 @@ under the License.
     13. 将表的分桶方式由 Random Distribution 改为 Hash Distribution
 
         ALTER TABLE example_db.my_table set ("distribution_type" = "hash");
+    
+    14. 修改表的动态分区属性(支持未添加动态分区属性的表添加动态分区属性)
+        ALTER TABLE example_db.my_table set ("dynamic_partition.enable" = "false");
+        
+        如果需要在未添加动态分区属性的表中添加动态分区属性,则需要指定所有的动态分区属性
+        ALTER TABLE example_db.my_table set ("dynamic_partition.enable" = "true", "dynamic_partition.time_unit" = "DAY", "dynamic_partition.end" = "3", "dynamic_partition.prefix" = "p", "dynamic_partition.buckets" = "32");
+    15. 修改表的 in_memory 属性
+
+        ALTER TABLE example_db.my_table set ("in_memory" = "true");
+        
         
     [rename]
     1. 将名为 table1 的表修改为 table2
@@ -276,7 +323,12 @@ under the License.
         
     3. 将表 example_table 中名为 p1 的 partition 修改为 p2
         ALTER TABLE example_table RENAME PARTITION p1 p2;
-        
+    [index]
+    1. 在table1 上为siteid 创建bitmap 索引
+        ALTER TABLE table1 ADD INDEX index_name (siteid) [USING BITMAP] COMMENT 'balabala';
+    2. 删除table1 上的siteid列的bitmap 索引
+        ALTER TABLE table1 DROP INDEX index_name;
+
 ## keyword
+
     ALTER,TABLE,ROLLUP,COLUMN,PARTITION,RENAME
-    
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW PARTITIONS.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/ALTER VIEW.md.txt
similarity index 53%
copy from content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW PARTITIONS.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/ALTER VIEW.md.txt
index be01915..2baff68 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW PARTITIONS.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/ALTER VIEW.md.txt	
@@ -17,19 +17,29 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# SHOW PARTITIONS
+# ALTER VIEW
 ## description
-    该语句用于展示分区信息
-    语法:
-        SHOW PARTITIONS FROM [db_name.]table_name [PARTITION partition_name];
-
-## example
-    1. 展示指定 db 的下指定表的分区信息
-        SHOW PARTITIONS FROM example_db.table_name;
+	该语句用于修改一个view的定义
+	语法:
+		ALTER VIEW
+        [db_name.]view_name
+        (column1[ COMMENT "col comment"][, column2, ...])
+        AS query_stmt
         
-    1. 展示指定 db 的下指定表的指定分区的信息
-        SHOW PARTITIONS FROM example_db.table_name PARTITION p1;
-
-## keyword
-    SHOW,PARTITIONS
-    
+    说明:
+        1. 视图都是逻辑上的,其中的数据不会存储在物理介质上,在查询时视图将作为语句中的子查询,因此,修改视图的定义等价于修改query_stmt。
+        2. query_stmt 为任意支持的 SQL
+       
+## example
+	1、修改example_db上的视图example_view
+	
+		ALTER VIEW example_db.example_view
+		(
+			c1 COMMENT "column 1",
+			c2 COMMENT "column 2",
+			c3 COMMENT "column 3"
+		)
+		AS SELECT k1, k2, SUM(v1) FROM example_table 
+		GROUP BY k1, k2
+		
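+    For context, the view modified above might originally have been created as follows (a hypothetical sketch; the view, table and column names are illustrative). ALTER VIEW simply replaces this definition with the new query_stmt:
+
+        CREATE VIEW example_db.example_view
+        (
+            c1 COMMENT "column 1",
+            c2 COMMENT "column 2",
+            c3 COMMENT "column 3"
+        )
+        AS SELECT k1, k2, v1 FROM example_table;
+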
+   
\ No newline at end of file
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CANCEL ALTER.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CANCEL ALTER.md.txt
index f3600cb..98be482 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CANCEL ALTER.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CANCEL ALTER.md.txt	
@@ -29,8 +29,14 @@ under the License.
     语法:
         CANCEL ALTER TABLE ROLLUP
         FROM db_name.table_name
-        
-    2. 撤销 ALTER CLUSTER 操作
+
+    3. 根据job id批量撤销rollup操作
+    语法:
+        CANCEL ALTER TABLE ROLLUP
+                FROM db_name.table_name (jobid,...)
+    注意:
+        该命令为异步操作,具体是否执行成功需要使用`show alter table rollup`查看任务状态确认
+    4. 撤销 ALTER CLUSTER 操作
     语法:
         (待实现...)
 
@@ -46,6 +52,11 @@ under the License.
         CANCEL ALTER TABLE ROLLUP
         FROM example_db.my_table;
 
+    [CANCEL ALTER TABLE ROLLUP]
+    1. 根据job id撤销 my_table 下的 ADD ROLLUP 操作。
+        CANCEL ALTER TABLE ROLLUP
+         FROM example_db.my_table (12801,12802);
+
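+    Since the cancel command is asynchronous (see the note above), one way to confirm whether the jobs were actually cancelled is to check their state, for example:
+        SHOW ALTER TABLE ROLLUP FROM example_db;
+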
 ## keyword
     CANCEL,ALTER,TABLE,COLUMN,ROLLUP
     
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW PARTITIONS.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CREATE INDEX.md.txt
similarity index 67%
copy from content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW PARTITIONS.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CREATE INDEX.md.txt
index be01915..9767f01 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW PARTITIONS.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CREATE INDEX.md.txt	
@@ -17,19 +17,22 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# SHOW PARTITIONS
+# CREATE INDEX
+
 ## description
-    该语句用于展示分区信息
+
+    该语句用于创建索引
     语法:
-        SHOW PARTITIONS FROM [db_name.]table_name [PARTITION partition_name];
+        CREATE INDEX index_name ON table_name (column [, ...]) [USING BITMAP] [COMMENT 'balabala'];
+    注意:
+        1. 目前只支持bitmap 索引
+        2. BITMAP 索引仅在单列上创建
 
 ## example
-    1. 展示指定 db 的下指定表的分区信息
-        SHOW PARTITIONS FROM example_db.table_name;
-        
-    1. 展示指定 db 的下指定表的指定分区的信息
-        SHOW PARTITIONS FROM example_db.table_name PARTITION p1;
+
+    1. 在table1 上为siteid 创建bitmap 索引
+        CREATE INDEX index_name ON table1 (siteid) USING BITMAP COMMENT 'balabala';
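+
+    A possible way to verify the result afterwards (see the SHOW INDEX statement) is:
+        SHOW INDEX FROM table1;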
 
 ## keyword
-    SHOW,PARTITIONS
-    
+
+    CREATE,INDEX
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CREATE TABLE.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CREATE TABLE.md.txt
index d9903e2..8c0b035 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CREATE TABLE.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/CREATE TABLE.md.txt	
@@ -18,135 +18,163 @@ under the License.
 -->
 
 # CREATE TABLE
+
 ## description
-    该语句用于创建 table。
+
+该语句用于创建 table。
+语法:
+
+```
+    CREATE [EXTERNAL] TABLE [IF NOT EXISTS] [database.]table_name
+    (column_definition1[, column_definition2, ...]
+    [, index_definition1[, index_definition2, ...]])
+    [ENGINE = [olap|mysql|broker]]
+    [key_desc]
+    [COMMENT "table comment"];
+    [partition_desc]
+    [distribution_desc]
+    [rollup_index]
+    [PROPERTIES ("key"="value", ...)]
+    [BROKER PROPERTIES ("key"="value", ...)]
+```
+
+1. column_definition
     语法:
-        CREATE [EXTERNAL] TABLE [IF NOT EXISTS] [database.]table_name
-        (column_definition1[, column_definition2, ...])
-        [ENGINE = [olap|mysql|broker]]
-        [key_desc]
-        [COMMENT "table comment"];
-        [partition_desc]
-        [distribution_desc]
-        [PROPERTIES ("key"="value", ...)]
-        [BROKER PROPERTIES ("key"="value", ...)]
-        
-    1. column_definition
-        语法:
-        col_name col_type [agg_type] [NULL | NOT NULL] [DEFAULT "default_value"]
-        
-        说明:
-        col_name:列名称
-        col_type:列类型
-                            TINYINT(1字节)
-                                范围:-2^7 + 1 ~ 2^7 - 1
-                            SMALLINT(2字节)
-                                范围:-2^15 + 1 ~ 2^15 - 1
-                            INT(4字节)
-                                范围:-2^31 + 1 ~ 2^31 - 1
-                            BIGINT(8字节)
-                                范围:-2^63 + 1 ~ 2^63 - 1
-                            LARGEINT(16字节)
-                                范围:-2^127 + 1 ~ 2^127 - 1
-                            FLOAT(4字节)
-                                支持科学计数法
-                            DOUBLE(12字节)
-                                支持科学计数法
-                            DECIMAL[(precision, scale)] (16字节)
-                                保证精度的小数类型。默认是 DECIMAL(10, 0)
-                                precision: 1 ~ 27
-                                scale: 0 ~ 9
-                                其中整数部分为 1 ~ 18
-                                不支持科学计数法
-                            DATE(3字节)
-                                范围:1900-01-01 ~ 9999-12-31
-                            DATETIME(8字节)
-                                范围:1900-01-01 00:00:00 ~ 9999-12-31 23:59:59
-                            CHAR[(length)]
-                                定长字符串。长度范围:1 ~ 255。默认为1
-                            VARCHAR[(length)]
-                                变长字符串。长度范围:1 ~ 65533
-                            HLL (1~16385个字节)
-                                hll列类型,不需要指定长度和默认值、长度根据数据的聚合
-                                程度系统内控制,并且HLL列只能通过配套的hll_union_agg、Hll_cardinality、hll_hash进行查询或使用
-                            BITMAP
-                                bitmap 列类型,不需要指定长度和默认值
-                                BITMAP 列只能通过配套的 BITMAP_UNION、BITMAP_COUNT、TO_BITMAP 进行查询或使用
-                                
-        agg_type:聚合类型,如果不指定,则该列为 key 列。否则,该列为 value 列
-                            
-                            * SUM、MAX、MIN、REPLACE
-                            * HLL_UNION(仅用于HLL列,为HLL独有的聚合方式)、
-                            * BITMAP_UNION(仅用于 BITMAP 列,为 BITMAP 独有的聚合方式)、
-                            * REPLACE_IF_NOT_NULL:这个聚合类型的含义是当且仅当新导入数据是非NULL值时才会发生替换行为,如果新导入的数据是NULL,那么Doris仍然会保留原值。注意:如果用户在建表时REPLACE_IF_NOT_NULL列指定了NOT NULL,那么Doris仍然会将其转化为NULL,不会向用户报错。用户可以借助这个类型完成部分列导入的功能。
-                            *该类型只对聚合模型(key_desc的type为AGGREGATE KEY)有用,其它模型不需要指定这个。
-
-        是否允许为NULL: 默认不允许为 NULL。NULL 值在导入数据中用 \N 来表示
-
-        注意: 
-            BITMAP_UNION聚合类型列在导入时的原始数据类型必须是TINYINT,SMALLINT,INT。
-
-    2. ENGINE 类型
-        默认为 olap。可选 mysql, broker
-        1) 如果是 mysql,则需要在 properties 提供以下信息:
-        
-            PROPERTIES (
-            "host" = "mysql_server_host",
-            "port" = "mysql_server_port",
-            "user" = "your_user_name",
-            "password" = "your_password",
-            "database" = "database_name",
-            "table" = "table_name"
-            )
-        
-        注意:
-            "table" 条目中的 "table_name" 是 mysql 中的真实表名。
-            而 CREATE TABLE 语句中的 table_name 是该 mysql 表在 Palo 中的名字,可以不同。
-            
-            在 Palo 创建 mysql 表的目的是可以通过 Palo 访问 mysql 数据库。
-            而 Palo 本身并不维护、存储任何 mysql 数据。
-        2) 如果是 broker,表示表的访问需要通过指定的broker, 需要在 properties 提供以下信息:
-            PROPERTIES (
-            "broker_name" = "broker_name",
-            "path" = "file_path1[,file_path2]",
-            "column_separator" = "value_separator"
-            "line_delimiter" = "value_delimiter"
-            )
-            另外还需要提供Broker需要的Property信息,通过BROKER PROPERTIES来传递,例如HDFS需要传入
-            BROKER PROPERTIES(
-                "username" = "name", 
-                "password" = "password"
-            )
-            这个根据不同的Broker类型,需要传入的内容也不相同
-        注意:
-            "path" 中如果有多个文件,用逗号[,]分割。如果文件名中包含逗号,那么使用 %2c 来替代。如果文件名中包含 %,使用 %25 代替
-            现在文件内容格式支持CSV,支持GZ,BZ2,LZ4,LZO(LZOP) 压缩格式。
+    `col_name col_type [agg_type] [NULL | NOT NULL] [DEFAULT "default_value"]`
+
+    说明:
+    col_name:列名称
+    col_type:列类型
+
+    ```
+        TINYINT(1字节)
+            范围:-2^7 + 1 ~ 2^7 - 1
+        SMALLINT(2字节)
+            范围:-2^15 + 1 ~ 2^15 - 1
+        INT(4字节)
+            范围:-2^31 + 1 ~ 2^31 - 1
+        BIGINT(8字节)
+            范围:-2^63 + 1 ~ 2^63 - 1
+        LARGEINT(16字节)
+            范围:-2^127 + 1 ~ 2^127 - 1
+        FLOAT(4字节)
+            支持科学计数法
+        DOUBLE(12字节)
+            支持科学计数法
+        DECIMAL[(precision, scale)] (16字节)
+            保证精度的小数类型。默认是 DECIMAL(10, 0)
+            precision: 1 ~ 27
+            scale: 0 ~ 9
+            其中整数部分为 1 ~ 18
+            不支持科学计数法
+        DATE(3字节)
+            范围:1900-01-01 ~ 9999-12-31
+        DATETIME(8字节)
+            范围:1900-01-01 00:00:00 ~ 9999-12-31 23:59:59
+        CHAR[(length)]
+            定长字符串。长度范围:1 ~ 255。默认为1
+        VARCHAR[(length)]
+            变长字符串。长度范围:1 ~ 65533
+        HLL (1~16385个字节)
+            hll列类型,不需要指定长度和默认值、长度根据数据的聚合
+            程度系统内控制,并且HLL列只能通过配套的hll_union_agg、Hll_cardinality、hll_hash进行查询或使用
+        BITMAP
+            bitmap列类型,不需要指定长度和默认值。表示整型的集合,元素最大支持到2^64 - 1
+    ```
+
+    agg_type:聚合类型,如果不指定,则该列为 key 列。否则,该列为 value 列
+       * SUM、MAX、MIN、REPLACE
+       * HLL_UNION(仅用于HLL列,为HLL独有的聚合方式)、
+       * BITMAP_UNION(仅用于 BITMAP 列,为 BITMAP 独有的聚合方式)、
+       * REPLACE_IF_NOT_NULL:这个聚合类型的含义是当且仅当新导入数据是非NULL值时才会发生替换行为,如果新导入的数据是NULL,那么Doris仍然会保留原值。注意:如果用户在建表时为REPLACE_IF_NOT_NULL列指定了NOT NULL,那么Doris仍然会将其转化为NULL,不会向用户报错。用户可以借助这个类型完成部分列导入的功能。
+       * 该类型只对聚合模型(key_desc的type为AGGREGATE KEY)有用,其它模型不需要指定这个。
+
+    是否允许为NULL: 默认不允许为 NULL。NULL 值在导入数据中用 \N 来表示
+
+    注意:
+        BITMAP_UNION聚合类型列在导入时的原始数据类型必须是TINYINT,SMALLINT,INT,BIGINT。
+
+2. index_definition
+    语法:
+        `INDEX index_name (col_name[, col_name, ...]) [USING BITMAP] COMMENT 'xxxxxx'`
+    说明:
+        index_name:索引名称
+        col_name:列名
+    注意:
+        当前仅支持BITMAP索引, BITMAP索引仅支持应用于单列
+
+3. ENGINE 类型
+    默认为 olap。可选 mysql, broker
+    1) 如果是 mysql,则需要在 properties 提供以下信息:
+
+```
+    PROPERTIES (
+        "host" = "mysql_server_host",
+        "port" = "mysql_server_port",
+        "user" = "your_user_name",
+        "password" = "your_password",
+        "database" = "database_name",
+        "table" = "table_name"
+        )
+```
+
+    注意:
+        "table" 条目中的 "table_name" 是 mysql 中的真实表名。
+        而 CREATE TABLE 语句中的 table_name 是该 mysql 表在 Palo 中的名字,可以不同。
     
-    3. key_desc
-        语法:
-            key_type(k1[,k2 ...])
-        说明:
-            数据按照指定的key列进行排序,且根据不同的key_type具有不同特性。
-            key_type支持一下类型:
-                    AGGREGATE KEY:key列相同的记录,value列按照指定的聚合类型进行聚合,
-                                 适合报表、多维分析等业务场景。
-                    UNIQUE KEY:key列相同的记录,value列按导入顺序进行覆盖,
-                                 适合按key列进行增删改查的点查询业务。
-                    DUPLICATE KEY:key列相同的记录,同时存在于Palo中,
-                                 适合存储明细数据或者数据无聚合特性的业务场景。
-        注意:
-            除AGGREGATE KEY外,其他key_type在建表时,value列不需要指定聚合类型。
+    在 Palo 创建 mysql 表的目的是可以通过 Palo 访问 mysql 数据库。
+        而 Palo 本身并不维护、存储任何 mysql 数据。
+    2) 如果是 broker,表示表的访问需要通过指定的broker, 需要在 properties 提供以下信息:
+        ```
+        PROPERTIES (
+        "broker_name" = "broker_name",
+        "path" = "file_path1[,file_path2]",
+        "column_separator" = "value_separator"
+        "line_delimiter" = "value_delimiter"
+        )
+        ```
+        另外还需要提供Broker需要的Property信息,通过BROKER PROPERTIES来传递,例如HDFS需要传入
+        ```
+        BROKER PROPERTIES(
+            "username" = "name",
+            "password" = "password"
+        )
+        ```
+        这个根据不同的Broker类型,需要传入的内容也不相同
+    注意:
+        "path" 中如果有多个文件,用逗号[,]分割。如果文件名中包含逗号,那么使用 %2c 来替代。如果文件名中包含 %,使用 %25 代替
+        现在文件内容格式支持CSV,支持GZ,BZ2,LZ4,LZO(LZOP) 压缩格式。
+
+4. key_desc
+    语法:
+        `key_type(k1[,k2 ...])`
+    说明:
+        数据按照指定的key列进行排序,且根据不同的key_type具有不同特性。
+        key_type支持以下类型:
+                AGGREGATE KEY:key列相同的记录,value列按照指定的聚合类型进行聚合,
+                             适合报表、多维分析等业务场景。
+                UNIQUE KEY:key列相同的记录,value列按导入顺序进行覆盖,
+                             适合按key列进行增删改查的点查询业务。
+                DUPLICATE KEY:key列相同的记录,同时存在于Palo中,
+                             适合存储明细数据或者数据无聚合特性的业务场景。
+        默认为DUPLICATE KEY,key列为列定义中前36个字节, 如果前36个字节的列数小于3,将使用前三列。
+    注意:
+        除AGGREGATE KEY外,其他key_type在建表时,value列不需要指定聚合类型。
 
-    4. partition_desc
-        partition描述有两种使用方式
-        1) LESS THAN 
+5. partition_desc
+    partition描述有两种使用方式
+    1) LESS THAN
         语法:
+
+        ```
             PARTITION BY RANGE (k1, k2, ...)
             (
             PARTITION partition_name1 VALUES LESS THAN MAXVALUE|("value1", "value2", ...),
             PARTITION partition_name2 VALUES LESS THAN MAXVALUE|("value1", "value2", ...)
             ...
             )
+        ```
+
         说明:
             使用指定的 key 列和指定的数值范围进行分区。
             1) 分区名称仅支持字母开头,字母、数字和下划线组成
@@ -155,260 +183,420 @@ under the License.
             3) 分区为左闭右开区间,首个分区的左边界为最小值
             4) NULL 值只会存放在包含最小值的分区中。当包含最小值的分区被删除后,NULL 值将无法导入。
             5) 可以指定一列或多列作为分区列。如果分区值缺省,则会默认填充最小值。
-                             
+
         注意:
             1) 分区一般用于时间维度的数据管理
             2) 有数据回溯需求的,可以考虑首个分区为空分区,以便后续增加分区
-            
-        2)Fixed Range
+
+    2)Fixed Range
         语法:
+        ```
             PARTITION BY RANGE (k1, k2, k3, ...)
             (
             PARTITION partition_name1 VALUES [("k1-lower1", "k2-lower1", "k3-lower1",...), ("k1-upper1", "k2-upper1", "k3-upper1", ...)),
             PARTITION partition_name2 VALUES [("k1-lower1-2", "k2-lower1-2", ...), ("k1-upper1-2", MAXVALUE, ))
             )
+        ```
         说明:
             1)Fixed Range比LESS THAN相对灵活些,左右区间完全由用户自己确定
             2)其他与LESS THAN保持同步
 
-    5. distribution_desc
+6. distribution_desc
         1) Hash 分桶
         语法:
-            DISTRIBUTED BY HASH (k1[,k2 ...]) [BUCKETS num]
+            `DISTRIBUTED BY HASH (k1[,k2 ...]) [BUCKETS num]`
         说明:
             使用指定的 key 列进行哈希分桶。默认分桶数为10
 
-        建议:建议使用Hash分桶方式
-
-    6. PROPERTIES
-        1) 如果 ENGINE 类型为 olap,则可以在 properties 中指定列存(目前我们仅支持列存)
+    建议:建议使用Hash分桶方式
 
-            PROPERTIES (
-            "storage_type" = "[column]",
-            )
-        
-        2) 如果 ENGINE 类型为 olap
+7. PROPERTIES
+    1) 如果 ENGINE 类型为 olap
            可以在 properties 设置该表数据的初始存储介质、存储到期时间和副本数。
-           
-           PROPERTIES (
+
+    ```
+       PROPERTIES (
            "storage_medium" = "[SSD|HDD]",
            ["storage_cooldown_time" = "yyyy-MM-dd HH:mm:ss"],
            ["replication_num" = "3"]
            )
-           
-           storage_medium:        用于指定该分区的初始存储介质,可选择 SSD 或 HDD。默认为 HDD。
+    ```
+
+       storage_medium:        用于指定该分区的初始存储介质,可选择 SSD 或 HDD。默认为 HDD。
            storage_cooldown_time: 当设置存储介质为 SSD 时,指定该分区在 SSD 上的存储到期时间。
                                    默认存放 7 天。
                                    格式为:"yyyy-MM-dd HH:mm:ss"
            replication_num:        指定分区的副本数。默认为 3
-           
-           当表为单分区表时,这些属性为表的属性。
+    
+       当表为单分区表时,这些属性为表的属性。
            当表为两级分区时,这些属性为附属于每一个分区。
            如果希望不同分区有不同属性。可以通过 ADD PARTITION 或 MODIFY PARTITION 进行操作
 
-        3) 如果 Engine 类型为 olap, 并且 storage_type 为 column, 可以指定某列使用 bloom filter 索引
+    2) 如果 Engine 类型为 olap, 可以指定某列使用 bloom filter 索引
            bloom filter 索引仅适用于查询条件为 in 和 equal 的情况,该列的值越分散效果越好
            目前只支持以下情况的列:除了 TINYINT FLOAT DOUBLE 类型以外的 key 列及聚合方法为 REPLACE 的 value 列
-           
-           PROPERTIES (
+
+```
+       PROPERTIES (
            "bloom_filter_columns"="k1,k2,k3"
            )
-        4) 如果希望使用Colocate Join 特性,需要在 properties 中指定
+```
+
+    3) 如果希望使用 Colocate Join 特性,需要在 properties 中指定
 
-           PROPERTIES (
+```
+       PROPERTIES (
            "colocate_with"="table1"
            )
+```
     
-## example
-    1. 创建一个 olap 表,使用 HASH 分桶,使用列存,相同key的记录进行聚合
-        CREATE TABLE example_db.table_hash
-        (
-        k1 TINYINT,
-        k2 DECIMAL(10, 2) DEFAULT "10.5",
-        v1 CHAR(10) REPLACE,
-        v2 INT SUM
-        )
-        ENGINE=olap
-        AGGREGATE KEY(k1, k2)
-        COMMENT "my first doris table"
-        DISTRIBUTED BY HASH(k1) BUCKETS 32
-        PROPERTIES ("storage_type"="column");
-        
-    2. 创建一个 olap 表,使用 Hash 分桶,使用列存,相同key的记录进行覆盖,
-       设置初始存储介质和冷却时间
-        CREATE TABLE example_db.table_hash
-        (
-        k1 BIGINT,
-        k2 LARGEINT,
-        v1 VARCHAR(2048) REPLACE,
-        v2 SMALLINT SUM DEFAULT "10"
-        )
-        ENGINE=olap
-        UNIQUE KEY(k1, k2)
-        DISTRIBUTED BY HASH (k1, k2) BUCKETS 32
-        PROPERTIES(
-        "storage_type"="column",
-        "storage_medium" = "SSD",
-        "storage_cooldown_time" = "2015-06-04 00:00:00"
-        );
+    4) 如果希望使用动态分区特性,需要在properties 中指定
     
-    3. 创建一个 olap 表,使用 Range 分区,使用Hash分桶,默认使用列存,
-       相同key的记录同时存在,设置初始存储介质和冷却时间
-       
-    1)LESS THAN
-        CREATE TABLE example_db.table_range
-        (
-        k1 DATE,
-        k2 INT,
-        k3 SMALLINT,
-        v1 VARCHAR(2048),
-        v2 DATETIME DEFAULT "2014-02-04 15:36:00"
-        )
-        ENGINE=olap
-        DUPLICATE KEY(k1, k2, k3)
-        PARTITION BY RANGE (k1)
-        (
-        PARTITION p1 VALUES LESS THAN ("2014-01-01"),
-        PARTITION p2 VALUES LESS THAN ("2014-06-01"),
-        PARTITION p3 VALUES LESS THAN ("2014-12-01")
-        )
-        DISTRIBUTED BY HASH(k2) BUCKETS 32
-        PROPERTIES(
-        "storage_medium" = "SSD", "storage_cooldown_time" = "2015-06-04 00:00:00"
-        );
-        
-        说明:
-        这个语句会将数据划分成如下3个分区:
-        ( {    MIN     },   {"2014-01-01"} )
-        [ {"2014-01-01"},   {"2014-06-01"} )
-        [ {"2014-06-01"},   {"2014-12-01"} )
-        
-        不在这些分区范围内的数据将视为非法数据被过滤
-        
-    2) Fixed Range
-        CREATE TABLE table_range
-        (
-        k1 DATE,
-        k2 INT,
-        k3 SMALLINT,
-        v1 VARCHAR(2048),
-        v2 DATETIME DEFAULT "2014-02-04 15:36:00"
-        )
-        ENGINE=olap
-        DUPLICATE KEY(k1, k2, k3)
-        PARTITION BY RANGE (k1, k2, k3)
-        (
-        PARTITION p1 VALUES [("2014-01-01", "10", "200"), ("2014-01-01", "20", "300")),
-        PARTITION p2 VALUES [("2014-06-01", "100", "200"), ("2014-07-01", "100", "300"))
-        )
-        DISTRIBUTED BY HASH(k2) BUCKETS 32
-        PROPERTIES(
-        "storage_medium" = "SSD"
-        );
-
-    4. 创建一个 mysql 表
-        CREATE TABLE example_db.table_mysql
-        (
-        k1 DATE,
-        k2 INT,
-        k3 SMALLINT,
-        k4 VARCHAR(2048),
-        k5 DATETIME
-        )
-        ENGINE=mysql
-        PROPERTIES
-        (
-        "host" = "127.0.0.1",
-        "port" = "8239",
-        "user" = "mysql_user",
-        "password" = "mysql_passwd",
-        "database" = "mysql_db_test",
-        "table" = "mysql_table_test"
-        )
-        
-    5. 创建一个数据文件存储在HDFS上的 broker 外部表, 数据使用 "|" 分割,"\n" 换行
-        CREATE EXTERNAL TABLE example_db.table_broker (
-        k1 DATE,
-        k2 INT,
-        k3 SMALLINT,
-        k4 VARCHAR(2048),
-        k5 DATETIME
-        )
-        ENGINE=broker
-        PROPERTIES (
-        "broker_name" = "hdfs",
-        "path" = "hdfs://hdfs_host:hdfs_port/data1,hdfs://hdfs_host:hdfs_port/data2,hdfs://hdfs_host:hdfs_port/data3%2c4",
-        "column_separator" = "|",
-        "line_delimiter" = "\n"
-        )
-        BROKER PROPERTIES (
-        "username" = "hdfs_user",
-        "password" = "hdfs_password"
-        )
+```
+      PROPERTIES (
+          "dynamic_partition.enable" = "true|false",
+          "dynamic_partition.time_unit" = "DAY|WEEK|MONTH",
+          "dynamic_partitoin.end" = "${integer_value}",
+          "dynamic_partition.prefix" = "${string_value}",
+          "dynamic_partition.buckets" = "${integer_value}
+```
+    dynamic_partition.enable: 用于指定表级别的动态分区功能是否开启
+    dynamic_partition.time_unit: 用于指定动态添加分区的时间单位,可选择为DAY(天),WEEK(周),MONTH(月)
+    dynamic_partition.end: 用于指定提前创建的分区数量
+    dynamic_partition.prefix: 用于指定创建的分区名前缀,例如分区名前缀为p,则自动创建分区名为p20200108
+    dynamic_partition.buckets: 用于指定自动创建的分区分桶数量
 
-    6. 创建一张含有HLL列的表
-        CREATE TABLE example_db.example_table
-        (
-        k1 TINYINT,
-        k2 DECIMAL(10, 2) DEFAULT "10.5",
-        v1 HLL HLL_UNION,
-        v2 HLL HLL_UNION
-        )
-        ENGINE=olap
-        AGGREGATE KEY(k1, k2)
-        DISTRIBUTED BY HASH(k1) BUCKETS 32
-        PROPERTIES ("storage_type"="column");
-
-    7. 创建一张含有BITMAP_UNION聚合类型的表(v1和v2列的原始数据类型必须是TINYINT,SMALLINT,INT)
-        CREATE TABLE example_db.example_table
-        (
-        k1 TINYINT,
-        k2 DECIMAL(10, 2) DEFAULT "10.5",
-        v1 BITMAP BITMAP_UNION,
-        v2 BITMAP BITMAP_UNION
-        )
-        ENGINE=olap
-        AGGREGATE KEY(k1, k2)
-        DISTRIBUTED BY HASH(k1) BUCKETS 32
-        PROPERTIES ("storage_type"="column");
-
-    8. 创建两张支持Colocat Join的表t1 和t2
-        CREATE TABLE `t1` (
-        `id` int(11) COMMENT "",
-        `value` varchar(8) COMMENT ""
-        ) ENGINE=OLAP
-        DUPLICATE KEY(`id`)
-        DISTRIBUTED BY HASH(`id`) BUCKETS 10
-        PROPERTIES (
-        "colocate_with" = "t1"
-        );
-
-        CREATE TABLE `t2` (
-        `id` int(11) COMMENT "",
-        `value` varchar(8) COMMENT ""
-        ) ENGINE=OLAP
-        DUPLICATE KEY(`id`)
-        DISTRIBUTED BY HASH(`id`) BUCKETS 10
-        PROPERTIES (
-        "colocate_with" = "t1"
-        );
+    5) 建表时可以批量创建多个 Rollup
+    语法:
+    ```
+        ROLLUP (rollup_name (column_name1, column_name2, ...)
+               [FROM from_index_name]
+                [PROPERTIES ("key"="value", ...)],...)
+    ```
 
-    9. 创建一个数据文件存储在BOS上的 broker 外部表
-        CREATE EXTERNAL TABLE example_db.table_broker (
-        k1 DATE
-        )
-        ENGINE=broker
+    6) 如果希望使用 内存表 特性,需要在 properties 中指定
+
+```
         PROPERTIES (
-        "broker_name" = "bos",
-        "path" = "bos://my_bucket/input/file",
-        )
-        BROKER PROPERTIES (
-          "bos_endpoint" = "http://bj.bcebos.com",
-          "bos_accesskey" = "xxxxxxxxxxxxxxxxxxxxxxxxxx",
-          "bos_secret_accesskey"="yyyyyyyyyyyyyyyyyyyy"
-        )
+           "in_memory"="true"
+        )   
+```
+    当 in_memory 属性为 true 时,Doris会尽可能将该表的数据和索引Cache到BE 内存中
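+
+    The in_memory property can also be changed later on an existing table; a minimal sketch (see the ALTER TABLE document for details):
+
+```
+    ALTER TABLE example_db.my_table set ("in_memory" = "true");
+```
+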
+## example
+
+1. 创建一个 olap 表,使用 HASH 分桶,使用列存,相同key的记录进行聚合
+
+    ```
+    CREATE TABLE example_db.table_hash
+    (
+    k1 TINYINT,
+    k2 DECIMAL(10, 2) DEFAULT "10.5",
+    v1 CHAR(10) REPLACE,
+    v2 INT SUM
+    )
+    ENGINE=olap
+    AGGREGATE KEY(k1, k2)
+    COMMENT "my first doris table"
+    DISTRIBUTED BY HASH(k1) BUCKETS 32
+    PROPERTIES ("storage_type"="column");
+    ```
+
+2. 创建一个 olap 表,使用 Hash 分桶,使用列存,相同key的记录进行覆盖,
+   设置初始存储介质和冷却时间
+
+   ```
+    CREATE TABLE example_db.table_hash
+    (
+    k1 BIGINT,
+    k2 LARGEINT,
+    v1 VARCHAR(2048) REPLACE,
+    v2 SMALLINT SUM DEFAULT "10"
+    )
+    ENGINE=olap
+    UNIQUE KEY(k1, k2)
+    DISTRIBUTED BY HASH (k1, k2) BUCKETS 32
+    PROPERTIES(
+    "storage_type"="column",
+    "storage_medium" = "SSD",
+    "storage_cooldown_time" = "2015-06-04 00:00:00"
+    );
+   ```
+
+3. 创建一个 olap 表,使用 Range 分区,使用Hash分桶,默认使用列存,
+   相同key的记录同时存在,设置初始存储介质和冷却时间
+
+    1)LESS THAN
+
+    ```
+    CREATE TABLE example_db.table_range
+    (
+    k1 DATE,
+    k2 INT,
+    k3 SMALLINT,
+    v1 VARCHAR(2048),
+    v2 DATETIME DEFAULT "2014-02-04 15:36:00"
+    )
+    ENGINE=olap
+    DUPLICATE KEY(k1, k2, k3)
+    PARTITION BY RANGE (k1)
+    (
+    PARTITION p1 VALUES LESS THAN ("2014-01-01"),
+    PARTITION p2 VALUES LESS THAN ("2014-06-01"),
+    PARTITION p3 VALUES LESS THAN ("2014-12-01")
+    )
+    DISTRIBUTED BY HASH(k2) BUCKETS 32
+    PROPERTIES(
+    "storage_medium" = "SSD", "storage_cooldown_time" = "2015-06-04 00:00:00"
+    );
+    ```
+
+    说明:
+    这个语句会将数据划分成如下3个分区:
+
+    ```
+    ( {    MIN     },   {"2014-01-01"} )
+    [ {"2014-01-01"},   {"2014-06-01"} )
+    [ {"2014-06-01"},   {"2014-12-01"} )
+    ```
+
+    不在这些分区范围内的数据将视为非法数据被过滤
+
+   2) Fixed Range
+
+    ```
+    CREATE TABLE table_range
+    (
+    k1 DATE,
+    k2 INT,
+    k3 SMALLINT,
+    v1 VARCHAR(2048),
+    v2 DATETIME DEFAULT "2014-02-04 15:36:00"
+    )
+    ENGINE=olap
+    DUPLICATE KEY(k1, k2, k3)
+    PARTITION BY RANGE (k1, k2, k3)
+    (
+    PARTITION p1 VALUES [("2014-01-01", "10", "200"), ("2014-01-01", "20", "300")),
+    PARTITION p2 VALUES [("2014-06-01", "100", "200"), ("2014-07-01", "100", "300"))
+    )
+    DISTRIBUTED BY HASH(k2) BUCKETS 32
+    PROPERTIES(
+    "storage_medium" = "SSD"
+    );
+    ```
+
+4. 创建一个 mysql 表
+
+```
+    CREATE TABLE example_db.table_mysql
+    (
+    k1 DATE,
+    k2 INT,
+    k3 SMALLINT,
+    k4 VARCHAR(2048),
+    k5 DATETIME
+    )
+    ENGINE=mysql
+    PROPERTIES
+    (
+    "host" = "127.0.0.1",
+    "port" = "8239",
+    "user" = "mysql_user",
+    "password" = "mysql_passwd",
+    "database" = "mysql_db_test",
+    "table" = "mysql_table_test"
+    )
+```
+
+5. 创建一个数据文件存储在HDFS上的 broker 外部表, 数据使用 "|" 分割,"\n" 换行
+
+```
+    CREATE EXTERNAL TABLE example_db.table_broker (
+    k1 DATE,
+    k2 INT,
+    k3 SMALLINT,
+    k4 VARCHAR(2048),
+    k5 DATETIME
+    )
+    ENGINE=broker
+    PROPERTIES (
+    "broker_name" = "hdfs",
+    "path" = "hdfs://hdfs_host:hdfs_port/data1,hdfs://hdfs_host:hdfs_port/data2,hdfs://hdfs_host:hdfs_port/data3%2c4",
+    "column_separator" = "|",
+    "line_delimiter" = "\n"
+    )
+    BROKER PROPERTIES (
+    "username" = "hdfs_user",
+    "password" = "hdfs_password"
+    )
+```
+
+6. 创建一张含有HLL列的表
+
+```
+    CREATE TABLE example_db.example_table
+    (
+    k1 TINYINT,
+    k2 DECIMAL(10, 2) DEFAULT "10.5",
+    v1 HLL HLL_UNION,
+    v2 HLL HLL_UNION
+    )
+    ENGINE=olap
+    AGGREGATE KEY(k1, k2)
+    DISTRIBUTED BY HASH(k1) BUCKETS 32
+    PROPERTIES ("storage_type"="column");
+```
+
+7. 创建一张含有BITMAP_UNION聚合类型的表(v1和v2列的原始数据类型必须是TINYINT,SMALLINT,INT)
+
+```
+    CREATE TABLE example_db.example_table
+    (
+    k1 TINYINT,
+    k2 DECIMAL(10, 2) DEFAULT "10.5",
+    v1 BITMAP BITMAP_UNION,
+    v2 BITMAP BITMAP_UNION
+    )
+    ENGINE=olap
+    AGGREGATE KEY(k1, k2)
+    DISTRIBUTED BY HASH(k1) BUCKETS 32
+    PROPERTIES ("storage_type"="column");
+```
+
+8. 创建两张支持Colocate Join的表t1 和t2
+
+```
+    CREATE TABLE `t1` (
+    `id` int(11) COMMENT "",
+    `value` varchar(8) COMMENT ""
+    ) ENGINE=OLAP
+    DUPLICATE KEY(`id`)
+    DISTRIBUTED BY HASH(`id`) BUCKETS 10
+    PROPERTIES (
+    "colocate_with" = "t1"
+    );
+
+    CREATE TABLE `t2` (
+    `id` int(11) COMMENT "",
+    `value` varchar(8) COMMENT ""
+    ) ENGINE=OLAP
+    DUPLICATE KEY(`id`)
+    DISTRIBUTED BY HASH(`id`) BUCKETS 10
+    PROPERTIES (
+    "colocate_with" = "t1"
+    );
+```
+
+9. 创建一个数据文件存储在BOS上的 broker 外部表
+
+```
+    CREATE EXTERNAL TABLE example_db.table_broker (
+    k1 DATE
+    )
+    ENGINE=broker
+    PROPERTIES (
+    "broker_name" = "bos",
+    "path" = "bos://my_bucket/input/file",
+    )
+    BROKER PROPERTIES (
+      "bos_endpoint" = "http://bj.bcebos.com",
+      "bos_accesskey" = "xxxxxxxxxxxxxxxxxxxxxxxxxx",
+      "bos_secret_accesskey"="yyyyyyyyyyyyyyyyyyyy"
+    )
+```
+
+10. 创建一个带有bitmap 索引的表
+
+```
+    CREATE TABLE example_db.table_hash
+    (
+    k1 TINYINT,
+    k2 DECIMAL(10, 2) DEFAULT "10.5",
+    v1 CHAR(10) REPLACE,
+    v2 INT SUM,
+    INDEX k1_idx (k1) USING BITMAP COMMENT 'xxxxxx'
+    )
+    ENGINE=olap
+    AGGREGATE KEY(k1, k2)
+    COMMENT "my first doris table"
+    DISTRIBUTED BY HASH(k1) BUCKETS 32
+    PROPERTIES ("storage_type"="column");
+```
+
+11. 创建一个动态分区表(需要在FE配置中开启动态分区功能,FE 侧的开启方式可参考本节示例之后的示意片段),该表每天提前创建3天的分区,例如今天为`2020-01-08`,则会创建分区名为`p20200108`, `p20200109`, `p20200110`, `p20200111`的分区。分区范围分别为:
+
+```
+[types: [DATE]; keys: [2020-01-08]; ..types: [DATE]; keys: [2020-01-09]; )
+[types: [DATE]; keys: [2020-01-09]; ..types: [DATE]; keys: [2020-01-10]; )
+[types: [DATE]; keys: [2020-01-10]; ..types: [DATE]; keys: [2020-01-11]; )
+[types: [DATE]; keys: [2020-01-11]; ..types: [DATE]; keys: [2020-01-12]; )
+```
+
+```
+    CREATE TABLE example_db.dynamic_partition
+    (
+    k1 DATE,
+    k2 INT,
+    k3 SMALLINT,
+    v1 VARCHAR(2048),
+    v2 DATETIME DEFAULT "2014-02-04 15:36:00"
+    )
+    ENGINE=olap
+    DUPLICATE KEY(k1, k2, k3)
+    PARTITION BY RANGE (k1)
+    (
+    PARTITION p1 VALUES LESS THAN ("2014-01-01"),
+    PARTITION p2 VALUES LESS THAN ("2014-06-01"),
+    PARTITION p3 VALUES LESS THAN ("2014-12-01")
+    )
+    DISTRIBUTED BY HASH(k2) BUCKETS 32
+    PROPERTIES(
+    "storage_medium" = "SSD",
+    "dynamic_partition.time_unit" = "DAY",
+    "dynamic_partition.end" = "3",
+    "dynamic_partition.prefix" = "p",
+    "dynamic_partition.buckets" = "32"
+     );
+```
+
+12. Create a table with rollup index
+```
+    CREATE TABLE example_db.rollup_index_table
+    (
+        event_day DATE,
+        siteid INT DEFAULT '10',
+        citycode SMALLINT,
+        username VARCHAR(32) DEFAULT '',
+        pv BIGINT SUM DEFAULT '0'
+    )
+    AGGREGATE KEY(event_day, siteid, citycode, username)
+    DISTRIBUTED BY HASH(siteid) BUCKETS 10
+    rollup (
+    r1(event_day,siteid),
+    r2(event_day,citycode),
+    r3(event_day)
+    )
+    PROPERTIES("replication_num" = "3");
+```
+
+13. 创建一个内存表
+
+```
+    CREATE TABLE example_db.table_hash
+    (
+    k1 TINYINT,
+    k2 DECIMAL(10, 2) DEFAULT "10.5",
+    v1 CHAR(10) REPLACE,
+    v2 INT SUM,
+    INDEX k1_idx (k1) USING BITMAP COMMENT 'xxxxxx'
+    )
+    ENGINE=olap
+    AGGREGATE KEY(k1, k2)
+    COMMENT "my first doris table"
+    DISTRIBUTED BY HASH(k1) BUCKETS 32
+    PROPERTIES ("in_memory"="true");
+```
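+
+Note for example 11: the dynamic partition feature must also be enabled in the FE configuration, as mentioned there. A hedged sketch (the configuration key name is an assumption; check the FE configuration documentation):
+
+```
+    # fe.conf (assumed key name; restarting FE may be required)
+    dynamic_partition_enable = true
+```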
 
 ## keyword
+
     CREATE,TABLE
-        
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/DROP INDEX.md.txt
similarity index 69%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/DROP INDEX.md.txt
index 2da13c8..5ed5a78 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/DROP INDEX.md.txt	
@@ -17,25 +17,14 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
-## description
-### Syntax
-
-`INT DAY(DATETIME date)`
+# DROP INDEX
 
+## description
 
-获得日期中的天信息,返回值范围从1-31。
-
-参数为Date或者Datetime类型
+    该语句用于从一个表中删除指定名称的索引,目前仅支持bitmap 索引
+    语法:
+        DROP INDEX index_name ON [db_name.]table_name;
 
-## example
+## keyword
 
-```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
-##keyword
-DAY
+    DROP,INDEX
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/RECOVER.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/RECOVER.md.txt
index fceeaa5..1faa151 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/RECOVER.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/RECOVER.md.txt	
@@ -29,7 +29,7 @@ under the License.
             RECOVER PARTITION partition_name FROM [db_name.]table_name;
     
     说明:
-        1. 该操作仅能恢复之前一段时间内删除的元信息。默认为 3600 秒。
+        1. 该操作仅能恢复之前一段时间内删除的元信息。默认为 1 天。(可通过fe.conf中`catalog_trash_expire_second`参数配置)
         2. 如果删除元信息后新建立了同名同类型的元信息,则之前删除的元信息不能被恢复
 
 ## example
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/show-function.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/show-functions.md.txt
similarity index 72%
copy from content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/show-function.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/show-functions.md.txt
index 97d22ae..e6d89b1 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/show-function.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Definition/show-functions.md.txt	
@@ -17,42 +17,56 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# SHOW FUNCTION
+# SHOW FUNCTIONS
 ## description
 ### Syntax
 
 ```
-SHOW FUNCTION [FROM db]
+SHOW [FULL] [BUILTIN] FUNCTIONS [IN|FROM db] [LIKE 'function_pattern']
 ```
 
 ### Parameters
 
-> `db`: 要查询的数据库名字
+>`full`:表示显示函数的详细信息
+>`builtin`:表示显示系统提供的函数
+>`db`: 要查询的数据库名字
+>`function_pattern`: 用来过滤函数名称的参数  
 
 
-查看数据库下所有的自定义函数。如果用户指定了数据库,那么查看对应数据库的,否则直接查询当前会话所在数据库
+查看数据库下所有的自定义(系统提供)的函数。如果用户指定了数据库,那么查看对应数据库的,否则直接查询当前会话所在数据库
 
 需要对这个数据库拥有 `SHOW` 权限
 
 ## example
 
 ```
-mysql> show function in testDb\G
+mysql> show full functions in testDb\G
 *************************** 1. row ***************************
-        Signature: my_count(BIGINT)
-      Return Type: BIGINT
-    Function Type: Aggregate
-Intermediate Type: NULL
-       Properties: {"object_file":"http://host:port/libudasample.so","finalize_fn":"_ZN9doris_udf13CountFinalizeEPNS_15FunctionContextERKNS_9BigIntValE","init_fn":"_ZN9doris_udf9CountInitEPNS_15FunctionContextEPNS_9BigIntValE","merge_fn":"_ZN9doris_udf10CountMergeEPNS_15FunctionContextERKNS_9BigIntValEPS2_","md5":"37d185f80f95569e2676da3d5b5b9d2f","update_fn":"_ZN9doris_udf11CountUpdateEPNS_15FunctionContextERKNS_6IntValEPNS_9BigIntValE"}
-*************************** 2. row ***************************
         Signature: my_add(INT,INT)
       Return Type: INT
     Function Type: Scalar
 Intermediate Type: NULL
        Properties: {"symbol":"_ZN9doris_udf6AddUdfEPNS_15FunctionContextERKNS_6IntValES4_","object_file":"http://host:port/libudfsample.so","md5":"cfe7a362d10f3aaf6c49974ee0f1f878"}
+*************************** 2. row ***************************
+        Signature: my_count(BIGINT)
+      Return Type: BIGINT
+    Function Type: Aggregate
+Intermediate Type: NULL
+       Properties: {"object_file":"http://host:port/libudasample.so","finalize_fn":"_ZN9doris_udf13CountFinalizeEPNS_15FunctionContextERKNS_9BigIntValE","init_fn":"_ZN9doris_udf9CountInitEPNS_15FunctionContextEPNS_9BigIntValE","merge_fn":"_ZN9doris_udf10CountMergeEPNS_15FunctionContextERKNS_9BigIntValEPS2_","md5":"37d185f80f95569e2676da3d5b5b9d2f","update_fn":"_ZN9doris_udf11CountUpdateEPNS_15FunctionContextERKNS_6IntValEPNS_9BigIntValE"}
+
+2 rows in set (0.00 sec)
+mysql> show builtin functions in testDb like 'year%';
++---------------+
+| Function Name |
++---------------+
+| year          |
+| years_add     |
+| years_diff    |
+| years_sub     |
++---------------+
+4 rows in set (0.00 sec)
 ```
 
 ## keyword
 
-    SHOW,FUNCTION
+    SHOW,FUNCTIONS
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/BROKER LOAD.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/BROKER LOAD.md.txt
index cb8386e..a89f26c 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/BROKER LOAD.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/BROKER LOAD.md.txt	
@@ -84,7 +84,7 @@ under the License.
             
             file_type:
 
-            用于指定导入文件的类型,例如:parquet、csv。默认值通过文件后缀名判断。 
+            用于指定导入文件的类型,例如:parquet、orc、csv。默认值通过文件后缀名判断。 
  
             column_list:
 
@@ -155,7 +155,7 @@ under the License.
         timeout:         指定导入操作的超时时间。默认超时为4小时。单位秒。
         max_filter_ratio:最大容忍可过滤(数据不规范等原因)的数据比例。默认零容忍。
         exec_mem_limit:  导入内存限制。默认为 2GB。单位为字节。
-        strict mode:     是否对数据进行严格限制。默认为true。
+        strict mode:     是否对数据进行严格限制。默认为 false。
         timezone:         指定某些受时区影响的函数的时区,如 strftime/alignment_timestamp/from_unixtime 等等,具体请查阅 [时区] 文档。如果不指定,则使用 "Asia/Shanghai" 时区。
 
     5. 导入数据格式样例
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/GROUP BY.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/GROUP BY.md.txt
new file mode 100644
index 0000000..e67b9e6
--- /dev/null
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/GROUP BY.md.txt	
@@ -0,0 +1,163 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# GROUP BY
+
+## description
+
+  GROUP BY `GROUPING SETS` | `CUBE` | `ROLLUP` 是对 GROUP BY 子句的扩展,它能够在一个 GROUP BY 子句中实现多个集合的分组的聚合。其结果等价于将多个相应 GROUP BY 子句进行 UNION 操作。
+
+  GROUP BY 子句是只含有一个元素的 GROUP BY GROUPING SETS 的特例。
+  例如,GROUPING SETS 语句:
+
+  ```
+  SELECT a, b, SUM( c ) FROM tab1 GROUP BY GROUPING SETS ( (a, b), (a), (b), ( ) );
+  ```
+
+  其查询结果等价于:
+
+  ```
+  SELECT a, b, SUM( c ) FROM tab1 GROUP BY a, b
+  UNION
+  SELECT a, null, SUM( c ) FROM tab1 GROUP BY a
+  UNION
+  SELECT null, b, SUM( c ) FROM tab1 GROUP BY b
+  UNION
+  SELECT null, null, SUM( c ) FROM tab1
+  ```
+
+  `GROUPING(expr)` 指示一个列是否为聚合列,如果是聚合列为0,否则为1
+
+  `GROUPING_ID(expr  [ , expr [ , ... ] ])` 与GROUPING 类似, GROUPING_ID根据指定的column 顺序,计算出一个列列表的 bitmap 值,每一位为GROUPING的值。GROUPING_ID()函数返回位向量的十进制值。
+
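+  As a small illustration: in a result row produced by the grouping set (a) alone, GROUPING(a) is 0 and GROUPING(b) is 1, so GROUPING_ID(a, b) is binary 01, i.e. 1. A sketch of such a query, reusing the hypothetical tab1 above:
+
+  ```
+  SELECT a, b, GROUPING(a), GROUPING(b), GROUPING_ID(a, b), SUM( c )
+  FROM tab1 GROUP BY GROUPING SETS ( (a, b), (a), (b), ( ) );
+  ```
+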
+### Syntax
+
+  ```
+  SELECT ...
+  FROM ...
+  [ ... ]
+  GROUP BY [
+      , ... |
+      GROUPING SETS [, ...] (  groupSet [ , groupSet [ , ... ] ] ) |
+      ROLLUP(expr  [ , expr [ , ... ] ]) |
+      expr  [ , expr [ , ... ] ] WITH ROLLUP |
+      CUBE(expr  [ , expr [ , ... ] ]) |
+      expr  [ , expr [ , ... ] ] WITH CUBE
+      ]
+  [ ... ]
+  ```
+
+### Parameters
+
+  `groupSet` 表示 select list 中的列,别名或者表达式组成的集合 `groupSet ::= { ( expr  [ , expr [ , ... ] ] )}`
+
+  `expr`  表示 select list 中的列,别名或者表达式
+
+### Note
+
+  doris 支持类似PostgreSQL 语法, 语法实例如下
+
+  ```
+  SELECT a, b, SUM( c ) FROM tab1 GROUP BY GROUPING SETS ( (a, b), (a), (b), ( ) );
+  SELECT a, b,c, SUM( d ) FROM tab1 GROUP BY ROLLUP(a,b,c)
+  SELECT a, b,c, SUM( d ) FROM tab1 GROUP BY CUBE(a,b,c)
+  ```
+
+  `ROLLUP(a,b,c)` 等价于如下`GROUPING SETS` 语句
+
+  ```
+  GROUPING SETS (
+  (a,b,c),
+  ( a, b ),
+  ( a),
+  ( )
+  )
+  ```
+
+  `CUBE ( a, b, c )` 等价于如下`GROUPING SETS` 语句
+
+  ```
+  GROUPING SETS (
+  ( a, b, c ),
+  ( a, b ),
+  ( a,    c ),
+  ( a       ),
+  (    b, c ),
+  (    b    ),
+  (       c ),
+  (         )
+  )
+  ```
+
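+  The `WITH ROLLUP` / `WITH CUBE` forms listed in the syntax are shorthand for the same expansions; for example, the following is equivalent to `GROUP BY ROLLUP(a, b)`:
+
+  ```
+  SELECT a, b, SUM( c ) FROM tab1 GROUP BY a, b WITH ROLLUP;
+  ```
+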
+## example
+
+  下面是一个实际数据的例子
+
+  ```
+  > SELECT * FROM t;
+  +------+------+------+
+  | k1   | k2   | k3   |
+  +------+------+------+
+  | a    | A    |    1 |
+  | a    | A    |    2 |
+  | a    | B    |    1 |
+  | a    | B    |    3 |
+  | b    | A    |    1 |
+  | b    | A    |    4 |
+  | b    | B    |    1 |
+  | b    | B    |    5 |
+  +------+------+------+
+  8 rows in set (0.01 sec)
+
+  > SELECT k1, k2, SUM(k3) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k2), (k1), ( ) );
+  +------+------+-----------+
+  | k1   | k2   | sum(`k3`) |
+  +------+------+-----------+
+  | b    | B    |         6 |
+  | a    | B    |         4 |
+  | a    | A    |         3 |
+  | b    | A    |         5 |
+  | NULL | B    |        10 |
+  | NULL | A    |         8 |
+  | a    | NULL |         7 |
+  | b    | NULL |        11 |
+  | NULL | NULL |        18 |
+  +------+------+-----------+
+  9 rows in set (0.06 sec)
+
+  > SELECT k1, k2, GROUPING_ID(k1,k2), SUM(k3) FROM t GROUP BY GROUPING SETS ((k1, k2), (k1), (k2), ());
+  +------+------+--------------------+-----------+
+  | k1   | k2   | grouping_id(k1,k2) | sum(`k3`) |
+  +------+------+--------------------+-----------+
+  | a    | A    |                  0 |         3 |
+  | a    | B    |                  0 |         4 |
+  | a    | NULL |                  1 |         7 |
+  | b    | A    |                  0 |         5 |
+  | b    | B    |                  0 |         6 |
+  | b    | NULL |                  1 |        11 |
+  | NULL | A    |                  2 |         8 |
+  | NULL | B    |                  2 |        10 |
+  | NULL | NULL |                  3 |        18 |
+  +------+------+--------------------+-----------+
+  9 rows in set (0.02 sec)
+  ```
+
+## keyword
+
+  GROUP, GROUPING, GROUPING_ID, GROUPING_SETS, GROUPING SETS, CUBE, ROLLUP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/LOAD.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/LOAD.md.txt
index f9299c3..6134c14 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/LOAD.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/LOAD.md.txt	
@@ -93,7 +93,7 @@ under the License.
             
             file_type:
 
-            用于指定导入文件的类型,例如:parquet、csv。默认值通过文件后缀名判断。 
+            用于指定导入文件的类型,例如:parquet、orc、csv。默认值通过文件后缀名判断。 
  
             column_list:
 
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW ALTER.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW ALTER.md.txt
index a7678b0..e073651 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW ALTER.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW ALTER.md.txt	
@@ -25,6 +25,7 @@ under the License.
         
     说明:
         TABLE COLUMN:展示修改列的 ALTER 任务
+                      支持语法[WHERE TableName|CreateTime|FinishTime|State] [ORDER BY] [LIMIT]
         TABLE ROLLUP:展示创建或删除 ROLLUP index 的任务
         如果不指定 db_name,使用当前默认 db
         CLUSTER: 展示集群操作相关任务情况(仅管理员使用!待实现...)
@@ -32,11 +33,14 @@ under the License.
 ## example
     1. 展示默认 db 的所有修改列的任务执行情况
         SHOW ALTER TABLE COLUMN;
-        
-    2. 展示指定 db 的创建或删除 ROLLUP index 的任务执行情况
+    
+    2. 展示某个表最近一次修改列的任务执行情况
+        SHOW ALTER TABLE COLUMN WHERE TableName = "table1" ORDER BY CreateTime DESC LIMIT 1;
+ 
+    3. 展示指定 db 的创建或删除 ROLLUP index 的任务执行情况
         SHOW ALTER TABLE ROLLUP FROM example_db;
         
-    3. 展示集群操作相关任务(仅管理员使用!待实现...)
+    4. 展示集群操作相关任务(仅管理员使用!待实现...)
         SHOW ALTER CLUSTER;
         
 ## keyword
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW DYNAMIC PARTITION TABLES.md.txt
similarity index 69%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW DYNAMIC PARTITION TABLES.md.txt
index 2da13c8..f209f55 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW DYNAMIC PARTITION TABLES.md.txt	
@@ -17,25 +17,16 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
+# SHOW DYNAMIC PARTITION TABLES
 ## description
-### Syntax
-
-`INT DAY(DATETIME date)`
-
-
-获得日期中的天信息,返回值范围从1-31。
-
-参数为Date或者Datetime类型
+    该语句用于展示当前db下所有的动态分区表状态
+    语法:
+        SHOW DYNAMIC PARTITION TABLES [FROM db_name];
 
 ## example
-
-```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
-##keyword
-DAY
+    1. 展示数据库 database 的所有动态分区表状态
+        SHOW DYNAMIC PARTITION TABLES FROM database;
+        
+## keyword
+    SHOW,DYNAMIC,PARTITION,TABLES
+    
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW PARTITIONS.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW PARTITIONS.md.txt
index be01915..4e2b855 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW PARTITIONS.md.txt	
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW PARTITIONS.md.txt	
@@ -21,15 +21,18 @@ under the License.
 ## description
     该语句用于展示分区信息
     语法:
-        SHOW PARTITIONS FROM [db_name.]table_name [PARTITION partition_name];
+        SHOW PARTITIONS FROM [db_name.]table_name [WHERE] [ORDER BY] [LIMIT];
+    说明:
+        支持PartitionId,PartitionName,State,Buckets,ReplicationNum,LastConsistencyCheckTime等列的过滤 
 
 ## example
-    1. 展示指定 db 的下指定表的分区信息
+    1.展示指定db下指定表的所有分区信息
         SHOW PARTITIONS FROM example_db.table_name;
         
-    1. 展示指定 db 的下指定表的指定分区的信息
-        SHOW PARTITIONS FROM example_db.table_name PARTITION p1;
-
+    2.展示指定db下指定表的指定分区的信息
+        SHOW PARTITIONS FROM example_db.table_name WHERE PartitionName = "p1";
+    
+    3.展示指定db下指定表的最新分区的信息        
+        SHOW PARTITIONS FROM example_db.table_name ORDER BY PartitionId DESC LIMIT 1;
 ## keyword
     SHOW,PARTITIONS
-    
diff --git a/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW TRANSACTION.md.txt b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW TRANSACTION.md.txt
new file mode 100644
index 0000000..42d45e4
--- /dev/null
+++ b/content/_sources/documentation/cn/sql-reference/sql-statements/Data Manipulation/SHOW TRANSACTION.md.txt	
@@ -0,0 +1,80 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# SHOW TRANSACTION
+## description
+
+该语法用于查看指定 transaction id 的事务详情。
+
+语法:
+    
+```
+SHOW TRANSACTION
+[FROM db_name]
+WHERE id = transaction_id;
+```
+        
+返回结果示例:
+
+```
+     TransactionId: 4005
+             Label: insert_8d807d5d-bcdd-46eb-be6d-3fa87aa4952d
+       Coordinator: FE: 10.74.167.16
+ TransactionStatus: VISIBLE
+ LoadJobSourceType: INSERT_STREAMING
+       PrepareTime: 2020-01-09 14:59:07
+        CommitTime: 2020-01-09 14:59:09
+        FinishTime: 2020-01-09 14:59:09
+            Reason:
+ErrorReplicasCount: 0
+        ListenerId: -1
+         TimeoutMs: 300000
+```
+
+* TransactionId:事务id
+* Label:导入任务对应的 label
+* Coordinator:负责事务协调的节点
+* TransactionStatus:事务状态
+    * PREPARE:准备阶段
+    * COMMITTED:事务成功,但数据不可见
+    * VISIBLE:事务成功且数据可见
+    * ABORTED:事务失败
+* LoadJobSourceType:导入任务的类型。
+* PrepareTime:事务开始时间
+* CommitTime:事务提交成功的时间
+* FinishTime:数据可见的时间
+* Reason:错误信息
+* ErrorReplicasCount:有错误的副本数
+* ListenerId:相关的导入作业的id
+* TimeoutMs:事务超时时间,单位毫秒
+
+## example
+
+1. 查看 id 为 4005 的事务:
+
+    SHOW TRANSACTION WHERE ID=4005;
+
+2. 指定 db 中,查看 id 为 4005 的事务:
+
+    SHOW TRANSACTION FROM db WHERE ID=4005;
+
+## keyword
+
+    SHOW, TRANSACTION
+    
diff --git a/content/_sources/documentation/en/administrator-guide/alter-table/alter-table-bitmap-index_EN.md.txt b/content/_sources/documentation/en/administrator-guide/alter-table/alter-table-bitmap-index_EN.md.txt
new file mode 100644
index 0000000..7aa72a0
--- /dev/null
+++ b/content/_sources/documentation/en/administrator-guide/alter-table/alter-table-bitmap-index_EN.md.txt
@@ -0,0 +1,77 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Bitmap Index
+Users can speed up queries by creating a bitmap index.
+This document focuses on how to create an index job, as well as some considerations and frequently asked questions when creating an index.
+
+## Glossary
+* bitmap index: a fast data structure that speeds up queries.
+
+## Basic Principles
+Creating and dropping an index is essentially a schema change job. For details, please refer to
+[Schema Change](alter-table-schema-change_EN.md#Basic Principles).
+
+## Syntax
+There are two forms of syntax for creating and modifying an index: one is integrated into the ALTER TABLE statement, and the other uses the separate
+CREATE/DROP INDEX statements.
+1. Create Index
+
+    Please refer to [CREATE INDEX](../../sql-reference/sql-statements/Data%20Definition/CREATE%20INDEX_EN.md) 
+    or [ALTER TABLE](../../sql-reference/sql-statements/Data%20Definition/ALTER%20TABLE_EN.md#description),
+    You can also specify a bitmap index when creating a table; please refer to [CREATE TABLE](../../sql-reference/sql-statements/Data%20Definition/CREATE%20TABLE_EN.md)
+
+2. Show Index
+
+    Please refer to [SHOW INDEX](../../sql-reference/sql-statements/Administration/SHOW%20INDEX_EN.md)
+3. Drop Index
+
+    Please refer to [DROP INDEX](../../sql-reference/sql-statements/Data%20Definition/DROP%20INDEX_EN.md) or [ALTER TABLE
+    ](../../sql-reference/sql-statements/Data%20Definition/ALTER%20TABLE_EN.md#description)
+
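+For example, with the separate-statement form, a bitmap index on a single column could be created and dropped as follows (table, column and index names are illustrative):
+
+```
+CREATE INDEX siteid_idx ON example_db.table1 (siteid) USING BITMAP COMMENT 'bitmap index on siteid';
+DROP INDEX siteid_idx ON example_db.table1;
+```
+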
+## Create Job
+Please refer to [Schema Change](alter-table-schema-change_EN.md#Create Job)
+## View Job
+Please refer to [Schema Change](alter-table-schema-change_EN.md#View Job)
+
+## Cancel Job
+Please refer to [Schema Change](alter-table-schema-change_EN.md#Cancel Job)
+
+## Notice
+* Currently only index of bitmap type is supported.
+* The bitmap index is only created on a single column.
+* Bitmap indexes can be applied to all columns of the `Duplicate` data model and key columns of the `Aggregate` and `Uniq` models.
+* The data types supported by bitmap indexes are as follows:
+    * `TINYINT`
+    * `SMALLINT`
+    * `INT`
+    * `UNSIGNEDINT`
+    * `BIGINT`
+    * `CHAR`
+    * `VARCHAR`
+    * `DATE`
+    * `DATETIME`
+    * `LARGEINT`
+    * `DECIMAL`
+    * `BOOL`
+* The bitmap index takes effect only in segmentV2. You need to add the following configuration to the BE configuration file:
+    ```
+    default_rowset_type=BETA
+    compaction_rowset_type=BETA
+    ``` 
diff --git a/content/_sources/documentation/en/administrator-guide/alter-table/alter-table-rollup_EN.md.txt b/content/_sources/documentation/en/administrator-guide/alter-table/alter-table-rollup_EN.md.txt
new file mode 100644
index 0000000..3dc6b52
--- /dev/null
+++ b/content/_sources/documentation/en/administrator-guide/alter-table/alter-table-rollup_EN.md.txt
@@ -0,0 +1,181 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Rollup
+
+Users can speed up queries by creating rollup tables. For the concept and usage of Rollup, please refer to [Data
+ Model, ROLLUP and Prefix Index](../../getting-started/data-model-rollup_EN.md) and 
+ [Rollup and query](../../getting-started/hit-the-rollup_EN.md).
+
+This document focuses on how to create a Rollup job, as well as some considerations and frequently asked questions about creating a Rollup.
+
+## Glossary
+
+* Base Table: When each table is created, it corresponds to a base table. The base table stores the complete data of this table. Rollups are usually created based on the data in the base table (and can also be created from other rollups).
+* Index: Materialized index. Both Rollups and the Base table are called materialized indexes.
+* Transaction: Each import task is a transaction, and each transaction has a unique, incrementing Transaction ID.
+
+## Basic Principles
+
+The basic process of creating a Rollup is to generate a new Rollup data containing the specified column from the data in the Base table. Among them, two parts of data conversion are needed. One is the conversion of existing historical data, and the other is the conversion of newly arrived imported data during Rollup execution.
+
+```
++----------+
+| Load Job |
++----+-----+
+     |
+     | Load job generates both base and rollup index data
+     |
+     |      +------------------+ +---------------+
+     |      | Base Index       | | Base Index    |
+     +------> New Incoming Data| | History Data  |
+     |      +------------------+ +------+--------+
+     |                                  |
+     |                                  | Convert history data
+     |                                  |
+     |      +------------------+ +------v--------+
+     |      | Rollup Index     | | Rollup Index  |
+     +------> New Incoming Data| | History Data  |
+            +------------------+ +---------------+
+```
+
+Before starting the conversion of historical data, Doris will obtain the latest transaction ID and wait for all import transactions before this Transaction ID to complete. This Transaction ID becomes a watershed. This means that Doris guarantees that all import tasks after the watershed will also generate data for the Rollup Index. In this way, after the historical data conversion is completed, the data of the Rollup and the Base table are guaranteed to be consistent.
+
+## Create Job
+
+The specific syntax for creating a Rollup can be found in the description of the Rollup section in the help `HELP ALTER TABLE`.
+
+The creation of Rollup is an asynchronous process. After the job is submitted successfully, the user needs to use the `SHOW ALTER TABLE ROLLUP` command to view the progress of the job.
+
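+As an illustration, a Rollup named r1 on table tbl1 (matching the job shown in the next section) might be submitted with a statement of the following form; see `HELP ALTER TABLE` for the exact syntax and options:
+
+```
+ALTER TABLE tbl1 ADD ROLLUP r1(k1, k2, v1);
+```
+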
+## View Job
+
+`SHOW ALTER TABLE ROLLUP` can be used to view rollup jobs that are currently running or already finished. For example:
+
+```
+          JobId: 20037
+      TableName: tbl1
+     CreateTime: 2019-08-06 15:38:49
+   FinishedTime: N/A
+  BaseIndexName: tbl1
+RollupIndexName: r1
+       RollupId: 20038
+  TransactionId: 10034
+          State: PENDING
+            Msg:
+       Progress: N/A
+        Timeout: 86400
+```
+
+* JobId: A unique ID for each Rollup job.
+* TableName: The table name of the base table corresponding to Rollup.
+* CreateTime: Job creation time.
+* FinishedTime: The end time of the job. "N/A" is displayed if the job has not finished.
+* BaseIndexName: The name of the source Index corresponding to Rollup.
+* RollupIndexName: The name of the Rollup.
+* RollupId: The unique ID of the Rollup.
+* TransactionId: The watershed transaction ID for converting historical data.
+* State: The phase of the operation.
+     * PENDING: The job is waiting in the queue to be scheduled.
+     * WAITING_TXN: Wait for the import task before the watershed transaction ID to complete.
+     * RUNNING: Historical data conversion.
+     * FINISHED: The operation was successful.
+     * CANCELLED: The job failed.
+* Msg: If the job fails, a failure message is displayed here.
+* Progress: Job progress. Displayed only in the RUNNING state, in the form M/N, where N is the total number of replicas involved in the Rollup and M is the number of replicas whose historical data conversion has completed.
+* Timeout: Job timeout, in seconds.
+
+## Cancel Job
+
+In the case that the job status is not FINISHED or CANCELLED, you can cancel the Rollup job with the following command:
+
+`CANCEL ALTER TABLE ROLLUP FROM tbl_name;`
+
+## Notice
+
+* A table can have only one Rollup job running at a time. And only one rollup can be created in a job.
+
+* Rollup operations do not block import and query operations.
+
+* If a DELETE operation has a Key column in a where condition that does not exist in a Rollup, the DELETE is not allowed.
+
+    If a Key column does not exist in a Rollup, the DELETE operation cannot delete data from the Rollup, so the data consistency between the Rollup table and the Base table cannot be guaranteed.
+
+* Rollup columns must exist in the Base table.
+
+    Rollup columns are always a subset of the Base table columns. Columns that do not exist in the Base table cannot appear.
+
+* If a rollup contains columns of the REPLACE aggregation type, the rollup must contain all the key columns.
+
+    Assume the structure of the Base table is as follows:
+    
+    `(k1 INT, k2 INT, v1 INT REPLACE, v2 INT SUM)`
+    
+    If you need to create a Rollup that contains the `v1` column, you must also include the `k1` and `k2` columns. Otherwise, the system cannot determine the value of `v1` in the Rollup (see the sketch after this list).
+    
+    Note that all Value columns in the Unique data model table are of the REPLACE aggregation type.
+    
+* For a Rollup of a DUPLICATE data model table, you can specify the Rollup's DUPLICATE KEY.
+
+    The DUPLICATE KEY of a DUPLICATE data model table is actually the sort columns. A Rollup can specify its own sort columns, but they must be a prefix of the Rollup's column order. If not specified, the system checks whether the Rollup contains all the sort columns of the Base table and reports an error if it does not. For example:
+    
+    Base table structure: `(k1 INT, k2 INT, k3 INT) DUPLICATE KEY (k1, k2)`
+    
+    Rollup can be: `(k2 INT, k1 INT) DUPLICATE KEY (k2)`
+
+* A Rollup does not need to include the partition or bucket columns of the Base table.
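+
+For example, a minimal sketch (the table name is illustrative) of a legal Rollup for the Base table `(k1 INT, k2 INT, v1 INT REPLACE, v2 INT SUM)` described above: because it contains the REPLACE column `v1`, it must also contain both key columns.
+
+```
+ALTER TABLE tbl ADD ROLLUP r_replace(k1, k2, v1);
+```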
+
+## FAQ
+
+* How many Rollups can a table create?
+
+    In theory there is no limit to the number of Rollups a table can create, but too many Rollups affect load performance, because data is generated for all Rollups during each load. Rollups also take up physical storage space. It is usually recommended to keep the number of Rollups for a table under 10.
+    
+* Rollup creation speed
+
+    Rollup creation speed is currently estimated at about 10MB/s under worst-case efficiency. To be conservative, users can set the job timeout based on this rate.
+
+* Submitting a job returns the error `Table xxx is not stable. ...`
+
+    A Rollup job can be started only when the table data is complete and no replica balancing is in progress. If some data shard replicas of the table are incomplete, or some replicas are being balanced, the submission is rejected.
+    
+    You can check whether the data shard replicas are complete with the following command:
+    
+    ```ADMIN SHOW REPLICA STATUS FROM tbl WHERE STATUS != "OK";```
+    
+    If any rows are returned, there is a problem with those replicas. Such problems are usually fixed automatically by the system. You can also trigger a repair of this table first with the following command:
+    
+    ```ADMIN REPAIR TABLE tbl1; ```
+    
+    You can check if there are running balancing tasks with the following command:
+    
+    ```SHOW PROC "/cluster_balance/pending_tablets";```
+    
+    You can wait for the balancing task to complete, or temporarily disable the balancing operation with the following command:
+    
+    ```ADMIN SET FRONTEND CONFIG ("disable_balance" = "true");```
+    
+## Configurations
+
+### FE Configurations
+
+* `alter_table_timeout_second`: The default timeout for the job is 86400 seconds.
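+
+    As a sketch, assuming this FE config is runtime-mutable in your version (otherwise change fe.conf and restart the FE), the job timeout could be raised on the Master FE like this:
+
+    ```ADMIN SET FRONTEND CONFIG ("alter_table_timeout_second" = "172800");```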
+
+### BE Configurations
+
+* `alter_tablet_worker_count`: The number of threads used to perform historical data conversion on the BE side. The default is 3. To speed up Rollup jobs, you can increase this parameter appropriately and restart the BE, but too many conversion threads increase IO pressure and affect other operations. This thread pool is shared with Schema Change jobs.
diff --git a/content/_sources/documentation/en/administrator-guide/alter-table/alter-table-schema-change_EN.md.txt b/content/_sources/documentation/en/administrator-guide/alter-table/alter-table-schema-change_EN.md.txt
new file mode 100644
index 0000000..d9f3ced
--- /dev/null
+++ b/content/_sources/documentation/en/administrator-guide/alter-table/alter-table-schema-change_EN.md.txt
@@ -0,0 +1,224 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Schema Change
+
+Users can modify the schema of existing tables through the Schema Change operation. Doris currently supports the following modifications:
+
+* Add and delete columns
+* Modify column type
+* Adjust column order
+* Add and modify Bloom Filter
+* Add and delete bitmap index
+
+This document mainly describes how to create a Schema Change job, as well as some considerations and frequently asked questions about Schema Change.
+
+## Glossary
+
+* Base Table: When each table is created, it corresponds to a Base table. The Base table stores the complete data of the table. Rollups are usually created based on the data in the Base table (and can also be created from other Rollups).
+* Index: Materialized index. Both Rollups and the Base table are called materialized indexes.
+* Transaction: Each load task is a transaction, and each transaction has a unique, incrementing Transaction ID.
+* Rollup: Roll-up tables created based on the Base table or other Rollups.
+
+## Basic Principles
+
+The basic process of executing a Schema Change is to generate a copy of the index data of the new schema from the data of the original index. Among them, two parts of data conversion are required. One is the conversion of existing historical data, and the other is the conversion of newly arrived imported data during the execution of Schema Change.
+```
++----------+
+| Load Job |
++----+-----+
+     |
+     | Load job generates both origin and new index data
+     |
+     |      +------------------+ +---------------+
+     |      | Origin Index     | | Origin Index  |
+     +------> New Incoming Data| | History Data  |
+     |      +------------------+ +------+--------+
+     |                                  |
+     |                                  | Convert history data
+     |                                  |
+     |      +------------------+ +------v--------+
+     |      | New Index        | | New Index     |
+     +------> New Incoming Data| | History Data  |
+            +------------------+ +---------------+
+```
+
+Before starting the conversion of historical data, Doris will obtain a latest transaction ID. And wait for all import transactions before this Transaction ID to complete. This Transaction ID becomes a watershed. This means that Doris guarantees that all import tasks after the watershed will generate data for both the original Index and the new Index. In this way, when the historical data conversion is completed, the data in the new Index can be guaranteed to be complete.
+## Create Job
+
+The specific syntax for creating a Schema Change can be found in the description of the Schema Change section in the help `HELP ALTER TABLE`.
+
+The creation of a Schema Change is an asynchronous process. After the job is submitted successfully, the user needs to view the job progress through the `SHOW ALTER TABLE COLUMN` command.
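+
+For example, a minimal sketch (table and column names are illustrative) of submitting a Schema Change and then checking its progress:
+
+```
+ALTER TABLE tbl1 ADD COLUMN k4 INT DEFAULT "1";
+SHOW ALTER TABLE COLUMN;
+```
+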
+## View Job
+
+You can use `SHOW ALTER TABLE COLUMN` to view Schema Change jobs that are currently running or have completed. When a Schema Change job involves multiple indexes, the command displays multiple rows, one per index. For example:
+
+```
+        JobId: 20021
+    TableName: tbl1
+   CreateTime: 2019-08-05 23:03:13
+   FinishTime: 2019-08-05 23:03:42
+    IndexName: tbl1
+      IndexId: 20022
+OriginIndexId: 20017
+SchemaVersion: 2:792557838
+TransactionId: 10023
+        State: FINISHED
+          Msg:
+     Progress: N/A
+      Timeout: 86400
+```
+
+* JobId: A unique ID for each Schema Change job.
+* TableName: The table name of the base table corresponding to Schema Change.
+* CreateTime: Job creation time.
+* FinishedTime: The end time of the job. "N/A" is displayed if the job has not finished.
+* IndexName: The name of an Index involved in this modification.
+* IndexId: The unique ID of the new Index.
+* OriginIndexId: The unique ID of the old Index.
+* SchemaVersion: Displayed in M: N format. M is the version of this Schema Change, and N is the corresponding hash value. With each Schema Change, the version is incremented.
+* TransactionId: The watershed transaction ID for converting historical data.
+* State: The phase of the operation.
+    * PENDING: The job is waiting in the queue to be scheduled.
+    * WAITING_TXN: Wait for the import task before the watershed transaction ID to complete.
+    * RUNNING: Historical data conversion.
+    * FINISHED: The operation was successful.
+    * CANCELLED: The job failed.
+* Msg: If the job fails, a failure message is displayed here.
+* Progress: Job progress. Displayed only in the RUNNING state, in the form M/N, where N is the total number of replicas involved in the Schema Change and M is the number of replicas whose historical data conversion has completed.
+* Timeout: Job timeout, in seconds.
+
+## Cancel Job
+
+In the case that the job status is not FINISHED or CANCELLED, you can cancel the Schema Change job with the following command:
+`CANCEL ALTER TABLE COLUMN FROM tbl_name;`
+
+## Best Practice
+
+Schema Change can make multiple changes to multiple indexes in one job. For example:
+Source Schema:
+
+```
++-----------+-------+------+------+------+---------+-------+
+| IndexName | Field | Type | Null | Key  | Default | Extra |
++-----------+-------+------+------+------+---------+-------+
+| tbl1      | k1    | INT  | No   | true | N/A     |       |
+|           | k2    | INT  | No   | true | N/A     |       |
+|           | k3    | INT  | No   | true | N/A     |       |
+|           |       |      |      |      |         |       |
+| rollup2   | k2    | INT  | No   | true | N/A     |       |
+|           |       |      |      |      |         |       |
+| rollup1   | k1    | INT  | No   | true | N/A     |       |
+|           | k2    | INT  | No   | true | N/A     |       |
++-----------+-------+------+------+------+---------+-------+
+```
+
+You can add a column k4 to both rollup1 and rollup2, and additionally add a column k5 to rollup2, with the following statement:
+```
+ALTER TABLE tbl1
+ADD COLUMN k4 INT default "1" to rollup1,
+ADD COLUMN k4 INT default "1" to rollup2,
+ADD COLUMN k5 INT default "1" to rollup2;
+```
+
+When the job completes, the schema becomes:
+
+```
++-----------+-------+------+------+------+---------+-------+
+| IndexName | Field | Type | Null | Key  | Default | Extra |
++-----------+-------+------+------+------+---------+-------+
+| tbl1      | k1    | INT  | No   | true | N/A     |       |
+|           | k2    | INT  | No   | true | N/A     |       |
+|           | k3    | INT  | No   | true | N/A     |       |
+|           | k4    | INT  | No   | true | 1       |       |
+|           | k5    | INT  | No   | true | 1       |       |
+|           |       |      |      |      |         |       |
+| rollup2   | k2    | INT  | No   | true | N/A     |       |
+|           | k4    | INT  | No   | true | 1       |       |
+|           | k5    | INT  | No   | true | 1       |       |
+|           |       |      |      |      |         |       |
+| rollup1   | k1    | INT  | No   | true | N/A     |       |
+|           | k2    | INT  | No   | true | N/A     |       |
+|           | k4    | INT  | No   | true | 1       |       |
++-----------+-------+------+------+------+---------+-------+
+```
+
+As you can see, the Base table tbl1 also automatically added the k4 and k5 columns. That is, columns added to any Rollup are automatically added to the Base table as well.
+
+At the same time, columns that already exist in the Base table are not allowed to be added to a Rollup. If you need to do this, you can create a new Rollup containing the new columns and then delete the original Rollup.
+
+## Notice
+
+* Only one Schema Change job can be running on a table at a time.
+
+* Schema Change operation does not block import and query operations.
+
+* The partition column and bucket column cannot be modified.
+
+* If there is a value column aggregated by REPLACE in the schema, the Key column is not allowed to be deleted.
+
+     If the Key column is deleted, Doris cannot determine the value of the REPLACE column.
+    
+     All non-Key columns of the Unique data model table are REPLACE aggregated.
+    
+* When adding a value column whose aggregation type is SUM or REPLACE, the default value of this column has no meaning to historical data.
+
+     Because the historical data has lost the detailed information, the default value cannot actually reflect the aggregated value.
+    
+* When modifying the column type, fields other than Type need to be completed according to the information on the original column.
+
+     If you modify the column `k1 INT SUM NULL DEFAULT "1"` to type BIGINT, you need to execute the following command:
+    
+    ```ALTER TABLE tbl1 MODIFY COLUMN `k1` BIGINT SUM NULL DEFAULT "1"; ```
+    
+   Note that, except for the new column type, all other attributes, such as the aggregation type, nullable attribute, and default value, must be filled in according to the original column's information.
+    
+* Modifying column names, aggregation types, nullable attributes, default values, and column comments is not supported.
+
+## FAQ
+    
+* The execution speed of Schema Change
+
+    At present, the execution speed of Schema Change is estimated at about 10MB/s under worst-case efficiency. To be conservative, users can set the job timeout based on this rate.
+
+* Submitting a job returns the error `Table xxx is not stable. ...`
+
+    A Schema Change job can be started only when the table data is complete and no replica balancing is in progress. If some data shard replicas of the table are incomplete, or some replicas are being balanced, the submission is rejected.
+
+    You can check whether the data shard replicas are complete with the following command:
+
+    ```ADMIN SHOW REPLICA STATUS FROM tbl WHERE STATUS != "OK";```
+    
+    If any rows are returned, there is a problem with those replicas. Such problems are usually fixed automatically by the system. You can also trigger a repair of this table first with the following command:
+    ```ADMIN REPAIR TABLE tbl1;```
+    
+    You can check if there are running balancing tasks with the following command:
+    
+    ```SHOW PROC "/cluster_balance/pending_tablets";```
+    
+    You can wait for the balancing task to complete, or temporarily disable the balancing operation with the following command:
+    
+    ```ADMIN SET FRONTEND CONFIG ("disable_balance" = "true");```
+    
+## Configurations
+
+### FE Configurations
+
+* `alter_table_timeout_second`: The default timeout for the job is 86400 seconds.
+
+### BE Configurations
+
+* `alter_tablet_worker_count`: The number of threads used to perform historical data conversion on the BE side. The default is 3. To speed up Schema Change jobs, you can increase this parameter appropriately and restart the BE, but too many conversion threads increase IO pressure and affect other operations. This thread pool is shared with Rollup jobs.
diff --git a/content/_sources/documentation/en/administrator-guide/alter-table/index.rst.txt b/content/_sources/documentation/en/administrator-guide/alter-table/index.rst.txt
new file mode 100644
index 0000000..f44960b
--- /dev/null
+++ b/content/_sources/documentation/en/administrator-guide/alter-table/index.rst.txt
@@ -0,0 +1,9 @@
+=============
+Schema Change
+=============
+
+.. toctree::
+    :maxdepth: 2
+    :glob:
+
+    *
diff --git a/content/_sources/documentation/en/administrator-guide/broker_EN.md.txt b/content/_sources/documentation/en/administrator-guide/broker_EN.md.txt
new file mode 100644
index 0000000..9756383
--- /dev/null
+++ b/content/_sources/documentation/en/administrator-guide/broker_EN.md.txt
@@ -0,0 +1,286 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Broker
+
+Broker is an optional process in the Doris cluster. It is mainly used to support Doris to read and write files or directories on remote storage, such as HDFS, BOS, and AFS.
+
+Broker provides services through an RPC service port. It is a stateless JVM process that is responsible for encapsulating POSIX-like file operations, such as open, pread, and pwrite, for reading and writing files on remote storage.
+In addition, the Broker does not record any other information, so connection information, file information, permission information, and so on for the remote storage need to be passed to the Broker process as parameters in the RPC call in order for the Broker to read and write files correctly.
+
+Broker only acts as a data channel and does not participate in any computation, so it requires little memory. Usually one or more Broker processes are deployed in a Doris system, and Brokers of the same type are grouped together under a **Broker name**.
+
+Broker's position in the Doris system architecture is as follows:
+
+```
++----+   +----+
+| FE |   | BE |
++-^--+   +--^-+
+  |         |
+  |         |
++-v---------v-+
+|   Broker    |
++------^------+
+       |
+       |
++------v------+
+|HDFS/BOS/AFS |
++-------------+
+```
+
+This document mainly introduces the parameters that Broker needs when accessing different remote storages, such as connection information,
+authorization information, and so on.
+
+## Supported Storage System
+
+Different types of Brokers support different storage systems.
+
+1. Community HDFS
+
+    * Support simple authentication access
+    * Support kerberos authentication access
+    * Support HDFS HA mode access
+
+2. Baidu HDFS / AFS (not supported by open source version)
+
+    * Support UGI simple authentication access
+
+3. Baidu Object Storage BOS (not supported by open source version)
+
+    * Support AK / SK authentication access
+
+## Function provided by Broker
+
+1. Broker Load
+    
+    The Broker Load function reads the file data on the remote storage through the Broker process and imports it into Doris. Examples are as follows:
+    
+    ```
+    LOAD LABEL example_db.label6
+    (
+        DATA INFILE("bos://my_bucket/input/file")
+        INTO TABLE `my_table`
+    )
+    WITH BROKER "broker_name"
+    (
+        "bos_endpoint" = "http://bj.bcebos.com",
+        "bos_accesskey" = "xxxxxxxxxxxxxxxxxxxxxxxxxx",
+        "bos_secret_accesskey" = "yyyyyyyyyyyyyyyyyyyy"
+    )
+    ```
+    
+    `WITH BROKER` and following Property Map are used to provide Broker's related information.
+    
+2. Export
+
+    The Export function exports the data stored in Doris to files on remote storage, in text format, through the Broker process. Examples are as follows:
+    
+    ```
+    EXPORT TABLE testTbl 
+    TO "hdfs://hdfs_host:port/a/b/c" 
+    WITH BROKER "broker_name" 
+    (
+        "username" = "xxx",
+        "password" = "yyy"
+    );
+    ```
+
+    `WITH BROKER` and following Property Map are used to provide Broker's related information.
+
+3. Create Repository
+
+    When users need to use the backup and restore function, they first need to create a "repository" with the `CREATE REPOSITORY` command; the Broker name and related information are recorded in the repository metadata.
+    Subsequent backup and restore operations back up data to this repository through the Broker, or read data from it to restore into Doris. Examples are as follows:
+    
+    ```
+    CREATE REPOSITORY `bos_repo`
+    WITH BROKER `broker_name`
+    ON LOCATION "bos://doris_backup"
+    PROPERTIES
+    (
+        "bos_endpoint" = "http://gz.bcebos.com",
+        "bos_accesskey" = "069fc2786e664e63a5f111111114ddbs22",
+        "bos_secret_accesskey" = "70999999999999de274d59eaa980a"
+    );
+    ```
+    
+   `WITH BROKER` and following Property Map are used to provide Broker's related information.
+    
+
+## Broker Information
+
+Broker information consists of two parts: the **Broker name** and the **authentication information**. The general syntax is as follows:
+
+```
+WITH BROKER "broker_name" 
+(
+    "username" = "xxx",
+    "password" = "yyy",
+    "other_prop" = "prop_value",
+    ...
+);
+```
+
+### Broker Name
+
+Usually the user needs to specify an existing Broker name through the `WITH BROKER "broker_name"` clause in the operation command.
+Broker Name is a name that the user specifies when adding a Broker process through the ALTER SYSTEM ADD BROKER command.
+A name usually corresponds to one or more broker processes. Doris selects available broker processes based on the name.
+You can use the `SHOW BROKER` command to view the Brokers that currently exist in the cluster.
+
+**Note: Broker Name is just a user-defined name and does not represent the type of Broker.**
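+
+For example, a minimal sketch (the broker name and host:port values are placeholders) of registering Broker processes under one name and listing the Brokers in the cluster:
+
+```
+ALTER SYSTEM ADD BROKER broker_name "host1:8000", "host2:8000";
+SHOW BROKER;
+```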
+
+### Authentication Information
+
+Different Broker types and different access methods require different authentication information.
+Authentication information is usually provided as key-value pairs in the Property Map after `WITH BROKER "broker_name"`.
+
+#### Community HDFS
+
+1. Simple Authentication
+
+    Simple authentication means that Hadoop configures `hadoop.security.authentication` to `simple`.
+
+    Use the system user to access HDFS, or set ```HADOOP_USER_NAME``` in the environment variables when starting the Broker process.
+    
+    ```
+    (
+        "username" = "user",
+        "password" = ""
+    );
+    ```
+    
+    Just leave the password blank.
+
+2. Kerberos Authentication
+
+    The Kerberos authentication method needs to provide the following information:
+    
+    * `hadoop.security.authentication`: Specify the authentication method as kerberos.
+    * `kerberos_principal`: Specify the principal of kerberos.
+    * `kerberos_keytab`: Specify the path to the keytab file for Kerberos. The file must be an absolute path to a file on the server where the Broker process is located, and must be accessible by the Broker process.
+    * `kerberos_keytab_content`: Specify the base64-encoded content of the Kerberos keytab file. Only one of this and the `kerberos_keytab` configuration needs to be specified.
+
+    Examples are as follows:
+    
+    ```
+    (
+        "hadoop.security.authentication" = "kerberos",
+        "kerberos_principal" = "doris@YOUR.COM",
+        "kerberos_keytab" = "/home/doris/my.keytab"
+    )
+    ```
+    ```
+    (
+        "hadoop.security.authentication" = "kerberos",
+        "kerberos_principal" = "doris@YOUR.COM",
+        "kerberos_keytab_content" = "ASDOWHDLAWIDJHWLDKSALDJSDIWALD"
+    )
+    ```
+    If Kerberos authentication is used, the [krb5.conf](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html) file is required when deploying the Broker process.
+    The krb5.conf file contains Kerberos configuration information. Normally, you should install your krb5.conf file in the directory /etc. You can override the default location by setting the environment variable KRB5_CONFIG.
+    An example of the contents of the krb5.conf file is as follows:
+    ```
+    [libdefaults]
+        default_realm = DORIS.HADOOP
+        default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
+        default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
+        dns_lookup_kdc = true
+        dns_lookup_realm = false
+    
+    [realms]
+        DORIS.HADOOP = {
+            kdc = kerberos-doris.hadoop.service:7005
+        }
+    ```
+    
+3. HDFS HA Mode
+
+    This configuration is used to access HDFS clusters deployed in HA mode.
+    
+    * `dfs.nameservices`: Specify the name of the hdfs service, custom, such as "dfs.nameservices" = "my_ha".
+    * `dfs.ha.namenodes.xxx`: Custom namenode names, separated by commas, where xxx is the custom name in `dfs.nameservices`, such as "dfs.ha.namenodes.my_ha" = "my_nn".
+    * `dfs.namenode.rpc-address.xxx.nn`: Specify the rpc address information of namenode, Where nn represents the name of the namenode configured in `dfs.ha.namenodes.xxx`, such as: "dfs.namenode.rpc-address.my_ha.my_nn" = "host:port".
+    * `dfs.client.failover.proxy.provider`: Specify the provider for the client to connect to the namenode. The default is: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.
+
+    Examples are as follows:
+    
+    ```
+    (
+        "dfs.nameservices" = "my_ha",
+        "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
+        "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
+        "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
+        "dfs.client.failover.proxy.provider" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+    )
+    ```
+    
+    The HA mode can be combined with the previous two authentication methods for cluster access. If you access HA HDFS with simple authentication:
+    
+    ```
+    (
+        "username"="user",
+        "password"="passwd",
+        "dfs.nameservices" = "my_ha",
+        "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
+        "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
+        "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
+        "dfs.client.failover.proxy.provider" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+    )
+    ```
+   The configuration for accessing the HDFS cluster can be written to the hdfs-site.xml file. When users use the Broker process to read data from the HDFS cluster, they only need to fill in the cluster file path and authentication information.
+    
+#### Baidu Object Storage BOS
+
+**(Open source version is not supported)**
+
+1. Access via AK / SK
+
+    * AK/SK: Access Key and Secret Key. You can check the user's AK / SK in Baidu Cloud Security Certification Center.
+    * Region Endpoint: Endpoint of the BOS region:
+
+        * North China-Beijing: http://bj.bcebos.com
+        * North China-Baoding: http://bd.bcebos.com
+        * South China-Guangzhou: http://gz.bcebos.com
+        * East China-Suzhou: http://sz.bcebos.com
+
+    Examples are as follows:
+
+    ```
+    (
+        "bos_endpoint" = "http://bj.bcebos.com",
+        "bos_accesskey" = "xxxxxxxxxxxxxxxxxxxxxxxxxx",
+        "bos_secret_accesskey" = "yyyyyyyyyyyyyyyyyyyyyyyyyy"
+    )
+    ```
+
+#### Baidu HDFS/AFS
+
+**(Open source version is not supported)**
+
+Baidu AFS and HDFS only support simple authentication access using UGI. Examples are as follows:
+
+```
+(
+    "username" = "user",
+    "password" = "passwd"
+);
+```
+
+User and passwd are UGI configurations for Hadoop.
\ No newline at end of file
diff --git a/content/_sources/documentation/en/administrator-guide/config/fe_config_en.md.txt b/content/_sources/documentation/en/administrator-guide/config/fe_config_en.md.txt
index 43b7d15..ab915a0 100644
--- a/content/_sources/documentation/en/administrator-guide/config/fe_config_en.md.txt
+++ b/content/_sources/documentation/en/administrator-guide/config/fe_config_en.md.txt
@@ -24,3 +24,11 @@ under the License.
   This configuration is mainly used to modify the parameter max_body_size of brpc. The default configuration is 64M. It usually occurs in multi distinct + no group by + exceeds 1t data. In particular, if you find that the query is stuck, and be appears the word "body size is too large" in log.
 
   Because this is a brpc configuration, users can also directly modify this parameter on-the-fly by visiting ```http://host:brpc_port/flags```
+
+## max_running_txn_num_per_db
+
+   This configuration is mainly used to control the number of concurrent load jobs in the same database. The default value is 100. When the number of concurrent load jobs exceeds the configured value, synchronously executed loads (such as stream load) will fail, while asynchronously executed loads (such as broker load) will stay in the PENDING state.
+
+   It is generally not recommended to change this property. If the current load concurrency exceeds this value, first check whether individual load jobs are too slow, or whether there are too many small files that could be merged before loading.
+
+   An error message such as `current running txns on db xxx is xx, larger than limit xx` is related to this property.
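+
+   As a sketch, assuming this config is runtime-mutable in your version (otherwise change fe.conf and restart the FE), the limit could be raised on the Master FE like this:
+
+   ```ADMIN SET FRONTEND CONFIG ("max_running_txn_num_per_db" = "200");```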
diff --git a/content/_sources/documentation/en/administrator-guide/dynamic-partition_EN.md.txt b/content/_sources/documentation/en/administrator-guide/dynamic-partition_EN.md.txt
new file mode 100644
index 0000000..3dc9ef1
--- /dev/null
+++ b/content/_sources/documentation/en/administrator-guide/dynamic-partition_EN.md.txt
@@ -0,0 +1,185 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Dynamic Partition
+
+Dynamic partition is a new feature introduced in Doris version 0.12. It is designed to manage a partition's Time-to-Live (TTL), reducing the burden on users.
+
+The original design, implementation and effect can be found in [ISSUE 2262](https://github.com/apache/incubator-doris/issues/2262).
+
+Currently, the function of adding partitions dynamically is implemented, and the next version will support removing partitions dynamically.
+
+## Noun Interpretation
+
+* FE: Frontend, the front-end node of Doris. Responsible for metadata management and request access.
+* BE: Backend, Doris's back-end node. Responsible for query execution and data storage.
+
+## Principle
+
+In some scenarios, the user partitions the table by day and performs routine tasks regularly every day. In this case, the user needs to manage the partitions manually; otherwise data loads may fail because the user forgot to create the needed partition, which brings additional maintenance costs.
+
+In this design, FE starts a background thread. Whether the thread is started, and its scheduling frequency, are determined by the parameters `dynamic_partition_enable` and `dynamic_partition_check_interval_seconds` in `fe.conf`.
+
+When creating an OLAP table, the `dynamic_partition` properties can be assigned. FE first parses the `dynamic_partition` properties and checks the validity of the input parameters, then persists the properties to FE metadata and registers the table in the dynamic partition list. A daemon thread periodically scans the dynamic partition list according to the configuration parameters,
+reads the dynamic partition properties of each table, and performs the task of adding partitions. The scheduling information of each run is kept in FE memory, and you can check whether the scheduling task succeeded through `SHOW DYNAMIC PARTITION TABLES`.
+
+## Usage
+
+### Establishment of tables
+
+When creating a table, you can specify the attribute `dynamic_partition` in `PROPERTIES`, which means that the table is a dynamic partition table.
+    
+Examples:
+
+```
+CREATE TABLE example_db.dynamic_partition
+(
+k1 DATE,
+k2 INT,
+k3 SMALLINT,
+v1 VARCHAR(2048),
+v2 DATETIME DEFAULT "2014-02-04 15:36:00"
+)
+ENGINE=olap
+DUPLICATE KEY(k1, k2, k3)
+PARTITION BY RANGE (k1)
+(
+PARTITION p1 VALUES LESS THAN ("2014-01-01"),
+PARTITION p2 VALUES LESS THAN ("2014-06-01"),
+PARTITION p3 VALUES LESS THAN ("2014-12-01")
+)
+DISTRIBUTED BY HASH(k2) BUCKETS 32
+PROPERTIES(
+"storage_medium" = "SSD",
+"dynamic_partition.enable" = "true",
+"dynamic_partition.time_unit" = "DAY",
+"dynamic_partition.end" = "3",
+"dynamic_partition.prefix" = "p",
+"dynamic_partition.buckets" = "32"
+ );
+```
+This creates a dynamic partition table with the dynamic partition feature enabled. Taking today as 2020-01-08 for example, at each scheduling run, four partitions covering today and the next 3 days are created in advance (the task is skipped for partitions that already exist). Based on the specified prefix, the partitions are named `p20200108`, `p20200109`, `p20200110` and `p20200111`, each with 32 buckets, and their ranges are as follows:
+```
+[types: [DATE]; keys: [2020-01-08]; ‥types: [DATE]; keys: [2020-01-09]; )
+[types: [DATE]; keys: [2020-01-09]; ‥types: [DATE]; keys: [2020-01-10]; )
+[types: [DATE]; keys: [2020-01-10]; ‥types: [DATE]; keys: [2020-01-11]; )
+[types: [DATE]; keys: [2020-01-11]; ‥types: [DATE]; keys: [2020-01-12]; )
+```
+    
+### Enable Dynamic Partition Feature
+
+1. First of all, `dynamic_partition_enable=true` needs to be set in fe.conf, which can be specified by modifying the configuration file when the cluster starts up, or dynamically modified by HTTP interface at run time
+
+2. If you need to add dynamic partition properties to a table created before version 0.12, you need to modify the properties of the table with the following command:
+
+```
+ALTER TABLE dynamic_partition set ("dynamic_partition.enable" = "true", "dynamic_partition.time_unit" = "DAY", "dynamic_partition.end" = "3", "dynamic_partition.prefix" = "p", "dynamic_partition.buckets" = "32");
+```
+
+### Disable Dynamic Partition Feature
+
+If you need to stop dynamic partitioning for all dynamic partition tables in the cluster, set `dynamic_partition_enable=false` in fe.conf.
+
+If you need to stop dynamic partitioning for a specific table, you can modify the properties of the table with the following command:
+
+```
+ALTER TABLE dynamic_partition set ("dynamic_partition.enable" = "false")
+```
+
+### Modify Dynamic Partition Properties
+
+You can modify the dynamic partition properties of a table with the following command:
+
+```
+ALTER TABLE dynamic_partition set("key" = "value")
+```
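+
+For example, a minimal sketch (the property value is illustrative) that extends the number of days of partitions created in advance from 3 to 7 for the table above:
+
+```
+ALTER TABLE dynamic_partition set ("dynamic_partition.end" = "7");
+```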
+
+### Check Dynamic Partition Table Scheduling Status
+
+You can further view the scheduling of dynamic partitioned tables by using the following command:
+
+```    
+SHOW DYNAMIC PARTITION TABLES;
+
++-------------------+--------+----------+------+--------+---------+---------------------+---------------------+--------+------+
+| TableName         | Enable | TimeUnit | End  | Prefix | Buckets | LastUpdateTime      | LastSchedulerTime   | State  | Msg  |
++-------------------+--------+----------+------+--------+---------+---------------------+---------------------+--------+------+
+| dynamic_partition | true   | DAY      | 3    | p      | 32      | 2020-01-08 20:19:09 | 2020-01-08 20:19:34 | NORMAL | N/A  |
++-------------------+--------+----------+------+--------+---------+---------------------+---------------------+--------+------+
+1 row in set (0.00 sec)
+
+```
+    
+* LastUpdateTime: The last time of modifying dynamic partition properties 
+* LastSchedulerTime:   The last time of performing dynamic partition scheduling
+* State:    The state of the last execution of dynamic partition scheduling
+* Msg:  Error message for the last time dynamic partition scheduling was performed 
+
+## Advanced Operation
+
+### FE Configuration Item
+
+* dynamic\_partition\_enable
+
+    Whether to enable Doris's dynamic partition feature. The default value is false, which is off. This parameter only affects the partitioning operation of dynamic partition tables, not normal tables.
+    
+* dynamic\_partition\_check\_interval\_seconds
+
+    The execution frequency of the dynamic partition thread, by default 3600 seconds (1 hour), which means it is scheduled once every hour.
+    
+### HTTP Restful API
+
+Doris provides an HTTP Restful API for modifying dynamic partition configuration parameters at run time.
+
+The API is implemented in FE, and users can access it via `fe_host:fe_http_port`. The operation requires admin privilege.
+
+1. Set dynamic_partition_enable to true or false
+    
+    * Set to true
+    
+        ```
+        GET /api/_set_config?dynamic_partition_enable=true
+        
+        For example: curl --location-trusted -u username:password -XGET http://fe_host:fe_http_port/api/_set_config?dynamic_partition_enable=true
+        
+        Return Code:200
+        ```
+        
+    * Set to false
+    
+        ```
+        GET /api/_set_config?dynamic_partition_enable=false
+        
+        For example: curl --location-trusted -u username:password -XGET http://fe_host:fe_http_port/api/_set_config?dynamic_partition_enable=false
+        
+        Return Code:200
+        ```
+    
+2.  Set the scheduling frequency for dynamic partition 
+    
+    * Set schedule frequency to 12 hours.
+        
+        ```
+        GET /api/_set_config?dynamic_partition_check_interval_seconds=43200
+        
+        For example: curl --location-trusted -u username:password -XGET http://fe_host:fe_http_port/api/_set_config?dynamic_partition_check_interval_seconds=43200
+        
+        Return Code:200
+        ```
diff --git a/content/_sources/documentation/en/administrator-guide/http-actions/compaction-action_EN.md.txt b/content/_sources/documentation/en/administrator-guide/http-actions/compaction-action_EN.md.txt
new file mode 100644
index 0000000..79ce029
--- /dev/null
+++ b/content/_sources/documentation/en/administrator-guide/http-actions/compaction-action_EN.md.txt
@@ -0,0 +1,78 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Compaction Action
+
+This API is used to view the overall compaction status of a BE node or the compaction status of a specified tablet. It can also be used to manually trigger Compaction.
+
+## View Compaction status
+
+### The overall compaction status of the node
+
+(TODO)
+
+### Specify the compaction status of the tablet
+
+```
+curl -X GET http://be_host:webserver_port/api/compaction/show?tablet_id=xxxx\&schema_hash=yyyy
+```
+
+If the tablet does not exist, an error in JSON format is returned:
+
+```
+{
+    "status": "Fail",
+    "msg": "Tablet not found"
+}
+```
+
+If the tablet exists, the result is returned in JSON format:
+
+```
+{
+    "cumulative point": 50,
+    "last cumulative failure time": "2019-12-16 18:13:43.224",
+    "last base failure time": "2019-12-16 18:13:23.320",
+    "last cumu success time": "2019-12-16 18:12:15.110",
+    "last base success time": "2019-12-16 18:11:50.780",
+    "rowsets": [
+        "[0-48] 10 DATA OVERLAPPING",
+        "[49-49] 2 DATA OVERLAPPING",
+        "[50-50] 0 DELETE NONOVERLAPPING",
+        "[51-51] 5 DATA OVERLAPPING"
+    ]
+}
+```
+
+Explanation of results:
+
+* cumulative point: The version boundary between base and cumulative compaction. Versions earlier than (and excluding) this point are handled by base compaction; versions at or after this point are handled by cumulative compaction.
+* last cumulative failure time: The time when the last cumulative compaction failed. After 10 minutes by default, cumulative compaction is attempted on this tablet again.
+* last base failure time: The time when the last base compaction failed. After 10 minutes by default, base compaction is attempted on this tablet again.
+* rowsets: The current rowset collection of this tablet. [0-48] means a rowset covering versions 0 to 48. The second number is the number of segments in the rowset. `DELETE` indicates a delete version. `OVERLAPPING` and `NONOVERLAPPING` indicate whether data between segments overlaps.
+
+### Examples
+
+```
+curl -X GET http://192.168.10.24:8040/api/compaction/show?tablet_id=10015\&schema_hash=1294206575
+```
+
+## Manually trigger Compaction
+
+(TODO)
diff --git a/content/_sources/documentation/en/administrator-guide/index.rst.txt b/content/_sources/documentation/en/administrator-guide/index.rst.txt
index a1237db..bbb7493 100644
--- a/content/_sources/documentation/en/administrator-guide/index.rst.txt
+++ b/content/_sources/documentation/en/administrator-guide/index.rst.txt
@@ -6,6 +6,7 @@ Administrator Guide
     :hidden:
 
     load-data/index
+    alter-table/index 
     http-actions/index
     operation/index
     config/index
diff --git a/content/_sources/documentation/en/administrator-guide/load-data/insert-into-manual_EN.md.txt b/content/_sources/documentation/en/administrator-guide/load-data/insert-into-manual_EN.md.txt
index 0df1153..71abc96 100644
--- a/content/_sources/documentation/en/administrator-guide/load-data/insert-into-manual_EN.md.txt
+++ b/content/_sources/documentation/en/administrator-guide/load-data/insert-into-manual_EN.md.txt
@@ -47,6 +47,21 @@ INSERT INTO tbl2 WITH LABEL label1 SELECT * FROM tbl3;
 INSERT INTO tbl1 VALUES ("qweasdzxcqweasdzxc"), ("a");
 ```
 
+**Notice**
+
+When using CTEs (Common Table Expressions) as the query part of an insert operation, the `WITH LABEL` or the column list part must be specified.
+For example:
+
+```
+INSERT INTO tbl1 WITH LABEL label1
+WITH cte1 AS (SELECT * FROM tbl1), cte2 AS (SELECT * FROM tbl2)
+SELECT k1 FROM cte1 JOIN cte2 WHERE cte1.k1 = 1;
+
+INSERT INTO tbl1 (k1)
+WITH cte1 AS (SELECT * FROM tbl1), cte2 AS (SELECT * FROM tbl2)
+SELECT k1 FROM cte1 JOIN cte2 WHERE cte1.k1 = 1;
+```
+
 The following is a brief introduction to the parameters used in creating import statements:
 
 + partition\_info
@@ -79,47 +94,100 @@ The following is a brief introduction to the parameters used in creating import
 
 ### Load results
 
-Insert Into itself is an SQL command, so the return behavior is the same as the return behavior of the SQL command.
+Insert Into itself is a SQL command, and the return result is divided into the following types according to the different execution results:
 
-If the load fails, the error will be returned. Examples are as follows:
+1. Result set is empty
 
+    If the result set of the SELECT statement corresponding to the insert is empty, the following is returned:
 
-```
-ERROR 1064 (HY000): All partitions have no load data. url: http://ip:port/api/_load_error_log?File=_shard_14/error_log_insert_stmt_f435264d82f342e4-a33764f5f0dfbf00_f4364d82f342e4_a764f344e4_a764f5f5f0df0df0dbf00
-```
+    ```
+    mysql> insert into tbl1 select * from empty_tbl;
+    Query OK, 0 rows affected (0.02 sec)
+    ```
 
-Where URL can be used to query the wrong data, see the following **view error line** summary.
+    `Query OK` indicates successful execution. `0 rows affected` means that no data was loaded.
 
-If the load succeeds, the success will be returned. Examples are as follows:
+2. The result set is not empty
 
-```
-Query OK, 100 row affected, 0 warning (0.22 sec)
-```
+    If the result set is not empty, the returned results fall into the following cases:
 
-If the user specifies Label, the label will be returned as well.
+    1. Insert is successful and data is visible:
 
-```
-Query OK, 100 row affected, 0 warning (0.22 sec)
-{label':'user_specified_label'}
-```
+        ```
+        mysql> insert into tbl1 select * from tbl2;
+        Query OK, 4 rows affected (0.38 sec)
+        {'label': 'insert_8510c568-9eda-4173-9e36-6adc7d35291c', 'status': 'visible', 'txnId': '4005'}
+        
+        mysql> insert into tbl1 with label my_label1 select * from tbl2;
+        Query OK, 4 rows affected (0.38 sec)
+        {'label': 'my_label1', 'status': 'visible', 'txnId': '4005'}
+        
+        mysql> insert into tbl1 select * from tbl2;
+        Query OK, 2 rows affected, 2 warnings (0.31 sec)
+        {'label': 'insert_f0747f0e-7a35-46e2-affa-13a235f4020d', 'status': 'visible', 'txnId': '4005'}
+        
+        mysql> insert into tbl1 select * from tbl2;
+        Query OK, 2 rows affected, 2 warnings (0.31 sec)
+        {'label': 'insert_f0747f0e-7a35-46e2-affa-13a235f4020d', 'status': 'committed', 'txnId': '4005'}
+        ```
 
-If the load may be partially successful, the Label field is appended. Examples are as follows:
+        `Query OK` indicates successful execution. `4 rows affected` means that a total of 4 rows of data were loaded. `2 warnings` indicates the number of rows that were filtered.
 
-```
-Query OK, 100 row affected, 1 warning (0.23 sec)
-{label':'7d66c457-658b-4a3e-bdcf-8beee872ef2c'}
-```
+        A JSON string is also returned:
 
-```
-Query OK, 100 row affected, 1 warning (0.23 sec)
-{label':'user_specified_label'}
-```
+        ```
+        {'label': 'my_label1', 'status': 'visible', 'txnId': '4005'}
+        {'label': 'insert_f0747f0e-7a35-46e2-affa-13a235f4020d', 'status': 'committed', 'txnId': '4005'}
+        {'label': 'my_label1', 'status': 'visible', 'txnId': '4005', 'err': 'some other error'}
+        ```
+
+        `label` is a user-specified label or an automatically generated label. Label is the ID of this Insert Into load job. Each load job has a label that is unique within a single database.
+
+        `status` indicates whether the loaded data is visible. If visible, `visible` is shown; if not, `committed` is shown.
+
+        `txnId` is the id of the load transaction corresponding to this insert.
+
+        The `err` field displays some other unexpected errors.
+
+        When the user needs to view the filtered rows, the following statement can be used:
+
+        ```
+        show load where label = "xxx";
+        ```
+
+        The URL in the returned result can be used to query the wrong data. For details, see the following **View Error Lines** Summary.
+    
+        **"Data is not visible" is a temporary status; this batch of data will eventually become visible.**
+
+        You can view the visible status of this batch of data with the following statement:
+
+        ```
+        show transaction where id = 4005;
+        ```
+
+        If the `TransactionStatus` column in the returned result is `visible`, the data is visible.
+
+    2. Insert fails
+
+        Execution failure indicates that no data was successfully loaded. The following is returned:
+
+        ```
+        mysql> insert into tbl1 select * from tbl2 where k1 = "a";
+        ERROR 1064 (HY000): all partitions have no load data. Url: http://10.74.167.16:8042/api/_load_error_log?file=__shard_2/error_log_insert_stmt_ba8bb9e158e4879-ae8de8507c0bf8a2_ba8bb9e158e4879_ae8de850e8de850
+        ```
+
+        `ERROR 1064 (HY000): all partitions have no load data` shows the reason for the failure. The url that follows can be used to query the erroneous data. For details, see the following **View Error Lines** summary.
 
-Where affected represents the number of rows loaded. Warning denotes the number of rows that failed. Users need to view the wrong line through `SHOW LOAD WHERE LABEL='xxx';` command, and get url to view the errors.
+**In summary, the correct processing logic for the results returned by the insert operation should be as follows (see also the sketch after this list):**
 
-If there is no data, it will return success, and both affected and warning are 0.
+1. If the returned result is `ERROR 1064 (HY000)`, it means that the import failed.
+2. If the returned result is `Query OK`, it means the execution was successful.
 
-Label is the identifier of the Insert Into import job. Each import job has a unique Label inside a single database. Insert Into's Label is generated by the system. Users can use the Label to asynchronously obtain the import status by querying the import command.
+    1. If `rows affected` is 0, the result set is empty and no data is loaded.
+    2. If `rows affected` is greater than 0:
+        1. If `status` is `committed`, the data is not yet visible. You need to check the status through the `show transaction` statement until it becomes `visible`.
+        2. If `status` is `visible`, the data is loaded successfully.
+    3. If `warnings` is greater than 0, it means that some data is filtered. You can get the url through the `show load` statement to see the filtered rows.
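+
+For example, a sketch of the follow-up checks (the label and transaction id are illustrative) after receiving a `Query OK` result with warnings or a `committed` status:
+
+```
+show load where label = "my_label1";
+show transaction where id = 4005;
+```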
 
 ## Relevant System Configuration
 
diff --git a/content/_sources/documentation/en/administrator-guide/load-data/load-manual_EN.md.txt b/content/_sources/documentation/en/administrator-guide/load-data/load-manual_EN.md.txt
index fbbe89f..f571bfb 100644
--- a/content/_sources/documentation/en/administrator-guide/load-data/load-manual_EN.md.txt
+++ b/content/_sources/documentation/en/administrator-guide/load-data/load-manual_EN.md.txt
@@ -36,7 +36,7 @@ Doris supports multiple imports. It is recommended to read this document in full
 
 To adapt to different data import requirements, Doris system provides five different import methods. Each import mode supports different data sources and has different usage modes (asynchronous, synchronous).
 
-All import methods support CSV data format. Broker load also supports parquet data format.
+All import methods support the CSV data format. Broker load also supports the Parquet and ORC data formats.
 
 For instructions on each import mode, please refer to the operation manual for a single import mode.
 
diff --git a/content/_sources/documentation/en/administrator-guide/load-data/stream-load-manual_EN.md.txt b/content/_sources/documentation/en/administrator-guide/load-data/stream-load-manual_EN.md.txt
index 37238df..0020d52 100644
--- a/content/_sources/documentation/en/administrator-guide/load-data/stream-load-manual_EN.md.txt
+++ b/content/_sources/documentation/en/administrator-guide/load-data/stream-load-manual_EN.md.txt
@@ -66,8 +66,7 @@ Users can also operate through other HTTP clients.
 ```
 curl --location-trusted -u user:passwd [-H ""...] -T data.file -XPUT http://fe_host:http_port/api/{db}/{table}/_stream_load
 
-The following attributes are supported in Header:
-label, column_separator, columns, where, max_filter_ratio, partitions
+The properties supported in the header are described in "Load Parameters" below
 The format is: - H "key1: value1"
 ```
 
@@ -84,7 +83,7 @@ The detailed syntax for creating imports helps to execute ``HELP STREAM LOAD`` v
 
 	Stream load uses the HTTP protocol to create the imported protocol and signs it through the Basic Access authentication. The Doris system verifies user identity and import permissions based on signatures.
 
-#### Import Task Parameters
+#### Load Parameters
 
 Stream load uses HTTP protocol, so all parameters related to import tasks are set in the header. The significance of some parameters of the import task parameters of Stream load is mainly introduced below.
 
@@ -172,7 +171,7 @@ The following main explanations are given for the Stream load import result para
 
 	"Publish Timeout": This state also indicates that the import has been completed, except that the data may be delayed and visible without retrying.
 
-	"Label Already Exists":Label 重复,需更换 Label。
+	"Label Already Exists": The Label is duplicated; the Label needs to be replaced.
 
 	"Fail": Import failed.
 	
diff --git a/content/_sources/documentation/en/administrator-guide/operation/metadata-operation_EN.md.txt b/content/_sources/documentation/en/administrator-guide/operation/metadata-operation_EN.md.txt
index e3fc346..9b9a72d 100644
--- a/content/_sources/documentation/en/administrator-guide/operation/metadata-operation_EN.md.txt
+++ b/content/_sources/documentation/en/administrator-guide/operation/metadata-operation_EN.md.txt
@@ -258,7 +258,7 @@ In some extreme cases, the image file on the disk may be damaged, but the metada
 
 2. Execute the following command to dump metadata from the Master FE memory: (hereafter called image_mem)
 ```
-curl -u $root_user:$password http://$master_hostname:8410/dump
+curl -u $root_user:$password http://$master_hostname:8030/dump
 ```
 3. Replace the image file in the `meta_dir/image` directory on the OBSERVER FE node with the image_mem file, restart the OBSERVER FE node, and verify the integrity and correctness of the image_mem file. You can check whether the DB and Table metadata are normal on the FE Web page, whether there is an exception in `fe.log`, whether it is in a normal replayed jour.
 
diff --git a/content/_sources/documentation/en/administrator-guide/privilege_EN.md.txt b/content/_sources/documentation/en/administrator-guide/privilege_EN.md.txt
index 4390428..157aec6 100644
--- a/content/_sources/documentation/en/administrator-guide/privilege_EN.md.txt
+++ b/content/_sources/documentation/en/administrator-guide/privilege_EN.md.txt
@@ -53,7 +53,7 @@ Doris's new privilege management system refers to Mysql's privilege management m
 6. Delete Roles: DROP ROLE
 7. View current user privileges: SHOW GRANTS
 8. View all user privilegesSHOW ALL GRANTS;
-9. View the created roles: SHOW ROELS
+9. View the created roles: SHOW ROLES
 10. View user attributes: SHOW PROPERTY
 
 For detailed help with the above commands, you can use help + command to get help after connecting Doris through the MySQL client. For example `HELP CREATE USER`.
@@ -188,6 +188,14 @@ ADMIN\_PRIV and GRANT\_PRIV have the authority of **"grant authority"** at the s
 
 8. Having GRANT\_PRIV at GLOBAL level is actually equivalent to having ADMIN\_PRIV, because GRANT\_PRIV at this level has the right to grant arbitrary permissions, please use it carefully.
 
+9. `current_user()` and `user()`
+
+    Users can view `current_user` and `user` by executing `SELECT current_user();` and `SELECT user();` respectively. `current_user` indicates the identity under which the current user passed the authentication system, while `user` is the actual `user_identity` of the current connection.
+
+    For example, suppose the user `user1@'192.%'` is created, and then a user named user1 logs into the system from 192.168.10.1. In this case, `current_user` is `user1@'192.%'`, and `user` is `user1@'192.168.10.1'`.
+
+    All privileges are granted to a `current_user`, and the real user has all the privileges of the corresponding `current_user`.
+
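    As a sketch of the example above (the returned identities are illustrative):

    ```
    -- user1@'192.%' was created; a user named user1 then logs in from 192.168.10.1
    SELECT current_user();   -- 'user1'@'192.%'
    SELECT user();           -- 'user1'@'192.168.10.1'
    ```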
 ## Best Practices
 
 Here are some usage scenarios of Doris privilege system.
@@ -202,6 +210,8 @@ Here are some usage scenarios of Doris privilege system.
 
 	There are multiple businesses in a cluster, and each business may use one or more databases. Each business needs to manage its own users. In this scenario, the administrator can create a user with GRANT privileges at the DATABASE level for each database. That user can then only grant privileges on the specified database to other users.
 
+3. Blacklist
 
+    Doris itself does not support a blacklist, only a whitelist, but we can simulate a blacklist in some way. Suppose a user named `user1@'192.%'` is created first, which allows users from `192.*` to log in. Now, if you want to forbid logins from `192.168.10.1`, you can create another user `user1@'192.168.10.1'` and set a new password. Since `192.168.10.1` has a higher priority than `192.%`, the user can no longer log in from `192.168.10.1` with the old password.
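    A minimal sketch of this approach (the user name and passwords are placeholders):

    ```
    -- allow logins from the 192.* segment
    CREATE USER 'user1'@'192.%' IDENTIFIED BY 'password_a';
    -- "blacklist" 192.168.10.1 by overriding it with a different password
    CREATE USER 'user1'@'192.168.10.1' IDENTIFIED BY 'password_b';
    ```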
 
 
diff --git a/content/_sources/documentation/en/administrator-guide/variables_EN.md.txt b/content/_sources/documentation/en/administrator-guide/variables_EN.md.txt
index 12dddfa..4be7a7c 100644
--- a/content/_sources/documentation/en/administrator-guide/variables_EN.md.txt
+++ b/content/_sources/documentation/en/administrator-guide/variables_EN.md.txt
@@ -254,7 +254,7 @@ SET forward_to_master = concat('tr', 'u', 'e');
     
     A query plan typically produces a set of scan ranges, the range of data that needs to be scanned. These data are distributed across multiple BE nodes. A BE node will have one or more scan ranges. By default, a set of scan ranges for each BE node is processed by only one execution instance. When the machine resources are abundant, you can increase the variable and let more execution instances process a set of scan ranges at the same time, thus improving query efficiency.
     
-    Modifying this parameter is only helpful for improving the efficiency of the scan node. Larger values ​​may consume more machine resources such as CPU, memory, and disk IO.
+    The number of scan instances determines the number of instances of upper-layer execution nodes, such as aggregation nodes and join nodes, so it effectively increases the concurrency of the entire query plan execution. Modifying this parameter helps improve the efficiency of large queries, but larger values will consume more machine resources, such as CPU, memory, and disk IO.
     
 * `query_cache_size`
 
@@ -307,3 +307,11 @@ SET forward_to_master = concat('tr', 'u', 'e');
 * `wait_timeout`
 
     Used to set the idle timeout of a connection. When an idle connection does not interact with Doris for this length of time, Doris will actively disconnect it. The default is 8 hours, in seconds.
+
+* `default_rowset_type`
+
+    Used to set the default storage format of the Backend storage engine. Valid options: alpha/beta
+
+* `use_v2_rollup`
+
+    Used to control whether SQL queries use the segment v2 rollup index to read data. This variable is only used for validation when upgrading to the segment v2 feature; otherwise, it is not recommended.
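    As a sketch, both are set like the other variables described in this document; the values below are illustrative, and whether GLOBAL scope is needed follows the usual variable rules:

    ```
    SET GLOBAL default_rowset_type = "beta";
    SET use_v2_rollup = true;
    ```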
diff --git a/content/_sources/documentation/en/developer-guide/format-code.md.txt b/content/_sources/documentation/en/developer-guide/format-code.md.txt
new file mode 100644
index 0000000..be5faa3
--- /dev/null
+++ b/content/_sources/documentation/en/developer-guide/format-code.md.txt
@@ -0,0 +1,78 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Format Code
+To automatically format the code, clang-format is a good choice.
+
+## Code Style
+Doris's code style is based on Google's, with a few changes. The customized .clang-format
+file is in the root dir of Doris.
+Currently, the .clang-format file only works with clang-format-8.0.1 or later.
+
+## Preparing
+You should install clang-format, or you can use the clang-format plugins provided by IDEs or editors.
+
+### Install clang-format
+Ubuntu: `apt-get install clang-format` 
+
+The current release is 10.0; you can specify an older version, e.g.
+ 
+ `apt-get install clang-format-9`
+
+Centos 7: 
+
+The version of clang-format installed by yum is too old. Compiling clang from source
+is recommended.
+
+### Clang-format plugins
+The CLion IDE supports the plugin "ClangFormat"; you can search for it in `File->Settings->Plugins`
+ and download it.
+However, its version does not match clang-format. Judging from the supported options,
+the version is lower than clang-format-9.0.
+
+## Usage
+
+### CMD
+`clang-format --style=file -i $File$` 
+
+When using `-style=file`, clang-format for each input file will try to find the 
+.clang-format file located in the closest parent directory of the input file. 
+When the standard input is used, the search is started from the current directory.
+
+Note: filter out the files that should not be formatted when batch clang-formatting
+ files.
+ 
+ An example of how to filter \*.h/\*.cpp files and exclude some directories:
+ 
+`find . -type f -not \( -wholename './env/*' \) -regextype posix-egrep -regex
+ ".*\.(cpp|h)" | xargs clang-format -i -style=file`
+
+### Using clang-format in IDEs or Editors
+#### Clion
+If using the plugin 'ClangFormat' in CLion, choose `Reformat Code` or press the keyboard
+shortcut.
+#### VS Code
+VS Code needs the extension 'Clang-Format' installed, with the executable path of
+clang-format specified in settings.
+
+```
+"clang-format.executable":  "$clang-format path$",
+"clang-format.style": "file"
+```
+Then, choose `Format Document`.
\ No newline at end of file
diff --git a/content/_sources/documentation/en/extending-doris/user-defined-function_EN.md.txt b/content/_sources/documentation/en/extending-doris/user-defined-function_EN.md.txt
index e60b8f5..82f8fe8 100644
--- a/content/_sources/documentation/en/extending-doris/user-defined-function_EN.md.txt
+++ b/content/_sources/documentation/en/extending-doris/user-defined-function_EN.md.txt
@@ -75,7 +75,7 @@ Executing `sh build.sh` in the Doris root directory generates the corresponding
 
 ### Edit CMakeLists.txt
 
-Based on the `headers | libs` generated in the previous step, users can introduce the dependency using tools such as `CMakeLists`; in `CMakeLists`, dynamic libraries can be added by adding `-I|L` to `CMAKE_CXX_FLAGS`, respectively. For example, in `be/src/udf_samples/CMakeLists.txt`, a `udf sample` dynamic library is added using `add_library` (udfsample SHARED udf_sample.cpp). You need to write down all the source files involved later (no header files included).
+Based on the `headers | libs` generated in the previous step, users can introduce the dependency using tools such as `CMakeLists`; in `CMakeLists`, include and library paths can be added by appending `-I|-L` to `CMAKE_CXX_FLAGS`. For example, in `be/src/udf_samples/CMakeLists.txt`, a `udf_sample` dynamic library is added using `add_library(udfsample SHARED udf_sample.cpp)` and `target_link_libraries(udfsample -static-libstdc++ -static-libgcc)`. You need to write down all the source files  [...]
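A minimal sketch of the corresponding CMake fragment (names follow the `udf_sample` example above):

```
# build the UDF as a shared library and statically link the C++ runtime
add_library(udfsample SHARED udf_sample.cpp)
target_link_libraries(udfsample -static-libstdc++ -static-libgcc)
```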
 
 ### Execute compilation
 
diff --git a/content/_sources/documentation/en/getting-started/basic-usage_EN.md.txt b/content/_sources/documentation/en/getting-started/basic-usage_EN.md.txt
index 59c0c0d..dbabc6b 100644
--- a/content/_sources/documentation/en/getting-started/basic-usage_EN.md.txt
+++ b/content/_sources/documentation/en/getting-started/basic-usage_EN.md.txt
@@ -229,6 +229,7 @@ MySQL> DESC table2;
 > 3. Data import can import the specified Partition. See `HELP LOAD'.
 > 4. Schema of table can be dynamically modified.
 > 5. Rollup can be added to Table to improve query performance. This section can be referred to the description of Rollup in Advanced Usage Guide.
+> 6. The default value of the Null property for a column is true, which may result in poor scan performance.
 
 ### 2.4 Import data
 
@@ -309,7 +310,7 @@ Broker imports are asynchronous commands. Successful execution of the above comm
 
 In the return result, FINISHED in the `State'field indicates that the import was successful.
 
-关于 `SHOW LOAD` 的更多说明,可以参阅 `HELP SHOW LOAD;`
+For more instructions on `SHOW LOAD`, see `HELP SHOW LOAD;`
 
 Asynchronous import tasks can be cancelled before the end:
 
@@ -371,4 +372,4 @@ MySQL> SELECT SUM(pv) FROM table2 WHERE siteid IN (SELECT siteid FROM table1 WHE
 |         8 |
 +-----------+
 1 row in set (0.13 sec)
-```
\ No newline at end of file
+```
diff --git a/content/_sources/documentation/en/getting-started/best-practice_EN.md.txt b/content/_sources/documentation/en/getting-started/best-practice_EN.md.txt
index 0ddc251..7001bce 100644
--- a/content/_sources/documentation/en/getting-started/best-practice_EN.md.txt
+++ b/content/_sources/documentation/en/getting-started/best-practice_EN.md.txt
@@ -26,7 +26,7 @@ under the License.
 
 Doris data model is currently divided into three categories: AGGREGATE KEY, UNIQUE KEY, DUPLICATE KEY. Data in all three models are sorted by KEY.
 
-1. AGGREGATE KEY
+1.1.1. AGGREGATE KEY
 
 When AGGREGATE KEY is the same, old and new records are aggregated. The aggregation functions currently supported are SUM, MIN, MAX, REPLACE.
 
@@ -44,7 +44,7 @@ AGGREGATE KEY(siteid, city, username)
 DISTRIBUTED BY HASH(siteid) BUCKETS 10;
 ```
 
-2. KEY UNIQUE
+1.1.2. UNIQUE KEY
 
 When UNIQUE KEY is the same, the new record overwrites the old record. At present, UNIQUE KEY implements the same REPLACE aggregation method as AGGREGATE KEY, and they are essentially the same. Suitable for analytical business with update requirements.
 
@@ -60,7 +60,7 @@ KEY (orderid) UNIT
 DISTRIBUTED BY HASH(orderid) BUCKETS 10;
 ```
 
-3. DUPLICATE KEY
+1.1.3. DUPLICATE KEY
 
 Only sort columns are specified, and the same rows are not merged. It is suitable for the analysis business where data need not be aggregated in advance.
 
@@ -89,11 +89,11 @@ In order to adapt to the front-end business, business side often does not distin
 
 In the process of using Star Schema, users are advised to use Star Schema to distinguish dimension tables from indicator tables as much as possible. Frequently updated dimension tables can also be placed in MySQL external tables. If there are only a few updates, they can be placed directly in Doris. When storing dimension tables in Doris, more copies of dimension tables can be set up to improve Join's performance.
 
-### 1.4 Partitions and Barrels
+### 1.3 Partitions and Buckets
 
 Doris supports two-level partitioned storage. The first layer is RANGE partition and the second layer is HASH bucket.
 
-1. RANGE分区(partition)
+1.3.1. RANGE partition
 
 The RANGE partition is used to divide data into different intervals, which can be logically understood as dividing the original table into multiple sub-tables. In business, most users will choose to partition on time, which has the following advantages:
 
@@ -101,14 +101,14 @@ The RANGE partition is used to divide data into different intervals, which can b
 * Availability of Doris Hierarchical Storage (SSD + SATA)
 * Delete data by partition more quickly
 
-2. HASH分桶(bucket)
+1.3.2. HASH bucket
 
 The data is divided into different buckets according to the hash value.
 
 * It is suggested that columns with large differentiation should be used as buckets to avoid data skew.
 * In order to facilitate data recovery, it is suggested that the size of a single bucket should not be too large and should be kept within 10GB. Therefore, the number of buckets should be considered reasonably when building tables or increasing partitions, among which different partitions can specify different buckets.
 
-### 1.5 Sparse Index and Bloom Filter
+### 1.4 Sparse Index and Bloom Filter
 
 Doris stores the data in an orderly manner, and builds a sparse index for Doris on the basis of ordered data. The index granularity is block (1024 rows).
 
@@ -118,13 +118,13 @@ Sparse index chooses fixed length prefix in schema as index content, and Doris c
 * One particular feature of this is the varchar type field. The varchar type field can only be used as the last field of the sparse index. The index is truncated at varchar, so if varchar appears in front, the length of the index may be less than 36 bytes. Specifically, you can refer to [Data Model, ROLLUP and Prefix Index](./data-model-rollup.md).
 * In addition to sparse index, Doris also provides bloomfilter index. Bloomfilter index has obvious filtering effect on columns with high discrimination. If you consider that varchar cannot be placed in a sparse index, you can create a bloomfilter index.
 
-### 1.6 Physical and Chemical View (rollup)
+### 1.5 Materialized View (rollup)
 
 Rollup can essentially be understood as a physical index of the original table. When creating Rollup, only some columns in Base Table can be selected as Schema. The order of fields in Schema can also be different from that in Base Table.
 
 Rollup can be considered in the following cases:
 
-1. Base Table 中数据聚合度不高。
+1.5.1. The degree of data aggregation in the Base Table is not high.
 
 This is usually due to the fact that Base Table has more differentiated fields. At this point, you can consider selecting some columns and establishing Rollup.
 
@@ -140,7 +140,7 @@ Siteid may lead to a low degree of data aggregation. If business parties often b
 ALTER TABLE site_visit ADD ROLLUP rollup_city(city, pv);
 ```
 
-2. The prefix index in Base Table cannot be hit
+1.5.2. The prefix index in Base Table cannot be hit
 
 Generally, the way Base Table is constructed cannot cover all query modes. At this point, you can consider adjusting the column order and establishing Rollup.
 
@@ -160,7 +160,7 @@ ALTER TABLE session_data ADD ROLLUP rollup_brower(brower,province,ip,url) DUPLIC
 
 There are currently three ways to perform Schema Change in Doris: Sorted Schema Change, Direct Schema Change, and Linked Schema Change.
 
-1. Sorted Schema Change
+2.1. Sorted Schema Change
 
 The sort columns have been changed and the data needs to be reordered. For example, deleting a column from the sort columns requires the fields to be re-sorted.
 
@@ -168,13 +168,14 @@ The sorting of columns has been changed and the data needs to be reordered. For
 ALTER TABLE site_visit DROP COLUMN city;
 ```
 
-2. Direct Schema Change: There is no need to reorder, but there is a need to convert the data. For example, modify the type of column, add a column to the sparse index, etc.
+2.2. Direct Schema Change: There is no need to reorder, but there is a need to convert the data. For example, modify
+ the type of column, add a column to the sparse index, etc.
 
 ```
 ALTER TABLE site_visit MODIFY COLUMN username varchar(64);
 ```
 
-3. Linked Schema Change: 无需转换数据,直接完成。例如加列操作。
+2.3. Linked Schema Change: no data conversion is needed; the change is completed directly. For example, adding a column.
 
 ```
 ALTER TABLE site_visit ADD COLUMN click bigint SUM default '0';
diff --git a/content/_sources/documentation/en/getting-started/data-model-rollup_EN.md.txt b/content/_sources/documentation/en/getting-started/data-model-rollup_EN.md.txt
index 8a54b4c..2ac0a8e 100644
--- a/content/_sources/documentation/en/getting-started/data-model-rollup_EN.md.txt
+++ b/content/_sources/documentation/en/getting-started/data-model-rollup_EN.md.txt
@@ -17,7 +17,6 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-
 # Data Model, ROLLUP and Prefix Index
 
 This document describes Doris's data model, ROLLUP and prefix index concepts at the logical level to help users better use Doris to cope with different business scenarios.
@@ -31,7 +30,7 @@ Columns can be divided into two categories: Key and Value. From a business persp
 
 Doris's data model is divided into three main categories:
 
-*Aggregate
+* Aggregate
 * Uniq
 * Duplicate
 
@@ -45,7 +44,7 @@ We illustrate what aggregation model is and how to use it correctly with practic
 
 Assume that the business has the following data table schema:
 
-Columns
+|ColumnName|Type|AggregationType|Comment|
 |---|---|---|---|
 | userid | LARGEINT | | user id|
 | date | DATE | | date of data filling|
@@ -62,27 +61,27 @@ If converted into a table-building statement, the following is done (omitting th
 ```
 CREATE TABLE IF NOT EXISTS example_db.expamle_tbl
 (
-`user_id` LARGEINT NOT NULL COMMENT "用户id",
-"Date `date not null how `index `Fufu 8;'Back
-` City `VARCHAR (20) COMMENT `User City',
-"Age" SMALLINT COMMENT "29992;" 25143;"24180;" 40836 ",
-`sex` TINYINT COMMENT "用户性别",
-"last visit date" DATETIME REPLACE DEFAULT "1970 -01 -01 00:00" COMMENT "25143;" 27425;"35775;" 3838382",
-`cost` BIGINT SUM DEFAULT "0" COMMENT "用户总消费",
-Best Answer: Best Answer
-How about "99999" as time goes by???????????????????????????????????????????????????????????????????????????????????????????
+    `user_id` LARGEINT NOT NULL COMMENT "user id",
+    `date` DATE NOT NULL COMMENT "data import time",
+    `city` VARCHAR(20) COMMENT "city",
+    `age` SMALLINT COMMENT "age",
+    `sex` TINYINT COMMENT "gender",
+    `last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "last visit date time",
+    `cost` BIGINT SUM DEFAULT "0" COMMENT "user total cost",
+    `max_dwell_time` INT MAX DEFAULT "0" COMMENT "user max dwell time",
+    `min_dwell_time` INT MIN DEFAULT "99999" COMMENT "user min dwell time"
 )
 AGGREGATE KEY(`user_id`, `date`, `city`, `age`, `sex`)
-... /* 省略 Partition 和 Distribution 信息 */
+... /* ignore Partition and Distribution */
 ;
 ```
 
 As you can see, this is a typical fact table of user information and access behavior.
 In general star model, user information and access behavior are stored in dimension table and fact table respectively. Here, in order to explain Doris's data model more conveniently, we store the two parts of information in a single table.
 
-The columns in the table are divided into Key (dimension column) and Value (indicator column) according to whether `AggregationType'is set or not. No `AggregationType', such as `user_id', `date', `age', etc., is set as ** Key **, while `AggregationType'is set as ** Value **.
+The columns in the table are divided into Key (dimension columns) and Value (metric columns) according to whether `AggregationType` is set. Columns without `AggregationType` set, such as `user_id`, `date`, `age`, etc., are **Key**, while columns with `AggregationType` set are **Value**.
 
-When we import data, the same rows and aggregates into one row for the Key column, while the Value column aggregates according to the set `AggregationType'. ` AggregationType `currently has the following four ways of aggregation:
+When we import data, rows with the same Key columns are aggregated into one row, and the Value columns are aggregated according to the set `AggregationType`. `AggregationType` currently supports the following four aggregation methods:
 
 1. SUM: Sum, multi-line Value accumulation.
 2. REPLACE: Instead, Values in the next batch of data will replace Values in rows previously imported.
@@ -130,12 +129,12 @@ As you can see, there is only one line of aggregated data left for 10,000 users.
 
 The first five columns remain unchanged, starting with column 6 `last_visit_date':
 
-*` 2017-10-01 07:00 `: Because the `last_visit_date'column is aggregated by REPLACE, the `2017-10-01 07:00 ` column has been replaced by `2017-10-01 06:00'.
+* `2017-10-01 07:00`: Because the `last_visit_date` column is aggregated by REPLACE, `2017-10-01 07:00` has replaced `2017-10-01 06:00` and is kept.
 > Note: For data in the same import batch, the order of replacement is not guaranteed for the aggregation of REPLACE. For example, in this case, it may be `2017-10-01 06:00'. For data from different imported batches, it can be guaranteed that the data from the latter batch will replace the former batch.
 
-*` 35 `: Because the aggregation type of the `cost'column is SUM, 35 is accumulated from 20 + 15.
-*` 10 `: Because the aggregation type of the `max_dwell_time'column is MAX, 10 and 2 take the maximum and get 10.
-*` 2 `: Because the aggregation type of `min_dwell_time'column is MIN, 10 and 2 take the minimum value and get 2.
+* `35`: Because the aggregation type of the `cost` column is SUM, 35 is accumulated from 20 + 15.
+* `10`: Because the aggregation type of the `max_dwell_time` column is MAX, 10 and 2 take the maximum, giving 10.
+* `2`: Because the aggregation type of the `min_dwell_time` column is MIN, 10 and 2 take the minimum, giving 2.
 
 After aggregation, Doris ultimately only stores aggregated data. In other words, detailed data will be lost and users can no longer query the detailed data before aggregation.
 
@@ -143,7 +142,7 @@ After aggregation, Doris ultimately only stores aggregated data. In other words,
 
 Following example 1, we modify the table structure as follows:
 
-Columns
+|ColumnName|Type|AggregationType|Comment|
 |---|---|---|---|
 | userid | LARGEINT | | user id|
 | date | DATE | | date of data filling|
@@ -182,7 +181,7 @@ Then when this batch of data is imported into Doris correctly, the final storage
 | 10004 | 2017-10-01 | 2017-10-01 12:12:48 | Shenzhen | 35 | 0 | 2017-10-01 10:00:15 | 100 | 3 | 3|
 | 10004 | 2017-10-03 | 2017-10-03 12:38:20 | Shenzhen | 35 | 0 | 2017-10-03 10:20:22 | 11 | 6 | 6|
 
-We can see that the stored data, just like the imported data, does not aggregate at all. This is because, in this batch of data, because the `timestamp'column is added, the Keys of all rows are ** not exactly the same **. That is, as long as the keys of each row are not identical in the imported data, Doris can save the complete detailed data even in the aggregation model.
+We can see that the stored data is exactly the same as the imported data, without any aggregation. This is because, in this batch of data, the `timestamp` column is added, so the Keys of all rows are **not exactly the same**. That is, as long as the keys of each row are not identical in the imported data, Doris can save the complete detailed data even in the aggregation model.
 
 ### Example 3: Importing data and aggregating existing data
 
@@ -224,13 +223,13 @@ Data aggregation occurs in Doris in the following three stages:
 2. The stage in which the underlying BE performs data Compaction. At this stage, BE aggregates data from different batches that have been imported.
 3. Data query stage. In data query, the data involved in the query will be aggregated accordingly.
 
-Data may be aggregated to varying degrees at different times. For example, when a batch of data is just imported, it may not be aggregated with the existing data. But for users, user** can only query aggregated data. That is, different degrees of aggregation are transparent to user queries. Users should always assume that data exists in terms of the degree of aggregation that ** ultimately completes, and ** should not assume that some aggregation has not yet occurred **. (See the section [...]
+Data may be aggregated to varying degrees at different times. For example, when a batch of data is just imported, it may not be aggregated with the existing data. But users **can only query aggregated data**. That is, different degrees of aggregation are transparent to user queries. Users should always assume that data exists in terms of the degree of aggregation that is **ultimately completed**, and **should not assume that some aggregation has not yet occurred**. (See the section [...]
 
 ## Uniq Model
 
 In some multi-dimensional analysis scenarios, users are more concerned with how to ensure the uniqueness of Key, that is, how to obtain the Primary Key uniqueness constraint. Therefore, we introduce Uniq's data model. This model is essentially a special case of aggregation model and a simplified representation of table structure. Let's give an example.
 
-Columns
+|ColumnName|Type|IsKey|Comment|
 |---|---|---|---|
 | user_id | BIGINT | Yes | user id|
 | username | VARCHAR (50) | Yes | User nickname|
@@ -262,7 +261,7 @@ Unique Key ("User", "User", "Name")
 
 This table structure is exactly the same as the following table structure described by the aggregation model:
 
-Columns
+|ColumnName|Type|AggregationType|Comment|
 |---|---|---|---|
 | user_id | BIGINT | | user id|
 | username | VARCHAR (50) | | User nickname|
@@ -298,17 +297,16 @@ That is to say, Uniq model can be completely replaced by REPLACE in aggregation
 
 In some multidimensional analysis scenarios, data has neither primary keys nor aggregation requirements. Therefore, we introduce Duplicate data model to meet this kind of demand. Examples are given.
 
-+ 124; Columname = 124; type = 124; sortkey = 124; comment = 124;
+|ColumnName|Type|SortKey|Comment|
 |---|---|---|---|
 | Timestamp | DATETIME | Yes | Logging Time|
 | Type | INT | Yes | Log Type|
-|error_code|INT|Yes|错误码|
+|error_code|INT|Yes|error code|
 | Error_msg | VARCHAR (1024) | No | Error Details|
-1.2.2.2.;2.2.2.1.;2.2.2.2.2.2.2.2.2.2.
-| op_time | DATETIME | No | Processing time|
+|op_id|BIGINT|No|operator id|
+|op_time|DATETIME|No|operation time|
 
 The TABLE statement is as follows:
-
 ```
 CREATE TABLE IF NOT EXISTS example_db.expamle_tbl
 (
@@ -337,7 +335,7 @@ ROLLUP in multidimensional analysis means "scroll up", which means that data is
 
 In Doris, we make the table created by the user through the table building statement a Base table. Base table holds the basic data stored in the way specified by the user's table-building statement.
 
-On top of the Base table, we can create any number of ROLLUP tables. These ROLLUP data are generated based on the Base table and physically ** stored independently **.
+On top of the Base table, we can create any number of ROLLUP tables. These ROLLUP data are generated based on the Base table and physically **stored independently**.
 
 The basic function of ROLLUP tables is to obtain coarser aggregated data on the basis of Base tables.
 
@@ -349,9 +347,9 @@ Because Uniq is only a special case of the Aggregate model, we do not distinguis
 
 Example 1: Get the total consumption per user
 
-Following ** Example 2 ** in the ** Aggregate Model ** section, the Base table structure is as follows:
+Following **Example 2** in the **Aggregate Model** section, the Base table structure is as follows:
 
-Columns
+|ColumnName|Type|AggregationType|Comment|
 |---|---|---|---|
 | user_id | LARGEINT | | user id|
 | date | DATE | | date of data filling|
@@ -378,7 +376,7 @@ The data stored are as follows:
 
 On this basis, we create a ROLLUP:
 
-1240; Colonname 12412;
+|ColumnName|
 |---|
 |user_id|
 |cost|
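A sketch of the corresponding statement, assuming the Base table is the `example_tbl` used above (the ROLLUP name `rollup_cost_userid` is chosen for illustration):

```
ALTER TABLE example_tbl ADD ROLLUP rollup_cost_userid(user_id, cost);
```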
@@ -403,7 +401,7 @@ Doris automatically hits the ROLLUP table, thus completing the aggregated query
 
 Follow example 1. Based on the Base table, we create a ROLLUP:
 
-Columns
+|ColumnName|Type|AggregationType|Comment|
 |---|---|---|---|
 | City | VARCHAR (20) | | User City|
 | age | SMALLINT | | User age|
@@ -448,23 +446,23 @@ We use the prefix index of ** 36 bytes ** of a row of data as the prefix index o
 
 1. The prefix index of the following table structure is user_id (8 Bytes) + age (4 Bytes) + message (prefix 20 Bytes).
 
-+ 124; Columname = 124; type = 124;
+|ColumnName|Type|
 |---|---|
 |user_id|BIGINT|
 |age|INT|
-Message
-124max \\u dwell u team 124DATE
-124m;min \\u dwell u team 124DATE
+|message|VARCHAR(100)|
+|max\_dwell\_time|DATETIME|
+|min\_dwell\_time|DATETIME|
 
 2. The prefix index of the following table structure is user_name (20 Bytes). Even if it does not reach 36 bytes, because it encounters VARCHAR, it truncates directly and no longer continues.
 
-+ 124; Columname = 124; type = 124;
+|ColumnName|Type|
 |---|---|
-User name
+|user_name|VARCHAR(20)|
 |age|INT|
-Message
-124max \\u dwell u team 124DATE
-124m;min \\u dwell u team 124DATE
+|message|VARCHAR(100)|
+|max\_dwell\_time|DATETIME|
+|min\_dwell\_time|DATETIME|
 
 When our query condition is a prefix of the **prefix index**, the query can be greatly accelerated. For example, in the first example, we execute the following queries:
 
@@ -482,23 +480,23 @@ Because column order is specified when a table is built, there is only one prefi
 
 The structure of the Base table is as follows:
 
-+ 124; Columname = 124; type = 124;
+|ColumnName|Type|
 |---|---|
 |user\_id|BIGINT|
 |age|INT|
-Message
-124max \\u dwell u team 124DATE
-124m;min \\u dwell u team 124DATE
+|message|VARCHAR(100)|
+|max\_dwell\_time|DATETIME|
+|min\_dwell\_time|DATETIME|
 
 On this basis, we can create a ROLLUP table:
 
-+ 124; Columname = 124; type = 124;
+|ColumnName|Type|
 |---|---|
 |age|INT|
 |user\_id|BIGINT|
-Message
-124max \\u dwell u team 124DATE
-124m;min \\u dwell u team 124DATE
+|message|VARCHAR(100)|
+|max\_dwell\_time|DATETIME|
+|min\_dwell\_time|DATETIME|
 
 As you can see, the columns of ROLLUP and Base tables are exactly the same, just changing the order of user_id and age. So when we do the following query:
 
@@ -514,9 +512,9 @@ The ROLLUP table is preferred because the prefix index of ROLLUP matches better.
 * Data updates for ROLLUP are fully synchronized with the Base table. Users need not care about this.
 * Columns in ROLLUP are aggregated in exactly the same way as Base tables. There is no need to specify or modify ROLLUP when creating it.
 * A necessary (but not sufficient) condition for a query to hit a ROLLUP is that **all columns** involved in the query (including the columns in the select list and the where condition) exist in the ROLLUP's columns. Otherwise, the query can only hit the Base table.
-* Certain types of queries (such as count (*)) cannot hit ROLLUP under any conditions. See the next section ** Limitations of the aggregation model **.
+* Certain types of queries (such as count (*)) cannot hit ROLLUP under any conditions. See the next section **Limitations of the aggregation model**.
 * The query execution plan can be obtained by `EXPLAIN your_sql;` command, and in the execution plan, whether ROLLUP has been hit or not can be checked.
-* Base tables and all created ROLLUPs can be displayed by `DESC tbl_name ALL'; `statement.
+* Base tables and all created ROLLUPs can be displayed by `DESC tbl_name ALL;` statement.
 
 In this document, you can see [Query how to hit Rollup](hit-the-rollup)
 
@@ -528,7 +526,7 @@ In the aggregation model, what the model presents is the aggregated data. That i
 
 The hypothesis table is structured as follows:
 
-Columns
+|ColumnName|Type|AggregationType|Comment|
 |---|---|---|---|
 | userid | LARGEINT | | user id|
 | date | DATE | | date of data filling|
@@ -602,22 +600,22 @@ Because the final aggregation result is:
 |10002|2017-11-21|39|
 |10003|2017-11-22|22|
 
-So `select count (*) from table; `The correct result should be ** 4 **. But if we only scan the `user_id'column and add query aggregation, the final result is ** 3 ** (10001, 10002, 10003). If aggregated without queries, the result is ** 5 ** (a total of five rows in two batches). It can be seen that both results are wrong.
+So the correct result of `select count(*) from table;` should be **4**. But if we only scan the `user_id` column and aggregate at query time, the final result is **3** (10001, 10002, 10003). If the data is not aggregated at query time, the result is **5** (a total of five rows in the two batches). It can be seen that both results are wrong.
 
-In order to get the correct result, we must read the data of `user_id'and `date', and ** together with aggregate ** when querying, to return the correct result of ** 4 **. That is to say, in the count (*) query, Doris must scan all AGGREGATE KEY columns (here are `user_id` and `date') and aggregate them to get the semantically correct results. When aggregated columns are large, count (*) queries need to scan a large amount of data.
+In order to get the correct result, we must read both the `user_id` and `date` columns, and **aggregate them at query time**, to return the correct result of **4**. That is to say, in a count(*) query, Doris must scan all AGGREGATE KEY columns (here `user_id` and `date`) and aggregate them to get the semantically correct result. When there are many aggregated columns, count(*) queries need to scan a large amount of data.
 
-Therefore, when there are frequent count (*) queries in the business, we recommend that users simulate count (*)**) by adding a column with a ** value of 1 and aggregation type of SUM. As the table structure in the previous example, we modify it as follows:
+Therefore, when there are frequent count(*) queries in the business, we recommend that users simulate count(*) by adding a column whose value is always 1 and whose aggregation type is SUM. For the table structure in the previous example, we modify it as follows:
 
-Columns
+|ColumnName|Type|AggregationType|Comment|
 |---|---|---|---|
 | user ID | BIGINT | | user id|
 | date | DATE | | date of data filling|
 | Cost | BIGINT | SUM | Total User Consumption|
 | count | BIGINT | SUM | for counting|
 
-Add a count column and import the data with the column value ** equal to 1 **. The result of `select count (*) from table; `is equivalent to `select sum (count) from table; ` The query efficiency of the latter is much higher than that of the former. However, this method also has limitations, that is, users need to guarantee that they will not import rows with the same AGGREGATE KEY column repeatedly. Otherwise, `select sum (count) from table; `can only express the number of rows original [...]
+Add a count column and import data with the column value **equal to 1**. Then the result of `select count(*) from table;` is equivalent to `select sum(count) from table;`, and the query efficiency of the latter is much higher than that of the former. However, this method also has a limitation: users need to guarantee that they will not repeatedly import rows with the same AGGREGATE KEY columns. Otherwise, `select sum(count) from table;` can only express the number of rows originally im [...]
 
-Another way is to ** change the aggregation type of the `count'column above to REPLACE, and still weigh 1 **. Then `select sum (count) from table; `and `select count (*) from table; `the results will be consistent. And in this way, there is no restriction on importing duplicate rows.
+Another way is to **change the aggregation type of the `count` column above to REPLACE, with the value still being 1**. Then the results of `select sum(count) from table;` and `select count(*) from table;` will be consistent. Moreover, in this way, there is no restriction on importing duplicate rows.
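A sketch of the first approach, assuming the table is named `example_tbl` (the `count` column and its default value follow the description above):

```
-- add a SUM column that is always imported with the value 1
ALTER TABLE example_tbl ADD COLUMN `count` BIGINT SUM DEFAULT "1";
-- with the guarantee described above, these two queries then return the same result
SELECT SUM(`count`) FROM example_tbl;
SELECT COUNT(*) FROM example_tbl;
```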
 
 ### Duplicate Model
 
@@ -625,7 +623,7 @@ Duplicate model has no limitation of aggregation model. Because the model does n
 
 ## Suggestions for Choosing Data Model
 
-Because the data model was established when the table was built, and ** could not be modified **. Therefore, it is very important to select an appropriate data model **.
+Because the data model is established when the table is built and **cannot be modified**, it is very important to select an appropriate data model.
 
 1. Aggregate model can greatly reduce the amount of data scanned and the amount of query computation by pre-aggregation. It is very suitable for report query scenarios with fixed patterns. But this model is not very friendly for count (*) queries. At the same time, because the aggregation method on the Value column is fixed, semantic correctness should be considered in other types of aggregation queries.
 2. Uniq model guarantees the uniqueness of primary key for scenarios requiring unique primary key constraints. However, the query advantage brought by pre-aggregation such as ROLLUP can not be exploited (because the essence is REPLACE, there is no such aggregation as SUM).
diff --git a/content/_sources/documentation/en/getting-started/data-partition_EN.md.txt b/content/_sources/documentation/en/getting-started/data-partition_EN.md.txt
index 111ad28..7d809c5 100644
--- a/content/_sources/documentation/en/getting-started/data-partition_EN.md.txt
+++ b/content/_sources/documentation/en/getting-started/data-partition_EN.md.txt
@@ -17,7 +17,6 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-
 # Data Partition
 
 This document mainly introduces Doris's table construction and data partitioning, as well as problems and solutions that may be encountered in the construction of the table.
@@ -53,31 +52,31 @@ This section introduces Doris's approach to building tables with an example.
 ```
 CREATE TABLE IF NOT EXISTS example_db.expamle_tbl
 (
-    `user_id` LARGEINT NOT NULL COMMENT "user id",
-    `date` DATE NOT NULL COMMENT "Data fill in date time",
-    `timestamp` DATETIME NOT NULL COMMENT "Timestamp of data being poured",
-    `city` VARCHAR(20) COMMENT "The city where the user is located",
-    `age` SMALLINT COMMENT "user age",
-    `sex` TINYINT COMMENT "User Gender",
-    `last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "User last visit time",
-    `cost` BIGINT SUM DEFAULT "0" COMMENT "Total user consumption",
-    `max_dwell_time` INT MAX DEFAULT "0" COMMENT "User maximum dwell time",
-    `min_dwell_time` INT MIN DEFAULT "99999" COMMENT "User minimum dwell time"
+    `user_id` LARGEINT NOT NULL COMMENT "user id",
+    `date` DATE NOT NULL COMMENT "Data fill in date time",
+    `timestamp` DATETIME NOT NULL COMMENT "Timestamp of data being poured",
+    `city` VARCHAR(20) COMMENT "The city where the user is located",
+    `age` SMALLINT COMMENT "user age",
+    `sex` TINYINT COMMENT "User Gender",
+    `last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "User last visit time",
+    `cost` BIGINT SUM DEFAULT "0" COMMENT "Total user consumption",
+    `max_dwell_time` INT MAX DEFAULT "0" COMMENT "User maximum dwell time",
+    `min_dwell_time` INT MIN DEFAULT "99999" COMMENT "User minimum dwell time"
 )
 ENGINE=olap
 AGGREGATE KEY(`user_id`, `date`, `timestamp`, `city`, `age`, `sex`)
 PARTITION BY RANGE(`date`)
 (
-    PARTITION `p201701` VALUES LESS THAN ("2017-02-01"),
-    PARTITION `p201702` VALUES LESS THAN ("2017-03-01"),
-    PARTITION `p201703` VALUES LESS THAN ("2017-04-01")
+    PARTITION `p201701` VALUES LESS THAN ("2017-02-01"),
+    PARTITION `p201702` VALUES LESS THAN ("2017-03-01"),
+    PARTITION `p201703` VALUES LESS THAN ("2017-04-01")
 )
 DISTRIBUTED BY HASH(`user_id`) BUCKETS 16
 PROPERTIES
 (
-    "replication_num" = "3",
-    "storage_medium" = "SSD",
-    "storage_cooldown_time" = "2018-01-01 12:00:00"
+    "replication_num" = "3",
+    "storage_medium" = "SSD",
+    "storage_cooldown_time" = "2018-01-01 12:00:00"
 );
 
 ```
@@ -106,98 +105,93 @@ It is also possible to use only one layer of partitioning. When using a layer pa
 
 1. Partition
 
-    * The Partition column can specify one or more columns. The partition class must be a KEY column. The use of multi-column partitions is described later in the **Multi-column partitioning** summary.
-    
+    * The Partition column can specify one or more columns, and the partition columns must be KEY columns. The use of multi-column partitions is described later in the **Multi-column partitioning** section. 
     * Regardless of the type of partition column, double quotes are required when writing partition values.
     * Partition columns are usually time columns for easy management of old and new data.
     * There is no theoretical limit on the number of partitions.
     * When you do not use Partition to build a table, the system will automatically generate a Partition with the same name as the table name. This Partition is not visible to the user and cannot be modified.
     * Partition only supports specifying the upper bound via `VALUES LESS THAN (...)`; the system will use the upper bound of the previous partition as the lower bound of this partition, generating a left-closed, right-open interval. It also supports specifying both the upper and lower bounds via `VALUES [...)`, generating a left-closed, right-open interval.
-
     * It is easier to understand by specifying `VALUES [...)`. Here is an example of the change in partition range when adding or deleting partitions using the `VALUES LESS THAN (...)` statement:
-    
         * As the example above, when the table is built, the following 3 partitions are automatically generated:
+            ```
+            P201701: [MIN_VALUE, 2017-02-01)
+            P201702: [2017-02-01, 2017-03-01)
+            P201703: [2017-03-01, 2017-04-01)
+            ```
+        * When we add a partition p201705 VALUES LESS THAN ("2017-06-01"), the partition results are as follows:
 
-            ```
-            P201701: [MIN_VALUE, 2017-02-01)
-            P201702: [2017-02-01, 2017-03-01)
-            P201703: [2017-03-01, 2017-04-01)
-            ```
-        
-        * When we add a partition p201705 VALUES LESS THAN ("2017-06-01"), the partition results are as follows:
-
-            ```
-            P201701: [MIN_VALUE, 2017-02-01)
-            P201702: [2017-02-01, 2017-03-01)
-            P201703: [2017-03-01, 2017-04-01)
-            P201705: [2017-04-01, 2017-06-01)
-            ```
-            
-        * At this point we delete the partition p201703, the partition results are as follows:
-        
-            ```
-            p201701: [MIN_VALUE, 2017-02-01)
-            p201702: [2017-02-01, 2017-03-01)
-            p201705: [2017-04-01, 2017-06-01)
-            ```
-            
-            > Note that the partition range of p201702 and p201705 has not changed, and there is a hole between the two partitions: [2017-03-01, 2017-04-01). That is, if the imported data range is within this hole, it cannot be imported.
-            
-        * Continue to delete partition p201702, the partition results are as follows:
-        
-            ```
-            p201701: [MIN_VALUE, 2017-02-01)
-            p201705: [2017-04-01, 2017-06-01)
-            The void range becomes: [2017-02-01, 2017-04-01)
-            ```
-            
-        * Now add a partition p201702new VALUES LESS THAN ("2017-03-01"), the partition results are as follows:
-            
-            ```
-            p201701: [MIN_VALUE, 2017-02-01)
-            p201702new: [2017-02-01, 2017-03-01)
-            p201705: [2017-04-01, 2017-06-01)
-            ```
-            
-            > You can see that the hole size is reduced to: [2017-03-01, 2017-04-01)
-            
-        * Now delete partition p201701 and add partition p201612 VALUES LESS THAN ("2017-01-01"), the partition result is as follows:
-
-            ```
-            p201612: [MIN_VALUE, 2017-01-01)
-            p201702new: [2017-02-01, 2017-03-01)
-            p201705: [2017-04-01, 2017-06-01)
             ```
-            
+            P201701: [MIN_VALUE, 2017-02-01)
+            P201702: [2017-02-01, 2017-03-01)
+            P201703: [2017-03-01, 2017-04-01)
+            P201705: [2017-04-01, 2017-06-01)
+            ```
+            
+        * At this point we delete the partition p201703, the partition results are as follows:
+        
+            ```
+            p201701: [MIN_VALUE, 2017-02-01)
+            p201702: [2017-02-01, 2017-03-01)
+            p201705: [2017-04-01, 2017-06-01)
+            ```
+            
+            > Note that the partition range of p201702 and p201705 has not changed, and there is a hole between the two partitions: [2017-03-01, 2017-04-01). That is, if the imported data range is within this hole, it cannot be imported.
+            
+        * Continue to delete partition p201702, the partition results are as follows:
+        
+            ```
+            p201701: [MIN_VALUE, 2017-02-01)
+            p201705: [2017-04-01, 2017-06-01)
+            The void range becomes: [2017-02-01, 2017-04-01)
+            ```
+            
+        * Now add a partition p201702new VALUES LESS THAN ("2017-03-01"), the partition results are as follows:
+            
+            ```
+            p201701: [MIN_VALUE, 2017-02-01)
+            p201702new: [2017-02-01, 2017-03-01)
+            p201705: [2017-04-01, 2017-06-01)
+            ```
+            
+            > You can see that the hole size is reduced to: [2017-03-01, 2017-04-01)
+            
+        * Now delete partition p201701 and add partition p201612 VALUES LESS THAN ("2017-01-01"), the partition result is as follows:
+
+            ```
+            p201612: [MIN_VALUE, 2017-01-01)
+            p201702new: [2017-02-01, 2017-03-01)
+            p201705: [2017-04-01, 2017-06-01)
+            ```
+            
             > A new void appeared: [2017-01-01, 2017-02-01)
-        
-    In summary, the deletion of a partition does not change the scope of an existing partition. There may be holes in deleting partitions. When a partition is added by the `VALUES LESS THAN` statement, the lower bound of the partition immediately follows the upper bound of the previous partition.
-    
-    You cannot add partitions with overlapping ranges.
+        
+    In summary, the deletion of a partition does not change the scope of an existing partition. There may be holes in deleting partitions. When a partition is added by the `VALUES LESS THAN` statement, the lower bound of the partition immediately follows the upper bound of the previous partition.
+    
+    You cannot add partitions with overlapping ranges.
 
 2. Bucket
 
-    * If a Partition is used, the `DISTRIBUTED ...` statement describes the division rules for the data in each partition. If you do not use Partition, it describes the rules for dividing the data of the entire table.
-    * The bucket column can be multiple columns, but it must be a Key column. The bucket column can be the same or different from the Partition column.
-    * The choice of bucket column is a trade-off between **query throughput** and **query concurrency**:
+    * If a Partition is used, the `DISTRIBUTED ...` statement describes the division rules for the data in each partition. If you do not use Partition, it describes the rules for dividing the data of the entire table.
+    * The bucket column can be multiple columns, but it must be a Key column. The bucket column can be the same or different from the Partition column.
+    * The choice of bucket column is a trade-off between **query throughput** and **query concurrency**:
 
-        1. If you select multiple bucket columns, the data is more evenly distributed. However, if the query condition does not include the equivalent condition for all bucket columns, a query will scan all buckets. The throughput of such queries will increase, but the latency of a single query will increase. This method is suitable for large throughput and low concurrent query scenarios.
-        2. If you select only one or a few bucket columns, the point query can query only one bucket. This approach is suitable for high-concurrency point query scenarios.
-        
-    * There is no theoretical limit on the number of buckets.
+        1. If you select multiple bucket columns, the data is more evenly distributed. However, if the query condition does not include the equivalent condition for all bucket columns, a query will scan all buckets. The throughput of such queries will increase, but the latency of a single query will increase. This method is suitable for large throughput and low concurrent query scenarios.
+        2. If you select only one or a few bucket columns, the point query can query only one bucket. This approach is suitable for high-concurrency point query scenarios.
+        
+    * There is no theoretical limit on the number of buckets.
 
 3. Recommendations on the number and amount of data for Partitions and Buckets.
 
-    * The total number of tablets in a table is equal to (Partition num * Bucket num).
-    * The number of tablets in a table, which is slightly more than the number of disks in the entire cluster, regardless of capacity expansion.
-    * The data volume of a single tablet does not theoretically have an upper and lower bound, but is recommended to be in the range of 1G - 10G. If the amount of data for a single tablet is too small, the aggregation of the data is not good and the metadata management pressure is high. If the amount of data is too large, it is not conducive to the migration, completion, and increase the cost of Schema Change or Rollup operation failure retry (the granularity of these operations failure  [...]
-    * When the tablet's data volume principle and quantity principle conflict, it is recommended to prioritize the data volume principle.
-    * When building a table, the number of Buckets for each partition is uniformly specified. However, when dynamically increasing partitions (`ADD PARTITION`), you can specify the number of Buckets for the new partition separately. This feature can be used to easily reduce or expand data.
-    * Once the number of Buckets for a Partition is specified, it cannot be changed. Therefore, when determining the number of Buckets, you need to consider the expansion of the cluster in advance. For example, there are currently only 3 hosts, and each host has 1 disk. If the number of Buckets is only set to 3 or less, then even if you add more machines later, you can't increase the concurrency.
-    * Give some examples: Suppose there are 10 BEs, one for each BE disk. If the total size of a table is 500MB, you can consider 4-8 shards. 5GB: 8-16. 50GB: 32. 500GB: Recommended partitions, each partition is about 50GB in size, with 16-32 shards per partition. 5TB: Recommended partitions, each with a size of around 50GB and 16-32 shards per partition.
-    
-    > Note: The amount of data in the table can be viewed by the `show data` command. The result is divided by the number of copies, which is the amount of data in the table.
-    
+    * The total number of tablets in a table is equal to (Partition num * Bucket num).
+    * It is recommended that the number of tablets in a table be slightly more than the number of disks in the entire cluster, without considering capacity expansion.
+    * The data volume of a single tablet does not theoretically have an upper and lower bound, but is recommended to be in the range of 1G - 10G. If the amount of data for a single tablet is too small, the aggregation of the data is not good and the metadata management pressure is high. If the amount of data is too large, it is not conducive to the migration, completion, and increase the cost of Schema Change or Rollup operation failure retry (the granularity of these operations failure  [...]
+    * When the tablet's data volume principle and quantity principle conflict, it is recommended to prioritize the data volume principle.
+    * When building a table, the number of Buckets for each partition is uniformly specified. However, when dynamically increasing partitions (`ADD PARTITION`), you can specify the number of Buckets for the new partition separately. This feature can be used to easily reduce or expand data.
+    * Once the number of Buckets for a Partition is specified, it cannot be changed. Therefore, when determining the number of Buckets, you need to consider the expansion of the cluster in advance. For example, there are currently only 3 hosts, and each host has 1 disk. If the number of Buckets is only set to 3 or less, then even if you add more machines later, you can't increase the concurrency.
+    * Some examples: Suppose there are 10 BEs, each with one disk. If the total size of a table is 500MB, 4-8 shards can be considered; 5GB: 8-16 shards; 50GB: 32 shards; 500GB: partitioning is recommended, with each partition about 50GB in size and 16-32 shards per partition; 5TB: partitioning is recommended, with each partition about 50GB in size and 16-32 shards per partition.
+    
+    > Note: The amount of data in the table can be viewed by the `show data` command. The result is divided by the number of copies, which is the amount of data in the table.
+    
 #### Multi-column partition
 
 Doris supports specifying multiple columns as partition columns, examples are as follows:
@@ -205,9 +199,9 @@ Doris supports specifying multiple columns as partition columns, examples are as
 ```
 PARTITION BY RANGE(`date`, `id`)
 (
-    PARTITION `p201701_1000` VALUES LESS THAN ("2017-02-01", "1000"),
-    PARTITION `p201702_2000` VALUES LESS THAN ("2017-03-01", "2000"),
-    PARTITION `p201703_all` VALUES LESS THAN ("2017-04-01")
+    PARTITION `p201701_1000` VALUES LESS THAN ("2017-02-01", "1000"),
+    PARTITION `p201702_2000` VALUES LESS THAN ("2017-03-01", "2000"),
+    PARTITION `p201703_all` VALUES LESS THAN ("2017-04-01")
 )
 ```
 
@@ -240,17 +234,17 @@ In the last PROPERTIES of the table statement, you can specify the following two
 
 1. replication\_num
 
-    * The number of copies per tablet. The default is 3, it is recommended to keep the default. In the build statement, the number of Tablet copies in all Partitions is uniformly specified. When you add a new partition, you can individually specify the number of copies of the tablet in the new partition.
-    * The number of copies can be modified at runtime. It is strongly recommended to keep odd numbers.
-    * The maximum number of copies depends on the number of independent IPs in the cluster (note that it is not the number of BEs). The principle of replica distribution in Doris is that the copies of the same Tablet are not allowed to be distributed on the same physical machine, and the physical machine is identified as IP. Therefore, even if 3 or more BE instances are deployed on the same physical machine, if the BEs have the same IP, you can only set the number of copies to 1.
-    * For some small, and infrequently updated dimension tables, consider setting more copies. In this way, when joining queries, there is a greater probability of local data join.
+    * The number of copies per tablet. The default is 3, it is recommended to keep the default. In the build statement, the number of Tablet copies in all Partitions is uniformly specified. When you add a new partition, you can individually specify the number of copies of the tablet in the new partition.
+    * The number of copies can be modified at runtime. It is strongly recommended to keep odd numbers.
+    * The maximum number of copies depends on the number of independent IPs in the cluster (note that it is not the number of BEs). The principle of replica distribution in Doris is that the copies of the same Tablet are not allowed to be distributed on the same physical machine, and the physical machine is identified as IP. Therefore, even if 3 or more BE instances are deployed on the same physical machine, if the BEs have the same IP, you can only set the number of copies to 1.
+    * For some small, and infrequently updated dimension tables, consider setting more copies. In this way, when joining queries, there is a greater probability of local data join.
 
 2. storage_medium & storage\_cooldown\_time
 
-    * The BE data storage directory can be explicitly specified as SSD or HDD (differentiated by .SSD or .HDD suffix). When you build a table, you can uniformly specify the media for all Partition initial storage. Note that the suffix is ​​to explicitly specify the disk media without checking to see if it matches the actual media type.
-    * The default initial storage medium is HDD. If specified as an SSD, the data is initially stored on the SSD.
-    * If storage\_cooldown\_time is not specified, the data is automatically migrated from the SSD to the HDD after 7 days by default. If storage\_cooldown\_time is specified, the data will not migrate until the storage_cooldown_time time is reached.
-    * Note that this parameter is just a "best effort" setting when storage_medium is specified. Even if no SSD storage media is set in the cluster, no error is reported and it is automatically stored in the available data directory. Similarly, if the SSD media is inaccessible and out of space, the data may initially be stored directly on other available media. When the data expires and is migrated to the HDD, if the HDD media is inaccessible and there is not enough space, the migration  [...]
+    * The BE data storage directory can be explicitly specified as SSD or HDD (differentiated by the .SSD or .HDD suffix). When creating a table, you can uniformly specify the initial storage medium for all Partitions. Note that the suffix only explicitly declares the disk medium; Doris does not check whether it matches the actual medium type (see the sketch below).
+    * The default initial storage medium is HDD. If SSD is specified, the data is initially stored on the SSD.
+    * If storage\_cooldown\_time is not specified, the data is automatically migrated from SSD to HDD after 7 days by default. If storage\_cooldown\_time is specified, the data will not migrate until that time is reached.
+    * Note that this parameter is just a "best effort" setting when storage_medium is specified. Even if no SSD storage medium is set up in the cluster, no error is reported and the data is automatically stored in an available data directory. Similarly, if the SSD medium is inaccessible or out of space, the data may initially be stored directly on other available media. When the data expires and is migrated to HDD, if the HDD medium is inaccessible or there is not enough space, the migration  [...]
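+
+    A minimal sketch, with hypothetical database/table/column names, of specifying the initial storage medium and cooldown time in PROPERTIES:
+
+    ```
+    CREATE TABLE example_db.ssd_tbl (
+        k1 DATE,
+        k2 INT
+    )
+    DUPLICATE KEY(k1)
+    DISTRIBUTED BY HASH(k1) BUCKETS 1
+    PROPERTIES (
+        "storage_medium" = "SSD",
+        "storage_cooldown_time" = "2020-06-01 00:00:00"
+    );
+    ```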
 
 ### ENGINE
 
@@ -258,7 +252,7 @@ In this example, the type of ENGINE is olap, the default ENGINE type. In Doris,
 
 ### Other
 
-    `IF NOT EXISTS` indicates that if the table has not been created, it is created. Note that only the table name is judged here, and it is not determined whether the new table structure is the same as the existing table structure. So if there is a table with the same name but different structure, the command will also return success, but it does not mean that a new table and a new structure have been created.
+`IF NOT EXISTS` indicates that the table is created only if it does not already exist. Note that only the table name is checked; the statement does not verify that the new table structure matches the existing one. So if a table with the same name but a different structure already exists, the command will still return success, but this does not mean a new table or a new structure has been created.
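+
+A minimal sketch (database, table and column names are hypothetical): the second statement below still returns success even though `example_tbl` already exists with a different structure, and the existing table is left unchanged.
+
+```
+CREATE TABLE IF NOT EXISTS example_db.example_tbl (k1 INT)
+DUPLICATE KEY(k1) DISTRIBUTED BY HASH(k1) BUCKETS 1
+PROPERTIES ("replication_num" = "1");
+
+CREATE TABLE IF NOT EXISTS example_db.example_tbl (k1 INT, k2 INT)
+DUPLICATE KEY(k1) DISTRIBUTED BY HASH(k1) BUCKETS 1
+PROPERTIES ("replication_num" = "1");
+```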
 
 ## common problem
 
@@ -266,27 +260,27 @@ In this example, the type of ENGINE is olap, the default ENGINE type. In Doris,
 
 1. If a syntax error is reported for a long table creation statement, the error message may be incomplete. Here is a list of possible syntax errors for manual troubleshooting:
 
-    * The syntax is incorrect. Please read `HELP CREATE TABLE;` carefully to check the relevant syntax structure.
-    * Reserved words. When the user-defined name encounters a reserved word, it needs to be enclosed in the backquote ``. It is recommended that all custom names be generated using this symbol.
-    * Chinese characters or full-width characters. Non-utf8 encoded Chinese characters, or hidden full-width characters (spaces, punctuation, etc.) can cause syntax errors. It is recommended to check with a text editor with invisible characters.
+    * The syntax is incorrect. Please read `HELP CREATE TABLE;` carefully to check the relevant syntax.
+    * Reserved words. When a user-defined name collides with a reserved word, it needs to be enclosed in backquotes `` ` ``. It is recommended to wrap all custom names with this symbol.
+    * Chinese characters or full-width characters. Non-UTF-8 encoded Chinese characters, or hidden full-width characters (spaces, punctuation, etc.), can cause syntax errors. It is recommended to check with a text editor that can display invisible characters.
 
 2. `Failed to create partition [xxx] . Timeout`
 
-    Doris builds are created in order of Partition granularity. This error may be reported when a Partition creation fails. Even if you don't use Partition, you will report `Failed to create partition` when there is a problem with the built table, because as mentioned earlier, Doris will create an unchangeable default Partition for tables that do not have a Partition specified.
-    
-    When this error is encountered, it is usually the BE that has encountered problems creating data fragments. You can follow the steps below to troubleshoot:
-    
-    1. In fe.log, find the `Failed to create partition` log for the corresponding point in time. In this log, a series of numbers like `{10001-10010}` will appear. The first number of the pair is the Backend ID and the second number is the Tablet ID. As for the pair of numbers above, on the Backend with ID 10001, creating a tablet with ID 10010 failed.
-    2. Go to the be.INFO log corresponding to Backend and find the log related to the tablet id in the corresponding time period. You can find the error message.
-    3. Listed below are some common tablet creation failure errors, including but not limited to:
-        * BE did not receive the relevant task, and the tablet id related log could not be found in be.INFO. Or the BE is created successfully, but the report fails. For the above questions, see [Deployment and Upgrade Documentation] to check the connectivity of FE and BE.
-        * Pre-allocated memory failed. It may be that the length of a line in a row in the table exceeds 100KB.
-        * `Too many open files`. The number of open file handles exceeds the Linux system limit. The handle limit of the Linux system needs to be modified.
+    Doris creates tables in order of Partition granularity. This error may be reported when a Partition fails to be created. Even if you don't use partitioning, the error may still read `Failed to create partition` when there is a problem creating the table, because, as mentioned earlier, Doris creates an unchangeable default Partition for tables without a specified Partition.
+
+    When this error is encountered, it usually means the BE ran into problems while creating the data shards. You can follow the steps below to troubleshoot:
 
-    You can also extend the timeout by setting `tablet_create_timeout_second=xxx` in fe.conf. The default is 2 seconds.
+    1. In fe.log, find the `Failed to create partition` log for the corresponding point in time. In this log, a series of number pairs like `{10001-10010}` will appear. The first number of the pair is the Backend ID and the second is the Tablet ID. For the pair above, creating the tablet with ID 10010 on the Backend with ID 10001 failed.
+    2. Go to the be.INFO log of the corresponding Backend and find the log related to that tablet ID in the corresponding time period to locate the error message.
+    3. Listed below are some common tablet creation failure errors, including but not limited to:
+        * The BE did not receive the relevant task, and no log related to the tablet ID can be found in be.INFO; or the tablet is created successfully on the BE but the report back to FE fails. For these issues, see [Deployment and Upgrade Documentation] to check the connectivity between FE and BE.
+        * Pre-allocating memory failed. It may be that the byte length of a row in the table exceeds 100KB.
+        * `Too many open files`. The number of open file handles exceeds the Linux system limit. The handle limit of the Linux system needs to be modified.
+
+    You can also extend the timeout by setting `tablet_create_timeout_second=xxx` in fe.conf. The default is 2 seconds.
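+
+    For example, an illustrative value only (choose one appropriate for your cluster), in fe.conf:
+
+    ```
+    tablet_create_timeout_second = 10
+    ```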
 
 3. The table creation command does not return a result for a long time.
 
-    Doris's table creation command is a synchronous command. The timeout of this command is currently set to be relatively simple, ie (tablet num * replication num) seconds. If you create more data fragments and have fragment creation failed, it may cause an error to be returned after waiting for a long timeout.
-    
-    Under normal circumstances, the statement will return in a few seconds or ten seconds. If it is more than one minute, it is recommended to cancel this operation directly and go to the FE or BE log to view the related errors.
+    Doris's table creation command is a synchronous command. Its timeout is currently calculated in a fairly simple way, i.e. (tablet num * replication num) seconds. If many data shards are created and some of them fail to be created, the command may wait for a long timeout before returning an error.
+
+    Under normal circumstances, the statement returns within a few seconds or ten-odd seconds. If it takes more than one minute, it is recommended to cancel this operation directly and check the FE or BE logs for related errors.
diff --git a/content/_sources/documentation/en/installing/install-deploy_EN.md.txt b/content/_sources/documentation/en/installing/install-deploy_EN.md.txt
index a36f37e..7c5c6cc 100644
--- a/content/_sources/documentation/en/installing/install-deploy_EN.md.txt
+++ b/content/_sources/documentation/en/installing/install-deploy_EN.md.txt
@@ -87,7 +87,6 @@ Doris instances communicate directly over the network. The following table shows
 | Instance Name | Port Name | Default Port | Communication Direction | Description|
 | ---|---|---|---|---|
 | BE | be_port | 9060 | FE - > BE | BE for receiving requests from FE|
-| BE | be\_rpc_port | 9070 | BE < - > BE | port used by RPC between BE | BE|
 | BE | webserver\_port | 8040 | BE <--> BE | BE|
 | BE | heartbeat\_service_port | 9050 | FE - > BE | the heart beat service port (thrift) on BE, used to receive heartbeat from FE|
 | BE | brpc\_port* | 8060 | FE < - > BE, BE < - > BE | BE for communication between BEs|
@@ -101,7 +100,6 @@ Doris instances communicate directly over the network. The following table shows
 > 
 > 1. When deploying multiple FE instances, make sure that the http port configuration of FE is the same.
 > 2. Make sure that each port has access in its proper direction before deployment.
-> 3. brpc port replaced be rpc_port after version 0.8.2
 
 #### IP binding
 
@@ -140,7 +138,7 @@ BROKER does not currently have, nor does it need, priority\ networks. Broker's s
 * Configure FE
 
 	1. The configuration file is conf/fe.conf. Note: `meta_dir`: Metadata storage location. The default is fe/palo-meta/. The directory needs to be **created manually**.
-	2. JAVA_OPTS in fe.conf defaults to a maximum heap memory of 2GB for java, and it is recommended that the production environment be adjusted to more than 8G.
+	2. JAVA_OPTS in fe.conf defaults to a maximum heap memory of 4GB for java, and it is recommended that the production environment be adjusted to more than 8G.
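+
+	   An illustrative value only (keep the other default options in JAVA_OPTS and change just the `-Xmx` heap size), in conf/fe.conf:
+
+	   ```
+	   JAVA_OPTS="-Xmx8192m ..."
+	   ```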
 
 * Start FE
 
diff --git a/content/_sources/documentation/en/internal/grouping_sets_design_EN.md.txt b/content/_sources/documentation/en/internal/grouping_sets_design_EN.md.txt
new file mode 100644
index 0000000..e1c82b4
--- /dev/null
+++ b/content/_sources/documentation/en/internal/grouping_sets_design_EN.md.txt
@@ -0,0 +1,494 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+# GROUPING SETS DESIGN
+
+## 1. GROUPING SETS Background
+
+The `CUBE`, `ROLLUP`, and `GROUPING` `SETS` extensions to SQL make querying and reporting easier and faster. `CUBE`, `ROLLUP`, and grouping sets produce a single result set that is equivalent to a `UNION` `ALL` of differently grouped rows. `ROLLUP` calculates aggregations such as `SUM`, `COUNT`, `MAX`, `MIN`, and `AVG` at increasing levels of aggregation, from the most detailed up to a grand total. `CUBE` is an extension similar to `ROLLUP`, enabling a single statement to calculate all p [...]
+To enhance performance, `CUBE`, `ROLLUP`, and `GROUPING SETS` can be parallelized: multiple processes can simultaneously execute all of these statements. These capabilities make aggregate calculations more efficient, thereby enhancing database performance and scalability.
+
+The three `GROUPING` functions help you identify the group each row belongs to and enable sorting subtotal rows and filtering results.
+
+### 1.1 GROUPING SETS Syntax
+
+`GROUPING SETS` syntax lets you define multiple groupings in the same query. `GROUP BY` computes all the groupings specified and combines them with `UNION ALL`. For example, consider the following statement:
+
+```
+SELECT k1, k2, SUM( k3 ) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k1), (k2), ( ) );
+```
+
+
+This statement is equivalent to:
+
+```
+SELECT k1, k2, SUM( k3 ) FROM t GROUP BY k1, k2
+UNION
+SELECT k1, null, SUM( k3 ) FROM t GROUP BY k1
+UNION
+SELECT null, k2, SUM( k3 ) FROM t GROUP BY k2
+UNION
+SELECT null, null, SUM( k3 ) FROM t
+```
+
+This is an example of real query:
+
+```
+mysql> SELECT * FROM t;
++------+------+------+
+| k1   | k2   | k3   |
++------+------+------+
+| a    | A    |    1 |
+| a    | A    |    2 |
+| a    | B    |    1 |
+| a    | B    |    3 |
+| b    | A    |    1 |
+| b    | A    |    4 |
+| b    | B    |    1 |
+| b    | B    |    5 |
++------+------+------+
+8 rows in set (0.01 sec)
+
+mysql> SELECT k1, k2, SUM(k3) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k2), (k1), ( ) );
++------+------+-----------+
+| k1   | k2   | sum(`k3`) |
++------+------+-----------+
+| b    | B    |         6 |
+| a    | B    |         4 |
+| a    | A    |         3 |
+| b    | A    |         5 |
+| NULL | B    |        10 |
+| NULL | A    |         8 |
+| a    | NULL |         7 |
+| b    | NULL |        11 |
+| NULL | NULL |        18 |
++------+------+-----------+
+9 rows in set (0.06 sec)
+```
+
+### 1.2 ROLLUP Syntax
+
+`ROLLUP` enables a `SELECT` statement to calculate multiple levels of subtotals across a specified group of dimensions. It also calculates a grand total. `ROLLUP` is a simple extension to the `GROUP` `BY` clause, so its syntax is extremely easy to use. The `ROLLUP` extension is highly efficient, adding minimal overhead to a query.
+
+`ROLLUP` appears in the `GROUP` `BY` clause in a `SELECT` statement. Its form is:
+
+```
+SELECT a, b,c, SUM( d ) FROM tab1 GROUP BY ROLLUP(a,b,c)
+```
+
+This statement is equivalent to the following GROUPING SETS:
+
+```
+GROUPING SETS (
+(a,b,c),
+( a, b ),
+( a),
+( )
+)
+```
+
+### 1.3 CUBE Syntax
+
+Like `ROLLUP`, `CUBE` generates all the subtotals that could be calculated for a data cube with the specified dimensions.
+
+```
+SELECT a, b,c, SUM( d ) FROM tab1 GROUP BY CUBE(a,b,c)
+```
+
+e.g. CUBE (a, b, c) is equivalent to the following GROUPING SETS:
+
+```
+GROUPING SETS (
+( a, b, c ),
+( a, b ),
+( a,    c ),
+( a       ),
+(    b, c ),
+(    b    ),
+(       c ),
+(         )
+)
+```
+
+### 1.4 GROUPING and GROUPING_ID Function
+
+`GROUPING` indicates whether a specified column expression in a `GROUP BY` list is aggregated or not. It returns 1 for aggregated or 0 for not aggregated in the result set. `GROUPING` can be used only in the `SELECT` list, `HAVING`, and `ORDER BY` clauses when `GROUP BY` is specified.
+
+`GROUPING_ID` describes which of a list of expressions are grouped in a row produced by a `GROUP BY` query. The `GROUPING_ID` function simply returns the decimal equivalent of the binary value formed as a result of the concatenation of the values returned by the `GROUPING` functions.
+
+Each `GROUPING_ID` argument must be an element of the `GROUP BY` list. `GROUPING_ID()` returns an **integer** bitmap whose lowest N bits may be lit. A lit **bit** indicates that the corresponding argument is not a grouping column for the given output row. The lowest-order **bit** corresponds to argument N, and the (N-1)th lowest-order **bit** corresponds to argument 1. If the column is a grouping column, the bit is 0; otherwise it is 1.
+
+For example:
+
+```
+mysql> select * from t;
++------+------+------+
+| k1   | k2   | k3   |
++------+------+------+
+| a    | A    |    1 |
+| a    | A    |    2 |
+| a    | B    |    1 |
+| a    | B    |    3 |
+| b    | A    |    1 |
+| b    | A    |    4 |
+| b    | B    |    1 |
+| b    | B    |    5 |
++------+------+------+
+```
+
+grouping sets result:
+
+```
+mysql> SELECT k1, k2, GROUPING(k1), GROUPING(k2), SUM(k3) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k2), (k1), ( ) );
++------+------+----------------+----------------+-----------+
+| k1   | k2   | grouping(`k1`) | grouping(`k2`) | sum(`k3`) |
++------+------+----------------+----------------+-----------+
+| a    | A    |              0 |              0 |         3 |
+| a    | B    |              0 |              0 |         4 |
+| a    | NULL |              0 |              1 |         7 |
+| b    | A    |              0 |              0 |         5 |
+| b    | B    |              0 |              0 |         6 |
+| b    | NULL |              0 |              1 |        11 |
+| NULL | A    |              1 |              0 |         8 |
+| NULL | B    |              1 |              0 |        10 |
+| NULL | NULL |              1 |              1 |        18 |
++------+------+----------------+----------------+-----------+
+9 rows in set (0.02 sec)
+
+mysql> SELECT k1, k2, GROUPING_ID(k1,k2), SUM(k3) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k2), (k1), ( ) );
++------+------+-------------------------+-----------+
+| k1   | k2   | grouping_id(`k1`, `k2`) | sum(`k3`) |
++------+------+-------------------------+-----------+
+| a    | A    |                       0 |         3 |
+| a    | B    |                       0 |         4 |
+| a    | NULL |                       1 |         7 |
+| b    | A    |                       0 |         5 |
+| b    | B    |                       0 |         6 |
+| b    | NULL |                       1 |        11 |
+| NULL | A    |                       2 |         8 |
+| NULL | B    |                       2 |        10 |
+| NULL | NULL |                       3 |        18 |
++------+------+-------------------------+-----------+
+9 rows in set (0.02 sec)
+
+mysql> SELECT k1, k2, grouping(k1), grouping(k2), GROUPING_ID(k1,k2), SUM(k4) FROM t GROUP BY GROUPING SETS ( (k1, k2), (k2), (k1), ( ) ) order by k1, k2;
++------+------+----------------+----------------+-------------------------+-----------+
+| k1   | k2   | grouping(`k1`) | grouping(`k2`) | grouping_id(`k1`, `k2`) | sum(`k4`) |
++------+------+----------------+----------------+-------------------------+-----------+
+| a    | A    |              0 |              0 |                       0 |         3 |
+| a    | B    |              0 |              0 |                       0 |         4 |
+| a    | NULL |              0 |              1 |                       1 |         7 |
+| b    | A    |              0 |              0 |                       0 |         5 |
+| b    | B    |              0 |              0 |                       0 |         6 |
+| b    | NULL |              0 |              1 |                       1 |        11 |
+| NULL | A    |              1 |              0 |                       2 |         8 |
+| NULL | B    |              1 |              0 |                       2 |        10 |
+| NULL | NULL |              1 |              1 |                       3 |        18 |
++------+------+----------------+----------------+-------------------------+-----------+
+9 rows in set (0.02 sec)
+
+```
+### 1.5 Composition and nesting of GROUPING SETS
+
+First of all, a GROUP BY clause is essentially a special case of GROUPING SETS, for example:
+
+```
+   GROUP BY a
+is equivalent to:
+   GROUP BY GROUPING SETS((a))
+also,
+   GROUP BY a,b,c
+is equivalent to:
+   GROUP BY GROUPING SETS((a,b,c))
+```
+
+Similarly, CUBE and ROLLUP can be expanded into GROUPING SETS, so the various combinations and nesting of GROUP BY, CUBE, ROLLUP, GROUPING SETS are essentially the combination and nesting of GROUPING SETS.
+
+For GROUPING SETS nesting, it is semantically equivalent to writing the nested statements directly in the outer clause. The reference (<https://www.brytlyt.com/documentation/data-manipulation-dml/grouping-sets-rollup-cube/>) mentions:
+
+```
+The CUBE and ROLLUP constructs can be used either directly in the GROUP BY clause, or nested inside a GROUPING SETS clause. If one GROUPING SETS clause is nested inside another, the effect is the same as if all the elements of the inner clause had been written directly in the outer clause.
+```
+
+For a combined list of multiple GROUPING SETS, many databases treat it as a cross product.
+
+For example:
+
+```
+GROUP BY a, CUBE (b, c), GROUPING SETS ((d), (e))
+
+is equivalent to:
+
+GROUP BY GROUPING SETS (
+(a, b, c, d), (a, b, c, e),
+(a, b, d),    (a, b, e),
+(a, c, d),    (a, c, e),
+(a, d),       (a, e)
+)
+```
+
+Support for combining and nesting GROUPING SETS varies across databases. For example, Snowflake does not support any combination or nesting.
+(<https://docs.snowflake.net/manuals/sql-reference/constructs/group-by.html>)
+
+Oracle supports both composition and nesting.
+(<https://docs.oracle.com/cd/B19306_01/server.102/b14223/aggreg.htm#i1006842>)
+
+Presto supports composition, but not nesting.
+(<https://prestodb.github.io/docs/current/sql/select.html>)
+
+## 2. Objective
+
+Support the `GROUPING SETS`, `ROLLUP` and `CUBE` syntax, implementing 1.1, 1.2, 1.3, 1.4 and 1.5 above. The combination and nesting of GROUPING SETS are not supported in the current version.
+
+### 2.1 GROUPING SETS Syntax
+
+```
+SELECT ...
+FROM ...
+[ ... ]
+GROUP BY GROUPING SETS ( groupSet [ , groupSet [ , ... ] ] )
+[ ... ]
+
+groupSet ::= { ( expr  [ , expr [ , ... ] ] )}
+
+<expr>
+Expression, column name.
+```
+
+### 2.2 ROLLUP Syntax
+
+```
+SELECT ...
+FROM ...
+[ ... ]
+GROUP BY ROLLUP ( expr  [ , expr [ , ... ] ] )
+[ ... ]
+
+<expr>
+Expression, column name.
+```
+
+### 2.3 CUBE Syntax
+
+```
+SELECT ...
+FROM ...
+[ ... ]
+GROUP BY CUBE ( expr  [ , expr [ , ... ] ] )
+[ ... ]
+
+<expr>
+Expression, column name.
+```
+
+## 3. Implementation
+
+### 3.1 Overall Design Approaches
+
+A `GROUPING SETS` query is equivalent to the `UNION` of several `GROUP BY` queries, so we can expand the input rows and run a single GROUP BY on the expanded rows.
+
+For example:
+
+```
+SELECT a, b FROM src GROUP BY a, b GROUPING SETS ((a, b), (a), (b), ());
+```
+
+Data in table src:
+
+```
+1, 2
+3, 4
+```
+
+Based on the GROUPING SETS, we can expand the input to:
+
+```
+1, 2       (GROUPING_ID: a, b -> 00 -> 0)
+1, null    (GROUPING_ID: a, null -> 01 -> 1)
+null, 2    (GROUPING_ID: null, b -> 10 -> 2)
+null, null (GROUPING_ID: null, null -> 11 -> 3)
+
+3, 4       (GROUPING_ID: a, b -> 00 -> 0)
+3, null    (GROUPING_ID: a, null -> 01 -> 1)
+null, 4    (GROUPING_ID: null, b -> 10 -> 2)
+null, null (GROUPING_ID: null, null -> 11 -> 3)
+```
+
+Then use those rows as input and GROUP BY a, b, GROUPING_ID.
+
+### 3.2 Example
+
+Table t:
+
+```
+mysql> select * from t;
++------+------+------+
+| k1   | k2   | k3   |
++------+------+------+
+| a    | A    |    1 |
+| a    | A    |    2 |
+| a    | B    |    1 |
+| a    | B    |    3 |
+| b    | A    |    1 |
+| b    | A    |    4 |
+| b    | B    |    1 |
+| b    | B    |    5 |
++------+------+------+
+8 rows in set (0.01 sec)
+```
+
+for the query:
+
+```
+SELECT k1, k2, GROUPING_ID(k1,k2), SUM(k3) FROM t GROUP BY GROUPING SETS ((k1, k2), (k1), (k2), ());
+```
+
+First, expand the input: every row is expanded into 4 rows (the size of the GROUPING SETS), and a GROUPING_ID column is inserted.
+
+e.g. the row (a, A, 1) is expanded to:
+
+```
++------+------+------+-------------------------+
+| k1   | k2   | k3   | GROUPING_ID(`k1`, `k2`) |
++------+------+------+-------------------------+
+| a    | A    |    1 |                       0 |
+| a    | NULL |    1 |                       1 |
+| NULL | A    |    1 |                       2 |
+| NULL | NULL |    1 |                       3 |
++------+------+------+-------------------------+
+```
+
+Finally, all rows are expanded as follows (32 rows):
+
+```
++------+------+------+-------------------------+
+| k1   | k2   | k3   | GROUPING_ID(`k1`, `k2`) |
++------+------+------+-------------------------+
+| a    | A    |    1 |                       0 |
+| a    | A    |    2 |                       0 |
+| a    | B    |    1 |                       0 |
+| a    | B    |    3 |                       0 |
+| b    | A    |    1 |                       0 |
+| b    | A    |    4 |                       0 |
+| b    | B    |    1 |                       0 |
+| b    | B    |    5 |                       0 |
+| a    | NULL |    1 |                       1 |
+| a    | NULL |    1 |                       1 |
+| a    | NULL |    2 |                       1 |
+| a    | NULL |    3 |                       1 |
+| b    | NULL |    1 |                       1 |
+| b    | NULL |    1 |                       1 |
+| b    | NULL |    4 |                       1 |
+| b    | NULL |    5 |                       1 |
+| NULL | A    |    1 |                       2 |
+| NULL | A    |    1 |                       2 |
+| NULL | A    |    2 |                       2 |
+| NULL | A    |    4 |                       2 |
+| NULL | B    |    1 |                       2 |
+| NULL | B    |    1 |                       2 |
+| NULL | B    |    3 |                       2 |
+| NULL | B    |    5 |                       2 |
+| NULL | NULL |    1 |                       3 |
+| NULL | NULL |    1 |                       3 |
+| NULL | NULL |    1 |                       3 |
+| NULL | NULL |    1 |                       3 |
+| NULL | NULL |    2 |                       3 |
+| NULL | NULL |    3 |                       3 |
+| NULL | NULL |    4 |                       3 |
+| NULL | NULL |    5 |                       3 |
++------+------+------+-------------------------+
+32 rows in set.
+```
+
+Now GROUP BY k1, k2, GROUPING_ID(k1, k2):
+
+```
++------+------+-------------------------+-----------+
+| k1   | k2   | grouping_id(`k1`, `k2`) | sum(`k3`) |
++------+------+-------------------------+-----------+
+| a    | A    |                       0 |         3 |
+| a    | B    |                       0 |         4 |
+| a    | NULL |                       1 |         7 |
+| b    | A    |                       0 |         5 |
+| b    | B    |                       0 |         6 |
+| b    | NULL |                       1 |        11 |
+| NULL | A    |                       2 |         8 |
+| NULL | B    |                       2 |        10 |
+| NULL | NULL |                       3 |        18 |
++------+------+-------------------------+-----------+
+9 rows in set (0.02 sec)
+```
+
+The result is equivalent to the following UNION ALL:
+
+```
+select k1, k2, sum(k3) from t group by k1, k2
+UNION ALL
+select NULL, k2, sum(k3) from t group by k2
+UNION ALL
+select k1, NULL, sum(k3) from t group by k1
+UNION ALL
+select NULL, NULL, sum(k3) from t;
+
++------+------+-----------+
+| k1   | k2   | sum(`k3`) |
++------+------+-----------+
+| b    | B    |         6 |
+| b    | A    |         5 |
+| a    | A    |         3 |
+| a    | B    |         4 |
+| a    | NULL |         7 |
+| b    | NULL |        11 |
+| NULL | B    |        10 |
+| NULL | A    |         8 |
+| NULL | NULL |        18 |
++------+------+-----------+
+9 rows in set (0.06 sec)
+```
+
+### 3.3 FE 
+
+#### 3.3.1 Tasks
+
+1. Add GroupByClause, replacing groupingExprs.
+2. Add the GROUPING SETS, CUBE and ROLLUP syntax.
+3. Add GroupByClause in SelectStmt.
+4. Add GroupingFunctionCallExpr, implementing the grouping and grouping_id function calls.
+5. Add VirtualSlot, generating the mapping between virtual slots and real slots.
+6. Add the virtual column GROUPING_ID and other virtual columns generated by grouping and grouping_id, and insert them into groupingExprs.
+7. Add a PlanNode named RepeatNode. For GROUPING SETS aggregation, insert a RepeatNode into the plan.
+
+#### 3.3.2 Tuple
+
+In order to add GROUPING_ID to groupingExprs in GroupByClause, we need to create a virtual SlotRef, and also a tuple for this slot, named the GROUPING\_\_ID tuple.
+
+For the plan node RepeatNode, its input is all the tuples of its children, and its output tuple holds the repeated data plus GROUPING_ID.
+
+
+#### 3.3.3 Expression and Function Substitution
+
+* `expr` -> `if(bitand(pos, grouping_id) = 0, expr, null)` for each expr in the extended grouping clause
+* `grouping_id()` -> `grouping_id(grouping_id)` for the grouping_id function
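+
+As an illustrative sketch only (the actual substitution is performed inside the planner, and `GROUPING__ID` below stands for the virtual slot introduced in 3.3.2): for `GROUP BY GROUPING SETS ((k1, k2), (k1))`, the select item `k2`, whose bit mask in GROUPING_ID is 1, would conceptually be rewritten as:
+
+```
+-- original select item
+SELECT k1, k2, SUM(k3) ...
+-- conceptual rewrite: k2 is NULLed out for grouping sets that do not group by k2
+SELECT k1, if(bitand(1, GROUPING__ID) = 0, k2, NULL), SUM(k3) ...
+```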
+
+### 3.4 BE
+
+#### 3.4.1 Tasks
+
+1. Add the RepeatNode executor, which expands the input data and appends GROUPING_ID to every row.
+2. Implement the grouping_id() and grouping() functions.
\ No newline at end of file
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/bitmap_EN.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/bitmap_EN.md.txt
index bdfc36a..3542e9f 100644
--- a/content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/bitmap_EN.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/bitmap_EN.md.txt
@@ -18,69 +18,122 @@ under the License.
 -->
 
 
-#BITMAP
+# BITMAP
 
-## description
-### Syntax
+## Create table
+
+The aggregation model needs to be used when creating the table. The data type is bitmap and the aggregation function is bitmap_union.
+```
+CREATE TABLE `pv_bitmap` (
+  `dt` int(11) NULL COMMENT "",
+  `page` varchar(10) NULL COMMENT "",
+  `user_id` bitmap BITMAP_UNION NULL COMMENT ""
+) ENGINE = OLAP
+AGGREGATE KEY(`dt`, `page`)
+COMMENT "OLAP"
+DISTRIBUTED BY HASH(`dt`) BUCKETS 2;
+```
+
+Note: When the amount of data is large, it is best to create a corresponding rollup table for high-frequency bitmap_union queries
+
+```
+ALTER TABLE pv_bitmap ADD ROLLUP pv (page, user_id);
+```
+
+## Data Load
 
-`TO_BITMAP(expr)` : Convert TINYINT,SMALLINT,INT type column to Bitmap.
+`TO_BITMAP(expr)`: Convert an unsigned bigint (0 to 18446744073709551615) to a bitmap containing that value
 
-`BITMAP_UNION(expr)` : Calculate the union of Bitmap, return the serialized Bitmap value.
+`BITMAP_EMPTY()`: Generate an empty bitmap column, used to fill the default value during insert or load
 
-`BITMAP_COUNT(expr)` : Calculate the distinct value number of a Bitmap.
+`BITMAP_HASH(expr)`: Convert a column of any type to a bitmap by hashing
 
-`BITMAP_UNION_INT(expr)` : Calculate the distinct value number of TINYINT,SMALLINT and INT type column. Same as COUNT(DISTINCT expr)
+### Stream Load
 
-`BITMAP_EMPTY()`: Generate empty bitmap column for insert into or load data.
+```
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,user_id, user_id=to_bitmap(user_id)" http://host:8410/api/test/testDb/_stream_load
+```
 
-Notice:
+```
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,user_id, user_id=bitmap_hash(user_id)" http://host:8410/api/test/testDb/_stream_load
+```
+
+```
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,user_id, user_id=bitmap_empty()" http://host:8410/api/test/testDb/_stream_load
+```
+
+### Insert Into
+
+id2's column type is bitmap
+```
+insert into bitmap_table1 select id, id2 from bitmap_table2;
+```
+
+id2's column type is bitmap
+```
+INSERT INTO bitmap_table1 (id, id2) VALUES (1001, to_bitmap(1000)), (1001, to_bitmap(2000));
+```
 
-	1. TO_BITMAP function only receives TINYINT,SMALLINT,INT.
-	2. BITMAP_UNION only receives following types of parameter:
-		- Column with BITMAP_UNION aggregate type in AGGREGATE KEY mode.
-		- TO_BITMAP function.
+id2's column type is bitmap
+```
+insert into bitmap_table1 select id, bitmap_union(id2) from bitmap_table2 group by id;
+```
 
-## example
+id2's column type is int
+```
+insert into bitmap_table1 select id, to_bitmap(id2) from table;
+```
 
+id2's column type is String
+```
+insert into bitmap_table1 select id, bitmap_hash(id_string) from table;
 ```
-CREATE TABLE `bitmap_udaf` (
-  `id` int(11) NULL COMMENT "",
-  `id2` int(11)
-) ENGINE=OLAP
-DUPLICATE KEY(`id`)
-DISTRIBUTED BY HASH(`id`) BUCKETS 10;
 
-mysql> select bitmap_count(bitmap_union(to_bitmap(id2))) from bitmap_udaf;
-+----------------------------------------------+
-| bitmap_count(bitmap_union(to_bitmap(`id2`))) |
-+----------------------------------------------+
-|                                            6 |
-+----------------------------------------------+
 
-mysql> select bitmap_union_int (id2) from bitmap_udaf;
-+-------------------------+
-| bitmap_union_int(`id2`) |
-+-------------------------+
-|                       6 |
-+-------------------------+
+## Data Query
+
+### Syntax
+
+
+`BITMAP_UNION(expr)`: Calculate the union of the input Bitmaps. The return value is the new Bitmap value.
+
+`BITMAP_UNION_COUNT(expr)`: Calculate the cardinality of the union of the input Bitmaps, equivalent to BITMAP_COUNT(BITMAP_UNION(expr)). It is recommended to use BITMAP_UNION_COUNT, as its performance is better than BITMAP_COUNT(BITMAP_UNION(expr)).
+
+`BITMAP_UNION_INT(expr)`: Count the number of different values in a column of type TINYINT, SMALLINT or INT; equivalent to COUNT(DISTINCT expr).
+
+`INTERSECT_COUNT(bitmap_column_to_count, filter_column, filter_values ...)`: Calculate the cardinality of the intersection of multiple bitmaps whose filter_column matches the given filter values.
+bitmap_column_to_count is a column of type bitmap, filter_column is the dimension column to filter on, and filter_values is a list of dimension values.
 
+### Example
 
-CREATE TABLE `bitmap_test` (
-  `id` int(11) NULL COMMENT "",
-  `id2` bitmap bitmap_union NULL 
-) ENGINE=OLAP
-AGGREGATE KEY(`id`)
-DISTRIBUTED BY HASH(`id`) BUCKETS 10;
+The following SQL uses the pv_bitmap table above as an example:
 
-mysql> select bitmap_count(bitmap_union(id2)) from bitmap_test;
-+-----------------------------------+
-| bitmap_count(bitmap_union(`id2`)) |
-+-----------------------------------+
-|                                 8 |
-+-----------------------------------+
+Calculate the deduplicated (distinct) count of user_id:
 
 ```
+select bitmap_union_count(user_id) from pv_bitmap;
+
+select bitmap_count(bitmap_union(user_id)) from pv_bitmap;
+```
+
+Calculate the deduplicated (distinct) count of id:
+
+```
+select bitmap_union_int(id) from pv_bitmap;
+```
+
+Calculate the retention of user_id:
+
+```
+select intersect_count(user_id, page, 'meituan') as meituan_uv,
+intersect_count(user_id, page, 'waimai') as waimai_uv,
+intersect_count(user_id, page, 'meituan', 'waimai') as retention -- users appearing on both the 'meituan' and 'waimai' pages
+from pv_bitmap
+where page in ('meituan', 'waimai');
+```
+
 
 ## keyword
 
-    BITMAP,BITMAP_COUNT,BITMAP_UNION,BITMAP_UNION_INT,TO_BITMAP
+BITMAP, BITMAP_COUNT, BITMAP_EMPTY, BITMAP_UNION, BITMAP_UNION_INT, TO_BITMAP, BITMAP_UNION_COUNT, INTERSECT_COUNT
\ No newline at end of file
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_and_EN.md.txt
similarity index 67%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_and_EN.md.txt
index 7c889d4..a7162e8 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_and_EN.md.txt
@@ -17,24 +17,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_and
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BITMAP BITMAP_AND(BITMAP lhs, BITMAP rhs)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+Compute the intersection of two input bitmaps and return the new bitmap.
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_count(bitmap_and(to_bitmap(1), to_bitmap(2))) cnt;
++------+
+| cnt  |
++------+
+|    0 |
++------+
+
+mysql> select bitmap_count(bitmap_and(to_bitmap(1), to_bitmap(1))) cnt;
++------+
+| cnt  |
++------+
+|    1 |
++------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_AND,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_contains_EN.md.txt
similarity index 67%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_contains_EN.md.txt
index 7c889d4..0cb2e06 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_contains_EN.md.txt
@@ -17,24 +17,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_contains
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BOOLEAN BITMAP_CONTAINS(BITMAP bitmap, BIGINT input)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+Calculates whether the input value is in the Bitmap column and returns a Boolean value.
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_contains(to_bitmap(1),2) cnt;
++------+
+| cnt  |
++------+
+|    0 |
++------+
+
+mysql> select bitmap_contains(to_bitmap(1),1) cnt;
++------+
+| cnt  |
++------+
+|    1 |
++------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_CONTAINS,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_empty_EN.md.txt
similarity index 60%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_empty_EN.md.txt
index 7c889d4..acb8d58 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_empty_EN.md.txt
@@ -17,24 +17,29 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_empty
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BITMAP BITMAP_EMPTY()`
 
+Return an empty bitmap. Mainly used to supply a default value for a bitmap column when loading, e.g.,
 
-用于返回满足要求的行的数目,或者非NULL行的数目
+```
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,v1,v2=bitmap_empty()"   http://host:8410/api/test/testDb/_stream_load
+```
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_count(bitmap_empty());
++------------------------------+
+| bitmap_count(bitmap_empty()) |
++------------------------------+
+|                            0 |
++------------------------------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_EMPTY,BITMAP
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_from_string.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_from_string.md.txt
new file mode 100644
index 0000000..76c72c3
--- /dev/null
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_from_string.md.txt
@@ -0,0 +1,56 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# bitmap_from_string
+
+## description
+### Syntax
+
+`BITMAP BITMAP_FROM_STRING(VARCHAR input)`
+
+Convert a string into a bitmap. The input string should be comma-separated UINT32 values.
+For example: the input string "0, 1, 2" will be converted to a Bitmap with bits 0, 1, 2 set.
+If the input string is invalid, return NULL.
+
+## example
+
+```
+mysql> select bitmap_to_string(bitmap_empty());
++----------------------------------+
+| bitmap_to_string(bitmap_empty()) |
++----------------------------------+
+|                                  |
++----------------------------------+
+
+mysql> select bitmap_to_string(bitmap_from_string("0, 1, 2"));
++-------------------------------------------------+
+| bitmap_to_string(bitmap_from_string('0, 1, 2')) |
++-------------------------------------------------+
+| 0,1,2                                           |
++-------------------------------------------------+
+
+mysql> select bitmap_from_string("-1, 0, 1, 2");
++-----------------------------------+
+| bitmap_from_string('-1, 0, 1, 2') |
++-----------------------------------+
+| NULL                              |
++-----------------------------------+
+```
+
+## keyword
+
+    BITMAP_FROM_STRING,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_has_any_EN.md.txt
similarity index 65%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_has_any_EN.md.txt
index 7c889d4..fc253d6 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_has_any_EN.md.txt
@@ -17,24 +17,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_has_any
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BOOLEAN BITMAP_HAS_ANY(BITMAP lhs, BITMAP rhs)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+Calculate whether there are intersecting elements in the two Bitmap columns. The return value is Boolean.
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_has_any(to_bitmap(1),to_bitmap(2)) cnt;
++------+
+| cnt  |
++------+
+|    0 |
++------+
+
+mysql> select bitmap_has_any(to_bitmap(1),to_bitmap(1)) cnt;
++------+
+| cnt  |
++------+
+|    1 |
++------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_HAS_ANY,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_hash_EN.md.txt
similarity index 54%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_hash_EN.md.txt
index 7c889d4..7c244ad 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_hash_EN.md.txt
@@ -17,24 +17,29 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_hash
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BITMAP BITMAP_HASH(expr)`
 
+Compute the 32-bit hash value of an expr of any type, then return a bitmap containing that hash value. Mainly used to load non-integer values into a bitmap column, e.g.,
 
-用于返回满足要求的行的数目,或者非NULL行的数目
+```
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,device_id, device_id=bitmap_hash(device_id)"   http://host:8410/api/test/testDb/_stream_load
+```
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_count(bitmap_hash('hello'));
++------------------------------------+
+| bitmap_count(bitmap_hash('hello')) |
++------------------------------------+
+|                                  1 |
++------------------------------------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_HASH,BITMAP
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_or_EN.md.txt
similarity index 68%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_or_EN.md.txt
index 7c889d4..5cfd58f 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_or_EN.md.txt
@@ -17,24 +17,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# bitmap_or
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BITMAP BITMAP_OR(BITMAP lhs, BITMAP rhs)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+Compute the union of two input bitmaps and return the new bitmap.
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_count(bitmap_or(to_bitmap(1), to_bitmap(2))) cnt;
++------+
+| cnt  |
++------+
+|    2 |
++------+
+
+mysql> select bitmap_count(bitmap_or(to_bitmap(1), to_bitmap(1))) cnt;
++------+
+| cnt  |
++------+
+|    1 |
++------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    BITMAP_OR,BITMAP
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_to_string.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_to_string.md.txt
new file mode 100644
index 0000000..fbaf4b2
--- /dev/null
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/bitmap_to_string.md.txt
@@ -0,0 +1,63 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# bitmap_to_string
+
+## description
+### Syntax
+
+`VARCHAR BITMAP_TO_STRING(BITMAP input)`
+
+Convert an input BITMAP to a string. The string is a comma-separated string containing all the set bits in the Bitmap.
+If the input is null, return null.
+
+## example
+
+```
+mysql> select bitmap_to_string(null);
++------------------------+
+| bitmap_to_string(NULL) |
++------------------------+
+| NULL                   |
++------------------------+
+
+mysql> select bitmap_to_string(bitmap_empty());
++----------------------------------+
+| bitmap_to_string(bitmap_empty()) |
++----------------------------------+
+|                                  |
++----------------------------------+
+
+mysql> select bitmap_to_string(to_bitmap(1));
++--------------------------------+
+| bitmap_to_string(to_bitmap(1)) |
++--------------------------------+
+|  1                             |
++--------------------------------+
+
+mysql> select bitmap_to_string(bitmap_or(to_bitmap(1), to_bitmap(2)));
++---------------------------------------------------------+
+| bitmap_to_string(bitmap_or(to_bitmap(1), to_bitmap(2))) |
++---------------------------------------------------------+
+|  1,2                                                    |
++---------------------------------------------------------+
+
+```
+
+## keyword
+
+    BITMAP_TO_STRING,BITMAP
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/index.rst.txt b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/index.rst.txt
new file mode 100644
index 0000000..23bf465
--- /dev/null
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/index.rst.txt
@@ -0,0 +1,8 @@
+================
+bitmap functions
+================
+
+.. toctree::
+    :glob:
+
+    *
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/to_bitmap_EN.md.txt
similarity index 57%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/to_bitmap_EN.md.txt
index 7c889d4..39f26ee 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/bitmap-functions/to_bitmap_EN.md.txt
@@ -17,24 +17,29 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# to_bitmap
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BITMAP TO_BITMAP(expr)`
 
+Convert an unsigned bigint (ranging from 0 to 18446744073709551615) to a bitmap containing that value. Mainly used to load integer values into a bitmap column, e.g.,
 
-用于返回满足要求的行的数目,或者非NULL行的数目
+```
+cat data | curl --location-trusted -u user:passwd -T - -H "columns: dt,page,user_id, user_id=to_bitmap(user_id)"   http://host:8410/api/test/testDb/_stream_load
+```
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select bitmap_count(to_bitmap(10));
++-----------------------------+
+| bitmap_count(to_bitmap(10)) |
++-----------------------------+
+|                           1 |
++-----------------------------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    TO_BITMAP,BITMAP
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/count_distinct_EN.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/curdate_EN.md.txt
similarity index 68%
copy from content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/count_distinct_EN.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/curdate_EN.md.txt
index 3f7259e..cfa2a54 100644
--- a/content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/count_distinct_EN.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/curdate_EN.md.txt
@@ -17,24 +17,30 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# curdate
 ## Description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`DATE CURDATE()`
 
-
-The number of rows used to return the required number, or the number of non-NULL rows
+Get the current date and return it in Date type
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> SELECT CURDATE();
++------------+
+| CURDATE()  |
++------------+
+| 2019-12-20 |
++------------+
+
+mysql> SELECT CURDATE() + 0;
++---------------+
+| CURDATE() + 0 |
++---------------+
+|      20191220 |
++---------------+
 ```
 ##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+CURDATE
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/hour_EN.md.txt
similarity index 70%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/hour_EN.md.txt
index 2da13c8..b245269 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/hour_EN.md.txt
@@ -17,25 +17,25 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
+# hour
 ## description
 ### Syntax
 
-`INT DAY(DATETIME date)`
+`INT HOUR(DATETIME date)`
 
+Returns the hour part of the time, ranging from 0 to 23.
 
-获得日期中的天信息,返回值范围从1-31。
-
-参数为Date或者Datetime类型
+The parameter is Date or Datetime type
 
 ## example
 
 ```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
+mysql> select hour('2018-12-31 23:59:59');
++-----------------------------+
+| hour('2018-12-31 23:59:59') |
++-----------------------------+
+|                          23 |
++-----------------------------+
+```
 ##keyword
-DAY
+HOUR
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/minute_EN.md.txt
similarity index 69%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/minute_EN.md.txt
index 2da13c8..c43c26c 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/minute_EN.md.txt
@@ -17,25 +17,25 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
+# minute
 ## description
 ### Syntax
 
-`INT DAY(DATETIME date)`
+`INT MINUTE(DATETIME date)`
 
+Returns the minute part of the time, ranging from 0 to 59.
 
-获得日期中的天信息,返回值范围从1-31。
-
-参数为Date或者Datetime类型
+The parameter is Date or Datetime type
 
 ## example
 
 ```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
+mysql> select minute('2018-12-31 23:59:59');
++-------------------------------+
+| minute('2018-12-31 23:59:59') |
++-------------------------------+
+|                            59 |
++-------------------------------+
+```
 ##keyword
-DAY
+MINUTE
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/second_EN.md.txt
similarity index 69%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/second_EN.md.txt
index 2da13c8..8318a1c 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/second_EN.md.txt
@@ -17,25 +17,25 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
+# second
 ## description
 ### Syntax
 
-`INT DAY(DATETIME date)`
+`INT SECOND(DATETIME date)`
 
+Returns the second of the given date/time value, ranging from 0 to 59.
 
-获得日期中的天信息,返回值范围从1-31。
-
-参数为Date或者Datetime类型
+The parameter is of Date or Datetime type.
 
 ## example
 
 ```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
+mysql> select second('2018-12-31 23:59:59');
++-------------------------------+
+| second('2018-12-31 23:59:59') |
++-------------------------------+
+|                            59 |
++-------------------------------+
+```
 ##keyword
-DAY
+SECOND
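
A combined sketch of the three extraction functions added above (HOUR, MINUTE, SECOND); the timestamp literal is arbitrary.

```
-- Extract hour, minute, and second from the same DATETIME literal.
SELECT HOUR('2018-12-31 23:59:59')   AS h,
       MINUTE('2018-12-31 23:59:59') AS m,
       SECOND('2018-12-31 23:59:59') AS s;
-- Per the descriptions above, this should return h = 23, m = 59, s = 59.
```
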
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/timestampadd_EN.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/timestampadd_EN.md.txt
new file mode 100644
index 0000000..71dcd7b
--- /dev/null
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/timestampadd_EN.md.txt
@@ -0,0 +1,51 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# timestampadd
+## description
+### Syntax
+
+`DATETIME TIMESTAMPADD(unit, interval, DATETIME datetime_expr)`
+
+Adds the integer expression interval to the date or datetime expression datetime_expr. 
+
+The unit for interval is given by the unit argument, which should be one of the following values: 
+
+SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, or YEAR.
+
+## example
+
+```
+
+mysql> SELECT TIMESTAMPADD(MINUTE,1,'2019-01-02');
++------------------------------------------------+
+| timestampadd(MINUTE, 1, '2019-01-02 00:00:00') |
++------------------------------------------------+
+| 2019-01-02 00:01:00                            |
++------------------------------------------------+
+
+mysql> SELECT TIMESTAMPADD(WEEK,1,'2019-01-02');
++----------------------------------------------+
+| timestampadd(WEEK, 1, '2019-01-02 00:00:00') |
++----------------------------------------------+
+| 2019-01-09 00:00:00                          |
++----------------------------------------------+
+```
+##keyword
+TIMESTAMPADD
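
A further sketch using the DAY unit listed above; the expected output is inferred by analogy with the WEEK example and has not been run.

```
-- Add three days to a DATETIME expression.
SELECT TIMESTAMPADD(DAY, 3, '2019-01-02');
-- Expected, by analogy with the WEEK example above: 2019-01-05 00:00:00
```
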
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/timestampdiff_EN.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/timestampdiff_EN.md.txt
new file mode 100644
index 0000000..2eb8c00
--- /dev/null
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/date-time-functions/timestampdiff_EN.md.txt
@@ -0,0 +1,60 @@
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# timestampdiff
+## description
+### Syntax
+
+`INT TIMESTAMPDIFF(unit,DATETIME datetime_expr1, DATETIME datetime_expr2)`
+
+Returns datetime_expr2 − datetime_expr1, where datetime_expr1 and datetime_expr2 are date or datetime expressions. 
+
+The unit for the result (an integer) is given by the unit argument.
+ 
+The legal values for unit are the same as those listed in the description of the TIMESTAMPADD() function.
+
+## example
+
+```
+
+MySQL> SELECT TIMESTAMPDIFF(MONTH,'2003-02-01','2003-05-01');
++--------------------------------------------------------------------+
+| timestampdiff(MONTH, '2003-02-01 00:00:00', '2003-05-01 00:00:00') |
++--------------------------------------------------------------------+
+|                                                                  3 |
++--------------------------------------------------------------------+
+
+MySQL> SELECT TIMESTAMPDIFF(YEAR,'2002-05-01','2001-01-01');
++-------------------------------------------------------------------+
+| timestampdiff(YEAR, '2002-05-01 00:00:00', '2001-01-01 00:00:00') |
++-------------------------------------------------------------------+
+|                                                                -1 |
++-------------------------------------------------------------------+
+
+
+MySQL> SELECT TIMESTAMPDIFF(MINUTE,'2003-02-01','2003-05-01 12:05:55');
++---------------------------------------------------------------------+
+| timestampdiff(MINUTE, '2003-02-01 00:00:00', '2003-05-01 12:05:55') |
++---------------------------------------------------------------------+
+|                                                              128885 |
++---------------------------------------------------------------------+
+
+```
+##keyword
+TIMESTAMPDIFF
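
A sketch illustrating the argument order (the result is datetime_expr2 minus datetime_expr1); the expected values follow directly from that definition and have not been run.

```
-- Reversing the arguments flips the sign of the result.
SELECT TIMESTAMPDIFF(DAY, '2019-01-02', '2019-01-09');  -- expected: 7
SELECT TIMESTAMPDIFF(DAY, '2019-01-09', '2019-01-02');  -- expected: -7
```
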
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/hash-functions/index.rst.txt b/content/_sources/documentation/en/sql-reference/sql-functions/hash-functions/index.rst.txt
new file mode 100644
index 0000000..5de92cd
--- /dev/null
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/hash-functions/index.rst.txt
@@ -0,0 +1,8 @@
+=======================
+Hash Functions
+=======================
+
+.. toctree::
+    :glob:
+
+    *
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/hash-functions/murmur_hash3_32.md.txt
similarity index 52%
rename from content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
rename to content/_sources/documentation/en/sql-reference/sql-functions/hash-functions/murmur_hash3_32.md.txt
index 7c889d4..1be7ff0 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/aggregate-functions/count_distinct.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/hash-functions/murmur_hash3_32.md.txt
@@ -6,9 +6,7 @@ regarding copyright ownership.  The ASF licenses this file
 to you under the Apache License, Version 2.0 (the
 "License"); you may not use this file except in compliance
 with the License.  You may obtain a copy of the License at
-
   http://www.apache.org/licenses/LICENSE-2.0
-
 Unless required by applicable law or agreed to in writing,
 software distributed under the License is distributed on an
 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@@ -17,24 +15,40 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# murmur_hash3_32
+
 ## description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`INT MURMUR_HASH3_32(VARCHAR input, ...)`
 
-
-用于返回满足要求的行的数目,或者非NULL行的数目
+Returns the 32-bit murmur3 hash of the input string(s).
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select murmur_hash3_32(null);
++-----------------------+
+| murmur_hash3_32(NULL) |
++-----------------------+
+|                  NULL |
++-----------------------+
+
+mysql> select murmur_hash3_32("hello");
++--------------------------+
+| murmur_hash3_32('hello') |
++--------------------------+
+|               1321743225 |
++--------------------------+
+
+mysql> select murmur_hash3_32("hello", "world");
++-----------------------------------+
+| murmur_hash3_32('hello', 'world') |
++-----------------------------------+
+|                         984713481 |
++-----------------------------------+
 ```
-##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+
+## keyword
+
+    MURMUR_HASH3_32,HASH
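
One common use of a hash function like this is deriving a stable bucket id. The sketch below assumes the MySQL-style % modulo operator and a made-up key, neither of which is described in the patch above; note the INT hash can be negative, so the bucket id may be negative as well.

```
-- Map a string key to one of 16 buckets (modulus chosen arbitrarily).
SELECT murmur_hash3_32('some_key') % 16 AS bucket_id;
```
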
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/index.rst.txt b/content/_sources/documentation/en/sql-reference/sql-functions/index.rst.txt
index cbabe87..d41a980 100644
--- a/content/_sources/documentation/en/sql-reference/sql-functions/index.rst.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/index.rst.txt
@@ -14,3 +14,5 @@ SQL Functions
     spatial-functions/index
     string-functions/index
     aggregate-functions/index
+    bitmap-functions/index
+    hash-functions/index
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/count_distinct_EN.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/string-functions/ends_with_EN.md.txt
similarity index 54%
copy from content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/count_distinct_EN.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-functions/string-functions/ends_with_EN.md.txt
index 3f7259e..7da9bd5 100644
--- a/content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/count_distinct_EN.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/string-functions/ends_with_EN.md.txt
@@ -17,24 +17,31 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# ends_with
 ## Description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BOOLEAN ENDS_WITH (VARCHAR str, VARCHAR suffix)`
 
-
-The number of rows used to return the required number, or the number of non-NULL rows
+It returns true if the string ends with the specified suffix, otherwise it returns false. 
+If any parameter is NULL, it returns NULL.
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+mysql> select ends_with("Hello doris", "doris");
++-----------------------------------+
+| ends_with('Hello doris', 'doris') |
++-----------------------------------+
+|                                 1 | 
++-----------------------------------+
+
+mysql> select ends_with("Hello doris", "Hello");
++-----------------------------------+
+| ends_with('Hello doris', 'Hello') |
++-----------------------------------+
+|                                 0 | 
++-----------------------------------+
 ```
 ##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+ENDS_WITH
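
A filtering sketch for ends_with(); the table `files` and column `name` are hypothetical and not part of the patch above.

```
-- Keep only rows whose name ends with '.csv'.
SELECT name FROM files WHERE ends_with(name, '.csv');
```
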
diff --git a/content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/count_distinct_EN.md.txt b/content/_sources/documentation/en/sql-reference/sql-functions/string-functions/starts_with_EN.md.txt
similarity index 52%
rename from content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/count_distinct_EN.md.txt
rename to content/_sources/documentation/en/sql-reference/sql-functions/string-functions/starts_with_EN.md.txt
index 3f7259e..81f1456 100644
--- a/content/_sources/documentation/en/sql-reference/sql-functions/aggregate-functions/count_distinct_EN.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-functions/string-functions/starts_with_EN.md.txt
@@ -1,4 +1,4 @@
-<!-- 
+<!--
 Licensed to the Apache Software Foundation (ASF) under one
 or more contributor license agreements.  See the NOTICE file
 distributed with this work for additional information
@@ -17,24 +17,31 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# COUNT_DISTINCT
+# starts_with
 ## Description
 ### Syntax
 
-`COUNT_DISTINCT(expr)`
+`BOOLEAN STARTS_WITH (VARCHAR str, VARCHAR prefix)`
 
-
-The number of rows used to return the required number, or the number of non-NULL rows
+It returns true if the string starts with the specified prefix, otherwise it returns false.
+If any parameter is NULL, it returns NULL.
 
 ## example
 
 ```
-MySQL > select count_distinct(query_id) from log_statis group by datetime;
-+----------------------------+
-| count_distinct(`query_id`) |
-+----------------------------+
-|                        577 |
-+----------------------------+
+MySQL [(none)]> select starts_with("hello world","hello");
++-------------------------------------+
+| starts_with('hello world', 'hello') |
++-------------------------------------+
+|                                   1 |
++-------------------------------------+
+
+MySQL [(none)]> select starts_with("hello world","world");
++-------------------------------------+
+| starts_with('hello world', 'world') |
++-------------------------------------+
+|                                   0 |
++-------------------------------------+
 ```
 ##keyword
-COUNT_DISTINCT,COUNT,DISTINCT
+STARTS_WITH
\ No newline at end of file
diff --git a/content/_sources/documentation/en/sql-reference/sql-statements/Account Management/DROP USER_EN.md.txt b/content/_sources/documentation/en/sql-reference/sql-statements/Account Management/DROP USER_EN.md.txt
index 87faafd..3e69a39 100644
--- a/content/_sources/documentation/en/sql-reference/sql-statements/Account Management/DROP USER_EN.md.txt	
+++ b/content/_sources/documentation/en/sql-reference/sql-statements/Account Management/DROP USER_EN.md.txt	
@@ -22,15 +22,21 @@ under the License.
 
 Syntax:
 
-DROP USER 'user_name'
+    DROP USER 'user_identity'
 
-The DROP USER command deletes a Palo user. Doris does not support deleting the specified user_identity here. When a specified user is deleted, all user_identities corresponding to that user are deleted. For example, two users, Jack @'192%'and Jack @['domain'] were created through the CREATE USER statement. After DROP USER'jack' was executed, Jack @'192%'and Jack @['domain'] would be deleted.
+    `user_identity`:
+
+        user@'host'
+        user@['domain']
+
+    Drop a specified user identity.
 
 ## example
 
-1. Delete user jack
+1. Delete user jack@'192.%'
 
-DROP USER 'jack'
+    DROP USER 'jack'@'192.%'
 
 ## keyword
-DROP, USER
+
+    DROP, USER
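
A sketch of the second identity form shown in the syntax above; the domain name is hypothetical.

```
-- Drop a user identity that was defined with a domain.
DROP USER 'jack'@['example_domain'];
```
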
diff --git a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt b/content/_sources/documentation/en/sql-reference/sql-statements/Administration/SHOW INDEX_EN.md.txt
similarity index 70%
copy from content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
copy to content/_sources/documentation/en/sql-reference/sql-statements/Administration/SHOW INDEX_EN.md.txt
index 2da13c8..bae48c7 100644
--- a/content/_sources/documentation/cn/sql-reference/sql-functions/date-time-functions/day.md.txt
+++ b/content/_sources/documentation/en/sql-reference/sql-statements/Administration/SHOW INDEX_EN.md.txt	
@@ -17,25 +17,19 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# day
-## description
-### Syntax
+# SHOW INDEX
 
-`INT DAY(DATETIME date)`
+## description
 
+    This statement is used to show all indexes (only bitmap indexes in the current version) of a table.
+    Syntax:
+        SHOW INDEX[ES] FROM [db_name.]table_name;
 
-获得日期中的天信息,返回值范围从1-31。
+## example
 
-参数为Date或者Datetime类型
+    1. Display all indexes of table table_name
+        SHOW INDEX FROM example_db.table_name;
 
-## example
+## keyword
 
-```
-mysql> select day('1987-01-31');
-+----------------------------+
-| day('1987-01-31 00:00:00') |
-+----------------------------+
-|                         31 |
-+----------------------------+
-##keyword
-DAY
+    SHOW,INDEX
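
For completeness, a sketch of the INDEXES spelling allowed by the syntax above, relying on the current database instead of a qualified name; `table_name` is a placeholder.

```
SHOW INDEXES FROM table_name;
```
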
diff --git a/content/_sources/documentation/en/sql-reference/sql-statements/Data Definition/ALTER TABLE_EN.md.txt b/content/_sources/documentation/en/sql-reference/sql-statements/Data Definition/ALTER TABLE_EN.md.txt
index eb982dc..bfbed30 100644
--- a/content/_sources/documentation/en/sql-reference/sql-statements/Data Definition/ALTER TABLE_EN.md.txt	
+++ b/content/_sources/documentation/en/sql-reference/sql-statements/Data Definition/ALTER TABLE_EN.md.txt	
@@ -18,264 +18,322 @@ under the License.
 -->
 
 # ALTER TABLE
+
 ## description
-    This statement is used to modify an existing table. If no rollup index is specified, the base operation is the default.
-    The statement is divided into three types of operations: schema change, rollup, partition
-    These three types of operations cannot appear in an ALTER TABLE statement at the same time.
-    Where schema change and rollup are asynchronous operations and are returned if the task commits successfully. You can then use the SHOW ALTER command to view the progress.
-    Partition is a synchronous operation, and a command return indicates that execution is complete.
-
-    grammar:
-        ALTER TABLE [database.]table
-        Alter_clause1[, alter_clause2, ...];
-
-    The alter_clause is divided into partition, rollup, schema change, and rename.
-
-    Partition supports the following modifications
-    Increase the partition
-        grammar:
-            ADD PARTITION [IF NOT EXISTS] partition_name
-            Partition_desc ["key"="value"]
-            [DISTRIBUTED BY HASH (k1[,k2 ...]) [BUCKETS num]]
-        note:
-            1) partition_desc supports two ways of writing:
-                * VALUES LESS THAN [MAXVALUE|("value1", ...)]
-                * VALUES [("value1", ...), ("value1", ...))
-            1) The partition is the left closed right open interval. If the user only specifies the right boundary, the system will automatically determine the left boundary.
-            2) If the bucket mode is not specified, the bucket method used by the built-in table is automatically used.
-            3) If the bucket mode is specified, only the bucket number can be modified, and the bucket mode or bucket column cannot be modified.
-            4) ["key"="value"] section can set some properties of the partition, see CREATE TABLE for details.
-
-    2. Delete the partition
-        grammar:
-            DROP PARTITION [IF EXISTS] partition_name
-        note:
-            1) Use a partitioned table to keep at least one partition.
-            2) Execute DROP PARTITION For a period of time, the deleted partition can be recovered by the RECOVER statement. See the RECOVER statement for details.
-            
-    3. Modify the partition properties
-        grammar:
-            MODIFY PARTITION partition_name SET ("key" = "value", ...)
-        Description:
-            1) The storage_medium, storage_cooldown_time, and replication_num attributes of the modified partition are currently supported.
-            2) For single-partition tables, partition_name is the same as the table name.
-        
-    Rollup supports the following ways to create:
-    1. Create a rollup index
-        grammar:
-            ADD ROLLUP rollup_name (column_name1, column_name2, ...)
-            [FROM from_index_name]
-            [PROPERTIES ("key"="value", ...)]
-        note:
-            1) If from_index_name is not specified, it is created by default from base index
-            2) The columns in the rollup table must be existing columns in from_index
-            3) In properties, you can specify the storage format. See CREATE TABLE for details.
-            
-    2. Delete the rollup index
-        grammar:
-            DROP ROLLUP rollup_name
-            [PROPERTIES ("key"="value", ...)]
-        note:
-            1) Cannot delete base index
-            2) Execute DROP ROLLUP For a period of time, the deleted rollup index can be restored by the RECOVER statement. See the RECOVER statement for details.
-    
-            
-    Schema change supports the following modifications:
-    1. Add a column to the specified location of the specified index
-        grammar:
-            ADD COLUMN column_name column_type [KEY | agg_type] [DEFAULT "default_value"]
-            [AFTER column_name|FIRST]
-            [TO rollup_index_name]
-            [PROPERTIES ("key"="value", ...)]
-        note:
-            1) Aggregate model If you add a value column, you need to specify agg_type
-            2) Non-aggregate models (such as DUPLICATE KEY) If you add a key column, you need to specify the KEY keyword.
-            3) You cannot add a column that already exists in the base index to the rollup index
-                Recreate a rollup index if needed
-            
-    2. Add multiple columns to the specified index
-        grammar:
-            ADD COLUMN (column_name1 column_type [KEY | agg_type] DEFAULT "default_value", ...)
-            [TO rollup_index_name]
-            [PROPERTIES ("key"="value", ...)]
-        note:
... 27900 lines suppressed ...


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org