Posted to commits@inlong.apache.org by do...@apache.org on 2021/11/16 03:39:49 UTC

[incubator-inlong-website] branch master updated: [INLONG-1512] Document Version Management (#185)

This is an automated email from the ASF dual-hosted git repository.

dockerzhang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-inlong-website.git


The following commit(s) were added to refs/heads/master by this push:
     new a72cce9  [INLONG-1512] Document Version Management (#185)
a72cce9 is described below

commit a72cce91489a04f4aaee7b7c895686172747949a
Author: lizhwang <88...@users.noreply.github.com>
AuthorDate: Tue Nov 16 11:37:56 2021 +0800

    [INLONG-1512] Document Version Management (#185)
---
 README.md                                          |   7 +
 docusaurus.config.js                               |  17 +-
 .../version-0.11.0.json                            |  46 ++
 .../version-0.11.0/modules/agent/architecture.md   |  48 ++
 .../modules/agent/img/architecture.png             | Bin 0 -> 43613 bytes
 .../version-0.11.0/modules/agent/quick_start.md    | 183 ++++
 .../modules/dataproxy-sdk/architecture.md          |  65 ++
 .../modules/dataproxy-sdk/quick_start.md           |  12 +
 .../modules/dataproxy/architecture.md              | 150 ++++
 .../modules/dataproxy/img/architecture.png         | Bin 0 -> 431999 bytes
 .../modules/dataproxy/quick_start.md               |  58 ++
 .../version-0.11.0/modules/manager/architecture.md |  33 +
 .../modules/manager/img/datamodel.jpg              | Bin 0 -> 88671 bytes
 .../modules/manager/img/inlong-manager.png         | Bin 0 -> 73086 bytes
 .../modules/manager/img/interactive.jpg            | Bin 0 -> 67852 bytes
 .../version-0.11.0/modules/manager/quick_start.md  |  85 ++
 .../version-0.11.0/modules/sort/img.png            | Bin 0 -> 10583 bytes
 .../version-0.11.0/modules/sort/introduction.md    |  42 +
 .../modules/sort/protocol_introduction.md          |  25 +
 .../version-0.11.0/modules/sort/quick_start.md     |  64 ++
 .../http_access_api_definition_cn.xls              | Bin 0 -> 200704 bytes
 .../version-0.11.0/modules/tubemq/architecture.md  |  84 ++
 .../version-0.11.0/modules/tubemq/client_rpc.md    | 197 +++++
 .../version-0.11.0/modules/tubemq/clients_java.md  | 231 ++++++
 .../modules/tubemq/configure_introduction.md       | 151 ++++
 .../modules/tubemq/console_introduction.md         | 118 +++
 .../modules/tubemq/consumer_example.md             |  82 ++
 .../version-0.11.0/modules/tubemq/deployment.md    | 157 ++++
 .../version-0.11.0/modules/tubemq/error_code.md    | 111 +++
 .../modules/tubemq/http_access_api.md              |  20 +
 .../modules/tubemq/img/api_interface/http-api.png  | Bin 0 -> 85071 bytes
 .../tubemq/img/client_rpc/rpc_broker_info.png      | Bin 0 -> 20919 bytes
 .../tubemq/img/client_rpc/rpc_bytes_def.png        | Bin 0 -> 38706 bytes
 .../tubemq/img/client_rpc/rpc_conn_detail.png      | Bin 0 -> 30322 bytes
 .../tubemq/img/client_rpc/rpc_consumer_diagram.png | Bin 0 -> 48407 bytes
 .../img/client_rpc/rpc_convert_topicinfo.png       | Bin 0 -> 43133 bytes
 .../tubemq/img/client_rpc/rpc_event_proto.png      | Bin 0 -> 11275 bytes
 .../img/client_rpc/rpc_event_proto_optype.png      | Bin 0 -> 92896 bytes
 .../img/client_rpc/rpc_event_proto_status.png      | Bin 0 -> 93691 bytes
 .../tubemq/img/client_rpc/rpc_header_fill.png      | Bin 0 -> 156495 bytes
 .../tubemq/img/client_rpc/rpc_inner_structure.png  | Bin 0 -> 24843 bytes
 .../img/client_rpc/rpc_master_authorizedinfo.png   | Bin 0 -> 6689 bytes
 .../tubemq/img/client_rpc/rpc_message_data.png     | Bin 0 -> 23773 bytes
 .../tubemq/img/client_rpc/rpc_pbmsg_structure.png  | Bin 0 -> 11652 bytes
 .../tubemq/img/client_rpc/rpc_producer_close2M.png | Bin 0 -> 13375 bytes
 .../tubemq/img/client_rpc/rpc_producer_diagram.png | Bin 0 -> 44307 bytes
 .../img/client_rpc/rpc_producer_heartbeat2M.png    | Bin 0 -> 27314 bytes
 .../img/client_rpc/rpc_producer_register2M.png     | Bin 0 -> 24320 bytes
 .../img/client_rpc/rpc_producer_sendmsg2B.png      | Bin 0 -> 23692 bytes
 .../tubemq/img/client_rpc/rpc_proto_def.png        | Bin 0 -> 4798 bytes
 .../modules/tubemq/img/configure/conf_ini_pos.png  | Bin 0 -> 26192 bytes
 .../tubemq/img/configure/conf_velocity_pos.png     | Bin 0 -> 21544 bytes
 .../modules/tubemq/img/console/1568169770714.png   | Bin 0 -> 21062 bytes
 .../modules/tubemq/img/console/1568169796122.png   | Bin 0 -> 13461 bytes
 .../modules/tubemq/img/console/1568169806810.png   | Bin 0 -> 15847 bytes
 .../modules/tubemq/img/console/1568169823675.png   | Bin 0 -> 13307 bytes
 .../modules/tubemq/img/console/1568169839931.png   | Bin 0 -> 21185 bytes
 .../modules/tubemq/img/console/1568169851085.png   | Bin 0 -> 35596 bytes
 .../modules/tubemq/img/console/1568169863402.png   | Bin 0 -> 17502 bytes
 .../modules/tubemq/img/console/1568169879529.png   | Bin 0 -> 19652 bytes
 .../modules/tubemq/img/console/1568169889594.png   | Bin 0 -> 20553 bytes
 .../modules/tubemq/img/console/1568169900634.png   | Bin 0 -> 26003 bytes
 .../modules/tubemq/img/console/1568169908522.png   | Bin 0 -> 18358 bytes
 .../modules/tubemq/img/console/1568169916091.png   | Bin 0 -> 20093 bytes
 .../modules/tubemq/img/console/1568169925657.png   | Bin 0 -> 18024 bytes
 .../modules/tubemq/img/console/1568169946683.png   | Bin 0 -> 20407 bytes
 .../modules/tubemq/img/console/1568169954746.png   | Bin 0 -> 30020 bytes
 .../tubemq/img/development/create_pull_request.png | Bin 0 -> 216800 bytes
 .../img/development/github_fork_repository.png     | Bin 0 -> 207753 bytes
 .../tubemq/img/development/jira_create_issue.png   | Bin 0 -> 140548 bytes
 .../modules/tubemq/img/development/jira_filter.png | Bin 0 -> 273110 bytes
 .../img/development/jira_resolve_issue_1.png       | Bin 0 -> 224601 bytes
 .../img/development/jira_resolve_issue_2.png       | Bin 0 -> 122300 bytes
 .../tubemq/img/development/new_pull_request.png    | Bin 0 -> 231812 bytes
 .../modules/tubemq/img/mqs_comare.png              | Bin 0 -> 82005 bytes
 .../modules/tubemq/img/perf_appendix_1_bx1_1.png   | Bin 0 -> 74263 bytes
 .../modules/tubemq/img/perf_appendix_1_bx1_2.png   | Bin 0 -> 74022 bytes
 .../modules/tubemq/img/perf_appendix_1_bx1_3.png   | Bin 0 -> 59398 bytes
 .../modules/tubemq/img/perf_appendix_1_bx1_4.png   | Bin 0 -> 57854 bytes
 .../modules/tubemq/img/perf_appendix_1_cg1_1.png   | Bin 0 -> 407483 bytes
 .../modules/tubemq/img/perf_appendix_1_cg1_2.png   | Bin 0 -> 398721 bytes
 .../modules/tubemq/img/perf_appendix_1_cg1_3.png   | Bin 0 -> 40886 bytes
 .../modules/tubemq/img/perf_appendix_1_cg1_4.png   | Bin 0 -> 39318 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_1.png    | Bin 0 -> 87549 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_2.png    | Bin 0 -> 124450 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_3.png    | Bin 0 -> 63570 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_4.png    | Bin 0 -> 65748 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_5.png    | Bin 0 -> 68593 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_6.png    | Bin 0 -> 68854 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_7.png    | Bin 0 -> 88648 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_8.png    | Bin 0 -> 67459 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_9.png    | Bin 0 -> 61548 bytes
 .../tubemq/img/perf_appendix_2_topic_100_1.png     | Bin 0 -> 89756 bytes
 .../tubemq/img/perf_appendix_2_topic_100_2.png     | Bin 0 -> 89267 bytes
 .../tubemq/img/perf_appendix_2_topic_100_3.png     | Bin 0 -> 69181 bytes
 .../tubemq/img/perf_appendix_2_topic_100_4.png     | Bin 0 -> 76628 bytes
 .../tubemq/img/perf_appendix_2_topic_100_5.png     | Bin 0 -> 65706 bytes
 .../tubemq/img/perf_appendix_2_topic_100_6.png     | Bin 0 -> 72361 bytes
 .../tubemq/img/perf_appendix_2_topic_100_7.png     | Bin 0 -> 81769 bytes
 .../tubemq/img/perf_appendix_2_topic_100_8.png     | Bin 0 -> 62333 bytes
 .../tubemq/img/perf_appendix_2_topic_100_9.png     | Bin 0 -> 58918 bytes
 .../tubemq/img/perf_appendix_2_topic_200_1.png     | Bin 0 -> 89887 bytes
 .../tubemq/img/perf_appendix_2_topic_200_2.png     | Bin 0 -> 101799 bytes
 .../tubemq/img/perf_appendix_2_topic_200_3.png     | Bin 0 -> 66126 bytes
 .../tubemq/img/perf_appendix_2_topic_200_4.png     | Bin 0 -> 71926 bytes
 .../tubemq/img/perf_appendix_2_topic_200_5.png     | Bin 0 -> 60011 bytes
 .../tubemq/img/perf_appendix_2_topic_200_6.png     | Bin 0 -> 67402 bytes
 .../tubemq/img/perf_appendix_2_topic_200_7.png     | Bin 0 -> 84250 bytes
 .../tubemq/img/perf_appendix_2_topic_200_8.png     | Bin 0 -> 62805 bytes
 .../tubemq/img/perf_appendix_2_topic_200_9.png     | Bin 0 -> 59190 bytes
 .../tubemq/img/perf_appendix_2_topic_500_1.png     | Bin 0 -> 92805 bytes
 .../tubemq/img/perf_appendix_2_topic_500_2.png     | Bin 0 -> 105098 bytes
 .../tubemq/img/perf_appendix_2_topic_500_3.png     | Bin 0 -> 67610 bytes
 .../tubemq/img/perf_appendix_2_topic_500_4.png     | Bin 0 -> 72538 bytes
 .../tubemq/img/perf_appendix_2_topic_500_5.png     | Bin 0 -> 65052 bytes
 .../tubemq/img/perf_appendix_2_topic_500_6.png     | Bin 0 -> 66872 bytes
 .../tubemq/img/perf_appendix_2_topic_500_7.png     | Bin 0 -> 84331 bytes
 .../tubemq/img/perf_appendix_2_topic_500_8.png     | Bin 0 -> 63651 bytes
 .../tubemq/img/perf_appendix_2_topic_500_9.png     | Bin 0 -> 58875 bytes
 .../modules/tubemq/img/perf_scenario_1.png         | Bin 0 -> 136439 bytes
 .../modules/tubemq/img/perf_scenario_1_index.png   | Bin 0 -> 401432 bytes
 .../modules/tubemq/img/perf_scenario_2.png         | Bin 0 -> 116241 bytes
 .../modules/tubemq/img/perf_scenario_2_index.png   | Bin 0 -> 289200 bytes
 .../modules/tubemq/img/perf_scenario_3.png         | Bin 0 -> 113325 bytes
 .../modules/tubemq/img/perf_scenario_3_index.png   | Bin 0 -> 390736 bytes
 .../modules/tubemq/img/perf_scenario_4_index.png   | Bin 0 -> 241519 bytes
 .../modules/tubemq/img/perf_scenario_6_index.png   | Bin 0 -> 171738 bytes
 .../modules/tubemq/img/perf_scenario_7.png         | Bin 0 -> 285131 bytes
 .../modules/tubemq/img/perf_scenario_8.png         | Bin 0 -> 70370 bytes
 .../modules/tubemq/img/perf_scenario_8_index.png   | Bin 0 -> 177352 bytes
 .../modules/tubemq/img/perf_scheme.png             | Bin 0 -> 250270 bytes
 .../modules/tubemq/img/store_file.png              | Bin 0 -> 23316 bytes
 .../modules/tubemq/img/store_mem.png               | Bin 0 -> 38829 bytes
 .../modules/tubemq/img/sys_structure.png           | Bin 0 -> 54641 bytes
 .../tubemq/img/sysdeployment/sys_address_host.png  | Bin 0 -> 3690 bytes
 .../img/sysdeployment/sys_broker_configure.png     | Bin 0 -> 59822 bytes
 .../tubemq/img/sysdeployment/sys_broker_deploy.png | Bin 0 -> 46767 bytes
 .../img/sysdeployment/sys_broker_finished.png      | Bin 0 -> 46756 bytes
 .../tubemq/img/sysdeployment/sys_broker_online.png | Bin 0 -> 44770 bytes
 .../img/sysdeployment/sys_broker_online_2.png      | Bin 0 -> 62302 bytes
 .../img/sysdeployment/sys_broker_restart_1.png     | Bin 0 -> 19355 bytes
 .../img/sysdeployment/sys_broker_restart_2.png     | Bin 0 -> 86408 bytes
 .../tubemq/img/sysdeployment/sys_broker_start.png  | Bin 0 -> 42862 bytes
 .../img/sysdeployment/sys_broker_start_error.png   | Bin 0 -> 56744 bytes
 .../tubemq/img/sysdeployment/sys_compile.png       | Bin 0 -> 23543 bytes
 .../tubemq/img/sysdeployment/sys_configure_1.png   | Bin 0 -> 188535 bytes
 .../tubemq/img/sysdeployment/sys_configure_2.png   | Bin 0 -> 193819 bytes
 .../img/sysdeployment/sys_master_console.png       | Bin 0 -> 35541 bytes
 .../tubemq/img/sysdeployment/sys_master_start.png  | Bin 0 -> 35457 bytes
 .../img/sysdeployment/sys_master_startted.png      | Bin 0 -> 99107 bytes
 .../tubemq/img/sysdeployment/sys_node_log.png      | Bin 0 -> 20891 bytes
 .../tubemq/img/sysdeployment/sys_node_status.png   | Bin 0 -> 71771 bytes
 .../tubemq/img/sysdeployment/sys_node_status_2.png | Bin 0 -> 123306 bytes
 .../tubemq/img/sysdeployment/sys_package.png       | Bin 0 -> 69467 bytes
 .../tubemq/img/sysdeployment/sys_package_list.png  | Bin 0 -> 44553 bytes
 .../tubemq/img/sysdeployment/sys_topic_create.png  | Bin 0 -> 50660 bytes
 .../tubemq/img/sysdeployment/sys_topic_deploy.png  | Bin 0 -> 46372 bytes
 .../tubemq/img/sysdeployment/sys_topic_error.png   | Bin 0 -> 151646 bytes
 .../img/sysdeployment/sys_topic_finished.png       | Bin 0 -> 46354 bytes
 .../tubemq/img/sysdeployment/sys_topic_select.png  | Bin 0 -> 54280 bytes
 .../tubemq/img/sysdeployment/test_sendmessage.png  | Bin 0 -> 52958 bytes
 .../img/sysdeployment/test_sendmessage_2.png       | Bin 0 -> 98658 bytes
 .../modules/tubemq/img/test_scheme.png             | Bin 0 -> 93610 bytes
 .../modules/tubemq/img/test_summary.png            | Bin 0 -> 38172 bytes
 .../modules/tubemq/img/tubemq-add-broker-1.png     | Bin 0 -> 81899 bytes
 .../modules/tubemq/img/tubemq-add-broker-2.png     | Bin 0 -> 81379 bytes
 .../modules/tubemq/img/tubemq-add-broker-3.png     | Bin 0 -> 68899 bytes
 .../modules/tubemq/img/tubemq-add-topic-1.png      | Bin 0 -> 65638 bytes
 .../modules/tubemq/img/tubemq-add-topic-2.png      | Bin 0 -> 27929 bytes
 .../modules/tubemq/img/tubemq-add-topic-3.png      | Bin 0 -> 27938 bytes
 .../modules/tubemq/img/tubemq-add-topic-4.png      | Bin 0 -> 16117 bytes
 .../modules/tubemq/img/tubemq-add-topic-5.png      | Bin 0 -> 30719 bytes
 .../modules/tubemq/img/tubemq-add-topic-6.png      | Bin 0 -> 15003 bytes
 .../modules/tubemq/img/tubemq-console-gui.png      | Bin 0 -> 45053 bytes
 .../modules/tubemq/img/tubemq-consume-message.png  | Bin 0 -> 89770 bytes
 .../modules/tubemq/img/tubemq-send-message.png     | Bin 0 -> 69960 bytes
 .../modules/tubemq/producer_example.md             | 150 ++++
 .../version-0.11.0/modules/tubemq/quick_start.md   | 183 ++++
 .../modules/tubemq/tubemq-manager/quick_start.md   | 123 +++
 .../modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md | 239 ++++++
 .../version-0.11.0/modules/website/quick_start.md  |  56 ++
 .../version-0.11.0/user_guide/example.md           | 107 +++
 .../version-0.11.0/user_guide/quick_start.md       |  76 ++
 .../version-0.11.0/user_guide/user_manual.md       | 246 ++++++
 i18n/zh-CN/docusaurus-theme-classic/navbar.json    |   4 +
 package.json                                       |   1 +
 src/pages/versions/config.json                     |  39 +
 src/pages/versions/index.js                        |  66 ++
 src/pages/versions/index.less                      |   5 +
 versioned_docs/version-0.11.0/contact.md           |  24 +
 .../version-0.11.0/modules/_category_.json         |   4 +
 .../version-0.11.0/modules/agent/_category_.json   |   4 +
 .../version-0.11.0/modules/agent/architecture.md   |  46 ++
 .../modules/agent/img/architecture.png             | Bin 0 -> 43613 bytes
 .../version-0.11.0/modules/agent/quick_start.md    | 185 +++++
 .../modules/dataproxy-sdk/_category_.json          |   4 +
 .../modules/dataproxy-sdk/architecture.md          |  60 ++
 .../modules/dataproxy-sdk/quick_start.md           |  12 +
 .../modules/dataproxy/_category_.json              |   4 +
 .../modules/dataproxy/architecture.md              | 152 ++++
 .../modules/dataproxy/img/architecture.png         | Bin 0 -> 431999 bytes
 .../modules/dataproxy/quick_start.md               |  58 ++
 .../version-0.11.0/modules/manager/_category_.json |   4 +
 .../version-0.11.0/modules/manager/architecture.md |  32 +
 .../modules/manager/img/datamodel.jpg              | Bin 0 -> 88671 bytes
 .../modules/manager/img/inlong-manager.png         | Bin 0 -> 73086 bytes
 .../modules/manager/img/interactive.jpg            | Bin 0 -> 67852 bytes
 .../version-0.11.0/modules/manager/quick_start.md  |  90 ++
 .../version-0.11.0/modules/sort/_category_.json    |   4 +
 versioned_docs/version-0.11.0/modules/sort/img.png | Bin 0 -> 10583 bytes
 .../version-0.11.0/modules/sort/introduction.md    |  37 +
 .../modules/sort/protocol_introduction.md          |  25 +
 .../version-0.11.0/modules/sort/quick_start.md     |  69 ++
 .../version-0.11.0/modules/tubemq/_category_.json  |   4 +
 .../http_access_api_definition_cn.xls              | Bin 0 -> 200704 bytes
 .../version-0.11.0/modules/tubemq/architecture.md  |  43 +
 .../version-0.11.0/modules/tubemq/client_rpc.md    | 202 +++++
 .../version-0.11.0/modules/tubemq/clients_java.md  | 251 ++++++
 .../modules/tubemq/configure_introduction.md       | 172 ++++
 .../modules/tubemq/console_introduction.md         | 118 +++
 .../modules/tubemq/consumer_example.md             |  77 ++
 .../version-0.11.0/modules/tubemq/deployment.md    | 156 ++++
 .../version-0.11.0/modules/tubemq/error_code.md    | 115 +++
 .../modules/tubemq/http_access_api.md              | 919 +++++++++++++++++++++
 .../version-0.11.0/modules/tubemq/img/.gitkeep     |   3 +
 .../tubemq/img/client_rpc/rpc_broker_info.png      | Bin 0 -> 20919 bytes
 .../tubemq/img/client_rpc/rpc_bytes_def.png        | Bin 0 -> 38706 bytes
 .../tubemq/img/client_rpc/rpc_conn_detail.png      | Bin 0 -> 30322 bytes
 .../tubemq/img/client_rpc/rpc_consumer_diagram.png | Bin 0 -> 48407 bytes
 .../img/client_rpc/rpc_convert_topicinfo.png       | Bin 0 -> 43133 bytes
 .../tubemq/img/client_rpc/rpc_event_proto.png      | Bin 0 -> 11275 bytes
 .../img/client_rpc/rpc_event_proto_optype.png      | Bin 0 -> 92896 bytes
 .../img/client_rpc/rpc_event_proto_status.png      | Bin 0 -> 93691 bytes
 .../tubemq/img/client_rpc/rpc_header_fill.png      | Bin 0 -> 156495 bytes
 .../tubemq/img/client_rpc/rpc_inner_structure.png  | Bin 0 -> 24843 bytes
 .../img/client_rpc/rpc_master_authorizedinfo.png   | Bin 0 -> 6689 bytes
 .../tubemq/img/client_rpc/rpc_message_data.png     | Bin 0 -> 23773 bytes
 .../tubemq/img/client_rpc/rpc_pbmsg_structure.png  | Bin 0 -> 11652 bytes
 .../tubemq/img/client_rpc/rpc_producer_close2M.png | Bin 0 -> 13375 bytes
 .../tubemq/img/client_rpc/rpc_producer_diagram.png | Bin 0 -> 44307 bytes
 .../img/client_rpc/rpc_producer_heartbeat2M.png    | Bin 0 -> 27314 bytes
 .../img/client_rpc/rpc_producer_register2M.png     | Bin 0 -> 24320 bytes
 .../img/client_rpc/rpc_producer_sendmsg2B.png      | Bin 0 -> 23692 bytes
 .../tubemq/img/client_rpc/rpc_proto_def.png        | Bin 0 -> 4798 bytes
 .../modules/tubemq/img/configure/conf_ini_pos.png  | Bin 0 -> 26192 bytes
 .../tubemq/img/configure/conf_velocity_pos.png     | Bin 0 -> 21544 bytes
 .../modules/tubemq/img/console/1568169770714.png   | Bin 0 -> 21062 bytes
 .../modules/tubemq/img/console/1568169796122.png   | Bin 0 -> 13461 bytes
 .../modules/tubemq/img/console/1568169806810.png   | Bin 0 -> 15847 bytes
 .../modules/tubemq/img/console/1568169823675.png   | Bin 0 -> 13307 bytes
 .../modules/tubemq/img/console/1568169839931.png   | Bin 0 -> 21185 bytes
 .../modules/tubemq/img/console/1568169851085.png   | Bin 0 -> 35596 bytes
 .../modules/tubemq/img/console/1568169863402.png   | Bin 0 -> 17502 bytes
 .../modules/tubemq/img/console/1568169879529.png   | Bin 0 -> 19652 bytes
 .../modules/tubemq/img/console/1568169889594.png   | Bin 0 -> 20553 bytes
 .../modules/tubemq/img/console/1568169900634.png   | Bin 0 -> 26003 bytes
 .../modules/tubemq/img/console/1568169908522.png   | Bin 0 -> 18358 bytes
 .../modules/tubemq/img/console/1568169916091.png   | Bin 0 -> 20093 bytes
 .../modules/tubemq/img/console/1568169925657.png   | Bin 0 -> 18024 bytes
 .../modules/tubemq/img/console/1568169946683.png   | Bin 0 -> 20407 bytes
 .../modules/tubemq/img/console/1568169954746.png   | Bin 0 -> 30020 bytes
 .../tubemq/img/development/create_pull_request.png | Bin 0 -> 216800 bytes
 .../img/development/github_fork_repository.png     | Bin 0 -> 207753 bytes
 .../tubemq/img/development/jira_create_issue.png   | Bin 0 -> 140548 bytes
 .../modules/tubemq/img/development/jira_filter.png | Bin 0 -> 273110 bytes
 .../img/development/jira_resolve_issue_1.png       | Bin 0 -> 224601 bytes
 .../img/development/jira_resolve_issue_2.png       | Bin 0 -> 122300 bytes
 .../tubemq/img/development/new_pull_request.png    | Bin 0 -> 231812 bytes
 .../modules/tubemq/img/mqs_comare.png              | Bin 0 -> 82005 bytes
 .../modules/tubemq/img/perf_appendix_1_bx1_1.png   | Bin 0 -> 74263 bytes
 .../modules/tubemq/img/perf_appendix_1_bx1_2.png   | Bin 0 -> 74022 bytes
 .../modules/tubemq/img/perf_appendix_1_bx1_3.png   | Bin 0 -> 59398 bytes
 .../modules/tubemq/img/perf_appendix_1_bx1_4.png   | Bin 0 -> 57854 bytes
 .../modules/tubemq/img/perf_appendix_1_cg1_1.png   | Bin 0 -> 407483 bytes
 .../modules/tubemq/img/perf_appendix_1_cg1_2.png   | Bin 0 -> 398721 bytes
 .../modules/tubemq/img/perf_appendix_1_cg1_3.png   | Bin 0 -> 40886 bytes
 .../modules/tubemq/img/perf_appendix_1_cg1_4.png   | Bin 0 -> 39318 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_1.png    | Bin 0 -> 87549 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_2.png    | Bin 0 -> 124450 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_3.png    | Bin 0 -> 63570 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_4.png    | Bin 0 -> 65748 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_5.png    | Bin 0 -> 68593 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_6.png    | Bin 0 -> 68854 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_7.png    | Bin 0 -> 88648 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_8.png    | Bin 0 -> 67459 bytes
 .../tubemq/img/perf_appendix_2_topic_1000_9.png    | Bin 0 -> 61548 bytes
 .../tubemq/img/perf_appendix_2_topic_100_1.png     | Bin 0 -> 89756 bytes
 .../tubemq/img/perf_appendix_2_topic_100_2.png     | Bin 0 -> 89267 bytes
 .../tubemq/img/perf_appendix_2_topic_100_3.png     | Bin 0 -> 69181 bytes
 .../tubemq/img/perf_appendix_2_topic_100_4.png     | Bin 0 -> 76628 bytes
 .../tubemq/img/perf_appendix_2_topic_100_5.png     | Bin 0 -> 65706 bytes
 .../tubemq/img/perf_appendix_2_topic_100_6.png     | Bin 0 -> 72361 bytes
 .../tubemq/img/perf_appendix_2_topic_100_7.png     | Bin 0 -> 81769 bytes
 .../tubemq/img/perf_appendix_2_topic_100_8.png     | Bin 0 -> 62333 bytes
 .../tubemq/img/perf_appendix_2_topic_100_9.png     | Bin 0 -> 58918 bytes
 .../tubemq/img/perf_appendix_2_topic_200_1.png     | Bin 0 -> 89887 bytes
 .../tubemq/img/perf_appendix_2_topic_200_2.png     | Bin 0 -> 101799 bytes
 .../tubemq/img/perf_appendix_2_topic_200_3.png     | Bin 0 -> 66126 bytes
 .../tubemq/img/perf_appendix_2_topic_200_4.png     | Bin 0 -> 71926 bytes
 .../tubemq/img/perf_appendix_2_topic_200_5.png     | Bin 0 -> 60011 bytes
 .../tubemq/img/perf_appendix_2_topic_200_6.png     | Bin 0 -> 67402 bytes
 .../tubemq/img/perf_appendix_2_topic_200_7.png     | Bin 0 -> 84250 bytes
 .../tubemq/img/perf_appendix_2_topic_200_8.png     | Bin 0 -> 62805 bytes
 .../tubemq/img/perf_appendix_2_topic_200_9.png     | Bin 0 -> 59190 bytes
 .../tubemq/img/perf_appendix_2_topic_500_1.png     | Bin 0 -> 92805 bytes
 .../tubemq/img/perf_appendix_2_topic_500_2.png     | Bin 0 -> 105098 bytes
 .../tubemq/img/perf_appendix_2_topic_500_3.png     | Bin 0 -> 67610 bytes
 .../tubemq/img/perf_appendix_2_topic_500_4.png     | Bin 0 -> 72538 bytes
 .../tubemq/img/perf_appendix_2_topic_500_5.png     | Bin 0 -> 65052 bytes
 .../tubemq/img/perf_appendix_2_topic_500_6.png     | Bin 0 -> 66872 bytes
 .../tubemq/img/perf_appendix_2_topic_500_7.png     | Bin 0 -> 84331 bytes
 .../tubemq/img/perf_appendix_2_topic_500_8.png     | Bin 0 -> 63651 bytes
 .../tubemq/img/perf_appendix_2_topic_500_9.png     | Bin 0 -> 58875 bytes
 .../modules/tubemq/img/perf_scenario_1.png         | Bin 0 -> 136439 bytes
 .../modules/tubemq/img/perf_scenario_1_index.png   | Bin 0 -> 401432 bytes
 .../modules/tubemq/img/perf_scenario_2.png         | Bin 0 -> 116241 bytes
 .../modules/tubemq/img/perf_scenario_2_index.png   | Bin 0 -> 289200 bytes
 .../modules/tubemq/img/perf_scenario_3.png         | Bin 0 -> 113325 bytes
 .../modules/tubemq/img/perf_scenario_3_index.png   | Bin 0 -> 390736 bytes
 .../modules/tubemq/img/perf_scenario_4_index.png   | Bin 0 -> 241519 bytes
 .../modules/tubemq/img/perf_scenario_6_index.png   | Bin 0 -> 171738 bytes
 .../modules/tubemq/img/perf_scenario_7.png         | Bin 0 -> 285131 bytes
 .../modules/tubemq/img/perf_scenario_8.png         | Bin 0 -> 70370 bytes
 .../modules/tubemq/img/perf_scenario_8_index.png   | Bin 0 -> 177352 bytes
 .../modules/tubemq/img/perf_scheme.png             | Bin 0 -> 250270 bytes
 .../modules/tubemq/img/store_file.png              | Bin 0 -> 23316 bytes
 .../modules/tubemq/img/store_mem.png               | Bin 0 -> 38829 bytes
 .../modules/tubemq/img/sys_structure.png           | Bin 0 -> 54641 bytes
 .../tubemq/img/sysdeployment/sys_address_host.png  | Bin 0 -> 3690 bytes
 .../img/sysdeployment/sys_broker_configure.png     | Bin 0 -> 59822 bytes
 .../tubemq/img/sysdeployment/sys_broker_deploy.png | Bin 0 -> 46767 bytes
 .../img/sysdeployment/sys_broker_finished.png      | Bin 0 -> 46756 bytes
 .../tubemq/img/sysdeployment/sys_broker_online.png | Bin 0 -> 44770 bytes
 .../img/sysdeployment/sys_broker_online_2.png      | Bin 0 -> 62302 bytes
 .../img/sysdeployment/sys_broker_restart_1.png     | Bin 0 -> 19355 bytes
 .../img/sysdeployment/sys_broker_restart_2.png     | Bin 0 -> 86408 bytes
 .../tubemq/img/sysdeployment/sys_broker_start.png  | Bin 0 -> 42862 bytes
 .../img/sysdeployment/sys_broker_start_error.png   | Bin 0 -> 56744 bytes
 .../tubemq/img/sysdeployment/sys_compile.png       | Bin 0 -> 23543 bytes
 .../tubemq/img/sysdeployment/sys_configure_1.png   | Bin 0 -> 188535 bytes
 .../tubemq/img/sysdeployment/sys_configure_2.png   | Bin 0 -> 193819 bytes
 .../img/sysdeployment/sys_master_console.png       | Bin 0 -> 35541 bytes
 .../tubemq/img/sysdeployment/sys_master_start.png  | Bin 0 -> 35457 bytes
 .../img/sysdeployment/sys_master_startted.png      | Bin 0 -> 99107 bytes
 .../tubemq/img/sysdeployment/sys_node_log.png      | Bin 0 -> 20891 bytes
 .../tubemq/img/sysdeployment/sys_node_status.png   | Bin 0 -> 71771 bytes
 .../tubemq/img/sysdeployment/sys_node_status_2.png | Bin 0 -> 123306 bytes
 .../tubemq/img/sysdeployment/sys_package.png       | Bin 0 -> 69467 bytes
 .../tubemq/img/sysdeployment/sys_package_list.png  | Bin 0 -> 44553 bytes
 .../tubemq/img/sysdeployment/sys_topic_create.png  | Bin 0 -> 50660 bytes
 .../tubemq/img/sysdeployment/sys_topic_deploy.png  | Bin 0 -> 46372 bytes
 .../tubemq/img/sysdeployment/sys_topic_error.png   | Bin 0 -> 151646 bytes
 .../img/sysdeployment/sys_topic_finished.png       | Bin 0 -> 46354 bytes
 .../tubemq/img/sysdeployment/sys_topic_select.png  | Bin 0 -> 54280 bytes
 .../tubemq/img/sysdeployment/test_sendmessage.png  | Bin 0 -> 52958 bytes
 .../img/sysdeployment/test_sendmessage_2.png       | Bin 0 -> 98658 bytes
 .../modules/tubemq/img/test_scheme.png             | Bin 0 -> 93610 bytes
 .../modules/tubemq/img/test_summary.png            | Bin 0 -> 38172 bytes
 .../modules/tubemq/img/tubemq-add-broker-1.png     | Bin 0 -> 81899 bytes
 .../modules/tubemq/img/tubemq-add-broker-2.png     | Bin 0 -> 81379 bytes
 .../modules/tubemq/img/tubemq-add-broker-3.png     | Bin 0 -> 68899 bytes
 .../modules/tubemq/img/tubemq-add-topic-1.png      | Bin 0 -> 65638 bytes
 .../modules/tubemq/img/tubemq-add-topic-2.png      | Bin 0 -> 27929 bytes
 .../modules/tubemq/img/tubemq-add-topic-3.png      | Bin 0 -> 27938 bytes
 .../modules/tubemq/img/tubemq-add-topic-4.png      | Bin 0 -> 16117 bytes
 .../modules/tubemq/img/tubemq-add-topic-5.png      | Bin 0 -> 30719 bytes
 .../modules/tubemq/img/tubemq-add-topic-6.png      | Bin 0 -> 15003 bytes
 .../modules/tubemq/img/tubemq-console-gui.png      | Bin 0 -> 45053 bytes
 .../modules/tubemq/img/tubemq-consume-message.png  | Bin 0 -> 89770 bytes
 .../modules/tubemq/img/tubemq-send-message.png     | Bin 0 -> 69960 bytes
 .../modules/tubemq/producer_example.md             | 152 ++++
 .../version-0.11.0/modules/tubemq/quick_start.md   | 197 +++++
 .../modules/tubemq/tubemq-manager/quick_start.md   | 125 +++
 .../modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md | 239 ++++++
 .../version-0.11.0/modules/website/_category_.json |   4 +
 .../version-0.11.0/modules/website/quick_start.md  |  55 ++
 .../version-0.11.0/user_guide/_category_.json      |   4 +
 .../version-0.11.0/user_guide/example.md           | 104 +++
 .../version-0.11.0/user_guide/quick_start.md       |  76 ++
 .../version-0.11.0/user_guide/user_manual.md       | 286 +++++++
 versioned_sidebars/version-0.11.0-sidebars.json    |   8 +
 versions.json                                      |   3 +
 382 files changed, 7406 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 1cd4ae6..de67aff 100644
--- a/README.md
+++ b/README.md
@@ -48,3 +48,10 @@ Make sure you have submit issue for tracking PR: [https://github.com/apache/incu
 1. Add new .md file under `docs` or `i18n`.
 2. Run dev server locally to verify the article can be displayed correctly.
 3. Send the pull request contains the *.md and development.js only.
+
+
+### Add a new version for documents
+
+1. Modify the documents in `docs`, then run `npm run docusaurus docs:version replace_by_target_version` locally (with `replace_by_target_version` replaced by the target version) to snapshot the documents.
+2. Add a label for the new version to the DOC item in the `docusaurus.config.js` file.
+3. Update the version table in `/src/pages/versions/index.js`.
\ No newline at end of file
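As a concrete illustration of step 1 above, with `0.11.0` (the version snapshotted by this commit) substituted for the placeholder:

```shell
# Snapshot the current docs as version 0.11.0; Docusaurus copies them into
# versioned_docs/version-0.11.0/ and records the version in versions.json,
# matching the files added by this commit.
npm run docusaurus docs:version 0.11.0
```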
diff --git a/docusaurus.config.js b/docusaurus.config.js
index ab320c1..b637941 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -68,10 +68,23 @@ const darkCodeTheme = require('prism-react-renderer/themes/dracula');
             activeBaseRegex: `^/$`,
           },
           {
-            type: 'doc',
-            docId: 'user_guide/quick_start',
             position: 'right',
             label: 'DOC',
+            to: "/docs/user_guide/quick_start",
+            items: [
+              {
+                label: "latest",
+                to: "/docs/user_guide/quick_start",
+              },
+              {
+                label: "0.11.0",
+                to: "/docs/user_guide/quick_start",
+              },
+              {
+                label: "All versions",
+                to: "/versions/",
+              },
+            ],
           },
           {
             to: '/download/main',
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0.json b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0.json
new file mode 100644
index 0000000..1c0ee4a
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0.json
@@ -0,0 +1,46 @@
+{
+  "version.label": {
+    "message": "0.11.0",
+    "description": "The label for version current"
+  },
+  "sidebar.tutorialSidebar.category.User Guide": {
+    "message": "引导",
+    "description": "The label for category User Guide in sidebar tutorialSidebar"
+  },
+  "sidebar.tutorialSidebar.category.Components": {
+    "message": "组件介绍",
+    "description": "The label for category Components in sidebar tutorialSidebar"
+  },
+  "sidebar.tutorialSidebar.category.Manager": {
+    "message": "Manager",
+    "description": "The label for category Manager in sidebar tutorialSidebar"
+  },
+  "sidebar.tutorialSidebar.category.Website": {
+    "message": "Website",
+    "description": "The label for category Website in sidebar tutorialSidebar"
+  },
+  "sidebar.tutorialSidebar.category.Agent": {
+    "message": "Agent",
+    "description": "The label for category Agent in sidebar tutorialSidebar"
+  },
+  "sidebar.tutorialSidebar.category.DataProxy": {
+    "message": "DataProxy",
+    "description": "The label for category DataProxy in sidebar tutorialSidebar"
+  },
+  "sidebar.tutorialSidebar.category.DataProxy-SDK": {
+    "message": "DataProxy-SDK",
+    "description": "The label for category DataProxy-SDK in sidebar tutorialSidebar"
+  },
+  "sidebar.tutorialSidebar.category.TubeMQ": {
+    "message": "TubeMQ",
+    "description": "The label for category TubeMQ in sidebar tutorialSidebar"
+  },
+  "sidebar.tutorialSidebar.category.tubemq-manager": {
+    "message": "tubemq-manager",
+    "description": "The label for category tubemq-manager in sidebar tutorialSidebar"
+  },
+  "sidebar.tutorialSidebar.category.Sort": {
+    "message": "Sort",
+    "description": "The label for category Sort in sidebar tutorialSidebar"
+  }
+}
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/agent/architecture.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/agent/architecture.md
new file mode 100644
index 0000000..e7b65aa
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/agent/architecture.md
@@ -0,0 +1,48 @@
+---
+title: Architecture
+---
+## 1. InLong-Agent Overview
+InLong-Agent is a collection tool that supports multiple types of data sources. It aims at stable and efficient data collection across heterogeneous sources such as files, SQL, binlog and metrics.
+
+### Architecture diagram:
+![](img/architecture.png)
+
+
+
+### Design philosophy
+To handle the diversity of data sources, InLong-Agent abstracts every data source into a unified Source concept and abstracts writing as a Sink. When a new data source needs to be integrated, only its format and read parameters have to be configured to achieve efficient reading.
+
+### Current usage
+InLong-Agent is widely used inside Tencent, where it carries most of the data-collection workload with an online data volume at the tens-of-billions level.
+
+## 2. InLong-Agent Architecture
+As a data-collection framework, InLong-Agent is built on a channel + plugin architecture. Reading from a data source and writing to a destination are abstracted as Reader/Writer plugins embedded in the framework.
+
++ Reader: the collection module; it reads data from the data source and sends it to the channel.
++ Writer: the write module; it continuously takes data from the channel and writes it to the destination.
++ Channel: connects the reader and the writer as the data-transfer pipeline between them, and also monitors data writes and reads.
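
The Reader/Channel/Writer relationship above can be sketched as a minimal pipeline. This is an illustrative Python sketch only, not the actual InLong-Agent plugin API; all class and method names here are hypothetical:

```python
import queue

class Channel:
    """Connects a reader and a writer and counts reads/writes for monitoring."""
    def __init__(self, capacity=1000):
        self.q = queue.Queue(maxsize=capacity)
        self.pushed = 0
        self.pulled = 0

    def push(self, message):
        self.q.put(message)
        self.pushed += 1

    def pull(self):
        message = self.q.get()
        self.pulled += 1
        return message

class TextReader:
    """Collection module: reads records from a source and sends them to the channel."""
    def __init__(self, lines, channel):
        self.lines = lines
        self.channel = channel

    def run(self):
        for line in self.lines:
            self.channel.push(line)

class PrintWriter:
    """Write module: takes data from the channel and writes it to a destination."""
    def __init__(self, channel, count):
        self.channel = channel
        self.count = count
        self.out = []

    def run(self):
        for _ in range(self.count):
            self.out.append(self.channel.pull())

ch = Channel()
TextReader(["a", "b", "c"], ch).run()
writer = PrintWriter(ch, 3)
writer.run()
print(writer.out)  # ['a', 'b', 'c']
```

A real agent runs the reader and writer concurrently; the counters here stand in for the channel's read/write monitoring.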
+
+
+## 3. InLong-Agent Collection Types
+### 3.1 File collection
+File collection includes the following capabilities:
+
+- Listening on user-configured paths and detecting newly created files
+- Directory filtering with regular expressions, supporting path configurations of the form YYYYMMDD + regex
+- Resumable reading: when InLong-Agent restarts, it automatically resumes from the last read position, guaranteeing no re-reads and no missed reads
+### 3.2 SQL collection
+This type of data is fetched by executing SQL:
+- SQL statements are expanded by regex into multiple concrete SQL statements
+- Each statement is executed to pull a result set; the impact of the pull on the MySQL instance itself must be considered
+- Execution is usually scheduled periodically
+### 3.3 Binlog collection
+This type of collection works by configuring the agent as a MySQL slave, reading the binlog and reconstructing the data:
+- Binlog reading is parsed with multiple threads, so the parsed data must be tagged with sequence labels
+- The code is based on an older version of dbsync; the main change is replacing the tdbus-sender transport with pushing into the agent channel
+### 3.4 Metrics collection
+This is a special case of file collection, except that each line of metric data follows a fixed format
+
+
+
+
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/agent/img/architecture.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/agent/img/architecture.png
new file mode 100644
index 0000000..1138fe1
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/agent/img/architecture.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/agent/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/agent/quick_start.md
new file mode 100644
index 0000000..714c318
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/agent/quick_start.md
@@ -0,0 +1,183 @@
+---
+title: Build and Deployment
+---
+
+## 1. Configuration
+```
+cd inlong-agent
+```
+
+The agent can run either locally or online. When running online it pulls tasks from inlong-manager; when running locally, tasks can be submitted via HTTP requests.
+
+### Settings for running the agent online
+
+Running online requires pulling configuration from inlong-manager. Configure conf/agent.properties as follows:
+```ini
+# class used to fetch tasks, defaults to ManagerFetcher
+agent.fetcher.classname=org.apache.inlong.agent.plugin.fetcher.ManagerFetcher
+# fill in the local IP of this machine
+agent.local.ip=local_ip
+agent.manager.vip.http.host=manager web host
+agent.manager.vip.http.port=manager web port
+```
+
+## 2. Running
+
+After unpacking, run:
+```bash
+sh agent.sh start
+```
+
+## 3. Adding job configurations at runtime
+
+### 3.1 Change the following two settings in agent.properties
+
+```ini
+# whether enable http service
+agent.http.enable=true
+# http default port
+agent.http.port=an available port
+```
+
+### 3.2 Run the following command:
+
+```bash
+curl --location --request POST 'http://localhost:8008/config/job' \
+--header 'Content-Type: application/json' \
+--data '{
+"job": {
+"dir": {
+"path": "",
+"pattern": "/data/inlong-agent/test.log"
+},
+"trigger": "org.apache.inlong.agent.plugin.trigger.DirectoryTrigger",
+"id": 1,
+"thread": {
+"running": {
+"core": "4"
+}
+},
+"name": "fileAgentTest",
+"source": "org.apache.inlong.agent.plugin.sources.TextFileSource",
+"sink": "org.apache.inlong.agent.plugin.sinks.ProxySink",
+"channel": "org.apache.inlong.agent.plugin.channel.MemoryChannel"
+},
+"proxy": {
+"groupId": "groupId10",
+"streamId": "streamId10"
+},
+"op": "add"
+}'
+```
+
+The parameters are:
+- job.dir.pattern: the file path to read; may contain regular expressions
+- job.trigger: trigger name, defaults to DirectoryTrigger; it watches for files created under the folder, and files that already exist when the task starts are not read
+- job.source: type of data source used, defaults to TextFileSource, which reads text files
+- job.sink: type of writer used, defaults to ProxySink, which sends messages to the dataproxy
+- proxy.groupId: the groupId used when writing to the proxy; this is the business ID of the business information under data access on the manager pages, not the name of the created tube topic
+- proxy.streamId: the streamId used when writing to the proxy; this is the data-stream ID of the data stream under data access on the manager pages
+
+## 4. Supported path configurations
+
+For example:
+
+    /data/inlong-agent/test.log  // read the newly created file test.log under the inlong-agent folder
+    /data/inlong-agent/test[0-9]{1} // read newly created files under the inlong-agent folder named "test" followed by a single digit
+    /data/inlong-agent/test // if test is a directory, read all newly created files under test
+    /data/inlong-agent/^\\d+(\\.\\d+)? // names starting with one or more digits, optionally followed by a dot and one or more digits; "?" makes that part optional; matching examples: "5", "1.5" and "2.21"
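
The patterns above are ordinary regular expressions applied to new file names. A small illustrative sketch (not the agent's actual matching code; the pattern names below are made up):

```python
import re

# Patterns mirroring the examples above; the agent watches directories,
# so here we only check whether a file name matches a configured pattern.
patterns = {
    "exact":  re.compile(r"^test\.log$"),
    "digit":  re.compile(r"^test[0-9]{1}$"),
    "number": re.compile(r"^\d+(\.\d+)?$"),
}

def matches(name, key):
    return patterns[key].match(name) is not None

print(matches("test.log", "exact"))   # True
print(matches("test3", "digit"))      # True
print(matches("1.5", "number"))       # True
print(matches("testX", "digit"))      # False
```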
+
+
+## 5. Deriving the data time from the file name
+
+The agent can take a time embedded in the file name as the production time of the data. It is configured as follows:
+
+    /data/inlong-agent/***YYYYMMDDHH***
+
+where YYYYMMDDHH represents the data time (YYYY the year, MM the month, DD the day, HH the hour) and *** stands for arbitrary characters.
+
+The cycle of the data must also be added to the job conf; day and hour cycles are currently supported.
+When adding a task, set the property job.cycleUnit, which takes one of two values:
+1. D: data time at day granularity
+2. H: data time at hour granularity
+
+For example, with the data source configured as /data/inlong-agent/YYYYMMDDHH.log and data written to 2021020211.log:
+if job.cycleUnit is set to D, the agent will try to read the file 2021020211.log around time 2021020211, and when reading the data in the file it will send all of it to the backend proxy with the time 20210202;
+if job.cycleUnit is set to H, all data collected from 2021020211.log will be sent to the backend proxy with the time 2021020211.
+
+Example job submission:
+```bash
+curl --location --request POST 'http://localhost:8008/config/job' \
+--header 'Content-Type: application/json' \
+--data '{
+"job": {
+"dir": {
+"path": "",
+"pattern": "/data/inlong-agent/test.log"
+},
+"trigger": "org.apache.inlong.agent.plugin.trigger.DirectoryTrigger",
+"id": 1,
+"thread": {
+"running": {
+"core": "4"
+}
+},
+"name": "fileAgentTest",
+"cycleUnit": "D",
+"source": "org.apache.inlong.agent.plugin.sources.TextFileSource",
+"sink": "org.apache.inlong.agent.plugin.sinks.ProxySink",
+"channel": "org.apache.inlong.agent.plugin.channel.MemoryChannel"
+},
+"proxy": {
+"groupId": "groupId",
+"streamId": "streamId"
+},
+"op": "add"
+}'
+```
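
The mapping from file name to data time described in this section can be sketched as follows. This is an illustrative Python sketch under the YYYYMMDDHH convention above, not the agent's actual implementation:

```python
import re

def data_time_from_filename(filename, cycle_unit):
    """Extract the data time from a file name such as '2021020211.log'.

    cycle_unit 'D' truncates to day granularity (YYYYMMDD),
    cycle_unit 'H' keeps hour granularity (YYYYMMDDHH).
    """
    m = re.search(r"(\d{10})", filename)  # YYYYMMDDHH
    if not m:
        return None
    ts = m.group(1)
    return ts[:8] if cycle_unit == "D" else ts

print(data_time_from_filename("2021020211.log", "D"))  # 20210202
print(data_time_from_filename("2021020211.log", "H"))  # 2021020211
```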
+
+
+## 6. Reading with a time offset
+
+Once time-based reading is configured, data for a time other than the current one can be read by setting a time offset.
+Set the job property job.timeOffset to a number plus a time unit, where the unit can be days (d) or hours (h).
+For example:
+1. 1d reads data of one day after the current time
+2. -1h reads data of one hour before the current time
+
+Example job submission:
+```bash
+curl --location --request POST 'http://localhost:8008/config/job' \
+--header 'Content-Type: application/json' \
+--data '{
+"job": {
+"dir": {
+"path": "",
+"pattern": "/data/inlong-agent/test.log"
+},
+"trigger": "org.apache.inlong.agent.plugin.trigger.DirectoryTrigger",
+"id": 1,
+"thread": {
+"running": {
+"core": "4"
+}
+},
+"name": "fileAgentTest",
+"cycleUnit": "D",
+"timeOffset": "-1d",
+"source": "org.apache.inlong.agent.plugin.sources.TextFileSource",
+"sink": "org.apache.inlong.agent.plugin.sinks.ProxySink",
+"channel": "org.apache.inlong.agent.plugin.channel.MemoryChannel"
+},
+"proxy": {
+"groupId": "groupId",
+"streamId": "streamId"
+},
+"op": "add"
+}'
+```
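
The job.timeOffset values described above combine a signed number with a unit. An illustrative sketch of how such an offset could be applied (not the agent's actual code):

```python
from datetime import datetime, timedelta

def apply_time_offset(base, offset):
    """Apply an offset such as '1d' or '-1h' to a base time."""
    amount, unit = int(offset[:-1]), offset[-1]
    if unit == "d":
        return base + timedelta(days=amount)
    if unit == "h":
        return base + timedelta(hours=amount)
    raise ValueError("unit must be 'd' or 'h'")

base = datetime(2021, 2, 2, 11)
print(apply_time_offset(base, "-1d"))  # 2021-02-01 11:00:00
print(apply_time_offset(base, "-1h"))  # 2021-02-02 10:00:00
```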
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy-sdk/architecture.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy-sdk/architecture.md
new file mode 100644
index 0000000..40a8a59
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy-sdk/architecture.md
@@ -0,0 +1,65 @@
+---
+title: Architecture
+---
+# 1. Overview
+
+When a business uses message-based access, it generally only needs to pack its data into a format that DataProxy can recognize (such as the six-segment protocol or the digital protocol)
+and send it, to get the data into InLong. However, to guarantee data reliability, load balancing, dynamic updates of the proxy list and other safety features,
+the user program would have to handle much more, ending up overly complicated and bloated.
+
+The API was designed to simplify integration and to take over part of the reliability-related logic. After embedding the API in the sending program, users can send data to DataProxy without worrying about packing formats, load balancing and similar logic.
+
+# 2. Features
+
+## 2.1 Overall features
+
+|  Feature   | Description  |
+|  ----  | ----  |
+| Packing (new)  | Packs user data into a format that DataProxy can recognize (such as the six-segment protocol or the digital protocol) before sending it to DataProxy |
+| Compression  | Compresses user data before sending it to DataProxy, reducing network bandwidth usage |
+| DataProxy list maintenance  | Fetches the DataProxy list every five minutes to detect DataProxy machines brought online or offline by operations; automatically removes unavailable connections every 20s so that the connected DataProxy nodes keep working properly |
+| Metrics (new)  | Adds minute-level sending-volume metrics per business (at the interface level) |
+| Load balancing (new)  | Balances the sent data across multiple DataProxy nodes using a new strategy, instead of relying on a simple random + round-robin mechanism |
+| DataProxy list persistence (new)  | Persists the DataProxy list per business id, so data can still be sent if the configuration center fails while the program starts up |
+
+
+## 2.2 Data-sending functions
+
+### Synchronous batch send
+
+    public SendResult sendMessage(List<byte[]> bodyList, String groupId, String streamId, long dt, long timeout, TimeUnit timeUnit)
+
+Parameters:
+
+bodyList is the collection of messages to send; its total length is recommended to stay below 512KB. groupId is the business id and streamId the interface id. dt is the timestamp of the data in milliseconds; it can also be set to 0, in which case the API uses the current time as the timestamp. timeout and timeUnit set the send timeout; 20s is generally recommended.
+
+
+
+### Synchronous single send
+
+    public SendResult sendMessage(byte[] body, String groupId, String streamId, long dt, long timeout, TimeUnit timeUnit)
+
+Parameters:
+
+body is the content of the single message to send; the other parameters have the same meaning as in the batch interface.
+
+
+
+### Asynchronous batch send
+
+    public void asyncSendMessage(SendMessageCallback callback, List<byte[]> bodyList, String groupId, String streamId, long dt, long timeout,TimeUnit timeUnit)
+
+Parameters:
+
+SendMessageCallback is the callback that handles the result. bodyList is the collection of messages to send; its total length is recommended to stay below 512KB. groupId is the business id and streamId the interface id. dt is the timestamp of the data in milliseconds; it can also be set to 0, in which case the API uses the current time as the timestamp. timeout and timeUnit set the send timeout; 20s is generally recommended.
+
+
+### Asynchronous single send
+
+    public void asyncSendMessage(SendMessageCallback callback, byte[] body, String groupId, String streamId, long dt, long timeout, TimeUnit timeUnit)
+
+Parameters:
+
+body is the content of a single message; the other parameters have the same meaning as in the batch interface.
+
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy-sdk/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy-sdk/quick_start.md
new file mode 100644
index 0000000..72b8179
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy-sdk/quick_start.md
@@ -0,0 +1,12 @@
+---
+title: Build and Deployment
+---
+# Usage
+
+When writing a Java program, add the following pom dependency and send data with the API defined in [architecture](architecture.md):
+
+    <dependency>
+            <groupId>org.apache.inlong</groupId>
+            <artifactId>inlong-dataproxy-sdk</artifactId>
+            <version>0.10.0-incubating-SNAPSHOT</version>
+    </dependency>
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy/architecture.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy/architecture.md
new file mode 100644
index 0000000..9c63de0
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy/architecture.md
@@ -0,0 +1,150 @@
+---
+title: Architecture
+---
+# 1. Overview
+
+InLong-DataProxy belongs to the InLong proxy layer and is used to aggregate, receive and forward data. Through format conversion it turns data into the TDMsg1 format that the cache layer can buffer and process.
+InLong-DataProxy acts as the bridge from the InLong collection side to the InLong buffering side. The dataproxy pulls the mapping between business ids and topic names from the manager module and internally manages producers for multiple topics.
+When dataproxy receives a message, it first buffers it in a local channel and uses a local producer to send the data to the backend, i.e. the cache layer.
+The overall architecture of InLong-DataProxy is based on Apache Flume. On top of that project, inlong-dataproxy extends the source and sink layers and optimizes disaster-tolerant forwarding, improving system stability.
+
+
+# 2. Architecture
+
+![](img/architecture.png)
+
+1. The source layer opens port listeners, implemented with a netty server. Decoded data is sent on to the channel layer.
+2. The channel layer has a selector that chooses which type of channel a message goes to; if memory eventually fills up, the data is spilled to disk.
+3. Data in the channel layer is forwarded by the sink layer, which mainly converts the data to the TDMsg1 format and pushes it to the cache layer (tube is the most common choice here).
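
The master/slave channel selection in step 2 can be sketched as follows. This is an illustrative Python sketch of the failover idea only, not the actual FailoverChannelSelector implementation; the class and channel names are made up:

```python
class Channel:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.messages = []

    def full(self):
        return len(self.messages) >= self.capacity

    def put(self, msg):
        self.messages.append(msg)

def select_channel(master_channels, slave_channels):
    """Prefer a non-full master channel; fall back to a slave channel."""
    for ch in master_channels + slave_channels:
        if not ch.full():
            return ch
    raise RuntimeError("all channels are full")

masters = [Channel("ch-msg5", capacity=1)]
slaves = [Channel("ch-file1", capacity=10)]  # file channels as backup

select_channel(masters, slaves).put("m1")  # goes to ch-msg5
ch = select_channel(masters, slaves)       # the master is now full
print(ch.name)  # ch-file1
```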
+
+
+# 3. DataProxy Configuration
+
+DataProxy supports configurable source-channel-sink pipelines; the configuration file has the same structure as a flume configuration file.
+
+Source configuration example with annotations:
+
+    agent1.sources.tcp-source.channels = ch-msg1 ch-msg2 ch-msg3 ch-more1 ch-more2 ch-more3 ch-msg5 ch-msg6 ch-msg7 ch-msg8 ch-msg9 ch-msg10 ch-transfer ch-back
+    Declares the channels used by this source; every channel referenced in the configuration of this source must be listed here
+
+    agent1.sources.tcp-source.type = org.apache.flume.source.SimpleTcpSource
+    The tcp parsing class; the class name given here is instantiated. SimpleTcpSource mainly initializes the configuration and starts the port listener
+
+    agent1.sources.tcp-source.msg-factory-name = org.apache.flume.source.ServerMessageFactory
+    Constructs the message-parsing handler and sets the read stream handler and write stream handler
+
+    agent1.sources.tcp-source.host = 0.0.0.0
+    The tcp listening address; binds to all network interfaces by default
+
+    agent1.sources.tcp-source.port = 46801
+    The tcp listening port, 46801 by default
+
+    agent1.sources.tcp-source.highWaterMark=2621440
+    A netty concept: sets the netty high-water mark
+
+    agent1.sources.tcp-source.max-msg-length = 524288
+    Limits the size of a single packet; if compressed packets are transferred, this is the compressed size. Limited to 512KB
+
+    agent1.sources.tcp-source.topic = test_token
+    Default topic; if no mapping from groupId to topic is found, messages are sent to this topic
+
+    agent1.sources.tcp-source.attr = m=9
+    Default m value; m is the version of the internal InLong TdMsg protocol
+
+    agent1.sources.tcp-source.connections = 5000
+    Upper limit on concurrent connections; new connections above the limit are closed
+
+    agent1.sources.tcp-source.max-threads = 64
+    Upper limit of netty worker threads; twice the number of CPU cores is generally recommended
+
+    agent1.sources.tcp-source.receiveBufferSize = 524288
+    netty server tcp tuning parameter
+
+    agent1.sources.tcp-source.sendBufferSize = 524288
+    netty server tcp tuning parameter
+
+    agent1.sources.tcp-source.custom-cp = true
+    Whether to use the custom channel processor, which can switch to a backup channel when the main channel is blocked
+
+    agent1.sources.tcp-source.selector.type = org.apache.flume.channel.FailoverChannelSelector
+    The custom channel selector; it differs little from the official one, mainly adding master/slave channel selection logic
+
+    agent1.sources.tcp-source.selector.master = ch-msg5 ch-msg6 ch-msg7 ch-msg8 ch-msg9
+    Specifies the master channels, which are selected first for pushing data. Channels that are defined under channels but not listed in the
+    master, transfer, fileMetric or slaMetric settings are treated as slave channels; they are used when all master channels are full. Slave channels are usually recommended to be of the file channel type
+
+    agent1.sources.tcp-source.selector.transfer = ch-msg5 ch-msg6 ch-msg7 ch-msg8 ch-msg9
+    Specifies the transfer channels, which carry transfer-type data. Transfer here usually means data pushed to clusters other than tube, forwarded only; reserved for future use
+
+    agent1.sources.tcp-source.selector.fileMetric = ch-back
+    Specifies the fileMetric channel, used to receive metric data reported by agents
+
+Channel configuration examples with annotations
+
+memory channel
+
+    agent1.channels.ch-more1.type = memory
+    memory channel type
+
+    agent1.channels.ch-more1.capacity = 10000000
+    Queue size of the memory channel, i.e. the maximum number of buffered messages
+
+    agent1.channels.ch-more1.keep-alive = 0
+
+    agent1.channels.ch-more1.transactionCapacity = 20
+    Maximum batch size for atomic operations; the memory channel takes a lock during use, so batching improves efficiency
+
+file channel
+
+    agent1.channels.ch-msg5.type = file
+    file channel type
+
+    agent1.channels.ch-msg5.capacity = 100000000
+    Maximum number of messages the file channel can buffer
+
+    agent1.channels.ch-msg5.maxFileSize = 1073741824
+    Maximum file size of the file channel, in bytes
+
+    agent1.channels.ch-msg5.minimumRequiredSpace = 1073741824
+    Minimum free space on the disk holding the file channel; setting this prevents the disk from filling up
+
+    agent1.channels.ch-msg5.checkpointDir = /data/work/file/ch-msg5/check
+    file channel checkpoint path
+
+    agent1.channels.ch-msg5.dataDirs = /data/work/file/ch-msg5/data
+    file channel data path
+
+    agent1.channels.ch-msg5.fsyncPerTransaction = false
+    Whether to fsync on every atomic operation; false is recommended, otherwise performance suffers
+
+    agent1.channels.ch-msg5.fsyncInterval = 5
+    Interval at which data is flushed from memory to disk, in seconds
+
+Sink configuration example with annotations
+
+    agent1.sinks.meta-sink-more1.channel = ch-msg1
+    Name of the sink's upstream channel
+
+    agent1.sinks.meta-sink-more1.type = org.apache.flume.sink.MetaSink
+    Sink implementation class; this one pushes messages to the tube cluster
+
+    agent1.sinks.meta-sink-more1.master-host-port-list =
+    List of master nodes of the tube cluster
+
+    agent1.sinks.meta-sink-more1.send_timeout = 30000
+    Timeout when sending to tube
+
+    agent1.sinks.meta-sink-more1.stat-interval-sec = 60
+    Interval of sink metric aggregation, in seconds
+
+    agent1.sinks.meta-sink-more1.thread-num = 8
+    Number of worker threads the sink uses for sending; 8 means 8 concurrent threads
+
+    agent1.sinks.meta-sink-more1.client-id-cache = true
+    agent id cache, used to deduplicate data reported by agents
+
+    agent1.sinks.meta-sink-more1.max-survived-time = 300000
+    Maximum cache time
+
+    agent1.sinks.meta-sink-more1.max-survived-size = 3000000
+    Maximum number of cache entries
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy/img/architecture.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy/img/architecture.png
new file mode 100644
index 0000000..bc46026
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy/img/architecture.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy/quick_start.md
new file mode 100644
index 0000000..18e7df1
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/dataproxy/quick_start.md
@@ -0,0 +1,58 @@
+---
+title: Build and Deployment
+---
+## Deploy DataProxy
+
+All installation files are under the `inlong-dataproxy` directory.
+
+### Configure the tube address and port
+
+`tubemq_master_list` is the TubeMQ master RPC address; separate multiple addresses with commas.
+```
+$ sed -i 's/TUBE_LIST/tubemq_master_list/g' conf/flume.conf
+```
+
+Note that FLUME_HOME in conf/flume.conf is where the proxy stores its intermediate data files.
+
+### Prepare the environment
+
+```
+sh prepare_env.sh
+```
+
+### Configure the manager address
+
+Configuration file: `conf/common.properties`:
+```
+# manager web url 
+manager_hosts=ip:port 
+```
+
+## Start
+
+```
+sh bin/start.sh
+```
+
+## Check the startup status
+
+```
+telnet 127.0.0.1 46801
+```
+
+## Add the DataProxy configuration to InLong-Manager
+
+After DataProxy is installed, the IP of the host running DataProxy must be inserted into the backend database of InLong-Manager.
+
+For the address of the InLong-Manager backend database, refer to the deployment documentation of the InLong-Manager module.
+
+The insert SQL statement is:
+
+```sql
+-- name is the name of the DataProxy, freely chosen
+-- address is the IP of the host running the DataProxy service
+-- port is the port of the DataProxy service, 46801 by default
+insert into data_proxy_cluster (name, address, port, status, is_deleted, create_time, modify_time)
+values ("data_proxy_name", "data_proxy_ip", 46801, 0, 0, now(), now());
+```
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/architecture.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/architecture.md
new file mode 100644
index 0000000..9a01a35
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/architecture.md
@@ -0,0 +1,33 @@
+---
+title: Architecture
+---
+
+## Introduction to Apache InLong Manager
+
++ Positioning: Apache InLong is positioned as a one-stop data-ingestion solution, providing complete technical coverage of big-data ingestion scenarios, from data collection through transport and sorting to landing.
+
++ Platform value: through the built-in management and configuration console, users can configure and manage tasks and monitor metrics, while the platform offers SPI extension points at the major stages of the flow so custom logic can be plugged in as needed. This keeps the platform stable and efficient while lowering the barrier to use.
+
++ Apache InLong Manager is the unified, user-facing management entry of the whole ingestion platform. After logging in, users get different feature and data permissions according to their roles. The pages provide maintenance entries for the platform's base clusters (such as MQ and sorting), where basic information can be viewed and capacity planning adjusted at any time. Business users can create and maintain ingestion tasks, view metrics and reconcile ingested data. When a user creates and starts a task, the corresponding backend service exchanges data with the underlying modules and dispatches, in an appropriate way, the tasks each module must execute, coordinating the backend execution flow end to end.
+
+## Architecture
+
+![](img/inlong-manager.png)
+
+
+## Module responsibilities
+
+| Module | Responsibility |
+| :-----| :---- |
+| manager-common | Common code of the module, such as exception definitions, utility classes and enums |
+| manager-dao | Database operations |
+| manager-service | Business logic layer |
+| manager-web | Interfaces backing the front-end |
+| manager-workflow-engine | Workflow engine |
+
+## System usage flow
+![](img/interactive.jpg)
+
+
+## Data model
+![](img/datamodel.jpg)
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/img/datamodel.jpg b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/img/datamodel.jpg
new file mode 100644
index 0000000..7d0b578
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/img/datamodel.jpg differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/img/inlong-manager.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/img/inlong-manager.png
new file mode 100644
index 0000000..3db4937
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/img/inlong-manager.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/img/interactive.jpg b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/img/interactive.jpg
new file mode 100644
index 0000000..7238d00
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/img/interactive.jpg differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/quick_start.md
new file mode 100644
index 0000000..2ca2d38
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/manager/quick_start.md
@@ -0,0 +1,85 @@
+---
+title: Build and Deployment
+---
+
+# 1. Environment preparation
+- Install and start MySQL 5.7+, copy the `doc/sql/apache_inlong_manager.sql` file from the inlong-manager module to the server hosting the MySQL database
+(for example into the `/data/` directory), and load it with the following commands to initialize the table structures and base data:
+
+  ```shell
+  # Log in to the MySQL server with user name and password:
+  mysql -u xxx -p xxx
+  ...
+  # Create the database
+  CREATE DATABASE IF NOT EXISTS apache_inlong_manager;
+  USE apache_inlong_manager;
+  # Load the SQL file above with the source command:
+  mysql> source /data/apache_inlong_manager.sql;
+  ```
+
+- Following [Build and deploy TubeMQ](https://inlong.apache.org/zh-cn/docs/modules/tubemq/quick_start.html), install and start a Tube cluster;
+
+- Following [Build and deploy TubeMQ Manager](https://inlong.apache.org/zh-cn/docs/modules/tubemq/tubemq-manager/quick_start.html), install and start the
+  TubeManager.
+  
+# 2. Deploy and start manager-web
+
+**manager-web is the backend service the front-end pages talk to.**
+
+## 2.1 Prepare the installation files
+
+The installation files are under the `inlong-manager-web` directory.
+
+## 2.2 Adjust the configuration
+
+Go to the `inlong-manager-web` directory and edit the `conf/application.properties` file:
+
+```properties
+# port of the manager-web service
+server.port=8083
+
+# the profile used by default is dev
+spring.profiles.active=dev
+```
+
+The dev profile is specified above, so next edit the `conf/application-dev.properties` file:
+
+1) Set the database URL, user name and password:
+
+   ```properties
+   spring.datasource.jdbc-url=jdbc:mysql://127.0.0.1:3306/apache_inlong_manager?useSSL=false&allowPublicKeyRetrieval=true&characterEncoding=UTF-8&nullCatalogMeansCurrent=true&serverTimezone=GMT%2b8
+   spring.datasource.username=xxxxxx
+   spring.datasource.password=xxxxxx
+   ```
+
+2) Set the connection information for the Tube and ZooKeeper clusters; for `cluster.zk.root` the default value is recommended:
+
+   ```properties
+   # Manager address of the Tube cluster, used to create topics
+   cluster.tube.manager=http://127.0.0.1:8081
+   # used to manage the Tube brokers
+   cluster.tube.master=127.0.0.1:8000,127.0.0.1:8010
+   # ID of the Tube cluster
+   cluster.tube.clusterId=1
+   
+   # ZK cluster, used to push the Sort configuration
+   cluster.zk.url=127.0.0.1:2181
+   cluster.zk.root=inlong_hive
+   
+   # Sort application name, i.e. the cluster-id parameter of Sort; default "inlong_app"
+   sort.appName=inlong_app
+   ```
+
+## 2.3 Start the service
+
+Enter the unpacked directory and run `sh bin/startup.sh` to start the service, then watch the log with `tailf log/manager-web.log`. A line like the following means the service started successfully:
+
+```shell
+Started InLongWebApplication in 6.795 seconds (JVM running for 7.565)
+```
+
+# 3. Verify the service
+
+Open the following address in a browser to verify the manager-web service:
+
+Address: <http://[manager_web_ip]:[manager_web_port]/api/inlong/manager/doc.html#/home>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/img.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/img.png
new file mode 100644
index 0000000..131eddf
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/img.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/introduction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/introduction.md
new file mode 100644
index 0000000..3b4404c
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/introduction.md
@@ -0,0 +1,42 @@
+---
+title: Architecture
+---
+
+# Introduction
+inlong-sort is a Flink-based ETL system that supports multiple data sources, simple field extraction, and multiple storage systems.
+inlong-sort relies on inlong-manager for managing system metadata; the metadata is stored and synchronized through zk.
+
+# Features
+## Multi-tenancy
+inlong-sort supports multi-tenancy: a single inlong-sort job can contain multiple homogeneous data sources and multiple homogeneous storage systems,
+and different data formats and field-extraction rules can be defined per data source.
+Multi-tenancy relies on the metadata management of inlong-manager; users only need to do the corresponding configuration on the inlong-manager front-end pages.
+Example: with tubemq as the source and hive as the storage, one inlong-sort job can subscribe to tubemq data from multiple topics, and the data of each topic can be written to a different hive cluster.
+
+## Hot metadata updates
+inlong-sort supports hot metadata updates, such as changing the data-source information, the data schema, or the information of the target storage system.
+Note that changing the data-source information may currently cause data loss: after such a change, the system treats it as a brand-new subscription and by default starts consuming from the latest position of the message queue.
+Changing the data schema, the field-extraction rules or the storage information does not cause any data loss and preserves exactly-once semantics.
+
+# Supported data sources
+- inlong-tubemq
+- pulsar
+
+# Supported storage systems
+- hive (currently only the parquet file format)
+- clickhouse
+
+# Limitations
+In the transform stage of the ETL, inlong-sort currently only supports simple field extraction; more complex features are not yet supported.
+
+# Roadmap
+## More data sources
+kafka, etc.
+
+
+## More storage systems
+HBase, Elasticsearch, etc.
+
+
+## More file formats for writing to hive
+sequence file, orc
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/protocol_introduction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/protocol_introduction.md
new file mode 100644
index 0000000..c5504d5
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/protocol_introduction.md
@@ -0,0 +1,25 @@
+---
+title: Zookeeper Configuration
+---
+
+# Preface
+The metadata management of inlong-sort currently relies on inlong-manager.
+
+inlong-sort and inlong-manager exchange metadata through zk.
+
+# Zookeeper structure
+
+![img.png](img.png)
+
+A cluster represents one Flink job. A single cluster can process multiple data flows, but those flows must be homogeneous (same source and sink types).
+
+A dataflow represents one concrete flow, identified by a globally unique id. A flow consists of a source plus a sink.
+
+The upper path in the figure records which dataflow jobs run in which cluster; its nodes hold no metadata.
+
+The lower path stores the concrete information of each dataflow, i.e. the actual metadata.
+
+The metadata-management logic can be found in the class `org.apache.inlong.sort.meta.MetaManager`
+
+# Protocol design
+The concrete protocol can be found in the class `org.apache.inlong.sort.protocol.DataFlowInfo`
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/quick_start.md
new file mode 100644
index 0000000..334dd52
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/sort/quick_start.md
@@ -0,0 +1,64 @@
+---
+title: Build and Deployment
+---
+
+##  Set up the Flink runtime environment
+inlong-sort is currently a Flink application, so a Flink environment must be prepared before running it.
+
+[How to set up a flink environment](https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/deployment/cluster_setup.html "how to set up flink environment")
+
+Since inlong-sort currently depends on Flink 1.9.3, choose `flink-1.9.3-bin-scala_2.11.tgz` when downloading the deployment package.
+
+Once the Flink environment is set up, the Flink web UI can be opened in a browser at the address listed in the `/{flink deployment path}/conf/masters` file.
+
+##  Prepare the installation files
+The installation files are under the `inlong-sort` directory.
+
+##  Start the inlong-sort application
+With the jar produced in the build stage, the inlong-sort application can be started.
+
+[How to submit a flink job](https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/deployment/yarn_setup.html#submit-job-to-flink "how to submit a flink job")
+
+Example:
+
+- `./bin/flink run -c org.apache.inlong.sort.flink.Entrance inlong-sort-core-1.0-SNAPSHOT.jar --cluster-id my_application --zookeeper.quorum 127.0.0.1:2181 --zookeeper.path.root /inlong-sort --source.type tubemq --sink.type hive`
+
+Notes:
+
+- `-c org.apache.inlong.sort.flink.Entrance` is the main class name
+
+- `inlong-sort-core-1.0-SNAPSHOT.jar` is the jar produced in the build stage
+
+##  Required configuration
+- `--cluster-id ` uniquely identifies an inlong-sort job
+- `--zookeeper.quorum` zk quorum
+- `--zookeeper.path.root` zk root path
+- `--source.type` type of the data source; currently supported: "tubemq", "pulsar"
+- `--sink.type` type of the storage system; currently supported: "clickhouse", "hive"
+
+**Configuration example**
+
+`--cluster-id my_application --zookeeper.quorum 192.127.0.1:2181 --zookeeper.path.root /zk_root --source.type tubemq --sink.type hive`
+
+##  All supported configuration options
+|  Name | Required  | Default  | Description   |
+| ------------ | ------------ | ------------ | ------------ |
+|cluster-id   | Y | NA  |  uniquely identifies an inlong-sort job |
+|zookeeper.quorum   | Y  | NA  | zk quorum  |
+|zookeeper.path.root   | Y  | "/inlong-sort"  |  zk root path  |
+|source.type   | Y | NA | type of the data source; currently "tubemq" and "pulsar"  |
+|sink.type   | Y  | NA  | type of the storage system; currently "clickhouse" and "hive" |
+|source.parallelism   | N  | 1  | parallelism of the source  |
+|deserialization.parallelism | N | 1 | parallelism of deserialization  |
+|sink.parallelism   | N  | 1  | parallelism of the sink |
+|tubemq.master.address | N  | NA  | master address for subscribing to tube; lower priority than the metadata on zk  |
+|tubemq.session.key | N |"inlong-sort" | session-key prefix used when subscribing to tube |
+|tubemq.bootstrap.from.max | N | false | whether to start consuming tube from the max offset |
+|tubemq.message.not.found.wait.period | N | 350ms | wait time after tube returns message-not-found |
+|tubemq.subscribe.retry.timeout | N | 300000 | retry timeout for subscribing to tube, in ms |
+|zookeeper.client.session-timeout | N | 60000 | zk session timeout, in ms |
+|zookeeper.client.connection-timeout | N | 15000 | zk connection timeout, in ms |
+|zookeeper.client.retry-wait | N | 5000 | wait time between zk reconnects, in ms |
+|zookeeper.client.max-retry-attempts | N | 3 | maximum number of zk reconnect retries |
+|zookeeper.client.acl | N | "open" | Defines the ACL (open/creator) to be configured on ZK node. The configuration value can be set to "creator" if the ZooKeeper server configuration has the "authProvider" property mapped to use SASLAuthenticationProvider and the cluster is configured to run in secure mode (Kerberos) |
+|zookeeper.sasl.disable | N | false | whether to disable sasl |
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/appendixfiles/http_access_api_definition_cn.xls b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/appendixfiles/http_access_api_definition_cn.xls
new file mode 100644
index 0000000..e834b49
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/appendixfiles/http_access_api_definition_cn.xls differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/architecture.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/architecture.md
new file mode 100644
index 0000000..f2e65da
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/architecture.md
@@ -0,0 +1,84 @@
+---
+title: Architecture
+---
+
+## 1 Architecture of the Apache InLong TubeMQ module
+After years of evolution, a TubeMQ cluster is divided into the following 5 parts:
+![](img/sys_structure.png)
+
+- **Portal**: the portal for external interaction and operations, consisting of an API and a Web part. The API connects to management systems outside the cluster, while the Web is a page-level wrapper of daily operational functions built on the API;
+
+- **Master**: the control part of the cluster, made up of one or more Master nodes. Master HA is achieved through heartbeats between Master nodes and real-time hot-standby switching (which is why users must fill in the addresses of all Master nodes of a cluster when using the TubeMQ Lib). The primary Master manages the state of the whole cluster, resource scheduling, permission checks, metadata queries and so on;
+
+- **Broker**: the store part responsible for actual data storage, made up of mutually independent Broker nodes. Each Broker manages the set of topics on its own node, including topic creation, deletion, modification and query, as well as message storage, consumption, aging, partition expansion and consumption-offset recording for those topics. The external capacity of the cluster, such as topic count, throughput and storage, scales out by adding Broker nodes;
+
+- **Client**: the client part for producing and consuming data, provided as a Lib. The consumer side is used most; it now supports both Push and Pull modes of data retrieval, and consumption can be sequential or filtered. In Pull mode, the business can reset a precise offset through the client to achieve exactly-once consumption; the consumer side also newly provides a Consumer client that can switch clusters without restarting;
+
+- **ZooKeeper**: the ZooKeeper part responsible for offset storage. Its role has been reduced to persisting offsets only; the module is kept for now in view of the upcoming multi-node replication feature.
+
+
+## 2 Characteristics of the Apache InLong TubeMQ module
+- **Pure Java implementation**:
+TubeMQ is developed entirely in Java, making it easy for developers to get familiar with the project and handle problems quickly;
+
+- **Built-in coordination node**:
+TubeMQ uses a self-managed metadata arbitration mechanism: the Master node stores, updates and hot-switches the cluster metadata through the embedded BDB database, is responsible for run-time control and configuration management of the TubeMQ cluster, and exposes interfaces to the outside. Through the Master node, configuring, changing and querying Broker settings in a TubeMQ cluster forms a fully automated closed loop, reducing maintenance complexity;
+
+- **Server-side consumption load balancing**:
+TubeMQ balances consumption load on the server side rather than in the client, improving controllability while simplifying client implementations and making it easier to upgrade the balancing algorithm;
+
+- **Row-level locking**:
+Concurrent Broker read/write operations that involve intermediate state use row-level locks, avoiding duplication problems;
+
+- **Adjusted offset management**:
+Offsets are managed independently by each Broker; ZK is used only for persistent storage (removing the ZK dependency entirely was considered at first, but it is kept for now with future feature extensions in mind);
+
+- **Improved message reading**:
+TubeMQ uses random message reads and adds an in-memory read/write cache to lower message latency, meeting the needs of fast production and consumption (described in detail in later chapters);
+
+- **Consumer behavior control**:
+Policies can dynamically control the behavior of consumers connected to the system in real time, including rate-limiting or pausing consumption of specific businesses under high load and dynamically adjusting the pull frequency;
+
+- **Tiered service control**:
+For the different needs of operations, business characteristics and machine load, operators can use policies to dynamically control the behavior of different consumers, such as consumption permission, tiered latency guarantees, consumption rate limiting and pull-frequency control;
+
+- **System security control**:
+To meet the data-service needs of different businesses and the security requirements of operations, TubeMQ adds a TLS-encrypted transport channel, authentication and authorization for production and consumption, and access-token management for distributed access control;
+
+- **Better resource utilization**:
+TubeMQ reuses connections to reduce connection-resource consumption; logical partitioning reduces the number of file handles the system uses; server-side filtering reduces network bandwidth usage; and decoupling from Zookeeper removes its strong dependency and bottleneck;
+
+- **Client improvements**:
+For ease of use, the client logic is simplified to a minimal feature set. Bad Broker nodes are automatically removed based on a reception-quality statistic over response messages, and a connection attempt on first use avoids blocking when sending large volumes of data (details in later chapters).
+
+
+## 3 Improvements to the Broker file storage scheme
+Any system that persists data on disk faces performance problems caused by the disk, and TubeMQ is no exception; its performance gains largely come from solving how message data is read, written and stored. TubeMQ has made quite a few improvements here: it uses a storage instance as the smallest unit of topic data management, each consisting of one file storage block and one memory cache block, and each topic can be assigned multiple storage instances:
+
+### 3.1 File storage block
+ TubeMQ's disk storage scheme is similar to Kafka's but not identical. As shown below, each file storage block consists of an index file and a data file; a partition is a logical partition within the data file. Each topic maintains its own file-storage-block mechanisms, including the aging period, partition count, readability and writability, etc.
+![](img/store_file.png)
+
+### 3.2 Memory cache block
+ On top of the file storage block we add a separate memory cache block, i.e. a slice of memory in front of the disk writes that isolates the impact of slow disks. Data is first flushed to the memory cache block, which then flushes it to the disk file in batches.
+![](img/store_mem.png)
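
The write path of the memory cache block, buffer in memory then flush to disk in batches, can be sketched as follows (an illustrative Python sketch, not TubeMQ's actual storage code):

```python
class MemoryCacheBlock:
    """Buffers messages in memory and flushes them to 'disk' in batches."""
    def __init__(self, flush_threshold=3):
        self.buffer = []
        self.flush_threshold = flush_threshold
        self.disk = []          # stands in for the data file on disk
        self.flush_count = 0    # number of batch flushes performed

    def append(self, msg):
        self.buffer.append(msg)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.disk.extend(self.buffer)  # one sequential batch write
            self.buffer.clear()
            self.flush_count += 1

cache = MemoryCacheBlock(flush_threshold=3)
for i in range(7):
    cache.append(f"msg-{i}")
cache.flush()  # flush the remainder
print(len(cache.disk), cache.flush_count)  # 7 3
```

Batching turns many small writes into a few sequential ones, which is the point of isolating the slow disk behind a memory buffer.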
+
+
+## 4 Client evolution of the Apache InLong TubeMQ module:
+Businesses interact with TubeMQ mostly on the consumer side, and quite a few improvements have been made there to better fit business characteristics and ease of use:
+
+- **Push and Pull data retrieval modes:**
+	- **Push client:** the first consumer versions of TubeMQ only offered Push-mode consumption, which consumes data quickly and relieves the server, but also brings a problem: since the business cannot control the pull frequency, backlogs easily build up when data cannot be processed in time;
+
+	- **Push client with pause/resume:** after businesses asked to control the push behavior, we added the resumeConsume()/pauseConsume() function pair so the business can emulate a watermark mechanism: when busy, calling pauseConsume() stops the Lib's background pulling, and once recovered, calling resumeConsume() tells the Lib to continue pulling;
+
+	- **Pull client:** a later version added the Pull client. Unlike the Push client, the business rather than the Lib actively pulls messages and confirms whether processing succeeded, leaving the initiative with the business. This raises server-side pressure somewhat, but greatly relieves consumption backlogs.
+
+- **Sequential and filtered consumption:** TubeMQ was initially designed with one topic per business, but in practice many businesses report data through a proxy, distinguishing data by file-ID or table-ID attributes under one topic, so a business had to consume the whole topic just to get its own share. Through the tid field we support filtered consumption on specified attributes, moving the filtering to the server side and reducing outbound traffic as well as client-side processing load.
+
+- **Exactly-once consumption:** to support precise replay when processing data, the client provides precise offset reset: on restart, the business only needs to supply the consumption context of the rollback point, and TubeMQ resumes consumption exactly from that position. This feature is already used with real-time computing frameworks such as Flink, relying on Flink's checkpoint mechanism for exactly-once processing.
+
+
+---
+<a href="#top">Back to top</a>
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/client_rpc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/client_rpc.md
new file mode 100644
index 0000000..ae3205f
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/client_rpc.md
@@ -0,0 +1,197 @@
+---
+title: 客户端RPC
+---
+
+## 1 总体介绍:
+
+这部分介绍内容在/org/apache/inlong/tubemq/corerpc模块下可以找到对应实现,Apache InLong TubeMQ模块的各个节点间(Client、Master、Broker)通过TCP协议长连接交互,其消息采用的是 【二进制 + Protobuf编码】组合方式进行定义,如下图示:
+![](img/client_rpc/rpc_bytes_def.png)
+
+在TCP里我们看到的都是二进制流,我们定义了4字节的msgToken消息头字段RPC\_PROTOCOL\_BEGIN\_TOKEN,用来区分每一条消息以及识别对端的合法性:客户端收到的响应消息不是以该字段开始时,说明连接方不是本系统支持的协议,或者返回数据出现了异常,这时需要关闭该连接,提示错误退出或者重连;紧接着的是4字节的消息序列号serialNo,该字段由请求方生成,通过请求消息携带给服务端,服务器端完成该请求消息服务后通过对应的响应消息原样返回,主要用于客户端关联请求、响应的上下文;4字节的listSize字段表示接下来按照PB编码的数据块个数,即后面跟随的[\<len\>\<data\>]内容的块数,目前协议定义下该字段不为0;[\<len\>\<data\>]是2个字段的组合,表示数据块长度及具体的数据。
+
+为什么会以listSize [\<len\>\<data\>]形式定义PB数据内容?因为在TubeMQ的这个实现中,序列化后的PB数据是通过ByteBuffer对象保存的,Java里ByteBuffer存在一个最大块长8196,超过单个块长度的PB消息内容就需要用多个ByteBuffer保存;序列化到TCP消息的时候,这块没有统计总长,直接按照PB序列化的ByteBuffer列表写入到了消息中。**在多语言实现的时候,这块需要特别注意:** 需要将PB数据内容序列化成块数组(PB编解码里有对应支持)。
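上述帧格式可以用如下Java代码示意(仅为演示字节布局的草图,MSG_TOKEN取值为占位假设,并非真实的RPC\_PROTOCOL\_BEGIN\_TOKEN常量值,编解码逻辑也非SDK真实实现):

```java
import java.nio.ByteBuffer;
import java.util.List;

public class RpcFrameSketch {
    static final int MSG_TOKEN = 0x5F576561; // 占位假设,非真实协议常量

    // 帧布局:msgToken(4) | serialNo(4) | listSize(4) | {len(4) | data} * listSize
    static byte[] encode(int serialNo, List<byte[]> pbBlocks) {
        int total = 12;
        for (byte[] b : pbBlocks) total += 4 + b.length;
        ByteBuffer buf = ByteBuffer.allocate(total);
        buf.putInt(MSG_TOKEN).putInt(serialNo).putInt(pbBlocks.size());
        for (byte[] b : pbBlocks) {
            buf.putInt(b.length);
            buf.put(b);
        }
        return buf.array();
    }

    // 校验msgToken后取出serialNo,调用方用它关联本地保存的请求上下文
    static int readSerialNo(byte[] frame) {
        ByteBuffer buf = ByteBuffer.wrap(frame);
        if (buf.getInt() != MSG_TOKEN) {
            throw new IllegalStateException("unknown protocol, close connection");
        }
        return buf.getInt();
    }
}
```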
+
+
+## 2 PB格式编码:
+
+PB格式编码分为RPC框架定义,到Master的消息编码和到Broker的消息编码三个部分,大家采用protobuf直接编译就可以获得不同语言的编解码,使用起来非常的方便:
+![](img/client_rpc/rpc_proto_def.png)
+
+RPC.proto定义了6个结构,分为2大类:请求消息与响应消息,响应消息里又分为正常的响应返回以及抛异常情况下的响应返回:
+![](img/client_rpc/rpc_pbmsg_structure.png)
+
+请求消息编码及响应消息解码可以参考NettyClient.java类实现,这个部分的定义存在一些改进空间,具体见【[TUBEMQ-109](https://issues.apache.org/jira/browse/TUBEMQ-109)】,但由于兼容性考虑会逐步替换;按照当前proto版本实现,至少在1.0.0版本前交互不是问题,但1.0.0版本时会考虑采用新协议,各个SDK的协议实现模块需要预留出改进空间。以请求消息填写为例,RpcConnHeader等相关结构如下:
+![](img/client_rpc/rpc_conn_detail.png)
+
+其中flag标记的是该消息是否为请求消息,后面3个字段标记的是消息跟踪的相关内容,目前没有使用;相关的服务类型、协议版本等是固定的映射关系;比较关键的一个参数RequestBody.timeout,表示一个请求从被服务器收到到实际处理的最大允许等待时长,超过就丢弃,目前缺省为10秒。请求填写具体见如下部分:
+![](img/client_rpc/rpc_header_fill.png)
+
+
+## 3 客户端的PB请求响应交互图:
+
+### 3.1 Producer交互图:
+
+Producer在系统中一共4对指令,到master是要做注册,心跳,退出操作;到broker只有发送消息:
+![](img/client_rpc/rpc_producer_diagram.png)
+
+从这里我们可以看到,Producer的实现逻辑就是从Master侧获取指定Topic对应的分区列表等元数据信息,获得这些信息后按照客户端的规则选择分区,并把消息发送给对应的Broker,而到Broker的发送则直接通过TCP连接进行。有同学会疑惑这种不注册直接发消息的方式是否不安全:最初的考虑是内部使用时尽可能地接纳消息;后来考虑安全问题,我们在这个基础上增加了授权信息携带,在服务端进行认证和授权检查,解决客户端绕开Master直连以及无授权乱发消息的情况,但该检查只会在严格环境开启。生产端这块 **多语言实现的时候需要注意:**
+
+1. 我们的Master是以主备实时热切换方式运行的,切换时是通过RspExceptionBody携带信息,这时需要按照字符串查找方式检索关键字"StandbyException",如果是这类异常,要主动切换到其他的Master节点上进行重注册;已有相关issue计划调整该问题;
+
+2. 生产过程中遇到Master连接失败时,比如超时、连接被动断开等,Producer要进行重注册;
+
+3. Producer要注意提前做好到Broker的预连接操作:后端集群的Broker节点可达上百台,再叠加每个Broker有十个左右的分区,分区记录就可能存在上千条,SDK从Master收到元数据信息后,要提前对暂未建链的Broker进行连接建立操作;
+
+4. Producer到Broker的连接要注意异常检测:长期运行场景下,要能检测出Broker坏点;对长期不发消息的Broker,要将其连接回收,避免运行不稳定。
+
+
+### 3.2 Consumer交互图:
+
+Consumer一共7对指令,到Master是注册、心跳、退出操作;到Broker包括注册、注销、心跳、拉取消息、确认消息共4对,其中到Broker的注册和注销是同一个命令,用不同的状态码表示:
+![](img/client_rpc/rpc_consumer_diagram.png)
+
+从上图我们可以看到,Consumer首先要注册到Master,但注册到Master时并没有立即获取到元数据信息,原因是TubeMQ采用的是服务器端负载均衡模式,客户端需要等待服务器派发消费分区信息;Consumer到Broker需要进行注册、注销操作,原因在于分区是独占消费的,即同一时刻同一分区只能被同组的一个消费者消费,为了解决这个问题,需要客户端进行注册,获得分区的消费权限;消息拉取与消费确认需要成对出现,虽然协议支持多次拉取后最后一次统一确认,但客户端可能因超时丢失分区的消费权限,从而触发数据回滚重复消费,数据积攒得越多重复消费的量就越多,所以按照1:1提交比较合适。
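上述"拉取与确认成对出现"的消费节奏可以用如下Java代码示意(其中StubPartition是为演示而写的简化桩,模拟分区队列与确认回滚,并非TubeMQ SDK真实API):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class PullConfirmSketch {
    // 简化桩:模拟Broker侧一个分区的消息队列与offset确认
    static class StubPartition {
        private final Deque<String> messages = new ArrayDeque<>();
        private String inFlight;
        int confirmedCount = 0;

        StubPartition(List<String> msgs) { messages.addAll(msgs); }

        // 拉取一条消息;返回null表示当前无消息
        String pull() {
            inFlight = messages.poll();
            return inFlight;
        }

        // 确认上一条拉取:成功则提交,失败则回滚重新入队(对应重复消费)
        void confirm(boolean success) {
            if (inFlight == null) return;
            if (success) confirmedCount++;
            else messages.addFirst(inFlight);
            inFlight = null;
        }
    }

    // 按1:1的拉取/确认节奏消费完分区内全部消息
    static int consumeAll(StubPartition p) {
        int processed = 0;
        while (p.pull() != null) {
            processed++;     // 此处代表业务处理逻辑
            p.confirm(true); // 处理完立即确认,避免超时回滚造成重复消费
        }
        return processed;
    }
}
```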
+
+## 4 客户端功能集合:
+
+| **特性** | **Java** | **C/C++** | **Go** | **Python** | **Rust** | **备注** |
+| --- | --- | --- | --- | --- | --- | --- |
+| TLS | ✅ | | | | | |
+| 认证授权 | ✅ | | | | | |
+| 防绕Master生产消费 | ✅ | | | | | 防止分布式系统里客户端不经过Master的认证授权即访问Broker |
+| Effectively-Once | ✅ | | | | | |
+| 精确指定分区Offset消费 | ✅ | | | | | |
+| 单个组消费多个Topic消费 | ✅ | | | | | |
+| 服务器过滤消费 | ✅ | | | | | |
+| 生产节点坏点自动屏蔽 | ✅ | | | | | 通过算法检测坏点,自动屏蔽故障Broker的数据发送 |
+| 断链自动重连 | ✅ | | | | | |
+| 空闲连接自动回收 | ✅ | | | | | 超过指定时间不活跃即回收,主要是生产端,比如3分钟 |
+| 连接复用 | ✅ | | | | | 连接按照sessionFactory共用或者不共用 |
+| 非连接复用 | ✅ | | | | | |
+| 异步生产 | ✅ | | | | | |
+| 同步生产 | ✅ | | | | | |
+| Pull消费 | ✅ | | | | | |
+| Push消费 | ✅ | | | | | |
+| 消费限流 | ✅ | | | | | 控制单位时间内消费者消费的数据量 |
+| 消费拉取频控 | ✅ | | | | | 控制消费者拉取消息的频度 |
+
+
+## 5 客户端功能CaseByCase实现介绍:
+
+### 5.1 客户端与服务器端RPC交互过程:
+
+----------
+
+![](img/client_rpc/rpc_inner_structure.png)
+
+如上图示,客户端要维持已发请求消息的本地保存,直到RPC超时,或者收到响应消息,响应消息通过请求发送时生成的SerialNo关联;从服务器端收到的Broker信息,以及Topic信息,SDK要保存在本地,并根据最新的返回信息进行更新,以及定期的上报给服务器端;SDK要维持到Master或者Broker的心跳,如果发现Master反馈注册超时错误时,要进行重注册操作;SDK要基于Broker进行连接建立,同一个进程不同对象之间,要允许业务进行选择,是支持按对象建立连接,还是按照进程建立连接。
+
+### 5.2 Producer到Master注册:
+
+----------
+![](img/client_rpc/rpc_producer_register2M.png)
+
+**ClientId**:Producer需要在启动时候构造一个ClientId,目前的构造规则是:
+
+Java的SDK版本里ClientId = 节点IP地址(IPV4) + "-" + 进程ID + "-" + createTime + "-" + 本进程内第n个实例 + "-" + 客户端版本ID【 + "-" + SDK实现语言】,建议其他语言增加如上标记,以便于问题排查。该ID值在Producer生命周期内有效;
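按上述规则拼接ClientId的逻辑可以示意如下(仅为演示拼接格式,方法名与测试取值均为示例假设,非SDK真实代码):

```java
public class ClientIdSketch {
    // 按正文规则拼接:IP-进程ID-createTime-实例序号-版本ID-实现语言
    static String buildClientId(String ip, long pid, long createTime,
                                int instanceIndex, String version, String lang) {
        return ip + "-" + pid + "-" + createTime + "-"
                + instanceIndex + "-" + version + "-" + lang;
    }
}
```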
+
+**TopicList**:是用户发布的Topic列表,Producer在初始化时候会提供初始的待发布数据的Topic列表,在运行中也允许业务通过publish函数延迟的增加新的Topic,但不支持运行中减少topic;
+
+**brokerCheckSum**:客户端本地保存的Broker元数据信息的校验值,初始启动时候Producer本地是没有该数据的,取-1值;SDK需要在每次请求时把上次的brokerCheckSum值携带上,Master通过比较该值来确定客户端的元数据是否需要更新;
+
+**hostname**:Producer所在机器的IPV4地址值;
+
+**success**:操作是否成功,成功为true,失败为false;
+
+**errCode**:如果失败,错误码是多少;目前错误码是大类错误码,具体错误原因需要由errMsg判明;
+
+**errMsg**:具体的错误信息,如果出错,SDK需要把具体错误信息打印出来;
+
+**authInfo**:认证授权信息,如果用户配置里填写了启动认证处理,则进行填写;如果是要求认证,则按照用户名及密码的签名进行上报,如果是运行中,比如心跳时,如果Master强制认证处理,则按照用户名及密码签名上报,没有的话则根据之前交互时Master提供的授权Token进行认证;该授权Token在生产时候也用于到Broker的消息生产时携带。
+
+**brokerInfos**:Broker元数据信息,该字段里主要是Master反馈给Producer的整个集群的Broker信息列表;其格式如下:
+
+![](img/client_rpc/rpc_broker_info.png)
+
+**authorizedInfo**:Master提供的授权信息,格式如下:
+
+![](img/client_rpc/rpc_master_authorizedinfo.png)
+
+**visitAuthorizedToken**:防客户端绕开Master的访问授权Token,如果有该数据,SDK要保存本地,并且在后续访问Broker时携带该信息;如果后续心跳时该字段有变更,则需要更新本地缓存的该字段数据;
+
+**authAuthorizedToken**:认证通过的授权Token,如果有该字段数据,要保存,并且在后续访问Master及Broker时携带该字段信息;如果后续心跳时该字段有变更,则需要更新本地缓存的该字段数据;
+
+
+### 5.3 Producer到Master保持心跳:
+
+----------
+
+![](img/client_rpc/rpc_producer_heartbeat2M.png)
+
+**topicInfos**:SDK发布的Topic对应的元数据信息,包括分区信息以及所在的Broker,具体解码方式如下,由于元数据非常的多,如果将对象数据原样透传所产生的出流量会非常的大,所以我们通过编码方式做了改进:
+
+![](img/client_rpc/rpc_convert_topicinfo.png)
+
+**requireAuth**:标识Master之前的授权访问码(authAuthorizedToken)过期,要求SDK下一次请求,进行用户名及密码的签名信息上报;
+
+### 5.4 Producer到Master关闭退出:
+
+----------
+
+![](img/client_rpc/rpc_producer_close2M.png)
+
+需要注意的是,如果认证开启,关闭会做认证,以避免外部干扰操作。
+
+### 5.5 Producer到Broker发送消息:
+
+----------
+
+该部分的内容主要与Message的定义相关联,其中:
+
+![](img/client_rpc/rpc_producer_sendmsg2B.png)
+
+**Data**是Message的二进制字节流:
+
+![](img/client_rpc/rpc_message_data.png)
+
+**sentAddr**是SDK所在的本机IPv4地址转成32位的数字ID;
+
+**msgType**是过滤的消息类型,msgTime是SDK发消息时的消息时间,其值来源于构造Message时通过putSystemHeader填写的值,在Message里有对应的API获取;
+
+**requireAuth**:到Broker进行数据生产的要求认证操作,考虑性能问题,目前未生效,发送消息里填写的authAuthorizedToken值以Master侧提供的值为准,并且随Master侧改变而改变。
+
+### 5.6 分区负载均衡过程:
+
+----------
+
+Apache InLong TubeMQ模块目前采用的是服务器端负载均衡模式,均衡过程由服务器管理维护;后续版本会增加客户端负载均衡模式,形成2种模式共存的情况,由业务根据需要选择不同的均衡方式。
+
+**服务器端负载均衡过程如下**:
+
+- Master进程启动后,会启动负载均衡线程balancerChore,balancerChore定时检查当前已注册的消费组,进行负载均衡处理。过程简单来说就是将消费组订阅的分区均匀地分配给已注册的客户端,并定期检测客户端当前分区数是否超过预定的数量,如果超过则将多余的分区拆分给其他数量少的客户端。具体过程:首先Master检查当前消费组是否需要做负载均衡,如果需要,则将消费组订阅的Topic集合的所有分区,以及这个消费组的所有消费者ID进行排序,然后按照消费组的所有分区数以及客户端个数进行整除及取模,获得每个客户端至多订阅的分区数;然后给每个客户端分配分区,并在消费者订阅时将分区信息在心跳响应里携带;如果客户端当前已有的分区偏多,则给该客户端一条分区释放指令,将该分区从该消费者这里释放,同时给被分配的消费者一条分区分配的指令,告知分区分配给了对应客户端,具体指令如下:
+
+![](img/client_rpc/rpc_event_proto.png)
+
+**rebalanceId**:是一个自增ID的long数值,表示负载均衡的轮次;
+
+**opType**:为操作码,值在EventType中定义,目前已实现的操作码只有释放连接、建立连接等4个,其中only\_xxx目前没有扩展开;收到心跳里携带的负载均衡信息后,Consumer根据这个值做对应的业务操作;
+
+![](img/client_rpc/rpc_event_proto_optype.png)
+
+**status**:表示该事件状态,在EventStatus里定义。Master构造好负载均衡处理任务时,设置指令状态为TODO;客户端心跳请求过来时,Master将该任务写到响应消息里,设置该指令状态为PROCESSING;客户端从心跳响应里收到负载均衡指令,进行实际的连接或者断链操作,操作结束后,设置指令状态为DONE,并等待下一次心跳请求发出时反馈给Master;
+
+![](img/client_rpc/rpc_event_proto_status.png)
+
+**subscribeInfo**表示分配的分区信息,格式如注释提示。
+
+
+- 消费端操作:消费端收到Master返回的元数据信息后,就进行连接建立和释放操作(见上面opType的注解),在连接建立好后,返回事件的处理结果给到Master,从而完成收到任务、执行任务以及返回任务处理结果的闭环;需要注意的是,负载均衡的注册是尽力而为的操作,如果消费端发起连接操作,但之前占用分区的消费者还没有来得及退出,会收到PARTITION\_OCCUPIED的错误响应,这时就将该分区从尝试队列删除;而之前的分区消费者在收到对应响应后仍会做释放操作,从而在下一轮负载均衡时,分配到这个分区的消费者可以成功注册到分区上。
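上文负载均衡过程中"整除及取模"的配额计算,可以用如下Java代码示意(仅为演示分配算术的草图,并非Master的真实实现):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RebalanceSketch {
    // 将排序后的分区均匀分配给排序后的消费者:
    // 基数 = 分区数 / 消费者数,余数部分由排序靠前的消费者各多分1个分区
    static Map<String, List<String>> assign(List<String> partitions, List<String> consumers) {
        Collections.sort(partitions);
        Collections.sort(consumers);
        int base = partitions.size() / consumers.size();
        int remainder = partitions.size() % consumers.size();
        Map<String, List<String>> result = new LinkedHashMap<>();
        int idx = 0;
        for (int i = 0; i < consumers.size(); i++) {
            int quota = base + (i < remainder ? 1 : 0); // 前remainder个客户端多分1个
            result.put(consumers.get(i),
                    new ArrayList<>(partitions.subList(idx, idx + quota)));
            idx += quota;
        }
        return result;
    }
}
```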
+
+---
+<a href="#top">Back to top</a>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/clients_java.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/clients_java.md
new file mode 100644
index 0000000..bda5597
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/clients_java.md
@@ -0,0 +1,231 @@
+---
+title: JAVA SDK API介绍
+---
+
+
+## 1 基础对象接口介绍:
+
+### 1.1 MessageSessionFactory(消息会话工厂):
+
+TubeMQ 采用MessageSessionFactory(消息会话工厂)来管理网络连接,并根据客户端是否复用连接,细分为TubeSingleSessionFactory(单连接会话工厂)和TubeMultiSessionFactory(多连接会话工厂)2个类。其实现逻辑大家可以从代码中看到:单连接会话通过定义静态的clientFactory,实现了进程内不同客户端连接相同目标服务器时底层只建立一条物理连接;多连接会话里定义的clientFactory为非静态,从而实现同进程内通过不同会话工厂创建的客户端建立不同的物理连接。通过这种构造解决连接创建过多的问题,业务可以根据自身需要选择不同的消息会话工厂类,一般情况下我们使用单连接会话工厂类。
+
+ 
+
+### 1.2 MasterInfo:
+
+TubeMQ的Master地址信息对象,该对象的特点是支持配置多个Master地址,由于TubeMQ Master借助BDB的存储能力进行元数据管理,以及服务HA热切能力,Master的地址相应地就需要配置多条信息。该配置信息支持IP、域名两种模式,由于TubeMQ的HA是热切模式,客户端要保证到各个Master地址都是连通的。该信息在初始化TubeClientConfig类对象和ConsumerConfig类对象时使用,考虑到配置的方便性,我们将多条Master地址构造成“ip1:port1,ip2:port2,ip3:port3”格式并进行解析。
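上述"ip1:port1,ip2:port2,ip3:port3"格式地址串的解析可以示意如下(仅为演示该格式的拆解逻辑,并非MasterInfo类的真实实现):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MasterAddrSketch {
    // 解析"ip1:port1,ip2:port2,ip3:port3"格式的Master地址串,支持IP或域名
    static Map<String, Integer> parse(String masterAddrList) {
        Map<String, Integer> result = new LinkedHashMap<>();
        for (String item : masterAddrList.split(",")) {
            String[] parts = item.trim().split(":");
            if (parts.length != 2) {
                throw new IllegalArgumentException("invalid address: " + item);
            }
            result.put(parts[0], Integer.parseInt(parts[1]));
        }
        return result;
    }
}
```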
+
+ 
+
+### 1.3 TubeClientConfig:
+
+MessageSessionFactory(消息会话工厂)初始化类,用来携带创建网络连接信息、客户端控制参数信息的对象类,包括RPC时长设置、Socket属性设置、连接质量检测参数设置、TLS参数设置、认证授权信息设置等信息。
+
+ 
+
+### 1.4 ConsumerConfig:
+
+ConsumerConfig类是TubeClientConfig类的子类,它是在TubeClientConfig类基础上增加了Consumer类对象初始化时候的参数携带,因而在一个既有Producer又有Consumer的MessageSessionFactory(消息会话工厂)类对象里,会话工厂类的相关设置以MessageSessionFactory类初始化的内容为准,Consumer类对象按照创建时传递的初始化类对象为准。在consumer里又根据消费行为的不同分为Pull消费者和Push消费者两种,两种特有的参数通过参数接口携带“pull”或“push”不同特征进行区分。
+
+ 
+
+### 1.5 Message:
+
+Message类是TubeMQ里传递的消息对象类,业务设置的data会从生产端原样传递给消息接收端,attribute内容是与TubeMQ系统共用的字段,业务填写的内容不会丢失和改写,但该字段有可能会新增TubeMQ系统填写的内容,并在后续的版本中,新增的TubeMQ系统内容有可能去掉而不被通知。该部分需要注意的是Message.putSystemHeader(final String msgType, final String msgTime)接口,该接口用来设置消息的消息类型和消息发送时间,msgType用于消费端过滤用,msgTime用做TubeMQ进行数据收发统计时消息时间统计维度用。
+
+ 
+
+### 1.6 MessageProducer:
+
+消息生产者类,该类完成消息的生产,消息发送分为同步发送和异步发送两种接口,目前消息采用Round Robin方式发往后端服务器,后续这块将考虑按照业务指定的算法进行后端服务器选择方式进行生产。该类使用时需要注意的是,我们支持在初始化时候全量Topic指定的publish,也支持在生产过程中临时增加对新的Topic的publish,但临时增加的Topic不会立即生效,因而在使用新增Topic前,要先调用isTopicCurAcceptPublish接口查询该Topic是否已publish并且被服务器接受,否则有可能消息发送失败。
+
+ 
+
+### 1.7 MessageConsumer:
+
+该类有两个子类PullMessageConsumer、PushMessageConsumer,通过这两个子类的包装,完成了对业务侧的Pull和Push语义。实际上TubeMQ是采用Pull模式与后端服务进行交互,为了便于业务的接口使用,我们进行了封装,大家可以看到其差别在于Push在启动时初始化了一个线程组,来完成主动的数据拉取操作。需要注意的地方在于:
+
+- a. completeSubscribe接口:带参数的接口支持客户端对指定的分区进行指定offset消费,不带参数的接口则按照ConsumerConfig.setConsumeModel(int consumeModel)接口设置的消费模式来消费数据;
+	
+- b. 对subscribe接口,其用来定义该消费者的消费目标,而filterConds参数表示对待消费的Topic是否进行过滤消费,以及如果做过滤消费时要过滤的msgType消息类型值。如果不需要进行过滤消费,则该参数填为null,或者空的集合值。
+
+ 
+
+------
+
+
+
+## 2 接口调用示例:
+
+### 2.1 环境准备:
+
+TubeMQ开源包org.apache.inlong.tubemq.example里提供了生产和消费的具体代码示例,这里我们通过一个实际的例子来介绍如何填参和调用对应接口。首先我们搭建一个带3个Master节点的TubeMQ集群,3个Master的地址分别为test_1.domain.com、test_2.domain.com、test_3.domain.com,端口均为8080;在该集群里我们建立了若干个Broker,并在这些Broker上创建了topic_1、topic_2、topic_3共3个Topic配置;然后我们启动对应的Broker,等待Consumer和Producer的创建。
+
+ 
+
+### 2.2 创建Consumer:
+
+见包org.apache.inlong.tubemq.example.MessageConsumerExample类文件,Consumer是一个包含网络交互协调的客户端对象,需要做初始化并且长期驻留内存重复使用的模型,它不适合单次拉起消费的场景。如下图示,我们定义了MessageConsumerExample封装类,在该类中定义了进行网络交互的会话工厂MessageSessionFactory类,以及用来做Push消费的PushMessageConsumer类:
+
+#### 2.2.1 初始化MessageConsumerExample类:
+
+1. 首先构造一个ConsumerConfig类,填写初始化信息,包括本机IPv4地址、Master集群地址、消费组组名信息,这里Master地址信息传入值为:"test_1.domain.com:8080,test_2.domain.com:8080,test_3.domain.com:8080";
+
+2. 然后设置消费模式:我们设置首次从队列尾消费,后续接续消费模式;
+
+3. 然后设置Push消费时回调函数个数;
+
+4. 进行会话工厂初始化操作:该场景里我们选择建立单链接的会话工厂;
+
+5. 最后在会话工厂里创建Push模式的消费者:
+
+```java
+public final class MessageConsumerExample {
+    private static final Logger logger = LoggerFactory.getLogger(MessageConsumerExample.class);
+    private static final MsgRecvStats msgRecvStats = new MsgRecvStats();
+    private final String masterHostAndPort;
+    private final String localHost;
+    private final String group;
+    private PushMessageConsumer messageConsumer;
+    private MessageSessionFactory messageSessionFactory;
+    
+    public MessageConsumerExample(String localHost, String masterHostAndPort, String group, int fetchCount)
+            throws Exception {
+        this.localHost = localHost;
+        this.masterHostAndPort = masterHostAndPort;
+        this.group = group;
+        ConsumerConfig consumerConfig = new ConsumerConfig(this.localHost,this.masterHostAndPort, this.group);
+        consumerConfig.setConsumeModel(0);
+        if (fetchCount > 0) {
+            consumerConfig.setPushFetchThreadCnt(fetchCount);
+        }
+        this.messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+        this.messageConsumer = messageSessionFactory.createPushConsumer(consumerConfig);
+    }
+}
+```
+
+
+#### 2.2.2 订阅Topic:
+
+我们没有采用指定Offset消费的模式进行订阅,也没有过滤需求,因而我们在如下代码里只做了Topic的指定,对应的过滤项集合我们传的是null值,同时,对于不同的Topic,我们可以传递不同的消息回调处理函数;我们这里订阅了3个topic,topic_1,topic_2,topic_3,每个topic分别调用subscribe函数进行对应参数设置:
+
+```java
+public void subscribe(final Map<String, TreeSet<String>> topicStreamIdMap) throws TubeClientException {
+    for (Map.Entry<String, TreeSet<String>> entry : topicStreamIdMap.entrySet()) {
+        this.messageConsumer.subscribe(entry.getKey(), entry.getValue(),
+                new DefaultMessageListener(entry.getKey()));
+    }
+    messageConsumer.completeSubscribe();
+}
+```
+
+
+#### 2.2.3 进行消费:
+
+到此,对集群里对应Topic的订阅就已完成。系统运行开始后,数据将不断地通过回调函数推送到业务层进行处理:
+
+```java
+public class DefaultMessageListener implements MessageListener {
+
+    private String topic;
+
+    public DefaultMessageListener(String topic) {
+        this.topic = topic;
+    }
+
+    public void receiveMessages(PeerInfo peerInfo, final List<Message> messages) throws InterruptedException {
+        if (messages != null && !messages.isEmpty()) {
+            msgRecvStats.addMsgCount(this.topic, messages.size());
+        }
+    }
+
+    public Executor getExecutor() {
+        return null;
+    }
+
+    public void stop() {
+    }
+}
+```
+
+
+### 3 创建Producer:
+
+现网环境中业务的数据都是通过代理层来做接收汇聚的,代理层包装了比较多的异常处理,大部分的业务都没有也不会接触到TubeSDK的Producer类。考虑到业务自建集群使用TubeMQ的场景,这里提供对应的使用demo,见包org.apache.inlong.tubemq.example.MessageProducerExample类文件供参考。**需要注意**的是,业务除非使用数据平台的TubeMQ集群做MQ服务,否则仍要按照现网的接入流程使用代理层来进行数据生产:
+
+#### 3.1 初始化MessageProducerExample类:
+
+和Consumer的初始化类似,也是构造了一个封装类,定义了一个会话工厂,以及一个Producer类,生产端的会话工厂初始化通过TubeClientConfig类进行,如之前所介绍的,ConsumerConfig类是TubeClientConfig类的子类,虽然传入参数不同,但会话工厂是通过TubeClientConfig类完成的初始化处理:
+
+```java
+public final class MessageProducerExample {
+
+    private static final Logger logger =  LoggerFactory.getLogger(MessageProducerExample.class);
+    private static final ConcurrentHashMap<String, AtomicLong> counterMap =
+            new ConcurrentHashMap<String, AtomicLong>();
+    String[] arrayKey = {"aaa", "bbb", "ac", "dd", "eee", "fff", "gggg", "hhhh"};
+    private MessageProducer messageProducer;
+    private TreeSet<String> filters = new TreeSet<>();
+    private int keyCount = 0;
+    private int sentCount = 0;
+    private MessageSessionFactory messageSessionFactory;
+
+    public MessageProducerExample(final String localHost, final String masterHostAndPort) throws Exception {
+        filters.add("aaa");
+        filters.add("bbb");
+        TubeClientConfig clientConfig = new TubeClientConfig(localHost, masterHostAndPort);
+        this.messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+        this.messageProducer = this.messageSessionFactory.createProducer();
+    }
+}
+```
+
+
+#### 3.2 发布Topic:
+
+```java
+public void publishTopics(List<String> topicList) throws TubeClientException {
+    this.messageProducer.publish(new TreeSet<String>(topicList));
+}
+```
+
+
+#### 3.3 进行数据生产:
+
+如下所示,则为具体的数据构造和发送逻辑,构造一个Message对象后调用sendMessage()函数发送即可,有同步接口和异步接口选择,依照业务要求选择不同接口;需要注意的是该业务根据不同消息调用message.putSystemHeader()函数设置消息的过滤属性和发送时间,便于系统进行消息过滤消费,以及指标统计用。完成这些,一条消息即被发送出去,如果返回结果为成功,则消息被成功的接纳并且进行消息处理,如果返回失败,则业务根据具体错误码及错误提示进行判断处理,相关错误详情见《TubeMQ错误信息介绍.xlsx》:
+
+```java
+public void sendMessageAsync(int id, long currtime, String topic, byte[] body, MessageSentCallback callback) {
+    Message message = new Message(topic, body);
+    SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMddHHmm");
+    long currTimeMillis = System.currentTimeMillis();
+    message.setAttrKeyVal("index", String.valueOf(1));
+    String keyCode = arrayKey[sentCount++ % arrayKey.length];
+    message.putSystemHeader(keyCode, sdf.format(new Date(currTimeMillis))); 
+    if (filters.contains(keyCode)) {
+        keyCount++;
+    }
+    try {
+        message.setAttrKeyVal("dataTime", String.valueOf(currTimeMillis));
+        messageProducer.sendMessage(message, callback);
+    } catch (TubeClientException e) {
+        logger.error("Send message failed!", e);
+    } catch (InterruptedException e) {
+        logger.error("Send message failed!", e);
+    }
+}
+```
+
+
+#### 3.4 Producer的另一实现类MAMessageProducerExample关注点:
+
+该类初始化与MessageProducerExample类不同,采用的是TubeMultiSessionFactory多会话工厂类进行的连接初始化,该demo提供了如何使用多会话工厂类的特性,可以用于通过多个物理连接提升系统吞吐量的场景(TubeMQ通过连接复用模式来减少物理连接资源的使用),恰当使用可以提升系统的生产性能。在Consumer侧也可以通过多会话工厂进行初始化,但考虑到消费是长时间过程处理,对连接资源的占用比较小,消费场景不推荐使用。
+
+
+至此,整个生产和消费的示例已经介绍完,你可以下载代码并编译运行,看看是不是这么简单😊
+
+---
+<a href="#top">Back to top</a>
+
+ 
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/configure_introduction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/configure_introduction.md
new file mode 100644
index 0000000..fee790b
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/configure_introduction.md
@@ -0,0 +1,151 @@
+---
+title: 配置参数介绍
+---
+
+## 1 TubeMQ服务端配置文件说明:
+
+TubeMQ服务端包括Master和Broker共2个模块,Master又包含供外部页面访问的Web前端模块(该部分存放在resources中)。考虑到实际部署时2个模块常常部署在同1台机器中,TubeMQ将2个模块3个部分的内容打包在一起交付给运维使用;客户端则不包含服务端部分,以单独的lib包交付给业务使用。
+
+Master与Broker采用ini配置文件格式,相关配置文件分别放置在tubemq-server-3.9.0/conf/目录的master.ini和broker.ini文件中:
+
+![](img/configure/conf_ini_pos.png)
+
+他们的配置是按照配置单元集合来定义的,Master配置由必选的[master]、[zookeeper]、[bdbStore]和可选的[tlsSetting]一共4个配置单元组成,Broker配置由必选的[broker]、[zookeeper]和可选的[tlsSetting]一共3个配置单元组成;实际使用时,大家也可将两个配置文件内容合并放置为一个ini文件。
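按上述单元划分,一个最小化的master.ini配置可以示意如下(主机地址、路径等取值仅为演示假设,实际取值请按部署环境及下文2.1节的字段说明填写):

```ini
[master]
hostName=10.0.0.1
port=8715
webPort=8080
webResourcePath=/opt/tubemq-server/resources

[zookeeper]
zkServerAddr=localhost:2181
zkNodeRoot=/tubemq

[bdbStore]
bdbRepGroupName=tubemqMasterGroup
bdbNodeName=tubemqMasterGroupNode1
bdbNodePort=9001
bdbEnvHome=/opt/tubemq-server/var/metadata
bdbHelperHost=10.0.0.1:9001
```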
+
+Master除了后端系统配置文件外,还在resources里存放了Web前端页面模块,resources的根目录velocity.properties文件为Master的Web前端页面配置文件。
+
+![](img/configure/conf_velocity_pos.png)
+
+
+## 2 配置项详情:
+
+### 2.1 master.ini文件中关键配置内容说明:
+
+| 配置单元 | 配置项 | 是否必选 | 值类型 | 配置说明 |
+| --- | --- | --- | --- | --- |
+| [master] | Master系统运行主配置单元,必填单元,值固定为"[master]" |
+| [master] | hostName | 是 | String | Master对外服务的主机地址,必填项,必须在网卡中已配置,处于启用状态,非回环且不能为127.0.0.1的IP |
+| port | 否 | int | Master监听的端口,可选项,缺省值为8715 |
+| webPort | 否 | int | Master Web控制台的访问端口,缺省值为8080 |
+| webResourcePath | 是 | String | Master Web Resource部署绝对路径,必填项,该值设置不正确时Web页面将不能正常显示 |
+| confModAuthToken | 否 | String | 通过Master的Web或API做变更操作(包括增、删、改配置,以及变更Master及管理的Broker状态)时操作者需要提供的授权Token,该值可选,缺省为"ASDFGHJKL" |
+| firstBalanceDelayAfterStartMs | 否 | long | Master启动至首次开始Rebalance的时间间隔,可选项,缺省30000毫秒 |
+| consumerBalancePeriodMs | 否 | long | Master对消费组进行Rebalance的均衡周期,可选项,缺省60000毫秒,当集群规模较大时,请调大该值 |
+| consumerHeartbeatTimeoutMs | 否 | long | 消费者心跳超时周期,可选项,缺省30000毫秒,当集群规模较大时,请调大该值 |
+| producerHeartbeatTimeoutMs | 否 | long | 生产者心跳超时周期,可选项,缺省30000毫秒,当集群规模较大时,请调大该值 |
+| brokerHeartbeatTimeoutMs | 否 | long | Broker心跳超时周期,可选项,缺省30000毫秒,当集群规模较大时,请调大该值 |
+| rebalanceParallel | 否 | int | Master Rebalance处理并行度,可选项,缺省4,取值范围[1, 20], 当集群规模较大时,请调大该值 |
+| socketRecvBuffer | 否 | long | Socket接收Buffer缓冲区SO\_RCVBUF大小,单位字节, 负数为不做设置以缺省值为准 |
+| socketSendBuffer | 否 | long | Socket发送Buffer缓冲区SO\_SNDBUF大小,单位字节, 负数为不做设置以缺省值为准 |
+| maxAutoForbiddenCnt | 否 | int | Broker出现IO故障时最大允许Master自动下线Broker个数,可选项,缺省为5,建议该值不超过集群内Broker总数的10% |
+| startOffsetResetCheck | 否 | boolean | 是否启用客户端Offset重置功能的检查功能,可选项,缺省为false |
+| needBrokerVisitAuth | 否 | boolean | 是否启用Broker访问鉴权,缺省为false,如果为true,则Broker上报的消息里必须携带正确的用户名及签名信息 |
+| visitName | 否 | String | Broker访问鉴权的用户名,缺省为空字符串,在needBrokerVisitAuth为true时该值必须存在,该值必须与broker.ini里的visitName字段值同 |
+| visitPassword | 否 | String | Broker访问鉴权的密码,缺省为空字符串,在needBrokerVisitAuth为true时该值必须存在,该值必须与broker.ini里的visitPassword字段值同 |
+| startVisitTokenCheck | 否 | boolean | 是否启用客户端visitToken检查,缺省为false |
+| startProduceAuthenticate | 否 | boolean | 是否启用生产端用户认证,缺省为false |
+| startProduceAuthorize | 否 | boolean | 是否启用生产端生产授权认证,缺省为false |
+| startConsumeAuthenticate | 否 | boolean | 是否启用消费端用户认证,缺省为false |
+| startConsumeAuthorize | 否 | boolean | 是否启用消费端消费授权认证,缺省为false |
+| maxGroupBrokerConsumeRate | 否 | int | 集群Broker数与消费组里成员数的最大比值,可选项,缺省为50,50台Broker集群里允许1个消费组最少启动1个客户端消费 |
+| metaDataPath | 否 | String | Metadata存储路径,可以是绝对路径,或者相对TubeMQ安装目录("$BASE_DIR")的相对路径,缺省为"var/meta_data" |
+| [zookeeper] | Master对应的TubeMQ集群存储Offset的ZooKeeper集群相关信息,必填单元,值固定为"[zookeeper]" |
+| [zookeeper] | zkServerAddr | 否 | String | zk服务器地址,可选配置,缺省为"localhost:2181" |
+| zkNodeRoot | 否 | String | zk上的节点根目录路径,可选配置,缺省为"/tubemq" |
+| zkSessionTimeoutMs | 否 | long | zk心跳超时,单位毫秒,默认30秒 |
+| zkConnectionTimeoutMs | 否 | long | zk连接超时时间,单位毫秒,默认30秒 |
+| zkSyncTimeMs | 否 | long | zk数据同步时间,单位毫秒,默认5秒 |
+| zkCommitPeriodMs | 否 | long | Master缓存数据刷新到zk上的时间间隔,单位毫秒,默认5秒 |
+| [replication] | 集群数据复制的相关配置,用于实现元数据多节点热备,必填单元,值固定为"[replication]" |
+| [replication] | repGroupName | 否 | String | 集群名,所属主备Master节点值必须相同,可选字段,缺省为"tubemqMasterGroup" |
+| repNodeName | 是 | String | 所属Master在集群中的节点名,该值各个节点必须不重复,必填字段 |
+| repNodePort | 否 | int | 节点复制通讯端口,可选字段,缺省为9001 |
+| repHelperHost | 否 | String | 集群启动时的主节点,可选字段,缺省为"127.0.0.1:9001" |
+| metaLocalSyncPolicy | 否 | int | 数据节点本地保存方式,该字段取值范围[1,2,3],缺省为1:其中1为数据保存到磁盘,2为数据只保存到内存,3为只将数据写文件系统buffer,但不刷盘 |
+| metaReplicaSyncPolicy | 否 | int | 数据节点同步保存方式,该字段取值范围[1,2,3],缺省为1:其中1为数据保存到磁盘,2为数据只保存到内存,3为只将数据写文件系统buffer,但不刷盘 |
+| repReplicaAckPolicy | 否 | int | 节点数据同步时的应答策略,该字段取值范围为[1,2,3],缺省为1:其中1为超过1/2多数为有效,2为所有节点应答才有效;3为不需要节点应答 |
+| repStatusCheckTimeoutMs | 否 | long | 节点状态检查间隔,可选字段,单位毫秒,缺省为10秒 |
+| [bdbStore] | 已弃用,请在"[replication]"单元进行相关配置。Master所属BDB集群的相关配置,Master采用BDB进行元数据存储以及多节点热备,必填单元,值固定为"[bdbStore]" |
+| [bdbStore] | bdbRepGroupName | 是 | String | BDB集群名,所属主备Master节点值必须相同,必填字段 |
+| bdbNodeName | 是 | String | 所属Master在BDB集群中的节点名,该值各个BDB节点必须不重复,必填字段 |
+| bdbNodePort | 否 | int | BDB节点通讯端口,可选字段,缺省为9001 |
+| bdbEnvHome | 是 | String | BDB数据存储路径,必填字段 |
+| bdbHelperHost | 是 | String | BDB集群启动时的主节点,必填字段 |
+| bdbLocalSync | 否 | int | BDB数据节点本地保存方式,该字段取值范围[1,2,3],缺省为1:其中1为数据保存到磁盘,2为数据只保存到内存,3为只将数据写文件系统buffer,但不刷盘 |
+| bdbReplicaSync | 否 | int | BDB数据节点同步保存方式,该字段取值范围[1,2,3],缺省为1:其中1为数据保存到磁盘,2为数据只保存到内存,3为只将数据写文件系统buffer,但不刷盘 |
+| bdbReplicaAck | 否 | int | BDB节点数据同步时的应答策略,该字段取值范围为[1,2,3],缺省为1:其中1为超过1/2多数为有效,2为所有节点应答才有效;3为不需要节点应答 |
+| bdbStatusCheckTimeoutMs | 否 | long | BDB状态检查间隔,可选字段,单位毫秒,缺省为10秒 |
+| [tlsSetting] | Master采用TLS进行传输层数据加密,启用TLS时通过该配置单元提供相关的设置,可选单元,值固定为"[tlsSetting]" |
+| [tlsSetting] | tlsEnable | 否 | boolean | 是否启用TLS功能,可选配置,缺省为false |
+| tlsPort | 否 | int | Master的TLS端口号,可选配置,缺省为8716 |
+| tlsKeyStorePath | 否 | String | TLS的keyStore文件的绝对存储路径+keyStore文件名,在启动TLS功能时,该字段必填且不能为空 |
+| tlsKeyStorePassword | 否 | String | TLS的keyStorePassword文件的绝对存储路径+keyStorePassword文件名,在启动TLS功能时,该字段必填且不能为空 |
+| tlsTwoWayAuthEnable | 否 | boolean | 是否启用TLS双向认证功能,可选配置,缺省为false |
+| tlsTrustStorePath | 否 | String | TLS的TrustStore文件的绝对存储路径+TrustStore文件名,在启动TLS功能且启用双向认证时,该字段必填且不能为空 |
+| tlsTrustStorePassword | 否 | String | TLS的TrustStorePassword文件的绝对存储路径+TrustStorePassword文件名,在启动TLS功能且启用双向认证时,该字段必填且不能为空 |
+
+### 2.2 Master的前台配置文件velocity.properties中关键配置内容说明:
+
+| 配置单元 | 配置项 | 是否必选 | 值类型 | 配置说明 |
+| --- | --- | --- | --- | --- |
+| | file.resource.loader.path | 是 | String | Master的Web模板绝对路径,该部分为实际部署Master时的工程绝对路径+/resources/templates,该配置要与实际部署相吻合,配置失败会导致Master前端页面访问失败。 |
+
+### 2.3 broker.ini文件中关键配置内容说明:
+
+| 配置单元 | 配置项 | 是否必选 | 值类型 | 配置说明 |
+| --- | --- | --- | --- | --- |
+| [broker] | Broker系统运行主配置单元,必填单元,值固定为"[broker]" |
+| [broker] | brokerId | 是 | int | 服务器唯一标志,必填字段,可设为0;设为0时系统将默认取本机IP转化为int值再取abs绝对值,避免brokerId为负数,如果使用环境的IP比较复杂存在生成的brokerId值冲突时,则需要指定brokerId值进行设置。 |
+| hostName | 是 | String | Broker对外服务的主机地址,必填项,必须在网卡中已配置,处于启用状态,非回环且不能为127.0.0.1的IP |
+| port | 否 | int | Broker监听的端口,可选项,缺省值为8123 |
+| webPort | 否 | int | Broker的http管理访问端口,可选项,缺省为8081 |
+| masterAddressList | 是 | String | Broker所属集群的Master地址列表,必填字段,格式必须是ip1:port1,ip2:port2,ip3:port3 |
+| primaryPath | 是 | String | Broker存储消息的绝对路径,必选字段 |
+| maxSegmentSize | 否 | int | Broker存储消息Data内容的文件大小,可选字段,缺省512M,最大1G |
+| maxIndexSegmentSize | 否 | int | Broker存储消息Index内容的文件大小,可选字段,缺省18M,约70W条消息每文件 |
+| transferSize | 否 | int | Broker允许每次传输给客户端的最大消息内容大小,可选字段,缺省为512K |
+| consumerRegTimeoutMs | 否 | long | consumer心跳超时时间,可选项,单位毫秒,默认30秒 |
+| socketRecvBuffer | 否 | long | Socket接收Buffer缓冲区SO\_RCVBUF大小,单位字节,负数为不做设置以缺省值为准 |
+| socketSendBuffer | 否 | long | Socket发送Buffer缓冲区SO\_SNDBUF大小,单位字节,负数为不做设置以缺省值为准 |
+| tcpWriteServiceThread | 否 | int | Broker支持TCP生产服务的socket worker线程数,可选字段,缺省为所在机器的2倍CPU个数 |
+| tcpReadServiceThread | 否 | int | Broker支持TCP消费服务的socket worker线程数,可选字段,缺省为所在机器的2倍CPU个数 |
+| logClearupDurationMs | 否 | long | 消息文件的老化清理周期, 单位为毫秒, 缺省为3分钟进行一次日志清理操作,最低1分钟 |
+| logFlushDiskDurMs | 否 | long | 批量检查消息持久化到文件的检查周期,单位为毫秒, 缺省为20秒进行一次全量的检查及刷盘 |
+| visitTokenCheckInValidTimeMs | 否 | long | visitToken检查时允许Broker注册后延迟检查的时长,单位ms,缺省120000,取值范围[60000,300000] |
+| visitMasterAuth | 否 | boolean | 是否启用上报Master鉴权,缺省为false,如果为true,则会在上报Master的信令里加入用户名及签名信息 |
+| visitName | 否 | String | 访问Master的用户名,缺省为空字符串,在visitMasterAuth为true时该值必须存在,该值必须与master.ini里的visitName字段值同 |
+| visitPassword | 否 | String | 访问Master的密码,缺省为空字符串,在visitMasterAuth为true时该值必须存在,该值必须与master.ini里的visitPassword字段值同 |
+| logFlushMemDurMs | 否 | long | 批量检查消息内存持久化到文件的检查周期,单位为毫秒, 缺省为10秒进行一次全量的检查及刷盘 |
+| [zookeeper] | Broker对应的TubeMQ集群存储Offset的ZooKeeper集群相关信息,必填单元,值固定为"[zookeeper]" |
+| [zookeeper] | zkServerAddr | 否 | String | zk服务器地址,可选配置,缺省为"localhost:2181" |
+| zkNodeRoot | 否 | String | zk上的节点根目录路径,可选配置,缺省为"/tubemq" |
+| zkSessionTimeoutMs | 否 | long | zk心跳超时,单位毫秒,默认30秒 |
+| zkConnectionTimeoutMs | 否 | long | zk连接超时时间,单位毫秒,默认30秒 |
+| zkSyncTimeMs | 否 | long | zk数据同步时间,单位毫秒,默认5秒 |
+| zkCommitPeriodMs | 否 | long | Broker缓存数据刷新到zk上的时间间隔,单位毫秒,默认5秒 |
+| zkCommitFailRetries | 否 | int | Broker刷新缓存数据到Zk失败后的最大重刷次数 |
+| [tlsSetting] | Broker采用TLS进行传输层数据加密,启用TLS时通过该配置单元提供相关的设置,可选单元,值固定为"[tlsSetting]" |
+| [tlsSetting] | tlsEnable | 否 | boolean | 是否启用TLS功能,可选配置,缺省为false |
+| tlsPort | 否 | int | Broker的TLS端口号,可选配置,缺省为8124 |
+| tlsKeyStorePath | 否 | String | TLS的keyStore文件的绝对存储路径+keyStore文件名,在启动TLS功能时,该字段必填且不能为空 |
+| tlsKeyStorePassword | 否 | String | TLS的keyStorePassword文件的绝对存储路径+keyStorePassword文件名,在启动TLS功能时,该字段必填且不能为空 |
+| tlsTwoWayAuthEnable | 否 | boolean | 是否启用TLS双向认证功能,可选配置,缺省为false |
+| tlsTrustStorePath | 否 | String | TLS的TrustStore文件的绝对存储路径+TrustStore文件名,在启动TLS功能且启用双向认证时,该字段必填且不能为空 |
+| tlsTrustStorePassword | 否 | String | TLS的TrustStorePassword文件的绝对存储路径+TrustStorePassword文件名,在启动TLS功能且启用双向认证时,该字段必填且不能为空 |
+
+---
+<a href="#top">Back to top</a>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/console_introduction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/console_introduction.md
new file mode 100644
index 0000000..ae9f1b9
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/console_introduction.md
@@ -0,0 +1,118 @@
+---
+title: TubeMQ管控台操作指引
+---
+
+## 1 管控台关系
+
+TubeMQ管控台是管理TubeMQ集群的简单运营工具,覆盖集群里的Master、Broker,以及Broker上部署的Topic元数据等与TubeMQ系统相关的运营数据及操作。需要说明的是,当前TubeMQ管控台提供的功能并未涵盖TubeMQ的全部能力,大家可以参照《TubeMQ HTTP访问接口定义.xls》自行实现符合业务需要的管控前台。TubeMQ管控台的访问地址为http://portal:webport/config/topic_list.htm:
+![](img/console/1568169770714.png)
+​       其中portal为该集群中任意的主、备Master的IP地址,webport为配置的Master的Web端口。
+
+
+## 2 TubeMQ管控台各版面介绍
+
+​        管控台一共3项内容:分发查询,配置管理,集群管理;配置管理又分为Broker列表,Topic列表2个部分,我们先介绍简单的分发查询和集群管理,然后再介绍复杂的配置管理。
+
+### 2.1 分发查询
+
+​        点分发查询,我们会看到如下的列表信息,这是当前TubeMQ集群里已注册的消费组信息,包括具体的消费组组名,消费的Topic,以及该组总的消费分区数简介信息,如下图示:
+![](img/console/1568169796122.png)
+​       点击记录,可以看到选中的消费组里的消费者成员,及对应消费的Broker及Partition分区信息,如下图示:
+![](img/console/1568169806810.png)
+
+这个页面可以供我们查询:输入Topic或者消费组名,就可以很快确认系统里有哪些消费组在消费该Topic,以及每个消费组的消费目标是怎样的。
+
+### 2.2 集群管理
+
+​        集群管理主要管理Master的HA,在这个页面上我们可以看到当前Master的各个节点及节点状态,同时,我们可以通过“切换”操作来改变节点的主备状态。
+![](img/console/1568169823675.png)
+
+### 2.3 配置管理
+
+配置管理版面既包含Broker、Topic元数据的管理,也包含Broker和Topic的上线发布以及下线操作,具有2层含义。比如Broker列表里,展示的是当前集群里已配置的Broker元数据,包括未上线处于草稿状态、已上线、已下线的Broker记录信息:
+![](img/console/1568169839931.png)
+
+​        从页面信息我们也可以看到,除了Broker的记录信息外,还有Broker在该集群里的管理信息,包括是否已上线,是否处于命令处理中,是否可读,是否可写,配置是否做了更改,是否已加载变更的配置信息。
+
+​        点单个新增,会弹框如下,这个表示待新增Broker的元数据信息,包括BrokerID,BrokerIP,BrokerPort,以及该Broker里部署的Topic的缺省配置信息,相关的字段详情见《TubeMQ HTTP访问接口定义.xls》
+![](img/console/1568169851085.png)
+
+所有TubeMQ管控台的变更操作都会要求输入操作授权码,该信息由运维通过Master的配置文件master.ini的confModAuthToken字段进行定义:只要你知道这个集群的授权码,比如你是管理员、被授权人员,或者能登录Master所在机器从配置中拿到它,就认为你有权操作该项功能。
+
+### 2.4 TubeMQ管控台上涉及的操作及注意事项
+
+如上所说,TubeMQ管控台用于运营TubeMQ集群,负责Master、Broker这类TubeMQ集群节点的管理,包括自动部署和安装等,因此,如下几点需要注意:
+
+​       1. **TubeMQ集群做扩缩容增、减Broker节点时,要先在TubeMQ管控台上做相应的节点新增、上线,以及下线、删除等操作后才能在物理环境上做对应Broker节点的增删处理**:
+
+TubeMQ集群对Broker按照状态机管理,如上图示涉及到[draft,online,read-only,write-only,offline]等状态:记录增加还没生效时是draft状态,确定上线后是online态;节点删除首先要由online状态转为offline状态,然后再通过删除操作清理系统内保存的该节点记录;draft、online和offline是为了区分各个节点所处的环节,Master只将online状态的Broker分发给对应的Producer和Consumer进行生产和消费;read-only、write-only是Broker处于online状态的子状态,表示只能读或者只能写该Broker上的数据;相关的状态及操作见页面详情,增加一条记录即可明白其中的关系。TubeMQ管控台上增加这些记录后,我们就可以进行Broker节点的部署及启动,这时TubeMQ集群环境的页面会显示节点运行状态,如果为unregister状态(如下图示),则表示节点注册失败,需要到对应Broker节点上检查日志,确认原因。目前该部分已经很成熟,出错信息会提示完整 [...]
+![](img/console/1568169863402.png)
+​        2. **Topic元数据信息需要通过套件的业务使用界面进行新增和删除操作:**
+
+​       如下图,业务发现自己消费的Topic在TubeMQ管控台上没有,则需要在TubeMQ的管控台上直接操作:
+![](img/console/1568169879529.png)
+
+​       我们通过如上图中的Topic列表项完成Topic的新增,会弹出如下框,
+![](img/console/1568169889594.png)
+
+​       点击确认后会有一个选择部署该新增Topic的Broker列表,选择部署范围后进行确认操作:
+![](img/console/1568169900634.png)
+
+​       在完成新增Topic的操作后,我们还需要对刚进行变更的配置对Broker进行重载操作,如下图示:
+![](img/console/1568169908522.png)
+
+​       重载完成后Topic才能对外使用,我们会发现如下配置变更部分在重启完成后已改变状态:
+![](img/console/1568169916091.png)
+
+​       这个时候我们就可以针对该Topic进行生产和消费处理。
+
+## 3 对于Topic的元数据进行变更后的操作注意事项:
+
+### 3.1 如何自行配置Topic参数:
+
+​       大家点击Topic列表里任意Topic后,会弹出如下框,里面是该Topic的相关元数据信息,其决定了这个Topic在该Broker上,设置了多少个分区,当前读写状态,数据刷盘频率,数据老化周期和时间等信息:
+![](img/console/1568169925657.png)
+
+​       这些信息由系统管理员设置好默认值后直接定义的,一般不会改变,若业务有特殊需求,比如想增加消费的并行度增多分区,或者想减少刷盘频率,怎么操作?如下图示,各个页面的字段含义及作用如下表:
+
+| Field               | Field name                            | Type     | Description                                                  |
+| ------------------- | ------------------------------------- | -------- | ------------------------------------------------------------ |
+| topicName           | topic name                            | String   | String of length (0,64]; letters, digits, and underscores, starting with a letter. For batch creation, separate topic values with ","; at most 50 per batch |
+| brokerId            | broker ID                             | int      | The BrokerId to add to; for batch operations separate brokerId values with ","; at most 50 per batch |
+| deleteWhen          | topic data deletion time              | String   | Defined in crontab format, e.g. "0 0 6,18 * * ?"; defaults to the broker's corresponding default |
+| deletePolicy        | deletion policy                       | String   | Topic data deletion policy such as "delete,168"; defaults to the broker's corresponding default |
+| numPartitions       | number of partitions of the topic on this broker | int | Defaults to the broker's corresponding default       |
+| unflushThreshold    | maximum records awaiting flush        | int      | Maximum number of unflushed messages; beyond this value a flush to disk is forced. Default 1000; defaults to the broker's corresponding default |
+| unflushInterval     | maximum interval awaiting flush       | int      | Maximum unflushed interval in milliseconds; default 10000; defaults to the broker's corresponding default |
+| numTopicStores      | number of topic data blocks and partition management groups | int | Default 1; if greater than 1, partitions and topic queues are multiplied by this value |
+| memCacheMsgCntInK   | default maximum cached message count  | int      | Maximum number of message packets cached in memory, in thousands; default 10K, minimum 1K |
+| memCacheMsgSizeInMB | default maximum cached total size     | int      | Maximum total size of message packets cached in memory, in MB; default 3M, minimum 2M |
+| memCacheFlushIntvl  | maximum in-memory flush interval      | int      | Maximum time messages may remain unflushed in memory, in milliseconds; default 20000 ms, minimum 4000 ms |
+| acceptPublish       | whether the topic accepts publish requests | boolean | Default true; allowed values [true, false]              |
+| acceptSubscribe     | whether the topic accepts subscribe requests | boolean | Default true; allowed values [true, false]            |
+| createUser          | topic creator                         | String   | String of length (0,32]; letters, digits, and underscores, starting with a letter |
+| createDate          | creation time                         | String   | Format "yyyyMMddHHmmss"; must be a 14-digit string in this format |
+| confModAuthToken    | configuration-change authorization key | String  | Letters, digits, and underscores, starting with a letter; length (0,128] |
+
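For illustration, the string constraints in the table above can be checked client-side before a request is sent. This is our own sketch with hypothetical names; the authoritative validation lives on the server side (see the TubeMQ HTTP API definition document):

```java
// Sketch of client-side sanity checks mirroring the constraints in the
// table above. Class and method names are ours, purely for illustration.
public class TopicParamCheck {
    // topicName: length (0,64], starts with a letter, then letters/digits/underscores
    public static boolean isValidTopicName(String name) {
        return name != null && name.matches("[a-zA-Z][a-zA-Z0-9_]{0,63}");
    }

    // createDate: "yyyyMMddHHmmss", exactly 14 digits
    public static boolean isValidCreateDate(String date) {
        return date != null && date.matches("\\d{14}");
    }

    // Batch topic values are comma-separated, at most 50 per request,
    // and every individual value must itself be a valid topic name.
    public static boolean isValidBatch(String topics) {
        String[] parts = topics.split(",", -1);
        if (parts.length > 50) {
            return false;
        }
        for (String p : parts) {
            if (!isValidTopicName(p)) {
                return false;
            }
        }
        return true;
    }
}
```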
+Details of these fields are clearly defined in the TubeMQ HTTP API definition document (《Tube MQ HTTP访问接口定义.xls》). Click the **Modify** button at the top right of the page; after confirming, the following dialog pops up:
+![](img/console/1568169946683.png)
+
+Its purpose is to: a. select the set of Broker nodes affected by this Topic metadata change; b. provide the authorization code for the change.
+
+**Important: after entering the authorization code and applying the modification, the change only takes effect after a reload, and the affected Brokers must be reloaded in batches.**
+![](img/console/1568169954746.png)
+
+### 3.2 Notes on Topic changes:
+
+As shown above, after a Topic metadata change, the previously selected Brokers show yes under **configuration changed**. We still need to reload the change: select the set of Brokers, then trigger the reload, either singly or in batches. Crucially, proceed in batches: only after the previous batch of Brokers has returned to the running state may the next batch be reloaded. If a node stays online but does not reach running for a long time (by default at most 2 minutes), stop the reload, find the root cause, and only then continue.
+
+The reason for batching is that during a change the system stops reads and writes on the specified Brokers. Reloading all Brokers at once would clearly make the whole cluster unreadable or unwritable for a while, causing avoidable errors for connected clients.
+
+### 3.3 Deleting a Topic:
+
+Deletion on the page is a soft delete; to remove the topic completely, you must perform a hard delete through the API (this guards against accidental operations by the business).
+
+Once the above is done, the Topic metadata change is complete.
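The batching discipline above can be sketched as follows (our own illustration, not TubeMQ code): split the Brokers into small batches so that only a fraction of the cluster is read/write-blocked by a reload at any time; the caller reloads one batch, waits until every Broker in it is back to running, and only then proceeds to the next batch.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (illustration only) of batching a Broker set for a rolling reload.
public class BatchReload {
    // Split the broker list into consecutive batches of at most batchSize.
    public static List<List<String>> partition(List<String> brokers, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < brokers.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                    brokers.subList(i, Math.min(i + batchSize, brokers.size()))));
        }
        return batches;
    }
}
```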
+
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/consumer_example.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/consumer_example.md
new file mode 100644
index 0000000..93585e6
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/consumer_example.md
@@ -0,0 +1,82 @@
+---
+title: Consumer Example
+---
+
+## 1 Consumer Examples
+  TubeMQ provides two ways to consume messages: PullConsumer and PushConsumer.
+
+
+### 1.1 PullConsumer 
+   ```java
+    public class PullConsumerExample {
+
+        public static void main(String[] args) throws Throwable {
+            final String masterHostAndPort = "localhost:8000";
+            final String topic = "test";
+            final String group = "test-group";
+            final ConsumerConfig consumerConfig = new ConsumerConfig(masterHostAndPort, group);
+            consumerConfig.setConsumePosition(ConsumePosition.CONSUMER_FROM_LATEST_OFFSET);
+            final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+            final PullMessageConsumer messagePullConsumer = messageSessionFactory.createPullConsumer(consumerConfig);
+            messagePullConsumer.subscribe(topic, null);
+            messagePullConsumer.completeSubscribe();
+            // wait for client to join the exact consumer queue that consumer group allocated
+            while (!messagePullConsumer.isPartitionsReady(1000)) {
+                ThreadUtils.sleep(1000);
+            }
+            while (true) {
+                ConsumerResult result = messagePullConsumer.getMessage();
+                if (result.isSuccess()) {
+                    List<Message> messageList = result.getMessageList();
+                    for (Message message : messageList) {
+                        System.out.println("received message : " + message);
+                    }
+                    messagePullConsumer.confirmConsume(result.getConfirmContext(), true);
+                }
+            }
+        }   
+
+    }
+   ``` 
+   
+### 1.2 PushConsumer
+   ```java
+   public class PushConsumerExample {
+   
+        public static void main(String[] args) throws Throwable {
+            final String masterHostAndPort = "localhost:8000";
+            final String topic = "test";
+            final String group = "test-group";
+            final ConsumerConfig consumerConfig = new ConsumerConfig(masterHostAndPort, group);
+            consumerConfig.setConsumePosition(ConsumePosition.CONSUMER_FROM_LATEST_OFFSET);
+            final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+            final PushMessageConsumer pushConsumer = messageSessionFactory.createPushConsumer(consumerConfig);
+            pushConsumer.subscribe(topic, null, new MessageListener() {
+
+                @Override
+                public void receiveMessages(PeerInfo peerInfo, List<Message> messages) throws InterruptedException {
+                    for (Message message : messages) {
+                        System.out.println("received message : " + new String(message.getData()));
+                    }
+                }
+
+                @Override
+                public Executor getExecutor() {
+                    return null;
+                }
+
+                @Override
+                public void stop() {
+                    //
+                }
+            });
+            pushConsumer.completeSubscribe();
+            CountDownLatch latch = new CountDownLatch(1);
+            latch.await(10, TimeUnit.MINUTES);
+        }
+    }
+   ```
+
+---
+
+<a href="#top">Back to top</a>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/deployment.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/deployment.md
new file mode 100644
index 0000000..32c5ba0
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/deployment.md
@@ -0,0 +1,157 @@
+---
+title: Compiling, Deploying, and Basic Use of TubeMQ
+---
+
+## 1 Build and package:
+
+Go to the project root directory and run:
+
+```
+mvn clean package -Dmaven.test.skip
+```
+
+For example, with the TubeMQ source tree placed in the root of drive E, run the command as shown below; the build is complete once every sub-module compiles successfully:
+
+![](img/sysdeployment/sys_compile.png)
+
+You can also build each sub-module separately; the process is the same as an ordinary project build.
+
+## 2 Deploy the server:
+Following the example above, go to ..\InLong\inlong-tubemq\tubemq-server\target. The server-side artifacts are as follows: apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz is the complete server installation package, containing the run scripts, configuration files, dependencies, and the front-end source; apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT.jar is the server logic jar, which is also included in lib of the complete package. It is provided separately because day-to-day upgrades mostly change the server logic; for those upgrades only this jar needs replacing:
+
+![](img/sysdeployment/sys_package.png)
+
+Here we do a fresh install, copying the complete installation package to the target machine; in this case it is placed under /data/inlong:
+
+![](img/sysdeployment/sys_package_list.png)
+
+
+## 3 Configure the system:
+
+The package contains three roles: Master, Broker, and Tools. Master and Broker may be co-located or placed on separate machines, as your machine plan dictates. We use the following three machines to build a complete production and consumption environment with two Masters:
+
+| Machine | Role | TCP port | TLS port | WEB port | Note |
+| --- | --- | --- | --- | --- | --- |
+| 9.23.27.24 | **Master** | 8099 | 8199 | 8080 | Metadata stored in `/stage/meta_data` |
+| | Broker | 8123 | 8124 | 8081 | Messages stored in `/stage/msg_data` |
+| | ZooKeeper | 2181 | | | Offsets stored under the root path `/tubemq` |
+| 9.23.28.24 | **Master** | 8099 | 8199 | 8080 | Metadata stored in `/stage/meta_data` |
+| | Broker | 8123 | 8124 | 8081 | Messages stored in `/stage/msg_data` |
+| 9.23.27.160 | Producer | | | | |
+| | Consumer | | | | |
+Notes on deploying Masters:
+
+1. A Master cluster can have 1, 2, or 3 machines: for high reliability use 3 (with any 1 Master down, configuration remains readable and writable, and new producers or consumers can still connect); for ordinary cases 2 are enough (with any 1 Master down, configuration remains readable, and existing producers and consumers are unaffected); the minimum is 1 (with that Master down, configuration can be neither read nor written, while existing producers and consumers are unaffected);
+2. After planning the Masters, the Master machines must be added to time synchronization, and each Master machine's IP must be configured in /etc/hosts on every Master machine, for example:
+
+![](img/sysdeployment/sys_address_host.png)
+
+Taking 9.23.27.24 and 9.23.28.24 as examples, we deploy both the Master and Broker roles and need the following settings in /conf/master.ini, /resources/velocity.properties, and /conf/broker.ini. First the configuration of 9.23.27.24:
+
+![](img/sysdeployment/sys_configure_1.png)
+
+Then the configuration of 9.23.28.24:
+
+![](img/sysdeployment/sys_configure_2.png)
+
+Note that the configuration in the top-right corner is the Master web front-end configuration; the file.resource.loader.path entry in /resources/velocity.properties must be adjusted to match the Master's installation path.
+
+## 4 Run the nodes
+### 4.1 Start the Masters:
+
+With the configuration above in place, go to the bin directory of the TubeMQ installation on the active and standby Master machines and start the service:
+
+![](img/sysdeployment/sys_master_start.png)
+
+We start 9.23.27.24 first, then the Master on 9.23.28.24; output like the following indicates that both Masters started successfully and opened their service ports:
+
+![](img/sysdeployment/sys_master_startted.png)
+
+Visit the Master console ([http://9.23.27.24:8080](http://9.23.27.24:8080)); if clicking through the pages shows cluster information like the following, the master started successfully:
+
+![](img/sysdeployment/sys_master_console.png)
+
+### 4.2 Start the Brokers:
+
+Starting a Broker differs slightly from starting a master: the Master manages the whole TubeMQ cluster, including Broker node management, the Topic configuration deployed on the nodes, and production and consumption management. So before a physical Broker starts, its metadata must first be configured on the Master, adding the Broker's management record, as shown below:
+
+![](img/sysdeployment/sys_broker_configure.png)
+
+Clicking confirm creates a draft Broker record:
+
+![](img/sysdeployment/sys_broker_online.png)
+
+We then try to start this broker node:
+
+![](img/sysdeployment/sys_broker_start.png)
+
+and it fails with the following error:
+
+![](img/sysdeployment/sys_broker_start_error.png)
+
+Because the broker is still a draft and its record has not yet taken effect, we return to the Master console and bring it online:
+
+![](img/sysdeployment/sys_broker_online_2.png)
+
+Every change on the Master pops up an input box like this on confirm, asking for an operation authorization code. It is defined by operations staff via the confModAuthToken field in the Master's master.ini configuration file: if you know this cluster's password, you may perform the operation; being an administrator, being an authorized person, or being able to log in to the master machine and obtain the password all count as being authorized:
+
+![](img/sysdeployment/sys_broker_deploy.png)
+
+
+Then we restart the Broker:
+
+![](img/sysdeployment/sys_broker_restart_1.png)
+
+![](img/sysdeployment/sys_broker_restart_2.png)
+
+The Master console shows the broker has registered successfully:
+
+![](img/sysdeployment/sys_broker_finished.png)
+
+## 5 Producing and consuming data
+### 5.1 Configure and activate a Topic:
+
+Configuring a Topic is similar to configuring Broker metadata: the metadata must first be added on the Master before use, otherwise production and consumption will report a topic-not-found error. For example, producing to the nonexistent Topic name test with the example in the installation package:
+![](img/sysdeployment/test_sendmessage.png)
+
+The demo reports the following error:
+
+![](img/sysdeployment/sys_topic_error.png)
+
+We first add this Topic in the Topic list of the Master console:
+
+![](img/sysdeployment/sys_topic_create.png)
+
+![](img/sysdeployment/sys_topic_select.png)
+
+After clicking confirm, a list of Brokers on which to deploy the new Topic appears; select the deployment scope and confirm. After adding the Topic we still need to reload the changed configuration on the Brokers, as shown below:
+
+![](img/sysdeployment/sys_topic_deploy.png)
+
+Only after the reload completes can the Topic be used; the configuration-changed entries change status after the restart:
+
+![](img/sysdeployment/sys_topic_finished.png)
+
+
+**Note:** the set of Brokers to reload must be handled in batches. The reload is driven by a state machine and passes through the sub-states no-read-no-write -> read-only -> read-write -> online. Reloading all Brokers at once would make Topics that are already serving traffic briefly unreadable and unwritable, causing production and consumption failures, send failures in particular.
+
+### 5.2 Produce and consume data:
+
+The installation package includes example test demos; a business can also wrap its own production and consumption logic directly around tubemq-client-0.9.0-incubating-SNAPSHOT.jar, the overall shape being similar. Running the producer demo first, we can see the Broker begin receiving data:
+![](img/sysdeployment/test_sendmessage_2.png)
+
+![](img/sysdeployment/sys_node_status.png)
+
+Running the consumer demo next, we see that consumption also works:
+
+![](img/sysdeployment/sys_node_status_2.png)
+
+The corresponding data appear in the Broker's production and consumption metric logs:
+
+![](img/sysdeployment/sys_node_log.png)
+
+This completes compiling, deploying, configuring, starting, producing, and consuming with TubeMQ. For more depth, see the relevant sections of the TubeMQ HTTP API and adjust the configuration accordingly.
+
+---
+<a href="#top">Back to top</a>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/error_code.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/error_code.md
new file mode 100644
index 0000000..cc7def2
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/error_code.md
@@ -0,0 +1,111 @@
+---
+title: Error Code Definitions
+---
+
+## 1 TubeMQ error messages
+
+TubeMQ returns operation results as an error code (errCode) combined with error details (errMsg): first determine the class of problem from the error code, then pin down the specific cause from the details. The tables below list all error codes and the error details you may encounter at runtime.
+
+## 2 Error codes
+
+| Error category | Error code | Error tag | Meaning | Note |
+| --- | --- | --- | --- | --- |
+| Success | 200 | SUCCESS | Operation succeeded | |
+| Success | 201 | NOT_READY | Request accepted, but the server is not ready and the service is not running | Reserved error, not yet used |
+| Temporary conflict | 301 | MOVED | Operation failed because of a temporary data switch; re-issue the request | |
+| Client error | 400 | BAD_REQUEST | Client-side exception, including bad parameters and bad state | Determine the cause from the error details, then retry |
+| Client error | 401 | UNAUTHORIZED | Unauthorized operation; confirm the client is allowed to perform it | Check the configuration and confirm the cause with the administrator |
+| Client error | 403 | FORBIDDEN | The Topic does not exist, or has been deleted | Confirm the specific cause with the administrator |
+| Client error | 404 | NOT_FOUND | The consumption offset has reached the maximum position | |
+| Client error | 405 | ALL_PARTITION_FROZEN | All available partitions are frozen | The available partitions were frozen by the client; unfreeze them or wait a while and retry |
+| Client error | 406 | NO_PARTITION_ASSIGNED | No partition is currently assigned to this client for consumption | More clients than partitions, or the server has not yet rebalanced; wait and retry |
+| Client error | 407 | ALL_PARTITION_WAITING | All available partitions have reached the maximum consumption position | Wait and retry |
+| Client error | 408 | ALL_PARTITION_INUSE | All available partitions are held by the business and not yet released | Wait for the business logic to release partitions via the confirm interface, then retry |
+| Client error | 410 | PARTITION_OCCUPIED | Partition consumption conflict; can be ignored | Transient internal registration state; business interfaces rarely see it |
+| Client error | 411 | HB_NO_NODE | Node heartbeat timeout; slow down and retry after a while | Usually the client timed out on server-side heartbeats; lower the request rate, wait for the lib to re-register, then retry |
+| Client error | 412 | DUPLICATE_PARTITION | Partition consumption conflict; can be ignored | Usually caused by a node timeout; just retry |
+| Client error | 415 | CERTIFICATE_FAILURE | Authentication failed, covering both user identity and operation authorization | Usually a username/password mismatch or an operation outside the authorized scope; check with the error details |
+| Client error | 419 | SERVER_RECEIVE_OVERFLOW | Server receive overflow; retry | If the overflow persists, ask the administrator to add storage instances or enlarge the memory cache |
+| Client error | 450 | CONSUME_GROUP_FORBIDDEN | The consumer group is blacklisted | Contact the administrator |
+| Client error | 452 | SERVER_CONSUME_SPEED_LIMIT | Consumption is rate-limited | Contact the administrator to lift the limit |
+| Client error | 455 | CONSUME_CONTENT_FORBIDDEN | Consumption content rejected, e.g. filtered consumption is forbidden for the group, or the filtered stream-ID set differs from the allowed set | Verify the filter-consumption settings first, then contact the administrator |
+| Server error | 500 | INTERNAL_SERVER_ERROR | Internal server error | Determine the cause with the administrator from the error details, then retry |
+| Server error | 503 | SERVICE_UNAVILABLE | Reads or writes are temporarily forbidden | Keep retrying; if the error persists, contact the administrator |
+| Server error | 510 | INTERNAL_SERVER_ERROR_MSGSET_NULL | The message set could not be read | Keep retrying; if the error persists, contact the administrator |
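The error-code ranges in the table follow HTTP conventions, so the category can be recovered mechanically from the code. A minimal sketch (our own illustration, not a TubeMQ class):

```java
// Sketch: mapping a TubeMQ errCode to the category column of the table
// above, following the HTTP-style ranges it uses. Illustration only.
public class ErrCodeClassifier {
    public static String categoryOf(int errCode) {
        if (errCode >= 200 && errCode < 300) {
            return "success";            // e.g. 200 SUCCESS, 201 NOT_READY
        } else if (errCode >= 300 && errCode < 400) {
            return "temporary conflict"; // e.g. 301 MOVED: re-issue the request
        } else if (errCode >= 400 && errCode < 500) {
            return "client error";       // check parameters/permissions, then retry
        } else if (errCode >= 500) {
            return "server error";       // retry; contact the admin if persistent
        }
        return "unknown";
    }
}
```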
+
+## 3 Common error messages
+
+| No. | Error message | Meaning | Note |
+| --- | --- | --- | --- |
+| 1 | Status error: producer has been shutdown! | The client has already been shut down | |
+| 2 | Illegal parameter: blank topic! | Bad parameter: blank topic | |
+| 3 | Illegal parameter: topicSet is null or empty! | Bad parameter: empty topic set | |
+| 4 | Illegal parameter: found blank topic value in topicSet : xxxxx | Bad parameter: the topic set contains a blank topic | |
+| 5 | Send message failed | Message send failed | |
+| 6 | Illegal parameter: null message package! | Null message packet | |
+| 7 | Illegal parameter: null data in message package! | Null message body | |
+| 8 | Illegal parameter: over max message length for the total size of message data and attribute, allowed size is XX, message's real size is YY | Message length exceeds the allowed limit | |
+| 9 | Topic XX not publish, please publish first! | The topic has not been published | |
+| 10 | Topic XX not publish, make sure the topic exist or acceptPublish and try later! | The topic has not been published, or the published topic does not exist | |
+| 11 | Null partition for topic: XX , please try later! | No partition has been assigned for this topic yet | |
+| 12 | No available partition for topic: XX | No valid partition | |
+| 13 | Current delayed messages over max allowed count, allowed is xxxxx, current count is yyyy | Pending unacknowledged messages exceed the allowed limit | Retry later; the default is 400K, controlled by TubeClientConfig.setSessionMaxAllowedDelayedMsgCount() |
+| 14 | The brokers of topic are all forbidden! | All of the Topic's Brokers are blocked for network-quality reasons | Retry later, once the blocking policy lifts |
+| 15 | Not found available partition for topic: XX | No valid partition found | Partitions are assigned but currently blocked because of network quality |
+| 16 | Channel is not writable, please try later! | The channel is not writable | Default 10M; enlarge the write buffer via TubeClientConfig.setNettyWriteBufferHighWaterMark |
+| 17 | Put message failed from xxxxx, server receive message overflow! | The server storing the message is overloaded | Retry the send; if this keeps happening, ask the administrator to scale out |
+| 18 | Write StoreService temporary unavailable! | The server temporarily cannot write | Retry the send; if writes keep failing, ask the administrator to rebalance partitions across brokers and handle the abnormal broker |
+| 19 | Topic xxx not existed, please check your configure | The produced Topic does not exist | The topic may have been deleted by the administrator mid-production; contact the administrator |
+| 20 | Partition[xxx:yyy] has been closed | The produced Topic has been deleted | The topic may have been soft-deleted mid-production; contact the administrator |
+| 21 | Partition xxx-yyy not existed, please check your configure | The produced Topic partition does not exist | Abnormal: partitions only ever grow, never shrink; contact the administrator |
+| 22 | Checksum failure xxx of yyyy not equal to the data's checksum | The checksum of the produced data does not match | Abnormal: the checksum was computed incorrectly, or the data was tampered with in transit |
+| 23 | Put message failed from xxxxx | Storing the message failed | Resend, and pass the error details to the administrator to confirm the cause |
+| 24 | Put message failed2 from | Storing the message failed (2) | Resend, and pass the error details to the administrator to confirm the cause |
+| 25 | Null brokers to select sent, please try later! | No Broker is currently available for sending | Wait a while and retry; if it persists, contact the administrator. This may be caused by broker failure or too many unfinished messages; check the Broker status to confirm |
+| 26 | Publish topic failure, make sure the topic xxx exist or acceptPublish and try later! | publish topic failed; confirm the topic exists or is in a writable state | void publish(final String topic) reports this if the topic is not yet known locally or does not exist; wait about 1 minute, or publish via the Set<String> publish(Set<String> topicSet) interface |
+| 27 | Register producer failure, response is null! | Producer registration failed | Contact the administrator |
+| 28 | Register producer failure, error is XXX | Producer registration failed with reason XXX | Check against the reason; if the error remains, contact the administrator |
+| 29 | Register producer exception, error is XXX | Producer registration threw an exception with reason XXX | Check against the reason; if the error remains, contact the administrator |
+| 30 | Status error: please call start function first! | start must be called first | API misuse: the Producer was not created from the sessionFactory; create it with the sessionFactory's createProducer() before use |
+| 31 | Status error: producer service has been shutdown! | The producer service has been stopped | The producer has stopped serving; stop making business calls |
+| 32 | Listener is null for topic XXX | The callback Listener passed for topic XXX is null | Invalid input; check the business code |
+| 33 | Please complete topic's Subscribe call first! | Call subscribe() for the topic first | API misuse: subscribe to the topic before consuming |
+| 34 | ConfirmContext is null ! | Empty ConfirmContext; invalid context | The business must check the interface call logic |
+| 35 | ConfirmContext format error: value must be aaaa:bbbb:cccc:ddddd ! | The ConfirmContext content is malformed | The business must check the interface call logic |
+| 36 | ConfirmContext's format error: item (XXX) is null ! | The ConfirmContext contains blank items | The business must check the interface call logic |
+| 37 | The confirmContext's value invalid! | Invalid ConfirmContext content | The context may not exist, or it expired after its partition was released during rebalancing |
+| 38 | Confirm XXX 's offset failed! | confirm offset failed | Confirm the cause from the log details; if it persists, contact the administrator |
+| 39 | Not found the partition by confirmContext:XXX | The confirmed partition was not found | The server released the partition during rebalancing |
+| 40 | Illegal parameter: messageSessionFactory or consumerConfig is null! | messageSessionFactory or consumerConfig is null | Check the object initialization logic and confirm the configuration is correct |
+| 41 | Get consumer id failed! | Generating the consumer's unique ID failed | If this keeps failing, send the exception stack trace to the system administrator |
+| 42 | Parameter error: topic is Blank! | The topic passed in is Blank | Blank means null, non-null but zero length, or whitespace-only content |
+| 43 | Parameter error: Over max allowed filter count, allowed count is XXX | The number of filter items exceeds the system maximum | Bad parameter; reduce the count |
+| 44 | Parameter error: blank filter value in parameter filterConds! | filterConds contains Blank items | Bad parameter; fix the item values |
+| 45 | Parameter error: over max allowed filter length, allowed length is XXX | A filter item is too long | |
+| 46 | Parameter error: null messageListener | The messageListener parameter is null | |
+| 47 | Topic=XXX has been subscribed | Topic XXX was subscribed more than once | |
+| 48 | Not subscribe any topic, please subscribe first! | Consumption started without subscribing to any topic | API misuse; check the business code |
+| 49 | Duplicated completeSubscribe call! | completeSubscribe() was called more than once | API misuse; check the business code |
+| 50 | Subscribe has finished! | completeSubscribe() was called more than once | |
+| 51 | Parameter error: sessionKey is Blank! | Bad parameter: sessionKey must not be Blank | |
+| 52 | Parameter error: sourceCount must over zero! | Bad parameter: sourceCount must be greater than 0 | |
+| 53 | Parameter error: partOffsetMap's key XXX format error: value must be aaaa:bbbb:cccc ! | Bad parameter: partOffsetMap keys must be in the aaaa:bbbb:cccc format | |
+| 54 | Parameter error: not included in subscribed topic list: partOffsetMap's key is XXX , subscribed topics are YYY | Bad parameter: a topic specified in partOffsetMap is not in the subscription list | |
+| 55 | Parameter error: illegal format error of XXX : value must not include ',' char!" | Bad parameter: the key value must not contain the "," character | |
+| 56 | Parameter error: Offset must over or equal zero of partOffsetMap key XXX, value is YYY | Bad parameter: the offset value must be greater than or equal to 0 | |
+| 57 | Duplicated completeSubscribe call! | completeSubscribe() was called more than once | |
+| 58 | Register to master failed! ConsumeGroup forbidden, XXX | Master registration failed; the consumer group is forbidden | Deliberately blocked by the server; contact the system administrator |
+| 59 | Register to master failed! Restricted consume content, XXX | Master registration failed; consumption content is restricted | The filtered stream-ID set is outside the requested range |
+| 60 | Register to master failed! please check and retry later. | Master registration failed; please retry | Check the client logs to confirm the cause; if there are no abnormal logs and the master address is correct, contact the system administrator |
+| 61 | Get message error, reason is XXX | Fetching messages failed for reason XXX | Confirm the cause and pass the error details to the responsible business owner; align on the cause from the specific message |
+| 62 | Get message null | The fetched message is null | Retry |
+| 63 | Get message failed,topic=XXX,partition=YYY, throw info is ZZZ | Fetching messages failed | Pass the error details to the responsible business owner; align on the cause from the specific message |
+| 64 | Status error: consumer has been shutdown | The consumer has called shutdown; no further business calls should be made | |
+| 65 | All partition in waiting, retry later! | All partitions are waiting; retry later | This message need not be logged; when it occurs, the fetch thread should sleep 200 ~ 400 ms |
+| 66 | The request offset reached maxOffset | The requested partition has been consumed to the latest position | Use ConsumerConfig.setMsgNotFoundWaitPeriodMs() to set how long the partition pauses fetching while waiting for new messages |
+| 67 | No partition info in local, please wait and try later | No partition info locally; wait and retry | Possible causes: the server has not yet rebalanced, or there are more clients than partitions |
+| 68 | No idle partition to consume, please wait and try later | No idle partition to consume; wait and retry | The business may be holding partitions unreleased; idle partitions only become available after the business confirms consumption |
+| 69 | All partition are frozen to consume, please unfreeze partition(s) or wait | All partitions are frozen | The business may have frozen partition consumption via the freeze interface; call the unfreeze interface to unfreeze |
+
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/http_access_api.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/http_access_api.md
new file mode 100644
index 0000000..819f9ee
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/http_access_api.md
@@ -0,0 +1,20 @@
+---
+title: HTTP API Introduction
+---
+
+The HTTP API is the interface through which the Master and Brokers expose their functionality; all console operations are built on these APIs. For newly released features, or features the console does not yet cover, a business can call the HTTP API directly.
+
+The API consists of 4 parts:
+
+- Master metadata-configuration interfaces: 24 interfaces
+- Master consumption-permission interfaces: 33 interfaces
+- Master subscription-relationship interfaces: 2 interfaces
+- Broker operation interface definitions: 6 interfaces
+![](img/api_interface/http-api.png)
+
+
+Because the interfaces are numerous and their parameters complex, Markdown cannot express them well, so they are provided as an Excel attachment:
+<a href="appendixfiles/http_access_api_definition_cn.xls" target="_blank">TubeMQ HTTP API</a>
+
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/api_interface/http-api.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/api_interface/http-api.png
new file mode 100644
index 0000000..c7374fd
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/api_interface/http-api.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_broker_info.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_broker_info.png
new file mode 100644
index 0000000..4747a88
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_broker_info.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_bytes_def.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_bytes_def.png
new file mode 100644
index 0000000..45a2384
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_bytes_def.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_conn_detail.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_conn_detail.png
new file mode 100644
index 0000000..6e803af
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_conn_detail.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_consumer_diagram.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_consumer_diagram.png
new file mode 100644
index 0000000..f761f54
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_consumer_diagram.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_convert_topicinfo.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_convert_topicinfo.png
new file mode 100644
index 0000000..6c5bffa
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_convert_topicinfo.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto.png
new file mode 100644
index 0000000..430d297
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_optype.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_optype.png
new file mode 100644
index 0000000..9685b80
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_optype.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_status.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_status.png
new file mode 100644
index 0000000..7a787cc
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_status.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_header_fill.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_header_fill.png
new file mode 100644
index 0000000..0023e89
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_header_fill.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_inner_structure.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_inner_structure.png
new file mode 100644
index 0000000..9533ce4
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_inner_structure.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_master_authorizedinfo.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_master_authorizedinfo.png
new file mode 100644
index 0000000..097fb05
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_master_authorizedinfo.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_message_data.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_message_data.png
new file mode 100644
index 0000000..fa7a66e
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_message_data.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_pbmsg_structure.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_pbmsg_structure.png
new file mode 100644
index 0000000..1ec4faf
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_pbmsg_structure.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_close2M.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_close2M.png
new file mode 100644
index 0000000..5342d62
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_close2M.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_diagram.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_diagram.png
new file mode 100644
index 0000000..9d087e7
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_diagram.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_heartbeat2M.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_heartbeat2M.png
new file mode 100644
index 0000000..3dc4367
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_heartbeat2M.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_register2M.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_register2M.png
new file mode 100644
index 0000000..6add74c
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_register2M.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_sendmsg2B.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_sendmsg2B.png
new file mode 100644
index 0000000..2a81905
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_sendmsg2B.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_proto_def.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_proto_def.png
new file mode 100644
index 0000000..f56c275
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_proto_def.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/configure/conf_ini_pos.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/configure/conf_ini_pos.png
new file mode 100644
index 0000000..a68e36d
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/configure/conf_ini_pos.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/configure/conf_velocity_pos.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/configure/conf_velocity_pos.png
new file mode 100644
index 0000000..40e6625
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/configure/conf_velocity_pos.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169770714.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169770714.png
new file mode 100644
index 0000000..4c952c0
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169770714.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169796122.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169796122.png
new file mode 100644
index 0000000..568fefa
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169796122.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169806810.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169806810.png
new file mode 100644
index 0000000..0204457
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169806810.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169823675.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169823675.png
new file mode 100644
index 0000000..7330892
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169823675.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169839931.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169839931.png
new file mode 100644
index 0000000..d961dcf
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169839931.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169851085.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169851085.png
new file mode 100644
index 0000000..28b55c3
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169851085.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169863402.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169863402.png
new file mode 100644
index 0000000..58af810
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169863402.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169879529.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169879529.png
new file mode 100644
index 0000000..b715e74
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169879529.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169889594.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169889594.png
new file mode 100644
index 0000000..37eb229
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169889594.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169900634.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169900634.png
new file mode 100644
index 0000000..fa80612
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169900634.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169908522.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169908522.png
new file mode 100644
index 0000000..8efef9f
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169908522.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169916091.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169916091.png
new file mode 100644
index 0000000..c25a1bb
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169916091.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169925657.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169925657.png
new file mode 100644
index 0000000..dcea033
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169925657.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169946683.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169946683.png
new file mode 100644
index 0000000..15688f9
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169946683.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169954746.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169954746.png
new file mode 100644
index 0000000..4142122
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/console/1568169954746.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/create_pull_request.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/create_pull_request.png
new file mode 100644
index 0000000..2e63a34
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/create_pull_request.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/github_fork_repository.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/github_fork_repository.png
new file mode 100644
index 0000000..d800e10
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/github_fork_repository.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_create_issue.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_create_issue.png
new file mode 100644
index 0000000..1c72d48
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_create_issue.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_filter.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_filter.png
new file mode 100644
index 0000000..6ac0fa0
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_filter.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_1.png
new file mode 100644
index 0000000..cc99519
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_2.png
new file mode 100644
index 0000000..a19f9ee
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/new_pull_request.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/new_pull_request.png
new file mode 100644
index 0000000..de9a478
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/development/new_pull_request.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/mqs_comare.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/mqs_comare.png
new file mode 100644
index 0000000..cb6af4b
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/mqs_comare.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_1.png
new file mode 100644
index 0000000..812a25b
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_2.png
new file mode 100644
index 0000000..2bb77db
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_3.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_3.png
new file mode 100644
index 0000000..0224e3e
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_3.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_4.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_4.png
new file mode 100644
index 0000000..9195504
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_4.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_1.png
new file mode 100644
index 0000000..2f1c0c7
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_2.png
new file mode 100644
index 0000000..5f536a8
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_3.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_3.png
new file mode 100644
index 0000000..41e595a
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_3.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_4.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_4.png
new file mode 100644
index 0000000..c94c5cc
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_4.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_1.png
new file mode 100644
index 0000000..23fc1de
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_2.png
new file mode 100644
index 0000000..6fabcc3
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_3.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_3.png
new file mode 100644
index 0000000..ff8e551
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_3.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_4.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_4.png
new file mode 100644
index 0000000..1f75903
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_4.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_5.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_5.png
new file mode 100644
index 0000000..80342ce
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_5.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_6.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_6.png
new file mode 100644
index 0000000..5714ba2
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_6.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_7.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_7.png
new file mode 100644
index 0000000..67d0cd5
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_7.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_8.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_8.png
new file mode 100644
index 0000000..d63f015
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_8.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_9.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_9.png
new file mode 100644
index 0000000..b459396
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_9.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_1.png
new file mode 100644
index 0000000..ceaf949
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_2.png
new file mode 100644
index 0000000..7a00562
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_3.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_3.png
new file mode 100644
index 0000000..00ebe8d
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_3.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_4.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_4.png
new file mode 100644
index 0000000..2ec4d50
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_4.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_5.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_5.png
new file mode 100644
index 0000000..99fecab
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_5.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_6.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_6.png
new file mode 100644
index 0000000..85e8950
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_6.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_7.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_7.png
new file mode 100644
index 0000000..ff2b2d9
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_7.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_8.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_8.png
new file mode 100644
index 0000000..a805778
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_8.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_9.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_9.png
new file mode 100644
index 0000000..a5926db
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_9.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_1.png
new file mode 100644
index 0000000..0a21fdf
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_2.png
new file mode 100644
index 0000000..b570ee8
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_3.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_3.png
new file mode 100644
index 0000000..fb3c6bc
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_3.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_4.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_4.png
new file mode 100644
index 0000000..322f171
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_4.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_5.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_5.png
new file mode 100644
index 0000000..03ed9c8
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_5.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_6.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_6.png
new file mode 100644
index 0000000..36de673
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_6.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_7.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_7.png
new file mode 100644
index 0000000..eb44fb2
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_7.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_8.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_8.png
new file mode 100644
index 0000000..fbb0415
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_8.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_9.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_9.png
new file mode 100644
index 0000000..e5dec3d
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_9.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_1.png
new file mode 100644
index 0000000..4263605
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_2.png
new file mode 100644
index 0000000..a6407c9
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_3.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_3.png
new file mode 100644
index 0000000..174e42c
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_3.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_4.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_4.png
new file mode 100644
index 0000000..279b319
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_4.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_5.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_5.png
new file mode 100644
index 0000000..b87b8be
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_5.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_6.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_6.png
new file mode 100644
index 0000000..2515997
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_6.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_7.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_7.png
new file mode 100644
index 0000000..80909df
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_7.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_8.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_8.png
new file mode 100644
index 0000000..f610ced
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_8.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_9.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_9.png
new file mode 100644
index 0000000..7155236
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_9.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_1.png
new file mode 100644
index 0000000..2677301
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_1_index.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_1_index.png
new file mode 100644
index 0000000..0901ad7
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_1_index.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_2.png
new file mode 100644
index 0000000..5371180
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_2_index.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_2_index.png
new file mode 100644
index 0000000..abba39d
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_2_index.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_3.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_3.png
new file mode 100644
index 0000000..cfca08b
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_3.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_3_index.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_3_index.png
new file mode 100644
index 0000000..0a3f58e
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_3_index.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_4_index.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_4_index.png
new file mode 100644
index 0000000..2856dbb
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_4_index.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_6_index.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_6_index.png
new file mode 100644
index 0000000..f4f6ce9
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_6_index.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_7.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_7.png
new file mode 100644
index 0000000..a62c928
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_7.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_8.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_8.png
new file mode 100644
index 0000000..fcd1f40
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_8.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_8_index.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_8_index.png
new file mode 100644
index 0000000..66251f8
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scenario_8_index.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scheme.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scheme.png
new file mode 100644
index 0000000..fccce90
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/perf_scheme.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/store_file.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/store_file.png
new file mode 100644
index 0000000..c251dc3
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/store_file.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/store_mem.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/store_mem.png
new file mode 100644
index 0000000..fff9975
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/store_mem.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sys_structure.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sys_structure.png
new file mode 100644
index 0000000..70b4dad
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sys_structure.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_address_host.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_address_host.png
new file mode 100644
index 0000000..4b38251
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_address_host.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_configure.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_configure.png
new file mode 100644
index 0000000..b8b000f
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_configure.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_deploy.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_deploy.png
new file mode 100644
index 0000000..31fc2d7
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_deploy.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_finished.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_finished.png
new file mode 100644
index 0000000..f5364d0
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_finished.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online.png
new file mode 100644
index 0000000..1b0e3e3
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online_2.png
new file mode 100644
index 0000000..9f12cb9
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_1.png
new file mode 100644
index 0000000..4c19cb0
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_2.png
new file mode 100644
index 0000000..7a6aea0
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start.png
new file mode 100644
index 0000000..2ad204b
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start_error.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start_error.png
new file mode 100644
index 0000000..f7a94c5
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start_error.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_compile.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_compile.png
new file mode 100644
index 0000000..edecd21
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_compile.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_1.png
new file mode 100644
index 0000000..f20201b
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_2.png
new file mode 100644
index 0000000..1d35431
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_console.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_console.png
new file mode 100644
index 0000000..d03148d
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_console.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_start.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_start.png
new file mode 100644
index 0000000..a513e6c
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_start.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_startted.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_startted.png
new file mode 100644
index 0000000..764b996
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_startted.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_log.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_log.png
new file mode 100644
index 0000000..ae6a435
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_log.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status.png
new file mode 100644
index 0000000..f7e2982
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status_2.png
new file mode 100644
index 0000000..5f46607
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package.png
new file mode 100644
index 0000000..f04af8a
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package_list.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package_list.png
new file mode 100644
index 0000000..fb531ba
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package_list.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_create.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_create.png
new file mode 100644
index 0000000..ae4af1e
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_create.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_deploy.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_deploy.png
new file mode 100644
index 0000000..d41b54c
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_deploy.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_error.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_error.png
new file mode 100644
index 0000000..1673b8a
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_error.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_finished.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_finished.png
new file mode 100644
index 0000000..f37f726
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_finished.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_select.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_select.png
new file mode 100644
index 0000000..a186889
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_select.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage.png
new file mode 100644
index 0000000..c18ffad
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage_2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage_2.png
new file mode 100644
index 0000000..05dfeac
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage_2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/test_scheme.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/test_scheme.png
new file mode 100644
index 0000000..fcf2087
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/test_scheme.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/test_summary.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/test_summary.png
new file mode 100644
index 0000000..9943b4e
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/test_summary.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-1.png
new file mode 100644
index 0000000..b6bc3e7
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-2.png
new file mode 100644
index 0000000..70466ee
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-3.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-3.png
new file mode 100644
index 0000000..4404414
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-3.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-1.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-1.png
new file mode 100644
index 0000000..4a590b8
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-1.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-2.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-2.png
new file mode 100644
index 0000000..3481225
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-2.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-3.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-3.png
new file mode 100644
index 0000000..fdf2391
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-3.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-4.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-4.png
new file mode 100644
index 0000000..5d7d608
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-4.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-5.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-5.png
new file mode 100644
index 0000000..66028da
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-5.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-6.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-6.png
new file mode 100644
index 0000000..e6fe21e
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-6.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-console-gui.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-console-gui.png
new file mode 100644
index 0000000..c2b4ea8
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-console-gui.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-consume-message.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-consume-message.png
new file mode 100644
index 0000000..1bb14a7
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-consume-message.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-send-message.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-send-message.png
new file mode 100644
index 0000000..c0ab65d
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/img/tubemq-send-message.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/producer_example.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/producer_example.md
new file mode 100644
index 0000000..be2d5ce
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/producer_example.md
@@ -0,0 +1,150 @@
+---
+title: Producer Example
+---
+
+## 1 Producer Example
+TubeMQ provides two ways to initialize a session factory: TubeSingleSessionFactory and TubeMultiSessionFactory.
+  - TubeSingleSessionFactory creates only one session for its entire lifecycle
+  - TubeMultiSessionFactory creates a new session on each call
+
+### 1.1 TubeSingleSessionFactory
+   #### 1.1.1 Send Message Synchronously
+```java
+public final class SyncProducerExample {
+
+    public static void main(String[] args) throws Throwable {
+        final String masterHostAndPort = "localhost:8000";
+        final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+        final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+        final MessageProducer messageProducer = messageSessionFactory.createProducer();
+        final String topic = "test";
+        final String body = "This is a test message from single-session-factory!";
+        byte[] bodyData = StringUtils.getBytesUtf8(body);
+        messageProducer.publish(topic);
+        Message message = new Message(topic, bodyData);
+        MessageSentResult result = messageProducer.sendMessage(message);
+        if (result.isSuccess()) {
+            System.out.println("sync send message : " + message);
+        }
+        messageProducer.shutdown();
+    }
+}
+```
+     
+   #### 1.1.2 Send Message Asynchronously
+```java
+public final class AsyncProducerExample {
+
+    public static void main(String[] args) throws Throwable {
+        final String masterHostAndPort = "localhost:8000";
+        final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+        final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+        final MessageProducer messageProducer = messageSessionFactory.createProducer();
+        final String topic = "test";
+        final String body = "async send message from single-session-factory!";
+        byte[] bodyData = StringUtils.getBytesUtf8(body);
+        messageProducer.publish(topic);
+        final Message message = new Message(topic, bodyData);
+        messageProducer.sendMessage(message, new MessageSentCallback(){
+            @Override
+            public void onMessageSent(MessageSentResult result) {
+                if (result.isSuccess()) {
+                    System.out.println("async send message : " + message);
+                } else {
+                    System.out.println("async send message failed : " + result.getErrMsg());
+                }
+            }
+            @Override
+            public void onException(Throwable e) {
+                System.out.println("async send message error : " + e);
+            }
+        });
+        // the callback fires asynchronously; in production code, wait for it before shutting down
+        messageProducer.shutdown();
+    }
+}
+```
+     
+   #### 1.1.3 Send Message With Attributes
+```java
+public final class ProducerWithAttributeExample {
+
+    public static void main(String[] args) throws Throwable {
+        final String masterHostAndPort = "localhost:8000";
+        final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+        final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+        final MessageProducer messageProducer = messageSessionFactory.createProducer();
+        final String topic = "test";
+        final String body = "send message with attribute from single-session-factory!";
+        byte[] bodyData = StringUtils.getBytesUtf8(body);
+        messageProducer.publish(topic);
+        Message message = new Message(topic, bodyData);
+        // set a custom attribute
+        message.setAttrKeyVal("test_key", "test value");
+        // msgType is used for consumer-side filtering; msgTime (accurate to the minute) is the time window for send/receive statistics
+        SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMddHHmm");
+        message.putSystemHeader("test", sdf.format(new Date()));
+        messageProducer.sendMessage(message);
+        messageProducer.shutdown();
+    }
+
+}
+```
+     
+### 1.2 TubeMultiSessionFactory
+
+```java
+public class MultiSessionProducerExample {
+
+    public static void main(String[] args) throws Throwable {
+        final int SESSION_FACTORY_NUM = 10;
+        final String masterHostAndPort = "localhost:8000";
+        final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+        final List<MessageSessionFactory> sessionFactoryList = new ArrayList<>(SESSION_FACTORY_NUM);
+        final ExecutorService sendExecutorService = Executors.newFixedThreadPool(SESSION_FACTORY_NUM);
+        final CountDownLatch latch = new CountDownLatch(SESSION_FACTORY_NUM);
+        for (int i = 0; i < SESSION_FACTORY_NUM; i++) {
+            TubeMultiSessionFactory tubeMultiSessionFactory = new TubeMultiSessionFactory(clientConfig);
+            sessionFactoryList.add(tubeMultiSessionFactory);
+            MessageProducer producer = tubeMultiSessionFactory.createProducer();
+            Sender sender = new Sender(producer, latch);
+            sendExecutorService.submit(sender);
+        }
+        latch.await();
+        sendExecutorService.shutdownNow();
+        for (MessageSessionFactory sessionFactory : sessionFactoryList) {
+            sessionFactory.shutdown();
+        }
+    }
+
+    private static class Sender implements Runnable {
+
+        private MessageProducer producer;
+
+        private CountDownLatch latch;
+
+        public Sender(MessageProducer producer, CountDownLatch latch) {
+            this.producer = producer;
+            this.latch = latch;
+        }
+
+        @Override
+        public void run() {
+            final String topic = "test";
+            try {
+                producer.publish(topic);
+                final byte[] bodyData = StringUtils.getBytesUtf8("This is a test message from multi-session factory");
+                Message message = new Message(topic, bodyData);
+                producer.sendMessage(message);
+                producer.shutdown();
+            } catch (Throwable ex) {
+                System.out.println("send message error : " + ex);
+            } finally {
+                latch.countDown();
+            }
+        }
+    }
+}
+```
+---
+<a href="#top">Back to top</a>    
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/quick_start.md
new file mode 100644
index 0000000..e806f92
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/quick_start.md
@@ -0,0 +1,183 @@
+---
+title: 快速开始
+---
+## 部署运行
+
+### 1.1 配置示例
+TubeMQ 集群包含两个组件:**Master** 和 **Broker**。Master 和 Broker 可以部署在相同或不同的节点上,依照业务对机器的规划进行部署。下面以如下3台机器搭建包含2台Master的生产、消费集群为例进行配置说明:
+| 所属角色 | TCP端口 | TLS端口 | WEB端口 | 备注 |
+| --- | --- | --- | --- | --- |
+| Master | 8099 | 8199 | 8080 | 元数据存储在`/stage/meta_data` |
+| Broker | 8123 | 8124 | 8081 | 消息储存在`/stage/msg_data` |
+| ZooKeeper | 2181 | | | Offset储存在根目录`/tubemq` |
+
+### 1.2 准备工作
+- ZooKeeper集群
+
+选择安装路径后,安装包解压后的目录结构如下:
+```
+/INSTALL_PATH/inlong-tubemq-server/
+├── bin
+├── conf
+├── lib
+├── logs
+└── resources
+```
+
+### 1.3 配置Master
+编辑`conf/master.ini`,根据集群信息变更以下配置项
+
+- Master IP和端口
+```ini
+[master]
+hostName=YOUR_SERVER_IP                   // 替换为当前主机IP
+port=8099
+webPort=8080
+metaDataPath=/stage/meta_data
+```
+
+- 访问授权Token
+```ini
+confModAuthToken=abc                     // 该token用于页面配置、API调用等
+```
+
+- ZooKeeper集群地址
+```ini
+[zookeeper]                              // 同一个集群里Master和Broker必须使用同一套zookeeper环境,且配置一致
+zkNodeRoot=/tubemq
+zkServerAddr=localhost:2181              // 指向zookeeper集群,多个地址逗号分开
+```
+
+- 配置Replication策略
+```ini
+[replication]
+repGroupName=tubemqGroup1                // 同一个集群的Master必须要用同一个组名,且不同集群的组名必须不同 
+repNodeName=tubemqGroupNode1             // 同一个集群的master节点名必须是不同的名称
+repHelperHost=FIRST_MASTER_NODE_IP:9001  // helperHost用于创建master集群,一般配置第一个master节点ip
+```
+
+- (可选)生产环境,多master HA级别
+
+| HA级别 | Master数量 | 描述 |
+| -------- | ------------- | ----------- |
+| 高 | 3 masters | 任何主节点崩溃后,集群元数据仍处于读/写状态,可以接受新的生产者/消费者。 |
+| 中 | 2 masters | 一个主节点崩溃后,集群元数据处于只读状态。对现有的生产者和消费者没有任何影响。 |
+| 低 | 1 master | 主节点崩溃后,对现有的生产者和消费者没有影响。 |
+
+**注意**:需保证Master所有节点之间的时钟同步
+
+
+### 1.4 配置Broker
+编辑`conf/broker.ini`,根据集群信息变更以下配置项
+- Broker IP和端口
+```ini
+[broker]
+brokerId=0
+hostName=YOUR_SERVER_IP                 // 替换为当前主机IP,broker目前只支持IP
+port=8123
+webPort=8081
+```
+
+- Master地址
+```ini
+masterAddressList=YOUR_MASTER_IP1:8099,YOUR_MASTER_IP2:8099   //多个master以逗号分隔
+```
+
+- 数据目录
+```ini
+primaryPath=/stage/msg_data
+```
+
+- ZooKeeper集群地址
+```ini
+[zookeeper]                             // 同一个集群里Master和Broker必须使用同一套zookeeper环境,且配置一致
+zkNodeRoot=/tubemq                      
+zkServerAddr=localhost:2181             // 指向zookeeper集群,多个地址逗号分开
+```
+
+### 1.5 启动Master
+进入Master节点的 `bin` 目录下,启动服务:
+```bash
+./tubemq.sh master start
+```
+访问Master的管控台 `http://YOUR_MASTER_IP:8080`,页面可访问则表示Master已成功启动:
+![TubeMQ Console GUI](img/tubemq-console-gui.png)
+
+
+#### 1.5.1 配置Broker元数据
+Broker启动前,首先要在Master上配置Broker元数据,增加Broker相关的管理信息。在 `Broker List` 页面点击 `Add Single Broker`,然后填写相关信息:
+
+![Add Broker 1](img/tubemq-add-broker-1.png)
+
+需要填写的内容包括:
+1. broker IP: broker server ip
+1. authToken:  `conf/master.ini` 文件中 `confModAuthToken` 字段配置的 token
+
+然后上线Broker:
+![Add Broker 2](img/tubemq-add-broker-2.png)
+
+### 1.6 启动Broker
+进入broker节点的 `bin` 目录下,执行以下命令启动Broker服务:
+
+```bash
+./tubemq.sh broker start
+```
+
+刷新页面可以看到 Broker 已经注册,当 `当前运行子状态` 为 `idle` 时, 可以增加topic:
+![Add Broker 3](img/tubemq-add-broker-3.png)
+
+## 2 快速使用
+### 2.1 新增 Topic
+
+可以通过 web GUI 添加 Topic, 在 `Topic列表`页面添加,需要填写相关信息,比如增加`demo` topic:
+![Add Topic 1](img/tubemq-add-topic-1.png)
+
+然后选择部署 Topic 的 Broker
+![Add Topic 5](img/tubemq-add-topic-5.png)
+
+此时 Broker的 `可发布` 和 `可订阅` 依旧是灰色的
+![Add Topic 6](img/tubemq-add-topic-6.png)
+
+需要在 `Broker列表`页面重载Broker 配置
+![Add Topic 2](img/tubemq-add-topic-2.png)
+
+![Add Topic 3](img/tubemq-add-topic-3.png)
+
+之后就可以在页面查看Topic信息。
+
+![Add Topic 4](img/tubemq-add-topic-4.png)
+
+
+### 2.2 运行Example
+可以通过上面创建的`demo` topic来测试集群。
+
+#### 2.2.1 生产消息
+将 `YOUR_MASTER_IP:port` 替换为实际的IP和端口,然后运行producer:
+```bash
+cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
+./bin/tubemq-producer-test.sh --master-servers YOUR_MASTER_IP1:port,YOUR_MASTER_IP2:port --topicName demo
+```
+
+如果能观察到如下日志,则表示数据发送成功:
+![Demo 1](img/tubemq-send-message.png)
+
+#### 2.2.2 消费消息
+将 `YOUR_MASTER_IP:port` 替换为实际的IP和端口,然后运行Consumer:
+```bash
+cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
+./bin/tubemq-consumer-test.sh --master-servers YOUR_MASTER_IP1:port,YOUR_MASTER_IP2:port --topicName demo --groupName test_consume
+```
+
+如果能观察到如下日志,则表示数据被消费者消费到:
+
+![Demo 2](img/tubemq-consume-message.png)
+
+
+## 3 结束
+至此,已经完成了TubeMQ的编译、部署、系统配置、启动以及生产和消费测试。如果需要了解更深入的内容,请查看《TubeMQ HTTP API》里的相关内容,进行相应的配置设置。
+
+---
+<a href="#top">Back to top</a>
+
+
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/tubemq-manager/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/tubemq-manager/quick_start.md
new file mode 100644
index 0000000..41a0e4e
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/tubemq-manager/quick_start.md
@@ -0,0 +1,123 @@
+## 部署TubeMQ Manager
+安装文件在inlong-tubemq-manager目录.
+
+### 配置
+- 在mysql中创建`tubemanager`数据库和相应用户。
+- 在conf/application.properties中添加mysql信息:
+
+```ini
+# mysql configuration for manager
+spring.datasource.url=jdbc:mysql://mysql_ip:mysql_port/tubemanager
+spring.datasource.username=mysql_username
+spring.datasource.password=mysql_password
+```
+
+### 启动服务
+
+``` bash
+$ bin/start-manager.sh 
+```
+
+### 初始化TubeMQ集群
+
+    vim bin/init-tube-cluster.sh
+
+替换如下六个参数
+```
+TUBE_MANAGER_IP=   //tube manager服务启动ip
+TUBE_MANAGER_PORT=   //tube manager服务启动port
+TUBE_MASTER_IP=   //tube 集群master ip
+TUBE_MASTER_PORT=
+TUBE_MASTER_WEB_PORT=
+TUBE_MASTER_TOKEN=
+```
+
+然后执行以下命令:
+```
+sh bin/init-tube-cluster.sh
+```
+如上操作会创建一个clusterId为1的tube集群,注意该操作只进行一次,之后重启服务无需新建集群
+
+### 附录:其它操作接口
+
+#### cluster
+查询clusterId以及clusterName全量数据 (get)
+
+示例
+
+【GET】 /v1/cluster
+
+返回值
+
+    {
+    "errMsg": "",
+    "errCode": 0,
+    "result": true,
+    "data": "[{\"clusterId\":1,\"clusterName\":\"1124\", \"masterIp\":\"127.0.0.1\"}]"
+    }
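+注意返回值中的 `data` 字段本身是一个被再次编码的 JSON 字符串,客户端需要做两次 JSON 解析才能得到集群列表。以下是基于上述示例返回值的一个 Python 解析示意(仅作说明,字段以实际接口返回为准):
+
+```python
+import json
+
+# 与上文相同的示例返回值;data 字段是被再次编码的 JSON 字符串
+raw = ('{"errMsg": "", "errCode": 0, "result": true, '
+       '"data": "[{\\"clusterId\\":1,\\"clusterName\\":\\"1124\\",\\"masterIp\\":\\"127.0.0.1\\"}]"}')
+
+resp = json.loads(raw)
+assert resp["result"] and resp["errCode"] == 0
+# 第二次解析 data 字符串,得到集群列表
+clusters = json.loads(resp["data"])
+print(clusters[0]["clusterId"], clusters[0]["clusterName"])
+```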
+
+#### topic
+
+#### 添加topicTask
+
+    type	(必填) 请求类型,字段填写:op_query
+    clusterId	(必填) 请求集群id
+    addTopicTasks (必填) topicTasks,创建task任务json,
+    user	(必填) 之后接入权限验证需要验证用户,这里预留出来
+
+addTopicTasks目前只包括一个字段为topicName
+之后接入region设计会新加入region字段表示不同区域的broker
+目前一个addTopicTask会在cluster中的所有broker创建topic
+
+
+AddTopicTasks 为以下对象的List,可携带多个创建topic请求
+
+    topicName	(必填) topic名称
+
+示例
+
+【POST】 /v1/task?method=addTopicTask
+
+    {
+    "clusterId": "1",
+    "addTopicTasks": [{"topicName": "1"}],
+    "user": "test"
+    }
+
+返回json格式样例
+
+    {
+    "errMsg": "There are topic tasks [a12322] already in adding status",
+    "errCode": 200,
+    "result": false,
+    "data": ""
+    }
+
+result为false表示写入task失败
+
+
+#### 查询某一个topic是否创建成功(业务可以写入)
+
+    clusterId	(必填) 请求集群id
+    topicName   (必填) 查询topic名称
+    user	(必填) 之后接入权限验证需要验证用户,这里预留出来
+
+
+示例
+
+【POST】 /v1/topic?method=queryCanWrite
+
+    {
+    "clusterId": "1",
+    "topicName": "1",
+    "user": "test"
+    }
+
+
+返回json格式样例
+
+    { "result":true, "errCode":0, "errMsg":"OK" }
+    { "result":false, "errCode": 100, "errMsg":"topic test is not writable"}
+    { "result":false, "errCode": 101, "errMsg":"no such topic in master"}
+
+result为false表示不可写
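+
+基于上面三种返回样例,客户端可按如下方式判断 topic 是否可写(Python 示意代码,errCode 的具体取值以实际接口返回为准):
+
+```python
+import json
+
+def can_write(resp_text):
+    """根据 queryCanWrite 返回值判断 topic 是否可写"""
+    resp = json.loads(resp_text)
+    return resp.get("result", False) and resp.get("errCode", -1) == 0
+
+assert can_write('{"result":true, "errCode":0, "errMsg":"OK"}')
+assert not can_write('{"result":false, "errCode":100, "errMsg":"topic test is not writable"}')
+assert not can_write('{"result":false, "errCode":101, "errMsg":"no such topic in master"}')
+```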
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
new file mode 100644
index 0000000..adebec8
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
@@ -0,0 +1,239 @@
+---
+title: TubeMQ VS Kafka性能对比测试总结
+---
+
+## 1 背景
+TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思想源于[Apache Kafka](http://kafka.apache.org/)。在实现上,则完全采取自适应的方式,结合实战做了很多优化及研发工作,如分区管理、分配机制和全新节点通讯流程,自主开发高性能的底层RPC通讯模块等。
+这些实现使得TubeMQ在保证实时性和一致性的前提下,具有很好的健壮性及更高的吞吐能力。结合目前主流消息中间件使用情况,以Kafka为参照做性能对比测试,对比常规应用场景下两套系统性能。
+
+## 2 测试场景方案
+如下是我们根据实际应用场景设计的测试方案:
+![](img/perf_scheme.png)
+
+## 3 测试结论
+用"复仇者联盟"里的角色来形容:
+
+角色|测试场景|要点
+:---:|:---:|---
+闪电侠|场景五|快 (数据生产消费时延 TubeMQ 10ms vs kafka 250ms )
+绿巨人|场景三,场景四|抗击打能力 (随着topic数由100,200,到500,1000逐步增大,TubeMQ系统能力不减,吞吐量随负载的提升下降微小且能力持平 vs kafka吞吐量明显下降且不稳定;过滤消费时,TubeMQ入出流量提升直接完胜kafka的入流量下降且吞吐量下降)
+蜘蛛侠|场景八|各个场景来去自如(不同机型下对比测试,TubeMQ吞吐量稳定 vs Kafka在BX1机型下性能更低的问题)
+钢铁侠|场景二,场景三,场景六|自动化(系统运行中TubeMQ可以动态实时的调整系统设置、消费行为来提升系统性能)
+     
+具体的数据分析来看:
+1. 单Topic单实例配置下,TubeMQ吞吐量要远低于Kafka;单Topic多实例配置下,TubeMQ在4个实例时吞吐量追上Kafka对应5个分区配置,同时TubeMQ的吞吐量随实例数增加而增加,Kafka出现不升反降的情况;TubeMQ可以在系统运行中通过调整各项参数来动态的控制吞吐量的提升;
+2. 多Topic多实例配置下,TubeMQ吞吐量维持在一个非常稳定的范围,且资源消耗,包括文件句柄、网络连接句柄数等非常的低;Kafka吞吐量随Topic数增多呈现明显的下降趋势,且资源消耗急剧增大;在SATA盘存储条件下,随着机型的配置提升,TubeMQ吞吐量可以直接压到磁盘瓶颈,而Kafka呈现不稳定状态;在CG1机型SSD盘情况下,Kafka的吞吐量要好于TubeMQ;
+3. 在过滤消费时,TubeMQ可以极大地降低服务端的网络出流量,同时还会因过滤消费消耗的资源少于全量消费,反过来促进TubeMQ吞吐量提升;kafka无服务端过滤,出流量与全量消费一致,流量无明显的节约;
+4. 资源消耗方面各有差异:TubeMQ由于采用顺序写随机读,CPU消耗很大,Kafka采用顺序写块读,CPU消耗很小,但其他资源,如文件句柄、网络连接等消耗非常的大。在实际的SAAS模式下的运营环境里,Kafka会因为zookeeper依赖出现系统瓶颈,会因生产、消费、Broker众多,受限制的地方会更多,比如文件句柄、网络连接数等,资源消耗会更大;
+
+## 4 测试环境及配置
+### 4.1 【软件版本及部署环境】
+
+**角色**|**TubeMQ**|**Kafka**
+:---:|---|---
+**软件版本**|tubemq-3.8.0|Kafka\_2.11-0.10.2.0
+**zookeeper部署**|与Broker不在同一台机器上,单机|与Broker配置不在同一台机器,单机
+**Broker部署**|单机|单机
+**Master部署**|与Broker不在同一台机器上,单机|不涉及
+**Producer**|1台M10 + 1台CG1|1台M10 + 1台CG1
+**Consumer**|6台TS50万兆机|6台TS50万兆机
+
+### 4.2 【Broker硬件机型配置】
+
+**机型**|配置|**备注**
+:---:|---|---
+**TS60**|(E5-2620v3\*2/16G\*4/SATA3-2T\*12/SataSSD-80G\*1/10GE\*2) Pcs|若未作说明,默认都是在TS60机型上进行测试对比
+**BX1-10G**|SA5212M5(6133\*2/16G\*16/4T\*12/10GE\*2) Pcs|                                     
+**CG1-10G**|CG1-10G\_6.0.2.12\_RM760-FX(6133\*2/16G\*16/5200-480G\*6 RAID/10GE\*2)-ODM Pcs |  
+
+### 4.3 【Broker系统配置】
+
+| **配置项**            | **TubeMQ Broker**     | **Kafka Broker**      |
+|:---:|---|---|
+| **日志存储**          | Raid10处理后的SATA盘或SSD盘 | Raid10处理后的SATA盘或SSD盘 |
+| **启动参数**          | BROKER_JVM_ARGS="-Dcom.sun.management.jmxremote -server -Xmx24g -Xmn8g -XX:SurvivorRatio=6 -XX:+UseMembar -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark -XX:ParallelCMSThreads=4 -XX:+UseCMSCompactAtFullCollection -verbose:gc -Xloggc:$BASE_DIR/logs/gc.log.`date +%Y-%m-%d-%H-%M-%S` -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=75 -XX:CMSFullGCsBeforeCompaction=1 -Dsun.net [...]
+| **配置文件**          | 在tubemq-3.8.0版本broker.ini配置文件上改动: consumerRegTimeoutMs=35000<br/>tcpWriteServiceThread=50<br/>tcpReadServiceThread=50<br/>primaryPath为SATA盘日志目录|kafka_2.11-0.10.2.0版本server.properties配置文件上改动:<br/>log.flush.interval.messages=5000<br/>log.flush.interval.ms=10000<br/>log.dirs为SATA盘日志目录<br/>socket.send.buffer.bytes=1024000<br/>socket.receive.buffer.bytes=1024000<br/>socket.request.max.bytes=2147483600<br/>log.segment.bytes=1073741824<br/>num.network.threads=25<br/>num.io [...]
+| **其它**             | 除测试用例里特别指定,每个topic创建时设置:<br/>memCacheMsgSizeInMB=5<br/>memCacheFlushIntvl=20000<br/>memCacheMsgCntInK=10 <br/>unflushThreshold=5000<br/>unflushInterval=10000<br/>unFlushDataHold=5000 | 客户端代码里设置:<br/>生产端:<br/>props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br/>props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br/>props.put("linger.ms", "200");<br/>props.put("block.on.buffer.full", false);< [...]
+              
+## 5 测试场景及结论
+
+### 5.1 场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能
+ ![](img/perf_scenario_1.png)
+
+#### 5.1.1 【结论】
+
+在单topic不同分区的情况下:
+1. TubeMQ吞吐量不随分区变化而变化,同时TubeMQ属于顺序写随机读模式,单实例情况下吞吐量要低于Kafka,CPU要高于Kafka;
+2. Kafka随着分区增多吞吐量略有下降,CPU使用率很低;
+3. TubeMQ分区由于是逻辑分区,增加分区不影响吞吐量;Kafka分区为物理文件的增加,但增加分区入出流量反而会下降;
+
+#### 5.1.2 【指标】
+ ![](img/perf_scenario_1_index.png)
+
+### 5.2 场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况
+ ![](img/perf_scenario_2.png)
+
+#### 5.2.1 【结论】
+
+从场景一和场景二的测试数据结合来看:
+
+1. TubeMQ随着实例数增多,吞吐量增长,在4个实例的时候吞吐量与Kafka持平,磁盘IO使用率比Kafka低,CPU使用率比Kafka高;
+2. TubeMQ的消费方式影响到系统的吞吐量,内存读取模式(301)性能低于文件读取模式(101),但能降低消息的时延;
+3. Kafka随分区实例数增多,没有如期提升系统吞吐量;
+4. TubeMQ按照Kafka等同的增加实例(物理文件)后,吞吐量量随之提升,在4个实例的时候测试效果达到并超过Kafka
+    5个分区的状态;TubeMQ可以根据业务或者系统配置需要,调整数据读取方式,可以动态提升系统的吞吐量;Kafka随着分区增加,入流量有下降;
+
+#### 5.2.2 【指标】
+
+**注1 :** 如下场景中,均为单Topic测试下不同分区或实例、不同读取模式场景下的测试,单条消息包长均为1K;
+
+**注2 :**
+读取模式通过admin\_upd\_def\_flow\_control\_rule设置qryPriorityId为对应值.
+ ![](img/perf_scenario_2_index.png)
+
+### 5.3 场景三:多topic场景,固定消息包大小、实例及分区数,考察100、200、500、1000个topic场景下TubeMQ和Kafka性能情况
+ ![](img/perf_scenario_3.png)
+
+#### 5.3.1 【结论】
+
+按照多Topic场景下测试:
+
+1.  TubeMQ随着Topic数增加,生产和消费性能维持在一个均线上,没有特别大的流量波动,占用的文件句柄、内存量、网络连接数不多(1k
+    topic下文件句柄约7500个,网络连接150个),但CPU占用比较大;
+2.  TubeMQ通过调整消费方式由内存消费转为文件消费方式后,吞吐量有比较大的增长,CPU占用率有下降,对不同性能要求的业务可以进行区别服务;
+3.  Kafka随着Topic数的增加,吞吐量有明显的下降,同时Kafka流量波动较为剧烈,长时间运行存消费滞后,以及吞吐量明显下降的趋势,以及内存、文件句柄、网络连接数量非常大(在1K
+    Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题;
+4.  数据对比来看,TubeMQ相比Kafka运行更稳定,吞吐量以稳定形势呈现,长时间跑吞吐量不下降,资源占用少,但CPU的占用需要后续版本解决;
+
+#### 5.3.2 【指标】
+
+**注:** 如下场景中,包长均为1K,分区数均为10。
+ ![](img/perf_scenario_3_index.png)
+
+### 5.4 场景四:100个topic,一入一全量出五份部分过滤出:一份全量Topic的Pull消费;过滤消费采用5个不同的消费组,从同样的20个Topic中过滤出10%消息内容
+
+#### 5.4.1 【结论】
+
+1.  TubeMQ采用服务端过滤的模式,出流量指标与入流量存在明显差异;
+2.  TubeMQ服务端过滤提供了更多的资源给到生产,生产性能比非过滤情况有提升;
+3.  Kafka采用客户端过滤模式,入流量没有提升,出流量差不多是入流量的2倍,同时入出流量不稳定;
+
+#### 5.4.2 【指标】
+
+**注:** 如下场景中,topic为100,包长均为1K,分区数均为10
+ ![](img/perf_scenario_4_index.png)
+
+### 5.5 场景五:TubeMQ、Kafka数据消费时延比对
+
+| 类型   | 时延            | Ping时延                |
+|---|---|---|
+| TubeMQ | 90%数据在10ms±  | C->B:0.05ms ~ 0.13ms, P->B:2.40ms ~ 2.42ms |
+| Kafka  | 90%集中在250ms± | C->B:0.05ms ~ 0.07ms, P->B:2.95ms ~ 2.96ms |
+
+备注:TubeMQ的消费端存在一个等待队列处理消息追平生产时的数据未找到的情况,缺省有200ms的等待时延。测试该项时,TubeMQ消费端要调整拉取时延(ConsumerConfig.setMsgNotFoundWaitPeriodMs())为10ms,或者设置频控策略为10ms。
+
+### 5.6 场景六:调整Topic配置的内存缓存大小(memCacheMsgSizeInMB)对吞吐量的影响
+
+#### 5.6.1 【结论】
+
+1.  TubeMQ调整Topic的内存缓存大小能对吞吐量形成正面影响,实际使用时可以根据机器情况合理调整;
+2.  从实际使用情况看,内存大小设置并不是越大越好,需要合理设置该值;
+
+#### 5.6.2 【指标】
+
+ **注:** 如下场景中,消费方式均为读取内存(301)的PULL消费,单条消息包长均为1K
+ ![](img/perf_scenario_6_index.png)
+ 
+
+### 5.7 场景七:消费严重滞后情况下两系统的表现
+
+#### 5.7.1 【结论】
+
+1.  消费严重滞后情况下,TubeMQ和Kafka都会因磁盘IO飙升使得生产消费受阻;
+2.  在带SSD系统里,TubeMQ可以通过SSD转存储消费来换取部分生产和消费入流量;
+3.  按照版本计划,目前TubeMQ的SSD消费转存储特性不是最终实现,后续版本中将进一步改进,使其达到最合适的运行方式;
+
+#### 5.7.2 【指标】
+ ![](img/perf_scenario_7.png)
+
+
+### 5.8 场景八:评估多机型情况下两系统的表现
+ ![](img/perf_scenario_8.png)
+      
+#### 5.8.1【结论】
+
+1.  TubeMQ在BX1机型下较TS60机型有更高的吞吐量,同时因IO util达到瓶颈无法再提升,吞吐量在CG1机型下又较BX1达到更高的指标值;
+2.  Kafka在BX1机型下系统吞吐量不稳定,且较TS60下测试的要低,在CG1机型下系统吞吐量达到最高,万兆网卡跑满;
+3.  在SATA盘存储条件下,TubeMQ性能指标随着硬件配置的改善有明显的提升;Kafka性能指标随硬件机型的改善存在不升反降的情况;
+4.  在SSD盘存储条件下,Kafka性能指标达到最好,TubeMQ指标不及Kafka;
+5.  CG1机型数据存储盘较小(仅2.2T),RAID 10配置下90分钟以内磁盘即被写满,无法测试两系统长时间运行情况。
+
+#### 5.8.2 【指标】
+
+**注1:** 如下场景Topic数均配置500个topic,10个分区,消息包大小为1K字节;
+
+**注2:** TubeMQ采用的是301内存读取模式消费;
+ ![](img/perf_scenario_8_index.png)
+
+## 6 附录 
+## 6.1 附录1 不同机型下资源占用情况图:
+### 6.1.1 【BX1机型测试】
+![](img/perf_appendix_1_bx1_1.png)
+![](img/perf_appendix_1_bx1_2.png)
+![](img/perf_appendix_1_bx1_3.png)
+![](img/perf_appendix_1_bx1_4.png)
+
+### 6.1.2 【CG1机型测试】
+![](img/perf_appendix_1_cg1_1.png)
+![](img/perf_appendix_1_cg1_2.png)
+![](img/perf_appendix_1_cg1_3.png)
+![](img/perf_appendix_1_cg1_4.png)
+
+## 6.2 附录2 多Topic测试时的资源占用情况图:
+
+### 6.2.1 【100个topic】
+![](img/perf_appendix_2_topic_100_1.png)
+![](img/perf_appendix_2_topic_100_2.png)
+![](img/perf_appendix_2_topic_100_3.png)
+![](img/perf_appendix_2_topic_100_4.png)
+![](img/perf_appendix_2_topic_100_5.png)
+![](img/perf_appendix_2_topic_100_6.png)
+![](img/perf_appendix_2_topic_100_7.png)
+![](img/perf_appendix_2_topic_100_8.png)
+![](img/perf_appendix_2_topic_100_9.png)
+ 
+### 6.2.2 【200个topic】
+![](img/perf_appendix_2_topic_200_1.png)
+![](img/perf_appendix_2_topic_200_2.png)
+![](img/perf_appendix_2_topic_200_3.png)
+![](img/perf_appendix_2_topic_200_4.png)
+![](img/perf_appendix_2_topic_200_5.png)
+![](img/perf_appendix_2_topic_200_6.png)
+![](img/perf_appendix_2_topic_200_7.png)
+![](img/perf_appendix_2_topic_200_8.png)
+![](img/perf_appendix_2_topic_200_9.png)
+
+### 6.2.3 【500个topic】
+![](img/perf_appendix_2_topic_500_1.png)
+![](img/perf_appendix_2_topic_500_2.png)
+![](img/perf_appendix_2_topic_500_3.png)
+![](img/perf_appendix_2_topic_500_4.png)
+![](img/perf_appendix_2_topic_500_5.png)
+![](img/perf_appendix_2_topic_500_6.png)
+![](img/perf_appendix_2_topic_500_7.png)
+![](img/perf_appendix_2_topic_500_8.png)
+![](img/perf_appendix_2_topic_500_9.png)
+
+### 6.2.4 【1000个topic】
+![](img/perf_appendix_2_topic_1000_1.png)
+![](img/perf_appendix_2_topic_1000_2.png)
+![](img/perf_appendix_2_topic_1000_3.png)
+![](img/perf_appendix_2_topic_1000_4.png)
+![](img/perf_appendix_2_topic_1000_5.png)
+![](img/perf_appendix_2_topic_1000_6.png)
+![](img/perf_appendix_2_topic_1000_7.png)
+![](img/perf_appendix_2_topic_1000_8.png)
+![](img/perf_appendix_2_topic_1000_9.png)
+
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/website/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/website/quick_start.md
new file mode 100644
index 0000000..9d8441f
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/modules/website/quick_start.md
@@ -0,0 +1,56 @@
+---
+title: 编译部署
+---
+
+## 关于 WebSite
+WebSite 是 [Apache InLong incubator](https://github.com/apache/incubator-inlong) 的管控端。
+
+## 编译
+```
+mvn package -DskipTests -Pdocker -pl inlong-website
+```
+
+## 运行
+```
+docker run -d --name website -e MANAGER_API_ADDRESS=127.0.0.1:8083 -p 80:80 inlong/website
+```
+
+## 开发指引
+
+确认 `nodejs >= 12.0` 已经安装。
+
+在新创建的项目中,您可以运行一些内置命令:
+
+如果没有安装 `node_modules`,你应该首先运行 `npm install` 或 `yarn install`。
+
+使用 `npm run dev` 或 `yarn dev` 在开发模式下运行应用程序。
+
+如果服务器运行成功,将自动在浏览器中打开 [http://localhost:8080](http://localhost:8080) 进行查看。
+
+如果您进行编辑,页面将重新加载。
+您还将在控制台中看到任何 lint 错误。
+
+web服务器的启动依赖于后端服务 `manager api` 接口。
+
+您应该先启动后端服务器,然后将 `/inlong-website/src/setupProxy.js` 中的变量`target` 设置为api服务的地址。
+
+### 测试
+
+运行 `npm test` 或 `yarn test`
+
+在交互式观察模式下启动测试运行器。
+有关更多信息,请参阅有关 [运行测试](https://create-react-app.dev/docs/running-tests/) 的部分。
+
+### 构建
+
+首先保证项目已运行过 `npm install` 或 `yarn install` 安装了 `node_modules`。
+
+运行 `npm run build` 或 `yarn build`。
+
+将用于生产的应用程序构建到构建文件夹。
+在构建后的生产模式下可以获得较好的页面性能。
+
+构建后代码被压缩,文件名包括哈希值。
+您的应用程序已准备好部署!
+
+有关详细信息,请参阅有关 [deployment](https://create-react-app.dev/docs/deployment/) 的部分。
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/user_guide/example.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/user_guide/example.md
new file mode 100644
index 0000000..d211033
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/user_guide/example.md
@@ -0,0 +1,107 @@
+---
+title: 入库 Hive 示例
+sidebar_position: 3
+---
+
+本节用一个简单的示例,帮助您使用 Docker 快速体验 InLong 的完整流程。
+
+
+## 安装 Hive
+Hive 是运行的必备组件。如果您的机器上没有 Hive,这里推荐使用 Docker 进行快速安装,详情可见 [这里](https://github.com/big-data-europe/docker-hive)。
+
+> 注意,如果使用以上 Docker 镜像的话,我们需要在 namenode 中添加一个端口映射 `8020:8020`,因为它是 HDFS DefaultFS 的端口,后面在配置 Hive 时需要用到。
+
+## 安装 InLong
+在开始之前,我们需要安装 InLong 的全部组件,这里提供两种方式:
+1. 按照 [这里的说明](https://github.com/apache/incubator-inlong/tree/master/docker/docker-compose),使用 Docker 进行快速部署。(推荐)
+2. 按照 [这里的说明](./quick_start.md),使用二进制包依次安装各组件。
+
+
+## 新建接入
+部署完毕后,首先我们进入 “数据接入” 界面,点击右上角的 “新建接入”,新建一条接入,按下图所示填入业务信息
+
+<img src="../../img/create-business.png" align="center" alt="Create Business"/>
+
+然后点击下一步,按下图所示填入数据流信息
+
+<img src="../../img/create-stream.png" align="center" alt="Create Stream"/>
+
+注意其中消息来源选择“文件”,暂时不用新建数据源。
+
+然后我们在下面的“数据信息”一栏中填入以下信息
+
+<img src="../../img/data-information.png" align="center" alt="Data Information"/>
+
+然后在数据流向中选择 Hive,并点击 “添加”,添加 Hive 配置
+
+<img src="../../img/hive-config.png" align="center" alt="Hive Config"/>
+
+注意这里目标表无需提前创建,InLong Manager 会在接入通过之后自动为我们创建表。另外,请使用 “连接测试” 保证 InLong Manager 可以连接到你的 Hive。
+
+然后点击“提交审批”按钮,该接入就会创建成功,进入审批状态。
+
+## 审批接入
+进入“审批管理”界面,点击“我的审批”,将刚刚申请的接入通过。
+
+到此接入就已经创建完毕了,我们可以在 Hive 中看到相应的表已经被创建,并且在 TubeMQ 的管理界面中可以看到相应的 topic 已经创建成功。
+
+## 配置 agent
+然后我们使用 docker 进入 agent 容器内,创建相应的 agent 配置。
+```
+$ docker exec -it agent sh
+```
+
+然后我们新建 `.inlong` 文件夹,并创建以 `groupId.local` 命名的文件,在其中填入 Dataproxy 有关配置。
+```
+$ mkdir .inlong
+$ cd .inlong
+$ touch b_test.local
+$ echo '{"cluster_id":1,"isInterVisit":1,"size":1,"address": [{"port":46801,"host":"dataproxy"}], "switch":0}' >> b_test.local
+```
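+
+上面的配置文件也可以用一小段 Python 脚本生成,便于校验 JSON 格式是否正确(示意代码,各字段取值以实际部署环境为准):
+
+```python
+import json
+
+# 生成与上文 echo 等价的 b_test.local 配置内容
+proxy_conf = {
+    "cluster_id": 1,
+    "isInterVisit": 1,
+    "size": 1,
+    "address": [{"port": 46801, "host": "dataproxy"}],
+    "switch": 0,
+}
+with open("b_test.local", "w") as f:
+    json.dump(proxy_conf, f)
+```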
+
+然后退出容器,使用 curl 向 agent 容器发送请求。
+```
+curl --location --request POST 'http://localhost:8008/config/job' \
+--header 'Content-Type: application/json' \
+--data '{
+"job": {
+"dir": {
+"path": "",
+"pattern": "/data/collect-data/test.log"
+},
+"trigger": "org.apache.inlong.agent.plugin.trigger.DirectoryTrigger",
+"id": 1,
+"thread": {
+"running": {
+"core": "4"
+}
+},
+"name": "fileAgentTest",
+"source": "org.apache.inlong.agent.plugin.sources.TextFileSource",
+"sink": "org.apache.inlong.agent.plugin.sinks.ProxySink",
+"channel": "org.apache.inlong.agent.plugin.channel.MemoryChannel"
+},
+"proxy": {
+"groupId": "b_test",
+"streamId": "test_stream"
+},
+"op": "add"
+}'
+```
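+
+上述 curl 请求体也可以先在 Python 中构造并校验后再发送(示意代码,类名与字段均来自上文请求体):
+
+```python
+import json
+
+# 与上文 curl 请求等价的 job 配置
+job_request = {
+    "job": {
+        "dir": {"path": "", "pattern": "/data/collect-data/test.log"},
+        "trigger": "org.apache.inlong.agent.plugin.trigger.DirectoryTrigger",
+        "id": 1,
+        "thread": {"running": {"core": "4"}},
+        "name": "fileAgentTest",
+        "source": "org.apache.inlong.agent.plugin.sources.TextFileSource",
+        "sink": "org.apache.inlong.agent.plugin.sinks.ProxySink",
+        "channel": "org.apache.inlong.agent.plugin.channel.MemoryChannel",
+    },
+    "proxy": {"groupId": "b_test", "streamId": "test_stream"},
+    "op": "add",
+}
+# 序列化后即可 POST 到 http://localhost:8008/config/job
+payload = json.dumps(job_request)
+```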
+
+至此,agent 就配置完毕了。接下来我们可以新建 `./collect-data/test.log` ,并往里面添加内容,来触发 agent 向 dataproxy 发送数据了。
+
+```
+$ touch collect-data/test.log
+$ echo 'test,24' >> collect-data/test.log
+```
+
+然后观察 agent 和 dataproxy 的日志,可以看到相关数据已经成功发送。
+
+```
+$ docker logs agent
+$ docker logs dataproxy
+```
+
+
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/user_guide/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/user_guide/quick_start.md
new file mode 100644
index 0000000..e0958a5
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/user_guide/quick_start.md
@@ -0,0 +1,76 @@
+---
+title: 快速开始
+sidebar_position: 1
+---
+
+本节包含快速入门指南,可帮助您开始使用 Apache InLong。
+
+## 整体架构
+<img src="/img/inlong_architecture.png" align="center" alt="Apache InLong"/>
+
+[Apache InLong](https://inlong.apache.org)(incubating) 整体架构如上,该组件是一站式的海量数据流平台,提供自动化、安全、分布式、高效的数据发布和订阅能力,帮助您轻松构建基于流的数据应用程序。
+
+InLong(应龙)是中国神话故事里的神兽,可以引流入海,借喻InLong可用于流式数据上报功能。
+
+InLong(应龙)最初建于腾讯,服务线上业务8年多,支持大数据场景下的海量数据(每天40万亿条以上规模)上报服务。整个平台集成了数据采集、汇聚、缓存、分拣和管理等共5个模块,通过这个系统,业务只需要提供数据源、数据服务质量、数据落地集群和数据落地格式,即可源源不断地将数据从源集群推送到目标集群,极大满足了业务大数据场景下的数据上报服务需求。
+
+## 编译
+- Java [JDK 8](https://adoptopenjdk.net/?variant=openjdk8)
+- Maven 3.6.1+
+
+```
+$ mvn clean install -DskipTests
+```
+(可选) 使用docker编译:
+```
+$ docker pull maven:3.6-openjdk-8
+$ docker run -v `pwd`:/inlong  -w /inlong maven:3.6-openjdk-8 mvn clean install -DskipTests
+```
+若编译成功,在`inlong-distribution/target`下会找到`tar.gz`格式的安装包,解压安装目录,包括各个模块安装文件:
+```
+inlong-agent
+inlong-dataproxy
+inlong-dataproxy-sdk
+inlong-manager-web
+inlong-sort
+inlong-tubemq-manager
+inlong-tubemq-server
+inlong-website
+```
+
+## 环境要求
+- ZooKeeper 3.5+
+- Hadoop 2.10.x 和 Hive 2.3.x
+- MySQL 5.7+
+- Flink 1.9.x
+
+## 部署InLong TubeMQ Server
+[部署InLong TubeMQ Server](modules/tubemq/quick_start.md)
+
+## 部署InLong TubeMQ Manager
+[部署InLong TubeMQ Manager](modules/tubemq/tubemq-manager/quick_start.md)
+
+## 部署InLong Manager
+[部署InLong Manager](modules/manager/quick_start.md)
+
+## 部署InLong WebSite
+[部署InLong WebSite](modules/website/quick_start.md)
+
+## 部署InLong Sort
+[部署InLong Sort](modules/sort/quick_start.md)
+
+## 部署InLong DataProxy
+[部署InLong DataProxy](modules/dataproxy/quick_start.md)
+
+## 部署InLong DataProxy-SDK
+[部署InLong DataProxy-SDK](modules/dataproxy-sdk/quick_start.md)
+
+## 部署InLong Agent
+[部署InLong Agent](modules/agent/quick_start.md)
+
+## 业务配置
+[配置新业务](./user_manual.md)
+
+## 数据上报验证
+到这里,您就可以通过文件Agent采集数据并在指定的Hive表中验证接收到的数据是否与发送的数据一致。
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/user_guide/user_manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/user_guide/user_manual.md
new file mode 100644
index 0000000..4613300
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.11.0/user_guide/user_manual.md
@@ -0,0 +1,246 @@
+---
+title: 用户手册
+sidebar_position: 2
+---
+
+# 1. 用户登录
+
+系统使用者需输入账号名称和密码进行登录。
+
+![](/cookbooks_img/image-1624433272455.png)
+
+# 2. 数据接入
+
+数据接入模块展示目前用户权限内接入系统的所有任务列表,可以对这些任务进行详情查看、编辑更新和删除操作。
+
+点击【数据接入】接入流程,数据接入信息填写有两个步骤:业务信息、数据流。
+
+![](/cookbooks_img/image-1624431177918.png)
+
+## 2.1 业务信息
+
+### 2.1.1 业务信息
+
+需要用户对接入任务填写基础业务信息。
+
+![](/cookbooks_img/image-1624431271642.png)
+
+- 业务ID:统一小写英文名称,请尽量包含产品名并简洁规范,如pay_base
+- 业务中文名称:业务的中文描述,便于使用与检索,最多128个字
+- 业务责任人:至少2人,业务责任人可查看、修改业务信息,新增和修改所有接入配置项
+- 业务介绍:简短地对此次接入任务进行业务背景和应用介绍
+
+### 2.1.2 接入要求
+
+接入要求需要用户选择消息中间件:高吞吐(TUBE):
+
+![](/cookbooks_img/image-1624431306077.png)
+
+高吞吐—Tube:高吞吐消息传输组件,适用于日志类的消息传递。
+
+### 2.1.3 接入规模
+
+接入规模需要用户预先针对接入数据进行规模判断,以便后续分配计算和存储资源。
+
+![](/cookbooks_img/image-1624431333949.png)
+
+## 2.2 数据流
+
+点击【下一步】进入到数据流信息填写步骤,数据流信息填写有四个模块:基础信息、数据来源、数据信息、数据流向。
+
+在数据流流程中可以点击【新建数据流】,创建一个新的数据流信息填写页面:
+
+![](/cookbooks_img/image-1624431416449.png)
+
+### 2.2.1 基础信息
+
+需用户对该接入任务中数据流的基础信息进行填写:
+
+![](/cookbooks_img/image-1624431435574.png)
+
+- 数据流ID:前缀根据产品/项目自动生成,这在某个具体的接入中是唯一的,与数据源和入库的表中的数据流ID保持一致
+- 数据流名称:接口信息说明,长度限制为64个英文字符(对应32个中文字符)
+- 数据流责任人:数据流责任人可查看、修改数据流信息,新增和修改所有接入配置项
+- 数据流介绍:数据流简单文本介绍
+
+### 2.2.2 数据来源
+
+需用户选择该数据流的消息来源,目前支持文件、自主推送两种方式,并且可以在高级选项中补充该数据来源详细信息:
+
+- 文件:业务数据以文件形式存放,业务机器部署 InLong Agent,根据定制的策略规则进行读取
+- 自主推送:通过 SDK 向消息中间件推送数据
+
+  ![](/cookbooks_img/image-1624431594406.png)
+
+### 2.2.3 数据信息
+
+需用户填写该数据流中数据相关信息:
+
+![](/cookbooks_img/image-1624431617259.png)
+
+- 数据格式:数据来源格式,是普通文本类型,或者KV键值对数据
+- 数据编码:如数据源含中文,需要选UTF-8或GBK,否则编码格式不对,入库后会乱码
+- 源字段分隔符:数据发送到 MQ 里的格式
+- 源数据字段:数据在 MQ 里按某种格式划分的不同含义的属性
+
+### 2.2.4 数据流向
+
+需用户对此任务的最终流向进行选择,此部分为非必填项,目前支持Hive和自主推送两种:
+
+![](/cookbooks_img/image-1624431713360.png)
+
+HIVE流向:
+
+![](/cookbooks_img/image-1624431787323.png)
+
+- 目标库:hive数据库名(需要提前准备创建好)
+- 目标表:hive表名
+- 一级分区:hive数据划分hdfs数据一级子目录的字段名
+- 二级分区:hive数据划分hdfs数据二级子目录的字段名
+- 用户名:hiveserver连接账户名
+- 用户密码:hiveserver连接账户密码
+- HDFS url:hive底层hdfs连接
+- JDBC url:hiveserver 的 JDBC URL
+- 字段相关信息: 源字段名、源字段类型、HIVE字段名、HIVE字段类型、字段描述,并支持删除和新增字段
+
+# 3. 接入详情
+
+## 3.1 执行日志
+
+当数据接入任务状态为“批准成功”和“配置失败”时,可通过“执行日志”功能查看任务执行进度和详情:
+
+![](/cookbooks_img/image-1624432002615.png)
+
+点击【执行日志】将以弹窗形式展示该任务执行日志详情:
+
+![](/cookbooks_img/image-1624432022859.png)
+
+执行日志中将展示该接入流程执行中的任务类型、执行结果、执行日志内容、结束时间;如果执行失败,可以“重启”该任务再次执行。
+
+## 3.2 任务详情
+
+业务负责人/关注人可以查看该任务接入详情,并在【待提交】、【配置成功】、【配置失败】状态下对部分信息进行修改更新。接入任务详情中包含业务信息、数据流、流向三个模块。
+
+### 3.2.1 业务信息
+
+展示接入任务中基础业务信息,点击【编辑】可对部分内容进行修改:
+
+![](/cookbooks_img/image-1624432076857.png)
+
+### 3.2.2 数据流
+
+展示该接入任务下数据流基础信息,点击【新建数据流】可新建一条数据流信息:
+
+![](/cookbooks_img/image-1624432092795.png)
+
+### 3.2.3 流向
+
+展示该接入任务中数据流向基础信息,通过下拉框选择不同流向类型,点击【新建流向配置】可新建一条数据流向:
+
+![](/cookbooks_img/image-1624432114765.png)
+
+# 4. 数据消费
+
+数据消费目前不支持直接消费接入数据,需走数据审批流程后方可正常消费数据。点击【新建消费】,进入数据消费流程,需要填写消费相关信息:
+![](/cookbooks_img/image-1624432235900.png)
+
+## 4.1 消费信息
+
+申请人需在该信息填写模块中逐步填写数据消费申请相关基础消费业务信息:
+
+![](/cookbooks_img/image-1624432254118.png)
+
+- 消费组名称:前缀根据BG/产品/项目自动生成,消费者的简要名称,必须是小写字母、数字、下划线组成,最后审批会根据简称拼接分配出消费者名称
+- 消费责任人:自行选择责任人,必须至少2人;责任人可查看、修改消费信息
+- 消费目标业务ID:需要选择消费数据的业务ID,可以点击【查询】后,在弹窗页面中选择合适的业务ID,如下图所示:
+- 数据用途:选择数据使用用途
+- 数据用途说明:需申请人根据自身消费场景,简要说明使用的项目和数据的用途
+
+信息填写完成后,点击【提交】,会将此数据消费流程正式提交,待审批人审批后方可生效。
+
+![](/cookbooks_img/image-1624432286674.png)
+
+# 5. 审批管理
+
+审批管理功能模块目前包含了我的申请和我的审批,管理系统中数据接入和数据消费申请审批全部任务。
+
+## 5.1 我的申请
+
+展示目前申请人在系统中数据接入、消费提交的任务列表,点击【详情】可以查看目前该任务基础信和审批进程:
+
+![](/cookbooks_img/image-1624432445002.png)
+
+### 5.1.1 数据接入详情
+
+数据接入任务详细展示目前该申请任务基础信息包括:申请人相关信息、申请接入基础信息,以及目前审批进程节点:
+
+![](/cookbooks_img/image-1624432458971.png)
+
+### 5.1.2 数据消费详情
+
+数据消费任务详情展示目前申请任务基础信息包括:申请人信息、基础消费信息,以及目前审批进程节点:
+
+![](/cookbooks_img/image-1624432474526.png)
+
+## 5.2 我的审批
+
+作为具有审批权限的数据接入员和系统成员,具备对数据接入或者消费审批职责:
+
+![](/cookbooks_img/image-1624432496461.png)
+
+### 5.2.1 数据接入审批
+
+新建数据接入审批:目前为一级审批,由系统管理员审批。
+
+系统管理员将根据数据接入业务信息,审核此次接入流程是否符合接入要求:
+
+![](/cookbooks_img/image-1624432515850.png)
+
+### 5.2.2 新建数据消费审批
+
+新建数据消费审批:目前为一级审批,由业务负责人审批。
+
+业务审批:由数据接入业务负责人根据接入信息判断此消费是否符合业务要求:
+
+![](/cookbooks_img/image-1624432535541.png)
+
+# 6. 系统管理
+
+角色为系统管理员的用户才可以使用此功能,他们可以创建、修改、删除用户:
+
+![](/cookbooks_img/image-1624432652141.png)
+
+## 6.1 新建用户
+
+具有系统管理员权限用户,可以进行创建新用户账号:
+
+![](/cookbooks_img/image-1624432668340.png)
+
+- 账号类型: 普通用户(具有数据接入和数据消费权限,不具有数据接入审批和账号管理权限);系统管理员(具有数据接入和数据消费权限、数据接入审批和管理账号的权限)
+- 用户名称:用户登录账号ID
+- 用户密码:用户登录密码
+- 有效时长:该账号可在系统使用期限
+
+![](/cookbooks_img/image-1624432740241.png)
+
+## 6.2 删除用户
+
+系统管理员可以对已创建的用户进行账户删除,删除后此账号将停止使用:
+
+![](/cookbooks_img/image-1624432759224.png)
+
+## 6.3 修改用户
+
+系统管理员可以修改已创建的账号:
+
+![](/cookbooks_img/image-1624432778845.png)
+
+系统管理员可以对账号类型和有效时长进行修改:
+
+![](/cookbooks_img/image-1624432797226.png)
+
+## 6.4 更改密码
+
+用户可以修改账号密码,点击【修改密码】,输入旧密码和新密码,确认后此账号新密码将生效:
+
+![](/cookbooks_img/image-1624432829313.png)
diff --git a/i18n/zh-CN/docusaurus-theme-classic/navbar.json b/i18n/zh-CN/docusaurus-theme-classic/navbar.json
index 6a0ddcb..013f015 100644
--- a/i18n/zh-CN/docusaurus-theme-classic/navbar.json
+++ b/i18n/zh-CN/docusaurus-theme-classic/navbar.json
@@ -7,6 +7,10 @@
     "message": "文档",
     "description": "Navbar item with label DOC"
   },
+  "item.label.latest": {
+    "message": "当前版本",
+    "description": "Navbar item with label latest"
+  },
   "item.label.DOWNLOAD": {
     "message": "下载",
     "description": "Navbar item with label DOWNLOAD"
diff --git a/package.json b/package.json
index 1c47a90..6f5a2d1 100644
--- a/package.json
+++ b/package.json
@@ -16,6 +16,7 @@
   },
   "dependencies": {
     "@docusaurus/core": "^2.0.0-beta.6",
+    "@docusaurus/plugin-content-docs": "^2.0.0-beta.6",
     "@docusaurus/preset-classic": "2.0.0-beta.6",
     "@mdx-js/react": "^1.6.21",
     "@svgr/webpack": "^5.5.0",
diff --git a/src/pages/versions/config.json b/src/pages/versions/config.json
new file mode 100644
index 0000000..e82bfaf
--- /dev/null
+++ b/src/pages/versions/config.json
@@ -0,0 +1,39 @@
+{
+  "zh-CN": {
+    "title": "Apache InLong 所有文档版本",
+    "newVersion": "这是当前的文档版本",
+    "newVersionExplain": "在这里您可以找到当前发布的文档版本",
+    "nextVersion": "这是未发布文档版本",
+    "nextVersionExplain": "在这里您可以找到未发布的文档版本",
+    "passVersion": "这是以前发布的文档版本",
+    "passVersionExplain": "在这里您可以找到以前发布的文档版本",
+    "table": {
+      "doc": "文档",
+      "link": "/zh-CN/docs/user_guide/quick_start",
+      "release": "Release Note",
+      "releaseUrlOne": "/zh-CN/download/release-0.11.0",
+      "nextLink": "/zh-CN/docs/next/user_guide/quick_start",
+      "latestUrl": "/zh-CN/docs/user_guide/quick_start",
+      "source": "源代码"
+    }
+
+  },
+  "en": {
+    "title": "Apache InLong all document versions",
+    "newVersion": "This is the current document version",
+    "newVersionExplain": "Here you can find the currently published version of the document",
+    "nextVersion": "This is an unpublished document version",
+    "nextVersionExplain": "Here you can find the unpublished version of the document",
+    "passVersion": "This is the previously published version of the document",
+    "passVersionExplain": "Here you can find the previously published version of the document",
+    "table": {
+      "doc": "Document",
+      "link": "/docs/user_guide/quick_start",
+      "release": "Release Note",
+      "releaseUrlOne": "/download/release-0.11.0",
+      "nextLink": "/docs/next/user_guide/quick_start",
+      "latestUrl": "/docs/user_guide/quick_start",
+      "source": "Source Code"
+    }
+  }
+}
\ No newline at end of file
diff --git a/src/pages/versions/index.js b/src/pages/versions/index.js
new file mode 100644
index 0000000..84b1353
--- /dev/null
+++ b/src/pages/versions/index.js
@@ -0,0 +1,66 @@
+import React from 'react';
+import useIsBrowser from '@docusaurus/useIsBrowser';
+import config from './config.json';
+import Layout from '@theme/Layout';
+import './index.less';
+
+export default function Versions() {
+    const isBrowser = useIsBrowser();
+
+    const language = isBrowser && location.pathname.indexOf('/zh-CN/') === 0 ? 'zh-CN' : 'en';
+    const dataSource = config?.[language];
+
+    return (
+        <Layout>
+            <div className="div-one"><br/>
+                <h1>{dataSource.title}</h1>
+                <h3>{dataSource.newVersion}</h3>
+                <p>{dataSource.newVersionExplain}</p>
+                <table>
+                    <tr>
+                        <td>0.11.0</td>
+                        <td>
+                            <a href={dataSource.table.latestUrl}>{dataSource.table.doc}</a>
+                        </td>
+                        <td>
+                            <a href={dataSource.table.releaseUrlOne}>{dataSource.table.release}</a>
+                        </td>
+                        <td>
+                            <a href="https://github.com/apache/incubator-inlong">{dataSource.table.source}</a>
+                        </td>
+                    </tr>
+                </table>
+                <br/>
+                <h3>{dataSource.nextVersion}</h3>
+                <p>{dataSource.nextVersionExplain}</p>
+                <table>
+                    <tr>
+                        <td>Next</td>
+                        <td>
+                            <a href={dataSource.table.nextLink}>{dataSource.table.doc}</a>
+                        </td>
+                    </tr>
+                </table>
+                <br/>
+                <h3>{dataSource.passVersion}</h3>
+                <p>{dataSource.passVersionExplain}</p>
+                <table>
+                    <tr>
+                        <td>0.11.0</td>
+                        <td>
+                            <a href={dataSource.table.link}>{dataSource.table.doc}</a>
+                        </td>
+                        <td>
+                            <a href={dataSource.table.releaseUrlOne}>{dataSource.table.release}</a>
+                        </td>
+                    </tr>
+                </table>
+
+            </div>
+        </Layout>
+    );
+}
diff --git a/src/pages/versions/index.less b/src/pages/versions/index.less
new file mode 100644
index 0000000..49d0f52
--- /dev/null
+++ b/src/pages/versions/index.less
@@ -0,0 +1,5 @@
+.div-one {
+  width: 50%;
+  height: 960px;
+  margin: 0 auto;
+}
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/contact.md b/versioned_docs/version-0.11.0/contact.md
new file mode 100644
index 0000000..1ebf107
--- /dev/null
+++ b/versioned_docs/version-0.11.0/contact.md
@@ -0,0 +1,24 @@
+---
+title: Contact Us
+sidebar_position: 10
+---
+
+Contact us
+-------
+- Ask questions on: [Apache InLong Slack](https://the-asf.slack.com/archives/C01QAG6U00L)
+- Mailing lists:
+
+    | Name                                                                          | Scope                           |                                                                 |                                                                     |                                                                              |
+    |:------------------------------------------------------------------------------|:--------------------------------|:----------------------------------------------------------------|:--------------------------------------------------------------------|:-----------------------------------------------------------------------------|
+    | [dev@inlong.apache.org](mailto:dev@inlong.apache.org)     | Development-related discussions | [Subscribe](mailto:dev-subscribe@inlong.apache.org)   | [Unsubscribe](mailto:dev-unsubscribe@inlong.apache.org)   | [Archives](http://mail-archives.apache.org/mod_mbox/inlong-dev/)   |
+	
+- Home page: https://inlong.apache.org
+- Issues: https://issues.apache.org/jira/browse/InLong
+
+
+
+License
+-------
+© Contributors Licensed under an [Apache-2.0](https://github.com/apache/incubator-inlong/blob/master/LICENSE) license.
+
+
diff --git a/versioned_docs/version-0.11.0/modules/_category_.json b/versioned_docs/version-0.11.0/modules/_category_.json
new file mode 100644
index 0000000..fe46450
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Components",
+  "position": 5
+}
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/agent/_category_.json b/versioned_docs/version-0.11.0/modules/agent/_category_.json
new file mode 100644
index 0000000..e503c23
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/agent/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Agent",
+  "position": 3
+}
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/agent/architecture.md b/versioned_docs/version-0.11.0/modules/agent/architecture.md
new file mode 100644
index 0000000..75e58b7
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/agent/architecture.md
@@ -0,0 +1,46 @@
+---
+title: Architecture
+---
+
+## 1. Overview of InLong-Agent
+InLong-Agent is a data collection tool that supports multiple types of data sources and is committed to stable, efficient data collection across heterogeneous data sources such as files, SQL, binlog, and metrics.
+
+### The brief architecture diagram is as follows:
+![](img/architecture.png)
+
+### Design concept
+To cope with the diversity of data sources, InLong-Agent abstracts data sources into a unified Source concept and abstracts Sinks for writing data. When a new data source needs to be accessed, only the data source's format and read parameters need to be configured to achieve efficient reading.
+
+### Current status of use
+InLong-Agent is widely used within Tencent Group, undertaking most of its data collection business, and the amount of online data reaches tens of billions.
+
+## 2. InLong-Agent architecture
+InLong-Agent is a data collection framework built with a channel + plug-in architecture. Reading from and writing to data sources are implemented as reader/writer plug-ins, which are then assembled into the framework.
+
++ Reader: the data collection module, responsible for collecting data from the data source and sending it to the channel.
++ Writer: the data writing module, responsible for continuously reading data from the channel and writing it to the destination.
++ Channel: connects the Reader and Writer, serves as the data transmission pipeline between them, and provides monitoring of the data flow.
+
+
+## 3. Different kinds of agent
+### 3.1 File agent
+File collection includes the following functions:
+
+- Monitoring of user-configured paths, able to detect newly created files
+- Regular-expression directory filtering, supporting YYYYMMDD + regex path configuration
+- Breakpoint resume: when InLong-Agent restarts, it automatically resumes from the last read position, ensuring data is neither re-read nor missed
+
+### 3.2 SQL agent
+This type of data is collected by executing SQL:
+
+- SQL statements are decomposed via regular expressions into multiple concrete SQL statements
+- Each SQL statement is executed separately to pull a data set; the pulling process needs to account for its impact on MySQL itself
+- Execution is generally scheduled periodically
+
+### 3.3 Binlog agent
+This type of collection reads the binlog and restores data by acting as a MySQL slave:
+
+- Binlog reading is parsed with multiple threads, and data parsed by multiple threads needs to be labeled so that order is preserved
+- The code is based on the old version of dbsync; the main modification is replacing sending via tdbus-sender with pushing to the agent channel for integration
+
+
+
+
diff --git a/versioned_docs/version-0.11.0/modules/agent/img/architecture.png b/versioned_docs/version-0.11.0/modules/agent/img/architecture.png
new file mode 100644
index 0000000..1138fe1
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/agent/img/architecture.png differ
diff --git a/versioned_docs/version-0.11.0/modules/agent/quick_start.md b/versioned_docs/version-0.11.0/modules/agent/quick_start.md
new file mode 100644
index 0000000..7e567c2
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/agent/quick_start.md
@@ -0,0 +1,185 @@
+---
+title: Build && Deployment
+---
+
+## 1、Configuration
+```
+cd inlong-agent
+```
+
+The agent supports two modes of operation: local operation and online operation.
+
+
+### Agent configuration
+
+For online operation, the agent needs to pull its configuration from inlong-manager; conf/agent.properties is configured as follows:
+```ini
+# the class name for fetching tasks, default is ManagerFetcher
+agent.fetcher.classname=org.apache.inlong.agent.plugin.fetcher.ManagerFetcher
+# the local IP of the agent machine
+agent.local.ip=local_ip
+agent.manager.vip.http.host=manager web host
+agent.manager.vip.http.port=manager web port
+```
+
+## 2、Run
+After decompression, run the following command:
+
+```bash
+sh agent.sh start
+```
+
+
+## 3、Add job configuration in real time
+
+#### 3.1 Modify the following two settings in agent.properties
+```ini
+# whether to enable the http service
+agent.http.enable=true
+# http port; the examples below assume the default 8008
+agent.http.port=8008
+```
+
+#### 3.2 Execute the following command
+```bash
+curl --location --request POST 'http://localhost:8008/config/job' \
+--header 'Content-Type: application/json' \
+--data '{
+  "job": {
+    "dir": {
+      "path": "",
+      "pattern": "/data/inlong-agent/test.log"
+    },
+    "trigger": "org.apache.inlong.agent.plugin.trigger.DirectoryTrigger",
+    "id": 1,
+    "thread": {
+      "running": {
+        "core": "4"
+      },
+      "onejob": true
+    },
+    "name": "fileAgentTest",
+    "source": "org.apache.inlong.agent.plugin.sources.TextFileSource",
+    "sink": "org.apache.inlong.agent.plugin.sinks.ProxySink",
+    "channel": "org.apache.inlong.agent.plugin.channel.MemoryChannel"
+  },
+  "proxy": {
+    "groupId": "groupId10",
+    "streamId": "groupId10"
+  },
+  "op": "add"
+}'
+```
+
+The meaning of each parameter is:
+- job.dir.pattern: the file path to read, which may include regular expressions
+- job.trigger: the trigger name, DirectoryTrigger by default, which monitors the files under the folder and generates events
+- job.source: the data source type, TextFileSource by default, which reads text files
+- job.sink: the writer type, ProxySink by default, which sends messages to the proxy
+- proxy.groupId: the groupId used when writing to the proxy; it is the group id shown on the data access page in inlong-manager, not the topic name
+- proxy.streamId: the streamId used when writing to the proxy; it is the data stream id shown on the data stream page in inlong-manager
+
+
+## 4、Examples of directory configuration
+
+- /data/inlong-agent/test.log: read the new file test.log in the inlong-agent folder
+- /data/inlong-agent/test[0-9]{1}: read new files in the inlong-agent folder named test followed by a single digit
+- /data/inlong-agent/test: if test is a directory, read all new files under test
+- /data/inlong-agent/^\\d+(\\.\\d+)?: read new files whose names are one or more digits, optionally followed by a dot and one or more digits (the ? makes the fractional part optional); matching examples: "5", "1.5", "2.21"
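The patterns above are evaluated by the agent itself, but a regex can be sanity-checked locally before configuring a job. A minimal illustration (not agent code, any POSIX shell):

```shell
# Illustration only: test candidate file names against the kind of
# regex used in the directory patterns (here: "test" plus exactly one digit).
printf '%s\n' test0 test5 test10 readme | grep -E '^test[0-9]{1}$'
# matches test0 and test5; test10 fails because only one trailing digit is allowed
```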
+
+
+## 5. Support to get data time from file name
+
+The agent supports obtaining the time from the file name and using it as the production time of the data. The configuration is described as follows:
+
+    /data/inlong-agent/***YYYYMMDDHH***
+
+where YYYYMMDDHH represents the data time (YYYY the year, MM the month, DD the day, HH the hour) and *** is any sequence of characters.
+
+At the same time, the current data cycle needs to be added to the job configuration via the property job.cycleUnit. Day and hour cycles are currently supported:
+
+1. D: the data time is at day granularity
+2. H: the data time is at hour granularity
+
+For example, with the data source configured as /data/inlong-agent/YYYYMMDDHH.log and data written to 2021020211.log:
+- If job.cycleUnit is configured as D, the agent will try to read the 2021020211.log file at time 2021020211, and when reading the data in the file it will send all of it to the backend proxy with data time 20210202.
+- If job.cycleUnit is configured as H, all data collected from the 2021020211.log file will be sent to the backend proxy with data time 2021020211.
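A plain-shell illustration of the naming convention above (not agent code; the agent derives the data time internally):

```shell
# For a file named 2021020211.log, the data time the agent would use:
fname="2021020211.log"
stamp="${fname%.log}"                    # 2021020211
day=$(printf %s "$stamp" | cut -c1-8)    # job.cycleUnit=D -> day 20210202
echo "D cycle data time: $day"
echo "H cycle data time: $stamp"         # job.cycleUnit=H -> hour 2021020211
```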
+
+    
+An example of job submission:
+
+```bash
+curl --location --request POST 'http://localhost:8008/config/job' \
+--header 'Content-Type: application/json' \
+--data '{
+  "job": {
+    "dir": {
+      "path": "",
+      "pattern": "/data/inlong-agent/test.log"
+    },
+    "trigger": "org.apache.inlong.agent.plugin.trigger.DirectoryTrigger",
+    "id": 1,
+    "thread": {
+      "running": {
+        "core": "4"
+      }
+    },
+    "name": "fileAgentTest",
+    "cycleUnit": "D",
+    "source": "org.apache.inlong.agent.plugin.sources.TextFileSource",
+    "sink": "org.apache.inlong.agent.plugin.sinks.ProxySink",
+    "channel": "org.apache.inlong.agent.plugin.channel.MemoryChannel"
+  },
+  "proxy": {
+    "groupId": "groupId10",
+    "streamId": "streamId10"
+  },
+  "op": "add"
+}'
+```
+
+## 6. Support time offset reading
+
+After reading by time is configured, if you want to read data for a time other than the current one, you can configure a time offset.
+The job property is job.timeOffset; its value is a number plus a time unit, where the supported units are d (day) and h (hour).
+For example, the following settings are supported:
+1. 1d: read data one day after the current time
+2. -1h: read data one hour before the current time
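The offsets can be illustrated with GNU date (an analogy only; the agent applies the offset to the data time internally, and the -d flag assumes GNU coreutils):

```shell
# What job.timeOffset shifts the data time to, relative to now:
date +%Y%m%d                      # current day-cycle data time
date -d '1 hour ago' +%Y%m%d%H    # job.timeOffset = -1h
date -d '1 day' +%Y%m%d           # job.timeOffset = 1d (one day ahead)
```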
+
+
+An example of job submission:
+```bash
+curl --location --request POST 'http://localhost:8008/config/job' \
+--header 'Content-Type: application/json' \
+--data '{
+  "job": {
+    "dir": {
+      "path": "",
+      "pattern": "/data/inlong-agent/test.log"
+    },
+    "trigger": "org.apache.inlong.agent.plugin.trigger.DirectoryTrigger",
+    "id": 1,
+    "thread": {
+      "running": {
+        "core": "4"
+      }
+    },
+    "name": "fileAgentTest",
+    "cycleUnit": "D",
+    "timeOffset": "-1d",
+    "source": "org.apache.inlong.agent.plugin.sources.TextFileSource",
+    "sink": "org.apache.inlong.agent.plugin.sinks.ProxySink",
+    "channel": "org.apache.inlong.agent.plugin.channel.MemoryChannel"
+  },
+  "proxy": {
+    "groupId": "groupId10",
+    "streamId": "streamId10"
+  },
+  "op": "add"
+}'
+```
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/dataproxy-sdk/_category_.json b/versioned_docs/version-0.11.0/modules/dataproxy-sdk/_category_.json
new file mode 100644
index 0000000..45a5d59
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/dataproxy-sdk/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "DataProxy-SDK",
+  "position": 5
+}
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/dataproxy-sdk/architecture.md b/versioned_docs/version-0.11.0/modules/dataproxy-sdk/architecture.md
new file mode 100644
index 0000000..591163c
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/dataproxy-sdk/architecture.md
@@ -0,0 +1,60 @@
+---
+title: Architecture
+---
+# 1、Intro
+When a business uses message access, it generally only needs to format its data into a packet format recognized by the proxy (such as the six-segment protocol or the digital protocol) and send it to get the data into InLong. However, to ensure data reliability, load balancing, dynamic updating of the proxy list, and similar features, the user program would have to handle much more, ultimately becoming cumbersome and bloated.
+
+The API is designed to simplify user access and take over part of this reliability-related logic. After integrating the API into the delivery program, the user can send data to the proxy without worrying about packet formats, load balancing, or other such logic.
+
+# 2、functions
+
+## 2.1 overall functions
+
+|  function   | description  |
+|  ----  | ----  |
+| Packaging (new)  | Packs user data into a packet format recognized by the proxy (such as the six-segment protocol or the digital protocol) before sending |
+| Compression | Compresses user data before sending to the proxy to reduce network bandwidth usage |
+| Proxy list maintenance | Fetches the proxy list every five minutes to detect proxy machine changes on the operations side; automatically removes unavailable connections every 20s to ensure the connected proxies work normally |
+| Metric statistics (new) | Adds minute-level, per-interface metrics of the business send volume |
+| Load balancing (new) | Uses a new strategy to balance the sent data across multiple proxies instead of relying on a simple random + round-robin mechanism |
+| Proxy list persistence (new) | Persists the proxy list per business group id to prevent data from failing to send when the configuration center is unavailable at program startup |
+
+
+## 2.2 Data transmission function description
+
+### Synchronous batch function
+
+    public SendResult sendMessage(List<byte[]> bodyList, String groupId, String streamId, long dt, long timeout, TimeUnit timeUnit)
+
+    Parameter description:
+
+    bodyList is a collection of multiple pieces of data to be sent; the total length is recommended to be less than 512KB. groupId is the business id and streamId is the interface id. dt is the timestamp of the data, accurate to the millisecond; it can also be set to 0, in which case the API uses the current time as the timestamp. timeout and timeUnit set the timeout for sending data, a [...]
+
+### Synchronous single function
+
+    public SendResult sendMessage(byte[] body, String groupId, String streamId, long dt, long timeout, TimeUnit timeUnit)
+
+    Parameter description:
+
+    body is the content of the single piece of data to send; the remaining parameters have basically the same meaning as in the batch sending interface.
+
+
+### Asynchronous batch function
+
+    public void asyncSendMessage(SendMessageCallback callback, List<byte[]> bodyList, String groupId, String streamId, long dt, long timeout,TimeUnit timeUnit)
+
+    Parameter description:
+
+    SendMessageCallback is the callback for processing the send result. bodyList is a collection of multiple pieces of data to be sent, whose total length is recommended to be less than 512KB. groupId is the business id and streamId is the interface id. dt is the timestamp of the data, accurate to the millisecond; it can also be set to 0, in which case the API uses the current time as the timestamp. timeout and timeUnit  [...]
+
+
+### Asynchronous single function
+
+
+    public void asyncSendMessage(SendMessageCallback callback, byte[] body, String groupId, String streamId, long dt, long timeout, TimeUnit timeUnit)
+
+    Parameter description:
+
+    body is the content of a single message; the remaining parameters have basically the same meaning as in the batch sending interface.
diff --git a/versioned_docs/version-0.11.0/modules/dataproxy-sdk/quick_start.md b/versioned_docs/version-0.11.0/modules/dataproxy-sdk/quick_start.md
new file mode 100644
index 0000000..aec4d30
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/dataproxy-sdk/quick_start.md
@@ -0,0 +1,12 @@
+---
+title: Build && Deployment
+---
+# How to use
+
+Add the dependency to your pom.xml and use the API described in [architecture](architecture.md):
+
+    <dependency>
+            <groupId>org.apache.inlong</groupId>
+            <artifactId>inlong-dataproxy-sdk</artifactId>
+            <version>0.10.0-incubating-SNAPSHOT</version>
+    </dependency>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/dataproxy/_category_.json b/versioned_docs/version-0.11.0/modules/dataproxy/_category_.json
new file mode 100644
index 0000000..148d928
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/dataproxy/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "DataProxy",
+  "position": 4
+}
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/dataproxy/architecture.md b/versioned_docs/version-0.11.0/modules/dataproxy/architecture.md
new file mode 100644
index 0000000..a7d72f5
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/dataproxy/architecture.md
@@ -0,0 +1,152 @@
+---
+title: Architecture
+---
+# 1、Intro
+
+InLong-DataProxy belongs to the InLong proxy layer and is used for data collection, reception, and forwarding. Through format conversion, data is converted into the TDMsg1 format that the cache layer can store and process.
+InLong-DataProxy acts as a bridge from the InLong collection layer to the InLong buffer layer. It pulls the mapping between business group ids and the corresponding topic names from the manager module, and internally manages producers for multiple topics.
+The overall architecture of InLong-DataProxy is based on Apache Flume. On top of that project, it extends the source and sink layers and optimizes failover forwarding, which improves the stability of the system.
+
+
+# 2、architecture
+
+![](img/architecture.png)
+
+ 	1. The source layer opens a listening port, implemented with a Netty server; decoded data is sent to the channel layer.
+ 	2. The channel layer has a selector used to choose which kind of channel the data goes through; if the memory channels eventually fill up, the data fails over to the slave channels.
+ 	3. Data in the channel layer is forwarded through the sink layer, whose main purpose here is to convert the data to the TDMsg1 format and push it to the cache layer (tube is most commonly used).
+
+# 3、DataProxy support configuration instructions
+
+DataProxy supports configurable source-channel-sink, and the configuration method is the same as the configuration file structure of flume:
+
+Source configuration example and corresponding notes:
+
+    agent1.sources.tcp-source.channels = ch-msg1 ch-msg2 ch-msg3 ch-more1 ch-more2 ch-more3 ch-msg5 ch-msg6 ch-msg7 ch-msg8 ch-msg9 ch-msg10 ch-transfer ch-back
+    Defines the channels used by this source. Note that any channel used in the configuration under this source must be listed here
+
+    agent1.sources.tcp-source.type = org.apache.flume.source.SimpleTcpSource
+    The TCP source type definition; the class name given here is instantiated. SimpleTcpSource mainly initializes the configuration and starts port listening
+
+    agent1.sources.tcp-source.msg-factory-name = org.apache.flume.source.ServerMessageFactory
+    Handler used for message structure analysis, and set read stream handler and write stream handler
+
+    agent1.sources.tcp-source.host = 0.0.0.0
+    The IP the TCP listener binds to; all network interfaces by default
+
+    agent1.sources.tcp-source.port = 46801
+    The TCP port to bind, 46801 by default
+
+    agent1.sources.tcp-source.highWaterMark=2621440
+    The concept of netty, set the netty high water level value
+
+    agent1.sources.tcp-source.enableExceptionReturn=true
+    The new function of v1.7 version, optional, the default is false, used to open the exception channel, when an exception occurs, the data is written to the exception channel to prevent other normal data transmission (the open source version does not add this function), Details: Increase the local disk of abnormal data landing
+
+    agent1.sources.tcp-source.max-msg-length = 524288
+    Limit the size of a single package, here if the compressed package is transmitted, it is the compressed package size, the limit is 512KB
+
+    agent1.sources.tcp-source.topic = test_token
+    The default topic value, if the mapping relationship between groupId and topic cannot be found, it will be sent to this topic
+
+    agent1.sources.tcp-source.attr = m=9
+    The default value of m is set, where the value of m is the version of inlong's internal TdMsg protocol
+
+    agent1.sources.tcp-source.connections = 5000
+    Upper limit of concurrent connections; new connections are closed once the limit is exceeded
+
+    agent1.sources.tcp-source.max-threads = 64
+    Upper limit of Netty worker threads; twice the number of CPU cores is generally recommended
+
+    agent1.sources.tcp-source.receiveBufferSize = 524288
+    Netty server tcp tuning parameters
+
+    agent1.sources.tcp-source.sendBufferSize = 524288
+    Netty server tcp tuning parameters
+
+    agent1.sources.tcp-source.custom-cp = true
+    Whether to use the self-developed channel process, the self-developed channel process can select the alternate channel to send when the main channel is blocked
+
+    agent1.sources.tcp-source.selector.type = org.apache.flume.channel.FailoverChannelSelector
+    A self-developed channel selector; it differs little from the official one, mainly in the channel master/slave selection logic
+
+    agent1.sources.tcp-source.selector.master = ch-msg5 ch-msg6 ch-msg7 ch-msg8 ch-msg9
+    Specifies the master channels, which are preferentially selected for data pushing. Channels defined in channels but not listed in the master, transfer, fileMetric, or slaMetric configuration items are classified as slave channels; a slave channel is selected when the master channels are full, and file channels are generally recommended as slave channels
+
+    agent1.sources.tcp-source.selector.transfer = ch-msg5 ch-msg6 ch-msg7 ch-msg8 ch-msg9
+    Specify the transfer channel to accept the transfer type data. The transfer here generally refers to the data pushed to the non-tube cluster, which is only for forwarding, and it is reserved for subsequent functions.
+
+    agent1.sources.tcp-source.selector.fileMetric = ch-back
+    Specify the fileMetric channel to receive the metric data reported by the agent
+
+
+Channel configuration examples and corresponding annotations
+
+memory channel
+
+    agent1.channels.ch-more1.type = memory
+    memory channel type
+
+    agent1.channels.ch-more1.capacity = 10000000
+    Memory channel queue size, the maximum number of messages that can be cached
+
+    agent1.channels.ch-more1.keep-alive = 0
+    
+    agent1.channels.ch-more1.transactionCapacity = 20
+    The maximum number of messages handled in one atomic (batch) operation; the memory channel must be locked during use, so batching increases efficiency
+
+file channel
+
+    agent1.channels.ch-msg5.type = file
+    file channel type
+
+    agent1.channels.ch-msg5.capacity = 100000000
+    The maximum number of messages that can be cached in a file channel
+
+    agent1.channels.ch-msg5.maxFileSize = 1073741824
+    file channel file maximum limit, the number of bytes
+
+    agent1.channels.ch-msg5.minimumRequiredSpace = 1073741824
+    The minimum free space of the disk where the file channel is located. Setting this value can prevent the disk from being full
+
+    agent1.channels.ch-msg5.checkpointDir = /data/work/file/ch-msg5/check
+    file channel checkpoint path
+
+    agent1.channels.ch-msg5.dataDirs = /data/work/file/ch-msg5/data
+    file channel data path
+
+    agent1.channels.ch-msg5.fsyncPerTransaction = false
+    Whether to synchronize the disk for each atomic operation, it is recommended to change it to false, otherwise it will affect the performance
+
+    agent1.channels.ch-msg5.fsyncInterval = 5
+    The time interval between data flush from memory to disk, in seconds
+
+Sink configuration example and corresponding notes
+
+    agent1.sinks.meta-sink-more1.channel = ch-msg1
+    The upstream channel name of the sink
+
+    agent1.sinks.meta-sink-more1.type = org.apache.flume.sink.MetaSink
+    The sink implementation class; this implementation pushes messages to the tube cluster
+
+    agent1.sinks.meta-sink-more1.master-host-port-list =
+    Tube cluster master node list
+
+    agent1.sinks.meta-sink-more1.send_timeout = 30000
+    Timeout limit when sending to tube
+
+    agent1.sinks.meta-sink-more1.stat-interval-sec = 60
+    Sink indicator statistics interval time, in seconds
+
+    agent1.sinks.meta-sink-more1.thread-num = 8
+    Sink class sends messages to the worker thread, 8 means to start 8 concurrent threads
+
+    agent1.sinks.meta-sink-more1.client-id-cache = true
+    agent id cache, used to check the data reported by the agent to remove duplicates
+
+    agent1.sinks.meta-sink-more1.max-survived-time = 300000
+    Maximum cache time
+    
+    agent1.sinks.meta-sink-more1.max-survived-size = 3000000
+    Maximum number of caches
diff --git a/versioned_docs/version-0.11.0/modules/dataproxy/img/architecture.png b/versioned_docs/version-0.11.0/modules/dataproxy/img/architecture.png
new file mode 100644
index 0000000..bc46026
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/dataproxy/img/architecture.png differ
diff --git a/versioned_docs/version-0.11.0/modules/dataproxy/quick_start.md b/versioned_docs/version-0.11.0/modules/dataproxy/quick_start.md
new file mode 100644
index 0000000..e8bcc69
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/dataproxy/quick_start.md
@@ -0,0 +1,58 @@
+---
+title: Build && Deployment
+---
+## Deploy DataProxy
+
+All deployment files are in the `inlong-dataproxy` directory.
+
+### Config TubeMQ Master
+
+`tubemq_master_list` is the rpc address of TubeMQ Master.
+```
+$ sed -i 's/TUBE_LIST/tubemq_master_list/g' conf/flume.conf
+```
+
+Note that in `conf/flume.conf`, `FLUME_HOME` is the directory used by the proxy for its internal data.
+
+### Environmental preparation
+
+```
+sh prepare_env.sh
+```
+
+### Config manager web URL
+
+configuration file: `conf/common.properties`:
+```
+# manager web 
+manager_hosts=ip:port 
+```
+
+## Run
+
+```
+sh bin/start.sh
+```
+	
+
+## Check
+```
+telnet 127.0.0.1 46801
+```
+
+## Add DataProxy configuration to InLong-Manager
+
+After installing DataProxy, you need to insert the IP and port of the host where the DataProxy service is located into the backend database of InLong-Manager.
+
+For the backend database address of InLong-Manager, please refer to the deployment document of the InLong-Manager module.
+
+The insert SQL statement is:
+
+```sql
+-- name is the name of the DataProxy, which can be customized
+-- address is the IP of the host where the DataProxy service is located
+-- port is the port of the DataProxy service, default is 46801
+insert into data_proxy_cluster (name, address, port, status, is_deleted, create_time, modify_time)
+values ("data_proxy_name", "data_proxy_ip", 46801, 0, 0, now(), now());
+```
+
diff --git a/versioned_docs/version-0.11.0/modules/manager/_category_.json b/versioned_docs/version-0.11.0/modules/manager/_category_.json
new file mode 100644
index 0000000..6e9ba33
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/manager/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Manager",
+  "position": 1
+}
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/manager/architecture.md b/versioned_docs/version-0.11.0/modules/manager/architecture.md
new file mode 100644
index 0000000..84256a9
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/manager/architecture.md
@@ -0,0 +1,32 @@
+---
+title: Architecture
+---
+
+## Introduction to Apache InLong Manager
+
++ Target positioning: Apache InLong is positioned as a one-stop data integration solution, providing complete technical capabilities for big data access scenarios, covering data collection, transmission, and sorting.
+
++ Platform value: Users can complete task configuration, management, and metric monitoring through the platform's built-in management and configuration console. At the same time, the platform provides SPI extension points at the main stages of the process so that custom logic can be implemented as needed, ensuring stable and efficient operation while lowering the barrier to using the platform.
+
++ Apache InLong Manager is the user-oriented unified UI of the entire data access platform. After a user logs in, it provides different function and data permissions according to the user's role. The page provides maintenance portals for the platform's basic clusters (such as MQ and sorting), where basic maintenance information and capacity planning adjustments can be viewed at any time. At the same time, business users can complete the creation, modification and maint [...]
+
+## Architecture
+
+![](img/inlong-manager.png)
+
+
+## Module Division of Labor
+
+| Module | Responsibilities |
+| :----| :---- |
+| manager-common | Common module code: exception definitions, utility classes, enumerations, etc. |
+| manager-dao | Database operations |
+| manager-service | Business logic layer |
+| manager-web | Front-end interaction response interface |
+| manager-workflow-engine | Workflow engine |
+
+## Usage Process
+![](img/interactive.jpg)
+
+
+## Data Model
+![](img/datamodel.jpg)
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/manager/img/datamodel.jpg b/versioned_docs/version-0.11.0/modules/manager/img/datamodel.jpg
new file mode 100644
index 0000000..7d0b578
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/manager/img/datamodel.jpg differ
diff --git a/versioned_docs/version-0.11.0/modules/manager/img/inlong-manager.png b/versioned_docs/version-0.11.0/modules/manager/img/inlong-manager.png
new file mode 100644
index 0000000..3db4937
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/manager/img/inlong-manager.png differ
diff --git a/versioned_docs/version-0.11.0/modules/manager/img/interactive.jpg b/versioned_docs/version-0.11.0/modules/manager/img/interactive.jpg
new file mode 100644
index 0000000..7238d00
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/manager/img/interactive.jpg differ
diff --git a/versioned_docs/version-0.11.0/modules/manager/quick_start.md b/versioned_docs/version-0.11.0/modules/manager/quick_start.md
new file mode 100644
index 0000000..6f61754
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/manager/quick_start.md
@@ -0,0 +1,90 @@
+---
+title: Build && Deployment
+---
+
+# 1. Environmental preparation
+- Install and start MySQL 5.7+, copy the `doc/sql/apache_inlong_manager.sql` file in the inlong-manager module to the
+  server where the MySQL database is located (for example, copy to `/data/` directory), load this file through the
+  following command to complete the initialization of the table structure and basic data:
+
+  ```shell
+  # Log in to the MySQL server by username and password:
+  mysql -u xxx -p xxx
+  ...
+  # Create database
+  CREATE DATABASE IF NOT EXISTS apache_inlong_manager;
+  USE apache_inlong_manager;
+  # Load the above SQL file through the source command:
+  mysql> source /data/apache_inlong_manager.sql;
+  ```
+
+- Refer to [Compile and deploy TubeMQ](https://inlong.apache.org/zh-cn/docs/modules/tubemq/quick_start.html) to install
+  and start the Tube cluster;
+
+- Refer
+  to [Compile and deploy TubeMQ Manager](https://inlong.apache.org/zh-cn/docs/modules/tubemq/tubemq-manager/quick_start.html)
+  , install and start TubeManager.
+
+# 2. Deploy and start manager-web
+
+**manager-web is a background service that interacts with the front-end page.**
+
+## 2.1 Prepare installation files
+
+All installation files are in the `inlong-manager-web` directory.
+
+## 2.2 Modify configuration
+
+Go to the decompressed `inlong-manager-web` directory and modify the `conf/application.properties` file:
+
+```properties
+# manager-web service port number
+server.port=8083
+
+# The configuration file used is dev
+spring.profiles.active=dev
+```
+
+The dev configuration is specified above, then modify the `conf/application-dev.properties` file:
+
+1) Modify the database URL, username and password:
+
+   ```properties
+   spring.datasource.jdbc-url=jdbc:mysql://127.0.0.1:3306/apache_inlong_manager?useSSL=false&allowPublicKeyRetrieval=true&characterEncoding=UTF-8&nullCatalogMeansCurrent=true&serverTimezone=GMT%2b8
+   spring.datasource.username=xxxxxx
+   spring.datasource.password=xxxxxx
+   ```
+
+2) Modify the connection information of the Tube and ZooKeeper clusters; it is recommended to keep the
+   default value of `cluster.zk.root`:
+
+   ```properties
+   # Manager address of Tube cluster, used to create Topic
+   cluster.tube.manager=http://127.0.0.1:8081
+   # Master address of the Tube cluster, used to manage Brokers
+   cluster.tube.master=127.0.0.1:8000,127.0.0.1:8010
+   # Tube cluster ID
+   cluster.tube.clusterId=1
+
+   # ZK cluster, used to push the configuration of Sort
+   cluster.zk.url=127.0.0.1:2181
+   cluster.zk.root=inlong_hive
+   
+   # Sort application name, that is, set the cluster-id parameter of Sort, the default value is "inlong_app"
+   sort.appName=inlong_app
+   ```
+
+## 2.3 Start the service
+
+Enter the decompressed directory, execute `sh bin/startup.sh` to start the service, and check the
+log `tailf log/manager-web.log`. If a log similar to the following appears, the service has started successfully:
+
+```shell
+Started InLongWebApplication in 6.795 seconds (JVM running for 7.565)
+```
+
+# 3. Service access verification
+
+Verify the manager-web service:
+
+Visit address: <http://[manager_web_ip]:[manager_web_port]/api/inlong/manager/doc.html#/home>
diff --git a/versioned_docs/version-0.11.0/modules/sort/_category_.json b/versioned_docs/version-0.11.0/modules/sort/_category_.json
new file mode 100644
index 0000000..8f2c6e3
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/sort/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Sort",
+  "position": 7
+}
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/sort/img.png b/versioned_docs/version-0.11.0/modules/sort/img.png
new file mode 100644
index 0000000..131eddf
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/sort/img.png differ
diff --git a/versioned_docs/version-0.11.0/modules/sort/introduction.md b/versioned_docs/version-0.11.0/modules/sort/introduction.md
new file mode 100644
index 0000000..a215522
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/sort/introduction.md
@@ -0,0 +1,37 @@
+---
+title: Architecture
+---
+
+# Overview
+Inlong-sort is used to extract data from different source systems, then transform the data, and finally load the data into different storage systems.
+Inlong-sort is simply a Flink application, and relies on Inlong-manager to manage metadata (such as source information and storage information).
+
+# Features
+## Multi-tenancy
+Inlong-sort is a multi-tenant system, which means you can extract data from different sources (these sources must be of the same source type) and load data into different sinks (these sinks must be of the same storage type).
+e.g. you can extract data from different topics of inlong-tubemq and then load them into different hive clusters.
+
+## Change metadata without restart
+Inlong-sort uses ZooKeeper to manage its metadata; every time you change metadata on ZK, the inlong-sort application is informed immediately.
+e.g. if you want to change the schema of your data, just change the metadata on ZK without restarting your inlong-sort application.
+
+# Supported sources
+- inlong-tubemq
+- pulsar
+
+# Supported storages
+- clickhouse
+- hive (currently we only support the parquet file format)
+
+# Limitations
+Currently, we only support extracting specified fields in the **Transform** stage.
+
+# Future plans
+## More kinds of source systems
+Kafka, etc.
+
+## More kinds of storage systems
+HBase, Elasticsearch, etc.
+
+## More kinds of file formats in the hive sink
+SequenceFile, ORC
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/sort/protocol_introduction.md b/versioned_docs/version-0.11.0/modules/sort/protocol_introduction.md
new file mode 100644
index 0000000..a04538f
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/sort/protocol_introduction.md
@@ -0,0 +1,25 @@
+---
+title: Zookeeper Configure
+---
+
+# Overview
+Currently the metadata management of inlong-sort relies on inlong-manager.
+
+Metadata interaction between inlong-sort and inlong-manager is performed via ZK.
+
+# Zookeeper's path structure
+
+![img.png](img.png)
+
+
+A Cluster represents one Flink job. Multiple flows can be handled in the same cluster, but these flows must be homogeneous (they share the same source type and sink type).
+
+The DataFlow represents a specific flow, and each flow is identified by a globally unique ID. The flow consists of source + sink.
+
+The path at the top of the figure indicates which dataflows are running in a cluster; there is no metadata under that node.
+
+The path below is used to store the details of the dataflow.
+
+# Protocol
+Please reference
+`org.apache.inlong.sort.protocol.DataFlowInfo`
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/sort/quick_start.md b/versioned_docs/version-0.11.0/modules/sort/quick_start.md
new file mode 100644
index 0000000..d2e46eb
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/sort/quick_start.md
@@ -0,0 +1,69 @@
+---
+title: Build && Deployment
+---
+
+## Set up flink environment
+Currently inlong-sort is based on Flink. Before you run an inlong-sort application,
+you need to set up a Flink environment.
+
+<a href="https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/deployment/cluster_setup.html" target="_blank">how to set up flink environment</a>
+
+Currently, inlong-sort relies on flink-1.9.3. Choose `flink-1.9.3-bin-scala_2.11.tgz` when downloading the package.
+
+Once your Flink environment is set up, you can visit the Flink web UI, whose address is configured in `/${your_flink_path}/conf/masters`.
+
+## Prepare installation files
+All installation files are in the `inlong-sort` directory.
+
+## Starting an inlong-sort application
+Now you can submit the compiled jar as a job to Flink.
+
+<a href="https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/deployment/yarn_setup.html#submit-job-to-flink" target="_blank">how to submit job to flink</a>
+
+Example:
+
+- `./bin/flink run -c org.apache.inlong.sort.flink.Entrance inlong-sort-core-1.0-SNAPSHOT.jar --cluster-id my_application --zookeeper.quorum 127.0.0.1:2181 --zookeeper.path.root /inlong-sort --source.type tubemq --sink.type hive`
+
+Notice:
+
+- `-c org.apache.inlong.sort.flink.Entrance` is the main class name
+
+- `inlong-sort-core-1.0-SNAPSHOT.jar` is the compiled jar
+
+## Necessary configurations
+- `--cluster-id ` which is used to represent a specified inlong-sort application
+- `--zookeeper.quorum` zk quorum
+- `--zookeeper.path.root` zk root path
+- `--source.type` source of the application, currently "tubemq" and "pulsar" are supported
+- `--sink.type` sink of the application, currently "clickhouse" and "hive" are supported
+
+The configurations above are required; you can see the full list of configurations in
+
+`~/Inlong/inlong-sort/common/src/main/java/org/apache/inlong/sort/configuration/Constants.java`
+
+**Example**
+
+`--cluster-id my_application --zookeeper.quorum 192.127.0.1:2181 --zookeeper.path.root /zk_root --source.type tubemq --sink.type hive`
+
+##  All configurations
+|  name | necessary  | default value  |description   |
+| ------------ | ------------ | ------------ | ------------ |
+|cluster-id   |  Y | NA  |  used to represent a specified inlong-sort application |
+|zookeeper.quorum   | Y  | NA  | zk quorum  |
+|zookeeper.path.root   | Y  | "/inlong-sort"  |  zk root path  |
+|source.type   | Y | NA   | source of the application, currently "tubemq" and "pulsar" are supported  |
+|sink.type   | Y  | NA  | sink of the application, currently "clickhouse" and "hive" are supported  |
+|source.parallelism   | N  | 1  | parallelism of source  |
+|deserialization.parallelism   | N  |  1 | parallelism of deserialization  |
+|sink.parallelism   | N  | 1  | parallelism of sink  |
+|tubemq.master.address | N  | NA  | tube master address used if absent in DataFlowInfo on zk  |
+|tubemq.session.key |N |"inlong-sort" | session key used when subscribing to tubemq |
+|tubemq.bootstrap.from.max | N | false | whether consume from max or not when subscribing to tubemq |
+|tubemq.message.not.found.wait.period | N | 350ms | The waiting period when the tube broker returns "message not found" |
+|tubemq.subscribe.retry.timeout | N | 300000 | The timeout for subscribing to tube, in milliseconds |
+|zookeeper.client.session-timeout | N | 60000 | The session timeout for the ZooKeeper session in ms |
+|zookeeper.client.connection-timeout | N | 15000 | The connection timeout for ZooKeeper in ms |
+|zookeeper.client.retry-wait | N | 5000 | The pause between consecutive retries in ms |
+|zookeeper.client.max-retry-attempts | N | 3 | The number of connection retries before the client gives up |
+|zookeeper.client.acl | N | "open" | Defines the ACL (open/creator) to be configured on ZK node. The configuration value can be set to “creator” if the ZooKeeper server configuration has the “authProvider” property mapped to use SASLAuthenticationProvider and the cluster is configured to run in secure mode (Kerberos) |
+|zookeeper.sasl.disable | N | false | Whether disable zk sasl or not |
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/_category_.json b/versioned_docs/version-0.11.0/modules/tubemq/_category_.json
new file mode 100644
index 0000000..0ff1b29
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "TubeMQ",
+  "position": 6
+}
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/appendixfiles/http_access_api_definition_cn.xls b/versioned_docs/version-0.11.0/modules/tubemq/appendixfiles/http_access_api_definition_cn.xls
new file mode 100644
index 0000000..e834b49
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/appendixfiles/http_access_api_definition_cn.xls differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/architecture.md b/versioned_docs/version-0.11.0/modules/tubemq/architecture.md
new file mode 100644
index 0000000..d81a6ee
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/architecture.md
@@ -0,0 +1,43 @@
+---
+title: Architecture
+---
+
+## 1. TubeMQ Architecture:
+After years of evolution, the TubeMQ cluster is divided into the following 5 parts:
+![](img/sys_structure.png)
+
+- **Portal:** The Portal part responsible for external interaction and maintenance operations, including API and Web. 
+  The API connects to the management system outside the cluster. The Web is a page encapsulation of daily operation 
+  and maintenance functions based on the API;
+
+- **Master:** It is responsible for the Control part of the cluster. This part is composed of one or more Master nodes.
+  Master HA performs heartbeat keep-alive and real-time hot standby switching between master nodes (This is the reason 
+  why everyone needs to fill in the addresses of all Master nodes corresponding to the cluster when using TubeMQ Lib).
+  The main master is responsible for managing the status of the entire cluster, resource scheduling, permission 
+  checking, metadata query, etc.;
+
+- **Broker:** The Store part responsible for data storage. This part is composed of independent Broker nodes.
+  Each Broker node manages the set of Topics on that node, including adding, deleting, modifying, and querying
+  Topics. It is also responsible for message storage, consumption, aging, partition expansion, consumption offset
+  records, etc. The external capabilities of the cluster, including the number of Topics, throughput, and capacity,
+  are scaled by horizontally expanding the Broker nodes;
+
+- **Client:** The Client part responsible for data production and consumption. We provide this part in the form of a Lib.
+  The most commonly used is the consumer. Compared with previous versions, the consumer now supports both Push and Pull
+  consumption modes, and data consumption supports both ordered and filtered consumption. For the Pull consumption mode,
+  the service supports resetting a precise offset through the client to enable exactly-once consumption on the business
+  side. At the same time, the consumer has launched a new cross-cluster, switch-free Consumer client;
+
+- **ZooKeeper:** The ZooKeeper part responsible for offset storage. This function has been weakened to only the persistent storage of offsets. Considering the future multi-node replica function, this module is temporarily retained;
+
+## 2. Broker File Storage Scheme Improvement:
+Systems that use disks as the data persistence medium face various performance problems caused by the disks themselves. The TubeMQ system is no exception; its performance improvement largely comes from solving how message data is read, written, and stored. In this regard TubeMQ has made many improvements: the storage instance is the smallest unit of Topic data management; each storage instance includes a file storage block and a memory cache block; each Topic can be assigned multip [...]
+
+1. **File storage block:** The disk storage solution of TubeMQ is similar to Kafka's, but not identical, as shown in the following figure: each file storage block is composed of an index file and a data file; a partition is a logical partition within the data file; each Topic maintains and manages its file storage blocks separately, with mechanisms including the aging cycle, the number of partitions, readability and writability, etc.
+![](img/store_file.png)
+
+
+2. **Memory cache block:** We added a separate memory cache block in front of the file storage block, that is, a block of memory is added in front of the original disk write path to isolate the slowness of the hard disk. Data is first flushed to the memory cache block, and the memory cache block then flushes the data to the disk file in batches.
+![](img/store_mem.png)
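The write path described above (a memory cache block in front of the file storage block, with batched flushes) can be sketched as follows. This is an illustrative simplification, not TubeMQ's actual storage code; the class name and flush threshold are invented for the example, and the "data file" is stood in for by an in-memory list of segments:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a topic store that buffers writes in memory and
// flushes them to "disk" (here, a list of byte[] segments) in batches.
public class CachedTopicStore {
    private final int flushThresholdBytes;                     // hypothetical threshold
    private final ByteArrayOutputStream memCache = new ByteArrayOutputStream();
    private final List<byte[]> fileBlocks = new ArrayList<>(); // stands in for the data file

    public CachedTopicStore(int flushThresholdBytes) {
        this.flushThresholdBytes = flushThresholdBytes;
    }

    // Messages land in the memory cache first, isolating callers from disk latency.
    public synchronized void append(byte[] message) {
        memCache.write(message, 0, message.length);
        if (memCache.size() >= flushThresholdBytes) {
            flush();
        }
    }

    // The cached data is flushed to the file storage block as one batch.
    public synchronized void flush() {
        if (memCache.size() > 0) {
            fileBlocks.add(memCache.toByteArray());
            memCache.reset();
        }
    }

    public synchronized int flushedBatchCount() {
        return fileBlocks.size();
    }
}
```

The point of the design is that many small producer writes are absorbed by memory and reach the disk as a few large sequential writes.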
+
+
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/client_rpc.md b/versioned_docs/version-0.11.0/modules/tubemq/client_rpc.md
new file mode 100644
index 0000000..c132332
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/client_rpc.md
@@ -0,0 +1,202 @@
+---
+title: Client RPC
+---
+
+
+## 1 General Introduction
+
+Implementations of this part can be found in `org.apache.tubemq.corerpc`. Each node in an Apache TubeMQ cluster communicates over TCP with keep-alive connections. Messages are defined using a combination of binary framing and protobuf.
+![](img/client_rpc/rpc_bytes_def.png)
+
+All we can see on TCP are binary streams. We define a 4-byte msgToken `RPC_PROTOCOL_BEGIN_TOKEN` in the header, which is used to delimit messages and to verify the legitimacy of the peer. When a message received by the client does not start with this header field, the client needs to close the connection, report the error, and quit or reconnect, because the protocol is not supported by TubeMQ or something may have gone wrong. This is followed by a 4-byte serialNo; this field is sent by [...]
+
+We defined `listSize` as `<len><data>` blocks because serialized PB data is saved as ByteBuffer objects in TubeMQ, and in Java there is a maximum ByteBuffer block length (8196), so an over-length PB message needs to be saved in several ByteBuffers. No total length is counted; the ByteBuffers are written directly when serializing into the TCP message.
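Under those constraints, the framing can be sketched as below. This is an illustrative encoder, not the TubeMQ implementation: the token constant value is a placeholder (the real `RPC_PROTOCOL_BEGIN_TOKEN` value is not given here), the 8196-byte block limit is taken from the description above, and all helper names are invented:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Illustrative sketch of the wire frame:
//   <4-byte msgToken> <4-byte serialNo> <4-byte listSize> <len1><data1> ... <lenN><dataN>
public class RpcFrameEncoder {
    static final int MSG_TOKEN = 0xFF7FF4FE; // placeholder, NOT the real RPC_PROTOCOL_BEGIN_TOKEN
    static final int MAX_BLOCK = 8196;       // max bytes per <len><data> block, per the text

    public static byte[] encode(int serialNo, byte[] pbPayload) {
        int listSize = (pbPayload.length + MAX_BLOCK - 1) / MAX_BLOCK;
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeInt(out, MSG_TOKEN);
        writeInt(out, serialNo);
        writeInt(out, listSize);
        // Over-length PB payloads are split into several <len><data> blocks;
        // note that no overall total length is written.
        for (int off = 0; off < pbPayload.length; off += MAX_BLOCK) {
            int len = Math.min(MAX_BLOCK, pbPayload.length - off);
            writeInt(out, len);
            out.write(pbPayload, off, len);
        }
        return out.toByteArray();
    }

    private static void writeInt(ByteArrayOutputStream out, int v) {
        byte[] b = ByteBuffer.allocate(4).putInt(v).array(); // big-endian
        out.write(b, 0, 4);
    }
}
```

A decoder in another language would mirror this: check the token, read serialNo and listSize, then read exactly listSize length-prefixed blocks and concatenate them before PB decoding.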
+
+**Pay extra attention when implementing SDKs in other languages.** The PB data content needs to be serialized into arrays of blocks (supported by PB codecs).
+
+
+## 2 PB format code:
+
+PB format encoding is divided into three parts: the RPC framework definition, message encoding to the Master, and message encoding to the Broker. You can compile the protobuf definitions directly to get codecs for different languages, which is very convenient.
+![](img/client_rpc/rpc_proto_def.png)
+
+`RPC.proto` defines 6 structures, which are divided into 2 classes: request messages and response messages. Response messages are further divided into successful responses and exception responses.
+![](img/client_rpc/rpc_pbmsg_structure.png)
+
+The request message encoding and response message decoding can be found in the `NettyClient.java` class. There is some room for improvement in this part of the definition; see [TUBEMQ-109](https://issues.apache.org/jira/browse/TUBEMQ-109). However, due to compatibility concerns, it will be replaced gradually. We have implemented the current protobuf version, which will not be a problem at least until 1.0.0. With the new protocol, the protocol implementation module require [...]
+![](img/client_rpc/rpc_conn_detail.png)
+
+The flag marks whether the message is a request; the next three fields carry message-trace content, which is not currently used. The service type, protocol version, etc. are fixed mappings. A key parameter is RequestBody.timeout, the maximum allowed time from when a request is received by the server to when it is actually processed; requests that wait longer than this are discarded. The current default is 10 seconds. A request is filled in as follows.
+![](img/client_rpc/rpc_header_fill.png)
+
+
+## 3 Interactive diagram of the client's PB request & response:
+
+**Producer Interaction**:
+
+The Producer has four pairs of instructions in the system: registration to the Master, heartbeat to the Master, exit from the Master, and sending messages to the Broker.
+![](img/client_rpc/rpc_producer_diagram.png)
+
+Here we can see that the Producer's implementation logic is to obtain metadata, such as the partition list of a specified topic, from the Master, then select a partition according to the client-side rules and send messages over a TCP connection. Sending messages without registering to the Master may be unsafe; the initial consideration was to accept internal messages as much as possible, and later, considering security issues, we added authorization information carrying on top of this to perform a [...]
+
+**Note in producer side of multiple languages implementation:**
+
+1. Our Master runs in a hot-standby mode, and switchover is based on the information carried in the `RspExceptionBody`. You need to search for the keyword `"StandbyException"`; if this type of exception occurs, switch to another Master node and re-register. There are some related issues open to adjust this behavior.
+
+2. The Producer should re-register if the Master connection fails during production, e.g. on timeout or a passive connection break.
+
+3. The Producer side should pre-establish connections to Brokers: the back-end cluster can have hundreds of Broker nodes, and each Broker has about ten partitions, so there may be thousands of partition records. After the SDK receives the metadata from the Master, it should establish connections in advance to any Brokers it is not yet connected to.
+
+4. The Producer-to-Broker connections should include anomaly detection: the SDK should be able to detect bad Broker nodes and long periods without messages, and recycle connections to such Brokers to avoid unstable operation in long-running scenarios.
+
+**Consumer Interaction Diagram**:
+
+The Consumer has 7 pairs of commands in all: Register, Heartbeat, and Exit to the Master; Register, Logout, Heartbeat, and message pulling to the Broker. Registration and logout to the Broker use the same command, indicated by a different status code.
+
+![](img/client_rpc/rpc_consumer_diagram.png)
+
+As we can see from the picture above, the Consumer first has to register to the Master, but registering to the Master does not return metadata immediately, because TubeMQ uses a server-side load-balancing model and the client needs to wait for the server to dispatch the consumed partition information; the Consumer needs to perform register and logout operations to the Broker. A partition is exclusive during consumption, i.e., the same partition can only be consumed by one consumer in [...]
+
+## 4 Client feature:
+
+| **FEATURE** | **Java** | **C/C++** | **Go** | **Python** | **Rust** | **NOTE** |
+| --- | --- | --- | --- | --- | --- | --- |
+| TLS | ✅ | | | | | |
+| Authorization | ✅ | | | | | |
+| Anti-bypass-master production/consumption | ✅ | | | | | |
+| Distributed system with clients accessing Broker without Master's authentication authorization | ✅ | | | | | |
+| Effectively-Once | ✅ | | | | | |
+| Partition offset consumption | ✅ | | | | | |
+| Multiple Topic Consumption for a single Consumer group | ✅ | | | | | |
+| Server Consumption filter | ✅ | | | | | |
+| Auto shielding inactive Nodes| ✅ | | | | | | 
+| Auto shielding bad Brokers | ✅ | | | | | | 
+| Auto reconnect | ✅ | | | | | |
+| Auto recycling of Idle Connection | ✅ | | | | | |
+| Inactive for more than a specified period(e.g. 3min, mainly the producer side)| ✅ | | | | | | 
+| Connection reuse | ✅ | | | | | |
+| Connection sharing according to the sessionFactory | ✅ | | | | | | 
+| Connection non-reuse | ✅ | | | | | | 
+| Asynchronous Production | ✅ | | | | | |
+| Synchronous Production | ✅ | | | | | |
+| Pull Consumption | ✅ | | | | | |
+| Push Consumption | ✅ | | | | | |
+| Consumption limit (QOS) | ✅ | | | | | |
+| Limit the amount of data per unit of time consumed by consumers | ✅ | | | | | |
+| Pull Consumption frequency limit | ✅ | | | | | |
+| Consumer Pull Consumption frequency limit | ✅ | | | | | |
+
+
+## 5 Client function Induction CaseByCase:
+
+**Client side and server side RPC interaction process**:
+
+----------
+
+![](img/client_rpc/rpc_inner_structure.png)
+
+As shown above, the client has to keep the sent request message locally until the RPC times out or a response message is received; the response message is associated with its request by the SerialNo generated when the request was sent. The Broker and Topic information received from the server side is stored locally by the SDK, updated with the latest returned information, and reported periodically to the server side; the SDK maintains the heartbeat o [...]
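That request/response bookkeeping can be sketched like this. This is an illustrative model, not SDK code; the class and method names are invented, and the 10-second default comes from the RequestBody.timeout description earlier:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: pending requests are kept locally, keyed by serialNo,
// until a response with the same serialNo arrives or the RPC times out.
public class PendingRequestTable {
    static final long DEFAULT_TIMEOUT_MS = 10_000; // default from the protocol description

    static final class Pending {
        final long sentAtMs;
        Pending(long sentAtMs) { this.sentAtMs = sentAtMs; }
    }

    private final AtomicInteger serialGen = new AtomicInteger();
    private final Map<Integer, Pending> pending = new ConcurrentHashMap<>();

    // Called when a request is sent; returns the serialNo placed in the frame.
    public int register(long nowMs) {
        int serialNo = serialGen.incrementAndGet();
        pending.put(serialNo, new Pending(nowMs));
        return serialNo;
    }

    // Called when a response frame arrives; true if it matched a live request.
    public boolean complete(int serialNo) {
        return pending.remove(serialNo) != null;
    }

    // Periodic sweep: drop requests that have waited longer than the timeout.
    public int expire(long nowMs) {
        int dropped = 0;
        for (Map.Entry<Integer, Pending> e : pending.entrySet()) {
            if (nowMs - e.getValue().sentAtMs > DEFAULT_TIMEOUT_MS
                    && pending.remove(e.getKey()) != null) {
                dropped++;
            }
        }
        return dropped;
    }
}
```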
+
+### 5.1 Message: Producer register to Master:
+
+----------
+
+![](img/client_rpc/rpc_producer_register2M.png)
+
+**ClientId**: The Producer needs to construct a ClientId at startup; the current construction rule is:
+
+Java: ClientId = IPV4 + `"-"` + Thread ID + `"-"` + createTime + `"-"` + Instance ID + `"-"` + Client Version ID [+ `"-"` + SDK]. It is recommended that other language implementations add the above markers for easier troubleshooting. The ID value is valid for the lifetime of the Producer.
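A minimal sketch of that construction rule (illustrative only; the field values in the usage example are placeholders):

```java
// Illustrative sketch of the documented ClientId rule:
// IPV4 + "-" + Thread ID + "-" + createTime + "-" + Instance ID + "-" + Client Version ID
public class ClientIdBuilder {
    public static String build(String ipv4, long threadId, long createTime,
                               int instanceId, String clientVersion) {
        return ipv4 + "-" + threadId + "-" + createTime + "-" + instanceId + "-" + clientVersion;
    }
}
```

e.g. `ClientIdBuilder.build("10.0.0.1", 42, 1637030400000L, 1, "0.11.0")` yields `"10.0.0.1-42-1637030400000-1-0.11.0"`.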
+
+**TopicList**: The list of topics published by the user. The Producer provides the initial list of topics for the data to be published at initialization; the business is also allowed to add new topics later at runtime via the publish function, but reducing topics at runtime is not supported.
+
+**brokerCheckSum**: The checksum of the Broker metadata stored locally by the client. At initial startup the Producer has no local metadata, so the value is -1; the SDK needs to carry the last brokerCheckSum value on each request, and the Master compares this value to determine whether the client's metadata needs to be updated.
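One way to picture that handshake is sketched below. The real checksum algorithm is not specified here, so CRC32 over the sorted broker descriptions is an assumption of this sketch, as are the class and method names:

```java
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.TreeSet;
import java.util.zip.CRC32;

// Illustrative sketch: the client reports its last checksum (-1 on first start);
// the Master compares it against current metadata to decide whether to resend.
public class BrokerMetadataSync {
    // CRC32 over the sorted broker descriptions -- an assumed stand-in algorithm.
    public static long checksum(List<String> brokerInfos) {
        CRC32 crc = new CRC32();
        for (String info : new TreeSet<>(brokerInfos)) {
            crc.update(info.getBytes(StandardCharsets.UTF_8));
        }
        return crc.getValue();
    }

    // Master side: metadata is pushed only when the client's value is stale.
    public static boolean needsUpdate(long clientCheckSum, List<String> current) {
        return clientCheckSum == -1 || clientCheckSum != checksum(current);
    }
}
```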
+
+**hostname**: The IPV4 address value of the machine where the Producer is located.
+
+**success**: Whether the operation is successful, success is true, failure is false.
+
+**errCode**: The code of error, currently one error code represents a large class of error, the specific cause of the error needs to be specifically identified by `errMsg`.
+
+**errMsg**: The specific error message that the SDK needs to print out if something goes wrong.
+
+**authInfo**: Authentication and authorization information, filled in if authentication is enabled in the user configuration. If authentication is required, report a signature based on the username and password. At runtime, e.g. on heartbeat, if the Master forces authentication, report the username-and-password signature; otherwise, authenticate using the authorization Token provided by the Master during [...]
+
+**brokerInfos**: Broker metadata information, which is primarily a list of Broker information for the entire cluster that the Master feeds back to the Producer in this field; the format is as follows.
+
+![](img/client_rpc/rpc_broker_info.png)
+
+**authorizedInfo**: Master provides authorization information in the following format.
+
+![](img/client_rpc/rpc_master_authorizedinfo.png)
+
+**visitAuthorizedToken**: An access authorization token that prevents clients from bypassing the Master. If this data is present, the SDK should save it locally and carry it on subsequent visits to the Broker; if the field changes in subsequent heartbeats, the locally cached value needs to be updated.
+
+**authAuthorizedToken**: The authentication authorization token. If this field contains data, the SDK needs to save it and carry it in subsequent accesses to the Master and Broker; if the field changes in subsequent heartbeats, the locally cached value needs to be updated.
+
+
+### 5.2 Message: Heartbeat from Producer to Master:
+
+----------
+
+![](img/client_rpc/rpc_producer_heartbeat2M.png)
+
+**topicInfos**: The metadata corresponding to the topics published by the SDK, including partition information and the Brokers where the partitions are located. Since there is a lot of metadata and passing the object data through as-is would generate very large traffic, we improved the encoding as follows.
+
+![](img/client_rpc/rpc_convert_topicinfo.png)
+
+**requireAuth**: Indicates that the Master's previous authAuthorizedToken has expired and requires the SDK to report the username and password signature on the next request.
+
+### 5.3 Message: Producer exits from Master:
+
+----------
+
+![](img/client_rpc/rpc_producer_close2M.png)
+
+Note that if authentication is enabled, the close operation is also authenticated, to avoid external interference with the operation.
+
+### 5.4 Message: Producer to Broker:
+
+----------
+
+This part is related to the definition of RPC Message.
+
+![](img/client_rpc/rpc_producer_sendmsg2B.png)
+
+**Data** is the binary byte stream of Message.
+
+![](img/client_rpc/rpc_message_data.png)
+
+**sentAddr** is the local IPv4 address of the machine where the SDK is located converted to a 32-bit numeric ID.
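The IPv4-to-number conversion mentioned here can be illustrated as below, under the usual convention of packing one byte per octet; `ipv4ToInt` is a hypothetical helper name for illustration, not the SDK API:

```java
// Hypothetical helper: pack a dotted-quad IPv4 address into a 32-bit value,
// most significant octet first, as the sentAddr field describes.
public final class IpUtils {
    public static int ipv4ToInt(String ip) {
        String[] parts = ip.split("\\.");
        int result = 0;
        for (String part : parts) {
            // shift previous octets left by 8 bits and append the next one
            result = (result << 8) | Integer.parseInt(part);
        }
        return result;
    }
}
```

For example, "127.0.0.1" packs to 2130706433 (0x7F000001).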
+
+**msgType** is the filter type of the message; **msgTime** is the message time when the SDK sends the message. Its value comes from the value filled in by `putSystemHeader` when constructing the Message, and Message provides a corresponding API to get it.
+
+**requireAuth**: Indicates whether authentication is required for data production to the Broker; it is not currently in effect due to performance concerns. The authAuthorizedToken value in sent messages is the value provided by the Master and changes as the Master changes it.
+
+### 5.5 Partition Loadbalance:
+
+----------
+
+Apache TubeMQ currently uses a server-side load-balancing mode, in which the balancing process is managed and maintained by the server; subsequent versions will add a client-side load-balancing mode so that the two modes can coexist.
+
+**Server side load balancing**:
+
+- When the Master process starts, it launches the load-balancing thread balancerChore. balancerChore periodically checks the currently registered consumer groups and performs load balancing. The process simply distributes each consumer group's subscribed partitions evenly across its registered clients, and periodically checks whether a client currently holds more partitions than its share; if so, the extra partitions are split off to clients holding fewer subscriptions. First, the Master checks whether the current consumer gro [...]
+
+
+![](img/client_rpc/rpc_event_proto.png)
+
+**rebalanceId**: A long-type auto-increment number that indicates the round of load balancing.
+
+**opType**: The operation code; its values are defined in EventType. Only four opcodes have been implemented: `DISCONNECT`, `CONNECT`, `REPORT` and the `ONLY_` variants. Opcodes starting with `ONLY` are not fully developed.
+
+![](img/client_rpc/rpc_event_proto_optype.png)
+
+**status**: Defined in `EventStatus`, indicates the status of the event. When the Master constructs a load-balancing task, it sets the status to `TODO`. When it receives the client's heartbeat request, the Master writes the task into the response message and sets the status to `PROCESSING`. The client receives the load-balancing command from the heartbeat response and can then perform the actual connect or disconnect operation; after the operation is finished, it sets the command status to `DONE` u [...]
+
+![](img/client_rpc/rpc_event_proto_status.png)
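The lifecycle above can be sketched as a small state machine. The names mirror the protocol description; this is an illustration, not the actual TubeMQ implementation:

```java
// Simplified model of a load-balance event: the Master creates it as TODO,
// marks it PROCESSING when handing it to the client in a heartbeat response,
// and the client marks it DONE after the connect/disconnect work finishes.
enum EventStatus { TODO, PROCESSING, DONE }
enum EventType { DISCONNECT, CONNECT, REPORT }

public final class RebalanceEvent {
    private final long rebalanceId;   // auto-increment round number
    private final EventType opType;
    private EventStatus status;

    public RebalanceEvent(long rebalanceId, EventType opType) {
        this.rebalanceId = rebalanceId;
        this.opType = opType;
        this.status = EventStatus.TODO;   // set by the Master when the task is built
    }

    public void deliverToClient() {       // written into the heartbeat response
        this.status = EventStatus.PROCESSING;
    }

    public void finishOnClient() {        // reported back on the next heartbeat
        this.status = EventStatus.DONE;
    }

    public long rebalanceId() { return rebalanceId; }
    public EventType opType() { return opType; }
    public EventStatus status() { return status; }
}
```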
+
+**subscribeInfo** indicates assigned partition information, in the format suggested by the comment.
+
+
+- Consumer operation: when the consumer receives the metadata returned from the Master, it should perform the connection establishment and release operations (refer to the opType note above). When the connection is established, it returns the operation result to the Master so that the consumer can receive and perform the related follow-up jobs. Note that load balancing at registration is a best-effort operation: if a new consumer sends a connection request before the consumer that occupies the partition quits, it will [...]
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/clients_java.md b/versioned_docs/version-0.11.0/modules/tubemq/clients_java.md
new file mode 100644
index 0000000..6947bc0
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/clients_java.md
@@ -0,0 +1,251 @@
+---
+title: TubeMQ JAVA SDK API
+---
+
+
+## 1 Basic Object Interfaces:
+
+### 1.1 MessageSessionFactory (Message Session Factory):
+
+TubeMQ uses MessageSessionFactory (the message session factory) to manage network connections. According to whether clients reuse connections, it is subdivided into the TubeSingleSessionFactory (single-connection session factory) class and the TubeMultiSessionFactory (multi-connection session factory) class. As the code shows, the single-connection session factory defines a static clientFactory, so that when different clients inside one process connect to the same target server, only one underlying physical connection is established; the multi-connection session factory defines clientFactory as non-static, so that clients created from different session factories inside the same process belong to different connection sessions and establish separate physical connections. This design avoids creating too many connections. Businesses can choose the session factory class according to their needs; in general the single-connection session factory is used.
+
+ 
+### 1.2 MasterInfo:
+
+The Master address information object of TubeMQ. Its characteristic is support for configuring multiple Master addresses: since the TubeMQ Master relies on BDB storage for metadata management and HA hot-standby switching, multiple Master address entries need to be configured accordingly. Both IP and domain-name forms are supported. Because TubeMQ's HA works in hot-standby mode, the client must be able to reach every configured Master address. This information is used when initializing the TubeClientConfig and ConsumerConfig class objects. For configuration convenience, the multiple Master addresses are expressed and parsed in the format "ip1:port1,ip2:port2,ip3:port3".
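The address-list format can be parsed as sketched below. This is an illustrative helper, not the SDK's actual parser; the class name is hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: split a Master address list of the form
// "ip1:port1,ip2:port2,ip3:port3" into host -> port entries.
public final class MasterAddrParser {
    public static Map<String, Integer> parse(String masterAddrList) {
        Map<String, Integer> addrs = new LinkedHashMap<>();
        for (String item : masterAddrList.split(",")) {
            int sep = item.lastIndexOf(':');
            String host = item.substring(0, sep).trim();
            int port = Integer.parseInt(item.substring(sep + 1).trim());
            addrs.put(host, port);
        }
        return addrs;
    }
}
```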
+
+ 
+### 1.3 TubeClientConfig:
+
+The initialization class for MessageSessionFactory (the message session factory), which carries the information needed to create network connections and the client control parameters, including RPC timeout settings, socket options, connection quality detection parameters, TLS settings, and authentication/authorization settings. This class, together with the ConsumerConfig class introduced next, changed the most compared with versions before TubeMQ 3.8.0. The main reason is that the TubeMQ interface definitions had not changed for more than six years, and problems had accumulated: ambiguous interface semantics, unclear units for interface attributes, and content choices the program could not distinguish. Considering the convenience of self-checking issues after open-sourcing the code, and the learning cost for newcomers, we redefined the interfaces. For the differences before and after the redefinition, see the configuration interface definition description.
+
+ 
+
+### 1.4 ConsumerConfig:
+
+The ConsumerConfig class is a subclass of TubeClientConfig that adds the parameters carried when initializing Consumer class objects. Therefore, in a MessageSessionFactory object that hosts both Producers and Consumers, the session-factory-level settings follow the values used to initialize the MessageSessionFactory, while Consumer objects follow the initialization object passed at their creation. Consumers are further divided by consumption behavior into Pull consumers and Push consumers; their specific parameters are distinguished by the "pull" or "push" markers carried in the parameter interfaces.
+
+ 
+### 1.5 Message:
+
+The Message class is the message object passed through TubeMQ. The data set by the business is delivered unchanged from the producer to the consumer. The attribute field is shared with the TubeMQ system: the content filled in by the business will not be lost or rewritten, but TubeMQ may append system content to this field, and in later versions the appended system content may be removed without notice. Pay attention to the Message.putSystemHeader(final String msgType, final String msgTime) interface, which sets the message type and the message sending time: msgType is used for filtering on the consumer side, and msgTime is used as the time dimension when TubeMQ compiles data send/receive statistics.
+
+ 
+
+### 1.6 MessageProducer:
+
+The message producer class, which performs message production. Sending supports both synchronous and asynchronous interfaces. Currently messages are sent to the backend servers in a Round Robin manner; later versions will consider selecting backend servers according to an algorithm specified by the business. Note that the class supports publishing the full set of topics at initialization as well as temporarily adding publishes for new topics during production, but a newly added topic does not take effect immediately. Before using a newly added topic, call the isTopicCurAcceptPublish interface to check whether the topic has been published and accepted by the server; otherwise the message send may fail.
+
+ 
+
+### 1.7 MessageConsumer:
+
+This class has two subclasses, PullMessageConsumer and PushMessageConsumer, which wrap the Pull and Push semantics for the business side. TubeMQ actually interacts with the backend services in Pull mode; the wrapping exists to make the business interfaces easier to use. As you can see, the difference is that Push initializes a thread group at startup to actively pull data. Points to note:
+
+- a. For the completeSubscribe interface, the variant with parameters allows the client to consume specified partitions from specified offsets, while the parameterless variant consumes data according to the consumption mode set through the ConsumerConfig.setConsumeModel(int consumeModel) interface;
+
+- b. The subscribe interface defines the consumption target of this consumer. The filterConds parameter indicates whether filtered consumption is applied to the topic to be consumed, and which msgType message-type values to filter on if so. If filtered consumption is not needed, pass null or an empty collection for this parameter.
+
+ 
+
+------
+
+
+
+## 2 Interface Invocation Examples:
+
+### 2.1 Environment Preparation:
+
+The TubeMQ open-source package provides concrete producer and consumer code examples in org.apache.tubemq.example. Here we use a practical example to show how to fill in parameters and call the corresponding interfaces. First we set up a TubeMQ cluster with 3 Master nodes, whose addresses are test_1.domain.com, test_2.domain.com and test_3.domain.com, all on port 8080. In this cluster we create several Brokers, and on the Brokers we create 3 topics: topic_1, topic_2 and topic_3. Then we start the corresponding Brokers and wait for Consumers and Producers to be created.
+
+ 
+### 2.2 Creating a Consumer:
+
+See the org.apache.tubemq.example.MessageConsumerExample class file. The Consumer is a client object that includes network interaction coordination; it needs to be initialized and kept resident in memory for repeated use, and is not suited for one-shot pull-and-consume scenarios. As shown below, we define a MessageConsumerExample wrapper class, in which we define a MessageSessionFactory for network interaction and a PushMessageConsumer for Push consumption:
+
+#### 2.2.1 Initializing the MessageConsumerExample Class:
+
+1. First construct a ConsumerConfig object and fill in the initialization information, including the local IPv4 address, the Master cluster addresses, and the consumer group name. Here the Master address value passed in is "test_1.domain.com:8080,test_2.domain.com:8080,test_3.domain.com:8080";
+
+2. Then set the consumption mode: here we consume from the tail of the queue on first start, and continue from the last position afterwards;
+
+3. Then set the number of fetch threads used for Push consumption;
+
+4. Initialize the session factory: in this scenario we choose the single-connection session factory;
+
+5. Create the consumer from the session factory:
+
+```java
+public final class MessageConsumerExample {
+    private static final Logger logger =
+        LoggerFactory.getLogger(MessageConsumerExample.class);
+    private static final MsgRecvStats msgRecvStats = new MsgRecvStats();
+    private final String masterHostAndPort;
+    private final String localHost;
+    private final String group;
+    private PushMessageConsumer messageConsumer;
+    private MessageSessionFactory messageSessionFactory;
+    
+    public MessageConsumerExample(String localHost,
+                                  String masterHostAndPort,
+                                  String group,
+                                  int fetchCount) throws Exception {
+        this.localHost = localHost;
+        this.masterHostAndPort = masterHostAndPort;
+        this.group = group;
+        ConsumerConfig consumerConfig = 
+            new ConsumerConfig(this.localHost,this.masterHostAndPort, this.group);
+        consumerConfig.setConsumeModel(0);
+        if (fetchCount > 0) {
+            consumerConfig.setPushFetchThreadCnt(fetchCount);
+        }
+        this.messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+        this.messageConsumer = messageSessionFactory.createPushConsumer(consumerConfig);
+    }
+}
+```
+
+
+
+#### 2.2.2 Subscribing to Topics:
+
+We do not subscribe using the specified-offset consumption mode, and we have no filtering requirement, so in the code below we only specify the topics and pass null for the corresponding filter item collections; at the same time, different message callback handlers can be passed for different topics. Here we subscribe to 3 topics, topic_1, topic_2 and topic_3, calling the subscribe function once for each topic to set the corresponding parameters:
+
+```java
+public void subscribe(final Map<String, TreeSet<String>> topicStreamIdsMap)
+    throws TubeClientException {
+    for (Map.Entry<String, TreeSet<String>> entry : topicStreamIdsMap.entrySet()) {
+        this.messageConsumer.subscribe(entry.getKey(),
+                                       entry.getValue(), 
+                                       new DefaultMessageListener(entry.getKey()));
+    }
+    messageConsumer.completeSubscribe();
+}
+```
+
+
+
+#### 2.2.3 Consuming Data:
+
+At this point the subscription to the corresponding topics in the cluster is complete. Once the system starts running, data will be continuously pushed through the callback functions to the business layer for processing:
+
+```java
+public class DefaultMessageListener implements MessageListener {
+
+    private String topic;
+
+    public DefaultMessageListener(String topic) {
+        this.topic = topic;
+    }
+
+    public void receiveMessages(PeerInfo peerInfo, final List<Message> messages) throws InterruptedException 
+    {
+        if (messages != null && !messages.isEmpty()) {
+            msgRecvStats.addMsgCount(this.topic, messages.size());
+        }
+    }
+
+    public Executor getExecutor() {
+        return null;
+    }
+
+    public void stop() {
+    }
+}
+```
+
+
+
+### 2.3 Creating a Producer:
+
+In the production environment, business data is received and aggregated through a proxy layer that wraps a fair amount of exception handling, so most businesses never touch the TubeSDK Producer class directly. For the scenario where a business sets up its own cluster and uses TubeMQ itself, a usage demo is provided here; see the org.apache.tubemq.example.MessageProducerExample class file for reference. **Note**: unless the business uses the data platform's TubeMQ cluster as its MQ service, it should still follow the production access process and produce data through the proxy layer:
+
+- **i. Initializing the MessageProducerExample class:**
+
+Similar to the Consumer initialization, a wrapper class is constructed that defines a session factory and a Producer class. The producer-side session factory is initialized through the TubeClientConfig class. As introduced earlier, ConsumerConfig is a subclass of TubeClientConfig; although the parameters passed in differ, the session factory completes its initialization through the TubeClientConfig class in both cases:
+
+```java
+public final class MessageProducerExample {
+
+    private static final Logger logger = 
+        LoggerFactory.getLogger(MessageProducerExample.class);
+    private static final ConcurrentHashMap<String, AtomicLong> counterMap = 
+        new ConcurrentHashMap<String, AtomicLong>();
+    String[] arrayKey = {"aaa", "bbb", "ac", "dd", "eee", "fff", "gggg", "hhhh"};
+    private MessageProducer messageProducer;
+    private TreeSet<String> filters = new TreeSet<String>();
+    private int keyCount = 0;
+    private int sentCount = 0;
+    private MessageSessionFactory messageSessionFactory;
+
+    public MessageProducerExample(final String localHost, final String masterHostAndPort) 
+        throws Exception {
+        filters.add("aaa");
+        filters.add("bbb");
+        TubeClientConfig clientConfig = 
+            new TubeClientConfig(localHost, masterHostAndPort);
+        this.messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+        this.messageProducer = this.messageSessionFactory.createProducer();
+    }
+}
+```
+
+
+
+#### 2.3.1 Publishing Topics:
+
+```java
+public void publishTopics(List<String> topicList) throws TubeClientException {
+    this.messageProducer.publish(new TreeSet<String>(topicList));
+}
+```
+
+
+
+#### 2.3.2 Producing Data:
+
+The following shows the concrete data construction and sending logic: construct a Message object and call the sendMessage() function to send it. Synchronous and asynchronous interfaces are both available; choose according to the business requirements. Note that this example calls the message.putSystemHeader() function per message to set the message's filter attribute and send time, which the system uses for filtered consumption and for metric statistics. With that done, a message is sent out: if the returned result is success, the message has been accepted and will be processed; if it is failure, the business decides how to handle it based on the specific error code and error message. For error details see "TubeMQ Error Message Introduction.xlsx":
+
+```java
+public void sendMessageAsync(int id, long currtime,
+                             String topic, byte[] body,
+                             MessageSentCallback callback) {
+    Message message = new Message(topic, body);
+    SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMddHHmm");
+    long currTimeMillis = System.currentTimeMillis();
+    message.setAttrKeyVal("index", String.valueOf(1));
+    String keyCode = arrayKey[sentCount++ % arrayKey.length];
+    message.putSystemHeader(keyCode, sdf.format(new Date(currTimeMillis))); 
+    if (filters.contains(keyCode)) {
+        keyCount++;
+    }
+    try {
+        message.setAttrKeyVal("dataTime", String.valueOf(currTimeMillis));
+        messageProducer.sendMessage(message, callback);
+    } catch (TubeClientException e) {
+        logger.error("Send message failed!", e);
+    } catch (InterruptedException e) {
+        logger.error("Send message failed!", e);
+    }
+}
+```
+
+
+
+#### 2.3.3 Notes on the MAMessageProducerExample Producer Class:
+
+Unlike MessageProducerExample, this class is initialized with the TubeMultiSessionFactory multi-session factory class. The demo shows how to use the multi-session factory feature, which can raise system throughput through multiple physical connections (TubeMQ uses connection multiplexing to reduce physical connection usage); used appropriately, it can improve production performance. The Consumer side can also be initialized through multiple session factories, but since consumption is a long-running process that occupies few connection resources, this is not recommended for consumption scenarios.
+
+ 
+
+With this, the whole production and consumption example has been covered. You can download the corresponding code, compile and run it yourself, and see that it really is this simple 😊
+
+---
+<a href="#top">Back to top</a>
+ 
+
+ 
+
+ 
+
+ 
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/configure_introduction.md b/versioned_docs/version-0.11.0/modules/tubemq/configure_introduction.md
new file mode 100644
index 0000000..23f2ee5
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/configure_introduction.md
@@ -0,0 +1,172 @@
+---
+title: Configure Introduction
+---
+
+## 1 TubeMQ configuration item description
+
+The TubeMQ server consists of two modules, the Master and the Broker. The Master also includes a web front-end module for external page access (this part is stored in the resources directory). Since in practice the two modules are often deployed on the same machine, the contents of these three parts are packaged and delivered together for operations; the client is delivered to users separately and does not include the server-side lib packages.
+
+Master and Broker use the ini configuration file format, and the relevant configuration files are placed in the master.ini and broker.ini files in the tubemq-server-3.9.0/conf/ directory:
+![](img/configure/conf_ini_pos.png)
+
+Their configuration is defined as a set of configuration units. The Master configuration consists of four units: the mandatory [master], [zookeeper] and [bdbStore] units plus the optional [tlsSetting] unit; the Broker configuration consists of three units: the mandatory [broker] and [zookeeper] units plus the optional [tlsSetting] unit. In actual use, you can also merge the contents of the two configuration files into a single ini file.
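As a rough illustration of the unit layout described above, a merged ini file could look like the sketch below. All values are placeholders for illustration, not recommendations:

```ini
; [master] unit: main Master settings (required)
[master]
hostName=192.168.0.1
port=8715
webPort=8080

; [zookeeper] unit: offset storage cluster (required)
[zookeeper]
zkServerAddr=localhost:2181

; [bdbStore] unit: metadata replication settings (required for the Master)
[bdbStore]
bdbRepGroupName=tubemqMasterGroup
bdbNodeName=tubemqMasterGroupNode1
bdbEnvHome=/data/meta_data
bdbHelperHost=192.168.0.1:9001

; [broker] unit: main Broker settings (needed when the Broker config is merged in)
[broker]
brokerId=0
hostName=192.168.0.1
masterAddressList=192.168.0.1:8715
primaryPath=/data/tubemq
```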
+
+In addition to the back-end system configuration files, the Master also stores the web front-end page module in resources; the velocity.properties file in the root directory of resources is the Master's web front-end page configuration file.
+![](img/configure/conf_velocity_pos.png)
+
+
+## 2 Configuration item details:
+
+### 2.1 master.ini file:
+[master]
+> Master system runs the main configuration unit, required unit, the value is fixed to "[master]"
+
+| Name                          | Required                          | Type                          | Description                                                  |
+| ----------------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| hostName                      | yes      | string  | The host address of the master external service, required, must be configured on the NIC, is enabled, non-loopback and cannot be IP of 127.0.0.1 |
+| port                          | no       | int     | Master listening port, optional, default is 8715             |
+| webPort                       | no       | int     | Master web console access port, the default value is 8080    |
+| webResourcePath               | yes      | string  | Master Web Resource deploys an absolute path, which is required. If the value is set incorrectly, the web page will not display properly. |
+| confModAuthToken              | no       | string  | The authorization Token provided by the operator when the change operation (including adding, deleting, changing configuration, and changing the master and managed Broker status) is performed by the Master's Web or API. The value is optional. The default is "ASDFGHJKL". |
+| firstBalanceDelayAfterStartMs | no       | long    | Master starts to the interval of the first time to start Rebalance, optional, default 30000 milliseconds |
+| consumerBalancePeriodMs       | no       | long    | The master balances the rebalance period of the consumer group. The default is 60000 milliseconds. When the cluster size is large, increase the value. |
+| consumerHeartbeatTimeoutMs    | no       | long    | Consumer heartbeat timeout period, optional, default 30000 milliseconds, when the cluster size is large, please increase the value |
+| producerHeartbeatTimeoutMs    | no       | long    | Producer heartbeat timeout period, optional, default 30000 milliseconds, when the cluster size is large, please increase the value |
+| brokerHeartbeatTimeoutMs      | no       | long    | Broker heartbeat timeout period, optional, default 30000 milliseconds, when the cluster size is large, please increase the value |
+| rebalanceParallel      | no       | int    | Master rebalance parallelism, optional, default 4, the value range of this field is [1, 20], when the cluster size is large, please increase the value |
+| socketRecvBuffer              | no       | long    | Socket receive buffer SO_RCVBUF size, in bytes; a negative value means the default value is used |
+| socketSendBuffer              | no       | long    | Socket send buffer SO_SNDBUF size, in bytes; a negative value means the default value is used |
+| maxAutoForbiddenCnt           | no       | int     | When the broker has an IO failure, the maximum number of masters allowed to automatically go offline is the number of options. The default value is 5. It is recommended that the value does not exceed 10% of the total number of brokers in the cluster. |
+| startOffsetResetCheck         | no       | boolean | Whether to enable the check function of the client Offset reset function, optional, the default is false |
+| needBrokerVisitAuth           | no       | boolean | Whether to enable Broker access authentication, the default is false. If true, the message reported by the broker must carry the correct username and signature information. |
+| visitName                     | no       | string  | The username of the Broker access authentication. The default is an empty string. This value must exist when needBrokerVisitAuth is true. This value must be the same as the value of the visitName field in broker.ini. |
+| visitPassword                 | no       | string  | The password for the Broker access authentication. The default is an empty string. This value must exist when needBrokerVisitAuth is true. This value must be the same as the value of the visitPassword field in broker.ini. |
+| startVisitTokenCheck      | no       | boolean | Whether to enable client visitToken check, the default is false |
+| startProduceAuthenticate      | no       | boolean | Whether to enable production end user authentication, the default is false |
+| startProduceAuthorize         | no       | boolean | Whether to enable production-side production authorization authentication, the default is false |
+| startConsumeAuthenticate      | no       | boolean | Whether to enable consumer user authentication, the default is false |
+| startConsumeAuthorize         | no       | boolean | Whether to enable consumer consumption authorization authentication, the default is false |
+| maxGroupBrokerConsumeRate     | no       | int     | The maximum ratio of the number of Brokers in the cluster to the number of members in a consumer group. The default is 50; that is, in a cluster of 50 Brokers, a consumer group is allowed to start consumption with as few as one client. |
+| metaDataPath                  | no       | string  | Metadata storage path. Absolute, or relative to TubeMQ base directory (`$BASE_DIR`). Optional field, default is "var/meta_data". Should be the same as "[bdbStore].bdbEnvHome" if upgrade from version prior `0.5.0`. |
+
+[zookeeper]
+>The ZooKeeper cluster information where the TubeMQ cluster corresponding to this Master stores Offsets. Required unit; the value is fixed to "[zookeeper]".
+
+| Name                  | Required                          | Type                          | Description                                                  |
+| --------------------- |  -----------------------------|  ----------------------------- | ------------------------------------------------------------ |
+| zkServerAddr          | no       | string | Zk server address, optional configuration, defaults to "localhost:2181" |
+| zkNodeRoot            | no       | string | The root path of the node on zk, optional configuration. The default is "/tube". |
+| zkSessionTimeoutMs    | no       | long   | Zk heartbeat timeout, in milliseconds, default 30 seconds    |
+| zkConnectionTimeoutMs | no       | long   | Zk connection timeout, in milliseconds, default 30 seconds   |
+| zkSyncTimeMs          | no       | long   | Zk data synchronization time, in milliseconds, default 5 seconds |
+| zkCommitPeriodMs      | no       | long   | The interval at which the Master cache data is flushed to zk, in milliseconds, default 5 seconds. |
+
+[replication]
+>Replication configuration for metadata storage replication and multi-node hot standby between Masters. The required unit has a fixed value of "[replication]".
+
+| Name                    | Required                          | Type                          | Description                                                  |
+| ----------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| repGroupName            | no       | string | Cluster name, the primary and backup master node values must be the same. Optional field, default is "tubemqMasterGroup". |
+| repNodeName             | yes      | string | The name of the master node in the cluster. The value of each node MUST BE DIFFERENT. Required field. |
+| repNodePort             | no       | int    | Node communication port, optional field, default is 9001. |
+| repHelperHost           | no       | string | Primary node when the master cluster starts, optional field, default is "127.0.0.1:9001". |
+| metaLocalSyncPolicy     | no       | int    | Local storage mode for replication data nodes, value range [1, 2, 3], default 1: 1 means data is saved to disk, 2 means data is only saved to memory, and 3 means data is only written to the file system buffer without flush. |
+| metaReplicaSyncPolicy   | no       | int    | Synchronous save mode for replication data nodes, value range [1, 2, 3], default 1: 1 means data is saved to disk, 2 means data is only saved to memory, and 3 means data is only written to the file system buffer without flush. |
+| repReplicaAckPolicy     | no       | int    | The acknowledgement policy for replication data synchronization, value range [1, 2, 3], default 1: 1 means valid when more than 1/2 of the nodes acknowledge, 2 means all nodes must acknowledge, and 3 means no node acknowledgement is needed. |
+| repStatusCheckTimeoutMs | no       | long   | Replication status check interval, optional field, in milliseconds, defaults to 10 seconds. |
+
+[bdbStore]
+>Deprecated, config in "[replication]" instead.
+
+>Master configuration of the BDB cluster to which the master belongs. The master uses BDB for metadata storage and multi-node hot standby. The required unit has a fixed value of "[bdbStore]".
+
+| Name                    | Required                          | Type                          | Description                                                  |
+| ----------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| bdbRepGroupName         | yes      | string | BDB cluster name, the primary and backup master node values must be the same, required field |
+| bdbNodeName             | yes      | string | The name of the node of the master in the BDB cluster. The value of each BDB node must not be repeated. Required field. |
+| bdbNodePort             | no       | int    | BDB node communication port, optional field, default is 9001 |
+| bdbEnvHome              | yes      | string | BDB data storage path, required field                        |
+| bdbHelperHost           | yes      | string | Primary node when the BDB cluster starts, required field     |
+| bdbLocalSync            | no       | int    | BDB data node local storage mode, value range [1, 2, 3], default 1: 1 means data is saved to disk, 2 means data is only saved to memory, and 3 means data is only written to the file system buffer without flush |
+| bdbReplicaSync          | no       | int    | BDB data node synchronous save mode, value range [1, 2, 3], default 1: 1 means data is saved to disk, 2 means data is only saved to memory, and 3 means data is only written to the file system buffer without flush |
+| bdbReplicaAck           | no       | int    | The acknowledgement policy for BDB node data synchronization, value range [1, 2, 3], default 1: 1 means valid when more than 1/2 of the nodes acknowledge, 2 means all nodes must acknowledge, and 3 means no node acknowledgement is needed |
+| bdbStatusCheckTimeoutMs | no       | long   | BDB status check interval, optional field, in milliseconds, defaults to 10 seconds |
+
+[tlsSetting]
+>The Master uses TLS to encrypt the transport layer data. When TLS is enabled, the configuration unit provides related settings. The optional unit has a fixed value of "[tlsSetting]".
+
+| Name                  | Required                          | Type                          | Description                                                  |
+| --------------------- |  -----------------------------|  ----------------------------- | ------------------------------------------------------------ |
+| tlsEnable             | no       | boolean | Whether to enable TLS function, optional configuration, default is false |
+| tlsPort               | no       | int     | Master TLS port number, optional configuration, default is 8716 |
+| tlsKeyStorePath       | no       | string  | The absolute storage path of the TLS keyStore file + the name of the keyStore file. This field is required and cannot be empty when the TLS function is enabled. |
+| tlsKeyStorePassword   | no       | string  | The absolute storage path of the TLS keyStorePassword file + the name of the keyStorePassword file. This field is required and cannot be empty when the TLS function is enabled. |
+| tlsTwoWayAuthEnable   | no       | boolean | Whether to enable TLS mutual authentication, optional configuration, the default is false |
+| tlsTrustStorePath     | no       | string  | The absolute storage path of the TLS TrustStore file + the TrustStore file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
+| tlsTrustStorePassword | no       | string  | The absolute storage path of the TLS TrustStorePassword file + the TrustStorePassword file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
+
+### 2.2 velocity.properties file:
+
+| Name                      | Required                          | Type                          | Description                                                  |
+| ------------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| file.resource.loader.path | yes      | string | The absolute path of the master web template. This part is the absolute path plus /resources/templates of the project when the master is deployed. The configuration is consistent with the actual deployment. If the configuration fails, the master front page access fails. |
+
+### 2.3 broker.ini file:
+
+[broker]
+>The broker system runs the main configuration unit, required unit, and the value is fixed to "[broker]"
+
+| Name                  | Required                          | Type                          | Description                                                  |
+| --------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| brokerId              | yes      | int     | Server unique flag, required field, can be set to 0; when set to 0, the system will default to take the local IP to int value |
+| hostName              | yes      | string  | The host address of the broker external service, required, must be configured in the NIC, is enabled, non-loopback and cannot be IP of 127.0.0.1 |
+| port                  | no       | int     | Broker listening port, optional, default is 8123             |
+| webPort               | no       | int     | Broker's http management access port, optional, default is 8081 |
+| masterAddressList     | yes      | string  | Master address list of the cluster to which the broker belongs. Required fields. The format must be ip1:port1, ip2:port2, ip3:port3. |
+| primaryPath           | yes      | string  | Broker stores the absolute path of the message, mandatory field |
+| maxSegmentSize        | no       | int     | Broker stores the file size of the message data content, optional field, default 512M, maximum 1G |
+| maxIndexSegmentSize   | no       | int     | Broker stores the file size of the message Index content, optional field, default 18M, about 70W messages per file |
+| transferSize          | no       | int     | Broker allows the maximum message content size to be transmitted to the client each time, optional field, default is 512K |
+| consumerRegTimeoutMs  | no       | long    | Consumer heartbeat timeout, optional, in milliseconds, default 30 seconds |
+| socketRecvBuffer      | no       | long    | Socket receive buffer SO_RCVBUF size, in bytes; a negative value means not set and the default value is used |
+| socketSendBuffer      | no       | long    | Socket send buffer SO_SNDBUF size, in bytes; a negative value means not set and the default value is used |
+| tcpWriteServiceThread | no       | int     | Broker supports the number of socket worker threads for TCP production services, optional fields, and defaults to 2 times the number of CPUs of the machine. |
+| tcpReadServiceThread  | no       | int     | Broker supports the number of socket worker threads for TCP consumer services, optional fields, defaults to 2 times the number of CPUs of the machine |
+| logClearupDurationMs  | no       | long    | The aging cleanup period of the message file, in milliseconds. The default is 3 minutes for a log cleanup operation. The minimum is 1 minutes. |
+| logFlushDiskDurMs     | no       | long    | Batch check message persistence to file check cycle, in milliseconds, default is 20 seconds for a full check and brush |
+| visitTokenCheckInValidTimeMs       | no       | long | The length of the delay check for the visitToken check since the Broker is registered, in ms, the default is 120000, the value range [60000, 300000]. |
+| visitMasterAuth       | no       | boolean | Whether the authentication of the master is enabled, the default is false. If true, the user name and signature information are added to the signaling reported to the master. |
+| visitName             | no       | string  | User name of the access master. The default is an empty string. This value must exist when visitMasterAuth is true. The value must be the same as the value of the visitName field in master.ini. |
+| visitPassword         | no       | string  | The password for accessing the master. The default is an empty string. This value must exist when visitMasterAuth is true. The value must be the same as the value of the visitPassword field in master.ini. |
+| logFlushMemDurMs      | no       | long    | Period of the batch check that persists in-memory messages to file, in milliseconds; by default a full check and flush runs every 10 seconds |
+
+[zookeeper]
+>Information about the ZooKeeper cluster that stores offsets for the TubeMQ cluster this Broker belongs to. Required unit; the fixed value is "[zookeeper]".
+
+
+| Name                  | Required                          | Type                          | Description                                                  |
+| --------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| zkServerAddr          | no       | string | Zk server address, optional configuration, defaults to "localhost:2181" |
+| zkNodeRoot            | no       | string | The root path of the node on zk, optional configuration. The default is "/tube". |
+| zkSessionTimeoutMs    | no       | long   | Zk heartbeat timeout, in milliseconds, default 30 seconds    |
+| zkConnectionTimeoutMs | no       | long   | Zk connection timeout, in milliseconds, default 30 seconds   |
+| zkSyncTimeMs          | no       | long   | Zk data synchronization time, in milliseconds, default 5 seconds |
+| zkCommitPeriodMs      | no       | long   | The interval at which the broker cache data is flushed to zk, in milliseconds, default 5 seconds |
+| zkCommitFailRetries   | no       | int    | Maximum number of retries after the Broker fails to flush cached data to ZooKeeper |
+
+[tlsSetting]
+>The Broker uses TLS to encrypt transport-layer data; this unit provides the related settings when TLS is enabled. Optional unit; the fixed value is "[tlsSetting]".
+
+
+| Name                  | Required                          | Type                           | Description                                                  |
+| --------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| tlsEnable             | no       | boolean | Whether to enable TLS function, optional configuration, default is false |
+| tlsPort               | no       | int     | Broker TLS port number, optional configuration, default is 8124 |
+| tlsKeyStorePath       | no       | string  | The absolute storage path of the TLS keyStore file + the name of the keyStore file. This field is required and cannot be empty when the TLS function is enabled. |
+| tlsKeyStorePassword   | no       | string  | Password of the TLS keyStore. This field is required and cannot be empty when the TLS function is enabled. |
+| tlsTwoWayAuthEnable   | no       | boolean | Whether to enable TLS mutual authentication, optional configuration, the default is false |
+| tlsTrustStorePath     | no       | string  | The absolute storage path of the TLS TrustStore file + the TrustStore file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
+| tlsTrustStorePassword | no       | string  | Password of the TLS trustStore. This field is required and cannot be empty when both the TLS function and mutual authentication are enabled. |
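Putting the three units together, a minimal `broker.ini` might look like the sketch below; all values here are illustrative and should be adapted to your environment:

```
[broker]
brokerId=1
hostName=9.23.27.24
port=8123
webPort=8081
masterAddressList=9.23.27.24:8099,9.23.28.24:8099
primaryPath=/stage/msg_data

[zookeeper]
zkServerAddr=9.23.27.24:2181
zkNodeRoot=/tubemq

[tlsSetting]
tlsEnable=false
```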
+
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/console_introduction.md b/versioned_docs/version-0.11.0/modules/tubemq/console_introduction.md
new file mode 100644
index 0000000..415f99b
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/console_introduction.md
@@ -0,0 +1,118 @@
+---
+title: Console Introduction
+---
+
+## 1 Console Overview
+
+The TubeMQ console is a simple operations tool for managing TubeMQ clusters, covering the operational data and actions related to the TubeMQ system, such as the Masters and Brokers in a cluster and the Topic metadata deployed on the Brokers. Note that the features currently offered by the console do not cover the full feature range of TubeMQ; you can implement a management console that fits your business needs by referring to the definitions in `TubeMQ HTTP访问接口定义.xls`. The console address is http://portal:webport/config/topic_list.htm:
+![](img/console/1568169770714.png)
+Here portal is the IP address of any active or standby Master in the cluster, and webport is the configured Master web port.
+
+
+## 2 Console Pages
+
+The console has three sections: Distribution Query, Configuration Management, and Cluster Management. Configuration Management is further divided into the Broker list and the Topic list. We first introduce the simpler Distribution Query and Cluster Management pages, and then the more complex Configuration Management.
+
+### 2.1 Distribution Query
+
+Clicking Distribution Query shows the list below: the consumer groups currently registered in the TubeMQ cluster, including the group name, the Topic being consumed, and the total number of partitions the group consumes:
+![](img/console/1568169796122.png)
+Clicking a record shows the consumer members of the selected group, together with the Broker and Partition each member consumes:
+![](img/console/1568169806810.png)
+
+This page supports queries: enter a Topic or consumer group name to quickly find out which consumer groups are consuming a Topic and what each group's consumption targets are.
+
+### 2.2 Cluster Management
+
+Cluster Management mainly manages Master HA. On this page we can see each Master node and its state, and use the "switch" operation to change a node between active and standby.
+![](img/console/1568169823675.png)
+
+### 2.3 Configuration Management
+
+The Configuration Management section covers both the management of Broker and Topic metadata and the publishing (online) and offline operations of Brokers and Topics. For example, the Broker list shows the Broker metadata configured in the current cluster, including records in draft state that are not yet online, records already online, and records already offline:
+![](img/console/1568169839931.png)
+
+As the page shows, besides the Broker records themselves there is also each Broker's management information within the cluster: whether it is online, whether a command is being processed, whether it is readable and writable, whether its configuration has been changed, and whether the changed configuration has been loaded.
+
+Clicking Add (single) pops up the dialog below, containing the metadata of the Broker to be added: BrokerID, BrokerIP, BrokerPort, and the default configuration of the Topics deployed on that Broker. See `TubeMQ HTTP访问接口定义.xls` for field details.
+![](img/console/1568169851085.png)
+
+Every change operation on the TubeMQ console requires an operation authorization code, which is defined by the confModAuthToken field in the Master configuration file master.ini. If you know the code of the cluster you may perform the operation: being an administrator, an authorized user, or someone able to log in to the Master machine and obtain the code are all treated as authorized.
+
+## 3 Console Operations and Notes
+
+As mentioned above, the TubeMQ console operates the TubeMQ cluster, and the suite manages TubeMQ cluster nodes such as the Master and Brokers, including automatic deployment and installation, so note the following:
+
+1. **When scaling the TubeMQ cluster by adding or removing Broker nodes, first perform the corresponding add/online or offline/delete operations on the console, and only then add or remove the Broker nodes in the physical environment:**
+
+TubeMQ manages Brokers through a state machine involving the states [draft, online, read-only, write-only, offline] shown above. A newly added record that has not yet taken effect is in draft state and becomes online once published; to delete a node, first change it from online to offline, then use the delete operation to clean up the node record kept in the system. draft, online and offline distinguish the stage each node is in, and the Master only distributes Brokers in online state to producers and consumers; read-only and write-only are sub-states of online, meaning data on the Broker can only be read or only be written. See the page for the related states and operations; adding a single record makes the relationships clear. After adding these records on the console, we can deploy and start the Broker node, at which point the cluster environment page shows the node's running status. If the status is unregister, as shown below, node registration failed and the logs on the Broker node must be checked to confirm the cause. This part is mature by now, and the error messages are complete [...]
+![](img/console/1568169863402.png)
+2. **Topic metadata must be added and deleted through the suite's business console:**
+
+As shown below, if a business finds that the Topic it consumes does not exist on the TubeMQ console, it needs to operate directly on the console:
+![](img/console/1568169879529.png)
+
+We add the Topic through the Topic list shown above, which pops up the following dialog:
+![](img/console/1568169889594.png)
+
+After clicking confirm, a list of Brokers on which to deploy the new Topic appears; choose the deployment scope and confirm:
+![](img/console/1568169900634.png)
+
+After adding the Topic, we still need to reload the changed configuration on the Brokers, as shown below:
+![](img/console/1568169908522.png)
+
+The Topic becomes usable only after the reload completes; we can see that the configuration-changed columns have changed state once the reload finishes:
+![](img/console/1568169916091.png)
+
+At this point we can produce to and consume from this Topic.
+
+## 4 Notes on Changing Topic Metadata
+
+### 4.1 How to Configure Topic Parameters
+
+Clicking any Topic in the Topic list pops up the dialog below with the Topic's metadata, which determines how many partitions the Topic has on the Broker, its current read/write state, data flush frequency, data retention policy and period, and so on:
+![](img/console/1568169925657.png)
+
+These values are defined by the system administrator with defaults and usually do not change. If the business has special needs, such as adding partitions to increase consumption parallelism, or reducing the flush frequency, how do we proceed? The fields on each page are described in the table below:
+
+| Field               | Name                                  | Type     | Description                                                  |
+| ------------------- | ------------------------------------- | -------- | ------------------------------------------------------------ |
+| topicName           | topic name                            | String   | string of length (0,64] starting with a letter and containing letters, digits and underscores; for batch creation, separate topic values with ",", at most 50 per batch |
+| brokerId            | broker ID                             | int      | BrokerId to add; for batch operations, separate brokerId values with ",", at most 50 per batch |
+| deleteWhen          | topic data deletion time              | String   | crontab-style expression, e.g. "0 0 6,18 * * ?"; defaults to the Broker's corresponding field |
+| deletePolicy        | deletion policy                       | String   | topic data deletion policy such as "delete,168"; defaults to the Broker's corresponding field |
+| numPartitions       | number of partitions of the topic on this broker | int | defaults to the Broker's corresponding field |
+| unflushThreshold    | max records pending flush             | int      | max number of unflushed messages; exceeding it forces a flush to disk; default 1000; defaults to the Broker's corresponding field |
+| unflushInterval     | max interval pending flush            | int      | max allowed unflushed interval, in ms, default 10000; defaults to the Broker's corresponding field |
+| numTopicStores      | number of topic data blocks and partition management groups | int | default 1; if greater than 1, partitions and topic queues are multiplied accordingly |
+| memCacheMsgCntInK   | default max cached message count      | int      | max number of message packets allowed in the memory cache, in thousands; default 10K, minimum 1K |
+| memCacheMsgSizeInMB | default max cached message size       | int      | max total size of message packets allowed in the memory cache, in MB; default 3M, minimum 2M |
+| memCacheFlushIntvl  | max memory-cache flush interval       | int      | max allowed unflushed interval in memory, in ms; default 20000 ms, minimum 4000 ms |
+| acceptPublish       | whether the topic accepts publish requests | boolean | default true; value range [true,false] |
+| acceptSubscribe     | whether the topic accepts subscribe requests | boolean | default true; value range [true,false] |
+| createUser          | topic creator                         | String   | string of length (0,32] starting with a letter and containing letters, digits and underscores |
+| createDate          | creation time                         | String   | format "yyyyMMddHHmmss", a 14-digit numeric string in that format |
+| confModAuthToken    | configuration-change authorization key | String  | string starting with a letter and containing letters, digits and underscores, length (0,128] |
+
+See `TubeMQ HTTP访问接口定义.xls` for the detailed definitions of these fields. Use the **Modify** button at the top right of the page; after confirming the change, the following dialog pops up:
+![](img/console/1568169946683.png)
+
+Its purpose is to: a. select the set of Broker nodes involved in this Topic metadata change; b. provide the authorization code for the change.
+
+**Note: after entering the authorization code and making the change, the data change takes effect only after a reload, and the affected Brokers must be reloaded in batches.**
+![](img/console/1568169954746.png)
+
+### 4.2 Notes on Topic Changes
+
+As shown above, after a Topic metadata change the previously selected Brokers show yes in the **configuration changed** column. We still need to perform a reload for the change: select the Brokers and run the reload operation, either in batch or one by one. Be sure to operate in batches, and only proceed to the next batch of reloads after the Brokers in the previous batch are back in the running state; if a node stays in the online state but does not enter the running state for a long time (by default at most 2 minutes), stop the reload and investigate the cause before continuing.
+
+The reason for batching is that during a change the system stops reads and writes on the specified Brokers. If all Brokers were reloaded at once, the whole cluster would clearly become unreadable or unwritable, and clients would see errors that should not occur.
+
+### 4.3 Topic Deletion
+
+Deletion on the page is a soft delete; to remove a topic completely, a hard delete must be performed through the API (this avoids accidental deletion by the business).
+
+After completing the above, the Topic metadata change is done.
+
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/consumer_example.md b/versioned_docs/version-0.11.0/modules/tubemq/consumer_example.md
new file mode 100644
index 0000000..510c69a
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/consumer_example.md
@@ -0,0 +1,77 @@
+---
+title: Consumer Example
+---
+
+## 1 Consumer Example
+  TubeMQ provides two ways to consume messages, PullConsumer and PushConsumer:
+
+### 1.1 PullConsumer 
+    ```java
+    public class PullConsumerExample {
+
+        public static void main(String[] args) throws Throwable {
+            final String masterHostAndPort = "localhost:8000";
+            final String topic = "test";
+            final String group = "test-group";
+            final ConsumerConfig consumerConfig = new ConsumerConfig(masterHostAndPort, group);
+            consumerConfig.setConsumePosition(ConsumePosition.CONSUMER_FROM_LATEST_OFFSET);
+            final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+            final PullMessageConsumer messagePullConsumer = messageSessionFactory.createPullConsumer(consumerConfig);
+            messagePullConsumer.subscribe(topic, null);
+            messagePullConsumer.completeSubscribe();
+            // wait for client to join the exact consumer queue that consumer group allocated
+            while (!messagePullConsumer.isPartitionsReady(1000)) {
+                ThreadUtils.sleep(1000);
+            }
+            while (true) {
+                ConsumerResult result = messagePullConsumer.getMessage();
+                if (result.isSuccess()) {
+                    List<Message> messageList = result.getMessageList();
+                    for (Message message : messageList) {
+                        System.out.println("received message : " + message);
+                    }
+                    messagePullConsumer.confirmConsume(result.getConfirmContext(), true);
+                }
+            }
+        }   
+
+    }
+    ``` 
+   
+### 1.2 PushConsumer
+    ```java
+    public class PushConsumerExample {
+   
+        public static void test(String[] args) throws Throwable {
+            final String masterHostAndPort = "localhost:8000";
+            final String topic = "test";
+            final String group = "test-group";
+            final ConsumerConfig consumerConfig = new ConsumerConfig(masterHostAndPort, group);
+            consumerConfig.setConsumePosition(ConsumePosition.CONSUMER_FROM_LATEST_OFFSET);
+            final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+            final PushMessageConsumer pushConsumer = messageSessionFactory.createPushConsumer(consumerConfig);
+            pushConsumer.subscribe(topic, null, new MessageListener() {
+
+                @Override
+                public void receiveMessages(PeerInfo peerInfo, List<Message> messages) throws InterruptedException {
+                    for (Message message : messages) {
+                        System.out.println("received message : " + new String(message.getData()));
+                    }
+                }
+
+                @Override
+                public Executor getExecutor() {
+                    return null;
+                }
+
+                @Override
+                public void stop() {
+                    //
+                }
+            });
+            pushConsumer.completeSubscribe();
+            CountDownLatch latch = new CountDownLatch(1);
+            latch.await(10, TimeUnit.MINUTES);
+        }
+    }
+    ```
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/deployment.md b/versioned_docs/version-0.11.0/modules/tubemq/deployment.md
new file mode 100644
index 0000000..a4017b7
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/deployment.md
@@ -0,0 +1,156 @@
+---
+title: Deployment
+---
+
+## 1 Compile and Package Project:
+
+Enter the root directory of project and run:
+
+```
+mvn clean package -Dmaven.test.skip
+```
+
+For example, we put the TubeMQ project at `E:/` and run the above command. Compilation is complete when all modules are built successfully.
+
+![](img/sysdeployment/sys_compile.png)
+
+We can also compile each module individually in its subdirectory; the steps are the same as for the whole project.
+
+## 2 Server Deployment
+
+As in the example above, enter the directory `..\InLong\inlong-tubemq\tubemq-server\target`, where we can see several JARs. `apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz` is the complete server-side installation package, including execution scripts, configuration files, dependencies, and frontend source code. `apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT.jar` is the server-side processing package included in the `lib` directory of the complete installer. Considering daily changes and [...]
+
+
+![](img/sysdeployment/sys_package.png)
+
+Here we deploy the complete package onto a server and place it in `/data/inlong`:
+
+![](img/sysdeployment/sys_package_list.png)
+
+
+## 3 Configuration System
+
+The server package contains 3 roles: Master, Broker and Tools. Master and Broker can be deployed on the same machine or on different machines, depending on the business layout. In the example below, we use 3 machines to start a complete production and consumption cluster with 2 Masters.
+
+| Machine | Role | TCP Port | TLS Port | WEB Port | Note |
+| --- | --- | --- | --- | --- | --- |
+| 9.23.27.24 | **Master** | 8099 | 8199 | 8080 | Metadata stored at `/stage/meta_data` |
+| | Broker | 8123 | 8124 | 8081 | Messages stored at `/stage/msg_data` |
+| | ZooKeeper | 2181 | | | Offsets stored under root directory `/tubemq` |
+| 9.23.28.24 | **Master** | 8099 | 8199 | 8080 | Metadata stored at `/stage/meta_data` |
+| | Broker | 8123 | 8124 | 8081 | Messages stored at `/stage/msg_data` |
+| 9.23.27.160 | Producer | | | | |
+| | Consumer | | | | |
+
+Something should be noticed during deploying Master:
+
+1. The Master cluster can be deployed on 1, 2 or 3 machines. If HA is required, 3 machines are recommended, so that reading and writing configuration and registering new producers/consumers remain available when one of them is down. In the common 2-machine setup, when one machine is down the configuration remains readable and already-registered producers/consumers keep working properly. The minimum is 1 machine, which keeps already-registered producers/consumers working properly when it is down.
+2. Machines with the Master role must have their clocks synchronized, and the IP address of every Master machine must be configured in `/etc/hosts` on each Master machine.
+
+![](img/sysdeployment/sys_address_host.png)
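For the example cluster above, the corresponding `/etc/hosts` entries on each Master machine might look like this (the host names are illustrative):

```
9.23.27.24   tubemq-master-1
9.23.28.24   tubemq-master-2
```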
+
+Take `9.23.27.24` and `9.23.28.24` as examples: to deploy both the Master and Broker roles on them, we need to configure `/conf/master.ini`, `/resources/velocity.properties` and `/conf/broker.ini`. First we configure `9.23.27.24`,
+
+![](img/sysdeployment/sys_configure_1.png)
+
+then it is `9.23.28.24`.
+
+![](img/sysdeployment/sys_configure_2.png)
+
+Note that the upper right corner shows the Master's web frontend configuration; the `file.resource.loader.path` setting in `/resources/velocity.properties` must be modified according to the Master's installation path.
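For instance, if the Master were installed under `/data/inlong/tubemq-server` (an illustrative path), the setting might read:

```
file.resource.loader.path=/data/inlong/tubemq-server/resources/templates
```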
+
+## 4 Start up Master:
+
+After configuration, enter the `bin` directory of the Master environment and start the Master.
+
+![](img/sysdeployment/sys_master_start.png)
+
+We first start `9.23.27.24`, then start the Master on `9.23.28.24`. The following messages indicate that the active and standby Masters have started successfully and their service ports are reachable.
+
+![](img/sysdeployment/sys_master_startted.png)
+
+Visit the Master's administrator panel ([http://9.23.27.24:8080](http://9.23.27.24:8080)); if search operations work, the Master has started successfully.
+
+![](img/sysdeployment/sys_master_console.png)
+
+## 5 Start up Broker:
+
+Starting a Broker is a little different from starting a Master: the Master manages the entire TubeMQ cluster, including the Broker nodes and the Topic configuration on them, as well as production and consumption management, so we need to add the Broker's metadata on the Master before starting the Broker.
+
+![](img/sysdeployment/sys_broker_configure.png)
+
+Confirm and create a draft record of Broker.
+
+![](img/sysdeployment/sys_broker_online.png)
+
+We try to start up the Broker.
+
+![](img/sysdeployment/sys_broker_start.png)
+
+But we got an error message.
+
+![](img/sysdeployment/sys_broker_start_error.png)
+
+Because the Broker record is still in draft status, it is not available yet. Let's go back to the Master administrator panel and publish it.
+
+![](img/sysdeployment/sys_broker_online_2.png)
+
+Every change operation submitted to the Master requires an authorization code, which is defined by `confModAuthToken` in `master.ini`. If you have the code of this cluster, you are considered an administrator with permission to perform the change.
+
+![](img/sysdeployment/sys_broker_deploy.png)
+
+
+Then we restart the Broker.
+
+![](img/sysdeployment/sys_broker_restart_1.png)
+
+![](img/sysdeployment/sys_broker_restart_2.png)
+
+Checking the Master control panel, the Broker has registered successfully.
+
+![](img/sysdeployment/sys_broker_finished.png)
+
+
+## 6 Topic Configuration and Activation:
+
+The configuration of a Topic is similar to a Broker's: its metadata must be added on the Master before use, otherwise a Not Found error is reported during production or consumption. For example, if we try to consume a non-existent topic `test`,
+![](img/sysdeployment/test_sendmessage.png)
+
+the demo returns an error message.
+![](img/sysdeployment/sys_topic_error.png)
+
+First we add a topic on the Topic List page of the Master control panel.
+
+![](img/sysdeployment/sys_topic_create.png)
+
+![](img/sysdeployment/sys_topic_select.png)
+
+After submitting the topic details, choose the publish scope and confirm. After adding a new topic, we need to reload it on the Brokers.
+
+![](img/sysdeployment/sys_topic_deploy.png)
+
+The topic is available after the reload; we can see that some of its status fields have changed once the reload completes.
+
+![](img/sysdeployment/sys_topic_finished.png)
+
+
+**Note**: reload operations should be executed in batches. Reloads are controlled by a state machine: before being published, a Broker goes through unreadable-and-unwritable, read-only, then readable-and-writable in order. Waiting for reloads on all Brokers at once makes the topic temporarily unreadable and unwritable, which causes production and consumption failures, especially production failures.
+
+## 7 Message Production and Consumption:
+
+The package ships with a demo for testing, or `tubemq-client-0.9.0-incubating-SNAPSHOT.jar` can be used to implement your own production and consumption logic.
+We run the producer demo with the script below and can see the data accepted on the Broker.
+![](img/sysdeployment/test_sendmessage_2.png)
+
+![](img/sysdeployment/sys_node_status.png)
+
+Then we run the consumer demo and can see that consumption also works properly.
+![](img/sysdeployment/sys_node_status_2.png)
+
+As we can see, the files related to the Broker's production and consumption already exist.
+
+![](img/sysdeployment/sys_node_log.png)
+
+At this point, the compilation, deployment, system configuration, startup, production and consumption of TubeMQ are complete!
+To go further, please refer to "TubeMQ HTTP API" and adjust the configuration settings as needed.
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/error_code.md b/versioned_docs/version-0.11.0/modules/tubemq/error_code.md
new file mode 100644
index 0000000..123caad
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/error_code.md
@@ -0,0 +1,115 @@
+---
+title: Error Code
+---
+
+## 1 Introduction of TubeMQ Error
+
+TubeMQ uses `errCode` and `errMsg` together to return the result of an operation.
+First determine the type of result (problem) from the errCode, then determine the specific reason behind the errCode from the errMsg.
+The following tables summarize all the errCodes and errMsgs that may be returned during operation.
+
+## 2 errCodes
+
+| Error Type | errCode | Error Mark | Meaning | Note |
+| ---------- | ------- | ---------- | ------- | ---- |
+| Operation Success | 200 | Operation Success| Success. ||
+| Operation Success| 201| NOT_READY | The request is accepted, but the server is not ready or the service is not running.| Unused now; reserved. ||
+| Temporary Conflict Resolved | 301 | MOVED| Temporary switching of data results in an unsuccessful operation and a request for a new operation needs to be initiated. ||
+| Client Error | 400 | BAD_REQUEST| Client error, including parameter error, status error, etc. | Refer to errMsg for details to locate the error. |
+| Client Error | 401| UNAUTHORIZED| Unauthorized operation, make sure that the client has permission to perform the operation. | Need to check configuration. ||
+| Client Error | 403| FORBIDDEN | Topic not found or already deleted. |||
+| Client Error | 404| NOT_FOUND | The consumer has reached the max offset of the topic. |||
+| Client Error | 405| ALL_PARTITION_FROZEN | All available partitions are frozen. | The available partition has been frozen by the client, and it needs to be unfrozen or wait a while and try again. ||
+| Client Error | 406| NO_PARTITION_ASSIGNED | The current client is not allocated a partition for consumption. | The number of clients exceeds the number of partitions, or the server has not performed load balancing operations, so you need to wait and try again. ||
+| Client Error | 407| ALL_PARTITION_WAITING | The current available partitions have reached the maximum consumption position. | Need to wait and try again. ||
+| Client Error | 408| ALL_PARTITION_INUSE | Currently available partitions are all used by business but not released. | Need to wait for the business logic to call the confirm API to release the partition, wait and try again. ||
+| Client Error | 410| PARTITION_OCCUPIED| Partition consumption conflicts. Ignore it. | Temporary status of internal registration. ||
+| Client Error | 411| HB_NO_NODE| Node timeout, need to reduce the frequency of the operation and wait a while before retrying. | It usually occurs when the heartbeat sent from client to the server is timeout, try to reduce the operation frequency and wait for a while for the lib to register successfully before retrying the process. ||
+| Client Error | 412| DUPLICATE_PARTITION | Partition consumption conflicts. Ignore it. | Usually caused by node timeout, retry it. ||
+| Client Error | 415| CERTIFICATE_FAILURE | Authorization fails, including user authentication and operational authorization. | Usually occurs when the user name and password are inconsistent, the operation is not authorized. ||
+| Client Error | 419| SERVER_RECEIVE_OVERFLOW | Server receives overflow and need to retry. | For long-term overflow, try to expand the storage instance or expand the memory cache size. ||
+| Client Error | 450| CONSUME_GROUP_FORBIDDEN | Consumer group is forbidden. |||
+| Client Error | 452| SERVER_CONSUME_SPEED_LIMIT| Consumption speed is limited. |||
+| Client Error | 455| CONSUME_CONTENT_FORBIDDEN | Consumption is rejected; for example, the consumer group is forbidden from filtered consumption, or the filter `streamId` set does not match the allowed `streamId` set. | Check the message filter settings.  ||
+| Server Error | 500 | INTERNAL_SERVER_ERROR| Internal server error | Refer to errMsg for details to locate the error. |
+| Server Error| 503| SERVICE_UNAVILABLE| Temporary ban on reading or writing for business. | Retry it. ||
+| Server Error| 510| INTERNAL_SERVER_ERROR_MSGSET_NULL | Can not read Message Set. | Retry it. ||
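As a rough illustration of how a client might branch on these code ranges (a hypothetical helper, not part of the TubeMQ client API):

```java
public class ErrCodeCategory {

    // Rough classification of TubeMQ errCodes by numeric range,
    // following the table above: 2xx success, 3xx moved, 4xx client, 5xx server.
    public static String categorize(int errCode) {
        if (errCode >= 200 && errCode < 300) {
            return "success";
        } else if (errCode >= 300 && errCode < 400) {
            return "moved";        // temporary switch, re-issue the request
        } else if (errCode >= 400 && errCode < 500) {
            return "client error"; // check parameters, permissions, retry policy
        } else {
            return "server error"; // usually worth a retry
        }
    }

    public static void main(String[] args) {
        System.out.println(categorize(404)); // prints "client error"
    }
}
```

Client errors usually require fixing parameters or waiting and retrying, while server errors are generally retryable.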
+
+## 3 Common errMsgs
+
+| Record ID | errMsg | Meaning | Note |
+| --------- | ------ | ------- | ---- |
+| 1      | Status error: producer has been shutdown! | Producer has been shutdown. ||
+| 2      | Illegal parameter: blank topic! | parameter error: blank topic ||
+| 3      | Illegal parameter: topicSet is null or empty! | parameter error: empty topic ||
+| 4      | Illegal parameter: found blank topic value in topicSet: xxxxx | parameter error: The topic set contains an empty topic. ||
+| 5      | Send message failed | Send message failed. ||
+| 6      | Illegal parameter: null message package! | Empty message package. ||
+| 7      | Illegal parameter: null data in message package! | Empty message content. ||
+| 8      | Illegal parameter: over max message length for the total size of message data and attribute, allowed size is XX, message's real size is YY | Message length over specified maximum length. ||
+| 9      | Topic XX not publish, please publish first! | Topic is not published yet. ||
+| 10     | Topic XX not publish, make sure the topic exist or acceptPublish and try later! | Topic is not published yet or not exist. ||
+| 11     | Null partition for topic: XX, please try later! | Topic has not been assigned to a partition. ||
+| 12     | No available partition for topic: XX | No available partition. ||
+| 13     | Current delayed messages over max allowed count, allowed is xxxxx, current count is yyyy | The number of unanswered messages currently stranded exceeds the allowed value. | Send again later. The maximum amount can be changed by `TubeClientConfig.setSessionMaxAllowedDelayedMsgCount()`, 400000 as default. |
+| 14     | The brokers of topic are all forbidden! | Brokers of the topic are blocked due to network problem. | Retry later when the blocking strategy is lifted. |
+| 15     | Not found available partition for topic: XX | Can not find available partition. | Partition exists but blocked due to network problem. |
+| 16     | Channel is not writable, please try later! | Channel is not writable now. | Modify buffer size by `TubeClientConfig.setNettyWriteBufferHighWaterMark()`, 10M as default. |
+| 17     | Put message failed from xxxxx, server receive message overflow! | Server is overloaded when storing messages | Retry sending. If error persists, try to expand the storage size. |
+| 18     | Write StoreService temporary unavailable! | Writing to the corresponding server is temporarily unavailable. | Retry sending the message. If the error persists, adjust the partition distribution on the Broker and deal with the abnormal Brokers. |
+| 19     | Topic xxx not existed, please check your configure | Topic does not exist. | It is possible that the topic was deleted by the administrator during production. Contact the administrator to deal with it. |
+| 20     | Partition[xxx:yyy] has been closed | Topic has been deleted. | It is possible that the topic was deleted by the administrator during the production. Contact the administrator to deal with it. |
+| 21     | Partition xxx-yyy not existed, please check your configure | Topic does not exist. | Partitions will only be increased, contact the administrator to deal with the situation. |
+| 22     | Checksum failure xxx of yyyy not equal to the data's checksum | Inconsistent checksum. | The checksum of the content is incorrectly calculated, or is tampered with during transmission. |
+| 23     | Put message failed from xxxxx | Message storage failure. | Retry. Also send the error message to the administrator to confirm the cause of the problem. |
+| 24     | Put message failed from | Message storage failure. | Retry. Also send the error message to the administrator to confirm the cause of the problem. |
+| 25     | Null brokers to select sent, please try later! | No broker is available for sending messages now. | Retry later. If the error persists, there may be an exception on the Broker or too many incomplete messages; check the Broker status. |
+| 26     | Publish topic failure, make sure the topic xxx exist or acceptPublish and try later! | publish topic failed, make sure that the topic exists and is writable | This error is reported when `void publish(final String topic)` interface is called and the topic is not local or does not exist. Wait about 1 minute or use `Set<String> publish(Set<String> topicSet)` interface to finish publishing the topic. |
+| 27     | Register producer failure, response is null! | Fail to register producer. | Contact administrator to deal with it. |
+| 28     | Register producer failure, error is XXX | Fail to register producer for some reason. | Check the problem against the cause of the error, and if it is still wrong, contact the administrator. |
+| 29     | Register producer exception, error is XXX | Fail to register producer for some reason. | Check the problem against the cause of the error, and if it is still wrong, contact the administrator. |
+| 30     | Status error: please call start function first! | Call `start()` firstly. | Producer is not created from sessionFactory, call `createProducer()` in sessionfactory first. |
+| 31     | Status error: producer service has been shutdown!| Producer has been shut down. | The producer has been shut down; stop calling any of its functions. |
+| 32     | Listener is null for topic XXX | Callback Listener passed against a topic is null. | Input parameters are not valid, need to check code. |
+| 33     | Please complete topic's Subscribe call first! | Call `subscribe()` of the topic first. | Complete the topic subscription before consuming. |
+| 34     | ConfirmContext is null ! | Empty ConfirmContext content, illegal contexts. | Check the call of function in code. |
+| 35     | ConfirmContext format error: value must be aaaa:bbbb:cccc:ddddd ! | ConfirmContext format incorrect. | Check the call of function in code. |
+| 36     | ConfirmContext's format error: item (XXX) is null ! | ConfirmContext contains blank content. | Check the call of function in code. |
+| 37     | The confirmContext's value invalid! | Invalid ConfirmContext content. | It is possible that the context does not exist, or has expired because the load balancing corresponding partition has been released. |
+| 38     | Confirm XXX 's offset failed! | Fail to confirm offset. | Confirm the cause of the problem based on the log details, and if the problem persists, contact administrator to resolve it. |
+| 39     | Not found the partition by confirmContext:XXX | Can not find the corresponding partition. | The corresponding load balancing partition on the server is released. |
+| 40     | Illegal parameter: messageSessionFactory or consumerConfig is null! | messageSessionFactory or consumerConfig is null | Check the object initialization logic and the configuration. |
+| 41     | Get consumer id failed! | Fail to generate uuid for consumer. | Contact the system administrator to check the exception stack information if the error persists. |
+| 42     | Parameter error: topic is Blank! | The topic input is blank. | Blank includes arguments that are null, non-null arguments whose content length is 0, or content consisting only of whitespace characters. |
+| 43     | Parameter error: Over max allowed filter count, allowed count is XXX | The number of filter items exceeds the maximum allowed by the system. | Parameter error; modify the amount. |
+| 44     | Parameter error: blank filter value in parameter filterConds! | filterConds contains blank content. | Parameter error; modify the parameter. |
+| 45     | Parameter error: over max allowed filter length, allowed length is XXX | Exceeded filter length. ||
+| 46     | Parameter error: null messageListener | MessageListener input is null. ||
+| 47     | Topic=XXX has been subscribed| Duplicate subscription of the topic. ||
+| 48     | Not subscribe any topic, please subscribe first! | No topic subscribed. | Check business code for inappropriate call of function. |
+| 49     | Duplicated completeSubscribe call! | Duplicate call of `completeSubscribe()`. | Check business code for inappropriate call of function. |
+| 50     | Subscribe has finished! | Duplicate call of `completeSubscribe()`. ||
+| 51     | Parameter error: sessionKey is Blank! | Parameter error: sessionKey is not allowed to be blank.||
+| 52     | Parameter error: sourceCount must over zero! | Parameter error: sourceCount must over zero! ||
+| 53     | Parameter error: partOffsetMap's key XXX format error: value must be aaaa:bbbb:cccc ! | Parameter error: The key content of the partOffsetMap must be in "aaaa:bbbb:cccc" format. ||
+| 54     | Parameter error: not included in subscribed topic list: partOffsetMap's key is XXX , subscribed topics are YYY | Parameter error: The specified topic does not exist in the subscription list. ||
+| 55     | Parameter error: illegal format error of XXX: value must not include ',' char!" | Parameter error: cannot contain the "," character. ||
+| 56     | Parameter error: Offset must over or equal zero of partOffsetMap key XXX, value is YYY | Parameter error: Offset must over or equal zero. ||
+| 57     | Duplicated completeSubscribe call! | Duplicate call of `completeSubscribe()`. ||
+| 58     | Register to master failed! ConsumeGroup forbidden, XXX | Fail to register to master. Consumer group is forbidden | Server prohibits this operation, contact administrator to deal with it. |
+| 59     | Register to master failed! Restricted consume content, XXX | Fail to register to master, and consumption is limited. | Filter consumption of `streamId` sets that are not within the scope of the requested set. |
+| 60     | Register to master failed! please check and retry later. | Fail to register to master, retry it. | In this case, check the client log to confirm the cause of the problem, and then contact the administrator to verify that there is no abnormal log and the master address is correct. |
+| 61     | Get message error, reason is XXX | Pull message fail by some reason. | Submit the relevant error message to the relevant business owner for action, aligning the cause with the specific error message. |
+| 62     | Get message null | Message pulled from topic is null. | Retry it. |
+| 63     | Get message failed, topic=XXX,partition=YYY, throw info is ZZZ | Failed to pull message. | Submit the relevant error message to the relevant business owner for action, aligning the cause with the specific error message. |
+| 64     | Status error: consumer has been shutdown | The consumer has called shutdown and should not continue to call other functions. ||
+| 65     | All partition in waiting, retry later! | All partitions are in waiting status; retry later. | This error message can be ignored; the pulling thread will sleep 200-400ms in this case. |
+| 66     | The request offset reached maxOffset | The request partition has reached the max offset | Modify the period of waiting for new message in a partition by `ConsumerConfig.setMsgNotFoundWaitPeriodMs()` |
+| 67     | No partition info in local, please wait and try later | There is no partition information locally, you need to wait and try again | Possible situations include that the server has not rebalanced, or the number of clients is greater than the number of partitions |
+| 68     | No idle partition to consume, please wait and try later | There is no free partition for consumption, need to wait and try again | Need to wait for the business logic to call the confirm API to release the partition, wait and try again |
+| 69     | All partition are frozen to consume, please unfreeze partition(s) or wait | All partitions are frozen | It is possible that the business calls the freeze interface to freeze the partitionable consumption, and the business needs to call the unfreeze API to unfreeze |
+
+If you encounter an error not mentioned above, please contact us.
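As a rough illustration of how a client might act on this table, the sketch below groups a few of the codes by the handling the "Resolution" column suggests. The grouping is an assumption drawn from the table, not an official API:

```python
# Hypothetical helper: map TubeMQ client error codes from the table above
# to a coarse retry decision. The groupings are inferred from the
# resolution descriptions and are illustrative only.

RETRYABLE = {23, 24, 25, 60, 62, 65}      # transient: retry (possibly later)
CALLER_BUG = {30, 32, 33, 34, 35, 36, 42, 43, 44, 46, 48, 49}  # fix the calling code
NEEDS_ADMIN = {20, 27, 58}                # contact the administrator

def classify(code: int) -> str:
    """Return a coarse handling hint for a client error code."""
    if code in RETRYABLE:
        return "retry"
    if code in CALLER_BUG:
        return "fix-code"
    if code in NEEDS_ADMIN:
        return "contact-admin"
    return "inspect-log"

print(classify(25))   # a transient "no broker available" condition
print(classify(34))   # a null ConfirmContext is a caller bug
```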
+
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/http_access_api.md b/versioned_docs/version-0.11.0/modules/tubemq/http_access_api.md
new file mode 100644
index 0000000..73ccdf7
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/http_access_api.md
@@ -0,0 +1,919 @@
+---
+title: HTTP API
+---
+
+## 1 Master metadata configuration API
+
+### 1.1 Cluster management API
+#### 1.1.1 `admin_online_broker_configure`
+
+Bring new or offline Broker configurations online. The configuration of the related Topics is distributed to the corresponding Brokers as well.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerId|yes|The broker ID. It supports bulk brokerIds separated by `,`. The maximum number in a bulk is `50`. The brokerIds should be distinct in case of bulk values |int|
+|modifyUser| yes|The user who executes this |String|
+|modifyDate| no|The modify date in the format of "yyyyMMddHHmmss"|String|
+|confModAuthToken| yes|The authorization key |String|
+
+__Response__
+
+|name|description|type|
+|---|---|---|
+|code| Returns `0` if success, otherwise failed | int|
+|errMsg| "OK" if success, other return error message| string|
+
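All of the master metadata commands in this section share the pattern above: a set of query parameters and a `{code, errMsg}` response envelope. A minimal sketch of composing such a request URL and checking the envelope follows; the host, port, and `webapi.htm` path are placeholders for illustration, so substitute your cluster's actual master address and web path:

```python
# Sketch: compose a master admin command URL and validate the standard
# {code, errMsg} response envelope. Host/port/path below are assumptions.
from urllib.parse import urlencode

def build_admin_url(master: str, method: str, **params: str) -> str:
    """Compose the query URL for a master metadata command."""
    query = urlencode({"method": method, **params})
    return f"http://{master}/webapi.htm?{query}"

def check_response(resp: dict) -> None:
    """Raise if the envelope reports a failure (code != 0)."""
    if resp.get("code") != 0:
        raise RuntimeError(f"admin call failed: {resp.get('errMsg')}")

url = build_admin_url(
    "master.example:8080",            # placeholder master address
    "admin_online_broker_configure",
    brokerId="1,2,3",
    modifyUser="admin",
    confModAuthToken="abc",
)
print(url)
check_response({"code": 0, "errMsg": "OK"})  # passes silently on success
```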
+#### 1.1.2 `admin_reload_broker_configure`
+
+Update the configuration of the Brokers which are __online__. The new configuration will be published to the Broker server; an error is returned if the broker is offline.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerId|yes|the id of the broker. It supports bulk brokerIds separated by `,`. The maximum <br/> number in a bulk is 50. The brokerIds should be distinct in case of bulk values |int|
+|modifyUser| yes|the user who executes this |String|
+|modifyDate| no|the modify date in the format of "yyyyMMddHHmmss"|String|
+|confModAuthToken| yes|the authorization key |String|
+
+__Response__
+
+|name|description|type|
+|---|---|---|
+|code| return 0 if success, otherwise failed | int|
+|errMsg| "OK" if success, other return error message| string|
+
+#### 1.1.3 `admin_offline_broker_configure`
+
+Take offline the configuration of the Brokers which are __online__. It should be called before a Broker is taken offline or retired.
+The Broker processes can be terminated once all offline tasks are done.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerId|yes|the id of the broker. It supports bulk brokerIds separated by `,`. The maximum <br/> number in a bulk is 50. The brokerIds should be distinct in case of bulk values |int|
+|modifyUser| yes|the user who executes this |String|
+|modifyDate| no|the modify date in the format of "yyyyMMddHHmmss"|String|
+|confModAuthToken| yes|the authorization key |String|
+
+__Response__
+
+|name|description|type|
+|---|---|---|
+|code| return 0 if success, otherwise failed | int|
+|errMsg| "OK" if success, other return error message| string|
+
+#### 1.1.4 `admin_set_broker_read_or_write`
+
+Set the Broker into a read-only or write-only state. Only Brokers that are online and idle can be handled.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerId|yes|the id of the broker. It supports bulk brokerIds separated by `,`. The maximum <br/> number in a bulk is 50. The brokerIds should be distinct in case of bulk values |int|
+|isAcceptPublish| yes|whether the brokers accept publish requests, default is true |Boolean|
+|isAcceptSubscribe| no|whether the brokers accept subscribe requests, default is true|Boolean|
+|modifyUser| yes|the user who request the change, default is creator |String|
+|modifyDate| no|the modify date in the format of "yyyyMMddHHmmss"|String|
+|confModAuthToken| yes|the authorization key |String|
+
+__Response__
+
+|name|description|type|
+|---|---|---|
+|code| return 0 if success, otherwise failed | int|
+|errMsg| "OK" if success, other return error message| string|
+
+#### 1.1.5 `admin_query_broker_run_status`
+
+Query the Broker status. Only Broker processes that are __offline__ and idle can be terminated.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerId|yes|the id of the broker. It supports bulk brokerIds separated by `,`. The maximum <br/> number in a bulk is 50. The brokerIds should be distinct in case of bulk values |int|
+|onlyAbnormal| no|only report abnormal set, default is false |Boolean|
+|onlyAutoForbidden| no|only auto forbidden set, default is false |Boolean|
+|onlyEnableTLS| no|only enable TLS set, default is false|Boolean|
+|withDetail| yes|whether it needs detail, default is false |Boolean|
+
+__Response__
+
+|name|description|type|
+|---|---|---|
+|code| return 0 if success, otherwise failed | int|
+|errMsg| "OK" if success, other return error message| string|
+
+#### 1.1.6 `admin_release_broker_autoforbidden_status`
+
+Release the brokers' auto forbidden status.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerId|yes|the id of the broker. It supports bulk brokerIds separated by `,`. The maximum <br/> number in a bulk is 50. The brokerIds should be distinct in case of bulk values |int|
+|realReason| yes|the reason why it needs to be released|String|
+|modifyUser| yes|the user who request the change, default is creator |String|
+|modifyDate| no|the modify date in the format of "yyyyMMddHHmmss"|String|
+|confModAuthToken| yes|the authorization key |String|
+
+__Response__
+
+|name|description|type|
+|---|---|---|
+|code| return 0 if success, otherwise failed | int|
+|errMsg| "OK" if success, other return error message| string|
+
+#### 1.1.7 `admin_query_master_group_info`
+
+Query the detail of master cluster nodes.
+
+#### 1.1.8 `admin_transfer_current_master`
+
+Set the current master node as a backup node and let the cluster elect another master.
+
+
+#### 1.1.9 `groupAdmin.sh`
+
+Clean invalid nodes inside the master group.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|groupName| yes|the name of master group|String|
+|helperHost| yes|the address of an online master node to connect to. The format is `ip:port`|String|
+|nodeName2Remove| no|the group node to be cleaned|String|
+
+__Response__
+
+|name|description|type|
+|---|---|---|
+|code| return 0 if success, otherwise failed | int|
+|errMsg| "OK" if success, other return error message| string|
+
+### 1.2 Broker node configuration API
+#### 1.2.1 `admin_add_broker_configure`
+
+Add the broker default configuration (not including topic info). It will take effect after calling the load API.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerIp|yes|an IPv4 address|string|
+|brokerPort|no|the port of broker. Default is 8123 |Int|
+|brokerId|yes|the id of the broker, its default value is 0. If brokerId is not zero, it ignores brokerIp field|String|
+|deleteWhen|no|the default deleting time of the topic data. The format is a cron expression such as `0 0 6,18 * * ?`|String|
+|deletePolicy|no|the default policy for deleting, the default policy is "delete, 168"|String|
+|numPartitions|no|the default partition number of a default topic on the broker. Default 1|Int|
+|unflushThreshold|no|the maximum number of messages allowed in memory; messages are flushed to disk once the number exceeds this value. Default 1000|Int|
+|numTopicStores|no|the number of data block and partition group allowed to create, default 1. If it is larger than 1, the partition number and topic number should be mapping with this value|Int|
+|unflushInterval|no|the maximum interval for unflush, default 1000ms|Int|
+|memCacheMsgCntInK|no|the max cached message package, default is 10, the unit is K|Int|
+|memCacheMsgSizeInMB|no|the max cache message size in MB, default 3|Int|
+|memCacheFlushIntvl|no|the max unflush interval in ms, default 20000|Int|
+|brokerTLSPort|no|the port of TLS of the broker, it has no default value|Int|
+|acceptPublish|no|whether the broker accept publish, default true|Boolean|
+|acceptSubscribe|no|whether the broker accept subscribe, default true| Boolean|
+|createUser|yes|the create user|String|
+|createDate|yes|the create date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+#### 1.2.2 `admin_batch_add_broker_configure`
+
+Add broker default configurations in batch (not including topic info). It will take effect after calling the load API.
+
+This API takes a JSON string, referred to as `brokerJsonSet`, as its input parameter. The JSON content contains the configuration fields listed in
+`admin_add_broker_configure`.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerJsonSet|yes|the parameter for the configuration|String|
+|createUser|yes|the creator|String|
+|createDate|yes|the create date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
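A minimal sketch of building the `brokerJsonSet` parameter: a JSON array whose entries reuse the per-broker fields of `admin_add_broker_configure`. The field values below are example data only:

```python
# Illustrative construction of the `brokerJsonSet` request parameter for
# the batch add API. IPs, dates, and tokens here are placeholder values.
import json

brokers = [
    {"brokerIp": "10.0.0.1", "brokerPort": 8123, "numPartitions": 3},
    {"brokerIp": "10.0.0.2", "brokerPort": 8123, "numPartitions": 3},
]
broker_json_set = json.dumps(brokers)  # serialized JSON array

params = {
    "brokerJsonSet": broker_json_set,
    "createUser": "admin",
    "createDate": "20211116100000",    # yyyyMMddHHmmss
    "confModAuthToken": "abc",         # placeholder token
}
print(params["brokerJsonSet"])
```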
+#### 1.2.3 `admin_update_broker_configure`
+
+Update the broker default configuration (not including topic info). It will take effect after calling the load API.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerId|yes|the id of the broker. It supports bulk operation by providing an id set here. The brokerIds should be separated by `,` and be distinct|String|
+|brokerPort|no|the port of broker. Default is 8123 |Int|
+|deleteWhen|no|the default deleting time of the topic data. The format is a cron expression such as `0 0 6,18 * * ?`|String|
+|deletePolicy|no|the default policy for deleting, the default policy is "delete, 168"|String|
+|numPartitions|no|the default partition number of a default topic on the broker. Default 1|Int|
+|unflushThreshold|no|the maximum number of messages allowed in memory; messages are flushed to disk once the number exceeds this value. Default 1000|Int|
+|numTopicStores|no|the number of data block and partition group allowed to create, default 1. If it is larger than 1, the partition number and topic number should be mapping with this value|Int|
+|unflushInterval|no|the maximum interval for unflush, default 1000ms|Int|
+|memCacheMsgCntInK|no|the max cached message package, default is 10, the unit is K|Int|
+|memCacheMsgSizeInMB|no|the max cache message size in MB, default 3|Int|
+|memCacheFlushIntvl|no|the max unflush interval in ms, default 20000|Int|
+|brokerTLSPort|no|the port of TLS of the broker, it has no default value|Int|
+|acceptPublish|no|whether the broker accept publish, default true|Boolean|
+|acceptSubscribe|no|whether the broker accept subscribe, default true| Boolean|
+|modifyUser|yes|the modifier|String|
+|modifyDate|yes|the modify date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+#### 1.2.4 `admin_query_broker_configure`
+
+Query the broker configuration.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerId|yes|the id of the broker. It supports bulk operation by providing an id set here. The brokerIds should be separated by `,` and be distinct|String|
+|brokerPort|no|the port of broker. Default is 8123 |Int|
+|deleteWhen|no|the default deleting time of the topic data. The format is a cron expression such as `0 0 6,18 * * ?`|String|
+|deletePolicy|no|the default policy for deleting, the default policy is "delete, 168"|String|
+|numPartitions|no|the default partition number of a default topic on the broker. Default 1|Int|
+|unflushThreshold|no|the maximum number of messages allowed in memory; messages are flushed to disk once the number exceeds this value. Default 1000|Int|
+|numTopicStores|no|the number of data block and partition group allowed to create, default 1. If it is larger than 1, the partition number and topic number should be mapping with this value|Int|
+|unflushInterval|no|the maximum interval for unflush, default 1000ms|Int|
+|memCacheMsgCntInK|no|the max cached message package, default is 10, the unit is K|Int|
+|memCacheMsgSizeInMB|no|the max cache message size in MB, default 3|Int|
+|memCacheFlushIntvl|no|the max unflush interval in ms, default 20000|Int|
+|brokerTLSPort|no|the port of TLS of the broker, it has no default value|Int|
+|acceptPublish|no|whether the broker accept publish, default true|Boolean|
+|acceptSubscribe|no|whether the broker accept subscribe, default true| Boolean|
+|createUser|yes|the creator to query|String|
+|modifyUser|yes|the modifier to query|String|
+|topicStatusId|yes|the status of topic record|int|
+|withTopic|no|whether it needs topic configuration|Boolean|
+
+#### 1.2.5 `admin_delete_broker_configure`
+
+Delete the broker's default configuration. It requires the related topic configuration to be deleted first, and the broker should be offline.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|brokerId|yes|the id of the broker. It supports bulk operation by providing an id set here. The brokerIds should be separated by `,` and be distinct|String|
+|modifyUser|yes|the modifier|String|
+|modifyDate|no|the modifying date in format `yyyyMMddHHmmss`|String|
+|isReserveData|no|whether to reserve production data, default false|Boolean|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 1.3 Topic configuration API
+#### 1.3.1 `admin_add_new_topic_record`
+
+Add topic related configuration.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|brokerId|yes|the id of the broker, its default value is 0. If brokerId is not zero, it ignores brokerIp field|String|
+|deleteWhen|no|the default deleting time of the topic data. The format is a cron expression such as `0 0 6,18 * * ?`|String|
+|deletePolicy|no|the default policy for deleting, the default policy is "delete, 168"|String|
+|numPartitions|no|the default partition number of a default topic on the broker. Default 1|Int|
+|unflushThreshold|no|the maximum number of messages allowed in memory; messages are flushed to disk once the number exceeds this value. Default 1000|Int|
+|numTopicStores|no|the number of data block and partition group allowed to create, default 1. If it is larger than 1, the partition number and topic number should be mapping with this value|Int|
+|unflushInterval|no|the maximum interval for unflush, default 1000ms|Int|
+|memCacheMsgCntInK|no|the max cached message package, default is 10, the unit is K|Int|
+|memCacheMsgSizeInMB|no|the max cache message size in MB, default 3|Int|
+|memCacheFlushIntvl|no|the max unflush interval in ms, default 20000|Int|
+|brokerTLSPort|no|the port of TLS of the broker, it has no default value|Int|
+|acceptPublish|no|whether the broker accept publish, default true|Boolean|
+|acceptSubscribe|no|whether the broker accept subscribe, default true| Boolean|
+|createUser|yes|the create user|String|
+|createDate|yes|the create date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+#### 1.3.2 `admin_query_topic_info`
+
+Query specific topic record info.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|topicStatusId|no| the status of topic record, 0-normal record, 1-already soft delete, 2-already hard delete, default 0|int|
+|brokerId|yes|the id of the broker, its default value is 0. If brokerId is not zero, it ignores brokerIp field|String|
+|deleteWhen|no|the default deleting time of the topic data. The format is a cron expression such as `0 0 6,18 * * ?`|String|
+|deletePolicy|no|the default policy for deleting, the default policy is "delete, 168"|String|
+|numPartitions|no|the default partition number of a default topic on the broker. Default 3|Int|
+|unflushThreshold|no|the maximum number of messages allowed in memory; messages are flushed to disk once the number exceeds this value. Default 1000|Int|
+|numTopicStores|no|the number of data block and partition group allowed to create, default 1. If it is larger than 1, the partition number and topic number should be mapping with this value|Int|
+|unflushInterval|no|the maximum interval for unflush, default 1000ms|Int|
+|memCacheMsgCntInK|no|the max cached message package, default is 10, the unit is K|Int|
+|memCacheMsgSizeInMB|no|the max cache message size in MB, default 3|Int|
+|memCacheFlushIntvl|no|the max unflush interval in ms, default 20000|Int|
+|brokerTLSPort|no|the port of TLS of the broker, it has no default value|Int|
+|acceptPublish|no|whether the broker accept publish, default true|Boolean|
+|acceptSubscribe|no|whether the broker accept subscribe, default true| Boolean|
+|createUser|yes|the creator|String|
+|modifyUser|yes|the modifier|String|
+
+#### 1.3.3 `admin_modify_topic_info`
+
+Modify specific topic record info.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|topicStatusId|no| the status of topic record, 0-normal record, 1-already soft delete, 2-already hard delete, default 0|int|
+|brokerId|yes|the id of the broker, its default value is 0. If brokerId is not zero, it ignores brokerIp field|String|
+|deleteWhen|no|the default deleting time of the topic data. The format is a cron expression such as `0 0 6,18 * * ?`|String|
+|deletePolicy|no|the default policy for deleting, the default policy is "delete, 168"|String|
+|numPartitions|no|the default partition number of a default topic on the broker. Default 3|Int|
+|unflushThreshold|no|the maximum number of messages allowed in memory; messages are flushed to disk once the number exceeds this value. Default 1000|Int|
+|numTopicStores|no|the number of data block and partition group allowed to create, default 1. If it is larger than 1, the partition number and topic number should be mapping with this value|Int|
+|unflushInterval|no|the maximum interval for unflush, default 1000ms|Int|
+|memCacheMsgCntInK|no|the max cached message package, default is 10, the unit is K|Int|
+|memCacheMsgSizeInMB|no|the max cache message size in MB, default 3|Int|
+|memCacheFlushIntvl|no|the max unflush interval in ms, default 20000|Int|
+|brokerTLSPort|no|the port of TLS of the broker, it has no default value|Int|
+|acceptPublish|no|whether the broker accept publish, default true|Boolean|
+|acceptSubscribe|no|whether the broker accept subscribe, default true| Boolean|
+|modifyUser|yes|the modifier|String|
+|modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+
+#### 1.3.4 `admin_delete_topic_info`
+
+Soft-delete the specific topic record info.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|brokerId|yes|the id of the broker, its default value is 0. If brokerId is not zero, it ignores brokerIp field|String|
+|modifyUser|yes|the modifier|String|
+|modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+#### 1.3.5 `admin_redo_deleted_topic_info`
+
+Restore the soft-deleted topic record info.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|brokerId|yes|the id of the broker, its default value is 0. If brokerId is not zero, it ignores brokerIp field|String|
+|modifyUser|yes|the modifier|String|
+|modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+#### 1.3.6 `admin_remove_topic_info`
+
+Hard-delete the specific topic record info.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|brokerId|yes|the id of the broker, its default value is 0. If brokerId is not zero, it ignores brokerIp field|String|
+|modifyUser|yes|the modifier|String|
+|modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+#### 1.3.7 `admin_query_broker_topic_config_info`
+
+Query the topic configuration info of the brokers in the current cluster.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+
+
+## 2 Master consumer permission operation API
+
+### 2.1 `admin_set_topic_info_authorize_control`
+
+Enable or disable the authorization control feature of the topic. If a consumer group is not authorized, its register request will be denied.
+If the topic's authorized group list is empty, registration to the topic will fail.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|createUser|yes|the creator|String|
+|createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
+|isEnable|no|whether the authorization control is enable, default false|Boolean|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.2 `admin_delete_topic_info_authorize_control`
+
+Delete the authorization control feature of the topic. The content of the authorized consumer group list will be deleted as well.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|createUser|yes|the creator|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.3 `admin_query_topic_info_authorize_control`
+
+Query the authorization control feature of the topic.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|createUser|yes|the creator|String|
+
+### 2.4 `admin_add_authorized_consumergroup_info`
+
+Add a new authorized consumer group record for the topic. The server will deny registration from any consumer group that does not exist in the
+topic's authorized consumer group list.
+
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|groupName|yes| the group name to be added|String|
+|createUser|yes|the creator|String|
+|createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.5 `admin_query_allowed_consumer_group_info`
+
+Query the authorized consumer group record of the topic. 
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|groupName|yes| the group name to query|String|
+|createUser|yes|the creator|String|
+
+### 2.6 `admin_delete_allowed_consumer_group_info`
+
+Delete the authorized consumer group record of the topic. 
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes| the topic name|String|
+|groupName|yes| the group name to be deleted|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.7 `admin_batch_add_topic_authorize_control`
+
+Add the authorized consumer group of the topic record in batch mode.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicJsonSet|yes| the topic names in JSON format|List|
+|createUser|yes|the creator|String|
+|createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.8 `admin_batch_add_authorized_consumergroup_info`
+
+Add the authorized consumer group record in batch mode.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|groupNameJsonSet|yes|the group names in JSON format|List|
+|createUser|yes|the creator|String|
+|createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.9 `admin_add_black_consumergroup_info`
+
+Add a consumer group into the black list of the topic. Consumers in the group, whether already registered or not, can no longer consume the topic.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name |List|
+|groupName|yes|the group name |List|
+|createUser|yes|the creator|String|
+|createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.10 `admin_query_black_consumergroup_info`
+
+Query the black list of the topic. 
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name |List|
+|groupName|yes|the group name |List|
+|createUser|yes|the creator|String|
+
+### 2.11 `admin_delete_black_consumergroup_info`
+
+Delete the black list of the topic. 
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name |List|
+|groupName|yes|the group name |List|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.12 `admin_add_group_filtercond_info`
+
+Add the consuming filter condition for the consumer group.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name |List|
+|groupName|yes|the group name |List|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+|condStatus|no| the condition status, 0: disable, 1:enable full authorization, 2:enable and limit consuming|Int|
+|filterConds|no| the filter conditions, the max length is 256|String|
+|createUser|yes|the creator|String|
+|createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
+
+### 2.13 `admin_mod_group_filtercond_info`
+
+Modify the consuming-filter condition for the consumer group.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name |List|
+|groupName|yes|the group name |List|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+|condStatus|no| the condition status, 0: disable, 1:enable full authorization, 2:enable and limit consuming|Int|
+|filterConds|no| the filter conditions, the max length is 256|String|
+|modifyUser|yes|the modifier|String|
+|modifyDate|no|the modification date in format `yyyyMMddHHmmss`|String|
+
+### 2.14 `admin_del_group_filtercond_info`
+
+Delete the consuming-filter condition for the consumer group.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name |List|
+|groupName|yes|the group name |List|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.15 `admin_query_group_filtercond_info`
+
+Query the consuming-filter condition for the consumer group.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name |List|
+|groupName|yes|the group name |List|
+|condStatus|no| the condition status, 0: disable, 1:enable full authorization, 2:enable and limit consuming|Int|
+|filterConds|no| the filter conditions, the max length is 256|String|
+
+### 2.16 `admin_rebalance_group_allocate`
+
+Adjust the consuming partitions of a specific consumer in a consumer group. This includes:
+1. releasing the current consuming partitions and retrieving new ones;
+2. releasing the current consuming partitions, pausing consumption for a while, and then retrieving new ones.
+
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+|consumerId|yes|the consumer id|List|
+|groupName|yes|the group name |List|
+|reJoinWait|no|the time in ms to wait before re-consuming; the default value 0 means re-consume immediately|Int|
+|modifyUser|yes|the modifier|String|
+|modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
+
+### 2.17 `admin_set_def_flow_control_rule`
+
+Set the default flow control rule. It applies to all consumer groups; note that its priority is lower than a rule configured for a specific consumer group.
+
+The flow control info is described in JSON format, for example: 
+
+```json
+[{"type":0,"rule":[{"start":"08:00","end":"17:59","dltInM":1024,"limitInM":20,"freqInMs":1000},{"start":"18:00","end":"22:00","dltInM":1024,"limitInM":20,"freqInMs":5000}]},{"type":2,"rule":[{"start":"18:00","end":"23:59","dltStInM":20480,"dltEdInMM":2048}]},{"type":1,"rule":[{"zeroCnt":3,"freqInMs":300},{"zeroCnt":8,"freqInMs":1000}]},{"type":3,"rule":[{"normFreqInMs":0,"filterFreqInMs":100,"minDataFilterFreqInMs":400}]}]
+```
+The `type` field takes values in [0, 1, 2, 3]. 0: flow control, 1: frequency control, 3: filter consumer frequency control,<br/>
+ `[start, end]` is an inclusive time range, `dltInM` is the consuming lag in MB, `limitInM` is the flow limit in MB per minute, <br/>
+ `freqInMs` is the interval for sending requests after exceeding the flow or frequency limit, `zeroCnt` is the count of consecutive pulls that return zero data, <br/>
+ `normFreqInMs` is the interval of sequential pulling, `filterFreqInMs` is the interval of pulling filtered requests.
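Rather than hand-writing the compact JSON above, the `flowCtrlInfo` value can be built and serialized programmatically. A small sketch with illustrative rule values (not recommendations); field names follow the example above:

```python
import json

# Illustrative rules only; types and field names follow the example above.
flow_ctrl = [
    {"type": 0, "rule": [
        {"start": "08:00", "end": "17:59", "dltInM": 1024, "limitInM": 20, "freqInMs": 1000},
    ]},
    {"type": 1, "rule": [
        {"zeroCnt": 3, "freqInMs": 300},
    ]},
]
# Compact separators reproduce the dense encoding used for flowCtrlInfo.
flow_ctrl_info = json.dumps(flow_ctrl, separators=(",", ":"))
```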
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|flowCtrlInfo|yes|the flow control info in JSON format|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+|consumerId|yes|the consumer id|List|
+|groupName|yes|the group name |List|
+|reJoinWait|no|the time in ms to wait before re-consuming; the default value 0 means re-consume immediately|Int|
+|modifyUser|yes|the modifier|String|
+|modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
+
+
+### 2.18 `admin_upd_def_flow_control_rule`
+
+Update the default flow control rule.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+|StatusId|no| the strategy status Id, default 0|int|
+|qryPriorityId|no| the consuming priority Id. It is a composed field `A0B` with default value 301, <br/>the value of A,B is [1, 2, 3] which means file, backup memory, and main memory respectively|int|
+|createUser|yes|the creator|String|
+|flowCtrlInfo|yes|the flow control info in JSON format|String|
+|createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
+
+### 2.19 `admin_query_def_flow_control_rule`
+
+Query the default flow control rule.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|StatusId|no| the strategy status Id, default 0|int|
+|qryPriorityId|no| the consuming priority Id. It is a composed field `A0B` with default value 301,<br/> the value of A,B is [1, 2, 3] which means file, backup memory, and main memory respectively|int|
+|createUser|yes|the creator|String|
+
+### 2.20 `admin_set_group_flow_control_rule`
+
+Set the group flow control rule.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|flowCtrlInfo|yes|the flow control info in JSON format|String|
+|groupName|yes|the group name to set flow control rule|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+|StatusId|no| the strategy status Id, default 0|int|
+|qryPriorityId|no|the consuming priority Id. It is a composed field `A0B` with default value 301,<br/> the value of A,B is [1, 2, 3] which means file, backup memory, and main memory respectively|int|
+|createUser|yes|the creator|String|
+|createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
+
+### 2.21 `admin_upd_group_flow_control_rule`
+
+Update the group flow control rule.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|flowCtrlInfo|yes|the flow control info in JSON format|String|
+|groupName|yes|the group name to set flow control rule|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+|StatusId|no| the strategy status Id, default 0|int|
+|qryPriorityId|no|the consuming priority Id. It is a composed field `A0B` with default value 301,<br/> the value of A,B is [1, 2, 3] which means file, backup memory, and main memory respectively|int|
+|createUser|yes|the creator|String|
+|createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
+
+
+### 2.22 `admin_rmv_group_flow_control_rule`
+
+Remove the group flow control rule.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|groupName|yes|the group name to set flow control rule|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+|createUser|yes|the creator|String|
+
+### 2.23 `admin_query_group_flow_control_rule`
+
+Query the group flow control rule.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|groupName|yes|the group name to set flow control rule|String|
+|StatusId|no| the strategy status Id, default 0|int|
+|qryPriorityId|no| the consuming priority Id. It is a composed field `A0B` with default value 301, <br/>the value of A,B is [1, 2, 3] which means file, backup memory, and main memory respectively|int|
+|createUser|yes|the creator|String|
+
+### 2.24 `admin_add_consume_group_setting`
+
+Set whether the consume group is allowed to consume from a specific offset, and the required broker-to-client ratio when starting the consume group.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|groupName|yes|the group name to set flow control rule|String|
+|enableBind|no|whether to bind consuming permission, default value 0 means disable|int|
+|allowedBClientRate|no|the ratio of the number of brokers serving the consuming target to the number of clients in the consume group|int|
+|createUser|yes|the creator|String|
+|createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.25 `admin_query_consume_group_setting`
+
+Query the consume group setting: whether the group may consume from a specific offset, and the required broker-to-client ratio when starting the group.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|groupName|yes|the group name to set flow control rule|String|
+|enableBind|no|whether to bind consuming permission, default value 0 means disable|int|
+|allowedBClientRate|no|the ratio of the number of brokers serving the consuming target to the number of clients in the consume group|int|
+|createUser|yes|the creator|String|
+
+### 2.26 `admin_upd_consume_group_setting`
+
+Update the consume group setting: whether the group may consume from a specific offset, and the required broker-to-client ratio when starting the group.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|groupName|yes|the group name to set flow control rule|String|
+|enableBind|no|whether to bind consuming permission, default value 0 means disable|int|
+|allowedBClientRate|no|the ratio of the number of brokers serving the consuming target to the number of clients in the consume group|int|
+|modifyUser|yes|the modifier|String|
+|modifyDate|yes|the modifying date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+### 2.27 `admin_del_consume_group_setting`
+
+Delete the consume group setting: whether the group may consume from a specific offset, and the required broker-to-client ratio when starting the group.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|groupName|yes|the group name to set flow control rule|String|
+|modifyUser|yes|the modifier|String|
+|modifyDate|yes|the modifying date in format `yyyyMMddHHmmss`|String|
+|confModAuthToken|yes|the authorized key for configuration update|String|
+
+## 3 Master subscriber relation API
+
+### 3.1 Query consumer group subscription information
+
+Url `http://127.0.0.1:8080/webapi.htm?type=op_query&method=admin_query_sub_info&topicName=test&consumeGroup=xxx`
+
+response:
+
+```json
+{
+    "errCode": 0, 
+    "errMsg": "Ok", 
+    "count": 263,  
+    "data": [{ 
+        "consumeGroup": "", 
+        "topicSet": ["a", "b"],
+        "consumerNum": 33
+       }]
+}
+```
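A caller would typically check `errCode` before using `data`. A small sketch against a sample payload shaped like the response above (the values here are illustrative):

```python
import json

# Sample payload shaped like the response above; values are illustrative.
raw = ('{"errCode":0,"errMsg":"Ok","count":1,'
       '"data":[{"consumeGroup":"g1","topicSet":["a","b"],"consumerNum":33}]}')
resp = json.loads(raw)
if resp["errCode"] != 0:  # a non-zero errCode signals a failed request
    raise RuntimeError(resp["errMsg"])
# Map each consume group to its consumer count.
groups = {d["consumeGroup"]: d["consumerNum"] for d in resp["data"]}
```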
+
+### 3.2 Query consumer group detailed subscription information
+
+Url `http://127.0.0.1:8080/webapi.htm?type=op_query&method=admin_query_consume_group_detail&consumeGroup=test_25`
+
+response:
+
+```json
+{
+    "errCode": 0, 
+    "errMsg": "Ok", 
+    "count": 263, 
+    "topicSet": ["a", "b"],
+    "consumeGroup": "", 
+    "data": [{      
+       "consumerId": "",
+       "parCount": 1,
+       "parInfo": [{
+           "brokerAddr": "",
+           "topic": "",
+           "partId": 2
+       }] 
+   }]
+}
+```
+
+## 4 Broker operation API
+
+### 4.1 `admin_snapshot_message`
+
+Check whether data is currently being transferred under the given topic on this broker, and what that data contains.
+
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name|String|
+|msgCount|no|the max number of messages to extract|int|
+|partitionId|yes|the partition ID, which must exist|int|
+|filterConds|yes|the streamId value for filtering|String|
+
+### 4.2 `admin_manual_set_current_offset`
+
+Modify the consuming offset of a consumer group on the current broker. The new value is persisted to ZooKeeper.
+
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name|String|
+|groupName|yes|the group name|String|
+|modifyUser|no|the user who modifies the value|String|
+|partitionId|yes|the partition ID, which must exist|int|
+|manualOffset|yes|the offset to be modified, it must be a valid value|long|
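As a sketch of assembling this request: the broker host and web port, topic, group, and offset values below are placeholders, and `type=op_modify` is assumed by analogy with the `op_query` examples in this document.

```python
from urllib.parse import urlencode

# All values are placeholders; partitionId must identify an existing
# partition and manualOffset must be a valid offset, per the table above.
params = {
    "type": "op_modify",  # assumed, mirroring the op_query examples
    "method": "admin_manual_set_current_offset",
    "topicName": "test",
    "groupName": "test_group",
    "modifyUser": "webapi",
    "partitionId": 0,
    "manualOffset": 1024,
}
url = "http://127.0.0.1:8081/webapi.htm?" + urlencode(params)
```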
+
+### 4.3 `admin_query_group_offset`
+
+Query the consuming offset of a consumer group on the current broker.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name|String|
+|groupName|yes|the group name|String|
+|partitionId|yes|the partition ID, which must exist|int|
+|requireRealOffset|no|whether to check real offset on ZK, default false|Boolean|
+
+### 4.4 `admin_query_broker_all_consumer_info`
+
+Query consumer info of the specific consume group on the broker.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|groupName|yes|the group name|String|
+
+### 4.5 `admin_query_broker_all_store_info`
+
+Query store info of the specific topic on the broker.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name|String|
+
+### 4.6 `admin_query_broker_memstore_info`
+
+Query memory store info of the specific topic on the broker.
+
+__Request__
+
+|name|must|description|type|
+|---|---|---|---|
+|topicName|yes|the topic name|String|
+|needRefresh|no|whether it needs to refresh, default false|Boolean|
+
+For more APIs, see:
+<a href="appendixfiles/http_access_api_definition_cn.xls" target="_blank">TubeMQ HTTP API</a>
+
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/.gitkeep b/versioned_docs/version-0.11.0/modules/tubemq/img/.gitkeep
new file mode 100644
index 0000000..781e383
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/img/.gitkeep
@@ -0,0 +1,3 @@
+# Ignore everything in this directory 
+* 
+# Except this file !.gitkeep
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_broker_info.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_broker_info.png
new file mode 100644
index 0000000..4747a88
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_broker_info.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_bytes_def.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_bytes_def.png
new file mode 100644
index 0000000..45a2384
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_bytes_def.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_conn_detail.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_conn_detail.png
new file mode 100644
index 0000000..6e803af
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_conn_detail.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_consumer_diagram.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_consumer_diagram.png
new file mode 100644
index 0000000..f761f54
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_consumer_diagram.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_convert_topicinfo.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_convert_topicinfo.png
new file mode 100644
index 0000000..6c5bffa
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_convert_topicinfo.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto.png
new file mode 100644
index 0000000..430d297
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_optype.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_optype.png
new file mode 100644
index 0000000..9685b80
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_optype.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_status.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_status.png
new file mode 100644
index 0000000..7a787cc
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_event_proto_status.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_header_fill.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_header_fill.png
new file mode 100644
index 0000000..0023e89
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_header_fill.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_inner_structure.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_inner_structure.png
new file mode 100644
index 0000000..9533ce4
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_inner_structure.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_master_authorizedinfo.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_master_authorizedinfo.png
new file mode 100644
index 0000000..097fb05
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_master_authorizedinfo.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_message_data.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_message_data.png
new file mode 100644
index 0000000..fa7a66e
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_message_data.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_pbmsg_structure.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_pbmsg_structure.png
new file mode 100644
index 0000000..1ec4faf
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_pbmsg_structure.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_close2M.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_close2M.png
new file mode 100644
index 0000000..5342d62
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_close2M.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_diagram.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_diagram.png
new file mode 100644
index 0000000..9d087e7
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_diagram.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_heartbeat2M.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_heartbeat2M.png
new file mode 100644
index 0000000..3dc4367
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_heartbeat2M.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_register2M.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_register2M.png
new file mode 100644
index 0000000..6add74c
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_register2M.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_sendmsg2B.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_sendmsg2B.png
new file mode 100644
index 0000000..2a81905
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_producer_sendmsg2B.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_proto_def.png b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_proto_def.png
new file mode 100644
index 0000000..f56c275
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/client_rpc/rpc_proto_def.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/configure/conf_ini_pos.png b/versioned_docs/version-0.11.0/modules/tubemq/img/configure/conf_ini_pos.png
new file mode 100644
index 0000000..a68e36d
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/configure/conf_ini_pos.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/configure/conf_velocity_pos.png b/versioned_docs/version-0.11.0/modules/tubemq/img/configure/conf_velocity_pos.png
new file mode 100644
index 0000000..40e6625
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/configure/conf_velocity_pos.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169770714.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169770714.png
new file mode 100644
index 0000000..4c952c0
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169770714.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169796122.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169796122.png
new file mode 100644
index 0000000..568fefa
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169796122.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169806810.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169806810.png
new file mode 100644
index 0000000..0204457
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169806810.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169823675.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169823675.png
new file mode 100644
index 0000000..7330892
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169823675.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169839931.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169839931.png
new file mode 100644
index 0000000..d961dcf
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169839931.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169851085.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169851085.png
new file mode 100644
index 0000000..28b55c3
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169851085.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169863402.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169863402.png
new file mode 100644
index 0000000..58af810
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169863402.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169879529.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169879529.png
new file mode 100644
index 0000000..b715e74
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169879529.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169889594.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169889594.png
new file mode 100644
index 0000000..37eb229
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169889594.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169900634.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169900634.png
new file mode 100644
index 0000000..fa80612
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169900634.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169908522.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169908522.png
new file mode 100644
index 0000000..8efef9f
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169908522.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169916091.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169916091.png
new file mode 100644
index 0000000..c25a1bb
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169916091.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169925657.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169925657.png
new file mode 100644
index 0000000..dcea033
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169925657.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169946683.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169946683.png
new file mode 100644
index 0000000..15688f9
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169946683.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169954746.png b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169954746.png
new file mode 100644
index 0000000..4142122
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/console/1568169954746.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/development/create_pull_request.png b/versioned_docs/version-0.11.0/modules/tubemq/img/development/create_pull_request.png
new file mode 100644
index 0000000..2e63a34
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/development/create_pull_request.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/development/github_fork_repository.png b/versioned_docs/version-0.11.0/modules/tubemq/img/development/github_fork_repository.png
new file mode 100644
index 0000000..d800e10
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/development/github_fork_repository.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_create_issue.png b/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_create_issue.png
new file mode 100644
index 0000000..1c72d48
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_create_issue.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_filter.png b/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_filter.png
new file mode 100644
index 0000000..6ac0fa0
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_filter.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_1.png
new file mode 100644
index 0000000..cc99519
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_2.png
new file mode 100644
index 0000000..a19f9ee
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/development/jira_resolve_issue_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/development/new_pull_request.png b/versioned_docs/version-0.11.0/modules/tubemq/img/development/new_pull_request.png
new file mode 100644
index 0000000..de9a478
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/development/new_pull_request.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/mqs_comare.png b/versioned_docs/version-0.11.0/modules/tubemq/img/mqs_comare.png
new file mode 100644
index 0000000..cb6af4b
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/mqs_comare.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_1.png
new file mode 100644
index 0000000..812a25b
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_2.png
new file mode 100644
index 0000000..2bb77db
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_3.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_3.png
new file mode 100644
index 0000000..0224e3e
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_3.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_4.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_4.png
new file mode 100644
index 0000000..9195504
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_bx1_4.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_1.png
new file mode 100644
index 0000000..2f1c0c7
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_2.png
new file mode 100644
index 0000000..5f536a8
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_3.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_3.png
new file mode 100644
index 0000000..41e595a
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_3.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_4.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_4.png
new file mode 100644
index 0000000..c94c5cc
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_1_cg1_4.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_1.png
new file mode 100644
index 0000000..23fc1de
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_2.png
new file mode 100644
index 0000000..6fabcc3
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_3.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_3.png
new file mode 100644
index 0000000..ff8e551
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_3.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_4.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_4.png
new file mode 100644
index 0000000..1f75903
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_4.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_5.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_5.png
new file mode 100644
index 0000000..80342ce
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_5.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_6.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_6.png
new file mode 100644
index 0000000..5714ba2
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_6.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_7.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_7.png
new file mode 100644
index 0000000..67d0cd5
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_7.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_8.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_8.png
new file mode 100644
index 0000000..d63f015
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_8.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_9.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_9.png
new file mode 100644
index 0000000..b459396
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_1000_9.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_1.png
new file mode 100644
index 0000000..ceaf949
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_2.png
new file mode 100644
index 0000000..7a00562
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_3.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_3.png
new file mode 100644
index 0000000..00ebe8d
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_3.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_4.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_4.png
new file mode 100644
index 0000000..2ec4d50
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_4.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_5.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_5.png
new file mode 100644
index 0000000..99fecab
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_5.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_6.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_6.png
new file mode 100644
index 0000000..85e8950
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_6.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_7.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_7.png
new file mode 100644
index 0000000..ff2b2d9
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_7.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_8.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_8.png
new file mode 100644
index 0000000..a805778
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_8.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_9.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_9.png
new file mode 100644
index 0000000..a5926db
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_100_9.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_1.png
new file mode 100644
index 0000000..0a21fdf
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_2.png
new file mode 100644
index 0000000..b570ee8
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_3.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_3.png
new file mode 100644
index 0000000..fb3c6bc
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_3.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_4.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_4.png
new file mode 100644
index 0000000..322f171
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_4.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_5.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_5.png
new file mode 100644
index 0000000..03ed9c8
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_5.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_6.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_6.png
new file mode 100644
index 0000000..36de673
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_6.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_7.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_7.png
new file mode 100644
index 0000000..eb44fb2
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_7.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_8.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_8.png
new file mode 100644
index 0000000..fbb0415
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_8.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_9.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_9.png
new file mode 100644
index 0000000..e5dec3d
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_200_9.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_1.png
new file mode 100644
index 0000000..4263605
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_2.png
new file mode 100644
index 0000000..a6407c9
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_3.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_3.png
new file mode 100644
index 0000000..174e42c
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_3.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_4.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_4.png
new file mode 100644
index 0000000..279b319
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_4.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_5.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_5.png
new file mode 100644
index 0000000..b87b8be
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_5.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_6.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_6.png
new file mode 100644
index 0000000..2515997
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_6.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_7.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_7.png
new file mode 100644
index 0000000..80909df
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_7.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_8.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_8.png
new file mode 100644
index 0000000..f610ced
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_8.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_9.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_9.png
new file mode 100644
index 0000000..7155236
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_appendix_2_topic_500_9.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_1.png
new file mode 100644
index 0000000..2677301
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_1_index.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_1_index.png
new file mode 100644
index 0000000..0901ad7
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_1_index.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_2.png
new file mode 100644
index 0000000..5371180
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_2_index.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_2_index.png
new file mode 100644
index 0000000..abba39d
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_2_index.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_3.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_3.png
new file mode 100644
index 0000000..cfca08b
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_3.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_3_index.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_3_index.png
new file mode 100644
index 0000000..0a3f58e
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_3_index.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_4_index.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_4_index.png
new file mode 100644
index 0000000..2856dbb
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_4_index.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_6_index.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_6_index.png
new file mode 100644
index 0000000..f4f6ce9
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_6_index.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_7.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_7.png
new file mode 100644
index 0000000..a62c928
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_7.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_8.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_8.png
new file mode 100644
index 0000000..fcd1f40
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_8.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_8_index.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_8_index.png
new file mode 100644
index 0000000..66251f8
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scenario_8_index.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scheme.png b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scheme.png
new file mode 100644
index 0000000..fccce90
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/perf_scheme.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/store_file.png b/versioned_docs/version-0.11.0/modules/tubemq/img/store_file.png
new file mode 100644
index 0000000..c251dc3
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/store_file.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/store_mem.png b/versioned_docs/version-0.11.0/modules/tubemq/img/store_mem.png
new file mode 100644
index 0000000..fff9975
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/store_mem.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sys_structure.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sys_structure.png
new file mode 100644
index 0000000..70b4dad
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sys_structure.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_address_host.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_address_host.png
new file mode 100644
index 0000000..4b38251
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_address_host.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_configure.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_configure.png
new file mode 100644
index 0000000..b8b000f
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_configure.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_deploy.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_deploy.png
new file mode 100644
index 0000000..31fc2d7
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_deploy.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_finished.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_finished.png
new file mode 100644
index 0000000..f5364d0
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_finished.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online.png
new file mode 100644
index 0000000..1b0e3e3
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online_2.png
new file mode 100644
index 0000000..9f12cb9
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_online_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_1.png
new file mode 100644
index 0000000..4c19cb0
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_2.png
new file mode 100644
index 0000000..7a6aea0
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_restart_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start.png
new file mode 100644
index 0000000..2ad204b
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start_error.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start_error.png
new file mode 100644
index 0000000..f7a94c5
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_broker_start_error.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_compile.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_compile.png
new file mode 100644
index 0000000..edecd21
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_compile.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_1.png
new file mode 100644
index 0000000..f20201b
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_2.png
new file mode 100644
index 0000000..1d35431
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_configure_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_console.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_console.png
new file mode 100644
index 0000000..d03148d
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_console.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_start.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_start.png
new file mode 100644
index 0000000..a513e6c
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_start.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_startted.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_startted.png
new file mode 100644
index 0000000..764b996
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_master_startted.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_log.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_log.png
new file mode 100644
index 0000000..ae6a435
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_log.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status.png
new file mode 100644
index 0000000..f7e2982
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status_2.png
new file mode 100644
index 0000000..5f46607
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_node_status_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package.png
new file mode 100644
index 0000000..f04af8a
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package_list.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package_list.png
new file mode 100644
index 0000000..fb531ba
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_package_list.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_create.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_create.png
new file mode 100644
index 0000000..ae4af1e
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_create.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_deploy.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_deploy.png
new file mode 100644
index 0000000..d41b54c
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_deploy.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_error.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_error.png
new file mode 100644
index 0000000..1673b8a
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_error.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_finished.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_finished.png
new file mode 100644
index 0000000..f37f726
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_finished.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_select.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_select.png
new file mode 100644
index 0000000..a186889
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/sys_topic_select.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage.png
new file mode 100644
index 0000000..c18ffad
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage_2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage_2.png
new file mode 100644
index 0000000..05dfeac
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/sysdeployment/test_sendmessage_2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/test_scheme.png b/versioned_docs/version-0.11.0/modules/tubemq/img/test_scheme.png
new file mode 100644
index 0000000..fcf2087
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/test_scheme.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/test_summary.png b/versioned_docs/version-0.11.0/modules/tubemq/img/test_summary.png
new file mode 100644
index 0000000..9943b4e
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/test_summary.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-1.png
new file mode 100644
index 0000000..b6bc3e7
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-2.png
new file mode 100644
index 0000000..70466ee
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-3.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-3.png
new file mode 100644
index 0000000..4404414
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-broker-3.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-1.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-1.png
new file mode 100644
index 0000000..4a590b8
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-1.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-2.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-2.png
new file mode 100644
index 0000000..3481225
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-2.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-3.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-3.png
new file mode 100644
index 0000000..fdf2391
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-3.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-4.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-4.png
new file mode 100644
index 0000000..5d7d608
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-4.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-5.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-5.png
new file mode 100644
index 0000000..66028da
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-5.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-6.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-6.png
new file mode 100644
index 0000000..e6fe21e
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-add-topic-6.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-console-gui.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-console-gui.png
new file mode 100644
index 0000000..c2b4ea8
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-console-gui.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-consume-message.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-consume-message.png
new file mode 100644
index 0000000..1bb14a7
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-consume-message.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-send-message.png b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-send-message.png
new file mode 100644
index 0000000..c0ab65d
Binary files /dev/null and b/versioned_docs/version-0.11.0/modules/tubemq/img/tubemq-send-message.png differ
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/producer_example.md b/versioned_docs/version-0.11.0/modules/tubemq/producer_example.md
new file mode 100644
index 0000000..c5d0069
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/producer_example.md
@@ -0,0 +1,152 @@
+---
+title: Producer Example
+---
+
+## 1 Producer Example
+  TubeMQ provides two ways to initialize a session factory, TubeSingleSessionFactory and TubeMultiSessionFactory:
+  - TubeSingleSessionFactory creates only one session during its lifecycle; this is very useful in streaming scenarios.
+  - TubeMultiSessionFactory creates a new session on every call.
+
+### 1.1 TubeSingleSessionFactory
+#### 1.1.1 Send Message Synchronously
+
+    ```java
+    
+    public final class SyncProducerExample {
+    
+        public static void main(String[] args) throws Throwable {
+            final String masterHostAndPort = "localhost:8000";
+            final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+            final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+            final MessageProducer messageProducer = messageSessionFactory.createProducer();
+            final String topic = "test";
+            final String body = "This is a test message from single-session-factory!";
+            byte[] bodyData = StringUtils.getBytesUtf8(body);
+            messageProducer.publish(topic);
+            Message message = new Message(topic, bodyData);
+            MessageSentResult result = messageProducer.sendMessage(message);
+            if (result.isSuccess()) {
+                System.out.println("sync send message : " + message);
+            }
+            messageProducer.shutdown();
+        }
+    }
+    ```
+     
+#### 1.1.2 Send Message Asynchronously
+    ```java
+    public final class AsyncProducerExample {
+     
+        public static void main(String[] args) throws Throwable {
+            final String masterHostAndPort = "localhost:8000";
+            final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+            final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+            final MessageProducer messageProducer = messageSessionFactory.createProducer();
+            final String topic = "test";
+            final String body = "async send message from single-session-factory!";
+            byte[] bodyData = StringUtils.getBytesUtf8(body);
+            messageProducer.publish(topic);
+            final Message message = new Message(topic, bodyData);
+            messageProducer.sendMessage(message, new MessageSentCallback(){
+                @Override
+                public void onMessageSent(MessageSentResult result) {
+                    if (result.isSuccess()) {
+                        System.out.println("async send message : " + message);
+                    } else {
+                        System.out.println("async send message failed : " + result.getErrMsg());
+                    }
+                }
+                @Override
+                public void onException(Throwable e) {
+                    System.out.println("async send message error : " + e);
+                }
+            });
+            messageProducer.shutdown();
+        }
+
+    }
+    ```
+     
+#### 1.1.3 Send Message With Attributes
+```java
+public final class ProducerWithAttributeExample {
+
+    public static void main(String[] args) throws Throwable {
+        final String masterHostAndPort = "localhost:8000";
+        final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+        final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+        final MessageProducer messageProducer = messageSessionFactory.createProducer();
+        final String topic = "test";
+        final String body = "send message with attribute from single-session-factory!";
+        byte[] bodyData = StringUtils.getBytesUtf8(body);
+        messageProducer.publish(topic);
+        Message message = new Message(topic, bodyData);
+        // set a custom attribute
+        message.setAttrKeyVal("test_key", "test value");
+        // msgType is used for consumer filtering, and msgTime (accurate to the minute)
+        // is used as the pipe for send/receive statistics
+        SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMddHHmm");
+        message.putSystemHeader("test", sdf.format(new Date()));
+        messageProducer.sendMessage(message);
+        messageProducer.shutdown();
+    }
+}
+```
+
+### 1.2 TubeMultiSessionFactory
+
+```java
+public class MultiSessionProducerExample {
+
+    public static void main(String[] args) throws Throwable {
+        final int SESSION_FACTORY_NUM = 10;
+        final String masterHostAndPort = "localhost:8000";
+        final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+        final List<MessageSessionFactory> sessionFactoryList = new ArrayList<>(SESSION_FACTORY_NUM);
+        final ExecutorService sendExecutorService = Executors.newFixedThreadPool(SESSION_FACTORY_NUM);
+        final CountDownLatch latch = new CountDownLatch(SESSION_FACTORY_NUM);
+        for (int i = 0; i < SESSION_FACTORY_NUM; i++) {
+            TubeMultiSessionFactory tubeMultiSessionFactory = new TubeMultiSessionFactory(clientConfig);
+            sessionFactoryList.add(tubeMultiSessionFactory);
+            MessageProducer producer = tubeMultiSessionFactory.createProducer();
+            Sender sender = new Sender(producer, latch);
+            sendExecutorService.submit(sender);
+        }
+        latch.await();
+        sendExecutorService.shutdownNow();
+        for (MessageSessionFactory sessionFactory : sessionFactoryList) {
+            sessionFactory.shutdown();
+        }
+    }
+
+    private static class Sender implements Runnable {
+
+        private MessageProducer producer;
+
+        private CountDownLatch latch;
+
+        public Sender(MessageProducer producer, CountDownLatch latch) {
+            this.producer = producer;
+            this.latch = latch;
+        }
+
+        @Override
+        public void run() {
+            final String topic = "test";
+            try {
+                producer.publish(topic);
+                final byte[] bodyData = StringUtils.getBytesUtf8("This is a test message from multi-session factory");
+                Message message = new Message(topic, bodyData);
+                producer.sendMessage(message);
+                producer.shutdown();
+            } catch (Throwable ex) {
+                System.out.println("send message error : " + ex);
+            } finally {
+                latch.countDown();
+            }
+        }
+    }
+}
+```
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/quick_start.md b/versioned_docs/version-0.11.0/modules/tubemq/quick_start.md
new file mode 100644
index 0000000..b51dbb1
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/quick_start.md
@@ -0,0 +1,197 @@
+---
+title: Quick Start
+---
+## 1 Deploy and Start
+
+### 1.1 Configuration Example
+There're two components in the cluster: **Master** and **Broker**. The Master and Broker
+can be deployed on the same server or on different servers. In this example, we set up our cluster
+as below, with all services running on the same node. ZooKeeper must also be set up in your environment.
+
+| Role | TCP Port | TLS Port | Web Port | Comment |
+| ---- | -------- | -------- | -------- | ------- |
+| Master | 8099 | 8199 | 8080 | Metadata is stored at /stage/meta_data |
+| Broker | 8123 | 8124 | 8081 | Messages are stored at /stage/msg_data |
+| ZooKeeper | 2181 | | | Offsets are stored at /tubemq |
+
+### 1.2 Prerequisites
+- ZooKeeper Cluster
+
+After you extract the package file, here's the folder structure.
+```
+/INSTALL_PATH/inlong-tubemq-server/
+├── bin
+├── conf
+├── lib
+├── logs
+└── resources
+```
+
+### 1.3 Configure Master
+You can change the configurations in `conf/master.ini` according to your cluster information.
+- Master IP and Port
+```ini
+[master]
+hostName=YOUR_SERVER_IP                  // replace with your server IP
+port=8099
+webPort=8080
+metaDataPath=/stage/meta_data
+```
+
+- Access Authorization Token
+```ini
+confModAuthToken=abc                    // token used when configuring resources via the Web/API
+```
+
+- ZooKeeper Cluster
+```ini
+[zookeeper]                             // Master and Broker in the same cluster must use the same ZooKeeper environment and the same configuration
+zkNodeRoot=/tubemq
+zkServerAddr=localhost:2181             // multiple ZooKeeper addresses can be separated with ","
+```
+
+- Replication Strategy 
+```ini
+[replication]
+repGroupName=tubemqGroup1                // Masters in the same cluster must use the same group name; group names of different clusters must differ
+repNodeName=tubemqGroupNode1             // master node names within the same cluster must be unique
+repHelperHost=FIRST_MASTER_NODE_IP:9001  // helperHost is used for building the HA master group
+```
+
+- (Optional) Master High Availability
+
+In the example above, we run the services on a single node. In a real production environment, however, you
+need to run multiple master services on different servers for high availability. Here's
+an introduction to the availability levels.
+
+| HA Level | Master Number | Description |
+| -------- | ------------- | ----------- |
+| High | 3 masters | After any master crashes, the cluster metadata remains in a read/write state and new producers/consumers can be accepted. |
+| Medium | 2 masters | After one master crashes, the cluster metadata becomes read-only. Existing producers and consumers are not affected. |
+| Minimum | 1 master | After the master crashes, existing producers and consumers are not affected. |
+
+**Tip**: Please note that the master servers must be clock-synchronized.
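+
+As a minimal sketch of a three-master HA setup (the host addresses below are hypothetical), each master shares the same `repGroupName` and `repHelperHost` but uses its own distinct `repNodeName`:
+```ini
+[replication]                            // on the first master node
+repGroupName=tubemqGroup1                // identical on all three masters
+repNodeName=tubemqGroupNode1             // tubemqGroupNode2 / tubemqGroupNode3 on the other masters
+repHelperHost=192.168.0.1:9001           // all masters point at the first master node (hypothetical IP)
+```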
+
+
+### 1.4 Configure Broker
+You can change the configurations in `conf/broker.ini` according to your cluster information.
+- Broker IP and Port
+```ini
+[broker]
+brokerId=0
+hostName=YOUR_SERVER_IP                 // replace with your server IP
+port=8123
+webPort=8081
+```
+- Master Address
+```ini
+masterAddressList=MASTER_NODE_IP1:8099,MASTER_NODE_IP2:8099   // multiple addresses can be separated with ","
+```
+
+- Message Storage Path
+```ini
+primaryPath=/stage/msg_data
+```
+
+- ZooKeeper Cluster
+```ini
+[zookeeper]                             // Master and Broker in the same cluster must use the same ZooKeeper environment and the same configuration
+zkNodeRoot=/tubemq
+zkServerAddr=localhost:2181             // multiple ZooKeeper addresses can be separated with ","
+```
+
+### 1.5 Start Master
+Please go to the `bin` folder and run this command to start
+the master service.
+```bash
+./tubemq.sh master start
+```
+
+You should be able to access `http://your-master-ip:8080` to see the
+web GUI now.
+
+![TubeMQ Console GUI](img/tubemq-console-gui.png)
+
+#### 1.5.1 Configure Broker Metadata
+Before we start a broker service, we need to configure it on the master web GUI first. Go to the `Broker List` page, click `Add Single Broker`, and input the new broker information.
+
+![Add Broker 1](img/tubemq-add-broker-1.png)
+
+In this example, we only need to input the broker IP and authToken:
+1. broker IP: the broker server IP
+2. authToken: the token pre-configured in the `confModAuthToken` field of your
+`conf/master.ini` file.
+
+Click the online link to activate the newly added broker.
+
+![Add Broker 2](img/tubemq-add-broker-2.png)
+
+### 1.6 Start Broker
+Please go to the `bin` folder and run this command to start the broker service
+```bash
+./tubemq.sh broker start
+```
+
+Refresh the GUI broker list page, and you can see that the broker is now registered.
+
+After the sub-state of the broker changes to `idle`, we can add topics to it.
+
+![Add Broker 3](img/tubemq-add-broker-3.png)
+
+## 2 Quick Start
+### 2.1 Add Topic
+We can add or manage the cluster topics on the web GUI. To add a new topic, go to the
+topic list page and click the add new topic button.
+
+![Add Topic 1](img/tubemq-add-topic-1.png)
+
+Then select the brokers to which you want to deploy the topic.
+
+![Add Topic 5](img/tubemq-add-topic-5.png)
+
+We can see that the publish and subscribe states of the newly added topic are still grey. We need
+to go to the broker list page to reload the broker configuration.
+
+![Add Topic 6](img/tubemq-add-topic-6.png)
+
+![Add Topic 2](img/tubemq-add-topic-2.png)
+
+When the broker sub-state changes to `idle`, go to the topic list page. We can see
+that the topic's publish/subscribe state is now active.
+
+![Add Topic 3](img/tubemq-add-topic-3.png)
+
+![Add Topic 4](img/tubemq-add-topic-4.png)
+
+Now we can use the topic to send messages.
+
+### 2.2 Run Example
+Now we can use the `demo` topic created before to test our cluster.
+
+#### 2.2.1 Produce Messages
+
+Please don't forget to replace `YOUR_MASTER_IP:port` with your server IP and port, then start the producer.
+
+```bash
+cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
+./bin/tubemq-producer-test.sh --master-servers YOUR_MASTER_IP1:port,YOUR_MASTER_IP2:port --topicName demo
+```
+
+From the log, we can see that the messages are sent out.
+![Demo 1](img/tubemq-send-message.png)
+
+#### 2.2.2 Consume Messages
+
+Please don't forget to replace `YOUR_MASTER_IP:port` with your server IP and port, then start the consumer.
+```bash
+cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
+./bin/tubemq-consumer-test.sh --master-servers YOUR_MASTER_IP1:port,YOUR_MASTER_IP2:port --topicName demo --groupName test_consume
+```
+
+From the log, we can see the messages received by the consumer.
+![Demo 2](img/tubemq-consume-message.png)
+
+## 3 The End
+At this point, the compilation, deployment, system configuration, startup, production, and consumption of TubeMQ are complete. If you need to dig deeper, please check the relevant content in "TubeMQ HTTP API" and make the corresponding configuration settings.
+
+---
+
+
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/tubemq-manager/quick_start.md b/versioned_docs/version-0.11.0/modules/tubemq/tubemq-manager/quick_start.md
new file mode 100644
index 0000000..cf27a7c
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/tubemq-manager/quick_start.md
@@ -0,0 +1,125 @@
+## Deploy TubeMQ Manager
+All deployment files are under the `inlong-tubemq-manager` directory.
+
+### configuration
+- Create a `tubemanager` database and an account in MySQL.
+- Add the MySQL information to `conf/application.properties`:
+
+```ini
+# mysql configuration for manager
+spring.datasource.url=jdbc:mysql://mysql_ip:mysql_port/tubemanager
+spring.datasource.username=mysql_username
+spring.datasource.password=mysql_password
+```
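+
+As a minimal sketch of this preparation step (the account name and password below are hypothetical), the database and account could be created like this:
+```sql
+-- create the database used by TubeMQ Manager
+CREATE DATABASE tubemanager;
+-- create a hypothetical account and grant it access to that database
+CREATE USER 'tube_manager'@'%' IDENTIFIED BY 'your_password';
+GRANT ALL PRIVILEGES ON tubemanager.* TO 'tube_manager'@'%';
+FLUSH PRIVILEGES;
+```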
+
+### start service
+
+``` bash
+$ bin/start-manager.sh 
+```
+
+### register TubeMQ cluster
+
+    vim bin/init-tube-cluster.sh
+
+replace the parameters below:
+```
+TUBE_MANAGER_IP=  
+TUBE_MANAGER_PORT=   
+TUBE_MASTER_IP=   
+TUBE_MASTER_PORT=
+TUBE_MASTER_WEB_PORT=
+TUBE_MASTER_TOKEN=
+```
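+
+For illustration, a filled-in version might look like this (the IPs and the manager port below are hypothetical examples; the master ports and token match the defaults shown in the quick start guide):
+```
+TUBE_MANAGER_IP=127.0.0.1
+TUBE_MANAGER_PORT=8089
+TUBE_MASTER_IP=127.0.0.1
+TUBE_MASTER_PORT=8099
+TUBE_MASTER_WEB_PORT=8080
+TUBE_MASTER_TOKEN=abc
+```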
+
+then run:
+```
+sh bin/init-tube-cluster.sh
+```
+
+This will create a cluster with id = 1. Note that this operation should not be executed repeatedly.
+
+
+### Appendix: Other Operation Interfaces
+
+#### cluster
+Query the full data of clusterId and clusterName (GET)
+
+Example
+
+【GET】 /v1/cluster
+
+return value
+
+    {
+    "errMsg": "",
+    "errCode": 0,
+    "result": true,
+    "data": "[{\"clusterId\":1,\"clusterName\":\"1124\", \"masterIp\":\"127.0.0.1\"}]"
+    }
+
+#### topic
+
+##### add topicTask
+
+parameter:
+
+    type	 (required) request type, fill in the field: op_query
+    clusterId	(required) cluster id of the request
+    addTopicTasks (required) topicTasks, the create-topic task list in json
+    user	(required) reserved for user verification once access authorization is enabled
+
+addTopicTasks currently only includes one field, topicName.
+After the region design is introduced, a new region field will be added to represent brokers in different regions.
+Currently, one addTopicTask creates the topic on all brokers in the cluster.
+
+
+addTopicTasks is a list of the following objects and can carry multiple create-topic requests:
+
+    topicName (required) topic name
+
+Example
+
+【POST】 /v1/task?method=addTopicTask
+
+    {
+    "clusterId": "1",
+    "addTopicTasks": [{"topicName": "1"}],
+    "user": "test"
+    }
+
+return json
+
+    {
+    "errMsg": "There are topic tasks [a12322] already in adding status",
+    "errCode": 200,
+    "result": false,
+    "data": ""
+    }
+
+If `result` is false, the task creation failed.
+
+
+##### Query whether a topic is created successfully (i.e., writable by the business)
+
+    clusterId	(required) cluster id of the request
+    topicName   (required) topic name to query
+    user	(required) reserved for user verification once access authorization is enabled
+
+Example
+
+【POST】 /v1/topic?method=queryCanWrite
+
+    {
+    "clusterId": "1",
+    "topicName": "1",
+    "user": "test"
+    }
+
+return json
+
+    { "result":true, "errCode":0, "errMsg":"OK", }
+    { "result":false, "errCode": 100, "errMsg":"topic test is not writable"}
+    { "result":false, "errCode": 101, "errMsg":"no such topic in master"}
+
+If `result` is false, the topic is not writable.
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md b/versioned_docs/version-0.11.0/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
new file mode 100644
index 0000000..cbeff56
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
@@ -0,0 +1,239 @@
+---
+title: Performance testing of TubeMQ vs Kafka
+---
+
+## 1 Background
+TubeMQ is a distributed message queue middleware developed in-house by Tencent Big Data. Its system architecture is inspired by [Apache Kafka](http://kafka.apache.org/). The implementation takes a fully adaptive approach, with many optimizations developed from production practice, such as partition management, the assignment mechanism, a redesigned node communication flow, and a self-developed high-performance underlying RPC module.
+These make TubeMQ robust and give it higher throughput while guaranteeing real-time delivery and consistency. Given the mainstream message middleware in use today, we used Kafka as the reference for comparative performance testing of the two systems under common application scenarios.
+
+## 2 Test Scheme
+The following test scheme was designed according to real application scenarios:
+![](img/perf_scheme.png)
+
+## 3 Test Conclusions
+Described with characters from "The Avengers":
+
+Character|Test Scenario|Highlights
+:---:|:---:|---
+The Flash|Scenario 5|Fast (end-to-end data latency: TubeMQ 10ms vs Kafka 250ms)
+Hulk|Scenarios 3 and 4|Resilience (as topics grow from 100 and 200 to 500 and 1000, TubeMQ's capability holds, with throughput dropping only slightly as load rises, vs Kafka's throughput dropping clearly and becoming unstable; with filtered consumption, TubeMQ's in/out traffic rises, clearly beating Kafka, whose inbound traffic and throughput both drop)
+Spider-Man|Scenario 8|At ease in every scenario (in comparison tests across machine types, TubeMQ's throughput stays stable, vs Kafka performing worse on the BX1 machine type)
+Iron Man|Scenarios 2, 3, and 6|Automation (at runtime, TubeMQ can dynamically adjust system settings and consumption behavior to improve performance)
+     
+From the detailed data analysis:
+1. With a single topic and a single instance, TubeMQ's throughput is far below Kafka's; with a single topic and multiple instances, TubeMQ at 4 instances catches up with Kafka at 5 partitions, TubeMQ's throughput keeps growing with the instance count while Kafka's drops instead of rising, and TubeMQ can raise throughput dynamically at runtime by adjusting parameters;
+2. With multiple topics and multiple instances, TubeMQ's throughput stays within a very stable range with very low resource consumption, including file handles and network connections; Kafka's throughput drops clearly as topics increase and its resource consumption grows sharply; on SATA storage, as the machine configuration improves, TubeMQ's throughput scales right up to the disk bottleneck while Kafka behaves unstably; on the CG1 machine type with SSDs, Kafka's throughput is better than TubeMQ's;
+3. With filtered consumption, TubeMQ greatly reduces server-side outbound traffic, and since filtered consumption uses fewer resources than full consumption, it in turn boosts TubeMQ's throughput; Kafka has no server-side filtering, so its outbound traffic equals that of full consumption, with no noticeable savings;
+4. Resource consumption differs: TubeMQ uses sequential writes with random reads, so its CPU consumption is high; Kafka uses sequential writes with block reads, so its CPU consumption is low, but its consumption of other resources such as file handles and network connections is very high. In a real SaaS operating environment, Kafka hits system bottlenecks due to its ZooKeeper dependency, and with many producers, consumers, and brokers it is constrained in more places, such as file handles and network connection counts, and consumes even more resources;
+
+## 4 Test Environment and Configuration
+### 4.1 Software Versions and Deployment
+
+**Role**|**TubeMQ**|**Kafka**
+:---:|---|---
+**Software version**|tubemq-3.8.0|Kafka\_2.11-0.10.2.0
+**ZooKeeper deployment**|single machine, not co-located with the Broker|single machine, not co-located with the Broker
+**Broker deployment**|single machine|single machine
+**Master deployment**|single machine, not co-located with the Broker|N/A
+**Producer**|1 M10 + 1 CG1|1 M10 + 1 CG1
+**Consumer**|6 TS50 10GE machines|6 TS50 10GE machines
+
+### 4.2 Broker Hardware Configuration
+
+**Machine type**|Configuration|**Notes**
+:---:|---|---
+**TS60**|(E5-2620v3\*2/16G\*4/SATA3-2T\*12/SataSSD-80G\*1/10GE\*2) Pcs|Unless otherwise stated, all comparison tests were run on the TS60 machine type
+**BX1-10G**|SA5212M5(6133\*2/16G\*16/4T\*12/10GE\*2) Pcs|
+**CG1-10G**|CG1-10G\_6.0.2.12\_RM760-FX(6133\*2/16G\*16/5200-480G\*6 RAID/10GE\*2)-ODM Pcs |
+
+### 4.3 Broker System Configuration
+
+| **Item** | **TubeMQ Broker** | **Kafka Broker** |
+|:---:|---|---|
+| **Log storage** | SATA or SSD disks under RAID10 | SATA or SSD disks under RAID10 |
+| **Startup arguments** | BROKER_JVM_ARGS="-Dcom.sun.management.jmxremote -server -Xmx24g -Xmn8g -XX:SurvivorRatio=6 -XX:+UseMembar -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark -XX:ParallelCMSThreads=4 -XX:+UseCMSCompactAtFullCollection -verbose:gc -Xloggc:$BASE_DIR/logs/gc.log.`date +%Y-%m-%d-%H-%M-%S` -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=75 -XX:CMSFullGCsBeforeCompaction=1 -Dsun.net [...]
+| **Config file** | Changes to the broker.ini file of tubemq-3.8.0: consumerRegTimeoutMs=35000<br/>tcpWriteServiceThread=50<br/>tcpReadServiceThread=50<br/>primaryPath set to a SATA-disk log directory|Changes to the server.properties file of kafka_2.11-0.10.2.0:<br/>log.flush.interval.messages=5000<br/>log.flush.interval.ms=10000<br/>log.dirs set to a SATA-disk log directory<br/>socket.send.buffer.bytes=1024000<br/>socket.receive.buffer.bytes=1024000<br/>socket.request.max.bytes=2147483600<br/>log.segment.bytes=1073741824<br/>num.network.threads=25<br/>num.io [...]
+| **Others** | Unless specified in a test case, each topic is created with:<br/>memCacheMsgSizeInMB=5<br/>memCacheFlushIntvl=20000<br/>memCacheMsgCntInK=10 <br/>unflushThreshold=5000<br/>unflushInterval=10000<br/>unFlushDataHold=5000 | Set in client code:<br/>Producer:<br/>props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br/>props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br/>props.put("linger.ms", "200");<br/>props.put("block.on.buffer.full", false);< [...]
+              
+## 5 Test Scenarios and Conclusions
+
+### 5.1 Scenario 1: baseline; single topic, one-in-two-out model, using different consumption modes and message sizes while scaling partitions horizontally; TubeMQ vs Kafka
+ ![](img/perf_scenario_1.png)
+
+#### 5.1.1 Conclusions
+
+With a single topic and varying partition counts:
+1. TubeMQ's throughput does not change with the partition count; since TubeMQ uses sequential writes with random reads, its single-instance throughput is lower than Kafka's and its CPU usage is higher;
+2. Kafka's throughput drops slightly as partitions increase, with very low CPU usage;
+3. TubeMQ partitions are logical, so adding partitions does not affect throughput; Kafka partitions add physical files, yet adding partitions actually lowers the in/out traffic;
+
+#### 5.1.2 Metrics
+ ![](img/perf_scenario_1_index.png)
+
+### 5.2 Scenario 2: single topic, one-in-two-out model, fixed message size, scaling the instance count horizontally; TubeMQ vs Kafka
+ ![](img/perf_scenario_2.png)
+
+#### 5.2.1 Conclusions
+
+Combining the test data from scenarios 1 and 2:
+
+1. TubeMQ's throughput grows with the instance count, matching Kafka at 4 instances, with lower disk IO usage and higher CPU usage than Kafka;
+2. TubeMQ's consumption mode affects system throughput: the memory-read mode (301) performs below the file-read mode (101) but lowers message latency;
+3. Kafka's throughput did not improve as expected as partition instances increased;
+4. After TubeMQ adds instances (physical files) equivalent to Kafka's, its throughput rises accordingly, reaching and surpassing Kafka's
+    5-partition setup at 4 instances; TubeMQ can adjust the data read mode to business or system configuration needs and raise system throughput dynamically; Kafka's inbound traffic drops as partitions increase;
+
+#### 5.2.2 Metrics
+
+**Note 1:** The scenarios below are all single-topic tests with different partition/instance counts and read modes; each message is 1K long;
+
+**Note 2:**
+The read mode is set via admin\_upd\_def\_flow\_control\_rule by setting qryPriorityId to the corresponding value.
+ ![](img/perf_scenario_2_index.png)
+
+### 5.3 Scenario 3: multi-topic scenario with fixed message size, instance count, and partition count, examining TubeMQ vs Kafka with 100, 200, 500, and 1000 topics
+ ![](img/perf_scenario_3.png)
+
+#### 5.3.1 Conclusions
+
+From the multi-topic tests:
+
+1.  As topics increase, TubeMQ's production and consumption performance holds at a steady level without large traffic fluctuations, and its file handle, memory, and network connection usage stays modest (about 7500 file handles and 150 network connections at 1K topics), though its CPU usage is relatively high;
+2.  After switching the consumption mode from memory consumption to file consumption, TubeMQ's throughput grows considerably and its CPU usage drops, allowing differentiated service for businesses with different performance requirements;
+3.  As topics increase, Kafka's throughput drops noticeably and its traffic fluctuates sharply; long runs show consumption lag and a clear downward throughput trend, and its memory, file handle, and network connection usage is very high (at the 1K-topic configuration, network connections reach 12K and file handles reach 45K);
+4.  Comparing the data, TubeMQ runs more stably than Kafka: throughput stays steady without degrading over long runs and resource usage is low, though CPU usage needs to be addressed in later versions;
+
+#### 5.3.2 Metrics
+
+**Note:** In the scenarios below, the message size is 1K and the partition count is 10.
+ ![](img/perf_scenario_3_index.png)
+
+### 5.4 Scenario 4: 100 topics, one inbound stream, one full outbound stream, and five filtered outbound streams: one Pull consumer of the full topics; filtered consumption uses 5 different consumer groups, filtering out 10% of the message content from the same 20 topics
+
+#### 5.4.1 Conclusions
+
+1.  TubeMQ uses server-side filtering, so the outbound traffic differs clearly from the inbound traffic;
+2.  TubeMQ's server-side filtering frees more resources for production, so production performance improves over the unfiltered case;
+3.  Kafka uses client-side filtering: inbound traffic does not improve, outbound traffic is roughly twice the inbound traffic, and both are unstable;
+
+#### 5.4.2 Metrics
+
+**Note:** In the scenarios below, there are 100 topics, the message size is 1K, and the partition count is 10.
+ ![](img/perf_scenario_4_index.png)
+
+### 5.5 Scenario 5: data consumption latency comparison between TubeMQ and Kafka
+
+| System | Latency | Ping latency |
+|---|---|---|
+| TubeMQ | 90% of data within about 10ms | C->B: 0.05ms ~ 0.13ms, P->B: 2.40ms ~ 2.42ms |
+| Kafka | 90% concentrated around 250ms | C->B: 0.05ms ~ 0.07ms, P->B: 2.95ms ~ 2.96ms |
+
+Note: The TubeMQ consumer has a wait queue for the data-not-found case that occurs when consumption catches up with production, with a default wait of 200ms. For this test, the TubeMQ consumer's pull wait (ConsumerConfig.setMsgNotFoundWaitPeriodMs()) was adjusted to 10ms, or the flow-control policy was set to 10ms.
+
+### 5.6 Scenario 6: effect of adjusting a topic's memory cache size (memCacheMsgSizeInMB) on throughput
+
+#### 5.6.1 Conclusions
+
+1.  Adjusting a topic's memory cache size in TubeMQ has a positive effect on throughput; in actual use it can be tuned reasonably to the machine;
+2.  In practice, a bigger memory setting is not always better; the value needs to be set reasonably;
+
+#### 5.6.2 Metrics
+
+**Note:** In the scenarios below, the consumption mode is Pull with memory reads (301), and each message is 1K long.
+ ![](img/perf_scenario_6_index.png)
+ 
+
+### 5.7 Scenario 7: behavior of the two systems under severe consumption lag
+
+#### 5.7.1 Conclusions
+
+1.  Under severe consumption lag, both TubeMQ and Kafka see production and consumption blocked by spiking disk IO;
+2.  On systems with SSDs, TubeMQ can use SSD-offloaded consumption to win back part of the production and consumption inbound traffic;
+3.  Per the release plan, TubeMQ's SSD offload consumption feature is not in its final form; later versions will improve it further to reach the most suitable operating mode;
+
+#### 5.7.2 Metrics
+ ![](img/perf_scenario_7.png)
+
+
+### 5.8 Scenario 8: evaluating the two systems across machine types
+ ![](img/perf_scenario_8.png)
+      
+#### 5.8.1 Conclusions
+
+1.  TubeMQ achieves higher throughput on the BX1 machine type than on TS60; once IO util hits its bottleneck it cannot rise further, and on CG1 its throughput reaches an even higher level than on BX1;
+2.  Kafka's throughput on BX1 is unstable and lower than measured on TS60, and reaches its highest on CG1, saturating the 10GE NIC;
+3.  On SATA storage, TubeMQ's metrics improve clearly as the hardware configuration improves; Kafka's metrics may drop rather than rise as the machine type improves;
+4.  On SSD storage, Kafka's metrics are the best, and TubeMQ's fall short of Kafka's;
+5.  CG1's data disk is small (only 2.2T); under RAID 10 it fills within 90 minutes, so the two systems could not be tested over a long run.
+
+#### 5.8.2 Metrics
+
+**Note 1:** The scenarios below all use 500 topics, 10 partitions, and 1K messages;
+
+**Note 2:** TubeMQ uses the 301 memory-read consumption mode;
+ ![](img/perf_scenario_8_index.png)
+
+## 6 Appendix
+### 6.1 Appendix 1: resource usage charts on different machine types
+#### 6.1.1 BX1 machine type
+![](img/perf_appendix_1_bx1_1.png)
+![](img/perf_appendix_1_bx1_2.png)
+![](img/perf_appendix_1_bx1_3.png)
+![](img/perf_appendix_1_bx1_4.png)
+
+#### 6.1.2 CG1 machine type
+![](img/perf_appendix_1_cg1_1.png)
+![](img/perf_appendix_1_cg1_2.png)
+![](img/perf_appendix_1_cg1_3.png)
+![](img/perf_appendix_1_cg1_4.png)
+
+### 6.2 Appendix 2: resource usage charts for the multi-topic tests
+
+#### 6.2.1 100 topics
+![](img/perf_appendix_2_topic_100_1.png)
+![](img/perf_appendix_2_topic_100_2.png)
+![](img/perf_appendix_2_topic_100_3.png)
+![](img/perf_appendix_2_topic_100_4.png)
+![](img/perf_appendix_2_topic_100_5.png)
+![](img/perf_appendix_2_topic_100_6.png)
+![](img/perf_appendix_2_topic_100_7.png)
+![](img/perf_appendix_2_topic_100_8.png)
+![](img/perf_appendix_2_topic_100_9.png)
+ 
+#### 6.2.2 200 topics
+![](img/perf_appendix_2_topic_200_1.png)
+![](img/perf_appendix_2_topic_200_2.png)
+![](img/perf_appendix_2_topic_200_3.png)
+![](img/perf_appendix_2_topic_200_4.png)
+![](img/perf_appendix_2_topic_200_5.png)
+![](img/perf_appendix_2_topic_200_6.png)
+![](img/perf_appendix_2_topic_200_7.png)
+![](img/perf_appendix_2_topic_200_8.png)
+![](img/perf_appendix_2_topic_200_9.png)
+
+#### 6.2.3 500 topics
+![](img/perf_appendix_2_topic_500_1.png)
+![](img/perf_appendix_2_topic_500_2.png)
+![](img/perf_appendix_2_topic_500_3.png)
+![](img/perf_appendix_2_topic_500_4.png)
+![](img/perf_appendix_2_topic_500_5.png)
+![](img/perf_appendix_2_topic_500_6.png)
+![](img/perf_appendix_2_topic_500_7.png)
+![](img/perf_appendix_2_topic_500_8.png)
+![](img/perf_appendix_2_topic_500_9.png)
+
+#### 6.2.4 1000 topics
+![](img/perf_appendix_2_topic_1000_1.png)
+![](img/perf_appendix_2_topic_1000_2.png)
+![](img/perf_appendix_2_topic_1000_3.png)
+![](img/perf_appendix_2_topic_1000_4.png)
+![](img/perf_appendix_2_topic_1000_5.png)
+![](img/perf_appendix_2_topic_1000_6.png)
+![](img/perf_appendix_2_topic_1000_7.png)
+![](img/perf_appendix_2_topic_1000_8.png)
+![](img/perf_appendix_2_topic_1000_9.png)
+
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/website/_category_.json b/versioned_docs/version-0.11.0/modules/website/_category_.json
new file mode 100644
index 0000000..b7dc82c
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/website/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Website",
+  "position": 2
+}
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/modules/website/quick_start.md b/versioned_docs/version-0.11.0/modules/website/quick_start.md
new file mode 100644
index 0000000..8eeaeda
--- /dev/null
+++ b/versioned_docs/version-0.11.0/modules/website/quick_start.md
@@ -0,0 +1,55 @@
+---
+title: Build && Deployment
+---
+
+## About WebSite
+This is the website console for using the [Apache InLong incubator](https://github.com/apache/incubator-inlong).
+
+## Build
+```
+mvn package -DskipTests -Pdocker -pl inlong-website
+```
+
+## Run
+```
+docker run -d --name website -e MANAGER_API_ADDRESS=127.0.0.1:8083 -p 80:80 inlong/website
+```
+
+## Guide For Developer
+You should check that `nodejs >= 12.0` is installed.
+
+In the project, you can run some built-in commands:
+
+If `node_modules` is not installed, you should first run `npm install` or `yarn install`.
+
+Use `npm run dev` or `yarn dev` to run the application in development mode.
+
+If the server runs successfully, the browser will open [http://localhost:8080](http://localhost:8080) automatically.
+
+If you edit the code, the page will reload.
+You will also see any lint errors in the console.
+
+The start of the web server depends on the `manager api` interface of the back-end server.
+
+You should start the backend server first, and then set the variable `target` in `/inlong-website/src/setupProxy.js` to the address of the api service.
+
+### Test
+
+Run `npm test` or `yarn test`
+
+This starts the test runner in interactive watch mode.
+For more information, see the section on [Running Tests](https://create-react-app.dev/docs/running-tests/).
+
+### Build
+
+First, make sure that the project has run `npm install` or `yarn install` to install `node_modules`.
+
+Run `npm run build` or `yarn build`.
+
+This builds the application for production into the `build` folder.
+The production build delivers better page performance.
+
+After the build, the code is minified and the file names include hash values.
+Your application is ready to be deployed!
+
+For details, see the section on [deployment](https://create-react-app.dev/docs/deployment/).
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/user_guide/_category_.json b/versioned_docs/version-0.11.0/user_guide/_category_.json
new file mode 100644
index 0000000..6ee9555
--- /dev/null
+++ b/versioned_docs/version-0.11.0/user_guide/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "User Guide",
+  "position": 1
+}
\ No newline at end of file
diff --git a/versioned_docs/version-0.11.0/user_guide/example.md b/versioned_docs/version-0.11.0/user_guide/example.md
new file mode 100644
index 0000000..140269b
--- /dev/null
+++ b/versioned_docs/version-0.11.0/user_guide/example.md
@@ -0,0 +1,104 @@
+---
+title: Hive Example
+sidebar_position: 3
+---
+
+Here we use a simple example to help you experience InLong by Docker.
+
+## Install Hive
+Hive is a necessary component. If you don't have Hive on your machine, we recommend using Docker to install it. Details can be found [here](https://github.com/big-data-europe/docker-hive).
+
+> Note that if you use Docker, you need to add a port mapping `8020:8020`, because it's the port of HDFS DefaultFS, and we need to use it later.
+
+## Install InLong
+Before we begin, we need to install InLong. Here we provide two ways:
+1. Install InLong with Docker according to the [instructions here](https://github.com/apache/incubator-inlong/tree/master/docker/docker-compose). (Recommended)
+2. Install the InLong binary according to the [instructions here](./quick_start.md).
+
+## Create a data access
+After deployment, we first enter the "Data Access" interface and click "Create an Access" in the upper right corner to create a new data access, filling in the business information as shown in the figure below.
+
+<img src="/img/create-business.png" align="center" alt="Create Business"/>
+
+Then we click the next button, and fill in the stream information as shown in the figure below.
+
+<img src="/img/create-stream.png" align="center" alt="Create Stream"/>
+
+Note that the message source is "File", and we don't need to create a message source manually.
+
+Then we fill in the following information in the "data information" column below.
+
+<img src="/img/data-information.png" align="center" alt="Data Information"/>
+
+Then we select Hive in the data flow and click "Add" to add Hive configuration
+
+<img src="/img/hive-config.png" align="center" alt="Hive Config"/>
+
+Note that the target table does not need to be created in advance, as InLong Manager will automatically create the table for us after the access is approved. Also, please use connection test to ensure that InLong Manager can connect to your Hive.
+
+Then we click the "Submit for Approval" button; the access will be created successfully and enter the approval state.
+
+## Approve the data access
+Then we enter the "Approval Management" interface and click "My Approval" to approve the data access that we just applied for.
+
+At this point, the data access has been created successfully. We can see that the corresponding table has been created in Hive, and we can see that the corresponding topic has been created successfully in the management GUI of TubeMQ.
+
+## Configure the agent
+Here we use `docker exec` to enter the container of the agent and configure it.
+```
+$ docker exec -it agent sh
+```
+
+Then we create a `.inlong` directory and a new file named `groupId.local` (where groupId is the group id shown on the data access in inlong-manager, so here it is `b_test.local`), and fill in the DataProxy configuration as follows.
+```
+$ mkdir .inlong
+$ cd .inlong
+$ touch b_test.local
+$ echo '{"cluster_id":1,"isInterVisit":1,"size":1,"address": [{"port":46801,"host":"dataproxy"}], "switch":0}' >> b_test.local
+```
+
+Then we exit the container, and use `curl` to make a request.
+```
+curl --location --request POST 'http://localhost:8008/config/job' \
+--header 'Content-Type: application/json' \
+--data '{
+  "job": {
+    "dir": {
+      "path": "",
+      "pattern": "/data/collect-data/test.log"
+    },
+    "trigger": "org.apache.inlong.agent.plugin.trigger.DirectoryTrigger",
+    "id": 1,
+    "thread": {
+      "running": {
+        "core": "4"
+      }
+    },
+    "name": "fileAgentTest",
+    "source": "org.apache.inlong.agent.plugin.sources.TextFileSource",
+    "sink": "org.apache.inlong.agent.plugin.sinks.ProxySink",
+    "channel": "org.apache.inlong.agent.plugin.channel.MemoryChannel"
+  },
+  "proxy": {
+    "groupId": "b_test",
+    "streamId": "test_stream"
+  },
+  "op": "add"
+}'
+```
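If the request fails, a common cause is malformed JSON in the payload. As a sketch (assuming `python3` is available on the host), you can save the job definition to a file and validate it locally before POSTing:

```shell
# Save the job definition (same fields as the curl payload above) and
# check that it is well-formed JSON before sending it to the agent.
cat > job.json <<'EOF'
{
  "job": {
    "dir": {"path": "", "pattern": "/data/collect-data/test.log"},
    "trigger": "org.apache.inlong.agent.plugin.trigger.DirectoryTrigger",
    "id": 1,
    "thread": {"running": {"core": "4"}},
    "name": "fileAgentTest",
    "source": "org.apache.inlong.agent.plugin.sources.TextFileSource",
    "sink": "org.apache.inlong.agent.plugin.sinks.ProxySink",
    "channel": "org.apache.inlong.agent.plugin.channel.MemoryChannel"
  },
  "proxy": {"groupId": "b_test", "streamId": "test_stream"},
  "op": "add"
}
EOF
python3 -m json.tool job.json > /dev/null && echo "job.json is valid JSON"
```

You can then POST the validated file with `curl --data @job.json` instead of an inline body, which avoids shell-quoting mistakes.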
+
+At this point, the agent is configured successfully.
+Next, create a new file `collect-data/test.log` and add content to it to trigger the agent to send data to the DataProxy.
+
+```
+$ touch collect-data/test.log
+$ echo 'test,24' >> collect-data/test.log
+```
+
+Then we can observe the logs of agent and dataproxy, and we can see that the relevant data has been sent successfully.
+
+```
+$ docker logs agent
+$ docker logs dataproxy
+```
+
diff --git a/versioned_docs/version-0.11.0/user_guide/quick_start.md b/versioned_docs/version-0.11.0/user_guide/quick_start.md
new file mode 100644
index 0000000..218dd2b
--- /dev/null
+++ b/versioned_docs/version-0.11.0/user_guide/quick_start.md
@@ -0,0 +1,76 @@
+---
+title: Quick Start
+sidebar_position: 1
+---
+
+This section contains a quick start guide to help you get started with Apache InLong.
+
+## Overall architecture
+<img src="/img/inlong_architecture.png" align="center" alt="Apache InLong"/>
+
+The overall architecture of [Apache InLong](https://inlong.apache.org) (incubating) is shown above. It is a one-stop data streaming platform that provides automated, secure, distributed, and efficient data publishing and subscription capabilities, helping you easily build stream-based data applications.
+
+InLong (应龙) is a divine beast in Chinese mythology that guides rivers into the sea; it serves as a metaphor for the InLong system reporting streams of data.
+
+InLong was originally built at Tencent, where it has served online businesses for more than 8 years, supporting massive data (over 40 trillion records per day) reporting services under big data scenarios. The entire platform integrates 5 modules: data collection, aggregation, caching, sorting, and management. Through this system, the business only needs to provide data sources, data service quality, data landing clusters and data landing formats, that is, data can be continu [...]
+
+
+## Compile
+- Java [JDK 8](https://adoptopenjdk.net/?variant=openjdk8)
+- Maven 3.6.1+
+
+```
+$ mvn clean install -DskipTests
+```
+(Optional) Compile using docker image:
+```
+$ docker pull maven:3.6-openjdk-8
+$ docker run -v `pwd`:/inlong  -w /inlong maven:3.6-openjdk-8 mvn clean install -DskipTests
+```
+After compiling successfully, you can find the distribution file in `tar.gz` format under `inlong-distribution/target`; it includes the following directories:
+```
+inlong-agent
+inlong-dataproxy
+inlong-dataproxy-sdk
+inlong-manager-web
+inlong-sort
+inlong-tubemq-manager
+inlong-tubemq-server
+inlong-website
+```
+
+## Environment Requirements
+- ZooKeeper 3.5+
+- Hadoop 2.10.x and Hive 2.3.x
+- MySQL 5.7+
+- Flink 1.9.x
+
+## Deploy InLong TubeMQ Server
+[Deploy InLong TubeMQ Server](modules/tubemq/quick_start.md)
+
+## Deploy InLong TubeMQ Manager
+[Deploy InLong TubeMQ Manager](modules/tubemq/tubemq-manager/quick_start.md)
+
+## Deploy InLong Manager
+[Deploy InLong Manager](modules/manager/quick_start.md)
+
+## Deploy InLong WebSite
+[Deploy InLong WebSite](modules/website/quick_start.md)
+
+## Deploy InLong Sort
+[Deploy InLong Sort](modules/sort/quick_start.md)
+
+## Deploy InLong DataProxy
+[Deploy InLong DataProxy](modules/dataproxy/quick_start.md)
+
+## Deploy InLong DataProxy-SDK
+[Deploy InLong DataProxy-SDK](modules/dataproxy-sdk/quick_start.md)
+
+## Deploy InLong Agent
+[Deploy InLong Agent](modules/agent/quick_start.md)
+
+## Business configuration
+[How to configure a new business](docs/user_guide/user_manual)
+
+## Data report verification
+At this stage, you can collect data through the file agent and verify whether the received data is consistent with the sent data in the specified Hive table.
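A minimal consistency check is to export the rows from the Hive table to a local file and diff them against the data the agent collected. The file names below are placeholders, and this sketch assumes you have already exported the Hive rows (for example via `beeline`) to a CSV file:

```shell
# Placeholder data standing in for what the agent collected and what Hive stored.
printf 'test,24\n' > sent.log          # lines appended to collect-data/test.log
printf 'test,24\n' > hive-export.csv   # rows exported from the Hive table
if diff -q sent.log hive-export.csv > /dev/null; then
  echo "data consistent"
else
  echo "data mismatch"
fi
```

In practice, replace the two `printf` lines with the real collected file and the real Hive export; `diff` then tells you immediately whether any records were lost or altered in transit.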
diff --git a/versioned_docs/version-0.11.0/user_guide/user_manual.md b/versioned_docs/version-0.11.0/user_guide/user_manual.md
new file mode 100644
index 0000000..83bdf33
--- /dev/null
+++ b/versioned_docs/version-0.11.0/user_guide/user_manual.md
@@ -0,0 +1,286 @@
+---
+title: User Manual
+sidebar_position: 2
+---
+
+# 1. User login
+
+Users log in with their system account name and password.
+
+![](/cookbooks_img//image-1624433272455.png)
+
+# 2. Data access
+
+The data access module displays a list of all access tasks in the system within the current user's authority; you can
+view, edit, update, and delete the details of these tasks.
+
+Click [Data Access], there are two steps to fill in data access information: business information, data stream.
+
+![](/cookbooks_img//image-1624431177918.png)
+
+## 2.1 Business Information
+
+### 2.1.1 Business Information
+
+You are required to fill in basic business information for access tasks.
+
+![](/cookbooks_img//image-1624431271642.png)
+
+- Business English ID: a unified lowercase English name; try to include the product name and keep it concise,
+  such as pay_base
+- Business Chinese name: a Chinese description of the business, easy to use and retrieve, up to 128 characters
+- Business responsible person: at least 2 people; the business responsible persons can view and modify business
+  information, and add and modify all access configuration items
+- Business introduction: a brief text introduction to the background and application of this access task
+
+### 2.1.2 Access requirements
+
+Access requirements ask the user to choose the message middleware: high throughput (Tube):
+
+![](/cookbooks_img//image-1624431306077.png)
+
+High throughput (Tube): a high-throughput message transmission component, suitable for log message transmission.
+
+### 2.1.3 Access scale
+
+The access scale asks users to estimate the volume of access data in advance, so that computing and storage
+resources can be allocated later.
+
+![](/cookbooks_img//image-1624431333949.png)
+
+## 2.2 Data stream
+
+Click [Next] to enter the data stream information filling step. There are four modules to fill in:
+basic information, data source, data information, and data storage.
+
+In the data flow process, you can click [New Data Stream] to create a new data stream page:
+
+![](/cookbooks_img//image-1624431416449.png)
+
+### 2.2.1 Basic information
+
+You are required to fill in the basic information of the data stream in the access task:
+
+![](/cookbooks_img//image-1624431435574.png)
+
+- InLong stream id: The prefix is automatically generated according to the product/project, which is unique in a 
+  specific business group and is consistent with the stream id in the data source and the storage table
+- Data stream name: interface information description, the length is limited to 64 characters (32 Chinese characters)
+- Data stream owner: The data stream owner can view and modify data stream information, add and modify all access
+  configuration items
+- Introduction to data flow: simple text introduction to data flow
+
+### 2.2.2 Data source
+
+You are required to select the source of the data stream.
+
+Currently, two methods are supported, file and autonomous push, and detailed information about the data source can
+be supplemented in the advanced options.
+
+- File: The business data is in the file, and the business machine deploys InLong Agent, which is read according to
+  customized policy rules
+- Autonomous push: Push data to the messaging middleware through the SDK
+
+![](/cookbooks_img//image-1624431594406.png)
+
+### 2.2.3 Data Information
+
+You are required to fill in the data-related information in the data stream.
+
+![](/cookbooks_img//image-1624431617259.png)
+
+- Data format
+- Data encoding: if the data source contains Chinese, choose UTF-8 or GBK; otherwise the stored data will be garbled
+- Source field separator: the separator of the data sent to MQ
+- Source data fields: attributes with different meanings, divided by the separator, in MQ
+
+### 2.2.4 Data storage
+
+You are required to select the final destination of this task; currently both Hive storage and autonomous push are
+supported.
+
+![](/cookbooks_img//image-1624431713360.png)
+
+Add HIVE storage:
+
+![](/cookbooks_img//image-1624431787323.png)
+
+- Target database: Hive database name (must be created in advance)
+- Target table: Hive table name
+- First-level partition: the field name by which Hive data is divided into first-level subdirectories on HDFS
+- Second-level partition: the field name by which Hive data is divided into second-level subdirectories on HDFS
+- Username: Hive server connection account name
+- User password: Hive server connection account password
+- HDFS url: the underlying HDFS connection of Hive
+- JDBC url: the JDBC url of the Hive server
+- Field information: source field name, source field type, Hive field name, Hive field type, and field description;
+  deletion and addition are supported
+
+# 3. Access details
+
+## 3.1 Execution log
+
+When the status of a data access task is "approval successful" or "configuration failed", the "execution log"
+function allows users to view the progress and details of the task.
+
+![](/cookbooks_img//image-1624432002615.png)
+
+Click [Execution Log] to display the details of the task execution log in a pop-up window.
+
+![](/cookbooks_img//image-1624432022859.png)
+
+The execution log displays the task type, execution result, execution log content, and end time of each step of the
+access process. If execution fails, you can "restart" the task and execute it again.
+
+## 3.2 Task details
+
+The business person in charge or follower can view the access details of the task, and can modify and update part
+of the information when the task is in the [Waiting for Approval], [Configuration Successful], or [Configuration
+Failed] state.
+
+There are three modules in the access task details: business information, data stream and data storage.
+
+### 3.2.1 Business Information
+
+Displays the basic business information of the access task; click [Edit] to modify part of the content:
+
+![](/cookbooks_img//image-1624432076857.png)
+
+### 3.2.2 Data stream
+
+Displays the basic information of the data streams under the access task; click [New Data Stream] to create a new
+data stream:
+
+![](/cookbooks_img//image-1624432092795.png)
+
+### 3.2.3 Data Storage
+
+Displays the basic information of the data storage in the access task; select a storage type through the drop-down
+box, and click [New Flow Configuration] to create a new data storage.
+
+![](/cookbooks_img//image-1624432114765.png)
+
+# 4. Data consumption
+
+Data consumption does not currently support direct consumption of the accessed data; data can be consumed normally
+only after the approval process.
+
+Click [New Consumption] to enter the data consumption process, and you need to fill in information related to
+consumption.
+
+![](/cookbooks_img//image-1624432235900.png)
+
+## 4.1 Consumer Information
+
+Applicants fill in the basic consumer business information related to the data consumption application in
+the information filling module:
+
+![](/cookbooks_img//image-1624432254118.png)
+
+- Consumer group name: the prefix is automatically generated according to the product/project. The short name of the
+  consumer must consist of lowercase letters, numbers, and underscores; the final approval will assign the consumer
+  name based on this abbreviation
+- Consumer responsible person: at least 2 responsible persons are required; the responsible persons can view and
+  modify the consumption information
+- Consumer target business group id: select the business group id of the data to consume; you can click [Query] and
+  select the appropriate business group id in the pop-up window
+  ![](/cookbooks_img//image-1624432286674.png)
+- Data usage: select the data usage
+- Data usage description: the applicant needs to briefly explain the project and purpose of the data according to
+  their own consumption scenarios
+
+After completing the information, click [Submit]; the data consumption process takes effect only after it has been
+formally submitted to and approved by the approver.
+
+# 5. Approval management
+
+The approval management module currently includes My Application and My Approval, covering all data access
+and consumption approval tasks in the management system.
+
+## 5.1 My application
+
+Displays the data access and consumption tasks submitted by the current applicant; click [Details]
+to view the basic information and approval process of a task.
+
+![](/cookbooks_img//image-1624432445002.png)
+
+### 5.1.1 Data access details
+
+The data access task details display the current basic information of the application task, including:
+applicant-related information, basic access information, and the current approval process node.
+
+![](/cookbooks_img//image-1624432458971.png)
+
+### 5.1.2 Data consumption details
+
+The data consumption task details display the basic information of the current application task, including:
+applicant information, basic consumption information, and the current approval process node.
+
+![](/cookbooks_img//image-1624432474526.png)
+
+## 5.2 My approval
+
+System members with approval authority are responsible for approving data access or consumption.
+
+![](/cookbooks_img//image-1624432496461.png)
+
+### 5.2.1 Data Access Approval
+
+New data access approval: currently it is a first-level approval, which is approved by the system administrator.
+
+The system administrator will review whether the access process meets the access requirements based on the data access
+business information.
+
+![](/cookbooks_img//image-1624432515850.png)
+
+### 5.2.2 New data consumption approval
+
+New data consume approval: currently it is a first-level approval, which is approved by the person in charge of the
+business.
+
+Business approval: The person in charge of the data access business judges whether the consumption meets the business
+requirements according to the access information:
+
+![](/cookbooks_img//image-1624432535541.png)
+
+# 6. System Management
+
+Only users with the role of system administrator can use this function. They can create, modify, and delete users:
+
+![](/cookbooks_img//image-1624432652141.png)
+
+## 6.1 New user
+
+Users with system administrator rights can create new user accounts:
+
+![](/cookbooks_img//image-1624432668340.png)
+
+- Account type: ordinary user (with data access and data consumption permissions, but without data access approval
+  and account management permissions) or system administrator (with data access, data consumption, data access
+  approval, and account management permissions)
+- Username: the username for login
+- User password: the password for login
+- Effective duration: the period during which the account can be used in the system
+  ![](/cookbooks_img//image-1624432740241.png)
+
+## 6.2 Delete user
+
+The system administrator can delete a created user's account. After deletion, the account can no longer be used:
+
+![](/cookbooks_img//image-1624432759224.png)
+
+## 6.3 User Edit
+
+The system administrator can modify the created account:
+
+![](/cookbooks_img//image-1624432778845.png)
+
+The system administrator can modify the account type and effective duration:
+
+![](/cookbooks_img//image-1624432797226.png)
+
+## 6.4 Change password
+
+Users can change their account password: click [Modify Password], enter the old password and the new password, and
+after confirmation the new password takes effect:
+
+![](/cookbooks_img//image-1624432829313.png)
diff --git a/versioned_sidebars/version-0.11.0-sidebars.json b/versioned_sidebars/version-0.11.0-sidebars.json
new file mode 100644
index 0000000..ad48c3e
--- /dev/null
+++ b/versioned_sidebars/version-0.11.0-sidebars.json
@@ -0,0 +1,8 @@
+{
+  "version-0.11.0/tutorialSidebar": [
+    {
+      "type": "autogenerated",
+      "dirName": "."
+    }
+  ]
+}
diff --git a/versions.json b/versions.json
new file mode 100644
index 0000000..761a296
--- /dev/null
+++ b/versions.json
@@ -0,0 +1,3 @@
+[
+  "0.11.0"
+]