Posted to commits@inlong.apache.org by gi...@apache.org on 2021/07/07 07:51:06 UTC

[incubator-inlong-website] branch asf-site updated: Automated deployment: Wed Jul 7 07:51:00 UTC 2021 a4128bedc268293f11fc09cefbdeca63c906d5af

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-inlong-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new f837dd9  Automated deployment: Wed Jul  7 07:51:00 UTC 2021 a4128bedc268293f11fc09cefbdeca63c906d5af
f837dd9 is described below

commit f837dd96dd9a5b9b54a2fb7c7609ee97b9315664
Author: gosonzhang <go...@users.noreply.github.com>
AuthorDate: Wed Jul 7 07:51:00 2021 +0000

    Automated deployment: Wed Jul  7 07:51:00 UTC 2021 a4128bedc268293f11fc09cefbdeca63c906d5af
---
 docs/en-us/modules/tubemq/architecture.md          |   4 +-
 docs/en-us/modules/tubemq/client_rpc.md            |  31 ++--
 docs/en-us/modules/tubemq/clients_java.md          |  47 +++---
 .../en-us/modules/tubemq/configure_introduction.md |  10 +-
 docs/en-us/modules/tubemq/console_introduction.md  |  22 ++-
 docs/en-us/modules/tubemq/consumer_example.md      |   6 +-
 docs/en-us/modules/tubemq/deployment.md            |  16 +-
 docs/en-us/modules/tubemq/error_code.md            |   6 +-
 docs/en-us/modules/tubemq/http_access_api.md       | 127 +++++++-------
 docs/en-us/modules/tubemq/producer_example.md      |  16 +-
 docs/en-us/modules/tubemq/quick_start.md           |  36 ++--
 .../modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md |  77 ++++-----
 docs/zh-cn/modules/tubemq/architecture.md          |  14 +-
 docs/zh-cn/modules/tubemq/client_rpc.md            |  39 +++--
 docs/zh-cn/modules/tubemq/clients_java.md          |  46 +++--
 .../zh-cn/modules/tubemq/configure_introduction.md |  12 +-
 docs/zh-cn/modules/tubemq/console_introduction.md  |  24 ++-
 docs/zh-cn/modules/tubemq/consumer_example.md      |  12 +-
 docs/zh-cn/modules/tubemq/deployment.md            |  26 +--
 docs/zh-cn/modules/tubemq/error_code.md            |   8 +-
 docs/zh-cn/modules/tubemq/http_access_api.md       |   3 +-
 docs/zh-cn/modules/tubemq/producer_example.md      |  16 +-
 docs/zh-cn/modules/tubemq/quick_start.md           |  40 ++---
 .../modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md |  77 ++++-----
 en-us/docs/modules/tubemq/architecture.html        |   4 +-
 en-us/docs/modules/tubemq/architecture.json        |   2 +-
 en-us/docs/modules/tubemq/architecture.md          |   4 +-
 en-us/docs/modules/tubemq/client_rpc.html          |  26 +--
 en-us/docs/modules/tubemq/client_rpc.json          |   2 +-
 en-us/docs/modules/tubemq/client_rpc.md            |  31 ++--
 en-us/docs/modules/tubemq/clients_java.html        |  54 +++---
 en-us/docs/modules/tubemq/clients_java.json        |   4 +-
 en-us/docs/modules/tubemq/clients_java.md          |  47 +++---
 .../modules/tubemq/configure_introduction.html     |  10 +-
 .../modules/tubemq/configure_introduction.json     |   2 +-
 .../docs/modules/tubemq/configure_introduction.md  |  10 +-
 .../docs/modules/tubemq/console_introduction.html  |  21 ++-
 .../docs/modules/tubemq/console_introduction.json  |   2 +-
 en-us/docs/modules/tubemq/console_introduction.md  |  22 ++-
 en-us/docs/modules/tubemq/consumer_example.html    |  92 +++++-----
 en-us/docs/modules/tubemq/consumer_example.json    |   2 +-
 en-us/docs/modules/tubemq/consumer_example.md      |   6 +-
 en-us/docs/modules/tubemq/deployment.html          |  15 +-
 en-us/docs/modules/tubemq/deployment.json          |   2 +-
 en-us/docs/modules/tubemq/deployment.md            |  16 +-
 en-us/docs/modules/tubemq/error_code.html          |   6 +-
 en-us/docs/modules/tubemq/error_code.json          |   2 +-
 en-us/docs/modules/tubemq/error_code.md            |   6 +-
 en-us/docs/modules/tubemq/http_access_api.html     | 128 +++++++-------
 en-us/docs/modules/tubemq/http_access_api.json     |   2 +-
 en-us/docs/modules/tubemq/http_access_api.md       | 127 +++++++-------
 en-us/docs/modules/tubemq/producer_example.html    | 186 ++++++++++-----------
 en-us/docs/modules/tubemq/producer_example.json    |   2 +-
 en-us/docs/modules/tubemq/producer_example.md      |  16 +-
 en-us/docs/modules/tubemq/quick_start.html         |  40 ++---
 en-us/docs/modules/tubemq/quick_start.json         |   2 +-
 en-us/docs/modules/tubemq/quick_start.md           |  36 ++--
 .../tubemq/tubemq_perf_test_vs_Kafka_cn.html       |  93 ++++++-----
 .../tubemq/tubemq_perf_test_vs_Kafka_cn.json       |   2 +-
 .../modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md |  77 ++++-----
 zh-cn/docs/modules/tubemq/architecture.html        |  14 +-
 zh-cn/docs/modules/tubemq/architecture.json        |   4 +-
 zh-cn/docs/modules/tubemq/architecture.md          |  14 +-
 zh-cn/docs/modules/tubemq/client_rpc.html          |  34 ++--
 zh-cn/docs/modules/tubemq/client_rpc.json          |   4 +-
 zh-cn/docs/modules/tubemq/client_rpc.md            |  39 +++--
 zh-cn/docs/modules/tubemq/clients_java.html        |  58 +++----
 zh-cn/docs/modules/tubemq/clients_java.json        |   4 +-
 zh-cn/docs/modules/tubemq/clients_java.md          |  46 +++--
 .../modules/tubemq/configure_introduction.html     |  12 +-
 .../modules/tubemq/configure_introduction.json     |   4 +-
 .../docs/modules/tubemq/configure_introduction.md  |  12 +-
 .../docs/modules/tubemq/console_introduction.html  |  23 ++-
 .../docs/modules/tubemq/console_introduction.json  |   4 +-
 zh-cn/docs/modules/tubemq/console_introduction.md  |  24 ++-
 zh-cn/docs/modules/tubemq/consumer_example.html    |  54 +++---
 zh-cn/docs/modules/tubemq/consumer_example.json    |   4 +-
 zh-cn/docs/modules/tubemq/consumer_example.md      |  12 +-
 zh-cn/docs/modules/tubemq/deployment.html          |  23 +--
 zh-cn/docs/modules/tubemq/deployment.json          |   4 +-
 zh-cn/docs/modules/tubemq/deployment.md            |  26 +--
 zh-cn/docs/modules/tubemq/error_code.html          |   8 +-
 zh-cn/docs/modules/tubemq/error_code.json          |   4 +-
 zh-cn/docs/modules/tubemq/error_code.md            |   8 +-
 zh-cn/docs/modules/tubemq/http_access_api.html     |   5 +-
 zh-cn/docs/modules/tubemq/http_access_api.json     |   4 +-
 zh-cn/docs/modules/tubemq/http_access_api.md       |   3 +-
 zh-cn/docs/modules/tubemq/producer_example.html    |  16 +-
 zh-cn/docs/modules/tubemq/producer_example.json    |   4 +-
 zh-cn/docs/modules/tubemq/producer_example.md      |  16 +-
 zh-cn/docs/modules/tubemq/quick_start.html         |  47 +++---
 zh-cn/docs/modules/tubemq/quick_start.json         |   4 +-
 zh-cn/docs/modules/tubemq/quick_start.md           |  40 ++---
 .../tubemq/tubemq_perf_test_vs_Kafka_cn.html       |  91 +++++-----
 .../tubemq/tubemq_perf_test_vs_Kafka_cn.json       |   2 +-
 .../modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md |  77 ++++-----
 96 files changed, 1275 insertions(+), 1287 deletions(-)

diff --git a/docs/en-us/modules/tubemq/architecture.md b/docs/en-us/modules/tubemq/architecture.md
index 5bdfb4a..133536d 100644
--- a/docs/en-us/modules/tubemq/architecture.md
+++ b/docs/en-us/modules/tubemq/architecture.md
@@ -2,7 +2,7 @@
 title: Architecture - Apache InLong's TubeMQ module
 ---
 
-## TubeMQ Architecture: ##
+## 1. TubeMQ Architecture:
 After years of evolution, the TubeMQ cluster is divided into the following 5 parts: 
 ![](img/sys_structure.png)
 
@@ -16,7 +16,7 @@ After years of evolution, the TubeMQ cluster is divided into the following 5 par
 
 - **Zookeeper:** Responsible for the offset storage in Zookeeper. This function has been weakened to only the persistent storage of offsets. Considering the upcoming multi-node replica feature, this module is temporarily retained;
 
-## Broker File Storage Scheme Improvement: ##
+## 2. Broker File Storage Scheme Improvement:
 Systems that use disks as the data persistence medium face various performance problems caused by the disks. The TubeMQ system is no exception; its performance improvement largely comes down to solving how message data is read, written and stored. In this regard TubeMQ has made many improvements: the storage instance is the smallest Topic data management unit; each storage instance includes a file storage block and a memory cache block; each Topic can be assigned multip [...]
 
 1. **File storage block:** The disk storage solution of TubeMQ is similar to Kafka's, but not identical, as shown in the following figure: each file storage block is composed of an index file and a data file; the partition is a logical partition within the data file; each Topic maintains and manages its file storage blocks separately, with related mechanisms including the aging cycle, the number of partitions, whether it is readable and writable, etc.
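
To make the storage relationships above concrete, here is a small conceptual sketch; it is illustrative only, all names are invented, and it does not mirror TubeMQ's actual classes:

```java
// Illustrative only: a conceptual model of the storage layout described above.
// All names are invented; they are not TubeMQ's implementation classes.
import java.util.ArrayList;
import java.util.List;

final class StorageInstance {
    final String indexFilePath;  // index file of the file storage block
    final String dataFilePath;   // data file; partitions are logical ranges inside it
    final byte[] memCacheBlock;  // memory cache block sitting in front of the files

    StorageInstance(String indexFilePath, String dataFilePath, int cacheSizeBytes) {
        this.indexFilePath = indexFilePath;
        this.dataFilePath = dataFilePath;
        this.memCacheBlock = new byte[cacheSizeBytes];
    }
}

final class TopicStore {
    final String topic;
    // A Topic can be assigned multiple storage instances, each managed with its
    // own aging cycle, partition count, and read/write flags.
    final List<StorageInstance> instances = new ArrayList<>();

    TopicStore(String topic) { this.topic = topic; }
}
```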
diff --git a/docs/en-us/modules/tubemq/client_rpc.md b/docs/en-us/modules/tubemq/client_rpc.md
index e423857..6f8db4c 100644
--- a/docs/en-us/modules/tubemq/client_rpc.md
+++ b/docs/en-us/modules/tubemq/client_rpc.md
@@ -2,9 +2,8 @@
 title: Client RPC - Apache InLong's TubeMQ module
 ---
 
-# Definition of TubeMQ RPC
 
-## General Introduction
+## 1 General Introduction
 
 The implementation of this part can be found in `org.apache.tubemq.corerpc`. Each node in an Apache TubeMQ cluster communicates over TCP with keep-alive. Messages are defined as a combination of binary and protobuf.
 ![](img/client_rpc/rpc_bytes_def.png)
@@ -16,7 +15,7 @@ We defined `listSize` as `<len><data>` because serialized PB dat
 **Pay more attention when implementing multi-language SDKs.** The PB data content needs to be serialized into arrays of blocks (supported by PB codecs).
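
As a hedged illustration of the framing just described, the sketch below serializes one PB payload into `<len><data>` blocks behind a SerialNo. The 4-byte field widths and the per-block limit are assumptions read off the figure, and the leading frame token is omitted; consult `org.apache.tubemq.corerpc` for the authoritative layout.

```java
// A minimal sketch of the binary framing described above. The 4-byte widths and
// the chunking threshold are assumptions for illustration only.
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public final class FrameSketch {
    static final int MAX_BLOCK_LEN = 8192; // assumed per-block limit

    // Split a serialized PB payload into listSize blocks of <len><data>.
    public static byte[] frame(int serialNo, byte[] pbPayload) throws Exception {
        int listSize = (pbPayload.length + MAX_BLOCK_LEN - 1) / MAX_BLOCK_LEN;
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ByteBuffer head = ByteBuffer.allocate(8);
        head.putInt(serialNo);   // correlates request and response
        head.putInt(listSize);   // number of <len><data> blocks that follow
        out.write(head.array());
        for (int i = 0; i < listSize; i++) {
            int from = i * MAX_BLOCK_LEN;
            int len = Math.min(MAX_BLOCK_LEN, pbPayload.length - from);
            ByteBuffer block = ByteBuffer.allocate(4);
            block.putInt(len);
            out.write(block.array());
            out.write(pbPayload, from, len);
        }
        return out.toByteArray();
    }
}
```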
 
 
-## PB format code:
+## 2 PB format code:
 
 PB format encoding is divided into three parts: the RPC framework definition, the message encoding to the Master, and the message encoding to the Broker. You can compile them directly with protobuf to get codecs for different languages, which is very convenient to use.
 ![](img/client_rpc/rpc_proto_def.png)
@@ -31,7 +30,7 @@ Flag marks whether the message is requested or not, and the next three marks rep
 ![](img/client_rpc/rpc_header_fill.png)
 
 
-## Interactive diagram of the client's PB request & response:
+## 3 Interactive diagram of the client's PB request & response:
 
 **Producer Interaction**:
 
@@ -58,7 +57,7 @@ Consumer has 7 pairs of command in all, Register, Heartbeat, Exit to Master; Reg
 
 As we can see from the above picture, the Consumer first has to register to the Master, but registering to the Master does not return metadata information immediately, because TubeMQ uses a server-side load-balancing model and the client needs to wait for the server to dispatch the consumption partition information; the Consumer needs to perform register and unregister operations with the Broker. A partition is exclusive during consumption, i.e., the same partition can only be consumed by one consumer in [...]
 
-##Client feature:
+## 4 Client feature:
 
 | **FEATURE** | **Java** | **C/C++** | **Go** | **Python** | **Rust** | **NOTE** |
 | --- | --- | --- | --- | --- | --- | --- |
@@ -88,7 +87,7 @@ As we can see from the above picture, the Consumer first has to register to the
 | Consumer Pull Consumption frequency limit | ✅ | | | | | |
 
 
-## Client function Induction CaseByCase:
+## 5 Client function Induction CaseByCase:
 
 **Client side and server side RPC interaction process**:
 
@@ -98,8 +97,10 @@ As we can see from the above picture, the Consumer first has to register to the
 
 As shown above, the client has to keep a local copy of each sent request message until the RPC times out or a response message is received; the response is associated with the request by the SerialNo generated when the request was sent. The Broker information and Topic information received from the server side are stored locally by the SDK, updated with the latest returned information, and periodically reported back to the server side; the SDK maintains the heartbeat o [...]
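
A minimal sketch of the SerialNo bookkeeping described above, with hypothetical names; the real logic lives inside the TubeMQ SDK:

```java
// Hypothetical sketch of request/response correlation by SerialNo; the real
// bookkeeping inside the TubeMQ SDK uses different names.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public final class PendingRpcTable {
    public static final class Pending {
        public final int serialNo;
        public final CompletableFuture<byte[]> response;
        Pending(int serialNo, CompletableFuture<byte[]> response) {
            this.serialNo = serialNo;
            this.response = response;
        }
    }

    private final AtomicInteger serialNoGen = new AtomicInteger();
    private final ConcurrentHashMap<Integer, CompletableFuture<byte[]>> pending =
            new ConcurrentHashMap<>();

    // Register a request before sending; the entry is kept until a response
    // arrives or the RPC times out.
    public Pending register(long timeoutMs) {
        int serialNo = serialNoGen.incrementAndGet();
        CompletableFuture<byte[]> response = new CompletableFuture<>();
        pending.put(serialNo, response);
        response.orTimeout(timeoutMs, TimeUnit.MILLISECONDS)
                .whenComplete((r, e) -> pending.remove(serialNo));
        return new Pending(serialNo, response);
    }

    // Called by the I/O layer when a response frame with this SerialNo arrives.
    public void onResponse(int serialNo, byte[] body) {
        CompletableFuture<byte[]> response = pending.remove(serialNo);
        if (response != null) {
            response.complete(body);
        }
    }
}
```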
 
-**Message: Producer register to Master**:
+### 5.1 Message: Producer register to Master:
+
 ----------
+
 ![](img/client_rpc/rpc_producer_register2M.png)
 
 **ClientId**: the Producer needs to construct a ClientId at startup; the current construction rule is: 
@@ -133,8 +134,10 @@ Java: ClientId = IPV4 + `"-"` + Thread ID + `"-"` + createTi
 **authAuthorizedToken**: the authenticated authorization token. If this field carries data, the client needs to save it and carry it on subsequent accesses to the Master and Broker; if the field changes on a subsequent heartbeat, the local cache of this field needs to be updated.
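
Returning to the ClientId rule quoted in the hunk header above: the rule shown there is truncated, so the sketch below assembles only the visible components and should not be taken as the complete rule.

```java
// Assembles only the components visible above: IPv4 + "-" + thread ID + "-" + createTime.
// The actual rule may append further fields not shown in this excerpt.
import java.net.InetAddress;

public final class ClientIdSketch {
    public static String build() throws Exception {
        String ipv4 = InetAddress.getLocalHost().getHostAddress();
        long threadId = Thread.currentThread().getId();
        long createTime = System.currentTimeMillis();
        return ipv4 + "-" + threadId + "-" + createTime;
    }
}
```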
 
 
-**Mseeage: Heartbeat from Producer to Master**:
+### 5.2 Message: Heartbeat from Producer to Master:
+
 ----------
+
 ![](img/client_rpc/rpc_producer_heartbeat2M.png)
 
 **topicInfos**: the metadata corresponding to the Topics published by the SDK, including partition information and the Broker where each partition is located. Since there is a lot of metadata and passing the object data through as-is would generate very heavy outgoing traffic, we made improvements to how it is encoded.
@@ -143,14 +146,18 @@ Java: ClientId = IPV4 + `&quot;-&quot;` + Thread ID + `&quot;-&quot;` + createTi
 
 **requireAuth**: indicates that the Master's previous authAuthorizedToken has expired, requiring the SDK to report the username and password signatures on the next request.
 
-**Message: Producer exits from Master**:
+### 5.3 Message: Producer exits from Master:
+
 ----------
+
 ![](img/client_rpc/rpc_producer_close2M.png)
 
 Note that if authentication is enabled, the close operation is authenticated as well, to avoid external interference with the operation.
 
-**Message: Producer to Broker**:
+### 5.4 Message: Producer to Broker:
+
 ----------
+
 This part is related to the definition of RPC Message.
 
 ![](img/client_rpc/rpc_producer_sendmsg2B.png)
@@ -165,8 +172,10 @@ This part is related to the definition of RPC Message.
 
 **requireAuth**: requires authentication toward the Broker for data production; not currently in effect due to performance concerns. The authAuthorizedToken value in the sent message is the value provided by the Master and changes when the Master changes it.
 
-**Partition Loadbalance**:
+### 5.5 Partition Loadbalance:
+
 ----------
+
 Apache TubeMQ currently uses a server-side load balancing mode, where the balancing process is managed and maintained by the server; subsequent versions will add a client-side load balancing mode, so that two modes can co-exist.
 
 **Server side load balancing**:
diff --git a/docs/en-us/modules/tubemq/clients_java.md b/docs/en-us/modules/tubemq/clients_java.md
index 780b1ab..616ef53 100644
--- a/docs/en-us/modules/tubemq/clients_java.md
+++ b/docs/en-us/modules/tubemq/clients_java.md
@@ -1,52 +1,44 @@
 ---
-title: JAVA SDK API - Apache InLong's TubeMQ module
+title: TubeMQ JAVA SDK API - Apache InLong's TubeMQ module
 ---
 
-## **TubeMQ Lib Interface Usage**
 
-------
-
-
-
-### **1. Introduction to the basic object interfaces:**
+## 1 Introduction to the basic object interfaces:
 
-#### **a) MessageSessionFactory (message session factory):**
+### 1.1 MessageSessionFactory (message session factory):
 
 TubeMQ uses MessageSessionFactory (the message session factory) to manage network connections, and divides it into two classes according to whether client connections are reused: TubeSingleSessionFactory (single-connection session factory) and TubeMultiSessionFactory (multi-connection session factory). As the code shows, the single-connection session defines clientFactory as a static class, so that different clients within one process share a single underlying physical connection when connecting to the same target server; the clientFactory defined in the multi-connection session is non-static, so clients created from different session factories in the same process establish separate physical connections. This construction solves the problem of creating too many connections; businesses can choose the session factory class that fits their needs, and in general we use the single-connection session factory class.
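
A minimal usage sketch of this choice. The constructor shapes are assumptions based on the example classes referenced later in this document (org.apache.tubemq.example); verify them there before use.

```java
// Sketch: choosing a session factory. Constructor shapes are assumed from the
// examples in org.apache.tubemq.example; verify them there before use.
import org.apache.tubemq.client.config.TubeClientConfig;
import org.apache.tubemq.client.factory.MessageSessionFactory;
import org.apache.tubemq.client.factory.TubeSingleSessionFactory;

public final class SessionFactoryChoice {
    public static MessageSessionFactory create(String masterAddrs) throws Exception {
        // masterAddrs uses the "ip1:port1,ip2:port2,ip3:port3" format described below.
        TubeClientConfig config = new TubeClientConfig(masterAddrs);
        // One shared physical connection per target server within this process;
        // switch to TubeMultiSessionFactory when separate connections are needed.
        return new TubeSingleSessionFactory(config);
    }
}
```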
 
  
-
-#### **b) MasterInfo:**
+### 1.2 MasterInfo:
 
 The Master address information object of TubeMQ. Its characteristic is that it supports configuring multiple Master addresses: since the TubeMQ Master relies on BDB's storage capability for metadata management and for hot-standby HA, multiple Master address entries need to be configured accordingly. The configuration supports both IP and domain-name forms; because TubeMQ's HA is a hot-switch mode, the client must be able to reach every Master address. This information is used when initializing TubeClientConfig and ConsumerConfig objects; for configuration convenience, we construct the multiple Master addresses in the format "ip1:port1,ip2:port2,ip3:port3" and parse them.
 
  
-
-#### **c) TubeClientConfig:**
+### 1.3 TubeClientConfig:
 
 The initialization class for MessageSessionFactory (the message session factory): an object class that carries the information for creating network connections and the client control parameters, including RPC timeout settings, socket property settings, connection quality detection parameters, TLS parameters, and authentication and authorization information. This class, together with the ConsumerConfig class introduced next, changed the most compared with versions before TubeMQ 3.8.0. The main reason is that the TubeMQ interface definitions had not changed for more than six years, and there were problems such as ambiguous interface semantics, unclear units in interface property settings, and values the program could not disambiguate; considering the convenience of self-auditing the open-source code and the learning cost for newcomers, we redefined the interfaces this time. For the differences before and after the redefinition, see the configuration interface definition section.
 
  
 
-#### **d) ConsumerConfig:**
+### 1.4 ConsumerConfig:
 
 The ConsumerConfig class is a subclass of TubeClientConfig; on top of TubeClientConfig it adds the parameters carried when a Consumer object is initialized. Therefore, in a MessageSessionFactory object that contains both a Producer and a Consumer, the session-factory-level settings follow the content used to initialize the MessageSessionFactory, while the Consumer object follows the initialization object passed in at its creation. Consumers are further divided into Pull consumers and Push consumers according to consumption behavior, and the parameters specific to each are distinguished by the "pull" or "push" marker carried through the parameter interfaces.
 
  
-
-#### **e) Message:**
+### 1.5 Message:
 
 The Message class is the message object class passed around in TubeMQ. The data set by the business is passed as-is from the production end to the receiving end; the attribute content is a field shared with the TubeMQ system: the content filled in by the business will not be lost or rewritten, but the field may gain content added by the TubeMQ system, and in later versions such added content may be removed without notice. Pay attention to the Message.putSystemHeader(final String msgType, final String msgTime) interface, which sets the message type and the message sending time: msgType is used for filtering on the consumer side, and msgTime is used as the message time dimension when TubeMQ compiles send/receive statistics.
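
A short sketch of the putSystemHeader call described above; the topic name, field values, and the msgTime format are illustrative assumptions:

```java
// Sketch: setting the consumer-side filter type and the statistics time via
// putSystemHeader, as described above. Topic, values and the msgTime format
// ("yyyyMMddHHmm" is assumed here) are illustrative.
import org.apache.tubemq.corebase.Message;

public final class MessageHeaderSketch {
    public static Message build(byte[] body) {
        Message message = new Message("topic_1", body);
        // msgType drives consumer-side filtering; msgTime feeds TubeMQ's
        // send/receive statistics as the message time dimension.
        message.putSystemHeader("order_event", "202107070751");
        return message;
    }
}
```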
 
  
 
-#### **f) MessageProducer:**
+### 1.6 MessageProducer:
 
 The message producer class, which performs message production. Message sending offers synchronous and asynchronous interfaces. Currently messages are sent to the backend servers in a Round Robin manner; later we will consider selecting backend servers according to an algorithm specified by the business. Note when using this class: we support publishing the full set of Topics at initialization, and also support temporarily adding a publish for a new Topic during production, but a temporarily added Topic does not take effect immediately. Before using a newly added Topic, call the isTopicCurAcceptPublish interface to check whether the Topic has been published and accepted by the server; otherwise message sending may fail.
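
A hedged sketch of the recommended publish check; only the method named in the text is assumed, and the producer setup is omitted:

```java
// Sketch: guard sends to a newly added topic with isTopicCurAcceptPublish, as
// recommended above. Producer creation and publish calls are omitted.
import org.apache.tubemq.client.producer.MessageProducer;
import org.apache.tubemq.corebase.Message;

public final class PublishGuardSketch {
    public static void sendIfAccepted(MessageProducer producer, Message msg)
            throws Exception {
        // A topic added after startup is unusable until the server accepts it.
        if (producer.isTopicCurAcceptPublish(msg.getTopic())) {
            producer.sendMessage(msg); // synchronous send; an async variant also exists
        }
    }
}
```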
 
  
 
-#### **g) MessageConsumer:**
+### 1.7 MessageConsumer:
 
 This class has two subclasses, PullMessageConsumer and PushMessageConsumer; through these two wrappers, the Pull and Push semantics for the business side are provided. In fact TubeMQ interacts with the backend service in Pull mode; to make the interface easier for businesses to use, we wrapped it, and you can see that the difference is that Push initializes a thread group at startup to perform active data pulling. Points that need attention:
 
@@ -60,19 +52,18 @@ Message类是TubeMQ里传递的消息对象类,业务设置的data会从生产
 
 
 
-### **2. Interface invocation examples:**
+## 2 Interface invocation examples:
 
-#### **a) Environment preparation:**
+### 2.1 Environment preparation:
 
 The TubeMQ open-source package org.apache.tubemq.example provides concrete code examples for production and consumption. Here we use a practical example to show how to fill in the parameters and call the corresponding interfaces. First we set up a TubeMQ cluster with 3 Master nodes; the 3 Master addresses are test_1.domain.com, test_2.domain.com and test_3.domain.com, all on port 8080. In this cluster we created several Brokers, and on the Brokers we created 3 topics: topic_1, topic_2 and topic_3; then we start the corresponding Brokers and wait for the creation of Consumers and Producers.
 
  
-
-#### **b) Create a Consumer:**
+### 2.2 Create a Consumer:
 
 See the class file org.apache.tubemq.example.MessageConsumerExample. A Consumer is a client object that includes network interaction coordination; it needs to be initialized and kept resident in memory for repeated use, so it is not suitable for one-shot consumption scenarios. As shown below, we define the MessageConsumerExample wrapper class, in which we define the MessageSessionFactory class for network interaction and the PushMessageConsumer class used for Push consumption:
 
-- ###### **i. Initialize the MessageConsumerExample class:**
+#### 2.2.1 Initialize the MessageConsumerExample class:
 
 1. First construct a ConsumerConfig class and fill in the initialization information, including the local IPv4 address, the Master cluster addresses, and the consumer group name. Here the Master address information passed in is: "test_1.domain.com:8080,test_2.domain.com:8080,test_3.domain.com:8080";
 
@@ -116,7 +107,7 @@ public final class MessageConsumerExample {
 
 
 
-- **ii. Subscribe to Topics:**
+#### 2.2.2 Subscribe to Topics:
 
 We did not subscribe in the specified-offset consumption mode, and we have no filtering requirement, so in the following code we only specify the Topics and pass null for the corresponding filter item sets; meanwhile, different message callback handlers can be passed for different Topics. Here we subscribe to 3 topics, topic_1, topic_2 and topic_3, and for each topic we call the subscribe function to set the corresponding parameters, as in the sketch below:
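
The sketch below follows this description: null filter sets and one callback per topic. It assumes the SDK's per-topic subscribe(topic, filterSet, listener) call used inside the example's subscribe(Map) wrapper, plus the DefaultMessageListener class from the example that follows; check MessageConsumerExample for the exact shape.

```java
// Sketch: subscribe three topics with null filter sets and one listener per topic.
// Assumes the SDK calls used inside MessageConsumerExample; verify there.
import org.apache.tubemq.client.consumer.PushMessageConsumer;

public final class SubscribeSketch {
    public static void subscribeAll(PushMessageConsumer consumer) throws Exception {
        String[] topics = {"topic_1", "topic_2", "topic_3"};
        for (String topic : topics) {
            // null filter set = no server-side filtered consumption for this topic;
            // DefaultMessageListener is the callback class defined in the example below.
            consumer.subscribe(topic, null, new DefaultMessageListener(topic));
        }
        consumer.completeSubscribe(); // finish registration after all topics are set
    }
}
```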
 
@@ -134,7 +125,7 @@ public void subscribe(final Map<String, TreeSet<String>> topicTidsMap)
 
 
 
-- **iii. Consume messages:**
+#### 2.2.3 Consume messages:
 
 At this point, the subscription to the corresponding topics in the cluster is complete. Once the system starts running, data will be continuously pushed to the business layer for processing through the callback functions:
 
@@ -165,7 +156,7 @@ public class DefaultMessageListener implements MessageListener {
 
 
 
-#### **c) Create a Producer:**
+### 2.3 Create a Producer:
 
 In the production environment, business data is received and aggregated through a proxy layer that wraps a lot of exception handling, and most businesses never touch the Producer class of TubeSDK directly. Considering the scenario where a business sets up its own cluster and uses TubeMQ directly, a usage demo is provided here; see the class file org.apache.tubemq.example.MessageProducerExample for reference. **Note** that unless the business uses the data platform's TubeMQ cluster as its MQ service, it should still follow the existing access process and produce data through the proxy layer:
 
@@ -201,7 +192,7 @@ public final class MessageProducerExample {
 
 
 
-- **ii. Publish Topics:**
+#### 2.3.1 Publish Topics:
 
 ```java
 public void publishTopics(List<String> topicList) throws TubeClientException {
@@ -211,7 +202,7 @@ public void publishTopics(List<String> topicList) throws TubeClientException {
 
 
 
-- **iii. Produce data:**
+#### 2.3.2 Produce data:
 
 Shown below is the concrete data construction and sending logic: construct a Message object and call the sendMessage() function to send it. There are synchronous and asynchronous interfaces; choose according to business requirements. Note that this business calls the message.putSystemHeader() function per message to set the message's filter attribute and sending time, so the system can perform filtered consumption and metric statistics. With that, a message is sent; if the returned result is success, the message has been accepted and will be processed, and if it returns failure, the business decides how to handle it based on the specific error code and message. For error details see "TubeMQ错误信息介绍.xlsx":
 
@@ -241,7 +232,7 @@ public void sendMessageAsync(int id, long currtime,
 
 
 
-- **iv. Notes on the MAMessageProducerExample Producer variant:**
+#### 2.3.3 Notes on the MAMessageProducerExample Producer variant:
 
 This class is initialized differently from MessageProducerExample: it uses the TubeMultiSessionFactory multi-session factory class for connection initialization. The demo shows how to use the characteristics of the multi-session factory; it can be used to raise system throughput through multiple physical connections (TubeMQ reduces the use of physical connection resources through connection multiplexing), and used properly it can improve production performance. The Consumer side can also be initialized through the multi-session factory, but since consumption is a long-running process that occupies few connection resources, it is not recommended for consumption scenarios.
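
A hedged sketch of the multi-session factory idea, with signatures assumed from the demo class named above: two factories in one process yield two physical connections.

```java
// Sketch: two TubeMultiSessionFactory instances in one process, giving two
// physical connections. Signatures assumed from MAMessageProducerExample.
import org.apache.tubemq.client.config.TubeClientConfig;
import org.apache.tubemq.client.factory.TubeMultiSessionFactory;
import org.apache.tubemq.client.producer.MessageProducer;

public final class MultiConnSketch {
    public static MessageProducer[] twoProducers(String masterAddrs) throws Exception {
        TubeClientConfig config = new TubeClientConfig(masterAddrs);
        // Each factory owns its own physical connection to the servers.
        MessageProducer p1 = new TubeMultiSessionFactory(config).createProducer();
        MessageProducer p2 = new TubeMultiSessionFactory(config).createProducer();
        return new MessageProducer[] {p1, p2};
    }
}
```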
 
diff --git a/docs/en-us/modules/tubemq/configure_introduction.md b/docs/en-us/modules/tubemq/configure_introduction.md
index 11bc04c..3cb1385 100644
--- a/docs/en-us/modules/tubemq/configure_introduction.md
+++ b/docs/en-us/modules/tubemq/configure_introduction.md
@@ -2,7 +2,7 @@
 title: Configure Introduction - Apache InLong's TubeMQ module
 ---
 
-# TubeMQ configuration item description
+## 1 TubeMQ configuration item description
 
 The TubeMQ server includes two modules, the Master and the Broker. The Master also includes a web front-end module for external page access (stored under resources). Considering actual deployment, the two modules are often deployed on the same machine, so TubeMQ packages the contents of these three parts together and delivers them to operations; the client does not include the server-side lib packages and is delivered to users separately.
 
@@ -15,9 +15,9 @@ In addition to the back-end system configuration file, the Master also stores th
 ![](img/configure/conf_velocity_pos.png)
 
 
-## Configuration item details:
+## 2 Configuration item details:
 
-### master.ini file:
+### 2.1 master.ini file:
 [master]
 > The main configuration unit for running the Master system; a required unit; the value is fixed to "[master]"
 
@@ -105,13 +105,13 @@ In addition to the back-end system configuration file, the Master also stores th
 | tlsTrustStorePath     | no       | string  | The absolute storage path of the TLS TrustStore file + the TrustStore file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
 | tlsTrustStorePassword | no       | string  | The absolute storage path of the TLS TrustStorePassword file + the TrustStorePassword file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
 
-### velocity.properties file:
+### 2.2 velocity.properties file:
 
 | Name                      | Required                          | Type                          | Description                                                  |
 | ------------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
 | file.resource.loader.path | yes      | string | The absolute path of the master web template. This part is the absolute path plus /resources/templates of the project when the master is deployed. The configuration is consistent with the actual deployment. If the configuration fails, the master front page access fails. |
 
-### broker.ini file:
+### 2.3 broker.ini file:
 
 [broker]
 >The main configuration unit for running the Broker system; a required unit; the value is fixed to "[broker]"
diff --git a/docs/en-us/modules/tubemq/console_introduction.md b/docs/en-us/modules/tubemq/console_introduction.md
index d05e9ce..655e01c 100644
--- a/docs/en-us/modules/tubemq/console_introduction.md
+++ b/docs/en-us/modules/tubemq/console_introduction.md
@@ -2,20 +2,18 @@
 title: Console Introduction - Apache InLong's TubeMQ module
 ---
 
-# TubeMQ Console Operation Guide
-
-## About the console
+## 1 About the console
 
 The TubeMQ console is a simple operations tool for managing a TubeMQ cluster, covering operational data and operations related to the TubeMQ system, such as the Masters and Brokers in the cluster and the Topic metadata deployed on the Brokers. Note that the functions currently provided by the TubeMQ front end do not cover the full range of what TubeMQ provides; you can refer to the definitions in "TubeMQ HTTP访问接口定义.xls" and implement a management front end that fits your business needs. The access address of the TubeMQ console is http://portal:webport/config/topic_list.htm:
 ![](img/console/1568169770714.png)
 Here, portal is the IP address of any active or standby Master in the cluster, and webport is the configured web port of the Master.
 
 
-## Introduction to the TubeMQ console pages
+## 2 Introduction to the TubeMQ console pages
 
 The console has 3 sections in total: distribution query, configuration management, and cluster management; configuration management is further divided into two parts, the Broker list and the Topic list. We first introduce the simpler distribution query and cluster management, and then the more complex configuration management.
 
-### Distribution query
+### 2.1 Distribution query
 
 Clicking distribution query, we see the following list: the consumer group information currently registered in the TubeMQ cluster, including the consumer group name, the consumed Topic, and brief information on the group's total number of consuming partitions, as shown below:
 ![](img/console/1568169796122.png)
@@ -24,12 +22,12 @@ title: Console Introduction - Apache InLong's TubeMQ module
 
 This page is for queries: by entering a Topic or consumer group name, you can quickly confirm which consumer groups in the system are consuming the Topic, and what each consumer group's consumption targets are.
 
-### Cluster management
+### 2.2 Cluster management
 
 Cluster management mainly manages the HA of the Masters. On this page we can see the current Master nodes and their status, and we can change a node's active/standby status through the "switch" operation.
 ![](img/console/1568169823675.png)
 
-### Configuration management
+### 2.3 Configuration management
 
 The configuration management section covers both the management of Broker and Topic metadata and the online publishing and offline operations of Brokers and Topics; it has two layers of meaning. For example, the Broker list shows the Broker metadata configured in the current cluster, including Broker records that are in draft state (not yet online), already online, or already offline:
 ![](img/console/1568169839931.png)
@@ -41,7 +39,7 @@ title: Console Introduction - Apache InLong's TubeMQ module
 
 All change operations on the TubeMQ console require entering an operation authorization code, which operations staff define through the confModAuthToken field of the Master's configuration file master.ini: if you know the cluster's password, you may perform the operation, whether because you are an administrator, an authorized person, or you can log in to the Master machine and obtain the password; in all these cases you are considered authorized for the function.
 
-## Operations on the TubeMQ console and precautions
+## 3 Operations on the TubeMQ console and precautions
 
 As mentioned above, the TubeMQ console operates the TubeMQ cluster; the suite is responsible for managing TubeMQ cluster nodes such as Masters and Brokers, including automated deployment and installation. Therefore, note the following points:
 
@@ -68,9 +66,9 @@ title: Console Introduction - Apache InLong's TubeMQ module
 
 At this point we can produce to and consume from this Topic.
 
-## 3. Notes on operations after changing a Topic's metadata:
+## 4 Notes on operations after changing a Topic's metadata:
 
-**a. How to configure Topic parameters yourself:**
+### 4.1 How to configure Topic parameters yourself:
 
 After clicking any Topic in the Topic list, the following dialog pops up with the Topic's metadata, which determines how many partitions this Topic has on the Broker, its current read/write state, the data flush frequency, the data aging period and time, and other information:
 ![](img/console/1568169925657.png)
@@ -104,13 +102,13 @@ title: Console Introduction - Apache InLong's TubeMQ module
 **Special reminder: also note that after a change is made with the authorization code, the data change only takes effect after a reload, and the reload must be applied to the affected Brokers proportionally, in batches.**
 ![](img/console/1568169954746.png)
 
-**b. Notes on Topic changes:**
+### 4.2 Notes on Topic changes:
 
 As shown above, after the Topic metadata is changed, the previously selected Broker set will show "yes" under **configuration changed**. We still need to reload the change: select the Broker set and perform the reload operation, in batches or one by one. Be sure to note: the operation must proceed in batches, and the next batch's configuration reload may start only after the Brokers of the previous batch have returned to the running state; if a node stays online but does not enter the running state for a long time (by default at most 2 minutes), stop the reload, investigate the cause, and then continue.
 
 The reason for operating in batches is that during a change the system stops reads and writes on the specified Brokers; if all Brokers were reloaded at once, the cluster as a whole would clearly become unreadable or unwritable, causing avoidable exceptions at the access layer.
 
-**c. Deletion handling for Topics:**
+### 4.3 Deletion handling for Topics:
 
 Deletion performed on the page is a soft delete; to delete the topic completely, a hard delete must be performed through the API (this avoids accidental business operations).
 
diff --git a/docs/en-us/modules/tubemq/consumer_example.md b/docs/en-us/modules/tubemq/consumer_example.md
index c59a24b..cc32a2b 100644
--- a/docs/en-us/modules/tubemq/consumer_example.md
+++ b/docs/en-us/modules/tubemq/consumer_example.md
@@ -2,10 +2,10 @@
 title: Consumer Example - Apache InLong's TubeMQ module
 ---
 
-## Consumer Example
+## 1 Consumer Example
   TubeMQ provides two ways to consume messages, PullConsumer and PushConsumer:
 
-1. PullConsumer 
+### 1.1 PullConsumer 
     ```java
     public class PullConsumerExample {
 
@@ -38,7 +38,7 @@ title: Consumer Example - Apache InLong's TubeMQ module
     }
     ``` 
    
-2. PushConsumer
+### 1.2 PushConsumer
     ```java
     public class PushConsumerExample {
    
diff --git a/docs/en-us/modules/tubemq/deployment.md b/docs/en-us/modules/tubemq/deployment.md
index 9c61464..5ea4b64 100644
--- a/docs/en-us/modules/tubemq/deployment.md
+++ b/docs/en-us/modules/tubemq/deployment.md
@@ -2,9 +2,7 @@
 title: Deployment - Apache InLong's TubeMQ Module
 ---
 
-# Compile, Deploy and Examples of TubeMQ :
-
-## Compile and Package Project:
+## 1 Compile and Package Project:
 
 Enter the root directory of project and run:
 
@@ -18,7 +16,7 @@ e.g. We put the TubeMQ project package at `E:/`, then run the above command. Com
 
 We can also run individual compilation in each subdirectory. Steps are the same as the whole project's compilation.
 
-**Server Deployment**
+## 2 Server Deployment
 
 Following the example above, enter the directory `..\InLong\inlong-tubemq\tubemq-server\target`, where we can see several JARs. `apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz` is the complete server-side installation package, including execution scripts, configuration files, dependencies, and frontend source code. `apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT.jar` is the server-side processing package included in the `lib` directory of the complete installer. Considering daily changes and [...]
 
@@ -30,7 +28,7 @@ Here we have a complete package deployed onto server and we place it in `/data/i
 ![](img/sysdeployment/sys_package_list.png)
 
 
-**Configuration System**
+## 3 Configuration System
 
 There are 3 roles in the server package: Master, Broker and Tools. Master and Broker can be deployed on the same machine or on different machines, depending on the business layout. In the example below, we use 3 machines to start up a complete production and consumption cluster with 2 Masters.
 
@@ -62,7 +60,7 @@ then it is `9.23.28.24`.
 
 Note that the upper right corner holds the Master's web frontend configuration, and the `file.resource.loader.path` setting in `/resources/velocity.properties` needs to be modified according to the Master's installation path.
 
-**Start up Master**:
+## 4 Start up Master:
 
 After configuration, enter the `bin` directory of the Master environment and start up the master.
 
@@ -76,7 +74,7 @@ Visiting Master's Administrator panel([http://9.23.27.24:8080](http://9.23.27.24
 
 ![](img/sysdeployment/sys_master_console.png)
 
-**Start up Broker**:
+## 5 Start up Broker:
 
 Starting up a Broker is a little different from starting the Master: the Master is responsible for managing the entire TubeMQ cluster, including the Broker nodes and the Topic configuration on them, as well as production and consumption management. So we need to add the metadata on the Master before starting up a Broker.
 
@@ -114,7 +112,7 @@ Check the Master Control Panel, broker has successfully registered.
 ![](img/sysdeployment/sys_broker_finished.png)
 
 
-**Topic Configuration and Activation**:
+## 6 Topic Configuration and Activation:
 
 The configuration of Topics is similar to the Brokers': we should add their metadata on the Master before using them, otherwise a Not Found error is reported during production/consumption. For example, if we try to consume the non-existent topic `test`,
 ![](img/sysdeployment/test_sendmessage.png)
@@ -139,7 +137,7 @@ Topic is available after overload. We can see some status of topic has changed a
 
 **Note** When executing the overload (reload) operation, we should do it in batches. Overload operations are controlled by a state machine: before being published again, a topic becomes unwritable and unreadable, then read-only, then readable and writable, in that order. Overloading all brokers at once makes the topic temporarily unreadable and unwritable, which results in production and consumption failures, especially production failures.
 
-**Message Production and Consumption**:
+## 7 Message Production and Consumption:
 
 We packed a test Demo in the package, or `tubemq-client-0.9.0-incubating-SNAPSHOT.jar` can be used to implement your own production and consumption.
 We run the Producer Demo with the script below, and we can see the data accepted on the Broker.
diff --git a/docs/en-us/modules/tubemq/error_code.md b/docs/en-us/modules/tubemq/error_code.md
index 00234bb..ec7591f 100644
--- a/docs/en-us/modules/tubemq/error_code.md
+++ b/docs/en-us/modules/tubemq/error_code.md
@@ -2,13 +2,13 @@
 title: Error Code - Apache InLong's TubeMQ module
 ---
 
-# Introduction of TubeMQ Error
+## 1 Introduction of TubeMQ Error
 
 TubeMQ uses `errCode` and `errMsg` in combination to return the specific result of an operation.
        First, determine the type of result (problem) from the errCode, and then determine the specific reason based on the errMsg.
        The following table summarizes all the errCodes and errMsgs that may be returned during operation.
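
As a small illustration of this two-step check (branch on errCode first, then use errMsg for the concrete reason), the sketch below hard-codes the two retryable codes from the table that follows; the handling policy itself is illustrative:

```java
// Illustrative handling of the errCode/errMsg pair described above.
public final class ErrCodeSketch {
    public static void handle(int errCode, String errMsg) {
        switch (errCode) {
            case 503: // SERVICE_UNAVILABLE: temporary read/write ban -> retry
            case 510: // INTERNAL_SERVER_ERROR_MSGSET_NULL: message set unreadable -> retry
                System.out.println("retryable error: " + errCode + " " + errMsg);
                break;
            default:
                // Classify by errCode first, then use errMsg for the concrete reason.
                System.out.println("non-retryable: " + errCode + " " + errMsg);
        }
    }
}
```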
 
-## errCodes
+## 2 errCodes
 
 | Error Type | errCode | Error Mark | Meaning | Note |
 | ---------- | ------- | ---------- | ------- | ---- |
@@ -35,7 +35,7 @@ title: Error Code - Apache InLong's TubeMQ module
 | Server Error| 503| SERVICE_UNAVILABLE| Temporary ban on reading or writing for business. | Retry it. ||
 | Server Error| 510| INTERNAL_SERVER_ERROR_MSGSET_NULL | Can not read Message Set. | Retry it. ||
 
-## Common errMsgs
+## 3 Common errMsgs
 
 | Record ID | errMsg | Meaning | Note |
 | --------- | ------ | ------- | ---- |
diff --git a/docs/en-us/modules/tubemq/http_access_api.md b/docs/en-us/modules/tubemq/http_access_api.md
index 7df316b..2a0ed95 100644
--- a/docs/en-us/modules/tubemq/http_access_api.md
+++ b/docs/en-us/modules/tubemq/http_access_api.md
@@ -2,11 +2,10 @@
 title: HTTP API - Apache InLong's TubeMQ module
 ---
 
-# HTTP access API definition
+## 1 Master metadata configuration API
 
-## Master metadata configuration API
-
-### `admin_online_broker_configure`
+### 1.1 Cluster management API
+#### 1.1.1 `admin_online_broker_configure`
 
 Bring online the configuration of Brokers that are new or offline. The configuration of Topics is distributed to the related Brokers as well.
 
@@ -26,7 +25,7 @@ __Response__
 |code| Returns `0` if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
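
As an illustration, a call in the URL convention used later in this document would look like the following; the parameter values are placeholders, and the exact parameter set for this call is an assumption based on the request tables of the sibling APIs below:

Url `http://127.0.0.1:8080/webapi.htm?type=op_modify&method=admin_online_broker_configure&brokerId=1&confModAuthToken=xxx`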
 
-### `admin_reload_broker_configure`
+#### 1.1.2 `admin_reload_broker_configure`
 
 Update the configuration of Brokers which are __online__. The new configuration will be published to the Broker server; it
 will return an error if the broker is offline.
@@ -47,7 +46,7 @@ __Response__
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_offline_broker_configure`
+#### 1.1.3 `admin_offline_broker_configure`
 
 Take offline the configuration of Brokers which are __online__. It should be called before a Broker is taken offline or retired.
 The Broker processes can be terminated once all offline tasks are done.
@@ -68,7 +67,7 @@ __Response__
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_set_broker_read_or_write`
+#### 1.1.4 `admin_set_broker_read_or_write`
 
 Set a Broker into a read-only or write-only state. Only Brokers that are online and idle can be handled.
 
@@ -90,7 +89,7 @@ __Response__
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_query_broker_run_status`
+#### 1.1.5 `admin_query_broker_run_status`
 
 Query Broker status. Only Broker processes that are __offline__ and idle can be terminated.
 
@@ -111,7 +110,7 @@ __Response__
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_release_broker_autoforbidden_status`
+#### 1.1.6 `admin_release_broker_autoforbidden_status`
 
 Release the brokers' auto forbidden status.
 
@@ -132,16 +131,16 @@ Response
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_query_master_group_info`
+#### 1.1.7 `admin_query_master_group_info`
 
 Query the detail of master cluster nodes.
 
-### `admin_transfer_current_master`
+#### 1.1.8 `admin_transfer_current_master`
 
 Set current master node as backup node, let it select another master.
 
 
-### `groupAdmin.sh`
+#### 1.1.9 `groupAdmin.sh`
 
 Clean the invalid node inside master group.
 
@@ -160,8 +159,8 @@ Response
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-
-### `admin_add_broker_configure`
+### 1.2 Broker node configuration API
+#### 1.2.1 `admin_add_broker_configure`
 
 Add broker default configuration (not including topic info). It will take effect after calling the load API.
 
@@ -188,7 +187,7 @@ __Request__
 |createDate|yes|the create date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_batch_add_broker_configure`
+#### 1.2.2 `admin_batch_add_broker_configure`
 
 Add broker default configuration in batch (not including topic info). It will take effect after calling the load API.
 
@@ -204,7 +203,7 @@ __Request__
 |createDate|yes|the create date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_update_broker_configure`
+#### 1.2.3 `admin_update_broker_configure`
 
 Update broker default configuration (not including topic info). It will take effect after calling the load API.
 
@@ -230,7 +229,7 @@ __Request__
 |modifyDate|yes|the modify date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_broker_configure`
+#### 1.2.4 `admin_query_broker_configure`
 
 Query the broker configuration.
 
@@ -257,7 +256,7 @@ __Request__
 |topicStatusId|yes|the status of topic record|int|
 |withTopic|no|whether it needs topic configuration|Boolean|
 
-### `admin_delete_broker_configure`
+#### 1.2.5 `admin_delete_broker_configure`
 
 Delete the broker's default configuration. It requires the related topic configuration to be deleted first, and the broker should be offline. 
 
@@ -271,7 +270,8 @@ __Request__
 |isReserveData|no|whether to reserve production data, default false|Boolean|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_add_new_topic_record`
+### 1.3 Topic configuration API
+#### 1.3.1 `admin_add_new_topic_record`
 
 Add topic related configuration.
 
@@ -297,7 +297,7 @@ __Request__
 |createDate|yes|the create date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_topic_info`
+#### 1.3.2 `admin_query_topic_info`
 
 Query specific topic record info.
 
@@ -323,7 +323,7 @@ __Request__
 |createUser|yes|the creator|String|
 |modifyUser|yes|the modifier|String|
 
-### `admin_modify_topic_info`
+#### 1.3.3 `admin_modify_topic_info`
 
 Modify specific topic record info.
 
@@ -351,7 +351,7 @@ __Request__
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
 
-### `admin_delete_topic_info`
+#### 1.3.4 `admin_delete_topic_info`
 
 Soft-delete specific topic record info.
 
@@ -365,7 +365,7 @@ __Request__
 |modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_redo_deleted_topic_info`
+#### 1.3.5 `admin_redo_deleted_topic_info`
 
 Restore the soft-deleted topic record info.
 
@@ -379,7 +379,7 @@ __Request__
 |modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_remove_topic_info`
+#### 1.3.6 `admin_remove_topic_info`
 
 Hard-delete specific topic record info.
 
@@ -393,7 +393,7 @@ __Request__
 |modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_broker_topic_config_info`
+#### 1.3.7 `admin_query_broker_topic_config_info`
 
 Query the topic configuration info of the broker in current cluster.
 
@@ -403,9 +403,10 @@ __Request__
 |---|---|---|---|
 |topicName|yes| the topic name|String|
 
-## Master consumer permission operation API
 
-### `admin_set_topic_info_authorize_control`
+## 2 Master consumer permission operation API
+
+### 2.1 `admin_set_topic_info_authorize_control`
 
 Enable or disable the authorization control feature of the topic. If a consumer group is not authorized, its register request will be denied.
 If the topic's authorized group list is empty, registration to the topic will fail.
@@ -420,7 +421,7 @@ __Request__
 |isEnable|no|whether the authorization control is enable, default false|Boolean|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_delete_topic_info_authorize_control`
+### 2.2 `admin_delete_topic_info_authorize_control`
 
 Delete the authorization control feature of the topic. The content of the authorized consumer group list will be deleted as well.
 
@@ -432,7 +433,7 @@ __Request__
 |createUser|yes|the creator|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_topic_info_authorize_control`
+### 2.3 `admin_query_topic_info_authorize_control`
 
 Query the authorization control feature of the topic.
 
@@ -443,7 +444,7 @@ __Request__
 |topicName|yes| the topic name|String|
 |createUser|yes|the creator|String|
 
-### `admin_add_authorized_consumergroup_info`
+### 2.4 `admin_add_authorized_consumergroup_info`
 
 Add a new authorized consumer group record for the topic. The server will deny registration from any consumer group that does not exist in the
 topic's authorized consumer group list.
@@ -459,7 +460,7 @@ __Request__
 |createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_allowed_consumer_group_info`
+### 2.5 `admin_query_allowed_consumer_group_info`
 
 Query the authorized consumer group record of the topic. 
 
@@ -471,7 +472,7 @@ __Request__
 |groupName|yes| the group name to be added|String|
 |createUser|yes|the creator|String|
 
-### `admin_delete_allowed_consumer_group_info`
+### 2.6 `admin_delete_allowed_consumer_group_info`
 
 Delete the authorized consumer group record of the topic. 
 
@@ -483,7 +484,7 @@ __Request__
 |groupName|yes| the group name to be added|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_batch_add_topic_authorize_control`
+### 2.7 `admin_batch_add_topic_authorize_control`
 
 Add the authorized consumer group of the topic record in batch mode.
 
@@ -496,7 +497,7 @@ __Request__
 |createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_batch_add_authorized_consumergroup_info`
+### 2.8 `admin_batch_add_authorized_consumergroup_info`
 
 Add the authorized consumer group record in batch mode.
 
@@ -509,7 +510,7 @@ __Request__
 |createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_add_black_consumergroup_info`
+### 2.9 `admin_add_black_consumergroup_info`
 
 Add a consumer group into the topic's black list. Consumers already registered in the group can no longer consume the topic afterwards, nor can unregistered ones.
 
@@ -523,7 +524,7 @@ __Request__
 |createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_black_consumergroup_info`
+### 2.10 `admin_query_black_consumergroup_info`
 
 Query the black list of the topic. 
 
@@ -535,7 +536,7 @@ __Request__
 |groupName|yes|the group name |List|
 |createUser|yes|the creator|String|
 
-### `admin_delete_black_consumergroup_info`
+### 2.11 `admin_delete_black_consumergroup_info`
 
 Delete the black list of the topic. 
 
@@ -547,7 +548,7 @@ __Request__
 |groupName|yes|the group name |List|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_add_group_filtercond_info`
+### 2.12 `admin_add_group_filtercond_info`
 
 Add condition of consuming filter for the consumer group 
 
@@ -563,7 +564,7 @@ __Request__
 |createUser|yes|the creator|String|
 |createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
 
-### `admin_mod_group_filtercond_info`
+### 2.13 `admin_mod_group_filtercond_info`
 
 Modify the condition of consuming filter for the consumer group 
 
@@ -579,7 +580,7 @@ __Request__
 |modifyUser|yes|the modifier|String|
 |modifyDate|no|the modification date in format `yyyyMMddHHmmss`|String|
 
-### `admin_del_group_filtercond_info`
+### 2.14 `admin_del_group_filtercond_info`
 
 Delete the condition of consuming filter for the consumer group 
 
@@ -591,7 +592,7 @@ __Request__
 |groupName|yes|the group name |List|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_group_filtercond_info`
+### 2.15 `admin_query_group_filtercond_info`
 
 Query the condition of consuming filter for the consumer group 
 
@@ -604,7 +605,7 @@ __Request__
 |condStatus|no| the condition status, 0: disable, 1:enable full authorization, 2:enable and limit consuming|Int|
 |filterConds|no| the filter conditions, the max length is 256|String|
 
-### `admin_rebalance_group_allocate`
+### 2.16 `admin_rebalance_group_allocate`
 
 Adjust the consuming partitions of the specific consumer in a consumer group. This includes:  \
 1. release current consuming partition and retrieve new consuming partition.
@@ -622,7 +623,7 @@ __Request__
 |modifyUser|yes|the modifier|String|
 |modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
 
-### `admin_set_def_flow_control_rule`
+### 2.17 `admin_set_def_flow_control_rule`
 
 Set the default flow control rule. It is effective for all consumer groups. It is worth noting that its priority is lower than the rule set on a specific consumer group.
 
@@ -649,7 +650,7 @@ __Request__
 |modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
 
 
-### `admin_upd_def_flow_control_rule`
+### 2.18 `admin_upd_def_flow_control_rule`
 
 Update the default flow control rule.
 
@@ -664,7 +665,7 @@ __Request__
 |flowCtrlInfo|yes|the flow control info in JSON format|String|
 |createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
 
-### `admin_query_def_flow_control_rule`
+### 2.19 `admin_query_def_flow_control_rule`
 
 Query the default flow control rule.
 
@@ -676,7 +677,7 @@ __Request__
 |qryPriorityId|no| the consuming priority Id. It is a composed field `A0B` with default value 301,<br> the value of A,B is [1, 2, 3] which means file, backup memory, and main memory respectively|int|
 |createUser|yes|the creator|String|
 
-### `admin_set_group_flow_control_rule`
+### 2.20 `admin_set_group_flow_control_rule`
 
 Set the group flow control rule.
 
@@ -692,7 +693,7 @@ __Request__
 |createUser|yes|the creator|String|
 |createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
 
-### `admin_upd_group_flow_control_rule`
+### 2.21 `admin_upd_group_flow_control_rule`
 
 Update the group flow control rule.
 
@@ -709,7 +710,7 @@ __Request__
 |createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
 
 
-### `admin_rmv_group_flow_control_rule`
+### 2.22 `admin_rmv_group_flow_control_rule`
 
 Remove the group flow control rule.
 
@@ -721,7 +722,7 @@ __Request__
 |confModAuthToken|yes|the authorized key for configuration update|String|
 |createUser|yes|the creator|String|
 
-### `admin_query_group_flow_control_rule`
+### 2.23 `admin_query_group_flow_control_rule`
 
 Query the group flow control rule.
 
@@ -734,7 +735,7 @@ __Request__
 |qryPriorityId|no| the consuming priority Id. It is a composed field `A0B` with default value 301, <br>the value of A,B is [1, 2, 3] which means file, backup memory, and main memory respectively|int|
 |createUser|yes|the creator|String|
 
-### `admin_add_consume_group_setting`
+### 2.24 `admin_add_consume_group_setting`
 
 Set whether to allow a consume group to consume from a specified offset, and the broker-to-client ratio used when starting the consume group.
 
@@ -749,7 +750,7 @@ __Request__
 |createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_consume_group_setting`
+### 2.25 `admin_query_consume_group_setting`
 
 Query the consume group setting to check whether to allow consume group to consume via specific offset, and the ratio of broker and client when starting the consume group.
 
@@ -762,7 +763,7 @@ __Request__
 |allowedBClientRate|no|the ratio of the number of the consuming target's broker against the number of client in consuming group|int|
 |createUser|yes|the creator|String|
 
-### `admin_upd_consume_group_setting`
+### 2.26 `admin_upd_consume_group_setting`
 
 Update the consume group setting for whether to allow consume group to consume via specific offset, and the ratio of broker and client when starting the consume group.
 
@@ -777,7 +778,7 @@ __Request__
 |modifyDate|yes|the modifying date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_del_consume_group_setting`
+### 2.27 `admin_del_consume_group_setting`
 
 Delete the consume group setting for whether to allow consume group to consume via specific offset, and the ratio of broker and client when starting the consume group.
 
@@ -790,9 +791,9 @@ __Request__
 |modifyDate|yes|the modifying date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-## Master subscriber relation API
+## 3 Master subscriber relation API
 
-1. Query consumer group subscription information
+### 3.1 Query consumer group subscription information
 
 Url ` http://127.0.0.1:8080/webapi.htm?type=op_query&method=admin_query_sub_info&topicName=test&consumeGroup=xxx `
 
@@ -811,7 +812,7 @@ response:
 }									
 ```
 
-2. Query consumer group detailed subscription information
+### 3.2 Query consumer group detailed subscription information
 
 Url `http://127.0.0.1:8080/webapi.htm?type=op_query&method=admin_query_consume_group_detail&consumeGroup=test_25`
 
@@ -836,9 +837,9 @@ response:
 }									
 ```
 
-## Broker operation API
+## 4 Broker operation API
 
-### `admin_snapshot_message`
+### 4.1 `admin_snapshot_message`
 
 Check whether data is currently being transferred under the broker's topic, and what the content is.
 
@@ -852,7 +853,7 @@ __Request__
 |partitionId|yes|the partition ID which must exists|int|
 |filterConds|yes|the tid value for filtering|String|
 
-### `admin_manual_set_current_offset`
+### 4.2 `admin_manual_set_current_offset`
 
 Modify the offset value of consuming group under current broker. The new value will be persisted to ZK.
 
@@ -867,7 +868,7 @@ __Request__
 |partitionId|yes|the partition ID which must exists|int|
 |manualOffset|yes|the offset to be modified, it must be a valid value|long|
 
-### `admin_query_group_offset`
+### 4.3 `admin_query_group_offset`
 
 Query the offset of consuming group under current broker.
 
@@ -880,7 +881,7 @@ __Request__
 |partitionId|yes|the partition ID which must exists|int|
 |requireRealOffset|no|whether to check real offset on ZK, default false|Boolean|
 
-### `admin_query_broker_all_consumer_info`
+### 4.4 `admin_query_broker_all_consumer_info`
 
 Query consumer info of the specific consume group on the broker.
 
@@ -890,7 +891,7 @@ __Request__
 |---|---|---|---|
 |groupName|yes|the group name|String|
 
-### `admin_query_broker_all_store_info`
+### 4.5 `admin_query_broker_all_store_info`
 
 Query store info of the specific topic on the broker.
 
@@ -900,7 +901,7 @@ __Request__
 |---|---|---|---|
 |topicName|yes|the topic name|String|
 
-### `admin_query_broker_memstore_info`
+### 4.6 `admin_query_broker_memstore_info`
 
 Query memory store info of the specific topic on the broker.
 
diff --git a/docs/en-us/modules/tubemq/producer_example.md b/docs/en-us/modules/tubemq/producer_example.md
index 34d551c..9849c2f 100644
--- a/docs/en-us/modules/tubemq/producer_example.md
+++ b/docs/en-us/modules/tubemq/producer_example.md
@@ -2,14 +2,16 @@
 title: Producer Example - Apache InLong's TubeMQ module
 ---
 
-## Producer Example
+## 1 Producer Example
   TubeMQ provides two ways to initialize session factory, TubeSingleSessionFactory and TubeMultiSessionFactory:
  - TubeSingleSessionFactory creates only one session in its lifecycle; this is very useful in streaming scenarios.
   - TubeMultiSessionFactory creates new session on every call.
 
-1. TubeSingleSessionFactory
-   - Send Message Synchronously
+### 1.1 TubeSingleSessionFactory
+#### 1.1.1 Send Message Synchronously
+
     ```java
+    
     public final class SyncProducerExample {
     
         public static void main(String[] args) throws Throwable {
@@ -31,7 +33,7 @@ title: Producer Example - Apache InLong's TubeMQ module
     }
     ```
      
-   - Send Message Asynchronously
#### 1.1.2 Send Message Asynchronously
     ```java
     public final class AsyncProducerExample {
      
@@ -65,7 +67,7 @@ title: Producer Example - Apache InLong's TubeMQ module
     }
     ```
      
-   - Send Message With Attributes
+#### 1.1.3 Send Message With Attributes
     ```java
     public final class ProducerWithAttributeExample {
      
@@ -91,7 +93,7 @@ title: Producer Example - Apache InLong's TubeMQ module
     }
     ```
      
-- TubeMultiSessionFactory
+### 1.2 TubeMultiSessionFactory
 
     ```java
     public class MultiSessionProducerExample {
@@ -146,3 +148,5 @@ title: Producer Example - Apache InLong's TubeMQ module
         }
     }
     ```
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/docs/en-us/modules/tubemq/quick_start.md b/docs/en-us/modules/tubemq/quick_start.md
index 315ad69..3821050 100644
--- a/docs/en-us/modules/tubemq/quick_start.md
+++ b/docs/en-us/modules/tubemq/quick_start.md
@@ -2,13 +2,13 @@
 title: Quick Start - Apache InLong's TubeMQ module
 ---
 
-## Build TubeMQ
+## 1 Build TubeMQ
 
-### Prerequisites
+### 1.1 Prerequisites
 - Java JDK 1.8
 - Maven 3.3+
 
-### Build Distribution Tarball
+### 1.2 Build Distribution Tarball
 - Compile and Package
 ```bash
 mvn clean package -DskipTests
@@ -30,7 +30,7 @@ After the build, please go to `tubemq-server/target`. You can find the
 **apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz** file. It is the TubeMQ deployment package, which includes
 scripts, configuration files, dependency jars and web GUI code.
 
-### Setting Up Your IDE
+### 1.3 Setting Up Your IDE
 If you want to build and debug source code in IDE, go to the project root, and run
 ```bash
 mvn compile
@@ -45,9 +45,9 @@ This command will generate the Java source files from the `protoc` files, the ge
 </configuration>
 ```
 
-## Deploy and Start
+## 2 Deploy and Start
 
-### Configuration Example
+### 2.1 Configuration Example
 There're two components in the cluster: **Master** and **Broker**. Master and Broker
 can be deployed on the same server or different servers. In this example, we setup our cluster
 like this, and all services run on the same node. Zookeeper should be set up in your environment as well.
@@ -57,7 +57,7 @@ like this, and all services run on the same node. Zookeeper should be setup in y
 | Broker | 8123 | 8124 | 8081 | Message is stored at /stage/msg_data |
 | Zookeeper | 2181 | | | Offset is stored at /tubemq |
 
-### Prerequisites
+### 2.2 Prerequisites
 - ZooKeeper Cluster
 - [apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz](download/download.md) package file
 
@@ -71,7 +71,7 @@ After you extract the package file, here's the folder structure.
 └── resources
 ```
 
-### Configure Master
+### 2.3 Configure Master
 You can change configurations in `conf/master.ini` according to cluster information.
 - Master IP and Port
 ```ini
@@ -116,7 +116,7 @@ the introduction of availability level.
 **Tips**: Please note that the Master servers' clocks should be synchronized.
 
 
-### Configure Broker
+### 2.4 Configure Broker
 You can change configurations in `conf/broker.ini` according to cluster information.
 - Broker IP and Port
 ```ini
@@ -143,7 +143,7 @@ zkNodeRoot=/tubemq
 zkServerAddr=localhost:2181             // multi zookeeper addresses can separate with ","
 ```
 
-### Start Master
+### 2.5 Start Master
 Please go to the `bin` folder and run this command to start
 the master service.
 ```bash
@@ -155,7 +155,7 @@ web GUI now.
 
 ![TubeMQ Console GUI](img/tubemq-console-gui.png)
 
-#### Configure Broker Metadata
+#### 2.5.1 Configure Broker Metadata
 Before we start a broker service, we need to configure it on master web GUI first. Go to the `Broker List` page, click `Add Single Broker`, and input the new broker information.
 
 ![Add Broker 1](img/tubemq-add-broker-1.png)
@@ -169,7 +169,7 @@ Click the online link to activate the new added broker.
 
 ![Add Broker 2](img/tubemq-add-broker-2.png)
 
-### Start Broker
+### 2.6 Start Broker
 Please go to the `bin` folder and run this command to start the broker service
 ```bash
 ./tubemq.sh broker start
@@ -181,8 +181,8 @@ After the sub-state of the broker changed to `idle`, we can add topics to that b
 
 ![Add Broker 3](img/tubemq-add-broker-3.png)
 
-## Quick Start
-### Add Topic
+## 3 Quick Start
+### 3.1 Add Topic
 We can add or manage the cluster topics on the web GUI. To add a new topic, go to the
 topic list page and click the add new topic button
 
@@ -208,10 +208,10 @@ that the topic publish/subscribe state is active now.
 
 Now we can use the topic to send messages.
 
-### Run Example
+### 3.2 Run Example
 Now we can use the `demo` topic created before to test our cluster.
 
-- Produce Messages
+#### 3.2.1 Produce Messages
 
 Please don't forget to replace `YOUR_MASTER_IP:port` with your server IP and port, and start the producer.
 
@@ -223,7 +223,7 @@ cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 From the log, we can see the message is sent out.
 ![Demo 1](img/tubemq-send-message.png)
 
-- Consume Messages
+#### 3.2.2 Consume Messages
 
 Please don't forget to replace `YOUR_MASTER_IP:port` with your server IP and port, then start the consumer.
 ```bash
@@ -234,7 +234,7 @@ cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 From the log, we can see the message was received by the consumer.
 ![Demo 2](img/tubemq-consume-message.png)
 
-## The End
+## 4 The End
 Here, the compilation, deployment, system configuration, startup, production and consumption of TubeMQ have been completed. If you need to understand more in-depth content, please check the relevant content in "TubeMQ HTTP API" and make the corresponding configuration settings.
 
 ---
diff --git a/docs/en-us/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md b/docs/en-us/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
index 45916f6..67a0a88 100644
--- a/docs/en-us/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
+++ b/docs/en-us/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
@@ -1,14 +1,14 @@
 # TubeMQ VS Kafka性能对比测试总结
 
-## 背景
+## 1 背景
 TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思想源于[Apache Kafka](http://kafka.apache.org/)。在实现上,则完全采取自适应的方式,结合实战做了很多优化及研发工作,如分区管理、分配机制和全新节点通讯流程,自主开发高性能的底层RPC通讯模块等。
 这些实现使得TubeMQ在保证实时性和一致性的前提下,具有很好的健壮性及更高的吞吐能力。结合目前主流消息中间件使用情况,以Kafka为参照做性能对比测试,对比常规应用场景下两套系统性能。
 
-## 测试场景方案
+## 2 测试场景方案
 如下是我们根据实际应用场景设计的测试方案:
 ![](img/perf_scheme.png)
 
-## 测试结论
+## 3 测试结论
 用"复仇者联盟"里的角色来形容:
 
 角色|测试场景|要点
@@ -24,8 +24,8 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 3. 在过滤消费时,TubeMQ可以极大地降低服务端的网络出流量,同时还会因过滤消费消耗的资源少于全量消费,反过来促进TubeMQ吞吐量提升;kafka无服务端过滤,出流量与全量消费一致,流量无明显的节约;
 4. 资源消耗方面各有差异:TubeMQ由于采用顺序写随机读,CPU消耗很大,Kafka采用顺序写块读,CPU消耗很小,但其他资源,如文件句柄、网络连接等消耗非常的大。在实际的SAAS模式下的运营环境里,Kafka会因为zookeeper依赖出现系统瓶颈,会因生产、消费、Broker众多,受限制的地方会更多,比如文件句柄、网络连接数等,资源消耗会更大;
 
-## 测试环境及配置
-###【软件版本及部署环境】
+## 4 测试环境及配置
+### 4.1 【软件版本及部署环境】
 
 **角色**|**TubeMQ**|**Kafka**
 :---:|---|---
@@ -36,7 +36,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 **Producer**|1台M10 + 1台CG1|1台M10 + 1台CG1
 **Consumer**|6台TS50万兆机|6台TS50万兆机
 
-###【Broker硬件机型配置】
+### 4.2 【Broker硬件机型配置】
 
 **机型**|配置|**备注**
 :---:|---|---
@@ -44,7 +44,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 **BX1-10G**|SA5212M5(6133\*2/16G\*16/4T\*12/10GE\*2) Pcs|                                     
 **CG1-10G**|CG1-10G\_6.0.2.12\_RM760-FX(6133\*2/16G\*16/5200-480G\*6 RAID/10GE\*2)-ODM Pcs |  
 
-###【Broker系统配置】
+### 4.3 【Broker系统配置】
 
 | **配置项**            | **TubeMQ Broker**     | **Kafka Broker**      |
 |:---:|---|---|
@@ -53,25 +53,25 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 | **配置文件**          | 在tubemq-3.8.0版本broker.ini配置文件上改动: consumerRegTimeoutMs=35000<br>tcpWriteServiceThread=50<br>tcpReadServiceThread=50<br>primaryPath为SATA盘日志目录|kafka_2.11-0.10.2.0版本server.properties配置文件上改动:<br>log.flush.interval.messages=5000<br>log.flush.interval.ms=10000<br>log.dirs为SATA盘日志目录<br>socket.send.buffer.bytes=1024000<br>socket.receive.buffer.bytes=1024000<br>socket.request.max.bytes=2147483600<br>log.segment.bytes=1073741824<br>num.network.threads=25<br>num.io.threads=48< [...]
 | **其它**             | 除测试用例里特别指定,每个topic创建时设置:<br>memCacheMsgSizeInMB=5<br>memCacheFlushIntvl=20000<br>memCacheMsgCntInK=10 <br>unflushThreshold=5000<br>unflushInterval=10000<br>unFlushDataHold=5000 | 客户端代码里设置:<br>生产端:<br>props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br>props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br>props.put("linger.ms", "200");<br>props.put("block.on.buffer.full", false);<br>props.pu [...]
               
-## 测试场景及结论
+## 5 测试场景及结论
 
-### 场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能
+### 5.1 场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能
  ![](img/perf_scenario_1.png)
 
-####【结论】
+#### 5.1.1 【结论】
 
 在单topic不同分区的情况下:
 1. TubeMQ吞吐量不随分区变化而变化,同时TubeMQ属于顺序写随机读模式,单实例情况下吞吐量要低于Kafka,CPU要高于Kafka;
 2. Kafka随着分区增多吞吐量略有下降,CPU使用率很低;
 3. TubeMQ分区由于是逻辑分区,增加分区不影响吞吐量;Kafka分区为物理文件的增加,但增加分区入出流量反而会下降;
 
-####【指标】
+#### 5.1.2 【指标】
  ![](img/perf_scenario_1_index.png)
 
-### 场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况
+### 5.2 场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况
  ![](img/perf_scenario_2.png)
 
-####【结论】
+#### 5.2.1 【结论】
 
 从场景一和场景二的测试数据结合来看:
 
@@ -81,7 +81,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 4. TubeMQ按照与Kafka等同的方式增加实例(物理文件)后,吞吐量随之提升,在4个实例的时候测试效果达到并超过Kafka
     5个分区的状态;TubeMQ可以根据业务或者系统配置需要,调整数据读取方式,可以动态提升系统的吞吐量;Kafka随着分区增加,入流量有下降;
 
-####【指标】
+#### 5.2.2 【指标】
 
 **注1 :** 如下场景中,均为单Topic测试下不同分区或实例、不同读取模式场景下的测试,单条消息包长均为1K;
 
@@ -89,10 +89,10 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 读取模式通过admin\_upd\_def\_flow\_control\_rule设置qryPriorityId为对应值.
  ![](img/perf_scenario_2_index.png)
 
-### 场景三:多topic场景,固定消息包大小、实例及分区数,考察100、200、500、1000个topic场景下TubeMQ和Kafka性能情况
+### 5.3 场景三:多topic场景,固定消息包大小、实例及分区数,考察100、200、500、1000个topic场景下TubeMQ和Kafka性能情况
  ![](img/perf_scenario_3.png)
 
-####【结论】
+#### 5.3.1 【结论】
 
 按照多Topic场景下测试:
 
@@ -103,25 +103,25 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
     Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题;
 4.  数据对比来看,TubeMQ相比Kafka运行更稳定,吞吐量以稳定形势呈现,长时间跑吞吐量不下降,资源占用少,但CPU的占用需要后续版本解决;
 
-####【指标】
+#### 5.3.2 【指标】
 
 **注:** 如下场景中,包长均为1K,分区数均为10。
  ![](img/perf_scenario_3_index.png)
 
-### 场景四:100个topic,一入一全量出五份部分过滤出:一份全量Topic的Pull消费;过滤消费采用5个不同的消费组,从同样的20个Topic中过滤出10%消息内容
+### 5.4 场景四:100个topic,一入一全量出五份部分过滤出:一份全量Topic的Pull消费;过滤消费采用5个不同的消费组,从同样的20个Topic中过滤出10%消息内容
 
-####【结论】
+#### 5.4.1 【结论】
 
 1.  TubeMQ采用服务端过滤的模式,出流量指标与入流量存在明显差异;
 2.  TubeMQ服务端过滤提供了更多的资源给到生产,生产性能比非过滤情况有提升;
 3.  Kafka采用客户端过滤模式,入流量没有提升,出流量差不多是入流量的2倍,同时入出流量不稳定;
 
-####【指标】
+#### 5.4.2 【指标】
 
 **注:** 如下场景中,topic为100,包长均为1K,分区数均为10
  ![](img/perf_scenario_4_index.png)
 
-### 场景五:TubeMQ、Kafka数据消费时延比对
+### 5.5 场景五:TubeMQ、Kafka数据消费时延比对
 
 | 类型   | 时延            | Ping时延                |
 |---|---|---|
@@ -130,35 +130,35 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 
 备注:TubeMQ的消费端存在一个等待队列处理消息追平生产时的数据未找到的情况,缺省有200ms的等待时延。测试该项时,TubeMQ消费端要调整拉取时延(ConsumerConfig.setMsgNotFoundWaitPeriodMs())为10ms,或者设置频控策略为10ms。
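 
 例如,按正文建议将拉取等待时延调整为10ms的设置示意如下(Master地址与消费组名为示例值,构造函数签名以所用SDK版本为准):
 
 ```java
 // 时延测试前,将“消息未找到”时的等待时延由缺省200ms调整为10ms
 ConsumerConfig consumerConfig = new ConsumerConfig(
         "test_1.domain.com:8080", "perf_test_group");
 consumerConfig.setMsgNotFoundWaitPeriodMs(10);
 ```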
 
-### 场景六:调整Topic配置的内存缓存大小(memCacheMsgSizeInMB)对吞吐量的影响
+### 5.6 场景六:调整Topic配置的内存缓存大小(memCacheMsgSizeInMB)对吞吐量的影响
 
-####【结论】
+#### 5.6.1 【结论】
 
 1.  TubeMQ调整Topic的内存缓存大小能对吞吐量形成正面影响,实际使用时可以根据机器情况合理调整;
 2.  从实际使用情况看,内存大小设置并不是越大越好,需要合理设置该值;
 
-####【指标】
+#### 5.6.2 【指标】
 
  **注:** 如下场景中,消费方式均为读取内存(301)的PULL消费,单条消息包长均为1K
  ![](img/perf_scenario_6_index.png)
  
 
-### 场景七:消费严重滞后情况下两系统的表现
+### 5.7 场景七:消费严重滞后情况下两系统的表现
 
-####【结论】
+#### 5.7.1 【结论】
 
 1.  消费严重滞后情况下,TubeMQ和Kafka都会因磁盘IO飙升使得生产消费受阻;
 2.  在带SSD系统里,TubeMQ可以通过SSD转存储消费来换取部分生产和消费入流量;
 3.  按照版本计划,目前TubeMQ的SSD消费转存储特性不是最终实现,后续版本中将进一步改进,使其达到最合适的运行方式;
 
-####【指标】
+#### 5.7.2 【指标】
  ![](img/perf_scenario_7.png)
 
 
-### 场景八:评估多机型情况下两系统的表现
+### 5.8 场景八:评估多机型情况下两系统的表现
  ![](img/perf_scenario_8.png)
       
-####【结论】
+#### 5.8.1 【结论】
 
 1.  TubeMQ在BX1机型下较TS60机型有更高的吞吐量,同时因IO util达到瓶颈无法再提升,吞吐量在CG1机型下又较BX1达到更高的指标值;
 2.  Kafka在BX1机型下系统吞吐量不稳定,且较TS60下测试的要低,在CG1机型下系统吞吐量达到最高,万兆网卡跑满;
@@ -166,29 +166,30 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 4.  在SSD盘存储条件下,Kafka性能指标达到最好,TubeMQ指标不及Kafka;
 5.  CG1机型数据存储盘较小(仅2.2T),RAID 10配置下90分钟以内磁盘即被写满,无法测试两系统长时间运行情况。
 
-####【指标】
+#### 5.8.2 【指标】
 
 **注1:** 如下场景Topic数均配置500个topic,10个分区,消息包大小为1K字节;
 
 **注2:** TubeMQ采用的是301内存读取模式消费;
  ![](img/perf_scenario_8_index.png)
 
-## 附录1 不同机型下资源占用情况图:
-###【BX1机型测试】
+## 6 附录
+### 6.1 附录1 不同机型下资源占用情况图:
+#### 6.1.1 【BX1机型测试】
 ![](img/perf_appendix_1_bx1_1.png)
 ![](img/perf_appendix_1_bx1_2.png)
 ![](img/perf_appendix_1_bx1_3.png)
 ![](img/perf_appendix_1_bx1_4.png)
 
-###【CG1机型测试】
+#### 6.1.2 【CG1机型测试】
 ![](img/perf_appendix_1_cg1_1.png)
 ![](img/perf_appendix_1_cg1_2.png)
 ![](img/perf_appendix_1_cg1_3.png)
 ![](img/perf_appendix_1_cg1_4.png)
 
-## 附录2 多Topic测试时的资源占用情况图:
+### 6.2 附录2 多Topic测试时的资源占用情况图:
 
-###【100个topic】
+#### 6.2.1 【100个topic】
 ![](img/perf_appendix_2_topic_100_1.png)
 ![](img/perf_appendix_2_topic_100_2.png)
 ![](img/perf_appendix_2_topic_100_3.png)
@@ -199,7 +200,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_100_8.png)
 ![](img/perf_appendix_2_topic_100_9.png)
  
-###【200个topic】
+#### 6.2.2 【200个topic】
 ![](img/perf_appendix_2_topic_200_1.png)
 ![](img/perf_appendix_2_topic_200_2.png)
 ![](img/perf_appendix_2_topic_200_3.png)
@@ -210,7 +211,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_200_8.png)
 ![](img/perf_appendix_2_topic_200_9.png)
 
-###【500个topic】
+#### 6.2.3 【500个topic】
 ![](img/perf_appendix_2_topic_500_1.png)
 ![](img/perf_appendix_2_topic_500_2.png)
 ![](img/perf_appendix_2_topic_500_3.png)
@@ -221,7 +222,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_500_8.png)
 ![](img/perf_appendix_2_topic_500_9.png)
 
-###【1000个topic】
+#### 6.2.4 【1000个topic】
 ![](img/perf_appendix_2_topic_1000_1.png)
 ![](img/perf_appendix_2_topic_1000_2.png)
 ![](img/perf_appendix_2_topic_1000_3.png)
diff --git a/docs/zh-cn/modules/tubemq/architecture.md b/docs/zh-cn/modules/tubemq/architecture.md
index 9185d56..cc48c29 100644
--- a/docs/zh-cn/modules/tubemq/architecture.md
+++ b/docs/zh-cn/modules/tubemq/architecture.md
@@ -1,8 +1,8 @@
 ---
-title: 架构介绍 - Apache InLong TubeMQ模块
+架构介绍 - Apache InLong TubeMQ模块
 ---
 
-## Apache InLong TubeMQ模块的架构 
+## 1 Apache InLong TubeMQ模块的架构 
 经过多年演变,TubeMQ集群分为如下5个部分:
 ![](img/sys_structure.png)
 
@@ -17,7 +17,7 @@ title: 架构介绍 - Apache InLong TubeMQ模块
 - **Zookeeper**: 负责offset存储的zk部分,该部分功能已弱化到仅做offset的持久化存储,考虑到接下来的多节点副本功能该模块暂时保留。
 
 
-## Apache InLong TubeMQ模块的系统特点
+## 2 Apache InLong TubeMQ模块的系统特点
 - **纯Java实现语言**:
 TubeMQ采用纯Java语言开发,便于开发人员快速熟悉项目及问题处理;
 
@@ -52,19 +52,19 @@ TubeMQ采用连接复用模式,减少连接资源消耗;通过逻辑分区
 基于业务使用上的便利性,我们简化了客户端逻辑,使其做到最小的功能集合,我们采用基于响应消息的接收质量统计算法来自动剔除坏的Broker节点,基于首次使用时作连接尝试来避免大数据量发送时发送受阻(具体内容见后面章节介绍)。
 
 
-## Broker文件存储方案改进 
+## 3 Broker文件存储方案改进 
 以磁盘为数据持久化媒介的系统都面临各种因磁盘问题导致的系统性能问题,TubeMQ系统也不例外,性能提升很大程度上是在解决消息数据如何读写及存储的问题。在这个方面TubeMQ进行了比较多的改进,我们采用存储实例来作为最小的Topic数据管理单元,每个存储实例包括一个文件存储块和一个内存缓存块,每个Topic可以分配多个存储实例:
 
-### 文件存储块
+### 3.1 文件存储块
  TubeMQ的磁盘存储方案类似Kafka,但又不尽相同,如下图示,每个文件存储块由一个索引文件和一个数据文件组成,partiton为数据文件里的逻辑分区,每个Topic单独维护管理文件存储块的相关机制,包括老化周期,partition个数,是否可读可写等。
 ![](img/store_file.png)
 
-### 内存缓存块
+### 3.2 内存缓存块
  在文件存储块基础上,我们额外增加了一个单独的内存缓存块,即在原有写磁盘基础上增加一块内存,隔离硬盘的慢速影响,数据先刷到内存缓存块,然后由内存缓存块批量地将数据刷到磁盘文件。
 ![](img/store_mem.png)
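 
 下面用一段通用Java示意说明“先写内存缓存、达到阈值批量刷盘”的思路(非TubeMQ实际实现,阈值字段对应正文提到的memCacheMsgCntInK、memCacheMsgSizeInMB一类配置):
 
 ```java
 import java.util.ArrayList;
 import java.util.List;
 
 // 内存缓存块思路的通用示意:写入先进缓存,满足条数或字节阈值后批量刷盘,
 // 以此隔离硬盘的慢速影响(省略并发与IO细节)
 public final class MemCacheBlockSketch {
     private final List<byte[]> cache = new ArrayList<>();
     private long cachedBytes = 0;
     private final int flushMsgCount;   // 条数阈值,类似memCacheMsgCntInK
     private final long flushBytes;     // 字节阈值,类似memCacheMsgSizeInMB
 
     public MemCacheBlockSketch(int flushMsgCount, long flushBytes) {
         this.flushMsgCount = flushMsgCount;
         this.flushBytes = flushBytes;
     }
 
     public synchronized void append(byte[] msg) {
         cache.add(msg);
         cachedBytes += msg.length;
         if (cache.size() >= flushMsgCount || cachedBytes >= flushBytes) {
             flushToDataFile();
         }
     }
 
     private void flushToDataFile() {
         // 此处批量写入数据文件并更新索引文件(示意,省略实现)
         cache.clear();
         cachedBytes = 0;
     }
 }
 ```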
 
 
-## Apache InLong TubeMQ模块的客户端演进: ##
+## 4 Apache InLong TubeMQ模块的客户端演进:
 业务与TubeMQ接触得最多的是消费侧,怎样更适应业务特点、更方便业务使用我们在这块做了比较多的改进:
 
 - **数据拉取模式支持Push、Pull:**
diff --git a/docs/zh-cn/modules/tubemq/client_rpc.md b/docs/zh-cn/modules/tubemq/client_rpc.md
index 7abae1e..6ee962c 100644
--- a/docs/zh-cn/modules/tubemq/client_rpc.md
+++ b/docs/zh-cn/modules/tubemq/client_rpc.md
@@ -1,10 +1,8 @@
 ---
-title: 客户端RPC - Apache InLong TubeMQ模块
+客户端RPC - Apache InLong TubeMQ模块
 ---
 
-# Apache InLong TubeMQ模块的RPC定义:
-
-## 总体介绍:
+## 1 总体介绍:
 
 这部分介绍内容在/org/apache/inlong/tubemq/corerpc模块下可以找到对应实现,Apache InLong TubeMQ模块的各个节点间(Client、Master、Broker)通过TCP协议长连接交互,其消息采用的是 【二进制 + Protobuf编码】组合方式进行定义,如下图示:
 ![](img/client_rpc/rpc_bytes_def.png)
@@ -14,7 +12,7 @@ title: 客户端RPC - Apache InLong TubeMQ模块
 为什么会以listSize [\&lt;len\&gt;\&lt;data\&gt;]形式定义pb数据内容?因为在TubeMQ的这个实现中,序列化后的PB数据是通过ByteBuffer对象保存的,Java里ByteBuffer存在一个最大块长8196,超过单个块长度的PB消息内容就需要用多个ByteBuffer保存,序列化到TCP消息时候,这块没有统计总长,直接按照PB序列化的ByteBuffer列表写入到了消息中。 **在多语言实现时候,这块需要特别注意:** 需要将PB数据内容序列化成块数组(pb编解码里有对应支持)。
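 
 下面按正文描述的帧布局给出一个编码示意(仅说明结构;起始标记常量RPC_PROTOCOL_BEGIN_TOKEN此处为假设值,实际取值以corerpc模块源码为准):
 
 ```java
 import java.io.ByteArrayOutputStream;
 import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.List;
 
 // 按【4字节msgToken + 4字节serialNo + listSize个<len><data>块】布局编码TCP帧的最小示意
 public final class RpcFrameSketch {
     private static final int RPC_PROTOCOL_BEGIN_TOKEN = 0x12345678; // 假设值
     private static final int MAX_BLOCK_LEN = 8196; // 正文提到的单块长度上限
 
     /** 将序列化后的PB字节流切块并编码为一帧 */
     public static byte[] encodeFrame(int serialNo, byte[] pbData) throws IOException {
         List<byte[]> blocks = new ArrayList<>();
         for (int pos = 0; pos < pbData.length; pos += MAX_BLOCK_LEN) {
             int len = Math.min(MAX_BLOCK_LEN, pbData.length - pos);
             byte[] block = new byte[len];
             System.arraycopy(pbData, pos, block, 0, len);
             blocks.add(block);
         }
         ByteArrayOutputStream out = new ByteArrayOutputStream();
         ByteBuffer header = ByteBuffer.allocate(12);
         header.putInt(RPC_PROTOCOL_BEGIN_TOKEN); // 4字节msgToken:校验协议合法性
         header.putInt(serialNo);                 // 4字节serialNo:关联请求与响应
         header.putInt(blocks.size());            // listSize:PB数据块个数
         out.write(header.array());
         for (byte[] block : blocks) {            // 每块按<len><data>写入
             ByteBuffer lenBuf = ByteBuffer.allocate(4);
             lenBuf.putInt(block.length);
             out.write(lenBuf.array());
             out.write(block);
         }
         return out.toByteArray();
     }
 }
 ```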
 
 
-## PB格式编码:
+## 2 PB格式编码:
 
 PB格式编码分为RPC框架定义,到Master的消息编码和到Broker的消息编码三个部分,大家采用protobuf直接编译就可以获得不同语言的编解码,使用起来非常的方便:
 ![](img/client_rpc/rpc_proto_def.png)
@@ -29,9 +27,9 @@ RPC.proto定义了6个结构,分为2大类:请求消息与响应消息,响
 ![](img/client_rpc/rpc_header_fill.png)
 
 
-## 客户端的PB请求响应交互图:
+## 3 客户端的PB请求响应交互图:
 
-**Producer交互图**:
+### 3.1 Producer交互图:
 
 Producer在系统中一共4对指令,到master是要做注册,心跳,退出操作;到broker只有发送消息:
 ![](img/client_rpc/rpc_producer_diagram.png)
@@ -47,14 +45,14 @@ Producer在系统中一共4对指令,到master是要做注册,心跳,退
 4. Producer到Broker的连接要注意异常检测,长期运行场景,要能检测出Broker坏点,以及长期不发消息,要将到Broker的连接回收,避免运行不稳定。
 
 
-**Consumer交互图**:
+### 3.2 Consumer交互图:
 
 Consumer一共7对指令,到master是要做注册,心跳,退出操作;到broker包括注册,注销,心跳,拉取消息,确认消息4对,其中到Broker的注册注销是同一个命令,用了不同的状态码表示:
 ![](img/client_rpc/rpc_consumer_diagram.png)
 
 从上图我们可以看到,Consumer首先要注册到Master,但注册到Master时并没有立即获取到元数据信息,原因是TubeMQ采用的是服务器端负载均衡模式,客户端需要等待服务器派发消费分区信息;Consumer到Broker需要进行注册注销操作,原因在于消费时分区是独占消费,即同一时刻同一分区只能被同组的一个消费者消费,为了解决这个问题,需要客户端进行注册,获得分区的消费权限;消息拉取与消费确认需要成对出现,虽然协议支持多次拉取后最后一次统一确认,但客户端可能因超时丢失分区的消费权限,从而触发数据回滚重复消费,数据积攒越多重复消费的量就越大,所以按照1:1的比例提交比较合适。
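 
 按上述1:1拉取确认的建议,消费侧的处理可用如下片段示意(pullConsumer为已完成订阅的PullMessageConsumer实例,方法名以所用SDK版本为准):
 
 ```java
 // 拉取一批消息,处理完成后立即按1:1确认,避免超时丢失分区消费权限
 ConsumerResult result = pullConsumer.getMessage();
 if (result.isSuccess()) {
     List<Message> messageList = result.getMessageList();
     // ... 业务处理 ...
     pullConsumer.confirmConsume(result.getConfirmContext(), true);
 }
 ```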
 
-## 客户端功能集合:
+## 4 客户端功能集合:
 
 | **特性** | **Java** | **C/C++** | **Go** | **Python** | **Rust** | **备注** |
 | --- | --- | --- | --- | --- | --- | --- |
@@ -84,9 +82,9 @@ Consumer一共7对指令,到master是要做注册,心跳,退出操作;
 | 控制消费者拉取消息的频度 | ✅ | | | | | |
 
 
-## 客户端功能CaseByCase实现介绍:
+## 5 客户端功能CaseByCase实现介绍:
 
-**客户端与服务器端RPC交互过程**:
+### 5.1 客户端与服务器端RPC交互过程:
 
 ----------
 
@@ -94,7 +92,8 @@ Consumer一共7对指令,到master是要做注册,心跳,退出操作;
 
 如上图示,客户端要维持已发请求消息的本地保存,直到RPC超时,或者收到响应消息,响应消息通过请求发送时生成的SerialNo关联;从服务器端收到的Broker信息,以及Topic信息,SDK要保存在本地,并根据最新的返回信息进行更新,以及定期的上报给服务器端;SDK要维持到Master或者Broker的心跳,如果发现Master反馈注册超时错误时,要进行重注册操作;SDK要基于Broker进行连接建立,同一个进程不同对象之间,要允许业务进行选择,是支持按对象建立连接,还是按照进程建立连接。
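 
 在途请求通过SerialNo关联响应、超时清理的逻辑,可用如下示意实现说明(类名与结构均为说明用假设,非SDK实际代码):
 
 ```java
 import java.util.concurrent.*;
 
 // 以SerialNo为键缓存在途请求,收到响应或RPC超时后移除(示意实现)
 public final class PendingRequestTable {
     private final ConcurrentMap<Integer, CompletableFuture<byte[]>> inflight =
             new ConcurrentHashMap<>();
     private final ScheduledExecutorService timer =
             Executors.newSingleThreadScheduledExecutor();
 
     public CompletableFuture<byte[]> register(int serialNo, long timeoutMs) {
         CompletableFuture<byte[]> future = new CompletableFuture<>();
         inflight.put(serialNo, future);
         timer.schedule(() -> {                      // RPC超时后清理并置超时异常
             CompletableFuture<byte[]> f = inflight.remove(serialNo);
             if (f != null) {
                 f.completeExceptionally(new TimeoutException("rpc timeout"));
             }
         }, timeoutMs, TimeUnit.MILLISECONDS);
         return future;
     }
 
     public void onResponse(int serialNo, byte[] body) {
         CompletableFuture<byte[]> f = inflight.remove(serialNo);
         if (f != null) {
             f.complete(body);                       // 通过SerialNo关联回请求方
         }
     }
 }
 ```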
 
-**Producer到Master注册**:
+### 5.2 Producer到Master注册:
+
 ----------
 ![](img/client_rpc/rpc_producer_register2M.png)
 
@@ -129,8 +128,10 @@ Java的SDK版本里ClientId = 节点IP地址(IPV4) + &quot;-&quot; + 进程I
 **authAuthorizedToken**:认证通过的授权Token,如果有该字段数据,要保存,并且在后续访问Master及Broker时携带该字段信息;如果后续心跳时该字段有变更,则需要更新本地缓存的该字段数据;
 
 
-**Producer到Master保持心跳**:
+### 5.3 Producer到Master保持心跳:
+
 ----------
+
 ![](img/client_rpc/rpc_producer_heartbeat2M.png)
 
 **topicInfos**:SDK发布的Topic对应的元数据信息,包括分区信息以及所在的Broker,具体解码方式如下,由于元数据非常的多,如果将对象数据原样透传所产生的出流量会非常的大,所以我们通过编码方式做了改进:
@@ -139,14 +140,18 @@ Java的SDK版本里ClientId = 节点IP地址(IPV4) + &quot;-&quot; + 进程I
 
 **requireAuth**:标识Master之前的授权访问码(authAuthorizedToken)过期,要求SDK下一次请求,进行用户名及密码的签名信息上报;
 
-**Producer到Master关闭退出**:
+### 5.4 Producer到Master关闭退出:
+
 ----------
+
 ![](img/client_rpc/rpc_producer_close2M.png)
 
 需要注意的是,如果认证开启,关闭会做认证,以避免外部干扰操作。
 
-**Producer到Broker发送消息**:
+### 5.5 Producer到Broker发送消息:
+
 ----------
+
 该部分的内容主要和Message的定义有关联,其中
 
 ![](img/client_rpc/rpc_producer_sendmsg2B.png)
@@ -161,8 +166,10 @@ Java的SDK版本里ClientId = 节点IP地址(IPV4) + &quot;-&quot; + 进程I
 
 **requireAuth**:到Broker进行数据生产的要求认证操作,考虑性能问题,目前未生效,发送消息里填写的authAuthorizedToken值以Master侧提供的值为准,并且随Master侧改变而改变。
 
-**分区负载均衡过程**:
+### 5.6 分区负载均衡过程:
+
 ----------
+
 Apache InLong TubeMQ模块目前采用的是服务器端负载均衡模式,均衡过程由服务器管理维护;后续版本会增加客户端负载均衡模式,形成2种模式共存的情况,由业务根据需要选择不同的均衡方式。
 
 **服务器端负载均衡过程如下**:
diff --git a/docs/zh-cn/modules/tubemq/clients_java.md b/docs/zh-cn/modules/tubemq/clients_java.md
index c41a50c..25d499a 100644
--- a/docs/zh-cn/modules/tubemq/clients_java.md
+++ b/docs/zh-cn/modules/tubemq/clients_java.md
@@ -1,52 +1,47 @@
 ---
-title: JAVA SDK API介绍 - Apache InLong TubeMQ模块
+JAVA SDK API介绍 - Apache InLong TubeMQ模块
 ---
 
-## **Apache InLong TubeMQ模块 Lib** **接口使用**
-
-------
-
 
+## 1 基础对象接口介绍:
 
-### **1. 基础对象接口介绍:**
-
-#### **a) MessageSessionFactory(消息会话工厂):**
+### 1.1 MessageSessionFactory(消息会话工厂):
 
 TubeMQ 采用MessageSessionFactory(消息会话工厂)来管理网络连接,又根据业务不同客户端是否复用连接细分为TubeSingleSessionFactory(单连接会话工厂)类和TubeMultiSessionFactory(多连接会话工厂)类2个部分,其实现逻辑大家可以从代码可以看到,单连接会话通过定义clientFactory静态类,实现了进程内不同客户端连接相同目标服务器时底层物理连接只建立一条的特征,多连接会话里定义的clientFactory为非静态类,从而实现同进程内通过不同会话工厂,创建的客户端所属的连接会话不同建立不同的物理连接。通过这种构造解决连接创建过多的问题,业务可以根据自身需要可以选择不同的消息会话工厂类,一般情况下我们使用单连接会话工厂类。
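 
 下面给出一个创建单连接会话工厂的最小Java示意(包名与构造函数签名以所用SDK版本为准,Master地址为示例值,地址串格式见下文MasterInfo说明):
 
 ```java
 import org.apache.inlong.tubemq.client.config.TubeClientConfig;
 import org.apache.inlong.tubemq.client.factory.MessageSessionFactory;
 import org.apache.inlong.tubemq.client.factory.TubeSingleSessionFactory;
 
 // 单连接会话工厂:同进程内连接相同目标服务器时只建立一条物理连接
 public final class SessionFactorySketch {
     public static void main(String[] args) throws Throwable {
         TubeClientConfig clientConfig = new TubeClientConfig(
                 "test_1.domain.com:8080,test_2.domain.com:8080,test_3.domain.com:8080");
         MessageSessionFactory sessionFactory =
                 new TubeSingleSessionFactory(clientConfig);
         // ... 通过sessionFactory创建Producer或Consumer ...
         sessionFactory.shutdown();
     }
 }
 ```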
 
  
 
-#### **b) MasterInfo:**
+### 1.2 MasterInfo:
 
 TubeMQ的Master地址信息对象,该对象的特点是支持配置多个Master地址,由于TubeMQ Master借助BDB的存储能力进行元数据管理,以及服务HA热切能力,Master的地址相应地就需要配置多条信息。该配置信息支持IP、域名两种模式,由于TubeMQ的HA是热切模式,客户端要保证到各个Master地址都是连通的。该信息在初始化TubeClientConfig类对象和ConsumerConfig类对象时使用,考虑到配置的方便性,我们将多条Master地址构造成“ip1:port1,ip2:port2,ip3:port3”格式并进行解析。
 
  
 
-#### **c) TubeClientConfig:**
+### 1.3 TubeClientConfig:
 
 MessageSessionFactory(消息会话工厂)初始化类,用来携带创建网络连接信息、客户端控制参数信息的对象类,包括RPC时长设置、Socket属性设置、连接质量检测参数设置、TLS参数设置、认证授权信息设置等信息。
 
  
 
-#### **d) ConsumerConfig:**
+### 1.4 ConsumerConfig:
 
 ConsumerConfig类是TubeClientConfig类的子类,它是在TubeClientConfig类基础上增加了Consumer类对象初始化时候的参数携带,因而在一个既有Producer又有Consumer的MessageSessionFactory(消息会话工厂)类对象里,会话工厂类的相关设置以MessageSessionFactory类初始化的内容为准,Consumer类对象按照创建时传递的初始化类对象为准。在consumer里又根据消费行为的不同分为Pull消费者和Push消费者两种,两种特有的参数通过参数接口携带“pull”或“push”不同特征进行区分。
 
  
 
-#### **e) Message:**
+### 1.5 Message:
 
 Message类是TubeMQ里传递的消息对象类,业务设置的data会从生产端原样传递给消息接收端,attribute内容是与TubeMQ系统共用的字段,业务填写的内容不会丢失和改写,但该字段有可能会新增TubeMQ系统填写的内容,并在后续的版本中,新增的TubeMQ系统内容有可能去掉而不被通知。该部分需要注意的是Message.putSystemHeader(final String msgType, final String msgTime)接口,该接口用来设置消息的消息类型和消息发送时间,msgType用于消费端过滤用,msgTime用做TubeMQ进行数据收发统计时消息时间统计维度用。
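 
 一个构造消息并填写系统头的片段示意如下(Topic名与msgTime取值均为示例,msgTime的具体格式要求以SDK文档为准):
 
 ```java
 // 构造消息并设置系统头:msgType供服务端过滤,msgTime供收发统计
 Message message = new Message("topic_1",
         "hello tubemq".getBytes(java.nio.charset.StandardCharsets.UTF_8));
 message.putSystemHeader("test_filter", "202107070951"); // 假设为yyyyMMddHHmm格式
 ```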
 
  
 
-#### **f) MessageProducer:**
+### 1.6 MessageProducer:
 
 消息生产者类,该类完成消息的生产,消息发送分为同步发送和异步发送两种接口,目前消息采用Round Robin方式发往后端服务器,后续这块将考虑按照业务指定的算法进行后端服务器选择方式进行生产。该类使用时需要注意的是,我们支持在初始化时候全量Topic指定的publish,也支持在生产过程中临时增加对新的Topic的publish,但临时增加的Topic不会立即生效,因而在使用新增Topic前,要先调用isTopicCurAcceptPublish接口查询该Topic是否已publish并且被服务器接受,否则有可能消息发送失败。
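 
 针对正文提到的临时新增Topic场景,调用顺序可用如下片段示意(messageProducer为已创建的生产者实例,轮询间隔为假设值):
 
 ```java
 // 运行期新增Topic:先publish,再确认服务器已接受,之后才发送消息
 messageProducer.publish("new_topic");
 while (!messageProducer.isTopicCurAcceptPublish("new_topic")) {
     Thread.sleep(100); // 等待Master下发该Topic的分区元数据(间隔为假设值)
 }
 ```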
 
  
 
-#### **g) MessageConsumer:**
+### 1.7 MessageConsumer:
 
 该类有两个子类PullMessageConsumer、PushMessageConsumer,通过这两个子类的包装,完成了对业务侧的Pull和Push语义。实际上TubeMQ是采用Pull模式与后端服务进行交互,为了便于业务的接口使用,我们进行了封装,大家可以看到其差别在于Push在启动时初始化了一个线程组,来完成主动的数据拉取操作。需要注意的地方在于:
 
@@ -60,19 +55,19 @@ Message类是TubeMQ里传递的消息对象类,业务设置的data会从生产
 
 
 
-### **2. 接口调用示例:**
+## 2 接口调用示例:
 
-#### **a) 环境准备:**
+### 2.1 环境准备:
 
 TubeMQ开源包org.apache.inlong.tubemq.example里提供了生产和消费的具体代码示例,这里我们通过一个实际的例子来介绍如何填参和调用对应接口。首先我们搭建一个带3个Master节点的TubeMQ集群,3个Master地址及端口分别为test_1.domain.com,test_2.domain.com,test_3.domain.com,端口均为8080,在该集群里我们建立了若干个Broker,并且针对Broker我们创建了3个topic:topic_1,topic_2,topic_3等Topic配置;然后我们启动对应的Broker等待Consumer和Producer的创建。
 
  
 
-#### **b) 创建Consumer:**
+### 2.2 创建Consumer:
 
 见包org.apache.inlong.tubemq.example.MessageConsumerExample类文件,Consumer是一个包含网络交互协调的客户端对象,需要做初始化并且长期驻留内存重复使用的模型,它不适合单次拉起消费的场景。如下图示,我们定义了MessageConsumerExample封装类,在该类中定义了进行网络交互的会话工厂MessageSessionFactory类,以及用来做Push消费的PushMessageConsumer类:
 
-- ###### **i.初始化MessageConsumerExample类:**
+#### 2.2.1 初始化MessageConsumerExample类:
 
 1. 首先构造一个ConsumerConfig类,填写初始化信息,包括本机IP V4地址,Master集群地址,消费组组名信息,这里Master地址信息传入值为:”test_1.domain.com:8080,test_2.domain.com:8080,test_3.domain.com:8080”;
 
@@ -116,7 +111,7 @@ public final class MessageConsumerExample {
 
 
 
-- **ii.订阅Topic:**
+#### 2.2.2 订阅Topic:
 
 我们没有采用指定Offset消费的模式进行订阅,也没有过滤需求,因而我们在如下代码里只做了Topic的指定,对应的过滤项集合我们传的是null值,同时,对于不同的Topic,我们可以传递不同的消息回调处理函数;我们这里订阅了3个topic,topic_1,topic_2,topic_3,每个topic分别调用subscribe函数进行对应参数设置:
 
@@ -133,8 +128,7 @@ public void subscribe(final Map<String, TreeSet<String>> topicTidsMap)
 ```
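 
 一个无过滤订阅的调用示意如下(consumer为已创建的PushMessageConsumer实例,DefaultMessageListener即下文消费示例中的回调实现,具体签名以所用SDK版本为准):
 
 ```java
 // 三个Topic均不带过滤项(过滤集合传null),并可分别传入不同回调
 consumer.subscribe("topic_1", null, new DefaultMessageListener("topic_1"));
 consumer.subscribe("topic_2", null, new DefaultMessageListener("topic_2"));
 consumer.subscribe("topic_3", null, new DefaultMessageListener("topic_3"));
 consumer.completeSubscribe(); // 订阅完成,开始接收服务器分配的分区并消费
 ```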
 
 
-
-- **iii.进行消费:**
+#### 2.2.3 进行消费:
 
 到此,对集群里对应topic的订阅就已完成,系统运行开始后,回调函数里数据将不断的通过回调函数推送到业务层进行处理:
 
@@ -165,11 +159,11 @@ public class DefaultMessageListener implements MessageListener {
 
 
 
-#### **c) 创建Producer:**
+### 2.3 创建Producer:
 
 现网环境中业务的数据都是通过代理层来做接收汇聚,包装了比较多的异常处理,大部分的业务都没有也不会接触到TubeSDK的Producer类,考虑到业务自行搭建集群使用TubeMQ的场景,这里提供对应的使用demo,见包org.apache.inlong.tubemq.example.MessageProducerExample类文件供参考,**需要注意**的是,业务除非使用数据平台的TubeMQ集群做MQ服务,否则仍要按照现网的接入流程使用代理层来进行数据生产:
 
-- **i. 初始化MessageProducerExample类:**
+#### 2.3.1 初始化MessageProducerExample类:
 
 和Consumer的初始化类似,也是构造了一个封装类,定义了一个会话工厂,以及一个Producer类,生产端的会话工厂初始化通过TubeClientConfig类进行,如之前所介绍的,ConsumerConfig类是TubeClientConfig类的子类,虽然传入参数不同,但会话工厂是通过TubeClientConfig类完成的初始化处理:
 
@@ -201,7 +195,7 @@ public final class MessageProducerExample {
 
 
 
-- **ii. 发布Topic:**
+#### 3.2 发布Topic:
 
 ```java
 public void publishTopics(List<String> topicList) throws TubeClientException {
@@ -211,7 +205,7 @@ public void publishTopics(List<String> topicList) throws TubeClientException {
 
 
 
-- **iii. 进行数据生产:**
+#### 2.3.3 进行数据生产:
 
 如下所示,则为具体的数据构造和发送逻辑,构造一个Message对象后调用sendMessage()函数发送即可,有同步接口和异步接口选择,依照业务要求选择不同接口;需要注意的是该业务根据不同消息调用message.putSystemHeader()函数设置消息的过滤属性和发送时间,便于系统进行消息过滤消费,以及指标统计用。完成这些,一条消息即被发送出去,如果返回结果为成功,则消息被成功的接纳并且进行消息处理,如果返回失败,则业务根据具体错误码及错误提示进行判断处理,相关错误详情见《TubeMQ错误信息介绍.xlsx》:
 
@@ -241,7 +235,7 @@ public void sendMessageAsync(int id, long currtime,
 
 
 
-- **iv. Producer不同类MAMessageProducerExample关注点:**
+#### 2.3.4 Producer不同类MAMessageProducerExample关注点:
 
 该类初始化与MessageProducerExample类不同,采用的是TubeMultiSessionFactory多会话工厂类进行的连接初始化,该demo提供了如何使用多会话工厂类的特性,可以用于通过多个物理连接提升系统吞吐量的场景(TubeMQ通过连接复用模式来减少物理连接资源的使用),恰当使用可以提升系统的生产性能。在Consumer侧也可以通过多会话工厂进行初始化,但考虑到消费是长时间过程处理,对连接资源的占用比较小,消费场景不推荐使用。
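 
 多会话工厂的构建方式可用如下示意说明(包名与方法签名以所用SDK版本为准,Master地址为占位值):
 
 ```java
 import org.apache.inlong.tubemq.client.config.TubeClientConfig;
 import org.apache.inlong.tubemq.client.factory.TubeMultiSessionFactory;
 import org.apache.inlong.tubemq.client.producer.MessageProducer;
 
 // 多连接会话工厂示意:每个工厂实例各持有一条物理连接,并行生产可提升吞吐量
 public final class MultiSessionSketch {
     public static void main(String[] args) throws Throwable {
         TubeClientConfig config = new TubeClientConfig("YOUR_MASTER_IP:8080");
         TubeMultiSessionFactory factory1 = new TubeMultiSessionFactory(config);
         TubeMultiSessionFactory factory2 = new TubeMultiSessionFactory(config);
         MessageProducer producer1 = factory1.createProducer(); // 走连接1
         MessageProducer producer2 = factory2.createProducer(); // 走连接2
         // ... 两个producer可在不同线程并行发送 ...
         factory1.shutdown();
         factory2.shutdown();
     }
 }
 ```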
 
diff --git a/docs/zh-cn/modules/tubemq/configure_introduction.md b/docs/zh-cn/modules/tubemq/configure_introduction.md
index aa08d31..1d40bbe 100644
--- a/docs/zh-cn/modules/tubemq/configure_introduction.md
+++ b/docs/zh-cn/modules/tubemq/configure_introduction.md
@@ -1,8 +1,8 @@
 ---
-title: 配置参数介绍 - Apache InLong TubeMQ模块
+配置参数介绍 - Apache InLong TubeMQ模块
 ---
 
-# TubeMQ服务端配置文件说明:
+## 1 TubeMQ服务端配置文件说明:
 
 TubeMQ服务端包括Master和Broker共2个模块,Master又包含供外部页面访问的Web前端模块(该部分存放在resources中),考虑到实际部署时2个模块常常部署在同1台机器中,TubeMQ将2个模块3个部分的内容打包在一起交付给运维使用;客户端则不包含服务端部分的lib包单独交付给业务使用。
 
@@ -17,9 +17,9 @@ Master除了后端系统配置文件外,还在resources里存放了Web前端
 ![](img/configure/conf_velocity_pos.png)
 
 
-## 配置项详情:
+## 2 配置项详情:
 
-### master.ini文件中关键配置内容说明:
+### 2.1 master.ini文件中关键配置内容说明:
 
 | 配置单元 | 配置项 | 是否必选 | 值类型 | 配置说明 |
 | --- | --- | --- | --- | --- |
@@ -92,14 +92,14 @@ Master除了后端系统配置文件外,还在resources里存放了Web前端
 | tlsTrustStorePath | 否 | String | TLS的TrustStore文件的绝对存储路径+TrustStore文件名,在启动TLS功能且启用双向认证时,该字段必填且不能为空 |
 | tlsTrustStorePassword | 否 | String | TLS的TrustStorePassword文件的绝对存储路径+TrustStorePassword文件名,在启动TLS功能且启用双向认证时,该字段必填且不能为空 |
 
-### Master的前台配置文件velocity.properties中关键配置内容说明:
+### 2.2 Master的前台配置文件velocity.properties中关键配置内容说明:
 
 | 配置单元 | 配置项 | 是否必选 | 值类型 | 配置说明 |
 | --- | --- | --- | --- | --- |
 |
  | file.resource.loader.path | 是 | String | Master的Web的模板绝对路径,该部分为实际部署Master时的工程绝对路径+/resources/templates,该配置要与实际部署相吻合,配置失败会导致Master前端页面访问失败。 |
 
-### broker.ini文件中关键配置内容说明:
+### 2.3 broker.ini文件中关键配置内容说明:
 
 | 配置单元 | 配置项 | 是否必选 | 值类型 | 配置说明 |
 | --- | --- | --- | --- | --- |
diff --git a/docs/zh-cn/modules/tubemq/console_introduction.md b/docs/zh-cn/modules/tubemq/console_introduction.md
index 4bd104b..7c0540a 100644
--- a/docs/zh-cn/modules/tubemq/console_introduction.md
+++ b/docs/zh-cn/modules/tubemq/console_introduction.md
@@ -1,21 +1,19 @@
 ---
-title: 管控台操作指引 - Apache InLong TubeMQ模块
+TubeMQ管控台操作指引 - Apache InLong TubeMQ模块
 ---
 
-# TubeMQ管控台操作指引
-
-## 管控台关系
+## 1 管控台关系
 
 ​        TubeMQ管控台是管理TubeMQ集群的简单运营工具,包括集群里的Master、Broker,以及Broker上部署的Topic元数据等与TubeMQ系统相关的运营数据及操作。需要说明的是,当前提供的TubeMQ前台所提供的功能没有涵盖TubeMQ所提供的功能范围,大家可以参照《TubeMQ HTTP访问接口定义.xls》定义自行实现符合业务需要的管控前台。TubeMQ管控台的访问地址为http://portal:webport/config/topic_list.htm:
 ![](img/console/1568169770714.png)
 ​       其中portal为该集群中任意的主、备Master的IP地址,webport为配置的Master的Web端口。
 
 
-## TubeMQ管控台各版面介绍
+## 2 TubeMQ管控台各版面介绍
 
 ​        管控台一共3项内容:分发查询,配置管理,集群管理;配置管理又分为Broker列表,Topic列表2个部分,我们先介绍简单的分发查询和集群管理,然后再介绍复杂的配置管理。
 
-### 分发查询
+### 2.1 分发查询
 
 ​        点分发查询,我们会看到如下的列表信息,这是当前TubeMQ集群里已注册的消费组信息,包括具体的消费组组名,消费的Topic,以及该组总的消费分区数简介信息,如下图示:
 ![](img/console/1568169796122.png)
@@ -24,12 +22,12 @@ title: 管控台操作指引 - Apache InLong TubeMQ模块
 
 ​       这个页面可以供我们查询,输入Topic或者消费组名,就可以很快确认系统里有哪些消费组在消费Topic,以及每个消费组的消费目标是怎样这些信息。
 
-### 集群管理
+### 2.2 集群管理
 
 ​        集群管理主要管理Master的HA,在这个页面上我们可以看到当前Master的各个节点及节点状态,同时,我们可以通过“切换”操作来改变节点的主备状态。
 ![](img/console/1568169823675.png)
 
-### 配置管理
+### 2.3 配置管理
 
 ​        配置管理版面既包含了Broker、Topic元数据的管理,还包含了Broker和Topic的上线发布以及下线操作,有2层含义,比如Broker列表里,展示的是当前集群里已配置的Broker元数据,包括未上线处于草稿状态、已上线、已下线的Broker记录信息:
 ![](img/console/1568169839931.png)
@@ -41,7 +39,7 @@ title: 管控台操作指引 - Apache InLong TubeMQ模块
 
 ​        所有TubeMQ管控台的变更操作,或者改变操作,都会要求输入操作授权码,该信息由运维通过Master的配置文件master.ini的confModAuthToken字段进行定义:如果你知道这个集群的密码,你就可以进行该项操作,比如你是管理员,你是授权人员,或者你能登陆这个master的机器拿到这个密码,都认为你是有权操作该项功能。
 
-## TubeMQ管控台上涉及的操作及注意事项
+### 2.4 TubeMQ管控台上涉及的操作及注意事项
 
 ​       如上所说,TubeMQ管控台用于运营TubeMQ集群,负责Master、Broker这类TubeMQ集群节点的管理,包括自动部署和安装等,因此,如下几点需要注意:
 
@@ -68,9 +66,9 @@ title: 管控台操作指引 - Apache InLong TubeMQ模块
 
 ​       这个时候我们就可以针对该Topic进行生产和消费处理。
 
-## 3.对于Topic的元数据进行变更后的操作注意事项:
+## 3 对于Topic的元数据进行变更后的操作注意事项:
 
-**a.如何自行配置Topic参数:**
+### 3.1 如何自行配置Topic参数:
 
 ​       大家点击Topic列表里任意Topic后,会弹出如下框,里面是该Topic的相关元数据信息,其决定了这个Topic在该Broker上,设置了多少个分区,当前读写状态,数据刷盘频率,数据老化周期和时间等信息:
 ![](img/console/1568169925657.png)
@@ -104,13 +102,13 @@ title: 管控台操作指引 - Apache InLong TubeMQ模块
 **特别提醒:大家还需要注意的是,输入授权码修改后,数据变更要刷新后才会生效,同时生效的Broker要按比例进行操作。**
 ![](img/console/1568169954746.png)
 
-**b.Topic变更注意事项:**
+### 3.2 Topic变更注意事项:
 
 ​       如上图示,选择变更Topic元数据后,之前选中的Broker集合会在**配置是否已变更**上出现是的提示。我们还需要对变更进行重载刷新操作,选择Broker集合,然后选择刷新操作,可以批量也可以单条,但是一定要注意的是:操作要分批进行,上一批操作的Broker当前运行状态为running后才能进入下一批的配置刷新操作;如果有节点处于online状态,但长期不进入running状态(缺省最大2分钟),则需要停止刷新,排查问题原因后再继续操作。
 
 ​       进行分批操作原因是,我们系统在变更时,会对指定的Broker做停读停写操作,如果将全量的Broker统一做重载,很明显,集群整体会出现服务不可读或者不可写的情况,从而接入出现不该有的异常。
 
-**c.对于Topic的删除处理:**
+### 3.3 对于Topic的删除处理:
 
 ​       页面上进行的删除是软删除处理,如果要彻底删除该topic需要通过API接口进行硬删除操作处理才能实现(避免业务误操作)。
 
diff --git a/docs/zh-cn/modules/tubemq/consumer_example.md b/docs/zh-cn/modules/tubemq/consumer_example.md
index b16b087..575118e 100644
--- a/docs/zh-cn/modules/tubemq/consumer_example.md
+++ b/docs/zh-cn/modules/tubemq/consumer_example.md
@@ -1,12 +1,12 @@
 ---
-title: 消费者示例 - Apache InLong TubeMQ模块
+消费者示例 - Apache InLong TubeMQ模块
 ---
 
-## Consumer 示例
+## 1 Consumer 示例
   TubeMQ 提供了两种方式来消费消息: PullConsumer 和 PushConsumer。
 
 
-### PullConsumer 
+### 1.1 PullConsumer 
    ```java
     public class PullConsumerExample {
 
@@ -39,7 +39,7 @@ title: 消费者示例 - Apache InLong TubeMQ模块
     }
    ``` 
    
-### PushConsumer
+### 1.2 PushConsumer
    ```java
    public class PushConsumerExample {
    
@@ -76,3 +76,7 @@ title: 消费者示例 - Apache InLong TubeMQ模块
         }
     }
     ```
+
+---
+
+<a href="#top">Back to top</a>
diff --git a/docs/zh-cn/modules/tubemq/deployment.md b/docs/zh-cn/modules/tubemq/deployment.md
index ab381c5..9ac8211 100644
--- a/docs/zh-cn/modules/tubemq/deployment.md
+++ b/docs/zh-cn/modules/tubemq/deployment.md
@@ -1,10 +1,8 @@
 ---
-title: 部署指引 - Apache InLong TubeMQ模块
+TubeMQ编译、部署及简单使用 - Apache InLong TubeMQ模块
 ---
 
-# TubeMQ编译、部署及简单使用:
-
-## 工程编译打包:
+## 1 工程编译打包:
 
 进入工程根目录,执行命令:
 
@@ -18,7 +16,7 @@ mvn clean package -Dmaven.test.skip
 
 大家也可以进入各个子目录进行单独编译,编译过程与普通的工程编译处理过程一致。
 
-**部署服务端:**
+## 2 部署服务端:
 如上例子,进入..\InLong\inlong-tubemq\tubemq-server\target目录,服务侧的相关内容如下,其中apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz为完整的服务端安装包,里面包括执行脚本,配置文件,依赖包,以及前端的源码;apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT.jar为服务端处理逻辑包,包含于完整工程安装包的lib里,单独提出是考虑到日常变更升级时改动点多在服务器处理逻辑上,升级的时候只需要单独替换该jar包即可:
 
 ![](img/sysdeployment/sys_package.png)
@@ -28,7 +26,7 @@ mvn clean package -Dmaven.test.skip
 ![](img/sysdeployment/sys_package_list.png)
 
 
-**配置系统:**
+## 3 配置系统:
 
 服务包里打包了3种角色:Master、Broker、Tools,业务使用时可以将Master和Broker放置在一起,也可以单独分开不同机器放置,依照业务对机器的规划进行处理。我们通过如下3台机器搭建一个完整的有2台Master的生产、消费环境:
 
@@ -59,7 +57,8 @@ mvn clean package -Dmaven.test.skip
 
 要注意的是右上角的配置为Master的Web前台配置信息,需要根据Master的安装路径修改/resources/velocity.properties里的file.resource.loader.path信息。
 
-**启动Master**:
+## 4 运行节点
+### 4.1 启动Master:
 
 完成如上配置设置后,首先进入主备Master所在的TubeMQ环境的bin目录,进行服务启动操作:
 
@@ -73,7 +72,7 @@ mvn clean package -Dmaven.test.skip
 
 ![](img/sysdeployment/sys_master_console.png)
 
-**启动Broker**:
+### 4.2 启动Broker:
 
 启动Broker和启动master有些差别:Master负责管理整个TubeMQ集群,包括Broker节点运行管理以及节点上部署的Topic配置管理,还有生产和消费管理等,因此,实体的Broker启动前,首先要在Master上配置Broker元数据,增加Broker相关的管理信息,如下图示:
 
@@ -110,8 +109,8 @@ Master上所有的变更操作在点击确认的时候,都会弹出如上输
 
 ![](img/sysdeployment/sys_broker_finished.png)
 
-
-**配置及生效Topic**:
+## 5 数据生产和消费
+### 5.1 配置及生效Topic:
 
 配置Topic和配置Broker信息类似,都需要先在Master上新增元数据信息,然后才能开始使用,否则生产和消费时会报topic不存在的错误,如我们用安装包里的example对不存在的Topic名test进行生产:
 ![](img/sysdeployment/test_sendmessage.png)
@@ -137,7 +136,7 @@ Demo实例会报如下错误信息:
 
 **大家需要注意的是:** 我们在重载的时候,要对待重载的Broker集合分批次进行。我们的重载通过状态机进行控制,会依次经过不可读写 → 只读 → 可读写 → 上线运行各个子状态;如果所有待重启Broker全量重载,会使得已在线对外服务的Topic出现短暂的不可读写状况,导致生产、消费,特别是生产发送失败。
 
-**数据生产和消费**:
+### 5.2 数据生产和消费:
 
 在安装包里,我们打包了example的测试Demo,业务也可以直接使用tubemq-client-0.9.0-incubating-SNAPSHOT.jar封装自己的生产和消费逻辑,总的形式是类似,我们先执行生产者的Demo,我们可以看到Broker上已开始有数据接收:
 ![](img/sysdeployment/test_sendmessage_2.png)
@@ -152,4 +151,7 @@ Demo实例会报如下错误信息:
 
 ![](img/sysdeployment/sys_node_log.png)
 
-在这里,已经完成了TubeMQ的编译,部署,系统配置,启动,生产和消费。如果需要了解更深入的内容,就需要查看《TubeMQ HTTP API》里的相关内容,进行相应的配置设置。
\ No newline at end of file
+在这里,已经完成了TubeMQ的编译,部署,系统配置,启动,生产和消费。如果需要了解更深入的内容,就需要查看《TubeMQ HTTP API》里的相关内容,进行相应的配置设置。
+
+---
+<a href="#top">Back to top</a>
diff --git a/docs/zh-cn/modules/tubemq/error_code.md b/docs/zh-cn/modules/tubemq/error_code.md
index 136a2e2..328160e 100644
--- a/docs/zh-cn/modules/tubemq/error_code.md
+++ b/docs/zh-cn/modules/tubemq/error_code.md
@@ -1,12 +1,12 @@
 ---
-title: 错误码 - Apache InLong TubeMQ模块
+错误码 - Apache InLong TubeMQ模块
 ---
 
-# TubeMQ错误信息介绍
+## 1 TubeMQ错误信息介绍
 
 ​        TubeMQ采用的是 错误码(errCode) + 错误详情(errMsg) 相结合的方式返回具体的操作结果。首先根据错误码确定是哪类问题,然后根据错误详情来确定具体的错误原因。表格汇总了所有的错误码以及运行中大家可能遇到的错误详情的相关对照。
 
-## 错误码
+## 2 错误码
 
 | 错误类别     | 错误码                            | 错误标记                                                     | 含义                                                         | 备注                                           |
 | ------------ | --------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------------------------------------------- |
@@ -33,7 +33,7 @@ title: 错误码 - Apache InLong TubeMQ模块
 | 服务器侧异常| 503          | SERVICE_UNAVILABLE                | 业务临时禁读或者禁写                                         | 继续重试处理,如果持续的出现该类错误,需要联系管理员处理     |                                                |
 | 服务器侧异常| 510          | INTERNAL_SERVER_ERROR_MSGSET_NULL | 读取不到消息集合                                             | 继续重试处理,如果持续的出现该类错误,需要联系管理员处理     |                                                |
 
-## 常见错误信息
+## 3 常见错误信息
 
 | 记录号 | 错误信息                                                     | 含义                                                         | 备注                                                         |
 | ------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
diff --git a/docs/zh-cn/modules/tubemq/http_access_api.md b/docs/zh-cn/modules/tubemq/http_access_api.md
index fcc7247..d420ad3 100644
--- a/docs/zh-cn/modules/tubemq/http_access_api.md
+++ b/docs/zh-cn/modules/tubemq/http_access_api.md
@@ -1,8 +1,7 @@
 ---
-title: HTTP API介绍 - Apache InLong TubeMQ模块
+HTTP API介绍 - Apache InLong TubeMQ模块
 ---
 
-# HTTP API定义
 HTTP API是Master或者Broker对外功能暴露的接口,管控台的各项操作都是基于这些API进行;如果有最新的功能,或者管控台没有涵盖的功能,业务都可以直接通过调用HTTP API接口完成。
 
 该部分接口一共有4个部分:
diff --git a/docs/zh-cn/modules/tubemq/producer_example.md b/docs/zh-cn/modules/tubemq/producer_example.md
index 639c7c5..e3c381a 100644
--- a/docs/zh-cn/modules/tubemq/producer_example.md
+++ b/docs/zh-cn/modules/tubemq/producer_example.md
@@ -1,14 +1,14 @@
 ---
-title: 生产者示例 - Apache InLong TubeMQ模块
+生产者示例 - Apache InLong TubeMQ模块
 ---
 
-## Producer 示例
+## 1 Producer 示例
 TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactory 和 TubeMultiSessionFactory。
   - TubeSingleSessionFactory 在整个生命周期只会创建一个 session
   - TubeMultiSessionFactory 每次调用都会创建一个session
 
-### TubeSingleSessionFactory
-   #### Send Message Synchronously
+### 1.1 TubeSingleSessionFactory
+   #### 1.1.1 Send Message Synchronously
      ```java
      public final class SyncProducerExample {
     
@@ -31,7 +31,7 @@ TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactor
     }
     ```
      
-   #### Send Message Asynchronously
+   #### 1.1.2 Send Message Asynchronously
      ```java
      public final class AsyncProducerExample {
      
@@ -65,7 +65,7 @@ TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactor
     }
     ```
      
-   #### Send Message With Attributes
+   #### 1.1.3 Send Message With Attributes
      ```java
      public final class ProducerWithAttributeExample {
      
@@ -91,7 +91,7 @@ TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactor
     }
     ```
      
-### TubeMultiSessionFactory
+### 1.2 TubeMultiSessionFactory
 
     ```java
     public class MultiSessionProducerExample {
@@ -146,3 +146,5 @@ TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactor
         }
     }
     ```
+---
+<a href="#top">Back to top</a>    
diff --git a/docs/zh-cn/modules/tubemq/quick_start.md b/docs/zh-cn/modules/tubemq/quick_start.md
index 01404ad..40eb4d3 100644
--- a/docs/zh-cn/modules/tubemq/quick_start.md
+++ b/docs/zh-cn/modules/tubemq/quick_start.md
@@ -1,14 +1,14 @@
 ---
-title: 快速开始 - Apache InLong TubeMQ模块
+快速开始 - Apache InLong TubeMQ模块
 ---
 
-## 编译和构建
+## 1 编译和构建
 
-### 准备工作
+### 1.1 准备工作
 - Java JDK 1.8
 - Maven 3.3+
 
-### 从源码包构建
+### 1.2 从源码包构建
 - 编译和打包:
 ```bash
 mvn clean package -DskipTests
@@ -29,7 +29,7 @@ mvn test
 构建完成之后,在 `tubemq-server/target` 目录下会有 **apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz** 文件。
 这是 TubeMQ 的部署包,包含了脚本、配置文件、依赖以及 web GUI相关的内容。
 
-### 配置IDE开发环境
+### 1.3 配置IDE开发环境
 在IDE中构建和调试源码,需要先运行以下命令:
 ```bash
 mvn compile
@@ -44,9 +44,9 @@ mvn compile
 </configuration>
 ```
 
-## 部署运行
+## 2 部署运行
 
-### 配置示例
+### 2.1 配置示例
 TubeMQ 集群包含两个组件:**Master** 和 **Broker**。Master 和 Broker 可以部署在相同或者不同的节点上,依照业务对机器的规划进行处理。我们通过如下3台机器搭建有2台Master的生产、消费的集群进行配置示例:
 | 所属角色 | TCP端口 | TLS端口 | WEB端口 | 备注 |
 | --- | --- | --- | --- | --- |
@@ -54,7 +54,7 @@ TubeMQ 集群包含有两个组件: **Master** 和 **Broker**. Master 和 Broker
 | Broker | 8123 | 8124 | 8081 | 消息储存在`/stage/msg_data` |
 | ZooKeeper | 2181 | | | Offset储存在根目录`/tubemq` |
 
-### 准备工作
+### 2.2 准备工作
 - ZooKeeper集群
 - [apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz](download/download.md)安装包
 
@@ -68,7 +68,7 @@ TubeMQ 集群包含有两个组件: **Master** 和 **Broker**. Master 和 Broker
 └── resources
 ```
 
-### 配置Master
+### 2.3 配置Master
 编辑`conf/master.ini`,根据集群信息变更以下配置项
 
 - Master IP和端口
@@ -111,7 +111,7 @@ repHelperHost=FIRST_MASTER_NODE_IP:9001  // helperHost用于创建master集群
 **注意**:需保证Master所有节点之间的时钟同步
 
 
-### 配置Broker
+### 2.4 配置Broker
 编辑`conf/broker.ini`,根据集群信息变更以下配置项
 - Broker IP和端口
 ```ini
@@ -139,7 +139,7 @@ zkNodeRoot=/tubemq
 zkServerAddr=localhost:2181             // 指向zookeeper集群,多个地址逗号分开
 ```
 
-### 启动Master
+### 2.5 启动Master
 进入Master节点的 `bin` 目录下,启动服务:
 ```bash
 ./tubemq.sh master start
@@ -148,7 +148,7 @@ zkServerAddr=localhost:2181             // 指向zookeeper集群,多个地址
 ![TubeMQ Console GUI](img/tubemq-console-gui.png)
 
 
-#### 配置Broker元数据
+#### 2.5.1 配置Broker元数据
 Broker启动前,首先要在Master上配置Broker元数据,增加Broker相关的管理信息。在`Broker List` 页面,  `Add Single Broker`,然后填写相关信息:
 
 ![Add Broker 1](img/tubemq-add-broker-1.png)
@@ -160,7 +160,7 @@ Broker启动前,首先要在Master上配置Broker元数据,增加Broker相
 然后上线Broker:
 ![Add Broker 2](img/tubemq-add-broker-2.png)
 
-### 启动Broker
+### 2.6 启动Broker
 进入broker节点的 `bin` 目录下,执行以下命令启动Broker服务:
 
 ```bash
@@ -170,8 +170,8 @@ Broker启动前,首先要在Master上配置Broker元数据,增加Broker相
 刷新页面可以看到 Broker 已经注册,当 `当前运行子状态` 为 `idle` 时, 可以增加topic:
 ![Add Broker 3](img/tubemq-add-broker-3.png)
 
-## 快速使用
-### 新增 Topic
+## 3 快速使用
+### 3.1 新增 Topic
 
 可以通过 web GUI 添加 Topic, 在 `Topic列表`页面添加,需要填写相关信息,比如增加`demo` topic:
 ![Add Topic 1](img/tubemq-add-topic-1.png)
@@ -192,10 +192,10 @@ Broker启动前,首先要在Master上配置Broker元数据,增加Broker相
 ![Add Topic 4](img/tubemq-add-topic-4.png)
 
 
-### 运行Example
+### 3.2 运行Example
 可以通过上面创建的`demo` topic来测试集群。
 
-- 生产消息
+#### 3.2.1 生产消息
 将 `YOUR_MASTER_IP:port` 替换为实际的IP和端口,然后运行producer:
 ```bash
 cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
@@ -205,7 +205,7 @@ cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 如果能观察到如下日志,则表示数据发送成功:
 ![Demo 1](img/tubemq-send-message.png)
 
-- 消费消息
+#### 3.2.2 消费消息
 将 `YOUR_MASTER_IP:port` 替换为实际的IP和端口,然后运行Consumer:
 ```bash
 cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
@@ -217,9 +217,11 @@ cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 ![Demo 2](img/tubemq-consume-message.png)
 
 
-## 结束
+## 4 结束
 在这里,已经完成了TubeMQ的编译,部署,系统配置,启动,生产和消费。如果需要了解更深入的内容,请查看《TubeMQ HTTP API》里的相关内容,进行相应的配置设置。
 
 ---
+<a href="#top">Back to top</a>
+
 
 
diff --git a/docs/zh-cn/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md b/docs/zh-cn/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
index 45916f6..321a82d 100644
--- a/docs/zh-cn/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
+++ b/docs/zh-cn/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
@@ -1,14 +1,14 @@
 # TubeMQ VS Kafka性能对比测试总结
 
-## 背景
+## 1 背景
 TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思想源于[Apache Kafka](http://kafka.apache.org/)。在实现上,则完全采取自适应的方式,结合实战做了很多优化及研发工作,如分区管理、分配机制和全新节点通讯流程,自主开发高性能的底层RPC通讯模块等。
 这些实现使得TubeMQ在保证实时性和一致性的前提下,具有很好的健壮性及更高的吞吐能力。结合目前主流消息中间件使用情况,以Kafka为参照做性能对比测试,对比常规应用场景下两套系统性能。
 
-## 测试场景方案
+## 2 测试场景方案
 如下是我们根据实际应用场景设计的测试方案:
 ![](img/perf_scheme.png)
 
-## 测试结论
+## 3 测试结论
 用"复仇者联盟"里的角色来形容:
 
 角色|测试场景|要点
@@ -24,8 +24,8 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 3. 在过滤消费时,TubeMQ可以极大地降低服务端的网络出流量,同时还会因过滤消费消耗的资源少于全量消费,反过来促进TubeMQ吞吐量提升;kafka无服务端过滤,出流量与全量消费一致,流量无明显的节约;
 4. 资源消耗方面各有差异:TubeMQ由于采用顺序写随机读,CPU消耗很大,Kafka采用顺序写块读,CPU消耗很小,但其他资源,如文件句柄、网络连接等消耗非常的大。在实际的SAAS模式下的运营环境里,Kafka会因为zookeeper依赖出现系统瓶颈,会因生产、消费、Broker众多,受限制的地方会更多,比如文件句柄、网络连接数等,资源消耗会更大;
 
-## 测试环境及配置
-###【软件版本及部署环境】
+## 4 测试环境及配置
+### 4.1 【软件版本及部署环境】
 
 **角色**|**TubeMQ**|**Kafka**
 :---:|---|---
@@ -36,7 +36,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 **Producer**|1台M10 + 1台CG1|1台M10 + 1台CG1
 **Consumer**|6台TS50万兆机|6台TS50万兆机
 
-###【Broker硬件机型配置】
+### 4.2 【Broker硬件机型配置】
 
 **机型**|配置|**备注**
 :---:|---|---
@@ -44,7 +44,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 **BX1-10G**|SA5212M5(6133\*2/16G\*16/4T\*12/10GE\*2) Pcs|                                     
 **CG1-10G**|CG1-10G\_6.0.2.12\_RM760-FX(6133\*2/16G\*16/5200-480G\*6 RAID/10GE\*2)-ODM Pcs |  
 
-###【Broker系统配置】
+### 4.3 【Broker系统配置】
 
 | **配置项**            | **TubeMQ Broker**     | **Kafka Broker**      |
 |:---:|---|---|
@@ -53,25 +53,25 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 | **配置文件**          | 在tubemq-3.8.0版本broker.ini配置文件上改动: consumerRegTimeoutMs=35000<br>tcpWriteServiceThread=50<br>tcpReadServiceThread=50<br>primaryPath为SATA盘日志目录|kafka_2.11-0.10.2.0版本server.properties配置文件上改动:<br>log.flush.interval.messages=5000<br>log.flush.interval.ms=10000<br>log.dirs为SATA盘日志目录<br>socket.send.buffer.bytes=1024000<br>socket.receive.buffer.bytes=1024000<br>socket.request.max.bytes=2147483600<br>log.segment.bytes=1073741824<br>num.network.threads=25<br>num.io.threads=48< [...]
 | **其它**             | 除测试用例里特别指定,每个topic创建时设置:<br>memCacheMsgSizeInMB=5<br>memCacheFlushIntvl=20000<br>memCacheMsgCntInK=10 <br>unflushThreshold=5000<br>unflushInterval=10000<br>unFlushDataHold=5000 | 客户端代码里设置:<br>生产端:<br>props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br>props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br>props.put("linger.ms", "200");<br>props.put("block.on.buffer.full", false);<br>props.pu [...]
               
-## 测试场景及结论
+## 5 测试场景及结论
 
-### 场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能
+### 5.1 场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能
  ![](img/perf_scenario_1.png)
 
-####【结论】
+#### 5.1.1 【结论】
 
 在单topic不同分区的情况下:
 1. TubeMQ吞吐量不随分区变化而变化,同时TubeMQ属于顺序写随机读模式,单实例情况下吞吐量要低于Kafka,CPU要高于Kafka;
 2. Kafka随着分区增多吞吐量略有下降,CPU使用率很低;
 3. TubeMQ分区由于是逻辑分区,增加分区不影响吞吐量;Kafka分区为物理文件的增加,但增加分区入出流量反而会下降;
 
-####【指标】
+#### 5.1.2 【指标】
  ![](img/perf_scenario_1_index.png)
 
-### 场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况
+### 5.2 场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况
  ![](img/perf_scenario_2.png)
 
-####【结论】
+#### 5.2.1 【结论】
 
 从场景一和场景二的测试数据结合来看:
 
@@ -81,7 +81,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 4. TubeMQ按照与Kafka等同的方式增加实例(物理文件)后,吞吐量随之提升,在4个实例的时候测试效果达到并超过Kafka
     5个分区的状态;TubeMQ可以根据业务或者系统配置需要,调整数据读取方式,可以动态提升系统的吞吐量;Kafka随着分区增加,入流量有下降;
 
-####【指标】
+#### 5.2.2 【指标】
 
 **注1 :** 如下场景中,均为单Topic测试下不同分区或实例、不同读取模式场景下的测试,单条消息包长均为1K;
 
@@ -89,10 +89,10 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 读取模式通过admin\_upd\_def\_flow\_control\_rule设置qryPriorityId为对应值.
  ![](img/perf_scenario_2_index.png)
 
-### 场景三:多topic场景,固定消息包大小、实例及分区数,考察100、200、500、1000个topic场景下TubeMQ和Kafka性能情况
+### 5.3 场景三:多topic场景,固定消息包大小、实例及分区数,考察100、200、500、1000个topic场景下TubeMQ和Kafka性能情况
  ![](img/perf_scenario_3.png)
 
-####【结论】
+#### 5.3.1 【结论】
 
 按照多Topic场景下测试:
 
@@ -103,25 +103,25 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
     Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题;
 4.  数据对比来看,TubeMQ相比Kafka运行更稳定,吞吐量以稳定形势呈现,长时间跑吞吐量不下降,资源占用少,但CPU的占用需要后续版本解决;
 
-####【指标】
+#### 5.3.2 【指标】
 
 **注:** 如下场景中,包长均为1K,分区数均为10。
  ![](img/perf_scenario_3_index.png)
 
-### 场景四:100个topic,一入一全量出五份部分过滤出:一份全量Topic的Pull消费;过滤消费采用5个不同的消费组,从同样的20个Topic中过滤出10%消息内容
+### 5.4 场景四:100个topic,一入一全量出五份部分过滤出:一份全量Topic的Pull消费;过滤消费采用5个不同的消费组,从同样的20个Topic中过滤出10%消息内容
 
-####【结论】
+#### 5.4.1 【结论】
 
 1.  TubeMQ采用服务端过滤的模式,出流量指标与入流量存在明显差异;
 2.  TubeMQ服务端过滤提供了更多的资源给到生产,生产性能比非过滤情况有提升;
 3.  Kafka采用客户端过滤模式,入流量没有提升,出流量差不多是入流量的2倍,同时入出流量不稳定;
 
-####【指标】
+#### 5.4.2 【指标】
 
 **注:** 如下场景中,topic为100,包长均为1K,分区数均为10
  ![](img/perf_scenario_4_index.png)
 
-### 场景五:TubeMQ、Kafka数据消费时延比对
+### 5.5 场景五:TubeMQ、Kafka数据消费时延比对
 
 | 类型   | 时延            | Ping时延                |
 |---|---|---|
@@ -130,35 +130,35 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 
 备注:TubeMQ的消费端存在一个等待队列处理消息追平生产时的数据未找到的情况,缺省有200ms的等待时延。测试该项时,TubeMQ消费端要调整拉取时延(ConsumerConfig.setMsgNotFoundWaitPeriodMs())为10ms,或者设置频控策略为10ms。
 
-### 场景六:调整Topic配置的内存缓存大小(memCacheMsgSizeInMB)对吞吐量的影响
+### 5.6 场景六:调整Topic配置的内存缓存大小(memCacheMsgSizeInMB)对吞吐量的影响
 
-####【结论】
+#### 5.6.1 【结论】
 
 1.  TubeMQ调整Topic的内存缓存大小能对吞吐量形成正面影响,实际使用时可以根据机器情况合理调整;
 2.  从实际使用情况看,内存大小设置并不是越大越好,需要合理设置该值;
 
-####【指标】
+#### 5.6.2 【指标】
 
  **注:** 如下场景中,消费方式均为读取内存(301)的PULL消费,单条消息包长均为1K
  ![](img/perf_scenario_6_index.png)
  
 
-### 场景七:消费严重滞后情况下两系统的表现
+### 5.7 场景七:消费严重滞后情况下两系统的表现
 
-####【结论】
+#### 5.7.1 【结论】
 
 1.  消费严重滞后情况下,TubeMQ和Kafka都会因磁盘IO飙升使得生产消费受阻;
 2.  在带SSD系统里,TubeMQ可以通过SSD转存储消费来换取部分生产和消费入流量;
 3.  按照版本计划,目前TubeMQ的SSD消费转存储特性不是最终实现,后续版本中将进一步改进,使其达到最合适的运行方式;
 
-####【指标】
+#### 5.7.2 【指标】
  ![](img/perf_scenario_7.png)
 
 
-### 场景八:评估多机型情况下两系统的表现
+### 5.8 场景八:评估多机型情况下两系统的表现
  ![](img/perf_scenario_8.png)
       
-####【结论】
+#### 5.8.1 【结论】
 
 1.  TubeMQ在BX1机型下较TS60机型有更高的吞吐量,同时因IO util达到瓶颈无法再提升,吞吐量在CG1机型下又较BX1达到更高的指标值;
 2.  Kafka在BX1机型下系统吞吐量不稳定,且较TS60下测试的要低,在CG1机型下系统吞吐量达到最高,万兆网卡跑满;
@@ -166,29 +166,30 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 4.  在SSD盘存储条件下,Kafka性能指标达到最好,TubeMQ指标不及Kafka;
 5.  CG1机型数据存储盘较小(仅2.2T),RAID 10配置下90分钟以内磁盘即被写满,无法测试两系统长时间运行情况。
 
-####【指标】
+#### 5.8.2 【指标】
 
 **注1:** 如下场景Topic数均配置500个topic,10个分区,消息包大小为1K字节;
 
 **注2:** TubeMQ采用的是301内存读取模式消费;
  ![](img/perf_scenario_8_index.png)
 
-## 附录1 不同机型下资源占用情况图:
-###【BX1机型测试】
+## 6 附录
+### 6.1 附录1 不同机型下资源占用情况图:
+#### 6.1.1 【BX1机型测试】
 ![](img/perf_appendix_1_bx1_1.png)
 ![](img/perf_appendix_1_bx1_2.png)
 ![](img/perf_appendix_1_bx1_3.png)
 ![](img/perf_appendix_1_bx1_4.png)
 
-###【CG1机型测试】
+#### 6.1.2 【CG1机型测试】
 ![](img/perf_appendix_1_cg1_1.png)
 ![](img/perf_appendix_1_cg1_2.png)
 ![](img/perf_appendix_1_cg1_3.png)
 ![](img/perf_appendix_1_cg1_4.png)
 
-## 附录2 多Topic测试时的资源占用情况图:
+### 6.2 附录2 多Topic测试时的资源占用情况图:
 
-###【100个topic】
+#### 6.2.1 【100个topic】
 ![](img/perf_appendix_2_topic_100_1.png)
 ![](img/perf_appendix_2_topic_100_2.png)
 ![](img/perf_appendix_2_topic_100_3.png)
@@ -199,7 +200,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_100_8.png)
 ![](img/perf_appendix_2_topic_100_9.png)
  
-###【200个topic】
+#### 6.2.2 【200个topic】
 ![](img/perf_appendix_2_topic_200_1.png)
 ![](img/perf_appendix_2_topic_200_2.png)
 ![](img/perf_appendix_2_topic_200_3.png)
@@ -210,7 +211,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_200_8.png)
 ![](img/perf_appendix_2_topic_200_9.png)
 
-###【500个topic】
+#### 6.2.3 【500个topic】
 ![](img/perf_appendix_2_topic_500_1.png)
 ![](img/perf_appendix_2_topic_500_2.png)
 ![](img/perf_appendix_2_topic_500_3.png)
@@ -221,7 +222,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_500_8.png)
 ![](img/perf_appendix_2_topic_500_9.png)
 
-###【1000个topic】
+#### 6.2.4 【1000个topic】
 ![](img/perf_appendix_2_topic_1000_1.png)
 ![](img/perf_appendix_2_topic_1000_2.png)
 ![](img/perf_appendix_2_topic_1000_3.png)
diff --git a/en-us/docs/modules/tubemq/architecture.html b/en-us/docs/modules/tubemq/architecture.html
index a8444bd..ca61167 100644
--- a/en-us/docs/modules/tubemq/architecture.html
+++ b/en-us/docs/modules/tubemq/architecture.html
@@ -12,7 +12,7 @@
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>After years of evolution, the TubeMQ cluster is divided into the following 5 parts: 
 <img src="img/sys_structure.png" alt=""></p>
 <ul>
@@ -32,7 +32,7 @@
 <p><strong>Zookeeper:</strong> Responsible for the Zookeeper part of the offset storage. This part of the function has been weakened to only the persistent storage of the offset. Considering the next multi-node copy function, this module is temporarily reserved;</p>
 </li>
 </ul>
-<h2>Broker File Storage Scheme Improvement:</h2>
+<h2>2. Broker File Storage Scheme Improvement:</h2>
 <p>Systems that use disks as data persistence media are faced with various system performance problems caused by disk problems. The TubeMQ system is no exception, the performance improvement is largely to solve the problem of how to read, write and store message data. In this regard TubeMQ has made many improvements: storage instances is as the smallest Topic data management unit; each storage instance includes a file storage block and a memory cache block; each Topic can be assigned mul [...]
 <ol>
 <li>
diff --git a/en-us/docs/modules/tubemq/architecture.json b/en-us/docs/modules/tubemq/architecture.json
index 9ffd18a..74e1a20 100644
--- a/en-us/docs/modules/tubemq/architecture.json
+++ b/en-us/docs/modules/tubemq/architecture.json
@@ -1,6 +1,6 @@
 {
   "filename": "architecture.md",
-  "__html": "<h2>TubeMQ Architecture:</h2>\n<p>After years of evolution, the TubeMQ cluster is divided into the following 5 parts: \n<img src=\"img/sys_structure.png\" alt=\"\"></p>\n<ul>\n<li>\n<p><strong>Portal:</strong> The Portal part responsible for external interaction and maintenance operations, including API and Web. The API connects to the management system outside the cluster. The Web is a page encapsulation of daily operation and maintenance functions based on the API;</p>\n</ [...]
+  "__html": "<h2>1. TubeMQ Architecture:</h2>\n<p>After years of evolution, the TubeMQ cluster is divided into the following 5 parts: \n<img src=\"img/sys_structure.png\" alt=\"\"></p>\n<ul>\n<li>\n<p><strong>Portal:</strong> The Portal part responsible for external interaction and maintenance operations, including API and Web. The API connects to the management system outside the cluster. The Web is a page encapsulation of daily operation and maintenance functions based on the API;</p>\ [...]
   "link": "/en-us/docs/modules/tubemq/architecture.html",
   "meta": {
     "title": "Architecture - Apache InLong's TubeMQ module"
diff --git a/en-us/docs/modules/tubemq/architecture.md b/en-us/docs/modules/tubemq/architecture.md
index 5bdfb4a..133536d 100644
--- a/en-us/docs/modules/tubemq/architecture.md
+++ b/en-us/docs/modules/tubemq/architecture.md
@@ -2,7 +2,7 @@
 title: Architecture - Apache InLong's TubeMQ module
 ---
 
-## TubeMQ Architecture: ##
+## 1. TubeMQ Architecture:
 After years of evolution, the TubeMQ cluster is divided into the following 5 parts: 
 ![](img/sys_structure.png)
 
@@ -16,7 +16,7 @@ After years of evolution, the TubeMQ cluster is divided into the following 5 par
 
 - **Zookeeper:** Responsible for the Zookeeper part of the offset storage. This part of the function has been weakened to only the persistent storage of the offset. Considering the next multi-node copy function, this module is temporarily reserved;
 
-## Broker File Storage Scheme Improvement: ##
+## 2. Broker File Storage Scheme Improvement:
 Systems that use disks as data persistence media are faced with various system performance problems caused by disk problems. The TubeMQ system is no exception, the performance improvement is largely to solve the problem of how to read, write and store message data. In this regard TubeMQ has made many improvements: storage instances is as the smallest Topic data management unit; each storage instance includes a file storage block and a memory cache block; each Topic can be assigned multip [...]
 
 1. **File storage block:** The disk storage solution of TubeMQ is similar to Kafka, but it is not the same, as shown in the following figure: each file storage block is composed of an index file and a data file; the partiton is a logical partition in the data file; each Topic maintains and manages the file storage block separately, the related mechanisms include the aging cycle, the number of partitions, whether it is readable and writable, etc.
diff --git a/en-us/docs/modules/tubemq/client_rpc.html b/en-us/docs/modules/tubemq/client_rpc.html
index bd522b6..9b71329 100644
--- a/en-us/docs/modules/tubemq/client_rpc.html
+++ b/en-us/docs/modules/tubemq/client_rpc.html
@@ -12,14 +12,13 @@
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h2>General Introduction</h2>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>The implementation of this part can be found in <code>org.apache.tubemq.corerpc</code>. Each node in an Apache TubeMQ cluster communicates over TCP keep-alive connections. Messages are defined using a combination of binary and protobuf encoding.
 <img src="img/client_rpc/rpc_bytes_def.png" alt=""></p>
 <p>All we can see on TCP is a binary stream. We define a 4-byte msgToken <code>RPC\_PROTOCOL\_BEGIN\_TOKEN</code> in the header, which is used to distinguish messages and to verify the legitimacy of the counterpart. When a received message does not start with this header field, the client needs to close the connection, report the error, and quit or reconnect, because the protocol is not supported by TubeMQ or something may have gone wrong. Following it is a 4-byte serialNo, this fi [...]
 <p>We defined <code>listSize</code> as <code>\&amp;lt;len\&amp;gt;\&amp;lt;data\&amp;gt;</code> because serialized PB data is saved as ByteBuffer objects in TubeMQ, and in Java there is a maximum ByteBuffer block length (8196), so an overlength PB message needs to be saved in several ByteBuffers. No total length is counted; the ByteBuffer list is written directly when serializing into the TCP message.</p>
 <p><strong>Pay more attention when implementing multiple languages and SDKs.</strong> Need to serialize PB data content into arrays of blocks(supported in PB codecs).</p>
-<h2>PB format code:</h2>
+<h2>2 PB format code:</h2>
 <p>PB format encoding is divided into RPC framework definition, to the Master message encoding and to the Broker message encoding of three parts, you can use protobuf directly compiled to get different language codecs, it is very convenient to use.
 <img src="img/client_rpc/rpc_proto_def.png" alt=""></p>
 <p><code>RPC.proto</code> defines 6 struct, which divided into 2 class: Request message and Response message. Response message is divided into Successful Response and Exception Response.
@@ -28,7 +27,7 @@
 <img src="img/client_rpc/rpc_conn_detail.png" alt=""></p>
 <p>Flag marks whether the message is requested or not, and the next three marks represent the content of the message trace, which is not currently used; the related is a fixed mapping of the service type, protocol version, service type, etc., the more critical parameter RequestBody.timeout is the maximum allowable time from when a request is received by the server to when it is actually processed. Long wait time, discarded if exceeded, current default is 10 seconds, request filled as follows.
 <img src="img/client_rpc/rpc_header_fill.png" alt=""></p>
-<h2>Interactive diagram of the client's PB request &amp; response:</h2>
+<h2>3 Interactive diagram of the client's PB request &amp; response:</h2>
 <p><strong>Producer Interaction</strong>:</p>
 <p>The Producer has four pairs of instructions in the system: registration to the master, heartbeat to the master, exit from the master, and sending messages to brokers.
 <img src="img/client_rpc/rpc_producer_diagram.png" alt=""></p>
@@ -52,7 +51,7 @@
 <p>The Consumer has 7 pairs of commands in all: Register, Heartbeat and Exit to the Master; Register, Logout, Heartbeat, Pulling messages and Confirming messages to the Broker. Registration and Logout to the Broker are the same command, indicated by different status codes.</p>
 <p><img src="img/client_rpc/rpc_consumer_diagram.png" alt=""></p>
 <p>As we can see from the above picture, the Consumer first has to register to the Master, but registering to the Master can not get Metadata information immediately because TubeMQ is using a server-side load-balancing model, and the client needs to wait for the server to dispatch the consumption partition information; Consumer to Broker needs to register the logout operation. Partition is exclusive at the time of consumption, i.e., the same partition can only be consumed by one consumer [...]
-<p>##Client feature:</p>
+<h2>4 Client feature:</h2>
 <table>
 <thead>
 <tr>
@@ -284,12 +283,13 @@
 </tr>
 </tbody>
 </table>
-<h2>Client function Induction CaseByCase:</h2>
+<h2>5 Client function Induction CaseByCase:</h2>
 <p><strong>Client side and server side RPC interaction process</strong>:</p>
 <hr>
 <p><img src="img/client_rpc/rpc_inner_structure.png" alt=""></p>
 <p>As shown above, the client has to keep each sent request locally until the RPC times out or a response message is received; the response message is associated with the request by the SerialNo generated when the request was sent. The Broker and Topic information received from the server side is stored locally by the SDK, updated with the latest returned information, and reported periodically to the server side; the SDK also maintains the heartbea [...]
-<h2><strong>Message: Producer register to Master</strong>:</h2>
+<h3>5.1 Message: Producer register to Master:</h3>
+<hr>
 <p><img src="img/client_rpc/rpc_producer_register2M.png" alt=""></p>
 <p><strong>ClientId</strong>:Producer needs to construct a ClientId at startup, and the current construction rule is:</p>
 <p>Java: ClientId = IPV4 + <code>&quot;-&quot;</code> + Thread ID + <code>&quot;-&quot;</code> + createTime + <code>&quot;-&quot;</code> + Instance ID + <code>&quot;-&quot;</code> + Client Version ID [+ <code>&quot;-&quot;</code> + SDK]. It is recommended that other languages add the above markup for easier troubleshooting. The ID value is valid for the lifetime of the Producer.</p>
@@ -306,15 +306,18 @@
 <p><img src="img/client_rpc/rpc_master_authorizedinfo.png" alt=""></p>
 <p><strong>visitAuthorizedToken</strong>: To prevent clients from bypassing the Master's access authorization token, if that data is available, the SDK should save it locally and carry that information on subsequent visits to the Broker; if the field is changed on subsequent heartbeats, the locally cached data for that field needs to be updated.</p>
 <p><strong>authAuthorizedToken</strong>: the authenticated authorization token. If this field carries data, the SDK needs to save it and carry it on subsequent accesses to the Master and Broker; if the field changes in subsequent heartbeats, the locally cached value needs to be updated.</p>
-<h2><strong>Mseeage: Heartbeat from Producer to Master</strong>:</h2>
+<h3>5.2 Mseeage: Heartbeat from Producer to Master:</h3>
+<hr>
 <p><img src="img/client_rpc/rpc_producer_heartbeat2M.png" alt=""></p>
 <p><strong>topicInfos</strong>: the metadata for the Topics published by the SDK, including partition information and the Brokers they reside on. Since there is a lot of metadata and passing the object data through as-is would generate very large outbound traffic, we improved the encoding.</p>
 <p><img src="img/client_rpc/rpc_convert_topicinfo.png" alt=""></p>
 <p><strong>requireAuth</strong>: a code indicating that the Master's previous authAuthorizedToken has expired and that the SDK must report the username and password signature on the next request.</p>
-<h2><strong>Message: Producer exits from Master</strong>:</h2>
+<h3>5.3 Message: Producer exits from Master:</h3>
+<hr>
 <p><img src="img/client_rpc/rpc_producer_close2M.png" alt=""></p>
 <p>Note that if authentication is enabled, the close operation is authenticated as well, to prevent external interference with the operation.</p>
-<h2><strong>Message: Producer to Broker</strong>:</h2>
+<h3>5.4 Message: Producer to Broker:</h3>
+<hr>
 <p>This part is related to the definition of RPC Message.</p>
 <p><img src="img/client_rpc/rpc_producer_sendmsg2B.png" alt=""></p>
 <p><strong>Data</strong> is the binary byte stream of Message.</p>
@@ -322,7 +325,8 @@
 <p><strong>sentAddr</strong> is the local IPv4 address of the machine where the SDK is located converted to a 32-bit numeric ID.</p>
 <p><strong>msgType</strong> is the message type used for filtering; <code>msgTime</code> is the message time when the SDK sends the message. Its value comes from the value filled in by <code>putSystemHeader</code> when constructing the Message, and Message provides a corresponding API to get it.</p>
 <p><strong>requireAuth</strong>: requests authenticated operations to the Broker for data production; it is not currently in effect due to performance concerns. The authAuthorizedToken value in the sent message is based on the value provided by the Master and changes when the Master changes it.</p>
-<h2><strong>Partition Loadbalance</strong>:</h2>
+<h3>5.5 Partition Loadbalance:</h3>
+<hr>
 <p>Apache TubeMQ currently uses a server-side load balancing mode, where the balancing process is managed and maintained by the server; subsequent versions will add a client-side load balancing mode, so that two modes can co-exist.</p>
 <p><strong>Server side load balancing</strong>:</p>
 <ul>
diff --git a/en-us/docs/modules/tubemq/client_rpc.json b/en-us/docs/modules/tubemq/client_rpc.json
index ad89cf7..7a8ea7e 100644
--- a/en-us/docs/modules/tubemq/client_rpc.json
+++ b/en-us/docs/modules/tubemq/client_rpc.json
@@ -1,6 +1,6 @@
 {
   "filename": "client_rpc.md",
-  "__html": "<h1>Definition of TubeMQ RPC</h1>\n<h2>General Introduction</h2>\n<p>Implements of this part can be found in <code>org.apache.tubemq.corerpc</code>. Each node in Apache TubeMQ Cluster Communicates by TCP Keep-Alive. Mseeages are definded using binary and protobuf combined.\n<img src=\"img/client_rpc/rpc_bytes_def.png\" alt=\"\"></p>\n<p>All we can see in TCP are binary streams. We defind a 4-byte msgToken message <code>RPC\\_PROTOCOL\\_BEGIN\\_TOKEN</code> in header, which a [...]
+  "__html": "<h2>1 General Introduction</h2>\n<p>Implements of this part can be found in <code>org.apache.tubemq.corerpc</code>. Each node in Apache TubeMQ Cluster Communicates by TCP Keep-Alive. Mseeages are definded using binary and protobuf combined.\n<img src=\"img/client_rpc/rpc_bytes_def.png\" alt=\"\"></p>\n<p>All we can see in TCP are binary streams. We defind a 4-byte msgToken message <code>RPC\\_PROTOCOL\\_BEGIN\\_TOKEN</code> in header, which are used to distinguish each messa [...]
   "link": "/en-us/docs/modules/tubemq/client_rpc.html",
   "meta": {
     "title": "Client RPC - Apache InLong's TubeMQ module"
diff --git a/en-us/docs/modules/tubemq/client_rpc.md b/en-us/docs/modules/tubemq/client_rpc.md
index e423857..6f8db4c 100644
--- a/en-us/docs/modules/tubemq/client_rpc.md
+++ b/en-us/docs/modules/tubemq/client_rpc.md
@@ -2,9 +2,8 @@
 title: Client RPC - Apache InLong's TubeMQ module
 ---
 
-# Definition of TubeMQ RPC
 
-## General Introduction
+## 1 General Introduction
 
 The implementation of this part can be found in `org.apache.tubemq.corerpc`. Each node in an Apache TubeMQ cluster communicates over TCP keep-alive connections. Messages are defined using a combination of binary and protobuf.
 ![](img/client_rpc/rpc_bytes_def.png)
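
A minimal sketch of the frame layout described here, assuming a frame consists of the 4-byte msgToken `RPC_PROTOCOL_BEGIN_TOKEN`, a serial number, and `listSize` blocks each encoded as `<len><data>`; the class name, token placeholder and field widths are illustrative only, so consult RPC.proto and `org.apache.tubemq.corerpc` for the authoritative layout:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.List;

public final class FrameEncoderSketch {
    // Placeholder value; the real 4-byte begin token is RPC_PROTOCOL_BEGIN_TOKEN.
    private static final int MSG_TOKEN = 0x0;

    public static byte[] encode(int serialNo, List<byte[]> pbBlocks) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(MSG_TOKEN);        // begin token distinguishing each message
        out.writeInt(serialNo);         // lets the client match the response to its request
        out.writeInt(pbBlocks.size());  // listSize
        for (byte[] block : pbBlocks) { // each serialized PB block as <len><data>
            out.writeInt(block.length);
            out.write(block);
        }
        return buf.toByteArray();
    }
}
```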
@@ -16,7 +15,7 @@ We defined `listSize` as `\&lt;len\&gt;\&lt;data\&gt;` because serialized PB dat
 **Pay particular attention when implementing SDKs in other languages.** The PB data content needs to be serialized into arrays of blocks (block arrays are supported by PB codecs).
 
 
-## PB format code:
+## 2 PB format code:
 
 PB format encoding is divided into three parts: the RPC framework definition, the message encoding for the Master, and the message encoding for the Broker. You can compile the protobuf definitions directly to get codecs for different languages, which is very convenient.
 ![](img/client_rpc/rpc_proto_def.png)
@@ -31,7 +30,7 @@ Flag marks whether the message is requested or not, and the next three marks rep
 ![](img/client_rpc/rpc_header_fill.png)
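
For illustration, a minimal sketch of filling these fields, assuming protobuf-generated classes named after the PB definitions pictured above (the `RPCProtos.*` class names and the flag value are assumptions, not the exact SDK API):

```java
// Sketch only: RPCProtos.* are assumed generated classes; values are illustrative.
static void fillRequestParts(int serviceTypeId, int protocolVer, int methodId,
                             com.google.protobuf.ByteString pbRequest) {
    RPCProtos.RpcConnHeader connHeader = RPCProtos.RpcConnHeader.newBuilder()
            .setFlag(1)                      // marks the message as a request
            .build();                        // the three trace fields stay unset (unused today)
    RPCProtos.RequestHeader requestHeader = RPCProtos.RequestHeader.newBuilder()
            .setServiceType(serviceTypeId)   // fixed mapping of the target service
            .setProtocolVer(protocolVer)     // fixed protocol version
            .build();
    RPCProtos.RequestBody requestBody = RPCProtos.RequestBody.newBuilder()
            .setMethod(methodId)             // which API of the service is being called
            .setTimeout(10000L)              // max server-side wait before discard, default 10s
            .setRequest(pbRequest)           // the service-specific PB payload
            .build();
    // The three parts are then serialized and framed as described in section 1.
}
```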
 
 
-## Interactive diagram of the client's PB request & response:
+## 3 Interactive diagram of the client's PB request & response:
 
 **Producer Interaction**:
 
@@ -58,7 +57,7 @@ Consumer has 7 pairs of command in all, Register, Heartbeat, Exit to Master; Reg
 
 As we can see from the picture above, the Consumer first has to register to the Master, but registering to the Master does not return metadata immediately, because TubeMQ uses a server-side load-balancing model and the client needs to wait for the server to dispatch the consumption partition information; the Consumer also needs to perform register and logout operations to the Broker. A partition is exclusive during consumption, i.e., the same partition can only be consumed by one consumer in [...]
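
For illustration, the wait for this server-side dispatch looks like the consumer examples later in these docs (a minimal sketch reusing the names from PullConsumerExample):

```java
// Sketch: after completeSubscribe(), block until the server has dispatched
// consumption partitions to this member of the consumer group.
messagePullConsumer.completeSubscribe();
while (!messagePullConsumer.isPartitionsReady(1000)) {
    ThreadUtils.sleep(1000); // no partitions assigned yet; keep waiting
}
```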
 
-##Client feature:
+## 4 Client feature:
 
 | **FEATURE** | **Java** | **C/C++** | **Go** | **Python** | **Rust** | **NOTE** |
 | --- | --- | --- | --- | --- | --- | --- |
@@ -88,7 +87,7 @@ As we can see from the above picture, the Consumer first has to register to the
 | Consumer Pull Consumption frequency limit | ✅ | | | | | |
 
 
-## Client function Induction CaseByCase:
+## 5 Client function Induction CaseByCase:
 
 **Client side and server side RPC interaction process**:
 
@@ -98,8 +97,10 @@ As we can see from the above picture, the Consumer first has to register to the
 
 As shown above, the client has to keep each sent request message locally until the RPC times out or a response message is received; the response is matched to its request by the SerialNo generated when the request was sent. The Broker and Topic information received from the server side is stored locally by the SDK, updated with the latest returned information, and periodically reported back to the server side; the SDK also maintains the heartbeat o [...]
 
-**Message: Producer register to Master**:
+### 5.1 Message: Producer register to Master:
+
 ----------
+
 ![](img/client_rpc/rpc_producer_register2M.png)
 
 **ClientId**: the Producer needs to construct a ClientId at startup; the current construction rule is:
@@ -133,8 +134,10 @@ Java: ClientId = IPV4 + `&quot;-&quot;` + Thread ID + `&quot;-&quot;` + createTi
 **authAuthorizedToken**: the authenticated authorization token. If this field carries data, the SDK needs to save it and carry it on subsequent accesses to the Master and Broker; if the field changes in subsequent heartbeats, the locally cached value needs to be updated.
 
 
-**Mseeage: Heartbeat from Producer to Master**:
+### 5.2 Mseeage: Heartbeat from Producer to Master:
+
 ----------
+
 ![](img/client_rpc/rpc_producer_heartbeat2M.png)
 
 **topicInfos**: the metadata for the Topics published by the SDK, including partition information and the Brokers they reside on. Since there is a lot of metadata and passing the object data through as-is would generate very large outbound traffic, we improved the encoding.
@@ -143,14 +146,18 @@ Java: ClientId = IPV4 + `&quot;-&quot;` + Thread ID + `&quot;-&quot;` + createTi
 
 **requireAuth**: a code indicating that the Master's previous authAuthorizedToken has expired and that the SDK must report the username and password signature on the next request.
 
-**Message: Producer exits from Master**:
+### 5.3 Message: Producer exits from Master:
+
 ----------
+
 ![](img/client_rpc/rpc_producer_close2M.png)
 
 Note that if authentication is enabled, the close operation is authenticated as well, to prevent external interference with the operation.
 
-**Message: Producer to Broker**:
+### 5.4 Message: Producer to Broker:
+
 ----------
+
 This part is related to the definition of RPC Message.
 
 ![](img/client_rpc/rpc_producer_sendmsg2B.png)
@@ -165,8 +172,10 @@ This part is related to the definition of RPC Message.
 
 **requireAuth**: requests authenticated operations to the Broker for data production; it is not currently in effect due to performance concerns. The authAuthorizedToken value in the sent message is based on the value provided by the Master and changes when the Master changes it.
 
-**Partition Loadbalance**:
+### 5.5 Partition Loadbalance:
+
 ----------
+
 Apache TubeMQ currently uses a server-side load balancing mode, where the balancing process is managed and maintained by the server; subsequent versions will add a client-side load balancing mode, so that two modes can co-exist.
 
 **Server side load balancing**:
diff --git a/en-us/docs/modules/tubemq/clients_java.html b/en-us/docs/modules/tubemq/clients_java.html
index a405ec6..da3daa8 100644
--- a/en-us/docs/modules/tubemq/clients_java.html
+++ b/en-us/docs/modules/tubemq/clients_java.html
@@ -7,27 +7,25 @@
 	<meta name="keywords" content="clients_java" />
 	<meta name="description" content="clients_java" />
 	<!-- 网页标签标题 -->
-	<title>JAVA SDK API - Apache InLong&#39;s TubeMQ module</title>
+	<title>TubeMQ JAVA SDK API - Apache InLong&#39;s TubeMQ module</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<hr>
-<h3><strong>1. Introduction to basic object interfaces:</strong></h3>
-<h4><strong>a) MessageSessionFactory (message session factory):</strong></h4>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+<h3>1.1 MessageSessionFactory (message session factory):</h3>
 <p>TubeMQ uses MessageSessionFactory (the message session factory) to manage network connections, and divides it into the TubeSingleSessionFactory (single-connection session factory) and TubeMultiSessionFactory (multi-connection session factory) classes according to whether different clients of a business share a connection. As the code shows, the single-connection session defines clientFactory as a static class, so that different clients within one process connecting to the same target server share a single underlying physical connection, while the multi-connection session defines clientFactory as a non-static class, so that clients created from different session factories within the same process establish separate physical connections. This design avoids creating too many connections; businesses can choose the session factory class that fits their needs, and in most cases the single-connection session factory is used.</p>
-<h4><strong>b) MasterInfo:</strong></h4>
+<h3>1.2 MasterInfo:</h3>
 <p>The Master address information object of TubeMQ. Its distinguishing feature is support for configuring multiple Master addresses: since the TubeMQ Master relies on BDB's storage capability for metadata management and hot-standby HA, multiple Master address entries need to be configured accordingly. Both IP and domain-name formats are supported; because TubeMQ's HA is a hot-standby mode, the client must ensure that every Master address is reachable. This information is used when initializing TubeClientConfig and ConsumerConfig objects; for configuration convenience, the multiple Master addresses are written as one string in the format "ip1:port1,ip2:port2,ip3:port3" and parsed from it.</p>
-<h4><strong>c) TubeClientConfig:</strong></h4>
+<h3>1.3 TubeClientConfig:</h3>
 <p>The initialization class for MessageSessionFactory (the message session factory), an object class that carries the information for creating network connections and the client control parameters, including RPC timeout settings, socket attribute settings, connection quality detection parameters, TLS settings, and authentication/authorization settings. This class, together with the ConsumerConfig class introduced next, changed the most relative to versions before TubeMQ 3.8.0. The main reason is that the TubeMQ interface definitions had not changed for over six years and suffered from ambiguous interface semantics, unclear units for interface attributes, and values that programs could not disambiguate in several situations; considering the convenience of self-diagnosing problems in the open-sourced code and the learning cost for newcomers, we redefined the interfaces this time. For the differences before and after the redefinition, see the configuration interface definition description.</p>
-<h4><strong>d) ConsumerConfig:</strong></h4>
+<h3>1.4 ConsumerConfig:</h3>
 <p>The ConsumerConfig class is a subclass of TubeClientConfig; on top of TubeClientConfig it adds the parameters carried when initializing Consumer objects. Therefore, in a MessageSessionFactory object that contains both a Producer and a Consumer, the session-factory settings follow the contents used to initialize the MessageSessionFactory class, while each Consumer object follows the initialization object passed at its creation. Consumers are further divided into Pull consumers and Push consumers according to their consumption behavior, and the parameters specific to the two kinds are distinguished by the "pull" or "push" markers carried in the parameter interfaces.</p>
-<h4><strong>e) Message:</strong></h4>
+<h3>1.5 Message:</h3>
 <p>The Message class is the message object passed around in TubeMQ. The data set by the business is passed unchanged from the producer to the message receiver; the attribute content is a field shared with the TubeMQ system, and what the business fills in will not be lost or rewritten, but the field may gain additional content filled in by the TubeMQ system, and in later versions such added TubeMQ content may be removed without notice. What needs attention here is the Message.putSystemHeader(final String msgType, final String msgTime) interface, which sets the message type and the message sending time: msgType is used for filtering on the consumer side, and msgTime is used as the message-time dimension when TubeMQ compiles data send/receive statistics.</p>
-<h4><strong>f) MessageProducer:</strong></h4>
+<h3>1.6 MessageProducer:</h3>
 <p>The message producer class, which performs message production. Message sending offers both synchronous and asynchronous interfaces. Currently messages are sent to the backend servers in Round Robin fashion; later we will consider selecting backend servers according to business-specified algorithms. Note when using this class that we support publishing the full set of Topics at initialization as well as temporarily adding new Topics to publish during production, but a temporarily added Topic does not take effect immediately; before using a newly added Topic, call the isTopicCurAcceptPublish interface to check whether the Topic has been published and accepted by the server, otherwise message sending may fail.</p>
-<h4><strong>g) MessageConsumer:</strong></h4>
+<h3>1.7 MessageConsumer:</h3>
 <p>This class has two subclasses, PullMessageConsumer and PushMessageConsumer, whose wrappers provide the Pull and Push semantics for the business side. TubeMQ actually interacts with the backend services in Pull mode; we wrapped it for ease of use, and as you can see, the difference is that Push initializes a thread group at startup to perform the active data pulling. Points to note:</p>
 <ul>
 <li>
@@ -38,16 +36,12 @@
 </li>
 </ul>
 <hr>
-<h3><strong>2. Interface invocation examples:</strong></h3>
-<h4><strong>a) Environment preparation:</strong></h4>
+<h2>2 Interface invocation examples:</h2>
+<h3>2.1 Environment preparation:</h3>
 <p>The TubeMQ open-source package provides concrete production and consumption code examples in org.apache.tubemq.example. Here we use a practical example to show how to fill in the parameters and call the corresponding interfaces. First we set up a TubeMQ cluster with 3 Master nodes, whose addresses are test_1.domain.com, test_2.domain.com and test_3.domain.com, all on port 8080. In this cluster we set up several Brokers and created 3 topic configurations on them: topic_1, topic_2 and topic_3; then we start the corresponding Brokers and wait for the Consumer and Producer to be created.</p>
-<h4><strong>b) Create a Consumer:</strong></h4>
+<h3>2.2 Create a Consumer:</h3>
 <p>See the org.apache.tubemq.example.MessageConsumerExample class file. The Consumer is a client object that handles network interaction and coordination; it is meant to be initialized once and kept resident in memory for repeated use, and is not suited to one-shot pull-and-consume scenarios. As shown below, we define the MessageConsumerExample wrapper class, in which we define the MessageSessionFactory class for network interaction and the PushMessageConsumer class used for Push consumption:</p>
-<ul>
-<li>
<h6><strong>i. Initialize the MessageConsumerExample class:</strong></h6>
-</li>
-</ul>
+<h5>2.2.1 Initialize the MessageConsumerExample class:</h5>
 <ol>
 <li>
 <p>First construct a ConsumerConfig class and fill in the initialization information, including the local IPv4 address, the Master cluster address, and the consumer group name; the Master address value passed in here is "test_1.domain.com:8080,test_2.domain.com:8080,test_3.domain.com:8080";</p>
@@ -93,9 +87,7 @@
     }
 }
 </code></pre>
-<ul>
-<li><strong>ii. Subscribe to Topics:</strong></li>
-</ul>
+<h4>2.2.2 Subscribe to Topics:</h4>
 <p>We do not subscribe in the consume-from-specified-Offset mode and have no filtering requirement, so in the code below we only specify the Topics and pass null for the corresponding filter-item sets; meanwhile, different message callback handlers can be passed for different Topics. Here we subscribe to 3 topics, topic_1, topic_2 and topic_3, calling the subscribe function for each topic with the corresponding parameters:</p>
 <pre><code class="language-java"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">subscribe</span><span class="hljs-params">(<span class="hljs-keyword">final</span> Map&lt;String, TreeSet&lt;String&gt;&gt; topicTidsMap)</span>
     <span class="hljs-keyword">throws</span> TubeClientException </span>{
@@ -107,9 +99,7 @@
     messageConsumer.completeSubscribe();
 }
 </code></pre>
-<ul>
-<li><strong>iii. Consume:</strong></li>
-</ul>
+<h4>2.2.3 Consume:</h4>
 <p>At this point the subscription to the corresponding topics in the cluster is complete. Once the system starts running, data will be continuously pushed through the callback function to the business layer for processing:</p>
 <pre><code class="language-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">DefaultMessageListener</span> <span class="hljs-keyword">implements</span> <span class="hljs-title">MessageListener</span> </span>{
 
@@ -134,7 +124,7 @@
     }
 }
 </code></pre>
-<h4><strong>c) Create a Producer:</strong></h4>
+<h3>2.3 Create a Producer:</h3>
 <p>In the production environment, business data is received and aggregated through a proxy layer that wraps a fair amount of exception handling, so most businesses never touch TubeSDK's Producer class directly. Considering the scenario where a business sets up its own cluster and uses TubeMQ directly, we provide a corresponding demo here; see the org.apache.tubemq.example.MessageProducerExample class file for reference. <strong>Note</strong> that unless the business uses the data platform's TubeMQ cluster as its MQ service, it should still follow the production access process and produce data through the proxy layer:</p>
 <ul>
 <li><strong>i. Initialize the MessageProducerExample class:</strong></li>
@@ -164,16 +154,12 @@
     }
 }
 </code></pre>
-<ul>
-<li><strong>ii. Publish Topics:</strong></li>
-</ul>
+<h4>2.3.1 Publish Topics:</h4>
 <pre><code class="language-java"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">publishTopics</span><span class="hljs-params">(List&lt;String&gt; topicList)</span> <span class="hljs-keyword">throws</span> TubeClientException </span>{
     <span class="hljs-keyword">this</span>.messageProducer.publish(<span class="hljs-keyword">new</span> TreeSet&lt;String&gt;(topicList));
 }
 </code></pre>
-<ul>
-<li><strong>iii. Produce data:</strong></li>
-</ul>
+<h4>2.3.2 Produce data:</h4>
 <p>The following shows the concrete data construction and sending logic: construct a Message object and call the sendMessage() function to send it. Synchronous and asynchronous interfaces are available; choose according to business requirements. Note that this business calls the message.putSystemHeader() function per message to set the message's filter attribute and send time, so that the system can perform filtered consumption and compile metrics. With that done, a message is sent out: if the result is success, the message has been accepted and will be processed; if it fails, the business handles it according to the specific error code and message, as detailed in 《TubeMQ错误信息介绍.xlsx》:</p>
 <pre><code class="language-java"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">sendMessageAsync</span><span class="hljs-params">(<span class="hljs-keyword">int</span> id, <span class="hljs-keyword">long</span> currtime,
                              String topic, <span class="hljs-keyword">byte</span>[] body,
@@ -197,9 +183,7 @@
     }
 }
 </code></pre>
-<ul>
-<li><strong>iv. Notes on the alternative Producer class MAMessageProducerExample:</strong></li>
-</ul>
+<h4>2.3.3 Notes on the alternative Producer class MAMessageProducerExample:</h4>
 <p>This class is initialized differently from MessageProducerExample: it initializes its connections through the TubeMultiSessionFactory multi-session factory class. The demo shows how to use the multi-session factory feature, which can raise system throughput through multiple physical connections (TubeMQ uses connection multiplexing to reduce the use of physical connection resources); used appropriately it can improve production performance. The Consumer side can also be initialized through the multi-session factory, but since consumption is a long-running process that occupies few connection resources, this is not recommended for consumption scenarios.</p>
 <p>That completes the production and consumption examples; you can download the corresponding code, compile it, and run it end to end to see that it really is this simple 😊</p>
 <hr>
diff --git a/en-us/docs/modules/tubemq/clients_java.json b/en-us/docs/modules/tubemq/clients_java.json
index 2e1a40f..2c9d31b 100644
--- a/en-us/docs/modules/tubemq/clients_java.json
+++ b/en-us/docs/modules/tubemq/clients_java.json
@@ -1,8 +1,8 @@
 {
   "filename": "clients_java.md",
-  "__html": "<h2><strong>TubeMQ Lib</strong> <strong>接口使用</strong></h2>\n<hr>\n<h3><strong>1. 基础对象接口介绍:</strong></h3>\n<h4><strong>a) MessageSessionFactory(消息会话工厂):</strong></h4>\n<p>TubeMQ 采用MessageSessionFactory(消息会话工厂)来管理网络连接,又根据业务不同客户端是否复用连接细分为TubeSingleSessionFactory(单连接会话工厂)类和TubeMultiSessionFactory(多连接会话工厂)类2个部分,其实现逻辑大家可以从代码可以看到,单连接会话通过定义clientFactory静态类,实现了进程内不同客户端连接相同目标服务器时底层物理连接只建立一条的特征,多连接会话里定义的clientFactory为非静态类,从而实现同进程内通过不同会话工厂,创建的客户端所属的连接会话不同建立不同的物理连接。通过这种构造解决连接创建过多的问题,业务可以 [...]
+  "__html": "<h2>1 基础对象接口介绍:</h2>\n<h3>1.1 MessageSessionFactory(消息会话工厂):</h3>\n<p>TubeMQ 采用MessageSessionFactory(消息会话工厂)来管理网络连接,又根据业务不同客户端是否复用连接细分为TubeSingleSessionFactory(单连接会话工厂)类和TubeMultiSessionFactory(多连接会话工厂)类2个部分,其实现逻辑大家可以从代码可以看到,单连接会话通过定义clientFactory静态类,实现了进程内不同客户端连接相同目标服务器时底层物理连接只建立一条的特征,多连接会话里定义的clientFactory为非静态类,从而实现同进程内通过不同会话工厂,创建的客户端所属的连接会话不同建立不同的物理连接。通过这种构造解决连接创建过多的问题,业务可以根据自身需要可以选择不同的消息会话工厂类,一般情况下我们使用单连接会话工厂类。</p>\n<h3>1.2 MasterInfo:</h3>\n<p>Tub
 eMQ的Master地址信息对象,该对象的特点 [...]
   "link": "/en-us/docs/modules/tubemq/clients_java.html",
   "meta": {
-    "title": "JAVA SDK API - Apache InLong's TubeMQ module"
+    "title": "TubeMQ JAVA SDK API - Apache InLong's TubeMQ module"
   }
 }
\ No newline at end of file
diff --git a/en-us/docs/modules/tubemq/clients_java.md b/en-us/docs/modules/tubemq/clients_java.md
index 780b1ab..616ef53 100644
--- a/en-us/docs/modules/tubemq/clients_java.md
+++ b/en-us/docs/modules/tubemq/clients_java.md
@@ -1,52 +1,44 @@
 ---
-title: JAVA SDK API - Apache InLong's TubeMQ module
+title: TubeMQ JAVA SDK API - Apache InLong's TubeMQ module
 ---
 
-## **TubeMQ Lib** **Interface Usage**
 
-------
-
-
-
-### **1. Introduction to basic object interfaces:**
+## 1 Introduction to basic object interfaces:
 
-#### **a) MessageSessionFactory (message session factory):**
+### 1.1 MessageSessionFactory (message session factory):
 
 TubeMQ uses MessageSessionFactory (the message session factory) to manage network connections, and divides it into the TubeSingleSessionFactory (single-connection session factory) and TubeMultiSessionFactory (multi-connection session factory) classes according to whether different clients of a business share a connection. As the code shows, the single-connection session defines clientFactory as a static class, so that different clients within one process connecting to the same target server share a single underlying physical connection, while the multi-connection session defines clientFactory as a non-static class, so that clients created from different session factories within the same process establish separate physical connections. This design avoids creating too many connections; businesses can choose the session factory class that fits their needs, and in most cases the single-connection session factory is used.
 
  
-
-#### **b) MasterInfo:**
+### 1.2 MasterInfo:
 
 The Master address information object of TubeMQ. Its distinguishing feature is support for configuring multiple Master addresses: since the TubeMQ Master relies on BDB's storage capability for metadata management and hot-standby HA, multiple Master address entries need to be configured accordingly. Both IP and domain-name formats are supported; because TubeMQ's HA is a hot-standby mode, the client must ensure that every Master address is reachable. This information is used when initializing TubeClientConfig and ConsumerConfig objects; for configuration convenience, the multiple Master addresses are written as one string in the format "ip1:port1,ip2:port2,ip3:port3" and parsed from it.
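
For example, a minimal sketch of passing the multi-Master address string when initializing a configuration object (the addresses reuse the example cluster from section 2.1; the group name is a placeholder):

```java
// Sketch: the Master address list is one comma-separated string that the SDK parses.
String masterAddrs = "test_1.domain.com:8080,test_2.domain.com:8080,test_3.domain.com:8080";
ConsumerConfig consumerConfig = new ConsumerConfig(masterAddrs, "test_group");
```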
 
  
-
-#### **c) TubeClientConfig:**
+### 1.3 TubeClientConfig:
 
 The initialization class for MessageSessionFactory (the message session factory), an object class that carries the information for creating network connections and the client control parameters, including RPC timeout settings, socket attribute settings, connection quality detection parameters, TLS settings, and authentication/authorization settings. This class, together with the ConsumerConfig class introduced next, changed the most relative to versions before TubeMQ 3.8.0. The main reason is that the TubeMQ interface definitions had not changed for over six years and suffered from ambiguous interface semantics, unclear units for interface attributes, and values that programs could not disambiguate in several situations; considering the convenience of self-diagnosing problems in the open-sourced code and the learning cost for newcomers, we redefined the interfaces this time. For the differences before and after the redefinition, see the configuration interface definition description.
 
  
 
-#### **d) ConsumerConfig:**
+### 1.4 ConsumerConfig:
 
 The ConsumerConfig class is a subclass of TubeClientConfig; on top of TubeClientConfig it adds the parameters carried when initializing Consumer objects. Therefore, in a MessageSessionFactory object that contains both a Producer and a Consumer, the session-factory settings follow the contents used to initialize the MessageSessionFactory class, while each Consumer object follows the initialization object passed at its creation. Consumers are further divided into Pull consumers and Push consumers according to their consumption behavior, and the parameters specific to the two kinds are distinguished by the "pull" or "push" markers carried in the parameter interfaces.
 
  
-
-#### **e) Message:**
+### 1.5 Message:
 
 The Message class is the message object passed around in TubeMQ. The data set by the business is passed unchanged from the producer to the message receiver; the attribute content is a field shared with the TubeMQ system, and what the business fills in will not be lost or rewritten, but the field may gain additional content filled in by the TubeMQ system, and in later versions such added TubeMQ content may be removed without notice. What needs attention here is the Message.putSystemHeader(final String msgType, final String msgTime) interface, which sets the message type and the message sending time: msgType is used for filtering on the consumer side, and msgTime is used as the message-time dimension when TubeMQ compiles data send/receive statistics.
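
For instance, a minimal sketch of setting the system header before sending (the msgType value and the time format are illustrative assumptions, not prescribed by the SDK):

```java
// Sketch: set the filterable message type and the send time on a Message.
Message message = new Message("topic_1", "hello tubemq".getBytes());
String msgTime = new java.text.SimpleDateFormat("yyyyMMddHHmm").format(new java.util.Date());
message.putSystemHeader("order_msg", msgTime); // msgType drives consumer-side filtering
```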
 
  
 
-#### **f) MessageProducer:**
+### 1.6 MessageProducer:
 
 The message producer class, which performs message production. Message sending offers both synchronous and asynchronous interfaces. Currently messages are sent to the backend servers in Round Robin fashion; later we will consider selecting backend servers according to business-specified algorithms. Note when using this class that we support publishing the full set of Topics at initialization as well as temporarily adding new Topics to publish during production, but a temporarily added Topic does not take effect immediately; before using a newly added Topic, call the isTopicCurAcceptPublish interface to check whether the Topic has been published and accepted by the server, otherwise message sending may fail. A sketch of this check follows below.
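
A minimal sketch of the check described above for a Topic added after initialization (the waiting policy is illustrative, not prescribed by the SDK):

```java
// Sketch: publish a newly added Topic, then wait until the server accepts it.
messageProducer.publish(new java.util.TreeSet<String>(java.util.Collections.singletonList("topic_new")));
long deadline = System.currentTimeMillis() + 30000L;
while (!messageProducer.isTopicCurAcceptPublish("topic_new")) {
    if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("topic_new not yet accepted by the server");
    }
    Thread.sleep(500); // wait for the Master to dispatch the new Topic metadata
}
// topic_new is safe to send to from here on.
```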
 
  
 
-#### **g) MessageConsumer:**
+### 1.7 MessageConsumer:
 
 This class has two subclasses, PullMessageConsumer and PushMessageConsumer, whose wrappers provide the Pull and Push semantics for the business side (see the sketch below). TubeMQ actually interacts with the backend services in Pull mode; we wrapped it for ease of use, and as you can see, the difference is that Push initializes a thread group at startup to perform the active data pulling. Points to note:
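
The two consumption styles are created from the same session factory, as in this minimal sketch (the names follow the consumer examples in these docs):

```java
// Sketch: Pull leaves the fetch loop to the business; Push starts an internal
// thread group that pulls data and invokes the registered callback.
MessageSessionFactory sessionFactory = new TubeSingleSessionFactory(consumerConfig);
PullMessageConsumer pullConsumer = sessionFactory.createPullConsumer(consumerConfig);
PushMessageConsumer pushConsumer = sessionFactory.createPushConsumer(consumerConfig);
```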
 
@@ -60,19 +52,18 @@ Message类是TubeMQ里传递的消息对象类,业务设置的data会从生产
 
 
 
-### **2. Interface invocation examples:**
+## 2 Interface invocation examples:
 
-#### **a) Environment preparation:**
+### 2.1 Environment preparation:
 
 The TubeMQ open-source package provides concrete production and consumption code examples in org.apache.tubemq.example. Here we use a practical example to show how to fill in the parameters and call the corresponding interfaces. First we set up a TubeMQ cluster with 3 Master nodes, whose addresses are test_1.domain.com, test_2.domain.com and test_3.domain.com, all on port 8080. In this cluster we set up several Brokers and created 3 topic configurations on them: topic_1, topic_2 and topic_3; then we start the corresponding Brokers and wait for the Consumer and Producer to be created.
 
  
-
-#### **b) Create a Consumer:**
+### 2.2 Create a Consumer:
 
 See the org.apache.tubemq.example.MessageConsumerExample class file. The Consumer is a client object that handles network interaction and coordination; it is meant to be initialized once and kept resident in memory for repeated use, and is not suited to one-shot pull-and-consume scenarios. As shown below, we define the MessageConsumerExample wrapper class, in which we define the MessageSessionFactory class for network interaction and the PushMessageConsumer class used for Push consumption:
 
-- ###### **i. Initialize the MessageConsumerExample class:**
+##### 2.2.1 Initialize the MessageConsumerExample class:
 
 1. First construct a ConsumerConfig class and fill in the initialization information, including the local IPv4 address, the Master cluster address, and the consumer group name; the Master address value passed in here is "test_1.domain.com:8080,test_2.domain.com:8080,test_3.domain.com:8080";
 
@@ -116,7 +107,7 @@ public final class MessageConsumerExample {
 
 
 
-- **ii. Subscribe to Topics:**
+#### 2.2.2 Subscribe to Topics:
 
 We do not subscribe in the consume-from-specified-Offset mode and have no filtering requirement, so in the code below we only specify the Topics and pass null for the corresponding filter-item sets; meanwhile, different message callback handlers can be passed for different Topics. Here we subscribe to 3 topics, topic_1, topic_2 and topic_3, calling the subscribe function for each topic with the corresponding parameters:
 
@@ -134,7 +125,7 @@ public void subscribe(final Map<String, TreeSet<String>> topicTidsMap)
 
 
 
-- **iii. Consume:**
+#### 2.2.3 Consume:
 
 At this point the subscription to the corresponding topics in the cluster is complete. Once the system starts running, data will be continuously pushed through the callback function to the business layer for processing:
 
@@ -165,7 +156,7 @@ public class DefaultMessageListener implements MessageListener {
 
 
 
-#### **c) Create a Producer:**
+### 2.3 Create a Producer:
 
 In the production environment, business data is received and aggregated through a proxy layer that wraps a fair amount of exception handling, so most businesses never touch TubeSDK's Producer class directly. Considering the scenario where a business sets up its own cluster and uses TubeMQ directly, we provide a corresponding demo here; see the org.apache.tubemq.example.MessageProducerExample class file for reference. **Note** that unless the business uses the data platform's TubeMQ cluster as its MQ service, it should still follow the production access process and produce data through the proxy layer:
 
@@ -201,7 +192,7 @@ public final class MessageProducerExample {
 
 
 
-- **ii. Publish Topics:**
+#### 2.3.1 Publish Topics:
 
 ```java
 public void publishTopics(List<String> topicList) throws TubeClientException {
@@ -211,7 +202,7 @@ public void publishTopics(List<String> topicList) throws TubeClientException {
 
 
 
-- **iii. Produce data:**
+#### 2.3.2 Produce data:
 
 The following shows the concrete data construction and sending logic: construct a Message object and call the sendMessage() function to send it. Synchronous and asynchronous interfaces are available; choose according to business requirements. Note that this business calls the message.putSystemHeader() function per message to set the message's filter attribute and send time, so that the system can perform filtered consumption and compile metrics. With that done, a message is sent out: if the result is success, the message has been accepted and will be processed; if it fails, the business handles it according to the specific error code and message, as detailed in 《TubeMQ错误信息介绍.xlsx》:
 
@@ -241,7 +232,7 @@ public void sendMessageAsync(int id, long currtime,
 
 
 
-- **iv. Notes on the alternative Producer class MAMessageProducerExample:**
+#### 2.3.3 Notes on the alternative Producer class MAMessageProducerExample:
 
 This class is initialized differently from MessageProducerExample: it initializes its connections through the TubeMultiSessionFactory multi-session factory class. The demo shows how to use the multi-session factory feature, which can raise system throughput through multiple physical connections (TubeMQ uses connection multiplexing to reduce the use of physical connection resources); used appropriately it can improve production performance. The Consumer side can also be initialized through the multi-session factory, but since consumption is a long-running process that occupies few connection resources, this is not recommended for consumption scenarios.
 
diff --git a/en-us/docs/modules/tubemq/configure_introduction.html b/en-us/docs/modules/tubemq/configure_introduction.html
index 51e86b2..ba58d4f 100644
--- a/en-us/docs/modules/tubemq/configure_introduction.html
+++ b/en-us/docs/modules/tubemq/configure_introduction.html
@@ -12,15 +12,15 @@
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>The TubeMQ server includes two modules, the Master and the Broker. The Master also includes a web front-end module for external page access (this part is stored in the resources). Considering actual deployments, where the two modules are often deployed on the same machine, TubeMQ packages the contents of these three parts together and delivers them to operations; the client does not include the lib package of the server part and is delivered to the user separat [...]
 <p>Master and Broker use the ini configuration file format, and the relevant configuration files are placed in the master.ini and broker.ini files in the tubemq-server-3.9.0/conf/ directory:
 <img src="img/configure/conf_ini_pos.png" alt=""></p>
 <p>Their configurations are defined as sets of configuration units. The Master configuration consists of four units: the mandatory [master], [zookeeper] and [bdbStore], plus the optional [tlsSetting]. The Broker configuration consists of three units: the mandatory [broker] and [zookeeper], plus the optional [tlsSetting]. In actual use, you can also combine the contents of the two configuration files into one ini file.</p>
 <p>In addition to the back-end system configuration file, the Master also stores the Web front-end page module in the resources. The root directory velocity.properties file of the resources is the Web front-end page configuration file of the Master.
 <img src="img/configure/conf_velocity_pos.png" alt=""></p>
-<h2>Configuration item details:</h2>
-<h3>master.ini file:</h3>
+<h2>2 Configuration item details:</h2>
+<h3>2.1 master.ini file:</h3>
 <p>[master]</p>
 <blockquote>
 <p>The main configuration unit for running the Master system; required; the value is fixed to &quot;[master]&quot;</p>
@@ -434,7 +434,7 @@
 </tr>
 </tbody>
 </table>
-<h3>velocity.properties file:</h3>
+<h3>2.2 velocity.properties file:</h3>
 <table>
 <thead>
 <tr>
@@ -453,7 +453,7 @@
 </tr>
 </tbody>
 </table>
-<h3>broker.ini file:</h3>
+<h3>2.3 broker.ini file:</h3>
 <p>[broker]</p>
 <blockquote>
 <p>The main configuration unit for running the Broker system; required; the value is fixed to &quot;[broker]&quot;</p>
diff --git a/en-us/docs/modules/tubemq/configure_introduction.json b/en-us/docs/modules/tubemq/configure_introduction.json
index 4bab6ce..1f61539 100644
--- a/en-us/docs/modules/tubemq/configure_introduction.json
+++ b/en-us/docs/modules/tubemq/configure_introduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "configure_introduction.md",
-  "__html": "<h1>TubeMQ configuration item description</h1>\n<p>The TubeMQ server includes two modules for the Master and the Broker. The Master also includes a Web front-end module for external page access (this part is stored in the resources). Considering the actual deployment, two modules are often deployed in the same machine, TubeMQ. The contents of the three parts of the two modules are packaged and delivered to the operation and maintenance; the client does not include the lib pa [...]
+  "__html": "<h2>1 TubeMQ configuration item description</h2>\n<p>The TubeMQ server includes two modules for the Master and the Broker. The Master also includes a Web front-end module for external page access (this part is stored in the resources). Considering the actual deployment, two modules are often deployed in the same machine, TubeMQ. The contents of the three parts of the two modules are packaged and delivered to the operation and maintenance; the client does not include the lib  [...]
   "link": "/en-us/docs/modules/tubemq/configure_introduction.html",
   "meta": {
     "title": "Configure Introduction - Apache InLong's TubeMQ module"
diff --git a/en-us/docs/modules/tubemq/configure_introduction.md b/en-us/docs/modules/tubemq/configure_introduction.md
index 11bc04c..3cb1385 100644
--- a/en-us/docs/modules/tubemq/configure_introduction.md
+++ b/en-us/docs/modules/tubemq/configure_introduction.md
@@ -2,7 +2,7 @@
 title: Configure Introduction - Apache InLong's TubeMQ module
 ---
 
-# TubeMQ configuration item description
+## 1 TubeMQ configuration item description
 
 The TubeMQ server includes two modules, the Master and the Broker. The Master also includes a web front-end module for external page access (this part is stored in the resources). Considering actual deployments, where the two modules are often deployed on the same machine, TubeMQ packages the contents of these three parts together and delivers them to operations; the client does not include the lib package of the server part and is delivered to the user separately.
 
@@ -15,9 +15,9 @@ In addition to the back-end system configuration file, the Master also stores th
 ![](img/configure/conf_velocity_pos.png)
 
 
-## Configuration item details:
+## 2 Configuration item details:
 
-### master.ini file:
+### 2.1 master.ini file:
 [master]
 > The main configuration unit for running the Master system; required; the value is fixed to "[master]"
 
@@ -105,13 +105,13 @@ In addition to the back-end system configuration file, the Master also stores th
 | tlsTrustStorePath     | no       | string  | The absolute storage path of the TLS TrustStore file + the TrustStore file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
 | tlsTrustStorePassword | no       | string  | The absolute storage path of the TLS TrustStorePassword file + the TrustStorePassword file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
 
-### velocity.properties file:
+### 2.2 velocity.properties file:
 
 | Name                      | Required                          | Type                          | Description                                                  |
 | ------------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
 | file.resource.loader.path | yes      | string | The absolute path of the master web template. This part is the absolute path plus /resources/templates of the project when the master is deployed. The configuration is consistent with the actual deployment. If the configuration fails, the master front page access fails. |
 
-### broker.ini file:
+### 2.3 broker.ini file:
 
 [broker]
 >The main configuration unit for running the Broker system; required; the value is fixed to "[broker]"
diff --git a/en-us/docs/modules/tubemq/console_introduction.html b/en-us/docs/modules/tubemq/console_introduction.html
index b0dc799..886c23f 100644
--- a/en-us/docs/modules/tubemq/console_introduction.html
+++ b/en-us/docs/modules/tubemq/console_introduction.html
@@ -12,30 +12,29 @@
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h2>Console overview</h2>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>The TubeMQ console is a simple operations tool for managing a TubeMQ cluster, covering the Masters and Brokers in the cluster, the Topic metadata deployed on the Brokers, and other operational data and actions related to the TubeMQ system. Note that the functionality currently provided by the TubeMQ front-end does not cover everything TubeMQ offers; you can implement a management front-end that fits your business needs by following the definitions in 《TubeMQ HTTP访问接口定义.xls》. The console is accessed at http://portal:webport/config/topic_list.htm:
 <img src="img/console/1568169770714.png" alt="">
 where portal is the IP address of any active or standby Master in the cluster, and webport is the configured web port of the Master.</p>
-<h2>Introduction to the TubeMQ console pages</h2>
+<h2>2 Introduction to the TubeMQ console pages</h2>
 <p>The console has 3 sections in total: distribution query, configuration management, and cluster management; configuration management is further divided into a Broker list and a Topic list. We first introduce the simpler distribution query and cluster management, and then the more complex configuration management.</p>
-<h3>Distribution query</h3>
+<h3>2.1 Distribution query</h3>
 <p>Clicking distribution query shows the following list: the consumer groups currently registered in the TubeMQ cluster, including each group's name, the Topic it consumes, and summary information on the group's total number of consumed partitions, as shown below:
 <img src="img/console/1568169796122.png" alt="">
 Clicking a record shows the consumer members of the selected group and the Broker and Partition information they consume, as shown below:
 <img src="img/console/1568169806810.png" alt=""></p>
 <p>This page supports queries: enter a Topic or a consumer group name to quickly find out which consumer groups are consuming the Topic and what each group's consumption targets are.</p>
-<h3>Cluster management</h3>
+<h3>2.2 Cluster management</h3>
 <p>Cluster management mainly manages the Master's HA. On this page we can see the Master's nodes and their status, and we can change a node's active/standby role through the "switch" operation.
 <img src="img/console/1568169823675.png" alt=""></p>
-<h3>Configuration management</h3>
+<h3>2.3 Configuration management</h3>
 <p>The configuration management page covers both the management of Broker and Topic metadata and the online publishing and offline operations of Brokers and Topics, so it carries two layers of meaning. For example, the Broker list shows the Broker metadata configured in the current cluster, including records for Brokers still in draft state (not yet online), already online, or already offline:
 <img src="img/console/1568169839931.png" alt=""></p>
 <p>As the page shows, besides the Broker record information there is also the Broker's management information in this cluster, including whether it is online, whether a command is being processed, whether it is readable or writable, whether the configuration has changed, and whether the changed configuration has been loaded.</p>
 <p>Clicking single add pops up the following dialog, which represents the metadata of the Broker to be added, including BrokerID, BrokerIP, BrokerPort, and the default configuration of the Topics deployed on the Broker; see 《TubeMQ HTTP访问接口定义.xls》 for details of the related fields
 <img src="img/console/1568169851085.png" alt=""></p>
 <p>All change operations on the TubeMQ console require entering an operation authorization code, which operations staff define in the confModAuthToken field of the Master's configuration file master.ini. Anyone who knows this cluster's password is treated as authorized to perform the operation, whether an administrator, an authorized person, or someone able to log in to the Master machine and obtain the password.</p>
-<h2>Operations on the TubeMQ console and precautions</h2>
+<h2>3 Operations on the TubeMQ console and precautions</h2>
 <p>As mentioned above, the TubeMQ console operates the TubeMQ cluster, and the suite is responsible for managing TubeMQ cluster nodes such as the Masters and Brokers, including automated deployment and installation, so note the following points:</p>
 <p>1. <strong>When scaling the TubeMQ cluster up or down by adding or removing Broker nodes, perform the corresponding add/online and offline/delete operations on the TubeMQ console first, and only then add or remove the corresponding Broker nodes in the physical environment</strong>:</p>
 <p>TubeMQ manages Brokers with a state machine that, as shown above, involves the states [draft, online, read-only, write-only, offline]. A record that has been added but is not yet effective is in the draft state and becomes online once confirmed for launch; deleting a node first requires moving it from online to offline, after which the delete operation clears the node's record kept in the system. draft, online and offline distinguish the stage each node is in, and the Master distributes only online Brokers to the corresponding producers and consumers for production and consumption; read-only and write-only are sub-states of online, meaning the data on the Broker can only be read or only be written. See the page details for the related states and operations; adding one record makes the relationships clear. After adding these records on the TubeMQ console, we can deploy and start the Broker nodes; at that point the cluster environment page shows each node's running status, and if a node shows unregister, as below, its registration failed and you need to check the logs on that Broker node to confirm the reason. This part is quite mature by now, and the error messages will [...]
@@ -52,8 +51,8 @@
 <p>Only after the reload completes can the Topic be used externally; we can see that the configuration-changed flags below have changed state after the reload completes:
 <img src="img/console/1568169916091.png" alt=""></p>
 <p>At this point we can produce to and consume from this Topic.</p>
-<h2>3. Precautions after changing a Topic's metadata:</h2>
-<p><strong>a. How to configure Topic parameters yourself:</strong></p>
+<h2>4 Precautions after changing a Topic's metadata:</h2>
+<h3>4.1 How to configure Topic parameters yourself:</h3>
 <p>After clicking any Topic in the Topic list, the following dialog pops up with the Topic's metadata, which determines how many partitions the Topic has on this Broker, its current read/write state, the data flush frequency, the data aging period and time, and so on:
 <img src="img/console/1568169925657.png" alt=""></p>
 <p>These values are defined by the system administrator with preset defaults and usually do not change. What if the business has special needs, for example more partitions for higher consumption parallelism, or a lower flush frequency? The fields on each page, with their meanings and effects, are listed in the following table:</p>
@@ -170,10 +169,10 @@
 <p>Its purpose is: a. select the set of Broker nodes involved in the Topic metadata change; b. provide the authorization code for the change operation.</p>
 <p><strong>Special reminder: note also that after entering the authorization code and making changes, the data change only takes effect after a reload, and the affected Brokers should be operated on proportionally, in batches.</strong>
 <img src="img/console/1568169954746.png" alt=""></p>
-<p><strong>b. Precautions for Topic changes:</strong></p>
+<h3>4.2 Precautions for Topic changes:</h3>
 <p>As shown above, after choosing to change Topic metadata, the previously selected Brokers show Yes under <strong>configuration changed</strong>. We still need to reload the changed configuration: select the Broker set and run the reload operation, in batches or one at a time, but be sure to proceed batch by batch, moving to the next batch of configuration reloads only after the Brokers in the previous batch are back in the running state; if a node stays online but does not enter running for a long time (by default up to 2 minutes), stop reloading, investigate the cause, and only then continue.</p>
 <p>The reason for operating in batches is that during a change the system stops reads and writes on the specified Brokers; reloading all Brokers at once would obviously leave the whole cluster unreadable or unwritable for a time, causing avoidable access exceptions.</p>
-<p><strong>c. Deleting a Topic:</strong></p>
+<h3>4.3 Deleting a Topic:</h3>
 <p>Deletion performed on the page is a soft delete; to remove the topic completely, a hard delete must be performed through the API (this avoids accidental operations by the business).</p>
 <p>Once the above is done, the Topic metadata change is complete.</p>
 <hr>
diff --git a/en-us/docs/modules/tubemq/console_introduction.json b/en-us/docs/modules/tubemq/console_introduction.json
index 774674f..b874858 100644
--- a/en-us/docs/modules/tubemq/console_introduction.json
+++ b/en-us/docs/modules/tubemq/console_introduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "console_introduction.md",
-  "__html": "<h1>TubeMQ管控台操作指引</h1>\n<h2>管控台关系</h2>\n<p>​        TubeMQ管控台是管理TubeMQ集群的简单运营工具,包括集群里的Master、Broker,以及Broker上部署的Topic元数据等与TubeMQ系统相关的运营数据及操作。需要说明的是,当前提供的TubeMQ前台所提供的功能没有涵盖TubeMQ所提供的功能范围,大家可以参照《TubeMQ HTTP访问接口定义.xls》定义自行实现符合业务需要的管控前台。TubeMQ管控台的访问地址为http://portal:webport/config/topic_list.htm:\n<img src=\"img/console/1568169770714.png\" alt=\"\">\n​       其中portal为该集群中任意的主、备Master的IP地址,webport为配置的Master的Web端口。</p>\n<h2>TubeMQ管控台各版面介绍</h2>\n<p>​        管控台一共3项内容:分发查询,配置管理,集群管理; [...]
+  "__html": "<h2>1 管控台关系</h2>\n<p>​        TubeMQ管控台是管理TubeMQ集群的简单运营工具,包括集群里的Master、Broker,以及Broker上部署的Topic元数据等与TubeMQ系统相关的运营数据及操作。需要说明的是,当前提供的TubeMQ前台所提供的功能没有涵盖TubeMQ所提供的功能范围,大家可以参照《TubeMQ HTTP访问接口定义.xls》定义自行实现符合业务需要的管控前台。TubeMQ管控台的访问地址为http://portal:webport/config/topic_list.htm:\n<img src=\"img/console/1568169770714.png\" alt=\"\">\n​       其中portal为该集群中任意的主、备Master的IP地址,webport为配置的Master的Web端口。</p>\n<h2>2 TubeMQ管控台各版面介绍</h2>\n<p>​        管控台一共3项内容:分发查询,配置管理,集群管理;配置管理又分为Broker列表,Topi [...]
   "link": "/en-us/docs/modules/tubemq/console_introduction.html",
   "meta": {
     "title": "Console Introduction - Apache InLong's TubeMQ module"
diff --git a/en-us/docs/modules/tubemq/console_introduction.md b/en-us/docs/modules/tubemq/console_introduction.md
index d05e9ce..655e01c 100644
--- a/en-us/docs/modules/tubemq/console_introduction.md
+++ b/en-us/docs/modules/tubemq/console_introduction.md
@@ -2,20 +2,18 @@
 title: Console Introduction - Apache InLong's TubeMQ module
 ---
 
-# TubeMQ Console Operation Guide
-
-## Console overview
+## 1 Console overview
 
 The TubeMQ console is a simple operations tool for managing a TubeMQ cluster, covering the Masters and Brokers in the cluster, the Topic metadata deployed on the Brokers, and other operational data and actions related to the TubeMQ system. Note that the functionality currently provided by the TubeMQ front-end does not cover everything TubeMQ offers; you can implement a management front-end that fits your business needs by following the definitions in 《TubeMQ HTTP访问接口定义.xls》. The console is accessed at http://portal:webport/config/topic_list.htm:
 ![](img/console/1568169770714.png)
 where portal is the IP address of any active or standby Master in the cluster, and webport is the configured web port of the Master.
 
 
-## Introduction to the TubeMQ console pages
+## 2 Introduction to the TubeMQ console pages
 
 The console has 3 sections in total: distribution query, configuration management, and cluster management; configuration management is further divided into a Broker list and a Topic list. We first introduce the simpler distribution query and cluster management, and then the more complex configuration management.
 
-### Distribution query
+### 2.1 Distribution query
 
 Clicking distribution query shows the following list: the consumer groups currently registered in the TubeMQ cluster, including each group's name, the Topic it consumes, and summary information on the group's total number of consumed partitions, as shown below:
 ![](img/console/1568169796122.png)
@@ -24,12 +22,12 @@ title: Console Introduction - Apache InLong's TubeMQ module
 
 This page supports queries: enter a Topic or a consumer group name to quickly find out which consumer groups are consuming the Topic and what each group's consumption targets are.
 
-### Cluster management
+### 2.2 Cluster management
 
 Cluster management mainly manages the Master's HA. On this page we can see the Master's nodes and their status, and we can change a node's active/standby role through the "switch" operation.
 ![](img/console/1568169823675.png)
 
-### Configuration management
+### 2.3 Configuration management
 
 The configuration management page covers both the management of Broker and Topic metadata and the online publishing and offline operations of Brokers and Topics, so it carries two layers of meaning. For example, the Broker list shows the Broker metadata configured in the current cluster, including records for Brokers still in draft state (not yet online), already online, or already offline:
 ![](img/console/1568169839931.png)
@@ -41,7 +39,7 @@ title: Console Introduction - Apache InLong's TubeMQ module
 
 All change operations on the TubeMQ console require entering an operation authorization code, which operations staff define in the confModAuthToken field of the Master's configuration file master.ini. Anyone who knows this cluster's password is treated as authorized to perform the operation, whether an administrator, an authorized person, or someone able to log in to the Master machine and obtain the password.
 
-## Operations on the TubeMQ console and precautions
+## 3 Operations on the TubeMQ console and precautions
 
 As mentioned above, the TubeMQ console operates the TubeMQ cluster, and the suite is responsible for managing TubeMQ cluster nodes such as the Masters and Brokers, including automated deployment and installation, so note the following points:
 
@@ -68,9 +66,9 @@ title: Console Introduction - Apache InLong's TubeMQ module
 
 At this point we can produce to and consume from this Topic.
 
-## 3. Precautions after changing a Topic's metadata:
+## 4 Precautions after changing a Topic's metadata:
 
-**a. How to configure Topic parameters yourself:**
+### 4.1 How to configure Topic parameters yourself:
 
 After clicking any Topic in the Topic list, the following dialog pops up with the Topic's metadata, which determines how many partitions the Topic has on this Broker, its current read/write state, the data flush frequency, the data aging period and time, and so on:
 ![](img/console/1568169925657.png)
@@ -104,13 +102,13 @@ title: Console Introduction - Apache InLong's TubeMQ module
 **Special reminder: note also that after entering the authorization code and making changes, the data change only takes effect after a reload, and the affected Brokers should be operated on proportionally, in batches.**
 ![](img/console/1568169954746.png)
 
-**b. Precautions for Topic changes:**
+### 4.2 Precautions for Topic changes:
 
 As shown above, after choosing to change Topic metadata, the previously selected Brokers show Yes under **configuration changed**. We still need to reload the changed configuration: select the Broker set and run the reload operation, in batches or one at a time, but be sure to proceed batch by batch, moving to the next batch of configuration reloads only after the Brokers in the previous batch are back in the running state; if a node stays online but does not enter running for a long time (by default up to 2 minutes), stop reloading, investigate the cause, and only then continue.
 
 The reason for operating in batches is that during a change the system stops reads and writes on the specified Brokers; reloading all Brokers at once would obviously leave the whole cluster unreadable or unwritable for a time, causing avoidable access exceptions.
 
-**c. Deleting a Topic:**
+### 4.3 Deleting a Topic:
 
 Deletion performed on the page is a soft delete; to remove the topic completely, a hard delete must be performed through the API (this avoids accidental operations by the business).
 
diff --git a/en-us/docs/modules/tubemq/consumer_example.html b/en-us/docs/modules/tubemq/consumer_example.html
index e993d37..06d053d 100644
--- a/en-us/docs/modules/tubemq/consumer_example.html
+++ b/en-us/docs/modules/tubemq/consumer_example.html
@@ -12,81 +12,79 @@
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>TubeMQ provides two ways to consume messages, PullConsumer and PushConsumer:</p>
-<ol>
-<li>
-<p>PullConsumer</p>
-<pre><code class="language-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">PullConsumerExample</span> </span>{
+<h3>1.1 PullConsumer</h3>
+<pre><code>```java
+public class PullConsumerExample {
 
-    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Throwable </span>{
-        <span class="hljs-keyword">final</span> String masterHostAndPort = <span class="hljs-string">"localhost:8000"</span>;
-        <span class="hljs-keyword">final</span> String topic = <span class="hljs-string">"test"</span>;
-        <span class="hljs-keyword">final</span> String group = <span class="hljs-string">"test-group"</span>;
-        <span class="hljs-keyword">final</span> ConsumerConfig consumerConfig = <span class="hljs-keyword">new</span> ConsumerConfig(masterHostAndPort, group);
+    public static void main(String[] args) throws Throwable {
+        final String masterHostAndPort = &quot;localhost:8000&quot;;
+        final String topic = &quot;test&quot;;
+        final String group = &quot;test-group&quot;;
+        final ConsumerConfig consumerConfig = new ConsumerConfig(masterHostAndPort, group);
         consumerConfig.setConsumePosition(ConsumePosition.CONSUMER_FROM_LATEST_OFFSET);
-        <span class="hljs-keyword">final</span> MessageSessionFactory messageSessionFactory = <span class="hljs-keyword">new</span> TubeSingleSessionFactory(consumerConfig);
-        <span class="hljs-keyword">final</span> PullMessageConsumer messagePullConsumer = messageSessionFactory.createPullConsumer(consumerConfig);
-        messagePullConsumer.subscribe(topic, <span class="hljs-keyword">null</span>);
+        final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+        final PullMessageConsumer messagePullConsumer = messageSessionFactory.createPullConsumer(consumerConfig);
+        messagePullConsumer.subscribe(topic, null);
         messagePullConsumer.completeSubscribe();
-        <span class="hljs-comment">// wait for client to join the exact consumer queue that consumer group allocated</span>
-        <span class="hljs-keyword">while</span> (!messagePullConsumer.isPartitionsReady(<span class="hljs-number">1000</span>)) {
-            ThreadUtils.sleep(<span class="hljs-number">1000</span>);
+        // wait for client to join the exact consumer queue that consumer group allocated
+        while (!messagePullConsumer.isPartitionsReady(1000)) {
+            ThreadUtils.sleep(1000);
         }
-        <span class="hljs-keyword">while</span> (<span class="hljs-keyword">true</span>) {
+        while (true) {
             ConsumerResult result = messagePullConsumer.getMessage();
-            <span class="hljs-keyword">if</span> (result.isSuccess()) {
+            if (result.isSuccess()) {
                 List&lt;Message&gt; messageList = result.getMessageList();
-                <span class="hljs-keyword">for</span> (Message message : messageList) {
-                    System.out.println(<span class="hljs-string">"received message : "</span> + message);
+                for (Message message : messageList) {
+                    System.out.println(&quot;received message : &quot; + message);
                 }
-                messagePullConsumer.confirmConsume(result.getConfirmContext(), <span class="hljs-keyword">true</span>);
+                messagePullConsumer.confirmConsume(result.getConfirmContext(), true);
             }
         }
     }   
 
 }
+``` 
 </code></pre>
-</li>
-<li>
-<p>PushConsumer</p>
-<pre><code class="language-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">PushConsumerExample</span> </span>{
+<h3>1.2 PushConsumer</h3>
+<pre><code>```java
+public class PushConsumerExample {
 
-    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">test</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Throwable </span>{
-        <span class="hljs-keyword">final</span> String masterHostAndPort = <span class="hljs-string">"localhost:8000"</span>;
-        <span class="hljs-keyword">final</span> String topic = <span class="hljs-string">"test"</span>;
-        <span class="hljs-keyword">final</span> String group = <span class="hljs-string">"test-group"</span>;
-        <span class="hljs-keyword">final</span> ConsumerConfig consumerConfig = <span class="hljs-keyword">new</span> ConsumerConfig(masterHostAndPort, group);
+    public static void test(String[] args) throws Throwable {
+        final String masterHostAndPort = &quot;localhost:8000&quot;;
+        final String topic = &quot;test&quot;;
+        final String group = &quot;test-group&quot;;
+        final ConsumerConfig consumerConfig = new ConsumerConfig(masterHostAndPort, group);
         consumerConfig.setConsumePosition(ConsumePosition.CONSUMER_FROM_LATEST_OFFSET);
-        <span class="hljs-keyword">final</span> MessageSessionFactory messageSessionFactory = <span class="hljs-keyword">new</span> TubeSingleSessionFactory(consumerConfig);
-        <span class="hljs-keyword">final</span> PushMessageConsumer pushConsumer = messageSessionFactory.createPushConsumer(consumerConfig);
-        pushConsumer.subscribe(topic, <span class="hljs-keyword">null</span>, <span class="hljs-keyword">new</span> MessageListener() {
+        final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+        final PushMessageConsumer pushConsumer = messageSessionFactory.createPushConsumer(consumerConfig);
+        pushConsumer.subscribe(topic, null, new MessageListener() {
 
-            <span class="hljs-meta">@Override</span>
-            <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">receiveMessages</span><span class="hljs-params">(PeerInfo peerInfo, List&lt;Message&gt; messages)</span> <span class="hljs-keyword">throws</span> InterruptedException </span>{
-                <span class="hljs-keyword">for</span> (Message message : messages) {
-                    System.out.println(<span class="hljs-string">"received message : "</span> + <span class="hljs-keyword">new</span> String(message.getData()));
+            @Override
+            public void receiveMessages(PeerInfo peerInfo, List&lt;Message&gt; messages) throws InterruptedException {
+                for (Message message : messages) {
+                    System.out.println(&quot;received message : &quot; + new String(message.getData()));
                 }
             }
 
-            <span class="hljs-meta">@Override</span>
-            <span class="hljs-function"><span class="hljs-keyword">public</span> Executor <span class="hljs-title">getExecutor</span><span class="hljs-params">()</span> </span>{
-                <span class="hljs-keyword">return</span> <span class="hljs-keyword">null</span>;
+            @Override
+            public Executor getExecutor() {
+                return null;
             }
 
-            <span class="hljs-meta">@Override</span>
-            <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">stop</span><span class="hljs-params">()</span> </span>{
-                <span class="hljs-comment">//</span>
+            @Override
+            public void stop() {
+                //
             }
         });
         pushConsumer.completeSubscribe();
-        CountDownLatch latch = <span class="hljs-keyword">new</span> CountDownLatch(<span class="hljs-number">1</span>);
-        latch.await(<span class="hljs-number">10</span>, TimeUnit.MINUTES);
+        CountDownLatch latch = new CountDownLatch(1);
+        latch.await(10, TimeUnit.MINUTES);
     }
 }
+```
 </code></pre>
-</li>
-</ol>
 </div></section><footer class="footer-container"><div class="footer-body"><img src="/img/incubator-logo.svg"/><div class="cols-container"><div class="col col-24"><p>Apache InLong (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with  [...]
 	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
 	<script src="https://f.alicdn.com/react/15.4.1/react-dom.min.js"></script>
diff --git a/en-us/docs/modules/tubemq/consumer_example.json b/en-us/docs/modules/tubemq/consumer_example.json
index 486a057..66c209e 100644
--- a/en-us/docs/modules/tubemq/consumer_example.json
+++ b/en-us/docs/modules/tubemq/consumer_example.json
@@ -1,6 +1,6 @@
 {
   "filename": "consumer_example.md",
-  "__html": "<h2>Consumer Example</h2>\n<p>TubeMQ provides two ways to consumer message, PullConsumer and PushConsumer:</p>\n<ol>\n<li>\n<p>PullConsumer</p>\n<pre><code class=\"language-java\"><span class=\"hljs-keyword\">public</span> <span class=\"hljs-class\"><span class=\"hljs-keyword\">class</span> <span class=\"hljs-title\">PullConsumerExample</span> </span>{\n\n    <span class=\"hljs-function\"><span class=\"hljs-keyword\">public</span> <span class=\"hljs-keyword\">static</span> < [...]
+  "__html": "<h2>1 Consumer Example</h2>\n<p>TubeMQ provides two ways to consumer message, PullConsumer and PushConsumer:</p>\n<h3>1.1 PullConsumer</h3>\n<pre><code>```java\npublic class PullConsumerExample {\n\n    public static void main(String[] args) throws Throwable {\n        final String masterHostAndPort = &quot;localhost:8000&quot;;\n        final String topic = &quot;test&quot;;\n        final String group = &quot;test-group&quot;;\n        final ConsumerConfig consumerConfig = [...]
   "link": "/en-us/docs/modules/tubemq/consumer_example.html",
   "meta": {
     "title": "Consumer Example - Apache InLong's TubeMQ module"
diff --git a/en-us/docs/modules/tubemq/consumer_example.md b/en-us/docs/modules/tubemq/consumer_example.md
index c59a24b..cc32a2b 100644
--- a/en-us/docs/modules/tubemq/consumer_example.md
+++ b/en-us/docs/modules/tubemq/consumer_example.md
@@ -2,10 +2,10 @@
 title: Consumer Example - Apache InLong's TubeMQ module
 ---
 
-## Consumer Example
+## 1 Consumer Example
   TubeMQ provides two ways to consume messages, PullConsumer and PushConsumer:
 
-1. PullConsumer 
+### 1.1 PullConsumer 
     ```java
     public class PullConsumerExample {
 
@@ -38,7 +38,7 @@ title: Consumer Example - Apache InLong's TubeMQ module
     }
     ``` 
    
-2. PushConsumer
+### 1.2 PushConsumer
     ```java
     public class PushConsumerExample {
    
diff --git a/en-us/docs/modules/tubemq/deployment.html b/en-us/docs/modules/tubemq/deployment.html
index 7b93eff..816b0a7 100644
--- a/en-us/docs/modules/tubemq/deployment.html
+++ b/en-us/docs/modules/tubemq/deployment.html
@@ -12,20 +12,19 @@
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h2>Compile and Package Project:</h2>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
<p>Enter the root directory of the project and run:</p>
 <pre><code>mvn clean package -Dmaven.test.skip
 </code></pre>
<p>For example, we put the TubeMQ project package at <code>E:/</code> and then run the above command. Compilation is complete when all subdirectories compile successfully.</p>
 <p><img src="img/sysdeployment/sys_compile.png" alt=""></p>
<p>We can also compile each subdirectory individually; the steps are the same as for the whole project.</p>
-<p><strong>Server Deployment</strong></p>
+<h2>2 Server Deployment</h2>
 <p>As example above, entry directory <code>..\InLong\inlong-tubemq\tubemq-server\target</code>, we can see several JARs. <code>apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz</code> is the complete server-side installation package, including execution scripts, configuration files, dependencies, and frontend source code. <code>apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT.jar</code> is a server-side processing package included in <code>lib</code> of the complete p [...]
 <p><img src="img/sysdeployment/sys_package.png" alt=""></p>
<p>Here we deploy the complete package onto the server and place it in <code>/data/inlong</code>.</p>
 <p><img src="img/sysdeployment/sys_package_list.png" alt=""></p>
-<p><strong>Configuration System</strong></p>
+<h2>3 Configuration System</h2>
<p>There are 3 roles in the server package: Master, Broker and Tools. Master and Broker can be deployed on the same machine or on different machines, depending on the business layout. In the example below, we use 3 machines to start up a complete production and consumption cluster with 2 Masters.</p>
 <table>
 <thead>
@@ -116,14 +115,14 @@
 <p>then it is <code>9.23.28.24</code>.</p>
 <p><img src="img/sysdeployment/sys_configure_2.png" alt=""></p>
<p>Note that the upper right corner holds the Master's web frontend configuration; the <code>file.resource.loader.path</code> setting in <code>/resources/velocity.properties</code> needs to be modified according to the Master's installation path.</p>
-<p><strong>Start up Master</strong>:</p>
+<h2>4 Start up Master:</h2>
<p>After configuration, enter the <code>bin</code> directory of the Master environment and start up the master.</p>
 <p><img src="img/sysdeployment/sys_master_start.png" alt=""></p>
<p>We first start up the Master on <code>9.23.27.24</code>, and then the Master on <code>9.23.28.24</code>. The following messages indicate that the master and backup master have been successfully started up and the external service ports are reachable.</p>
 <p><img src="img/sysdeployment/sys_master_startted.png" alt=""></p>
<p>Visit the Master's administrator panel (<a href="http://9.23.27.24:8080">http://9.23.27.24:8080</a>); if the search operation works well, the master has been successfully started up.</p>
 <p><img src="img/sysdeployment/sys_master_console.png" alt=""></p>
-<p><strong>Start up Broker</strong>:</p>
+<h2>5 Start up Broker:</h2>
<p>Starting up a Broker is a little different from starting the Master: the Master manages the entire TubeMQ cluster, including the Broker nodes and the Topic configuration on them, as well as production and consumption management. So we need to add the Broker's metadata on the Master before starting it up.</p>
 <p><img src="img/sysdeployment/sys_broker_configure.png" alt=""></p>
<p>Confirm and create a draft record for the Broker.</p>
@@ -141,7 +140,7 @@
 <p><img src="img/sysdeployment/sys_broker_restart_2.png" alt=""></p>
<p>Check the Master control panel; the broker has been successfully registered.</p>
 <p><img src="img/sysdeployment/sys_broker_finished.png" alt=""></p>
-<p><strong>Topic Configuration and Activation</strong>:</p>
+<h2>6 Topic Configuration and Activation:</h2>
<p>The configuration of Topics is similar to the Brokers': we should add their metadata on the Master before using them, otherwise a Not Found error is reported during production or consumption. For example, if we try to consume a non-existent topic <code>test</code>,
 <img src="img/sysdeployment/test_sendmessage.png" alt=""></p>
<p>The demo returns an error message.
@@ -154,7 +153,7 @@
<p>The topic is available after the reload. We can see that some of the topic's status has changed after the reload.</p>
 <p><img src="img/sysdeployment/sys_topic_finished.png" alt=""></p>
<p><strong>Note</strong> When executing the reload operation, we should do it in batches. Reload operations are controlled by a state machine: before being published, a topic becomes unwritable and unreadable, then read-only, then readable and writable, in that order. Reloading all brokers at once makes topics temporarily unreadable and unwritable, which results in production and consumption failures, especially production failures.</p>
-<p><strong>Message Production and Consumption</strong>:</p>
+<h2>7 Message Production and Consumption:</h2>
<p>We packed a demo for testing in the package; alternatively, <code>tubemq-client-0.9.0-incubating-SNAPSHOT.jar</code> can be used to implement your own production and consumption.
We run the producer demo with the script below, and we can see the data accepted on the Broker.
 <img src="img/sysdeployment/test_sendmessage_2.png" alt=""></p>
diff --git a/en-us/docs/modules/tubemq/deployment.json b/en-us/docs/modules/tubemq/deployment.json
index 289f97c..7187c71 100644
--- a/en-us/docs/modules/tubemq/deployment.json
+++ b/en-us/docs/modules/tubemq/deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "deployment.md",
-  "__html": "<h1>Compile, Deploy and Examples of TubeMQ :</h1>\n<h2>Compile and Package Project:</h2>\n<p>Enter the root directory of project and run:</p>\n<pre><code>mvn clean package -Dmaven.test.skip\n</code></pre>\n<p>e.g. We put the TubeMQ project package at <code>E:/</code>, then run the above command. Compilation is complete when all subdirectories are compiled successfully.</p>\n<p><img src=\"img/sysdeployment/sys_compile.png\" alt=\"\"></p>\n<p>We can also run individual compila [...]
+  "__html": "<h2>1 Compile and Package Project:</h2>\n<p>Enter the root directory of project and run:</p>\n<pre><code>mvn clean package -Dmaven.test.skip\n</code></pre>\n<p>e.g. We put the TubeMQ project package at <code>E:/</code>, then run the above command. Compilation is complete when all subdirectories are compiled successfully.</p>\n<p><img src=\"img/sysdeployment/sys_compile.png\" alt=\"\"></p>\n<p>We can also run individual compilation in each subdirectory. Steps are the same as  [...]
   "link": "/en-us/docs/modules/tubemq/deployment.html",
   "meta": {
     "title": "Deployment - Apache InLong's TubeMQ Module"
diff --git a/en-us/docs/modules/tubemq/deployment.md b/en-us/docs/modules/tubemq/deployment.md
index 9c61464..5ea4b64 100644
--- a/en-us/docs/modules/tubemq/deployment.md
+++ b/en-us/docs/modules/tubemq/deployment.md
@@ -2,9 +2,7 @@
 title: Deployment - Apache InLong's TubeMQ Module
 ---
 
-# Compile, Deploy and Examples of TubeMQ :
-
-## Compile and Package Project:
+## 1 Compile and Package Project:
 
 Enter the root directory of the project and run:
 
@@ -18,7 +16,7 @@ e.g. We put the TubeMQ project package at `E:/`, then run the above command. Com
 
 We can also compile each subdirectory individually; the steps are the same as for the whole project.
 
-**Server Deployment**
+## 2 Server Deployment
 
 As example above, entry directory `..\InLong\inlong-tubemq\tubemq-server\target`, we can see several JARs. `apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz` is the complete server-side installation package, including execution scripts, configuration files, dependencies, and frontend source code. `apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT.jar` is a server-side processing package included in `lib` of the complete project installer. Consider to daily changes and [...]
 
@@ -30,7 +28,7 @@ Here we have a complete package deployed onto server and we place it in `/data/i
 ![](img/sysdeployment/sys_package_list.png)
 
 
-**Configuration System**
+## 3 Configuration System
 
 There are 3 roles in the server package: Master, Broker and Tools. Master and Broker can be deployed on the same machine or on different machines, depending on the business layout. In the example below, we use 3 machines to start up a complete production and consumption cluster with 2 Masters.
 
@@ -62,7 +60,7 @@ then it is `9.23.28.24`.
 
 Note that the upper right corner holds the Master's web frontend configuration; the `file.resource.loader.path` setting in `/resources/velocity.properties` needs to be modified according to the Master's installation path.
 
-**Start up Master**:
+## 4 Start up Master:
 
 After configuration, enter the `bin` directory of the Master environment and start up the master.
 
@@ -76,7 +74,7 @@ Visiting Master's Administrator panel([http://9.23.27.24:8080](http://9.23.27.24
 
 ![](img/sysdeployment/sys_master_console.png)
 
-**Start up Broker**:
+## 5 Start up Broker:
 
 Starting up a Broker is a little different from starting the Master: the Master manages the entire TubeMQ cluster, including the Broker nodes and the Topic configuration on them, as well as production and consumption management. So we need to add the Broker's metadata on the Master before starting it up.
 
@@ -114,7 +112,7 @@ Check the Master Control Panel, broker has successfully registered.
 ![](img/sysdeployment/sys_broker_finished.png)
 
 
-**Topic Configuration and Activation**:
+## 6 Topic Configuration and Activation:
 
 The configuration of Topics is similar to the Brokers': we should add their metadata on the Master before using them, otherwise a Not Found error is reported during production or consumption. For example, if we try to consume a non-existent topic `test`,
 ![](img/sysdeployment/test_sendmessage.png)
@@ -139,7 +137,7 @@ Topic is available after overload. We can see some status of topic has changed a
 
 **Note** When executing the reload operation, we should do it in batches. Reload operations are controlled by a state machine: before being published, a topic becomes unwritable and unreadable, then read-only, then readable and writable, in that order. Reloading all brokers at once makes topics temporarily unreadable and unwritable, which results in production and consumption failures, especially production failures.
 
-**Message Production and Consumption**:
+## 7 Message Production and Consumption:
 
 We packed a demo for testing in the package; alternatively, `tubemq-client-0.9.0-incubating-SNAPSHOT.jar` can be used to implement your own production and consumption.
 We run the producer demo with the script below, and we can see the data accepted on the Broker.
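
As a companion to the demo scripts, the smallest possible synchronous send with the client JAR looks roughly like the sketch below. It assumes the client API shown in the producer example section (`TubeClientConfig`, `TubeSingleSessionFactory`, `MessageProducer`); the master address and topic are placeholders for your own deployment.

```java
import org.apache.tubemq.client.config.TubeClientConfig;
import org.apache.tubemq.client.factory.MessageSessionFactory;
import org.apache.tubemq.client.factory.TubeSingleSessionFactory;
import org.apache.tubemq.client.producer.MessageProducer;
import org.apache.tubemq.client.producer.MessageSentResult;
import org.apache.tubemq.corebase.Message;

public class SendOneMessage {

    public static void main(String[] args) throws Throwable {
        // Placeholder master address; the topic must already be configured.
        final TubeClientConfig clientConfig = new TubeClientConfig("localhost:8000");
        final MessageSessionFactory sessionFactory =
                new TubeSingleSessionFactory(clientConfig);
        final MessageProducer producer = sessionFactory.createProducer();
        producer.publish("test");

        Message message = new Message("test", "hello tubemq".getBytes("UTF-8"));
        MessageSentResult result = producer.sendMessage(message);
        System.out.println(result.isSuccess()
                ? "sent ok" : "send failed: " + result.getErrMsg());

        producer.shutdown();
        sessionFactory.shutdown();
    }
}
```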
diff --git a/en-us/docs/modules/tubemq/error_code.html b/en-us/docs/modules/tubemq/error_code.html
index 1626f3e..70d23aa 100644
--- a/en-us/docs/modules/tubemq/error_code.html
+++ b/en-us/docs/modules/tubemq/error_code.html
@@ -12,11 +12,11 @@
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
<p>TubeMQ uses <code>errCode</code> and <code>errMsg</code> combined to return the result of a specific operation.
First, determine the type of result (or problem) by errCode, and then determine the specific reason based on errMsg.
The following table summarizes all the errCodes and errMsgs that may be returned during operation.</p>
-<h2>errCodes</h2>
+<h2>2 errCodes</h2>
 <table>
 <thead>
 <tr>
@@ -184,7 +184,7 @@ The following table summarizes all the errCodes and errMsgs that may return duri
 </tr>
 </tbody>
 </table>
-<h2>Common errMsgs</h2>
+<h2>3 Common errMsgs</h2>
 <table>
 <thead>
 <tr>
diff --git a/en-us/docs/modules/tubemq/error_code.json b/en-us/docs/modules/tubemq/error_code.json
index 359534c..f5b9f69 100644
--- a/en-us/docs/modules/tubemq/error_code.json
+++ b/en-us/docs/modules/tubemq/error_code.json
@@ -1,6 +1,6 @@
 {
   "filename": "error_code.md",
-  "__html": "<h1>Introduction of TubeMQ Error</h1>\n<p>​        TubeMQ use <code>errCode</code> and <code>errMsg</code> combined to return specific operation result.\nFirstly, determine the type of result(problem) by errCode, and then determine the specific reson of the errCode based on errMsg.\nThe following table summarizes all the errCodes and errMsgs that may return during operation.</p>\n<h2>errCodes</h2>\n<table>\n<thead>\n<tr>\n<th>Error Type</th>\n<th>errCode</th>\n<th>Error Mark [...]
+  "__html": "<h2>1 Introduction of TubeMQ Error</h2>\n<p>​        TubeMQ use <code>errCode</code> and <code>errMsg</code> combined to return specific operation result.\nFirstly, determine the type of result(problem) by errCode, and then determine the specific reson of the errCode based on errMsg.\nThe following table summarizes all the errCodes and errMsgs that may return during operation.</p>\n<h2>2 errCodes</h2>\n<table>\n<thead>\n<tr>\n<th>Error Type</th>\n<th>errCode</th>\n<th>Error  [...]
   "link": "/en-us/docs/modules/tubemq/error_code.html",
   "meta": {
     "title": "Error Code - Apache InLong's TubeMQ module"
diff --git a/en-us/docs/modules/tubemq/error_code.md b/en-us/docs/modules/tubemq/error_code.md
index 00234bb..ec7591f 100644
--- a/en-us/docs/modules/tubemq/error_code.md
+++ b/en-us/docs/modules/tubemq/error_code.md
@@ -2,13 +2,13 @@
 title: Error Code - Apache InLong's TubeMQ module
 ---
 
-# Introduction of TubeMQ Error
+## 1 Introduction of TubeMQ Error
 
         TubeMQ uses `errCode` and `errMsg` combined to return the result of a specific operation.
         First, determine the type of result (or problem) by errCode, and then determine the specific reason based on errMsg.
         The following table summarizes all the errCodes and errMsgs that may be returned during operation.
 
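
To make the intended usage concrete: a client first branches on errCode, then inspects errMsg for the precise cause. The helper below is hypothetical (not part of the client API) and only uses the two retryable server codes listed in the errCodes table below (503 and 510):

```java
public final class TubeErrorHandler {

    // Hypothetical helper: branch on errCode first, then read errMsg.
    public static void handle(int errCode, String errMsg) {
        switch (errCode) {
            case 503:   // SERVICE_UNAVILABLE: temporary read/write ban, retry
            case 510:   // INTERNAL_SERVER_ERROR_MSGSET_NULL: retry
                System.out.println("transient server error, retry later: " + errMsg);
                break;
            default:
                System.out.println("failed (" + errCode + "), see errMsg: " + errMsg);
        }
    }

    public static void main(String[] args) {
        handle(503, "server temporarily unavailable");
    }
}
```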
-## errCodes
+## 2 errCodes
 
 | Error Type | errCode | Error Mark | Meaning | Note |
 | ---------- | ------- | ---------- | ------- | ---- |
@@ -35,7 +35,7 @@ title: Error Code - Apache InLong's TubeMQ module
 | Server Error| 503| SERVICE_UNAVILABLE| Reading or writing is temporarily banned for the business. | Retry it. |
 | Server Error| 510| INTERNAL_SERVER_ERROR_MSGSET_NULL | Cannot read the Message Set. | Retry it. |
 
-## Common errMsgs
+## 3 Common errMsgs
 
 | Record ID | errMsg | Meaning | Note |
 | --------- | ------ | ------- | ---- |
diff --git a/en-us/docs/modules/tubemq/http_access_api.html b/en-us/docs/modules/tubemq/http_access_api.html
index 29e73a9..4c813e1 100644
--- a/en-us/docs/modules/tubemq/http_access_api.html
+++ b/en-us/docs/modules/tubemq/http_access_api.html
@@ -12,9 +12,9 @@
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h2>Master metadata configuration API</h2>
-<h3><code>admin_online_broker_configure</code></h3>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+<h3>1.1 Cluster management API</h3>
+<h4>1.1.1 <code>admin_online_broker_configure</code></h4>
<p>Bring online the configuration of Brokers that are newly added or offline. The Topic configurations are distributed to the related Brokers as well.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -75,7 +75,7 @@
 </tr>
 </tbody>
 </table>
-<h3><code>admin_reload_broker_configure</code></h3>
+<h4>1.1.2 <code>admin_reload_broker_configure</code></h4>
<p>Update the configuration of the Brokers which are <strong>online</strong>. The new configuration will be published to the Broker server;
an error will be returned if the broker is offline.</p>
 <p><strong>Request</strong></p>
@@ -137,7 +137,7 @@ will return error if the broker is offline.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_offline_broker_configure</code></h3>
+<h4>1.1.3 <code>admin_offline_broker_configure</code></h4>
<p>Take the configuration of <strong>online</strong> Brokers offline. It should be called before a Broker is taken offline or retired.
The Broker processes can be terminated once all offline tasks are done.</p>
 <p><strong>Request</strong></p>
@@ -199,7 +199,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_set_broker_read_or_write</code></h3>
+<h4>1.1.4 <code>admin_set_broker_read_or_write</code></h4>
<p>Set a Broker into a read-only or write-only state. Only Brokers that are online and idle can be handled.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -272,7 +272,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_broker_run_status</code></h3>
+<h4>1.1.5 <code>admin_query_broker_run_status</code></h4>
<p>Query Broker status. Only Broker processes that are <strong>offline</strong> and idle can be terminated.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -339,7 +339,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_release_broker_autoforbidden_status</code></h3>
+<h4>1.1.6 <code>admin_release_broker_autoforbidden_status</code></h4>
 <p>Release the brokers' auto forbidden status.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -406,11 +406,11 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_master_group_info</code></h3>
+<h4>1.1.7 <code>admin_query_master_group_info</code></h4>
<p>Query the details of the master cluster nodes.</p>
-<h3><code>admin_transfer_current_master</code></h3>
+<h4>1.1.8 <code>admin_transfer_current_master</code></h4>
<p>Set the current master node as a backup node and let the cluster elect another master.</p>
-<h3><code>groupAdmin.sh</code></h3>
+<h4>1.1.9 <code>groupAdmin.sh</code></h4>
<p>Clean the invalid nodes inside the master group.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -465,7 +465,8 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_add_broker_configure</code></h3>
+<h3>1.2 Broker node configuration API</h3>
+<h4>1.2.1 <code>admin_add_broker_configure</code></h4>
<p>Add the broker default configuration (not including topic info). It will take effect after calling the load API.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -588,7 +589,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_batch_add_broker_configure</code></h3>
+<h4>1.2.2 <code>admin_batch_add_broker_configure</code></h4>
<p>Add broker default configurations in batch (not including topic info). They will take effect after calling the load API.</p>
<p>This API takes a JSON string referred to as <code>brokerJsonSet</code> as the input parameter. The JSON content contains the configuration lists described in
<code>admin_add_broker_configure</code>.</p>
@@ -629,7 +630,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_update_broker_configure</code></h3>
+<h4>1.2.3 <code>admin_update_broker_configure</code></h4>
<p>Update the broker default configuration (not including topic info). It will take effect after calling the load API.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -746,7 +747,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_broker_configure</code></h3>
+<h4>1.2.4 <code>admin_query_broker_configure</code></h4>
 <p>Query the broker configuration.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -869,7 +870,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_delete_broker_configure</code></h3>
+<h4>1.2.5 <code>admin_delete_broker_configure</code></h4>
<p>Delete the broker's default configuration. The related topic configuration must be deleted first, and the broker should be offline.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -914,7 +915,8 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_add_new_topic_record</code></h3>
+<h3>1.3 Topic configuration API</h3>
+<h4>1.3.1 <code>admin_add_new_topic_record</code></h4>
<p>Add topic-related configuration.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1031,7 +1033,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_topic_info</code></h3>
+<h4>1.3.2 <code>admin_query_topic_info</code></h4>
 <p>Query specific topic record info.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1148,7 +1150,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_modify_topic_info</code></h3>
+<h4>1.3.3 <code>admin_modify_topic_info</code></h4>
 <p>Modify specific topic record info.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1271,7 +1273,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_delete_topic_info</code></h3>
+<h4>1.3.4 <code>admin_delete_topic_info</code></h4>
<p>Soft delete the specific topic record info.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1316,7 +1318,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_redo_deleted_topic_info</code></h3>
+<h4>1.3.5 <code>admin_redo_deleted_topic_info</code></h4>
<p>Restore the deleted topic record info.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1361,7 +1363,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_remove_topic_info</code></h3>
+<h4>1.3.6 <code>admin_remove_topic_info</code></h4>
<p>Hard delete the specific topic record info.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1406,7 +1408,7 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_broker_topic_config_info</code></h3>
+<h4>1.3.7 <code>admin_query_broker_topic_config_info</code></h4>
<p>Query the topic configuration info of the broker in the current cluster.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1427,8 +1429,8 @@ The Broker processes can be terminated once all offline tasks are done.</p>
 </tr>
 </tbody>
 </table>
-<h2>Master consumer permission operation API</h2>
-<h3><code>admin_set_topic_info_authorize_control</code></h3>
+<h2>2 Master consumer permission operation API</h2>
+<h3>2.1 <code>admin_set_topic_info_authorize_control</code></h3>
<p>Enable or disable the authorization control feature of the topic. If the consumer group is not authorized, its register request will be denied.
If the topic's authorized group list is empty, consumption of the topic will fail.</p>
 <p><strong>Request</strong></p>
@@ -1474,7 +1476,7 @@ If the topic's authorization group is empty, the topic will fail.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_delete_topic_info_authorize_control</code></h3>
+<h3>2.2 <code>admin_delete_topic_info_authorize_control</code></h3>
<p>Delete the authorization control feature of the topic. The content of the authorized consumer group list will be deleted as well.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1507,7 +1509,7 @@ If the topic's authorization group is empty, the topic will fail.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_topic_info_authorize_control</code></h3>
+<h3>2.3 <code>admin_query_topic_info_authorize_control</code></h3>
 <p>Query the authorization control feature of the topic.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1534,7 +1536,7 @@ If the topic's authorization group is empty, the topic will fail.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_add_authorized_consumergroup_info</code></h3>
+<h3>2.4 <code>admin_add_authorized_consumergroup_info</code></h3>
<p>Add a new authorized consumer group record for the topic. The server will deny registration from any consumer group that does not exist in the
topic's authorized consumer group list.</p>
 <p><strong>Request</strong></p>
@@ -1580,7 +1582,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_allowed_consumer_group_info</code></h3>
+<h3>2.5 <code>admin_query_allowed_consumer_group_info</code></h3>
 <p>Query the authorized consumer group record of the topic.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1613,7 +1615,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_delete_allowed_consumer_group_info</code></h3>
+<h3>2.6 <code>admin_delete_allowed_consumer_group_info</code></h3>
 <p>Delete the authorized consumer group record of the topic.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1646,7 +1648,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_batch_add_topic_authorize_control</code></h3>
+<h3>2.7 <code>admin_batch_add_topic_authorize_control</code></h3>
 <p>Add the authorized consumer group of the topic record in batch mode.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1685,7 +1687,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_batch_add_authorized_consumergroup_info</code></h3>
+<h3>2.8 <code>admin_batch_add_authorized_consumergroup_info</code></h3>
 <p>Add the authorized consumer group record in batch mode.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1724,7 +1726,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_add_black_consumergroup_info</code></h3>
+<h3>2.9 <code>admin_add_black_consumergroup_info</code></h3>
<p>Add a consumer group into the black list of the topic. Consumers already registered in the group cannot consume the topic afterwards, and unregistered ones cannot register.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1769,7 +1771,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_black_consumergroup_info</code></h3>
+<h3>2.10 <code>admin_query_black_consumergroup_info</code></h3>
 <p>Query the black list of the topic.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1802,7 +1804,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_delete_black_consumergroup_info</code></h3>
+<h3>2.11 <code>admin_delete_black_consumergroup_info</code></h3>
 <p>Delete the black list of the topic.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1835,7 +1837,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_add_group_filtercond_info</code></h3>
+<h3>2.12 <code>admin_add_group_filtercond_info</code></h3>
<p>Add a consuming filter condition for the consumer group.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1892,7 +1894,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_mod_group_filtercond_info</code></h3>
+<h3>2.13 <code>admin_mod_group_filtercond_info</code></h3>
<p>Modify the consuming filter condition for the consumer group.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1949,7 +1951,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_del_group_filtercond_info</code></h3>
+<h3>2.14 <code>admin_del_group_filtercond_info</code></h3>
<p>Delete the consuming filter condition for the consumer group.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -1982,7 +1984,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_group_filtercond_info</code></h3>
+<h3>2.15 <code>admin_query_group_filtercond_info</code></h3>
<p>Query the consuming filter condition for the consumer group.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2021,7 +2023,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_rebalance_group_allocate</code></h3>
+<h3>2.16 <code>admin_rebalance_group_allocate</code></h3>
<p>Adjust the consuming partitions of a specific consumer in the consumer group. This includes:</p>
 <ol>
 <li>release current consuming partition and retrieve new consuming partition.</li>
@@ -2076,7 +2078,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_set_def_flow_control_rule</code></h3>
+<h3>2.17 <code>admin_set_def_flow_control_rule</code></h3>
<p>Set the default flow control rule. It is effective for all consumer groups. It is worth noting that its priority is lower than the rule set on a consumer group.</p>
 <p>The flow control info is described in JSON format, for example:</p>
 <pre><code class="language-json">[{<span class="hljs-attr">"type"</span>:<span class="hljs-number">0</span>,<span class="hljs-attr">"rule"</span>:[{<span class="hljs-attr">"start"</span>:<span class="hljs-string">"08:00"</span>,<span class="hljs-attr">"end"</span>:<span class="hljs-string">"17:59"</span>,<span class="hljs-attr">"dltInM"</span>:<span class="hljs-number">1024</span>,<span class="hljs-attr">"limitInM"</span>:<span class="hljs-number">20</span>,<span class="hljs-attr">"freqI [...]
@@ -2140,7 +2142,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_upd_def_flow_control_rule</code></h3>
+<h3>2.18 <code>admin_upd_def_flow_control_rule</code></h3>
 <p>Update the default flow control rule.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2191,7 +2193,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_def_flow_control_rule</code></h3>
+<h3>2.19 <code>admin_query_def_flow_control_rule</code></h3>
 <p>Query the default flow control rule.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2224,7 +2226,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_set_group_flow_control_rule</code></h3>
+<h3>2.20 <code>admin_set_group_flow_control_rule</code></h3>
 <p>Set the group flow control rule.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2281,7 +2283,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_upd_group_flow_control_rule</code></h3>
+<h3>2.21 <code>admin_upd_group_flow_control_rule</code></h3>
 <p>Update the group flow control rule.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2338,7 +2340,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_rmv_group_flow_control_rule</code></h3>
+<h3>2.22 <code>admin_rmv_group_flow_control_rule</code></h3>
 <p>Remove the group flow control rule.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2371,7 +2373,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_group_flow_control_rule</code></h3>
+<h3>2.23 <code>admin_query_group_flow_control_rule</code></h3>
<p>Query the group flow control rule.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2410,7 +2412,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_add_consume_group_setting</code></h3>
+<h3>2.24 <code>admin_add_consume_group_setting</code></h3>
<p>Set whether the consume group is allowed to consume from a specific offset, and the ratio of brokers to clients when the consume group starts.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2461,7 +2463,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_consume_group_setting</code></h3>
+<h3>2.25 <code>admin_query_consume_group_setting</code></h3>
<p>Query the consume group setting: whether the consume group is allowed to consume from a specific offset, and the ratio of brokers to clients when the consume group starts.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2500,7 +2502,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_upd_consume_group_setting</code></h3>
+<h3>2.26 <code>admin_upd_consume_group_setting</code></h3>
<p>Update the consume group setting: whether the consume group is allowed to consume from a specific offset, and the ratio of brokers to clients when the consume group starts.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2551,7 +2553,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_del_consume_group_setting</code></h3>
+<h3>2.27 <code>admin_del_consume_group_setting</code></h3>
<p>Delete the consume group setting: whether the consume group is allowed to consume from a specific offset, and the ratio of brokers to clients when the consume group starts.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2590,10 +2592,8 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h2>Master subscriber relation API</h2>
-<ol>
-<li>Query consumer group subscription information</li>
-</ol>
+<h2>3 Master subscriber relation API</h2>
+<h3>3.1 Query consumer group subscription information</h3>
 <p>Url <code>http://127.0.0.1:8080/webapi.htm?type=op_query&amp;method=admin_query_sub_info&amp;topicName=test&amp;consumeGroup=xxx</code></p>
 <p>response:</p>
 <pre><code class="language-json">{
@@ -2607,9 +2607,7 @@ topic's authorized consumer group.</p>
        }]
 }									
 </code></pre>
-<ol start="2">
-<li>Query consumer group detailed subscription information</li>
-</ol>
+<h3>3.2 Query consumer group detailed subscription information</h3>
 <p>Url <code>http://127.0.0.1:8080/webapi.htm?type=op_query&amp;method=admin_query_consume_group_detail&amp;consumeGroup=test_25</code></p>
 <p>response:</p>
 <pre><code class="language-json">{
@@ -2629,8 +2627,8 @@ topic's authorized consumer group.</p>
    }]
 }									
 </code></pre>
-<h2>Broker operation API</h2>
-<h3><code>admin_snapshot_message</code></h3>
+<h2>4 Broker operation API</h2>
+<h3>4.1 <code>admin_snapshot_message</code></h3>
<p>Check whether data is being transferred under the current broker's topic, and what its content is.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2669,7 +2667,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_manual_set_current_offset</code></h3>
+<h3>4.2 <code>admin_manual_set_current_offset</code></h3>
<p>Modify the offset value of a consuming group under the current broker. The new value will be persisted to ZK.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2714,7 +2712,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_group_offset</code></h3>
+<h3>4.3 <code>admin_query_group_offset</code></h3>
<p>Query the offset of a consuming group under the current broker.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2753,7 +2751,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_broker_all_consumer_info</code></h3>
+<h3>4.4 <code>admin_query_broker_all_consumer_info</code></h3>
 <p>Query consumer info of the specific consume group on the broker.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2774,7 +2772,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_broker_all_store_info</code></h3>
+<h3>4.5 <code>admin_query_broker_all_store_info</code></h3>
 <p>Query store info of the specific topic on the broker.</p>
 <p><strong>Request</strong></p>
 <table>
@@ -2795,7 +2793,7 @@ topic's authorized consumer group.</p>
 </tr>
 </tbody>
 </table>
-<h3><code>admin_query_broker_memstore_info</code></h3>
+<h3>4.6 <code>admin_query_broker_memstore_info</code></h3>
 <p>Query memory store info of the specific topic on the broker.</p>
 <p><strong>Request</strong></p>
 <table>
diff --git a/en-us/docs/modules/tubemq/http_access_api.json b/en-us/docs/modules/tubemq/http_access_api.json
index 9338035..82d3b3a 100644
--- a/en-us/docs/modules/tubemq/http_access_api.json
+++ b/en-us/docs/modules/tubemq/http_access_api.json
@@ -1,6 +1,6 @@
 {
   "filename": "http_access_api.md",
-  "__html": "<h1>HTTP access API definition</h1>\n<h2>Master metadata configuration API</h2>\n<h3><code>admin_online_broker_configure</code></h3>\n<p>The online configuration of the Brokers are new or offline. The configuration of Topics are distributed to related Brokers as well.</p>\n<p><strong>Request</strong></p>\n<table>\n<thead>\n<tr>\n<th>name</th>\n<th>must</th>\n<th>description</th>\n<th>type</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>brokerId</td>\n<td>yes</td>\n<td>The broker I [...]
+  "__html": "<h2>1 Master metadata configuration API</h2>\n<h3>1.1 Cluster management API</h3>\n<h4>1.1.1 <code>admin_online_broker_configure</code></h4>\n<p>The online configuration of the Brokers are new or offline. The configuration of Topics are distributed to related Brokers as well.</p>\n<p><strong>Request</strong></p>\n<table>\n<thead>\n<tr>\n<th>name</th>\n<th>must</th>\n<th>description</th>\n<th>type</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>brokerId</td>\n<td>yes</td>\n<td>The  [...]
   "link": "/en-us/docs/modules/tubemq/http_access_api.html",
   "meta": {
     "title": "HTTP API - Apache InLong's TubeMQ module"
diff --git a/en-us/docs/modules/tubemq/http_access_api.md b/en-us/docs/modules/tubemq/http_access_api.md
index 7df316b..2a0ed95 100644
--- a/en-us/docs/modules/tubemq/http_access_api.md
+++ b/en-us/docs/modules/tubemq/http_access_api.md
@@ -2,11 +2,10 @@
 title: HTTP API - Apache InLong's TubeMQ module
 ---
 
-# HTTP access API definition
+## 1 Master metadata configuration API
 
-## Master metadata configuration API
-
-### `admin_online_broker_configure`
+### 1.1 Cluster management API
+#### 1.1.1 `admin_online_broker_configure`
 
 Bring online the configuration of Brokers that are newly added or offline. The Topic configurations are distributed to the related Brokers as well.
 
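
All of these administrative methods are invoked as plain HTTP GET requests against the Master's web port, in the `webapi.htm?type=...&method=...` form shown in section 3 below. A minimal sketch of such a call (host, topic and group are placeholders taken from the examples in this document):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MasterApiCall {

    public static void main(String[] args) throws Exception {
        // Query method and parameters follow the URL form used in section 3.
        URL url = new URL("http://127.0.0.1:8080/webapi.htm"
                + "?type=op_query&method=admin_query_sub_info"
                + "&topicName=test&consumeGroup=xxx");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            // The JSON reply carries a result code and errMsg plus the payload.
            System.out.println(body);
        } finally {
            conn.disconnect();
        }
    }
}
```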
@@ -26,7 +25,7 @@ __Response__
 |code| Returns `0` if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_reload_broker_configure`
+#### 1.1.2 `admin_reload_broker_configure`
 
 Update the configuration of the Brokers which are __online__. The new configuration will be published to the Broker server;
 an error will be returned if the broker is offline.
@@ -47,7 +46,7 @@ __Response__
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_offline_broker_configure`
+#### 1.1.3 `admin_offline_broker_configure`
 
 Take the configuration of __online__ Brokers offline. It should be called before a Broker is taken offline or retired.
 The Broker processes can be terminated once all offline tasks are done.
@@ -68,7 +67,7 @@ __Response__
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_set_broker_read_or_write`
+#### 1.1.4 `admin_set_broker_read_or_write`
 
 Set a Broker into a read-only or write-only state. Only Brokers that are online and idle can be handled.
 
@@ -90,7 +89,7 @@ __Response__
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_query_broker_run_status`
+#### 1.1.5 `admin_query_broker_run_status`
 
 Query Broker status. Only Broker processes that are __offline__ and idle can be terminated.
 
@@ -111,7 +110,7 @@ __Response__
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_release_broker_autoforbidden_status`
+#### 1.1.6 `admin_release_broker_autoforbidden_status`
 
 Release the brokers' auto forbidden status.
 
@@ -132,16 +131,16 @@ Response
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-### `admin_query_master_group_info`
+#### 1.1.7 `admin_query_master_group_info`
 
 Query the details of the master cluster nodes.
 
-### `admin_transfer_current_master`
+#### 1.1.8 `admin_transfer_current_master`
 
 Set the current master node as a backup node and let the cluster elect another master.
 
 
-### `groupAdmin.sh`
+#### 1.1.9 `groupAdmin.sh`
 
 Clean the invalid nodes inside the master group.
 
@@ -160,8 +159,8 @@ Response
 |code| return 0 if success, otherwise failed | int|
 |errMsg| "OK" if success, other return error message| string|
 
-
-### `admin_add_broker_configure`
+### 1.2 Broker node configuration API
+#### 1.2.1 `admin_add_broker_configure`
 
 Add the broker default configuration (not including topic info). It will take effect after calling the load API.
 
@@ -188,7 +187,7 @@ __Request__
 |createDate|yes|the create date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_batch_add_broker_configure`
+#### 1.2.2 `admin_batch_add_broker_configure`
 
 Add broker default configurations in batch (not including topic info). They will take effect after calling the load API.
 
@@ -204,7 +203,7 @@ __Request__
 |createDate|yes|the create date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_update_broker_configure`
+#### 1.2.3 `admin_update_broker_configure`
 
 Update the broker default configuration (not including topic info). It will take effect after calling the load API.
 
@@ -230,7 +229,7 @@ __Request__
 |modifyDate|yes|the modify date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_broker_configure`
+#### 1.2.4 `admin_query_broker_configure`
 
 Query the broker configuration.
 
@@ -257,7 +256,7 @@ __Request__
 |topicStatusId|yes|the status of topic record|int|
 |withTopic|no|whether it needs topic configuration|Boolean|
 
-### `admin_delete_broker_configure`
+#### 1.2.5 `admin_delete_broker_configure`
 
 Delete the broker's default configuration. The related topic configuration must be deleted first, and the broker should be offline.
 
@@ -271,7 +270,8 @@ __Request__
 |isReserveData|no|whether to reserve production data, default false|Boolean|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_add_new_topic_record`
+### 1.3 Topic configuration API
+#### 1.3.1 `admin_add_new_topic_record`
 
 Add topic-related configuration.
 
@@ -297,7 +297,7 @@ __Request__
 |createDate|yes|the create date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_topic_info`
+#### 1.3.2 `admin_query_topic_info`
 
 Query specific topic record info.
 
@@ -323,7 +323,7 @@ __Request__
 |createUser|yes|the creator|String|
 |modifyUser|yes|the modifier|String|
 
-### `admin_modify_topic_info`
+#### 1.3.3 `admin_modify_topic_info`
 
 Modify specific topic record info.
 
@@ -351,7 +351,7 @@ __Request__
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
 
-### `admin_delete_topic_info`
+#### 1.3.4 `admin_delete_topic_info`
 
 Soft delete the specific topic record info.
 
@@ -365,7 +365,7 @@ __Request__
 |modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_redo_deleted_topic_info`
+#### 1.3.5 `admin_redo_deleted_topic_info`
 
 Restore the deleted topic record info.
 
@@ -379,7 +379,7 @@ __Request__
 |modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_remove_topic_info`
+#### 1.3.6 `admin_remove_topic_info`
 
 Hard delete the specific topic record info.
 
@@ -393,7 +393,7 @@ __Request__
 |modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_broker_topic_config_info`
+#### 1.3.7 `admin_query_broker_topic_config_info`
 
 Query the topic configuration info of the broker in the current cluster.
 
@@ -403,9 +403,10 @@ __Request__
 |---|---|---|---|
 |topicName|yes| the topic name|String|
 
-## Master consumer permission operation API
 
-### `admin_set_topic_info_authorize_control`
+## 2 Master consumer permission operation API
+
+### 2.1 `admin_set_topic_info_authorize_control`
 
 Enable or disable the authorization control feature of the topic. If the consumer group is not authorized, its register request will be denied.
 If the topic's authorized group list is empty, consumption of the topic will fail.
@@ -420,7 +421,7 @@ __Request__
 |isEnable|no|whether the authorization control is enable, default false|Boolean|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_delete_topic_info_authorize_control`
+### 2.2 `admin_delete_topic_info_authorize_control`
 
 Delete the authorization control feature of the topic. The content of the authorized consumer group list will be deleted as well.
 
@@ -432,7 +433,7 @@ __Request__
 |createUser|yes|the creator|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_topic_info_authorize_control`
+### 2.3 `admin_query_topic_info_authorize_control`
 
 Query the authorization control feature of the topic.
 
@@ -443,7 +444,7 @@ __Request__
 |topicName|yes| the topic name|String|
 |createUser|yes|the creator|String|
 
-### `admin_add_authorized_consumergroup_info`
+### 2.4 `admin_add_authorized_consumergroup_info`
 
 Add a new authorized consumer group record for the topic. The server will deny registration from any consumer group that does not exist in the
 topic's authorized consumer group list.
@@ -459,7 +460,7 @@ __Request__
 |createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_allowed_consumer_group_info`
+### 2.5 `admin_query_allowed_consumer_group_info`
 
 Query the authorized consumer group record of the topic. 
 
@@ -471,7 +472,7 @@ __Request__
 |groupName|yes| the group name to be added|String|
 |createUser|yes|the creator|String|
 
-### `admin_delete_allowed_consumer_group_info`
+### 2.6 `admin_delete_allowed_consumer_group_info`
 
 Delete the authorized consumer group record of the topic. 
 
@@ -483,7 +484,7 @@ __Request__
 |groupName|yes| the group name to be added|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_batch_add_topic_authorize_control`
+### 2.7 `admin_batch_add_topic_authorize_control`
 
 Add the authorized consumer group of the topic record in batch mode.
 
@@ -496,7 +497,7 @@ __Request__
 |createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_batch_add_authorized_consumergroup_info`
+### 2.8 `admin_batch_add_authorized_consumergroup_info`
 
 Add the authorized consumer group record in batch mode.
 
@@ -509,7 +510,7 @@ __Request__
 |createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_add_black_consumergroup_info`
+### 2.9 `admin_add_black_consumergroup_info`
 
 Add a consumer group into the black list of the topic. Consumers already registered in the group cannot consume the topic afterwards, and unregistered ones cannot register.
 
@@ -523,7 +524,7 @@ __Request__
 |createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_black_consumergroup_info`
+### 2.10 `admin_query_black_consumergroup_info`
 
 Query the black list of the topic. 
 
@@ -535,7 +536,7 @@ __Request__
 |groupName|yes|the group name |List|
 |createUser|yes|the creator|String|
 
-### `admin_delete_black_consumergroup_info`
+### 2.11 `admin_delete_black_consumergroup_info`
 
 Delete the black list of the topic. 
 
@@ -547,7 +548,7 @@ __Request__
 |groupName|yes|the group name |List|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_add_group_filtercond_info`
+### 2.12 `admin_add_group_filtercond_info`
 
 Add a consuming filter condition for the consumer group.
 
@@ -563,7 +564,7 @@ __Request__
 |createUser|yes|the creator|String|
 |createDate|no|the creating date in format `yyyyMMddHHmmss`|String|
 
-### `admin_mod_group_filtercond_info`
+### 2.13 `admin_mod_group_filtercond_info`
 
 Modify the consuming filter condition for the consumer group.
 
@@ -579,7 +580,7 @@ __Request__
 |modifyUser|yes|the modifier|String|
 |modifyDate|no|the modification date in format `yyyyMMddHHmmss`|String|
 
-### `admin_del_group_filtercond_info`
+### 2.14 `admin_del_group_filtercond_info`
 
 Delete the consuming filter condition for the consumer group.
 
@@ -591,7 +592,7 @@ __Request__
 |groupName|yes|the group name |List|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_group_filtercond_info`
+### 2.15 `admin_query_group_filtercond_info`
 
 Query the consuming filter condition for the consumer group.
 
@@ -604,7 +605,7 @@ __Request__
 |condStatus|no| the condition status, 0: disable, 1:enable full authorization, 2:enable and limit consuming|Int|
 |filterConds|no| the filter conditions, the max length is 256|String|
 
-### `admin_rebalance_group_allocate`
+### 2.16 `admin_rebalance_group_allocate`
 
 Adjust the consuming partitions of a specific consumer in the consumer group. This includes:
 1. release current consuming partition and retrieve new consuming partition.
@@ -622,7 +623,7 @@ __Request__
 |modifyUser|yes|the modifier|String|
 |modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
 
-### `admin_set_def_flow_control_rule`
+### 2.17 `admin_set_def_flow_control_rule`
 
 Set the default flow control rule. It is effective for all consumer groups. It is worth noting that its priority is lower than the rule set on a consumer group.
 
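
The `flowCtrlInfo` parameter carries the JSON structure sampled earlier in this page (a list of typed rules with time windows). The snippet below rebuilds that sample as a Java string; the field comments are our reading of the sample, not an authoritative reference, and the sample's final field is truncated in this extract, so it is omitted here:

```java
public final class FlowCtrlSample {

    public static void main(String[] args) {
        // One type-0 (data volume) rule, per the sample shown earlier.
        // Our reading of the fields, to be checked against the full reference:
        //   start/end : the time window the rule applies to
        //   dltInM    : consumption lag threshold, in MB
        //   limitInM  : allowed volume once the threshold is reached, in MB
        // (The sample also carries one more field, truncated in this extract.)
        String flowCtrlInfo = "[{\"type\":0,\"rule\":["
                + "{\"start\":\"08:00\",\"end\":\"17:59\","
                + "\"dltInM\":1024,\"limitInM\":20}]}]";
        System.out.println(flowCtrlInfo);
    }
}
```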
@@ -649,7 +650,7 @@ __Request__
 |modifyDate|yes|the modification date in format `yyyyMMddHHmmss`|String|
 
 
-### `admin_upd_def_flow_control_rule`
+### 2.18 `admin_upd_def_flow_control_rule`
 
 Update the default flow control rule.
 
@@ -664,7 +665,7 @@ __Request__
 |flowCtrlInfo|yes|the flow control info in JSON format|String|
 |createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
 
-### `admin_query_def_flow_control_rule`
+### 2.19 `admin_query_def_flow_control_rule`
 
 Query the default flow control rule.
 
@@ -676,7 +677,7 @@ __Request__
 |qryPriorityId|no| the consuming priority id. It is a composed field `A0B` with default value 301;<br> A and B take values in [1, 2, 3], meaning file, backup memory, and main memory respectively|int|
 |createUser|yes|the creator|String|
 
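
Because the composed `A0B` encoding is easy to misread, here is a two-line decode of the default value 301 (A=3 and B=1, i.e. main memory and file under the mapping above); the helper is hypothetical, not part of the API:

```java
public final class QryPriorityIdDecode {

    public static void main(String[] args) {
        int qryPriorityId = 301;       // default value from the table above
        int a = qryPriorityId / 100;   // A digit: 3 -> main memory
        int b = qryPriorityId % 10;    // B digit: 1 -> file
        System.out.println("A=" + a + ", B=" + b);
    }
}
```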
-### `admin_set_group_flow_control_rule`
+### 2.20 `admin_set_group_flow_control_rule`
 
 Set the group flow control rule.
 
@@ -692,7 +693,7 @@ __Request__
 |createUser|yes|the creator|String|
 |createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
 
-### `admin_upd_group_flow_control_rule`
+### 2.21 `admin_upd_group_flow_control_rule`
 
 Update the group flow control rule.
 
@@ -709,7 +710,7 @@ __Request__
 |createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
 
 
-### `admin_rmv_group_flow_control_rule`
+### 2.22 `admin_rmv_group_flow_control_rule`
 
 Remove the group flow control rule.
 
@@ -721,7 +722,7 @@ __Request__
 |confModAuthToken|yes|the authorized key for configuration update|String|
 |createUser|yes|the creator|String|
 
-### `admin_query_group_flow_control_rule`
+### 2.23 `admin_query_group_flow_control_rule`
 
 Query the group flow control rule.
 
@@ -734,7 +735,7 @@ __Request__
 |qryPriorityId|no| the consuming priority id. It is a composed field `A0B` with default value 301; <br>A and B take values in [1, 2, 3], meaning file, backup memory, and main memory respectively|int|
 |createUser|yes|the creator|String|
 
-### `admin_add_consume_group_setting`
+### 2.24 `admin_add_consume_group_setting`
 
 Set whether the consume group is allowed to consume from a specific offset, and the ratio of brokers to clients when the consume group starts.
 
@@ -749,7 +750,7 @@ __Request__
 |createDate|yes|the creating date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_query_consume_group_setting`
+### 2.25 `admin_query_consume_group_setting`
 
 Query the consume group setting: whether the consume group is allowed to consume from a specific offset, and the ratio of brokers to clients when the consume group starts.
 
@@ -762,7 +763,7 @@ __Request__
 |allowedBClientRate|no|the ratio of the number of the consuming target's broker against the number of client in consuming group|int|
 |createUser|yes|the creator|String|
 
-### `admin_upd_consume_group_setting`
+### 2.26 `admin_upd_consume_group_setting`
 
 Update the consume group setting: whether the consume group is allowed to consume from a specific offset, and the ratio of brokers to clients when the consume group starts.
 
@@ -777,7 +778,7 @@ __Request__
 |modifyDate|yes|the modifying date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-### `admin_del_consume_group_setting`
+### 2.27 `admin_del_consume_group_setting`
 
 Delete the consume group setting: whether the consume group is allowed to consume from a specific offset, and the ratio of brokers to clients when the consume group starts.
 
@@ -790,9 +791,9 @@ __Request__
 |modifyDate|yes|the modifying date in format `yyyyMMddHHmmss`|String|
 |confModAuthToken|yes|the authorized key for configuration update|String|
 
-## Master subscriber relation API
+## 3 Master subscriber relation API
 
-1. Query consumer group subscription information
+### 3.1 Query consumer group subscription information
 
 Url ` http://127.0.0.1:8080/webapi.htm?type=op_query&method=admin_query_sub_info&topicName=test&consumeGroup=xxx `
 
@@ -811,7 +812,7 @@ response:
 }									
 ```
 
-2. Query consumer group detailed subscription information
+### 3.2 Query consumer group detailed subscription information
 
 Url `http://127.0.0.1:8080/webapi.htm?type=op_query&method=admin_query_consume_group_detail&consumeGroup=test_25`
 
@@ -836,9 +837,9 @@ response:
 }									
 ```
 
-## Broker operation API
+## 4 Broker operation API
 
-### `admin_snapshot_message`
+### 4.1 `admin_snapshot_message`
 
 Check whether data is being transferred under the current broker's topic, and what its content is.
 
@@ -852,7 +853,7 @@ __Request__
 |partitionId|yes|the partition ID which must exists|int|
 |filterConds|yes|the tid value for filtering|String|
 
-### `admin_manual_set_current_offset`
+### 4.2 `admin_manual_set_current_offset`
 
 Modify the offset value of a consuming group under the current broker. The new value will be persisted to ZK.
 
@@ -867,7 +868,7 @@ __Request__
 |partitionId|yes|the partition ID which must exists|int|
 |manualOffset|yes|the offset to be modified, it must be a valid value|long|
 
-### `admin_query_group_offset`
+### 4.3 `admin_query_group_offset`
 
 Query the offset of a consuming group under the current broker.
 
@@ -880,7 +881,7 @@ __Request__
 |partitionId|yes|the partition ID, which must exist|int|
 |requireRealOffset|no|whether to check the real offset on ZooKeeper, default false|Boolean|
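 
 For illustration, a query might look like this (host/port and the `topicName`/`groupName` parameters are assumptions):
 
 ```bash
 # Illustrative only: read the committed offset of a group for partition 0 and
 # cross-check the real value stored on ZooKeeper.
 curl "http://127.0.0.1:8081/webapi.htm?type=op_query&method=admin_query_group_offset&topicName=test&groupName=test_group&partitionId=0&requireRealOffset=true"
 ```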
 
-### `admin_query_broker_all_consumer_info`
+### 4.4 `admin_query_broker_all_consumer_info`
 
 Query consumer info of the specified consumer group on the broker.
 
@@ -890,7 +891,7 @@ __Request__
 |---|---|---|---|
 |groupName|yes|the group name|String|
 
-### `admin_query_broker_all_store_info`
+### 4.5 `admin_query_broker_all_store_info`
 
 Query store info of the specified topic on the broker.
 
@@ -900,7 +901,7 @@ __Request__
 |---|---|---|---|
 |topicName|yes|the topic name|String|
 
-### `admin_query_broker_memstore_info`
+### 4.6 `admin_query_broker_memstore_info`
 
 Query memory store info of the specified topic on the broker.
 
diff --git a/en-us/docs/modules/tubemq/producer_example.html b/en-us/docs/modules/tubemq/producer_example.html
index abb8a1e..7f58a53 100644
--- a/en-us/docs/modules/tubemq/producer_example.html
+++ b/en-us/docs/modules/tubemq/producer_example.html
@@ -12,157 +12,153 @@
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>TubeMQ provides two ways to initialize session factory, TubeSingleSessionFactory and TubeMultiSessionFactory:</p>
 <ul>
 <li>TubeSingleSessionFactory creates only one session in the lifecycle, this is very useful in streaming scenarios.</li>
 <li>TubeMultiSessionFactory creates new session on every call.</li>
 </ul>
-<ol>
-<li>
-<p>TubeSingleSessionFactory</p>
-<ul>
-<li>Send Message Synchronously</li>
-</ul>
-<pre><code class="language-java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">final</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">SyncProducerExample</span> </span>{
+<h3>1.1 TubeSingleSessionFactory</h3>
+<h4>1.1.1 Send Message Synchronously</h4>
+<pre><code class="language-java">public final class SyncProducerExample {
 
-    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Throwable </span>{
-        <span class="hljs-keyword">final</span> String masterHostAndPort = <span class="hljs-string">"localhost:8000"</span>;
-        <span class="hljs-keyword">final</span> TubeClientConfig clientConfig = <span class="hljs-keyword">new</span> TubeClientConfig(masterHostAndPort);
-        <span class="hljs-keyword">final</span> MessageSessionFactory messageSessionFactory = <span class="hljs-keyword">new</span> TubeSingleSessionFactory(clientConfig);
-        <span class="hljs-keyword">final</span> MessageProducer messageProducer = messageSessionFactory.createProducer();
-        <span class="hljs-keyword">final</span> String topic = <span class="hljs-string">"test"</span>;
-        <span class="hljs-keyword">final</span> String body = <span class="hljs-string">"This is a test message from single-session-factory!"</span>;
-        <span class="hljs-keyword">byte</span>[] bodyData = StringUtils.getBytesUtf8(body);
+    public static void main(String[] args) throws Throwable {
+        final String masterHostAndPort = &quot;localhost:8000&quot;;
+        final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+        final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+        final MessageProducer messageProducer = messageSessionFactory.createProducer();
+        final String topic = &quot;test&quot;;
+        final String body = &quot;This is a test message from single-session-factory!&quot;;
+        byte[] bodyData = StringUtils.getBytesUtf8(body);
         messageProducer.publish(topic);
-        Message message = <span class="hljs-keyword">new</span> Message(topic, bodyData);
+        Message message = new Message(topic, bodyData);
         MessageSentResult result = messageProducer.sendMessage(message);
-        <span class="hljs-keyword">if</span> (result.isSuccess()) {
-            System.out.println(<span class="hljs-string">"sync send message : "</span> + message);
+        if (result.isSuccess()) {
+            System.out.println(&quot;sync send message : &quot; + message);
         }
         messageProducer.shutdown();
     }
 }
 </code></pre>
-<ul>
-<li>Send Message Asynchronously</li>
-</ul>
-<pre><code class="language-java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">final</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">AsyncProducerExample</span> </span>{
- 
-    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Throwable </span>{
-        <span class="hljs-keyword">final</span> String masterHostAndPort = <span class="hljs-string">"localhost:8000"</span>;
-        <span class="hljs-keyword">final</span> TubeClientConfig clientConfig = <span class="hljs-keyword">new</span> TubeClientConfig(masterHostAndPort);
-        <span class="hljs-keyword">final</span> MessageSessionFactory messageSessionFactory = <span class="hljs-keyword">new</span> TubeSingleSessionFactory(clientConfig);
-        <span class="hljs-keyword">final</span> MessageProducer messageProducer = messageSessionFactory.createProducer();
-        <span class="hljs-keyword">final</span> String topic = <span class="hljs-string">"test"</span>;
-        <span class="hljs-keyword">final</span> String body = <span class="hljs-string">"async send message from single-session-factory!"</span>;
-        <span class="hljs-keyword">byte</span>[] bodyData = StringUtils.getBytesUtf8(body);
+<h4>1.1.2 Send Message Asynchronously</h4>
+<pre><code class="language-java">public final class AsyncProducerExample {
+
+    public static void main(String[] args) throws Throwable {
+        final String masterHostAndPort = &quot;localhost:8000&quot;;
+        final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+        final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+        final MessageProducer messageProducer = messageSessionFactory.createProducer();
+        final String topic = &quot;test&quot;;
+        final String body = &quot;async send message from single-session-factory!&quot;;
+        byte[] bodyData = StringUtils.getBytesUtf8(body);
         messageProducer.publish(topic);
-        <span class="hljs-keyword">final</span> Message message = <span class="hljs-keyword">new</span> Message(topic, bodyData);
-        messageProducer.sendMessage(message, <span class="hljs-keyword">new</span> MessageSentCallback(){
-            <span class="hljs-meta">@Override</span>
-            <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onMessageSent</span><span class="hljs-params">(MessageSentResult result)</span> </span>{
-                <span class="hljs-keyword">if</span> (result.isSuccess()) {
-                    System.out.println(<span class="hljs-string">"async send message : "</span> + message);
-                } <span class="hljs-keyword">else</span> {
-                    System.out.println(<span class="hljs-string">"async send message failed : "</span> + result.getErrMsg());
+        final Message message = new Message(topic, bodyData);
+        messageProducer.sendMessage(message, new MessageSentCallback(){
+            @Override
+            public void onMessageSent(MessageSentResult result) {
+                if (result.isSuccess()) {
+                    System.out.println(&quot;async send message : &quot; + message);
+                } else {
+                    System.out.println(&quot;async send message failed : &quot; + result.getErrMsg());
                 }
             }
-            <span class="hljs-meta">@Override</span>
-            <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onException</span><span class="hljs-params">(Throwable e)</span> </span>{
-                System.out.println(<span class="hljs-string">"async send message error : "</span> + e);
+            @Override
+            public void onException(Throwable e) {
+                System.out.println(&quot;async send message error : &quot; + e);
             }
         });
         messageProducer.shutdown();
     }
 
 }
 </code></pre>
-<ul>
-<li>Send Message With Attributes</li>
-</ul>
-<pre><code class="language-java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">final</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">ProducerWithAttributeExample</span> </span>{
+<h4>1.1.3 Send Message With Attributes</h4>
+<pre><code class="language-java">public final class ProducerWithAttributeExample {
  
-    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Throwable </span>{
-        <span class="hljs-keyword">final</span> String masterHostAndPort = <span class="hljs-string">"localhost:8000"</span>;
-        <span class="hljs-keyword">final</span> TubeClientConfig clientConfig = <span class="hljs-keyword">new</span> TubeClientConfig(masterHostAndPort);
-        <span class="hljs-keyword">final</span> MessageSessionFactory messageSessionFactory = <span class="hljs-keyword">new</span> TubeSingleSessionFactory(clientConfig);
-        <span class="hljs-keyword">final</span> MessageProducer messageProducer = messageSessionFactory.createProducer();
-        <span class="hljs-keyword">final</span> String topic = <span class="hljs-string">"test"</span>;
-        <span class="hljs-keyword">final</span> String body = <span class="hljs-string">"send message with attribute from single-session-factory!"</span>;
-        <span class="hljs-keyword">byte</span>[] bodyData = StringUtils.getBytesUtf8(body);
+    public static void main(String[] args) throws Throwable {
+        final String masterHostAndPort = &quot;localhost:8000&quot;;
+        final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+        final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+        final MessageProducer messageProducer = messageSessionFactory.createProducer();
+        final String topic = &quot;test&quot;;
+        final String body = &quot;send message with attribute from single-session-factory!&quot;;
+        byte[] bodyData = StringUtils.getBytesUtf8(body);
         messageProducer.publish(topic);
-        Message message = <span class="hljs-keyword">new</span> Message(topic, bodyData);
-        <span class="hljs-comment">//set attribute</span>
-        message.setAttrKeyVal(<span class="hljs-string">"test_key"</span>, <span class="hljs-string">"test value"</span>);
-        <span class="hljs-comment">//msgType is used for consumer filtering, and msgTime(accurate to minute) is used as the pipe to send and receive statistics</span>
-        SimpleDateFormat sdf = <span class="hljs-keyword">new</span> SimpleDateFormat(<span class="hljs-string">"yyyyMMddHHmm"</span>);
-        message.putSystemHeader(<span class="hljs-string">"test"</span>, sdf.format(<span class="hljs-keyword">new</span> Date()));
+        Message message = new Message(topic, bodyData);
+        //set attribute
+        message.setAttrKeyVal(&quot;test_key&quot;, &quot;test value&quot;);
+        //msgType is used for consumer filtering, and msgTime(accurate to minute) is used as the pipe to send and receive statistics
+        SimpleDateFormat sdf = new SimpleDateFormat(&quot;yyyyMMddHHmm&quot;);
+        message.putSystemHeader(&quot;test&quot;, sdf.format(new Date()));
         messageProducer.sendMessage(message);
         messageProducer.shutdown();
     }
 
 }
 </code></pre>
-</li>
-</ol>
-<ul>
-<li>
-<p>TubeMultiSessionFactory</p>
-<pre><code class="language-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">MultiSessionProducerExample</span> </span>{
+<h3>1.2 TubeMultiSessionFactory</h3>
+<pre><code class="language-java">public class MultiSessionProducerExample {
     
-    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Throwable </span>{
-        <span class="hljs-keyword">final</span> <span class="hljs-keyword">int</span> SESSION_FACTORY_NUM = <span class="hljs-number">10</span>;
-        <span class="hljs-keyword">final</span> String masterHostAndPort = <span class="hljs-string">"localhost:8000"</span>;
-        <span class="hljs-keyword">final</span> TubeClientConfig clientConfig = <span class="hljs-keyword">new</span> TubeClientConfig(masterHostAndPort);
-        <span class="hljs-keyword">final</span> List&lt;MessageSessionFactory&gt; sessionFactoryList = <span class="hljs-keyword">new</span> ArrayList&lt;&gt;(SESSION_FACTORY_NUM);
-        <span class="hljs-keyword">final</span> ExecutorService sendExecutorService = Executors.newFixedThreadPool(SESSION_FACTORY_NUM);
-        <span class="hljs-keyword">final</span> CountDownLatch latch = <span class="hljs-keyword">new</span> CountDownLatch(SESSION_FACTORY_NUM);
-        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = <span class="hljs-number">0</span>; i &lt; SESSION_FACTORY_NUM; i++) {
-            TubeMultiSessionFactory tubeMultiSessionFactory = <span class="hljs-keyword">new</span> TubeMultiSessionFactory(clientConfig);
+    public static void main(String[] args) throws Throwable {
+        final int SESSION_FACTORY_NUM = 10;
+        final String masterHostAndPort = &quot;localhost:8000&quot;;
+        final TubeClientConfig clientConfig = new TubeClientConfig(masterHostAndPort);
+        final List&lt;MessageSessionFactory&gt; sessionFactoryList = new ArrayList&lt;&gt;(SESSION_FACTORY_NUM);
+        final ExecutorService sendExecutorService = Executors.newFixedThreadPool(SESSION_FACTORY_NUM);
+        final CountDownLatch latch = new CountDownLatch(SESSION_FACTORY_NUM);
+        for (int i = 0; i &lt; SESSION_FACTORY_NUM; i++) {
+            TubeMultiSessionFactory tubeMultiSessionFactory = new TubeMultiSessionFactory(clientConfig);
             sessionFactoryList.add(tubeMultiSessionFactory);
             MessageProducer producer = tubeMultiSessionFactory.createProducer();
-            Sender sender = <span class="hljs-keyword">new</span> Sender(producer, latch);
+            Sender sender = new Sender(producer, latch);
             sendExecutorService.submit(sender);
         }
         latch.await();
         sendExecutorService.shutdownNow();
-        <span class="hljs-keyword">for</span> (MessageSessionFactory sessionFactory : sessionFactoryList) {
+        for (MessageSessionFactory sessionFactory : sessionFactoryList) {
             sessionFactory.shutdown();
         }
     }
 
-    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Sender</span> <span class="hljs-keyword">implements</span> <span class="hljs-title">Runnable</span> </span>{
+    private static class Sender implements Runnable {
         
-        <span class="hljs-keyword">private</span> MessageProducer producer;
+        private MessageProducer producer;
         
-        <span class="hljs-keyword">private</span> CountDownLatch latch;
+        private CountDownLatch latch;
 
-        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">Sender</span><span class="hljs-params">(MessageProducer producer, CountDownLatch latch)</span> </span>{
-            <span class="hljs-keyword">this</span>.producer = producer;
-            <span class="hljs-keyword">this</span>.latch = latch;
+        public Sender(MessageProducer producer, CountDownLatch latch) {
+            this.producer = producer;
+            this.latch = latch;
         }
 
-        <span class="hljs-meta">@Override</span>
-        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">run</span><span class="hljs-params">()</span> </span>{
-            <span class="hljs-keyword">final</span> String topic = <span class="hljs-string">"test"</span>;
-            <span class="hljs-keyword">try</span> {
+        @Override
+        public void run() {
+            final String topic = &quot;test&quot;;
+            try {
                 producer.publish(topic);
-                <span class="hljs-keyword">final</span> <span class="hljs-keyword">byte</span>[] bodyData = StringUtils.getBytesUtf8(<span class="hljs-string">"This is a test message from multi-session factory"</span>);
-                Message message = <span class="hljs-keyword">new</span> Message(topic, bodyData);
+                final byte[] bodyData = StringUtils.getBytesUtf8(&quot;This is a test message from multi-session factory&quot;);
+                Message message = new Message(topic, bodyData);
                 producer.sendMessage(message);
                 producer.shutdown();
-            } <span class="hljs-keyword">catch</span> (Throwable ex) {
-                System.out.println(<span class="hljs-string">"send message error : "</span> + ex);
-            } <span class="hljs-keyword">finally</span> {
+            } catch (Throwable ex) {
+                System.out.println(&quot;send message error : &quot; + ex);
+            } finally {
                 latch.countDown();
             }
         }
     }
 }
 </code></pre>
-</li>
-</ul>
+<hr>
+<p><a href="#top">Back to top</a></p>
 </div></section><footer class="footer-container"><div class="footer-body"><img src="/img/incubator-logo.svg"/><div class="cols-container"><div class="col col-24"><p>Apache InLong (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with  [...]
 	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
 	<script src="https://f.alicdn.com/react/15.4.1/react-dom.min.js"></script>
diff --git a/en-us/docs/modules/tubemq/producer_example.json b/en-us/docs/modules/tubemq/producer_example.json
index 2318366..37a403c 100644
--- a/en-us/docs/modules/tubemq/producer_example.json
+++ b/en-us/docs/modules/tubemq/producer_example.json
@@ -1,6 +1,6 @@
 {
   "filename": "producer_example.md",
-  "__html": "<h2>Producer Example</h2>\n<p>TubeMQ provides two ways to initialize session factory, TubeSingleSessionFactory and TubeMultiSessionFactory:</p>\n<ul>\n<li>TubeSingleSessionFactory creates only one session in the lifecycle, this is very useful in streaming scenarios.</li>\n<li>TubeMultiSessionFactory creates new session on every call.</li>\n</ul>\n<ol>\n<li>\n<p>TubeSingleSessionFactory</p>\n<ul>\n<li>Send Message Synchronously</li>\n</ul>\n<pre><code class=\"language-java\"> [...]
+  "__html": "<h2>1 Producer Example</h2>\n<p>TubeMQ provides two ways to initialize session factory, TubeSingleSessionFactory and TubeMultiSessionFactory:</p>\n<ul>\n<li>TubeSingleSessionFactory creates only one session in the lifecycle, this is very useful in streaming scenarios.</li>\n<li>TubeMultiSessionFactory creates new session on every call.</li>\n</ul>\n<h3>1.1 TubeSingleSessionFactory</h3>\n<h4>1.1.1 Send Message Synchronously</h4>\n<pre><code>```java\n\npublic final class SyncP [...]
   "link": "/en-us/docs/modules/tubemq/producer_example.html",
   "meta": {
     "title": "Producer Example - Apache InLong's TubeMQ module"
diff --git a/en-us/docs/modules/tubemq/producer_example.md b/en-us/docs/modules/tubemq/producer_example.md
index 34d551c..9849c2f 100644
--- a/en-us/docs/modules/tubemq/producer_example.md
+++ b/en-us/docs/modules/tubemq/producer_example.md
@@ -2,14 +2,16 @@
 title: Producer Example - Apache InLong's TubeMQ module
 ---
 
-## Producer Example
+## 1 Producer Example
   TubeMQ provides two ways to initialize session factory, TubeSingleSessionFactory and TubeMultiSessionFactory:
   - TubeSingleSessionFactory creates only one session in the lifecycle, this is very useful in streaming scenarios.
   - TubeMultiSessionFactory creates new session on every call.
 
-1. TubeSingleSessionFactory
-   - Send Message Synchronously
+### 1.1 TubeSingleSessionFactory
+#### 1.1.1 Send Message Synchronously
+
     ```java
     public final class SyncProducerExample {
     
         public static void main(String[] args) throws Throwable {
@@ -31,7 +33,7 @@ title: Producer Example - Apache InLong's TubeMQ module
     }
     ```
      
-   - Send Message Asynchronously
+#### 1.1.2 Send Message Asynchronously
     ```java
     public final class AsyncProducerExample {
      
@@ -65,7 +67,7 @@ title: Producer Example - Apache InLong's TubeMQ module
     }
     ```
      
-   - Send Message With Attributes
+#### 1.1.3 Send Message With Attributes
     ```java
     public final class ProducerWithAttributeExample {
      
@@ -91,7 +93,7 @@ title: Producer Example - Apache InLong's TubeMQ module
     }
     ```
      
-- TubeMultiSessionFactory
+### 1.2 TubeMultiSessionFactory
 
     ```java
     public class MultiSessionProducerExample {
@@ -146,3 +148,5 @@ title: Producer Example - Apache InLong's TubeMQ module
         }
     }
     ```
+---
+<a href="#top">Back to top</a>
\ No newline at end of file
diff --git a/en-us/docs/modules/tubemq/quick_start.html b/en-us/docs/modules/tubemq/quick_start.html
index 8edb445..a63b01d 100644
--- a/en-us/docs/modules/tubemq/quick_start.html
+++ b/en-us/docs/modules/tubemq/quick_start.html
@@ -12,13 +12,13 @@
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h3>Prerequisites</h3>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+<h3>1.1 Prerequisites</h3>
 <ul>
 <li>Java JDK 1.8</li>
 <li>Maven 3.3+</li>
 </ul>
-<h3>Build Distribution Tarball</h3>
+<h3>1.2 Build Distribution Tarball</h3>
 <ul>
 <li>Compile and Package</li>
 </ul>
@@ -39,7 +39,7 @@ mvn <span class="hljs-built_in">test</span>
 <p>After the build, please go to <code>tubemq-server/target</code>. You can find the
 <strong>apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz</strong> file. It is the TubeMQ deployment package, which includes
 scripts, configuration files, dependency jars and web GUI code.</p>
-<h3>Setting Up Your IDE</h3>
+<h3>1.3 Setting Up Your IDE</h3>
 <p>If you want to build and debug source code in IDE, go to the project root, and run</p>
 <pre><code class="language-bash">mvn compile
 </code></pre>
@@ -50,8 +50,8 @@ scripts, configuration files, dependency jars and web GUI code.</p>
     <span class="hljs-tag">&lt;<span class="hljs-name">protocExecutable</span>&gt;</span>/usr/local/bin/protoc<span class="hljs-tag">&lt;/<span class="hljs-name">protocExecutable</span>&gt;</span>
 <span class="hljs-tag">&lt;/<span class="hljs-name">configuration</span>&gt;</span>
 </code></pre>
-<h2>Deploy and Start</h2>
-<h3>Configuration Example</h3>
+<h2>2 Deploy and Start</h2>
+<h3>2.1 Configuration Example</h3>
 <p>There are two components in the cluster: <strong>Master</strong> and <strong>Broker</strong>. Master and Broker
 can be deployed on the same server or on different servers. In this example, we set up our cluster
 like this, with all services running on the same node. ZooKeeper should also be set up in your environment.</p>
@@ -89,7 +89,7 @@ like this, and all services run on the same node. Zookeeper should be setup in y
 </tr>
 </tbody>
 </table>
-<h3>Prerequisites</h3>
+<h3>2.2 Prerequisites</h3>
 <ul>
 <li>ZooKeeper Cluster</li>
 <li><a href="download/download.md">apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz</a> package file</li>
@@ -102,7 +102,7 @@ like this, and all services run on the same node. Zookeeper should be setup in y
 ├── logs
 └── resources
 </code></pre>
-<h3>Configure Master</h3>
+<h3>2.3 Configure Master</h3>
 <p>You can change configurations in <code>conf/master.ini</code> according to cluster information.</p>
 <ul>
 <li>Master IP and Port</li>
@@ -166,7 +166,7 @@ the introduction of availability level.</li>
 </tbody>
 </table>
 <p><strong>Tips</strong>: Please note that the master servers should be clock-synchronized.</p>
-<h3>Configure Broker</h3>
+<h3>2.4 Configure Broker</h3>
 <p>You can change configurations in <code>conf/broker.ini</code> according to cluster information.</p>
 <ul>
 <li>Broker IP and Port</li>
@@ -194,7 +194,7 @@ the introduction of availability level.</li>
 zkNodeRoot=/tubemq
 zkServerAddr=localhost:2181             // multi zookeeper addresses can separate with ","
 </code></pre>
-<h3>Start Master</h3>
+<h3>2.5 Start Master</h3>
 <p>Please go to the <code>bin</code> folder and run this command to start
 the master service.</p>
 <pre><code class="language-bash">./tubemq.sh master start
@@ -202,7 +202,7 @@ the master service.</p>
 <p>You should be able to access <code>http://your-master-ip:8080</code> to see the
 web GUI now.</p>
 <p><img src="img/tubemq-console-gui.png" alt="TubeMQ Console GUI"></p>
-<h4>Configure Broker Metadata</h4>
+<h4>2.5.1 Configure Broker Metadata</h4>
 <p>Before we start a broker service, we need to configure it on the master web GUI first. Go to the <code>Broker List</code> page, click <code>Add Single Broker</code>, and input the new broker information.</p>
 <p><img src="img/tubemq-add-broker-1.png" alt="Add Broker 1"></p>
 <p>In this example, we only need to input broker IP and authToken:</p>
@@ -213,15 +213,15 @@ web GUI now.</p>
 </ol>
 <p>Click the online link to activate the newly added broker.</p>
 <p><img src="img/tubemq-add-broker-2.png" alt="Add Broker 2"></p>
-<h3>Start Broker</h3>
+<h3>2.6 Start Broker</h3>
 <p>Please go to the <code>bin</code> folder and run this command to start the broker service</p>
 <pre><code class="language-bash">./tubemq.sh broker start
 </code></pre>
 <p>Refresh the broker list page in the GUI; you can see that the broker is now registered.</p>
 <p>After the sub-state of the broker changes to <code>idle</code>, we can add topics to that broker.</p>
 <p><img src="img/tubemq-add-broker-3.png" alt="Add Broker 3"></p>
-<h2>Quick Start</h2>
-<h3>Add Topic</h3>
+<h2>3 Quick Start</h2>
+<h3>3.1 Add Topic</h3>
 <p>We can add or manage the cluster topics on the web GUI. To add a new topic, go to the
 topic list page and click the add new topic button</p>
 <p><img src="img/tubemq-add-topic-1.png" alt="Add Topic 1"></p>
@@ -236,27 +236,23 @@ that the topic publish/subscribe state is active now.</p>
 <p><img src="img/tubemq-add-topic-3.png" alt="Add Topic 3"></p>
 <p><img src="img/tubemq-add-topic-4.png" alt="Add Topic 4"></p>
 <p>Now we can use the topic to send messages.</p>
-<h3>Run Example</h3>
+<h3>3.2 Run Example</h3>
 <p>Now we can use the <code>demo</code> topic created before to test our cluster.</p>
-<ul>
-<li>Produce Messages</li>
-</ul>
+<h4>3.2.1 Produce Messages</h4>
 <p>Please don't forget to replace <code>YOUR_MASTER_IP:port</code> with your server IP and port, then start the producer.</p>
 <pre><code class="language-bash"><span class="hljs-built_in">cd</span> /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 ./bin/tubemq-producer-test.sh --master-servers YOUR_MASTER_IP1:port,YOUR_MASTER_IP2:port --topicName demo
 </code></pre>
 <p>From the log, we can see the message is sent out.
 <img src="img/tubemq-send-message.png" alt="Demo 1"></p>
-<ul>
-<li>Consume Messages</li>
-</ul>
+<h4>3.2.2 Consume Messages</h4>
 <p>Please don't forget to replace <code>YOUR_MASTER_IP:port</code> with your server IP and port, then start the consumer.</p>
 <pre><code class="language-bash"><span class="hljs-built_in">cd</span> /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 ./bin/tubemq-consumer-test.sh --master-servers YOUR_MASTER_IP1:port,YOUR_MASTER_IP2:port --topicName demo --groupName test_consume
 </code></pre>
 <p>From the log, we can see the message received by the consumer.
 <img src="img/tubemq-consume-message.png" alt="Demo 2"></p>
-<h2>The End</h2>
+<h2>4 The End</h2>
 <p>At this point, the compilation, deployment, system configuration, startup, production and consumption of TubeMQ have all been walked through. For more in-depth content, please check the &quot;TubeMQ HTTP API&quot; documentation and make the corresponding configuration settings.</p>
 <hr>
 </div></section><footer class="footer-container"><div class="footer-body"><img src="/img/incubator-logo.svg"/><div class="cols-container"><div class="col col-24"><p>Apache InLong (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with  [...]
diff --git a/en-us/docs/modules/tubemq/quick_start.json b/en-us/docs/modules/tubemq/quick_start.json
index 1b13b97..1ea60da 100644
--- a/en-us/docs/modules/tubemq/quick_start.json
+++ b/en-us/docs/modules/tubemq/quick_start.json
@@ -1,6 +1,6 @@
 {
   "filename": "quick_start.md",
-  "__html": "<h2>Build TubeMQ</h2>\n<h3>Prerequisites</h3>\n<ul>\n<li>Java JDK 1.8</li>\n<li>Maven 3.3+</li>\n</ul>\n<h3>Build Distribution Tarball</h3>\n<ul>\n<li>Compile and Package</li>\n</ul>\n<pre><code class=\"language-bash\">mvn clean package -DskipTests\n</code></pre>\n<ul>\n<li>Run Unit Tests:</li>\n</ul>\n<pre><code class=\"language-bash\">mvn <span class=\"hljs-built_in\">test</span>\n</code></pre>\n<ul>\n<li>Build Individual Module:</li>\n</ul>\n<pre><code class=\"language-ba [...]
+  "__html": "<h2>1 Build TubeMQ</h2>\n<h3>1.1 Prerequisites</h3>\n<ul>\n<li>Java JDK 1.8</li>\n<li>Maven 3.3+</li>\n</ul>\n<h3>1.2 Build Distribution Tarball</h3>\n<ul>\n<li>Compile and Package</li>\n</ul>\n<pre><code class=\"language-bash\">mvn clean package -DskipTests\n</code></pre>\n<ul>\n<li>Run Unit Tests:</li>\n</ul>\n<pre><code class=\"language-bash\">mvn <span class=\"hljs-built_in\">test</span>\n</code></pre>\n<ul>\n<li>Build Individual Module:</li>\n</ul>\n<pre><code class=\"l [...]
   "link": "/en-us/docs/modules/tubemq/quick_start.html",
   "meta": {
     "title": "Quick Start - Apache InLong's TubeMQ module"
diff --git a/en-us/docs/modules/tubemq/quick_start.md b/en-us/docs/modules/tubemq/quick_start.md
index 315ad69..3821050 100644
--- a/en-us/docs/modules/tubemq/quick_start.md
+++ b/en-us/docs/modules/tubemq/quick_start.md
@@ -2,13 +2,13 @@
 title: Quick Start - Apache InLong's TubeMQ module
 ---
 
-## Build TubeMQ
+## 1 Build TubeMQ
 
-### Prerequisites
+### 1.1 Prerequisites
 - Java JDK 1.8
 - Maven 3.3+
 
-### Build Distribution Tarball
+### 1.2 Build Distribution Tarball
 - Compile and Package
 ```bash
 mvn clean package -DskipTests
@@ -30,7 +30,7 @@ After the build, please go to `tubemq-server/target`. You can find the
 **apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz** file. It is the TubeMQ deployment package, which includes
 scripts, configuration files, dependency jars and web GUI code.
 
-### Setting Up Your IDE
+### 1.3 Setting Up Your IDE
 If you want to build and debug source code in IDE, go to the project root, and run
 ```bash
 mvn compile
@@ -45,9 +45,9 @@ This command will generate the Java source files from the `protoc` files, the ge
 </configuration>
 ```
 
-## Deploy and Start
+## 2 Deploy and Start
 
-### Configuration Example
+### 2.1 Configuration Example
 There are two components in the cluster: **Master** and **Broker**. Master and Broker
 can be deployed on the same server or on different servers. In this example, we set up our cluster
 like this, with all services running on the same node. ZooKeeper should also be set up in your environment.
@@ -57,7 +57,7 @@ like this, and all services run on the same node. Zookeeper should be setup in y
 | Broker | 8123 | 8124 | 8081 | Message is stored at /stage/msg_data |
 | Zookeeper | 2181 | | | Offset is stored at /tubemq |
 
-### Prerequisites
+### 2.2 Prerequisites
 - ZooKeeper Cluster
 - [apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz](download/download.md) package file
 
@@ -71,7 +71,7 @@ After you extract the package file, here's the folder structure.
 └── resources
 ```
 
-### Configure Master
+### 2.3 Configure Master
 You can change configurations in `conf/master.ini` according to cluster information.
 - Master IP and Port
 ```ini
@@ -116,7 +116,7 @@ the introduction of availability level.
 **Tips**: Please note that the master servers should be clock-synchronized.
 
 
-### Configure Broker
+### 2.4 Configure Broker
 You can change configurations in `conf/broker.ini` according to cluster information.
 - Broker IP and Port
 ```ini
@@ -143,7 +143,7 @@ zkNodeRoot=/tubemq
 zkServerAddr=localhost:2181             // multiple ZooKeeper addresses can be separated with ","
 ```
 
-### Start Master
+### 2.5 Start Master
 Please go to the `bin` folder and run this command to start
 the master service.
 ```bash
@@ -155,7 +155,7 @@ web GUI now.
 
 ![TubeMQ Console GUI](img/tubemq-console-gui.png)
 
-#### Configure Broker Metadata
+#### 2.5.1 Configure Broker Metadata
 Before we start a broker service, we need to configure it on the master web GUI first. Go to the `Broker List` page, click `Add Single Broker`, and input the new broker information.
 
 ![Add Broker 1](img/tubemq-add-broker-1.png)
 Click the online link to activate the newly added broker.
 
 ![Add Broker 2](img/tubemq-add-broker-2.png)
 
-### Start Broker
+### 2.6 Start Broker
 Please go to the `bin` folder and run this command to start the broker service
 ```bash
 ./tubemq.sh broker start
@@ -181,8 +181,8 @@ After the sub-state of the broker changed to `idle`, we can add topics to that b
 
 ![Add Broker 3](img/tubemq-add-broker-3.png)
 
-## Quick Start
-### Add Topic
+## 3 Quick Start
+### 3.1 Add Topic
 We can add or manage the cluster topics on the web GUI. To add a new topic, go to the
 topic list page and click the add new topic button
 
@@ -208,10 +208,10 @@ that the topic publish/subscribe state is active now.
 
 Now we can use the topic to send messages.
 
-### Run Example
+### 3.2 Run Example
 Now we can use the `demo` topic created before to test our cluster.
 
-- Produce Messages
+#### 3.2.1 Produce Messages
 
 Please don't forget to replace `YOUR_MASTER_IP:port` with your server IP and port, then start the producer.
 
@@ -223,7 +223,7 @@ cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 From the log, we can see the message is sent out.
 ![Demo 1](img/tubemq-send-message.png)
 
-- Consume Messages
+#### 3.2.2 Consume Messages
 
 Please don't forget to replace `YOUR_MASTER_IP:port` with your server IP and port, then start the consumer.
 ```bash
@@ -234,7 +234,7 @@ cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 From the log, we can see the message received by the consumer.
 ![Demo 2](img/tubemq-consume-message.png)
 
-## The End
+## 4 The End
 At this point, the compilation, deployment, system configuration, startup, production and consumption of TubeMQ have all been walked through. For more in-depth content, please check the "TubeMQ HTTP API" documentation and make the corresponding configuration settings.
 
 ---
diff --git a/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.html b/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.html
index bd08d50..de7189f 100644
--- a/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.html
+++ b/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.html
@@ -13,13 +13,13 @@
 </head>
 <body>
 	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h2>背景</h2>
+<h2>1 背景</h2>
 <p>TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思想源于<a href="http://kafka.apache.org/">Apache Kafka</a>。在实现上,则完全采取自适应的方式,结合实战做了很多优化及研发工作,如分区管理、分配机制和全新节点通讯流程,自主开发高性能的底层RPC通讯模块等。
 这些实现使得TubeMQ在保证实时性和一致性的前提下,具有很好的健壮性及更高的吞吐能力。结合目前主流消息中间件使用情况,以Kafka为参照做性能对比测试,对比常规应用场景下两套系统性能。</p>
-<h2>测试场景方案</h2>
+<h2>2 测试场景方案</h2>
 <p>如下是我们根据实际应用场景设计的测试方案:
 <img src="img/perf_scheme.png" alt=""></p>
-<h2>测试结论</h2>
+<h2>3 测试结论</h2>
 <p>用&quot;复仇者联盟&quot;里的角色来形容:</p>
 <table>
 <thead>
@@ -59,8 +59,8 @@
 <li>在过滤消费时,TubeMQ可以极大地降低服务端的网络出流量,同时还会因过滤消费消耗的资源少于全量消费,反过来促进TubeMQ吞吐量提升;kafka无服务端过滤,出流量与全量消费一致,流量无明显的节约;</li>
 <li>资源消耗方面各有差异:TubeMQ由于采用顺序写随机读,CPU消耗很大,Kafka采用顺序写块读,CPU消耗很小,但其他资源,如文件句柄、网络连接等消耗非常的大。在实际的SAAS模式下的运营环境里,Kafka会因为zookeeper依赖出现系统瓶颈,会因生产、消费、Broker众多,受限制的地方会更多,比如文件句柄、网络连接数等,资源消耗会更大;</li>
 </ol>
-<h2>测试环境及配置</h2>
-<p>###【软件版本及部署环境】</p>
+<h2>4 测试环境及配置</h2>
+<h3>4.1 【软件版本及部署环境】</h3>
 <table>
 <thead>
 <tr>
@@ -102,7 +102,7 @@
 </tr>
 </tbody>
 </table>
-<p>###【Broker硬件机型配置】</p>
+<h3>4.2 【Broker硬件机型配置】</h3>
 <table>
 <thead>
 <tr>
@@ -129,7 +129,7 @@
 </tr>
 </tbody>
 </table>
-<p>###【Broker系统配置】</p>
+<h3>4.3 【Broker系统配置】</h3>
 <table>
 <thead>
 <tr>
@@ -161,21 +161,21 @@
 </tr>
 </tbody>
 </table>
-<h2>测试场景及结论</h2>
-<h3>场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能</h3>
+<h2>5 测试场景及结论</h2>
+<h3>5.1 场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能</h3>
 <p><img src="img/perf_scenario_1.png" alt=""></p>
-<p>####【结论】</p>
+<h4>5.1.1 【结论】</h4>
 <p>在单topic不同分区的情况下:</p>
 <ol>
 <li>TubeMQ吞吐量不随分区变化而变化,同时TubeMQ属于顺序写随机读模式,单实例情况下吞吐量要低于Kafka,CPU要高于Kafka;</li>
 <li>Kafka随着分区增多吞吐量略有下降,CPU使用率很低;</li>
 <li>TubeMQ分区由于是逻辑分区,增加分区不影响吞吐量;Kafka分区为物理文件的增加,但增加分区入出流量反而会下降;</li>
 </ol>
-<p>####【指标】
-<img src="img/perf_scenario_1_index.png" alt=""></p>
-<h3>场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况</h3>
+<h4>5.1.2 【指标】</h4>
+<p><img src="img/perf_scenario_1_index.png" alt=""></p>
+<h3>5.2 场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况</h3>
 <p><img src="img/perf_scenario_2.png" alt=""></p>
-<p>####【结论】</p>
+<h4>5.2.1 【结论】</h4>
 <p>从场景一和场景二的测试数据结合来看:</p>
 <ol>
 <li>TubeMQ随着实例数增多,吞吐量增长,在4个实例的时候吞吐量与Kafka持平,磁盘IO使用率比Kafka低,CPU使用率比Kafka高;</li>
@@ -184,14 +184,14 @@
 <li>TubeMQ按照Kafka等同的增加实例(物理文件)后,吞吐量量随之提升,在4个实例的时候测试效果达到并超过Kafka
 5个分区的状态;TubeMQ可以根据业务或者系统配置需要,调整数据读取方式,可以动态提升系统的吞吐量;Kafka随着分区增加,入流量有下降;</li>
 </ol>
-<p>####【指标】</p>
+<h4>5.2.2 【指标】</h4>
 <p><strong>注1 :</strong> 如下场景中,均为单Topic测试下不同分区或实例、不同读取模式场景下的测试,单条消息包长均为1K;</p>
 <p><strong>注2 :</strong>
 读取模式通过admin_upd_def_flow_control_rule设置qryPriorityId为对应值.
 <img src="img/perf_scenario_2_index.png" alt=""></p>
-<h3>场景三:多topic场景,固定消息包大小、实例及分区数,考察100、200、500、1000个topic场景下TubeMQ和Kafka性能情况</h3>
+<h3>5.3 场景三:多topic场景,固定消息包大小、实例及分区数,考察100、200、500、1000个topic场景下TubeMQ和Kafka性能情况</h3>
 <p><img src="img/perf_scenario_3.png" alt=""></p>
-<p>####【结论】</p>
+<h4>5.3.1 【结论】</h4>
 <p>按照多Topic场景下测试:</p>
 <ol>
 <li>TubeMQ随着Topic数增加,生产和消费性能维持在一个均线上,没有特别大的流量波动,占用的文件句柄、内存量、网络连接数不多(1k
@@ -201,20 +201,20 @@ topic下文件句柄约7500个,网络连接150个),但CPU占用比较大
 Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题;</li>
 <li>数据对比来看,TubeMQ相比Kafka运行更稳定,吞吐量以稳定形势呈现,长时间跑吞吐量不下降,资源占用少,但CPU的占用需要后续版本解决;</li>
 </ol>
-<p>####【指标】</p>
+<h4>5.3.2 【指标】</h4>
 <p><strong>注:</strong> 如下场景中,包长均为1K,分区数均为10。
 <img src="img/perf_scenario_3_index.png" alt=""></p>
-<h3>场景四:100个topic,一入一全量出五份部分过滤出:一份全量Topic的Pull消费;过滤消费采用5个不同的消费组,从同样的20个Topic中过滤出10%消息内容</h3>
-<p>####【结论】</p>
+<h3>5.4 场景四:100个topic,一入一全量出五份部分过滤出:一份全量Topic的Pull消费;过滤消费采用5个不同的消费组,从同样的20个Topic中过滤出10%消息内容</h3>
+<h4>5.4.1 【结论】</h4>
 <ol>
 <li>TubeMQ采用服务端过滤的模式,出流量指标与入流量存在明显差异;</li>
 <li>TubeMQ服务端过滤提供了更多的资源给到生产,生产性能比非过滤情况有提升;</li>
 <li>Kafka采用客户端过滤模式,入流量没有提升,出流量差不多是入流量的2倍,同时入出流量不稳定;</li>
 </ol>
-<p>####【指标】</p>
+<h4>5.4.2 【指标】</h4>
 <p><strong>注:</strong> 如下场景中,topic为100,包长均为1K,分区数均为10
 <img src="img/perf_scenario_4_index.png" alt=""></p>
-<h3>场景五:TubeMQ、Kafka数据消费时延比对</h3>
+<h3>5.5 场景五:TubeMQ、Kafka数据消费时延比对</h3>
 <table>
 <thead>
 <tr>
@@ -237,27 +237,27 @@ Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题
 </tbody>
 </table>
 <p>备注:TubeMQ的消费端存在一个等待队列处理消息追平生产时的数据未找到的情况,缺省有200ms的等待时延。测试该项时,TubeMQ消费端要调整拉取时延(ConsumerConfig.setMsgNotFoundWaitPeriodMs())为10ms,或者设置频控策略为10ms。</p>
-<h3>场景六:调整Topic配置的内存缓存大小(memCacheMsgSizeInMB)对吞吐量的影响</h3>
-<p>####【结论】</p>
+<h3>5.6 场景六:调整Topic配置的内存缓存大小(memCacheMsgSizeInMB)对吞吐量的影响</h3>
+<h4>5.6.1 【结论】</h4>
 <ol>
 <li>TubeMQ调整Topic的内存缓存大小能对吞吐量形成正面影响,实际使用时可以根据机器情况合理调整;</li>
 <li>从实际使用情况看,内存大小设置并不是越大越好,需要合理设置该值;</li>
 </ol>
-<p>####【指标】</p>
+<h4>5.6.2 【指标】</h4>
 <p><strong>注:</strong> 如下场景中,消费方式均为读取内存(301)的PULL消费,单条消息包长均为1K
 <img src="img/perf_scenario_6_index.png" alt=""></p>
-<h3>场景七:消费严重滞后情况下两系统的表现</h3>
-<p>####【结论】</p>
+<h3>5.7 场景七:消费严重滞后情况下两系统的表现</h3>
+<h4>5.7.1 【结论】</h4>
 <ol>
 <li>消费严重滞后情况下,TubeMQ和Kafka都会因磁盘IO飙升使得生产消费受阻;</li>
 <li>在带SSD系统里,TubeMQ可以通过SSD转存储消费来换取部分生产和消费入流量;</li>
 <li>按照版本计划,目前TubeMQ的SSD消费转存储特性不是最终实现,后续版本中将进一步改进,使其达到最合适的运行方式;</li>
 </ol>
-<p>####【指标】
-<img src="img/perf_scenario_7.png" alt=""></p>
-<h3>场景八:评估多机型情况下两系统的表现</h3>
+<h4>5.7.2 【指标】</h4>
+<p><img src="img/perf_scenario_7.png" alt=""></p>
+<h3>5.8 场景八:评估多机型情况下两系统的表现</h3>
 <p><img src="img/perf_scenario_8.png" alt=""></p>
-<p>####【结论】</p>
+<h4>5.8.1 【结论】</h4>
 <ol>
 <li>TubeMQ在BX1机型下较TS60机型有更高的吞吐量,同时因IO util达到瓶颈无法再提升,吞吐量在CG1机型下又较BX1达到更高的指标值;</li>
 <li>Kafka在BX1机型下系统吞吐量不稳定,且较TS60下测试的要低,在CG1机型下系统吞吐量达到最高,万兆网卡跑满;</li>
@@ -265,24 +265,25 @@ Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题
 <li>在SSD盘存储条件下,Kafka性能指标达到最好,TubeMQ指标不及Kafka;</li>
 <li>CG1机型数据存储盘较小(仅2.2T),RAID 10配置下90分钟以内磁盘即被写满,无法测试两系统长时间运行情况。</li>
 </ol>
-<p>####【指标】</p>
+<h4>5.8.2 【指标】</h4>
 <p><strong>注1:</strong> 如下场景Topic数均配置500个topic,10个分区,消息包大小为1K字节;</p>
 <p><strong>注2:</strong> TubeMQ采用的是301内存读取模式消费;
 <img src="img/perf_scenario_8_index.png" alt=""></p>
-<h2>附录1 不同机型下资源占用情况图:</h2>
-<p>###【BX1机型测试】
-<img src="img/perf_appendix_1_bx1_1.png" alt="">
+<h2>6 附录</h2>
+<h3>6.1 附录1 不同机型下资源占用情况图:</h3>
+<h4>6.1.1 【BX1机型测试】</h4>
+<p><img src="img/perf_appendix_1_bx1_1.png" alt="">
 <img src="img/perf_appendix_1_bx1_2.png" alt="">
 <img src="img/perf_appendix_1_bx1_3.png" alt="">
 <img src="img/perf_appendix_1_bx1_4.png" alt=""></p>
-<p>###【CG1机型测试】
-<img src="img/perf_appendix_1_cg1_1.png" alt="">
+<h4>6.1.2 【CG1机型测试】</h4>
+<p><img src="img/perf_appendix_1_cg1_1.png" alt="">
 <img src="img/perf_appendix_1_cg1_2.png" alt="">
 <img src="img/perf_appendix_1_cg1_3.png" alt="">
 <img src="img/perf_appendix_1_cg1_4.png" alt=""></p>
-<h2>附录2 多Topic测试时的资源占用情况图:</h2>
-<p>###【100个topic】
-<img src="img/perf_appendix_2_topic_100_1.png" alt="">
+<h3>6.2 附录2 多Topic测试时的资源占用情况图:</h3>
+<h4>6.2.1 【100个topic】</h4>
+<p><img src="img/perf_appendix_2_topic_100_1.png" alt="">
 <img src="img/perf_appendix_2_topic_100_2.png" alt="">
 <img src="img/perf_appendix_2_topic_100_3.png" alt="">
 <img src="img/perf_appendix_2_topic_100_4.png" alt="">
@@ -291,8 +292,8 @@ Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题
 <img src="img/perf_appendix_2_topic_100_7.png" alt="">
 <img src="img/perf_appendix_2_topic_100_8.png" alt="">
 <img src="img/perf_appendix_2_topic_100_9.png" alt=""></p>
-<p>###【200个topic】
-<img src="img/perf_appendix_2_topic_200_1.png" alt="">
+<h4>6.2.2 【200个topic】</h4>
+<p><img src="img/perf_appendix_2_topic_200_1.png" alt="">
 <img src="img/perf_appendix_2_topic_200_2.png" alt="">
 <img src="img/perf_appendix_2_topic_200_3.png" alt="">
 <img src="img/perf_appendix_2_topic_200_4.png" alt="">
@@ -301,8 +302,8 @@ Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题
 <img src="img/perf_appendix_2_topic_200_7.png" alt="">
 <img src="img/perf_appendix_2_topic_200_8.png" alt="">
 <img src="img/perf_appendix_2_topic_200_9.png" alt=""></p>
-<p>###【500个topic】
-<img src="img/perf_appendix_2_topic_500_1.png" alt="">
+<h4>6.2.3 【500个topic】</h4>
+<p><img src="img/perf_appendix_2_topic_500_1.png" alt="">
 <img src="img/perf_appendix_2_topic_500_2.png" alt="">
 <img src="img/perf_appendix_2_topic_500_3.png" alt="">
 <img src="img/perf_appendix_2_topic_500_4.png" alt="">
@@ -311,8 +312,8 @@ Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题
 <img src="img/perf_appendix_2_topic_500_7.png" alt="">
 <img src="img/perf_appendix_2_topic_500_8.png" alt="">
 <img src="img/perf_appendix_2_topic_500_9.png" alt=""></p>
-<p>###【1000个topic】
-<img src="img/perf_appendix_2_topic_1000_1.png" alt="">
+<h4>6.2.4 【1000个topic】</h4>
+<p><img src="img/perf_appendix_2_topic_1000_1.png" alt="">
 <img src="img/perf_appendix_2_topic_1000_2.png" alt="">
 <img src="img/perf_appendix_2_topic_1000_3.png" alt="">
 <img src="img/perf_appendix_2_topic_1000_4.png" alt="">
diff --git a/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.json b/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.json
index aea7f16..214fd1f 100644
--- a/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.json
+++ b/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.json
@@ -1,6 +1,6 @@
 {
   "filename": "tubemq_perf_test_vs_Kafka_cn.md",
-  "__html": "<h1>TubeMQ VS Kafka性能对比测试总结</h1>\n<h2>背景</h2>\n<p>TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思想源于<a href=\"http://kafka.apache.org/\">Apache Kafka</a>。在实现上,则完全采取自适应的方式,结合实战做了很多优化及研发工作,如分区管理、分配机制和全新节点通讯流程,自主开发高性能的底层RPC通讯模块等。\n这些实现使得TubeMQ在保证实时性和一致性的前提下,具有很好的健壮性及更高的吞吐能力。结合目前主流消息中间件使用情况,以Kafka为参照做性能对比测试,对比常规应用场景下两套系统性能。</p>\n<h2>测试场景方案</h2>\n<p>如下是我们根据实际应用场景设计的测试方案:\n<img src=\"img/perf_scheme.png\" alt=\"\"></p>\n<h2>测试结论</h2>\n<p>用&quot;复仇者联盟&quot;里的角色来形容:</p>\n<table>\n<thead>\n<tr>\n<th  [...]
+  "__html": "<h1>TubeMQ VS Kafka性能对比测试总结</h1>\n<h2>1 背景</h2>\n<p>TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思想源于<a href=\"http://kafka.apache.org/\">Apache Kafka</a>。在实现上,则完全采取自适应的方式,结合实战做了很多优化及研发工作,如分区管理、分配机制和全新节点通讯流程,自主开发高性能的底层RPC通讯模块等。\n这些实现使得TubeMQ在保证实时性和一致性的前提下,具有很好的健壮性及更高的吞吐能力。结合目前主流消息中间件使用情况,以Kafka为参照做性能对比测试,对比常规应用场景下两套系统性能。</p>\n<h2>2 测试场景方案</h2>\n<p>如下是我们根据实际应用场景设计的测试方案:\n<img src=\"img/perf_scheme.png\" alt=\"\"></p>\n<h2>3 测试结论</h2>\n<p>用&quot;复仇者联盟&quot;里的角色来形容:</p>\n<table>\n<thead>\n<tr> [...]
   "link": "/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md b/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
index 45916f6..67a0a88 100644
--- a/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
+++ b/en-us/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
@@ -1,14 +1,14 @@
 # TubeMQ vs. Kafka Performance Comparison Test Summary
 
-## Background
+## 1 Background
 TubeMQ is a distributed message middleware developed in-house by Tencent Big Data. Its architecture is inspired by [Apache Kafka](http://kafka.apache.org/), while the implementation takes a fully adaptive approach, with many optimizations and engineering driven by production practice, such as partition management, the assignment mechanism, a brand-new node communication flow, and a self-developed high-performance underlying RPC module.
 These choices give TubeMQ good robustness and higher throughput while guaranteeing timeliness and consistency. Given the usage of today's mainstream message middleware, we ran a performance comparison with Kafka as the reference, comparing the two systems under common application scenarios.
 
-## Test Scenario Design
+## 2 Test Scenario Design
 Below is the test plan we designed from real application scenarios:
 ![](img/perf_scheme.png)
 
-## Test Conclusions
+## 3 Test Conclusions
 Described with characters from "The Avengers":
 
 Role|Test Scenario|Key Points
@@ -24,8 +24,8 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 3. With filtered consumption, TubeMQ can greatly reduce the server's outbound network traffic, and because filtering consumes fewer resources than full-volume consumption it in turn lifts TubeMQ's throughput; Kafka has no server-side filtering, so its outbound traffic equals full-volume consumption with no obvious saving;
 4. Resource consumption differs: TubeMQ uses sequential writes with random reads, so its CPU consumption is high; Kafka uses sequential writes with block reads, so its CPU consumption is low, but it consumes far more of other resources such as file handles and network connections. In a real SaaS-mode operating environment, Kafka hits system bottlenecks through its ZooKeeper dependency, and with many producers, consumers and brokers it is constrained in more places, such as file handles and connection counts, so its resource consumption is higher;
 
-## Test Environment and Configuration
-###【Software Versions and Deployment】
+## 4 Test Environment and Configuration
+### 4.1 【Software Versions and Deployment】
 
 **Role**|**TubeMQ**|**Kafka**
 :---:|---|---
@@ -36,7 +36,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 **Producer**|1× M10 + 1× CG1|1× M10 + 1× CG1
 **Consumer**|6× TS50 10GE machines|6× TS50 10GE machines
 
-###【Broker Hardware Configuration】
+### 4.2 【Broker Hardware Configuration】
 
 **Machine Type**|Configuration|**Notes**
 :---:|---|---
@@ -44,7 +44,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 **BX1-10G**|SA5212M5(6133\*2/16G\*16/4T\*12/10GE\*2) Pcs|                                     
 **CG1-10G**|CG1-10G\_6.0.2.12\_RM760-FX(6133\*2/16G\*16/5200-480G\*6 RAID/10GE\*2)-ODM Pcs |  
 
-###【Broker System Configuration】
+### 4.3 【Broker System Configuration】
 
 | **Config Item**            | **TubeMQ Broker**     | **Kafka Broker**      |
 |:---:|---|---|
@@ -53,25 +53,25 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 | **Config Files**          | Changes to the tubemq-3.8.0 broker.ini file: consumerRegTimeoutMs=35000<br>tcpWriteServiceThread=50<br>tcpReadServiceThread=50<br>primaryPath set to a SATA-disk log directory|Changes to the kafka_2.11-0.10.2.0 server.properties file:<br>log.flush.interval.messages=5000<br>log.flush.interval.ms=10000<br>log.dirs set to a SATA-disk log directory<br>socket.send.buffer.bytes=1024000<br>socket.receive.buffer.bytes=1024000<br>socket.request.max.bytes=2147483600<br>log.segment.bytes=1073741824<br>num.network.threads=25<br>num.io.threads=48< [...]
 | **Other**             | Unless a test case specifies otherwise, each topic is created with:<br>memCacheMsgSizeInMB=5<br>memCacheFlushIntvl=20000<br>memCacheMsgCntInK=10 <br>unflushThreshold=5000<br>unflushInterval=10000<br>unFlushDataHold=5000 | Set in the client code:<br>producer side:<br>props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br>props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br>props.put("linger.ms", "200");<br>props.put("block.on.buffer.full", false);<br>props.pu [...]
               
-## Test Scenarios and Conclusions
+## 5 Test Scenarios and Conclusions
 
-### Scenario 1: baseline, single topic, one-in-two-out model, different consumption modes and message sizes, scaling partitions horizontally, TubeMQ vs. Kafka
+### 5.1 Scenario 1: baseline, single topic, one-in-two-out model, different consumption modes and message sizes, scaling partitions horizontally, TubeMQ vs. Kafka
  ![](img/perf_scenario_1.png)
 
-####【Conclusions】
+#### 5.1.1 【Conclusions】
 
 For a single topic with different partition counts:
 1. TubeMQ throughput does not change with the partition count; since TubeMQ does sequential writes with random reads, its single-instance throughput is lower than Kafka's and its CPU usage higher;
 2. Kafka throughput drops slightly as partitions increase, with very low CPU usage;
 3. TubeMQ partitions are logical, so adding partitions does not affect throughput; Kafka partitions add physical files, yet with more partitions its inbound/outbound traffic actually drops;
 
-####【Metrics】
+#### 5.1.2 【Metrics】
  ![](img/perf_scenario_1_index.png)
 
-### Scenario 2: single topic, one-in-two-out model, fixed message size, scaling instances horizontally, TubeMQ vs. Kafka
+### 5.2 Scenario 2: single topic, one-in-two-out model, fixed message size, scaling instances horizontally, TubeMQ vs. Kafka
  ![](img/perf_scenario_2.png)
 
-####【Conclusions】
+#### 5.2.1 【Conclusions】
 
 Combining the test data from scenarios 1 and 2:
 
@@ -81,7 +81,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 4. After adding instances (physical files) the same way Kafka adds partitions, TubeMQ's throughput rises accordingly; with 4 instances the results reach and exceed those of Kafka
     with 5 partitions. TubeMQ can adjust its data read mode to business or system configuration needs, dynamically raising system throughput; Kafka's inbound traffic drops as partitions increase;
 
-####【Metrics】
+#### 5.2.2 【Metrics】
 
 **Note 1:** All cases below are single-topic tests across different partition/instance counts and read modes; each message is 1 KB;
 
@@ -89,10 +89,10 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 The read mode is switched by setting qryPriorityId to the corresponding value via admin\_upd\_def\_flow\_control\_rule.
  ![](img/perf_scenario_2_index.png)
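 
 For reference, such a switch can be issued against the master's web API roughly as below (host, port and the full parameter list are assumptions; only the method name and `qryPriorityId` come from this document):
 
 ```bash
 # Illustrative only: set the default consume read mode to 301 (main-memory reads).
 curl "http://127.0.0.1:8080/webapi.htm?type=op_modify&method=admin_upd_def_flow_control_rule&qryPriorityId=301&confModAuthToken=abc"
 ```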
 
-### Scenario 3: multi-topic, fixed message size, instance and partition counts; TubeMQ vs. Kafka with 100, 200, 500 and 1000 topics
+### 5.3 Scenario 3: multi-topic, fixed message size, instance and partition counts; TubeMQ vs. Kafka with 100, 200, 500 and 1000 topics
  ![](img/perf_scenario_3.png)
 
-####【Conclusions】
+#### 5.3.1 【Conclusions】
 
 Testing under the multi-topic scenario:
 
@@ -103,25 +103,25 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
     topics are configured, network connections reach 12K and file handles reach 45K) and similar problems;
 4.  Comparing the data, TubeMQ runs more stably than Kafka: throughput stays steady and does not drop over long runs, and resource usage is low, though its CPU usage needs to be addressed in later versions;
 
-####【Metrics】
+#### 5.3.2 【Metrics】
 
 **Note:** In the cases below, the message size is 1 KB and the partition count is 10.
  ![](img/perf_scenario_3_index.png)
 
-### Scenario 4: 100 topics, one inbound stream with one full read plus five filtered reads: one full-volume Pull consumption of the topics; filtered consumption uses 5 different consumer groups, each filtering out 10% of the messages from the same 20 topics
+### 5.4 Scenario 4: 100 topics, one inbound stream with one full read plus five filtered reads: one full-volume Pull consumption of the topics; filtered consumption uses 5 different consumer groups, each filtering out 10% of the messages from the same 20 topics
 
-####【Conclusions】
+#### 5.4.1 【Conclusions】
 
 1.  TubeMQ filters on the server side, so its outbound traffic differs markedly from its inbound traffic;
 2.  TubeMQ's server-side filtering frees more resources for production, so producer performance improves over the unfiltered case;
 3.  Kafka filters on the client side: inbound traffic does not improve, outbound traffic is roughly twice the inbound, and both are unstable;
 
-####【Metrics】
+#### 5.4.2 【Metrics】
 
 **Note:** In the cases below, there are 100 topics, the message size is 1 KB and the partition count is 10
  ![](img/perf_scenario_4_index.png)
 
-### Scenario 5: consumption latency comparison between TubeMQ and Kafka
+### 5.5 Scenario 5: consumption latency comparison between TubeMQ and Kafka
 
 | Type   | Latency            | Ping Latency                |
 |---|---|---|
@@ -130,35 +130,35 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 
 Note: the TubeMQ consumer keeps a wait queue for the data-not-found case that occurs when consumption catches up with production, with a default wait of 200 ms. For this test, the TubeMQ consumer's pull wait (ConsumerConfig.setMsgNotFoundWaitPeriodMs()) must be lowered to 10 ms, or the flow-control policy set to 10 ms.
 
-### Scenario 6: impact of a topic's configured memory cache size (memCacheMsgSizeInMB) on throughput
+### 5.6 Scenario 6: impact of a topic's configured memory cache size (memCacheMsgSizeInMB) on throughput
 
-####【Conclusions】
+#### 5.6.1 【Conclusions】
 
 1.  Adjusting a topic's memory cache size has a positive effect on TubeMQ throughput; in practice it can be tuned to the machine at hand;
 2.  Practical experience shows that bigger is not always better; the value needs to be set sensibly;
 
-####【Metrics】
+#### 5.6.2 【Metrics】
 
  **Note:** In the cases below, consumption is PULL mode reading from memory (301), and each message is 1 KB
  ![](img/perf_scenario_6_index.png)
  
 
-### Scenario 7: behavior of the two systems under severely lagging consumption
+### 5.7 Scenario 7: behavior of the two systems under severely lagging consumption
 
-####【Conclusions】
+#### 5.7.1 【Conclusions】
 
 1.  When consumption lags severely, both TubeMQ and Kafka see production and consumption blocked by soaring disk IO;
 2.  On systems with SSDs, TubeMQ can offload consumption to SSD storage to win back part of the production and consumption inbound traffic;
 3.  Per the release plan, TubeMQ's SSD consumption-offload feature is not in its final form; later versions will improve it further toward the most suitable mode of operation;
 
-####【Metrics】
+#### 5.7.2 【Metrics】
  ![](img/perf_scenario_7.png)
 
 
-### Scenario 8: evaluating the two systems across machine types
+### 5.8 Scenario 8: evaluating the two systems across machine types
  ![](img/perf_scenario_8.png)
       
-####【Conclusions】
+#### 5.8.1 【Conclusions】
 
 1.  TubeMQ throughput on the BX1 machine type is higher than on TS60, capped when IO util hits its bottleneck; on CG1 it reaches a still higher figure than on BX1;
 2.  Kafka throughput on BX1 is unstable and lower than measured on TS60; on CG1 it peaks, saturating the 10 GE NIC;
@@ -166,29 +166,30 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 4.  With SSD storage, Kafka achieves its best figures, while TubeMQ's fall short of Kafka's;
 5.  The CG1 data disks are small (only 2.2 TB); under RAID 10 they fill up within 90 minutes, so long-run behavior of the two systems could not be tested.
 
-####【Metrics】
+#### 5.8.2 【Metrics】
 
 **Note 1:** The cases below all use 500 topics, 10 partitions, and 1 KB messages;
 
 **Note 2:** TubeMQ consumes in memory read mode 301;
  ![](img/perf_scenario_8_index.png)
 
-## Appendix 1: resource usage charts across machine types:
-###【BX1 Tests】
+## 6 Appendix
+### 6.1 Appendix 1: resource usage charts across machine types
+#### 6.1.1 【BX1 Tests】
 ![](img/perf_appendix_1_bx1_1.png)
 ![](img/perf_appendix_1_bx1_2.png)
 ![](img/perf_appendix_1_bx1_3.png)
 ![](img/perf_appendix_1_bx1_4.png)
 
-###【CG1 Tests】
+#### 6.1.2 【CG1 Tests】
 ![](img/perf_appendix_1_cg1_1.png)
 ![](img/perf_appendix_1_cg1_2.png)
 ![](img/perf_appendix_1_cg1_3.png)
 ![](img/perf_appendix_1_cg1_4.png)
 
-## Appendix 2: resource usage charts for multi-topic tests:
+### 6.2 Appendix 2: resource usage charts for multi-topic tests
 
-###【100 topics】
+#### 6.2.1 【100 topics】
 ![](img/perf_appendix_2_topic_100_1.png)
 ![](img/perf_appendix_2_topic_100_2.png)
 ![](img/perf_appendix_2_topic_100_3.png)
@@ -199,7 +200,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_100_8.png)
 ![](img/perf_appendix_2_topic_100_9.png)
  
-###【200 topics】
+#### 6.2.2 【200 topics】
 ![](img/perf_appendix_2_topic_200_1.png)
 ![](img/perf_appendix_2_topic_200_2.png)
 ![](img/perf_appendix_2_topic_200_3.png)
@@ -210,7 +211,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_200_8.png)
 ![](img/perf_appendix_2_topic_200_9.png)
 
-###【500 topics】
+#### 6.2.3 【500 topics】
 ![](img/perf_appendix_2_topic_500_1.png)
 ![](img/perf_appendix_2_topic_500_2.png)
 ![](img/perf_appendix_2_topic_500_3.png)
@@ -221,7 +222,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_500_8.png)
 ![](img/perf_appendix_2_topic_500_9.png)
 
-###【1000 topics】
+#### 6.2.4 【1000 topics】
 ![](img/perf_appendix_2_topic_1000_1.png)
 ![](img/perf_appendix_2_topic_1000_2.png)
 ![](img/perf_appendix_2_topic_1000_3.png)
diff --git a/zh-cn/docs/modules/tubemq/architecture.html b/zh-cn/docs/modules/tubemq/architecture.html
index 332d880..20d69e5 100644
--- a/zh-cn/docs/modules/tubemq/architecture.html
+++ b/zh-cn/docs/modules/tubemq/architecture.html
@@ -7,12 +7,12 @@
 	<meta name="keywords" content="architecture" />
 	<meta name="description" content="architecture" />
 	<!-- 网页标签标题 -->
-	<title>架构介绍 - Apache InLong TubeMQ模块</title>
+	<title>architecture</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>经过多年演变,TubeMQ集群分为如下5个部分:
 <img src="img/sys_structure.png" alt=""></p>
 <ul>
@@ -32,7 +32,7 @@
 <p><strong>Zookeeper</strong>: 负责offset存储的zk部分,该部分功能已弱化到仅做offset的持久化存储,考虑到接下来的多节点副本功能该模块暂时保留。</p>
 </li>
 </ul>
-<h2>Apache InLong TubeMQ模块的系统特点</h2>
+<h2>2 Apache InLong TubeMQ模块的系统特点</h2>
 <ul>
 <li>
 <p><strong>纯Java实现语言</strong>:
@@ -79,15 +79,15 @@ TubeMQ采用连接复用模式,减少连接资源消耗;通过逻辑分区
 基于业务使用上的便利性,我们简化了客户端逻辑,使其做到最小的功能集合;我们采用基于响应消息的接收质量统计算法来自动剔除坏的Broker节点,基于首次使用时作连接尝试来避免大数据量发送时发送受阻(具体内容见后面章节介绍)。</p>
 </li>
 </ul>
-<h2>Broker文件存储方案改进</h2>
+<h2>3 Broker文件存储方案改进</h2>
 <p>以磁盘为数据持久化媒介的系统都面临各种因磁盘问题导致的系统性能问题,TubeMQ系统也不例外,性能提升很大程度上是在解决消息数据如何读写及存储的问题。在这个方面TubeMQ进行了比较多的改进,我们采用存储实例来作为最小的Topic数据管理单元,每个存储实例包括一个文件存储块和一个内存缓存块,每个Topic可以分配多个存储实例:</p>
-<h3>文件存储块</h3>
+<h3>3.1 文件存储块</h3>
 <p>TubeMQ的磁盘存储方案类似Kafka,但又不尽相同,如下图示,每个文件存储块由一个索引文件和一个数据文件组成,partiton为数据文件里的逻辑分区,每个Topic单独维护管理文件存储块的相关机制,包括老化周期,partition个数,是否可读可写等。
 <img src="img/store_file.png" alt=""></p>
-<h3>内存缓存块</h3>
+<h3>3.2 内存缓存块</h3>
 <p>在文件存储块基础上,我们额外增加了一个单独的内存缓存块,即在原有写磁盘基础上增加一块内存,隔离硬盘的慢速影响,数据先刷到内存缓存块,然后由内存缓存块批量地将数据刷到磁盘文件。
 <img src="img/store_mem.png" alt=""></p>
-<h2>Apache InLong TubeMQ模块的客户端演进:</h2>
+<h2>4 Apache InLong TubeMQ模块的客户端演进:</h2>
 <p>业务与TubeMQ接触得最多的是消费侧,怎样更适应业务特点、更方便业务使用我们在这块做了比较多的改进:</p>
 <ul>
 <li>
diff --git a/zh-cn/docs/modules/tubemq/architecture.json b/zh-cn/docs/modules/tubemq/architecture.json
index 099ba58..ff0efec 100644
--- a/zh-cn/docs/modules/tubemq/architecture.json
+++ b/zh-cn/docs/modules/tubemq/architecture.json
@@ -1,8 +1,8 @@
 {
   "filename": "architecture.md",
-  "__html": "<h2>Apache InLong TubeMQ模块的架构</h2>\n<p>经过多年演变,TubeMQ集群分为如下5个部分:\n<img src=\"img/sys_structure.png\" alt=\"\"></p>\n<ul>\n<li>\n<p><strong>Portal</strong>: 负责对外交互和运维操作的Portal部分,包括API和Web两块,API对接集群之外的管理系统,Web是在API基础上对日常运维功能做的页面封装;</p>\n</li>\n<li>\n<p><strong>Master</strong>: 负责集群控制的Control部分,该部分由1个或多个Master节点组成,Master HA通过Master节点间心跳保活、实时热备切换完成(这是大家使用TubeMQ的Lib时需要填写对应集群所有Master节点地址的原因),主Master负责管理整个集群的状态、资源调度、权限检查、元数据查询等;</p>\n</li>\n<li>\n<p><strong>Broker</strong>: 负责实际数据存储 [...]
+  "__html": "<h2>1 Apache InLong TubeMQ模块的架构</h2>\n<p>经过多年演变,TubeMQ集群分为如下5个部分:\n<img src=\"img/sys_structure.png\" alt=\"\"></p>\n<ul>\n<li>\n<p><strong>Portal</strong>: 负责对外交互和运维操作的Portal部分,包括API和Web两块,API对接集群之外的管理系统,Web是在API基础上对日常运维功能做的页面封装;</p>\n</li>\n<li>\n<p><strong>Master</strong>: 负责集群控制的Control部分,该部分由1个或多个Master节点组成,Master HA通过Master节点间心跳保活、实时热备切换完成(这是大家使用TubeMQ的Lib时需要填写对应集群所有Master节点地址的原因),主Master负责管理整个集群的状态、资源调度、权限检查、元数据查询等;</p>\n</li>\n<li>\n<p><strong>Broker</strong>: 负责实际数据 [...]
   "link": "/zh-cn/docs/modules/tubemq/architecture.html",
   "meta": {
-    "title": "架构介绍 - Apache InLong TubeMQ模块"
+    "架构介绍 - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/architecture.md b/zh-cn/docs/modules/tubemq/architecture.md
index 9185d56..cc48c29 100644
--- a/zh-cn/docs/modules/tubemq/architecture.md
+++ b/zh-cn/docs/modules/tubemq/architecture.md
@@ -1,8 +1,8 @@
 ---
-title: 架构介绍 - Apache InLong TubeMQ模块
+架构介绍 - Apache InLong TubeMQ模块
 ---
 
-## Apache InLong TubeMQ模块的架构 
+## 1 Apache InLong TubeMQ模块的架构 
 经过多年演变,TubeMQ集群分为如下5个部分:
 ![](img/sys_structure.png)
 
@@ -17,7 +17,7 @@ title: 架构介绍 - Apache InLong TubeMQ模块
 - **Zookeeper**: 负责offset存储的zk部分,该部分功能已弱化到仅做offset的持久化存储,考虑到接下来的多节点副本功能该模块暂时保留。
 
 
-## Apache InLong TubeMQ模块的系统特点
+## 2 Apache InLong TubeMQ模块的系统特点
 - **纯Java实现语言**:
 TubeMQ采用纯Java语言开发,便于开发人员快速熟悉项目及问题处理;
 
@@ -52,19 +52,19 @@ TubeMQ采用连接复用模式,减少连接资源消耗;通过逻辑分区
 基于业务使用上的便利性,我们简化了客户端逻辑,使其做到最小的功能集合;我们采用基于响应消息的接收质量统计算法来自动剔除坏的Broker节点,基于首次使用时作连接尝试来避免大数据量发送时发送受阻(具体内容见后面章节介绍)。
 
 
-## Broker文件存储方案改进 
+## 3 Broker文件存储方案改进 
 以磁盘为数据持久化媒介的系统都面临各种因磁盘问题导致的系统性能问题,TubeMQ系统也不例外,性能提升很大程度上是在解决消息数据如何读写及存储的问题。在这个方面TubeMQ进行了比较多的改进,我们采用存储实例来作为最小的Topic数据管理单元,每个存储实例包括一个文件存储块和一个内存缓存块,每个Topic可以分配多个存储实例:
 
-### 文件存储块
+### 3.1 文件存储块
  TubeMQ的磁盘存储方案类似Kafka,但又不尽相同,如下图示,每个文件存储块由一个索引文件和一个数据文件组成,partiton为数据文件里的逻辑分区,每个Topic单独维护管理文件存储块的相关机制,包括老化周期,partition个数,是否可读可写等。
 ![](img/store_file.png)
 
-### 内存缓存块
+### 3.2 内存缓存块
  在文件存储块基础上,我们额外增加了一个单独的内存缓存块,即在原有写磁盘基础上增加一块内存,隔离硬盘的慢速影响,数据先刷到内存缓存块,然后由内存缓存块批量地将数据刷到磁盘文件。
 ![](img/store_mem.png)
 
 
-## Apache InLong TubeMQ模块的客户端演进: ##
+## 4 Apache InLong TubeMQ模块的客户端演进:
 业务与TubeMQ接触得最多的是消费侧,怎样更适应业务特点、更方便业务使用我们在这块做了比较多的改进:
 
 - **数据拉取模式支持Push、Pull:**
diff --git a/zh-cn/docs/modules/tubemq/client_rpc.html b/zh-cn/docs/modules/tubemq/client_rpc.html
index d5c0238..b656ddb 100644
--- a/zh-cn/docs/modules/tubemq/client_rpc.html
+++ b/zh-cn/docs/modules/tubemq/client_rpc.html
@@ -7,18 +7,17 @@
 	<meta name="keywords" content="client_rpc" />
 	<meta name="description" content="client_rpc" />
 	<!-- 网页标签标题 -->
-	<title>客户端RPC - Apache InLong TubeMQ模块</title>
+	<title>client_rpc</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h2>总体介绍:</h2>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>这部分介绍内容在/org/apache/inlong/tubemq/corerpc模块下可以找到对应实现,Apache InLong TubeMQ模块的各个节点间(Client、Master、Broker)通过TCP协议长连接交互,其消息采用的是 【二进制 + Protobuf编码】组合方式进行定义,如下图示:
 <img src="img/client_rpc/rpc_bytes_def.png" alt=""></p>
 <p>在TCP里我们看到的都是二进制流,我们定义了4字节的msgToken消息头字段RPC_PROTOCOL_BEGIN_TOKEN,用来区分每一条消息以及识别对端的合法性,客户端收到的消息不是以该字段开始的响应消息时,说明连接方非本系统支持的协议,或者返回数据出现了异常,这个时候需要关闭该连接,提示错误退出或者重连;紧接着的是4字节的消息序列号serialNo,该字段由请求方生成通过请求消息携带给服务端,服务器端完成该请求消息服务后通过请求消息的对应响应消息原样返回,主要用于客户端关联请求响应的上下文;4字节的listSize字段表示接下来按照PB编码的数据块个数,即后面跟随的[&amp;lt;len&amp;gt;&amp;lt;data&amp;gt;]内容的块数,目前协议定义下该字段不为0;[&amp;lt;len&amp;gt;&amp;lt;data&amp;gt;]是2个字段组合,即数据块长度,数据,主要是表示这个数据块长度及具体的数据。</p>
 <p>为什么会以listSize [&amp;lt;len&amp;gt;&amp;lt;data&amp;gt;]形式定义pb数据内容?因为在TubeMQ的这个实现中,序列化后的PB数据是通过ByteBuffer对象保存的,Java里ByteBuffer存在一个最大块长8196,超过单个块长度的PB消息内容就需要用多个ByteBuffer保存,序列化到TCP消息时候,这块没有统计总长,直接按照PB序列化的ByteBuffer列表写入到了消息中。 <strong>在多语言实现时候,这块需要特别注意:</strong> 需要将PB数据内容序列化成块数组(pb编解码里有对应支持)。</p>
-<h2>PB格式编码:</h2>
+<h2>2 PB格式编码:</h2>
 <p>PB格式编码分为RPC框架定义,到Master的消息编码和到Broker的消息编码三个部分,大家采用protobuf直接编译就可以获得不同语言的编解码,使用起来非常的方便:
 <img src="img/client_rpc/rpc_proto_def.png" alt=""></p>
 <p>RPC.proto定义了6个结构,分为2大类:请求消息与响应消息,响应消息里又分为正常的响应返回以及抛异常情况下的响应返回:
@@ -27,8 +26,8 @@
 <img src="img/client_rpc/rpc_conn_detail.png" alt=""></p>
 <p>其中flag标记的是否请求消息,后面3个标记的是消息跟踪的相关内容,目前没有使用;相关的服务类型,协议版本,服务类型等是固定的映射关系,比较关键的一个参数RequestBody.timeout是一个请求被服务器收到到实际处理时的最大允许等待时间长,超过就丢弃,目前缺省为10秒,请求填写具体见如下部分:
 <img src="img/client_rpc/rpc_header_fill.png" alt=""></p>
-<h2>客户端的PB请求响应交互图:</h2>
-<p><strong>Producer交互图</strong>:</p>
+<h2>3 客户端的PB请求响应交互图:</h2>
+<h3>3.1 Producer交互图:</h3>
 <p>Producer在系统中一共4对指令,到master是要做注册,心跳,退出操作;到broker只有发送消息:
 <img src="img/client_rpc/rpc_producer_diagram.png" alt=""></p>
 <p>从这里我们可以看到,Producer实现逻辑就是从Master侧获取指定Topic对应的分区列表等元数据信息,获得这些信息后按照客户端的规则选择分区并把消息发送给对应的Broker,而到Broker的发送是直接进行TCP连接方式进行。有同学会疑惑这样是否不安全,不注册直接发消息方式,最初考虑是内部使用尽可能的接纳消息,后来考虑安全问题,我们在这个基础上增加了授权信息携带,在服务端进行认证和授权检查,解决客户端绕开Master直连以及无授权乱发消息的情况,但这种只会在严格环境开启。生产端这块 <strong>多语言实现的时候需要注意:</strong></p>
@@ -46,11 +45,11 @@
 <p>Producer到Broker的连接要注意异常检测,长期运行场景,要能检测出Broker坏点,以及长期不发消息,要将到Broker的连接回收,避免运行不稳定。</p>
 </li>
 </ol>
-<p><strong>Consumer交互图</strong>:</p>
+<h3>3.2 Consumer交互图:</h3>
 <p>Consumer一共7对指令,到master是要做注册,心跳,退出操作;到broker包括注册,注销,心跳,拉取消息,确认消息4对,其中到Broker的注册注销是同一个命令,用了不同的状态码表示:
 <img src="img/client_rpc/rpc_consumer_diagram.png" alt=""></p>
 <p>从上图我们可以看到,Consumer首先要注册到Master,但注册到Master时并没有立即获取到元数据信息,原因是TubeMQ采用的是服务器端负载均衡模式,客户端需要等待服务器派发消费分区信息;Consumer到Broker需要进行注册注销操作,原因在于消费时候分区是独占消费,即同一时刻同一分区只能被同组的一个消费者进行消费,为了解决这个问题,需要客户端进行注册,获得分区的消费权限;消息拉取与消费确认需要成对出现,虽然协议支持多次拉取然后最后一次确认处理,但客户端可能超时丢失分区的消费权限,从而导致数据回滚重复消费触发,数据积攒的越多重复消费的量就越多,所以按照1:1的提交比较合适。</p>
-<h2>客户端功能集合:</h2>
+<h2>4 客户端功能集合:</h2>
 <table>
 <thead>
 <tr>
@@ -282,12 +281,13 @@
 </tr>
 </tbody>
 </table>
-<h2>客户端功能CaseByCase实现介绍:</h2>
-<p><strong>客户端与服务器端RPC交互过程</strong>:</p>
+<h2>5 客户端功能CaseByCase实现介绍:</h2>
+<h3>5.1 客户端与服务器端RPC交互过程:</h3>
 <hr>
 <p><img src="img/client_rpc/rpc_inner_structure.png" alt=""></p>
 <p>如上图示,客户端要维持已发请求消息的本地保存,直到RPC超时,或者收到响应消息,响应消息通过请求发送时生成的SerialNo关联;从服务器端收到的Broker信息,以及Topic信息,SDK要保存在本地,并根据最新的返回信息进行更新,以及定期的上报给服务器端;SDK要维持到Master或者Broker的心跳,如果发现Master反馈注册超时错误时,要进行重注册操作;SDK要基于Broker进行连接建立,同一个进程不同对象之间,要允许业务进行选择,是支持按对象建立连接,还是按照进程建立连接。</p>
-<h2><strong>Producer到Master注册</strong>:</h2>
+<h3>5.2 Producer到Master注册:</h3>
+<hr>
 <p><img src="img/client_rpc/rpc_producer_register2M.png" alt=""></p>
 <p><strong>ClientId</strong>:Producer需要在启动时候构造一个ClientId,目前的构造规则是:</p>
 <p>Java的SDK版本里ClientId = 节点IP地址(IPV4) + &quot;-&quot; + 进程ID + &quot;-&quot; + createTime+&quot;-&quot; +本进程内第n个实例+&quot;-&quot; +客户端版本ID 【+ &quot;-&quot; + SDK实现语言】,建议其他语言增加如上标记,以便于问题排查。该ID值在Producer生命周期内有效;</p>
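
按上述规则拼装ClientId的一个Java示意如下(instanceIndex、sdkVersion为示例参数,实际实现以SDK源码为准):

```java
import java.lang.management.ManagementFactory;
import java.net.InetAddress;
import java.net.UnknownHostException;

public final class ClientIdSketch {
    /** 按文中规则拼装ClientId,末尾附SDK实现语言标记便于问题排查 */
    public static String buildClientId(int instanceIndex, String sdkVersion)
            throws UnknownHostException {
        String ip = InetAddress.getLocalHost().getHostAddress();                   // 节点IP地址(IPV4)
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0]; // 进程ID
        long createTime = System.currentTimeMillis();                              // 创建时间
        return ip + "-" + pid + "-" + createTime + "-" + instanceIndex
                + "-" + sdkVersion + "-java";
    }
}
```
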
@@ -304,15 +304,18 @@
 <p><img src="img/client_rpc/rpc_master_authorizedinfo.png" alt=""></p>
 <p><strong>visitAuthorizedToken</strong>:防客户端绕开Master的访问授权Token,如果有该数据,SDK要保存本地,并且在后续访问Broker时携带该信息;如果后续心跳时该字段有变更,则需要更新本地缓存的该字段数据;</p>
 <p><strong>authAuthorizedToken</strong>:认证通过的授权Token,如果有该字段数据,要保存,并且在后续访问Master及Broker时携带该字段信息;如果后续心跳时该字段有变更,则需要更新本地缓存的该字段数据;</p>
-<h2><strong>Producer到Master保持心跳</strong>:</h2>
+<h3>5.3 Producer到Master保持心跳:</h3>
+<hr>
 <p><img src="img/client_rpc/rpc_producer_heartbeat2M.png" alt=""></p>
 <p><strong>topicInfos</strong>:SDK发布的Topic对应的元数据信息,包括分区信息以及所在的Broker,具体解码方式如下,由于元数据非常的多,如果将对象数据原样透传所产生的出流量会非常的大,所以我们通过编码方式做了改进:</p>
 <p><img src="img/client_rpc/rpc_convert_topicinfo.png" alt=""></p>
 <p><strong>requireAuth</strong>:标识Master之前的授权访问码(authAuthorizedToken)过期,要求SDK下一次请求,进行用户名及密码的签名信息上报;</p>
-<h2><strong>Producer到Master关闭退出</strong>:</h2>
+<h3>5.4 Producer到Master关闭退出:</h3>
+<hr>
 <p><img src="img/client_rpc/rpc_producer_close2M.png" alt=""></p>
 <p>需要注意的是,如果认证开启,关闭会做认证,以避免外部干扰操作。</p>
-<h2><strong>Producer到Broker发送消息</strong>:</h2>
+<h3>5.5 Producer到Broker发送消息:</h3>
+<hr>
 <p>该部分的内容主要和Message的定义有关联,其中</p>
 <p><img src="img/client_rpc/rpc_producer_sendmsg2B.png" alt=""></p>
 <p><strong>Data</strong>是Message的二进制字节流:</p>
@@ -320,7 +323,8 @@
 <p><strong>sentAddr</strong>是SDK所在的本机IPv4地址转成32位的数字ID;</p>
 <p><strong>msgType</strong>是过滤的消息类型,msgTime是SDK发消息时的消息时间,其值来源于构造Message时通过putSystemHeader填写的值,在Message里有对应的API获取;</p>
 <p><strong>requireAuth</strong>:到Broker进行数据生产的要求认证操作,考虑性能问题,目前未生效,发送消息里填写的authAuthorizedToken值以Master侧提供的值为准,并且随Master侧改变而改变。</p>
-<h2><strong>分区负载均衡过程</strong>:</h2>
+<h3>5.6 分区负载均衡过程:</h3>
+<hr>
 <p>Apache InLong TubeMQ模块目前采用的是服务器端负载均衡模式,均衡过程由服务器管理维护;后续版本会增加客户端负载均衡模式,形成2种模式共存的情况,由业务根据需要选择不同的均衡方式。</p>
 <p><strong>服务器端负载均衡过程如下</strong>:</p>
 <ul>
diff --git a/zh-cn/docs/modules/tubemq/client_rpc.json b/zh-cn/docs/modules/tubemq/client_rpc.json
index 72f5c47..ed9fbec 100644
--- a/zh-cn/docs/modules/tubemq/client_rpc.json
+++ b/zh-cn/docs/modules/tubemq/client_rpc.json
@@ -1,8 +1,8 @@
 {
   "filename": "client_rpc.md",
-  "__html": "<h1>Apache InLong TubeMQ模块的RPC定义:</h1>\n<h2>总体介绍:</h2>\n<p>这部分介绍内容在/org/apache/inlong/tubemq/corerpc模块下可以找到对应实现,Apache InLong TubeMQ模块的各个节点间(Client、Master、Broker)通过TCP协议长连接交互,其消息采用的是 【二进制 + Protobuf编码】组合方式进行定义,如下图示:\n<img src=\"img/client_rpc/rpc_bytes_def.png\" alt=\"\"></p>\n<p>在TCP里我们看到的都是二进制流,我们定义了4字节的msgToken消息头字段RPC_PROTOCOL_BEGIN_TOKEN,用来区分每一条消息以及识别对端的合法性,客户端收到的消息不是以该字段开始的响应消息时,说明连接方非本系统支持的协议,或者返回数据出现了异常,这个时候需要关闭该连接,提示错误退出或者重连;紧接着的是4字节的消息序列号serialNo,该字段由请求方生成通过请求消息携带给 [...]
+  "__html": "<h2>1 总体介绍:</h2>\n<p>这部分介绍内容在/org/apache/inlong/tubemq/corerpc模块下可以找到对应实现,Apache InLong TubeMQ模块的各个节点间(Client、Master、Broker)通过TCP协议长连接交互,其消息采用的是 【二进制 + Protobuf编码】组合方式进行定义,如下图示:\n<img src=\"img/client_rpc/rpc_bytes_def.png\" alt=\"\"></p>\n<p>在TCP里我们看到的都是二进制流,我们定义了4字节的msgToken消息头字段RPC_PROTOCOL_BEGIN_TOKEN,用来区分每一条消息以及识别对端的合法性,客户端收到的消息不是以该字段开始的响应消息时,说明连接方非本系统支持的协议,或者返回数据出现了异常,这个时候需要关闭该连接,提示错误退出或者重连;紧接着的是4字节的消息序列号serialNo,该字段由请求方生成通过请求消息携带给服务端,服务器端完成该请求消息服务后通过请求消息的对应响应
 消息原样返回,主要 [...]
   "link": "/zh-cn/docs/modules/tubemq/client_rpc.html",
   "meta": {
-    "title": "客户端RPC - Apache InLong TubeMQ模块"
+    "客户端RPC - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/client_rpc.md b/zh-cn/docs/modules/tubemq/client_rpc.md
index 7abae1e..6ee962c 100644
--- a/zh-cn/docs/modules/tubemq/client_rpc.md
+++ b/zh-cn/docs/modules/tubemq/client_rpc.md
@@ -1,10 +1,8 @@
 ---
-title: 客户端RPC - Apache InLong TubeMQ模块
+客户端RPC - Apache InLong TubeMQ模块
 ---
 
-# Apache InLong TubeMQ模块的RPC定义:
-
-## 总体介绍:
+## 1 总体介绍:
 
 这部分介绍内容在/org/apache/inlong/tubemq/corerpc模块下可以找到对应实现,Apache InLong TubeMQ模块的各个节点间(Client、Master、Broker)通过TCP协议长连接交互,其消息采用的是 【二进制 + Protobuf编码】组合方式进行定义,如下图示:
 ![](img/client_rpc/rpc_bytes_def.png)
@@ -14,7 +12,7 @@ title: 客户端RPC - Apache InLong TubeMQ模块
 为什么会以listSize [\&lt;len\&gt;\&lt;data\&gt;]形式定义pb数据内容?因为在TubeMQ的这个实现中,序列化后的PB数据是通过ByteBuffer对象保存的,Java里ByteBuffer存在一个最大块长8196,超过单个块长度的PB消息内容就需要用多个ByteBuffer保存,序列化到TCP消息时候,这块没有统计总长,直接按照PB序列化的ByteBuffer列表写入到了消息中。 **在多语言实现时候,这块需要特别注意:** 需要将PB数据内容序列化成块数组(pb编解码里有对应支持)。
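
为便于多语言实现对照,下面给出按上述【msgToken + serialNo + listSize + [len][data]】格式组帧的一个Java示意;其中RPC_PROTOCOL_BEGIN_TOKEN的具体取值为假设值,实际以corerpc模块内的常量定义为准:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.List;

public final class RpcFrameSketch {
    // 假设值,实际取值见/org/apache/inlong/tubemq/corerpc下的定义
    private static final int RPC_PROTOCOL_BEGIN_TOKEN = 0x12345678;

    /** 将序列化后的PB数据块列表按协议格式编码为一条完整的TCP消息 */
    public static byte[] encodeFrame(int serialNo, List<byte[]> pbBlocks) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(RPC_PROTOCOL_BEGIN_TOKEN); // 4字节msgToken,用于识别对端合法性
        out.writeInt(serialNo);                 // 4字节序列号,响应消息原样带回
        out.writeInt(pbBlocks.size());          // 4字节listSize,目前协议定义下不为0
        for (byte[] block : pbBlocks) {         // 每个数据块按[len][data]依次写入
            out.writeInt(block.length);
            out.write(block);
        }
        return bos.toByteArray();
    }
}
```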
 
 
-## PB格式编码:
+## 2 PB格式编码:
 
 PB格式编码分为RPC框架定义,到Master的消息编码和到Broker的消息编码三个部分,大家采用protobuf直接编译就可以获得不同语言的编解码,使用起来非常的方便:
 ![](img/client_rpc/rpc_proto_def.png)
@@ -29,9 +27,9 @@ RPC.proto定义了6个结构,分为2大类:请求消息与响应消息,响
 ![](img/client_rpc/rpc_header_fill.png)
 
 
-## 客户端的PB请求响应交互图:
+## 3 客户端的PB请求响应交互图:
 
-**Producer交互图**:
+### 3.1 Producer交互图:
 
 Producer在系统中一共4对指令,到master是要做注册,心跳,退出操作;到broker只有发送消息:
 ![](img/client_rpc/rpc_producer_diagram.png)
@@ -47,14 +45,14 @@ Producer在系统中一共4对指令,到master是要做注册,心跳,退
 4. Producer到Broker的连接要注意异常检测,长期运行场景,要能检测出Broker坏点,以及长期不发消息,要将到Broker的连接回收,避免运行不稳定。
 
 
-**Consumer交互图**:
+### 3.2 Consumer交互图:
 
 Consumer一共7对指令,到master是要做注册,心跳,退出操作;到broker包括注册,注销,心跳,拉取消息,确认消息4对,其中到Broker的注册注销是同一个命令,用了不同的状态码表示:
 ![](img/client_rpc/rpc_consumer_diagram.png)
 
 从上图我们可以看到,Consumer首先要注册到Master,但注册到Master时并没有立即获取到元数据信息,原因是TubeMQ采用的是服务器端负载均衡模式,客户端需要等待服务器派发消费分区信息;Consumer到Broker需要进行注册注销操作,原因在于消费时候分区是独占消费,即同一时刻同一分区只能被同组的一个消费者进行消费,为了解决这个问题,需要客户端进行注册,获得分区的消费权限;消息拉取与消费确认需要成对出现,虽然协议支持多次拉取然后最后一次确认处理,但客户端可能超时丢失分区的消费权限,从而导致数据回滚重复消费触发,数据积攒的越多重复消费的量就越多,所以按照1:1的提交比较合适。
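
拉取与确认按1:1成对提交,用Java SDK表达大致如下(Master地址、Topic与消费组名为示例值,包路径以实际SDK为准,省略异常处理):

```java
ConsumerConfig consumerConfig =
        new ConsumerConfig("test_1.domain.com:8080", "test-group");
PullMessageConsumer pullConsumer =
        new TubeSingleSessionFactory(consumerConfig).createPullConsumer(consumerConfig);
pullConsumer.subscribe("topic_1", null);
pullConsumer.completeSubscribe();
ConsumerResult result = pullConsumer.getMessage();
if (result.isSuccess()) {
    // 处理result.getMessageList()中的消息……
    // 处理完立即确认,避免超时丢失分区消费权限后触发回滚重复消费
    pullConsumer.confirmConsume(result.getConfirmContext(), true);
}
```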
 
-## 客户端功能集合:
+## 4 客户端功能集合:
 
 | **特性** | **Java** | **C/C++** | **Go** | **Python** | **Rust** | **备注** |
 | --- | --- | --- | --- | --- | --- | --- |
@@ -84,9 +82,9 @@ Consumer一共7对指令,到master是要做注册,心跳,退出操作;
 | 控制消费者拉取消息的频度 | ✅ | | | | | |
 
 
-## 客户端功能CaseByCase实现介绍:
+## 5 客户端功能CaseByCase实现介绍:
 
-**客户端与服务器端RPC交互过程**:
+### 5.1 客户端与服务器端RPC交互过程:
 
 ----------
 
@@ -94,7 +92,8 @@ Consumer一共7对指令,到master是要做注册,心跳,退出操作;
 
 如上图示,客户端要维持已发请求消息的本地保存,直到RPC超时,或者收到响应消息,响应消息通过请求发送时生成的SerialNo关联;从服务器端收到的Broker信息,以及Topic信息,SDK要保存在本地,并根据最新的返回信息进行更新,以及定期的上报给服务器端;SDK要维持到Master或者Broker的心跳,如果发现Master反馈注册超时错误时,要进行重注册操作;SDK要基于Broker进行连接建立,同一个进程不同对象之间,要允许业务进行选择,是支持按对象建立连接,还是按照进程建立连接。
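
“已发请求本地保存并以SerialNo关联响应”这部分逻辑,可以用如下Java示意来理解(纯示意,并非SDK的实际实现):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;

public final class PendingRequestTable {
    private final AtomicInteger serialNoGen = new AtomicInteger(0);
    private final ConcurrentHashMap<Integer, CompletableFuture<byte[]>> pending =
            new ConcurrentHashMap<>();

    public int nextSerialNo() {
        return serialNoGen.incrementAndGet();
    }

    /** 发送请求前登记,返回可用于等待响应的future */
    public CompletableFuture<byte[]> register(int serialNo) {
        CompletableFuture<byte[]> future = new CompletableFuture<>();
        pending.put(serialNo, future);
        return future;
    }

    /** 收到响应后按SerialNo关联上下文;取不到说明该请求已超时被清理 */
    public void onResponse(int serialNo, byte[] body) {
        CompletableFuture<byte[]> future = pending.remove(serialNo);
        if (future != null) {
            future.complete(body);
        }
    }

    /** RPC超时(如缺省10秒)时由定时任务清理并置异常 */
    public void onTimeout(int serialNo) {
        CompletableFuture<byte[]> future = pending.remove(serialNo);
        if (future != null) {
            future.completeExceptionally(new TimeoutException("rpc timeout"));
        }
    }
}
```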
 
-**Producer到Master注册**:
+### 5.2 Producer到Master注册:
+
 ----------
 ![](img/client_rpc/rpc_producer_register2M.png)
 
@@ -129,8 +128,10 @@ Java的SDK版本里ClientId = 节点IP地址(IPV4) + &quot;-&quot; + 进程I
 **authAuthorizedToken**:认证通过的授权Token,如果有该字段数据,要保存,并且在后续访问Master及Broker时携带该字段信息;如果后续心跳时该字段有变更,则需要更新本地缓存的该字段数据;
 
 
-**Producer到Master保持心跳**:
+### 5.3 Producer到Master保持心跳:
+
 ----------
+
 ![](img/client_rpc/rpc_producer_heartbeat2M.png)
 
 **topicInfos**:SDK发布的Topic对应的元数据信息,包括分区信息以及所在的Broker,具体解码方式如下,由于元数据非常的多,如果将对象数据原样透传所产生的出流量会非常的大,所以我们通过编码方式做了改进:
@@ -139,14 +140,18 @@ Java的SDK版本里ClientId = 节点IP地址(IPV4) + &quot;-&quot; + 进程I
 
 **requireAuth**:标识Master之前的授权访问码(authAuthorizedToken)过期,要求SDK下一次请求,进行用户名及密码的签名信息上报;
 
-**Producer到Master关闭退出**:
+### 5.4 Producer到Master关闭退出:
+
 ----------
+
 ![](img/client_rpc/rpc_producer_close2M.png)
 
 需要注意的是,如果认证开启,关闭会做认证,以避免外部干扰操作。
 
-**Producer到Broker发送消息**:
+### 5.5 Producer到Broker发送消息:
+
 ----------
+
 该部分的内容主要和Message的定义有关联,其中
 
 ![](img/client_rpc/rpc_producer_sendmsg2B.png)
@@ -161,8 +166,10 @@ Java的SDK版本里ClientId = 节点IP地址(IPV4) + &quot;-&quot; + 进程I
 
 **requireAuth**:到Broker进行数据生产的要求认证操作,考虑性能问题,目前未生效,发送消息里填写的authAuthorizedToken值以Master侧提供的值为准,并且随Master侧改变而改变。
 
-**分区负载均衡过程**:
+### 5.6 分区负载均衡过程:
+
 ----------
+
 Apache InLong TubeMQ模块目前采用的是服务器端负载均衡模式,均衡过程由服务器管理维护;后续版本会增加客户端负载均衡模式,形成2种模式共存的情况,由业务根据需要选择不同的均衡方式。
 
 **服务器端负载均衡过程如下**:
diff --git a/zh-cn/docs/modules/tubemq/clients_java.html b/zh-cn/docs/modules/tubemq/clients_java.html
index dee7f6e..c7b0f9d 100644
--- a/zh-cn/docs/modules/tubemq/clients_java.html
+++ b/zh-cn/docs/modules/tubemq/clients_java.html
@@ -7,27 +7,25 @@
 	<meta name="keywords" content="clients_java" />
 	<meta name="description" content="clients_java" />
 	<!-- 网页标签标题 -->
-	<title>JAVA SDK API介绍 - Apache InLong TubeMQ模块</title>
+	<title>clients_java</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<hr>
-<h3><strong>1. 基础对象接口介绍:</strong></h3>
-<h4><strong>a) MessageSessionFactory(消息会话工厂):</strong></h4>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+<h3>1.1 MessageSessionFactory(消息会话工厂):</h3>
 <p>TubeMQ 采用MessageSessionFactory(消息会话工厂)来管理网络连接,又根据业务不同客户端是否复用连接细分为TubeSingleSessionFactory(单连接会话工厂)类和TubeMultiSessionFactory(多连接会话工厂)类2个部分,其实现逻辑大家可以从代码中看到,单连接会话通过定义clientFactory静态类,实现了进程内不同客户端连接相同目标服务器时底层物理连接只建立一条的特征,多连接会话里定义的clientFactory为非静态类,从而实现同进程内通过不同会话工厂,创建的客户端所属的连接会话不同建立不同的物理连接。通过这种构造解决连接创建过多的问题,业务可以根据自身需要选择不同的消息会话工厂类,一般情况下我们使用单连接会话工厂类。</p>
-<h4><strong>b) MasterInfo:</strong></h4>
+<h3>1.2 MasterInfo:</h3>
 <p>TubeMQ的Master地址信息对象,该对象的特点是支持配置多个Master地址,由于TubeMQ Master借助BDB的存储能力进行元数据管理,以及服务HA热切能力,Master的地址相应地就需要配置多条信息。该配置信息支持IP、域名两种模式,由于TubeMQ的HA是热切模式,客户端要保证到各个Master地址都是连通的。该信息在初始化TubeClientConfig类对象和ConsumerConfig类对象时使用,考虑到配置的方便性,我们将多条Master地址构造成“ip1:port1,ip2:port2,ip3:port3”格式并进行解析。</p>
-<h4><strong>c) TubeClientConfig:</strong></h4>
+<h3>1.3 TubeClientConfig:</h3>
 <p>MessageSessionFactory(消息会话工厂)初始化类,用来携带创建网络连接信息、客户端控制参数信息的对象类,包括RPC时长设置、Socket属性设置、连接质量检测参数设置、TLS参数设置、认证授权信息设置等信息。</p>
-<h4><strong>d) ConsumerConfig:</strong></h4>
+<h3>1.4 ConsumerConfig:</h3>
 <p>ConsumerConfig类是TubeClientConfig类的子类,它是在TubeClientConfig类基础上增加了Consumer类对象初始化时候的参数携带,因而在一个既有Producer又有Consumer的MessageSessionFactory(消息会话工厂)类对象里,会话工厂类的相关设置以MessageSessionFactory类初始化的内容为准,Consumer类对象按照创建时传递的初始化类对象为准。在consumer里又根据消费行为的不同分为Pull消费者和Push消费者两种,两种特有的参数通过参数接口携带“pull”或“push”不同特征进行区分。</p>
-<h4><strong>e) Message:</strong></h4>
+<h3>1.5 Message:</h3>
 <p>Message类是TubeMQ里传递的消息对象类,业务设置的data会从生产端原样传递给消息接收端,attribute内容是与TubeMQ系统共用的字段,业务填写的内容不会丢失和改写,但该字段有可能会新增TubeMQ系统填写的内容,并在后续的版本中,新增的TubeMQ系统内容有可能去掉而不被通知。该部分需要注意的是Message.putSystemHeader(final String msgType, final String msgTime)接口,该接口用来设置消息的消息类型和消息发送时间,msgType用于消费端过滤用,msgTime用做TubeMQ进行数据收发统计时消息时间统计维度用。</p>
-<h4><strong>f) MessageProducer:</strong></h4>
+<h3>1.6 MessageProducer:</h3>
 <p>消息生产者类,该类完成消息的生产,消息发送分为同步发送和异步发送两种接口,目前消息采用Round Robin方式发往后端服务器,后续这块将考虑按照业务指定的算法进行后端服务器选择方式进行生产。该类使用时需要注意的是,我们支持在初始化时候全量Topic指定的publish,也支持在生产过程中临时增加对新的Topic的publish,但临时增加的Topic不会立即生效,因而在使用新增Topic前,要先调用isTopicCurAcceptPublish接口查询该Topic是否已publish并且被服务器接受,否则有可能消息发送失败。</p>
-<h4><strong>g) MessageConsumer:</strong></h4>
+<h3>1.7 MessageConsumer:</h3>
 <p>该类有两个子类PullMessageConsumer、PushMessageConsumer,通过这两个子类的包装,完成了对业务侧的Pull和Push语义。实际上TubeMQ是采用Pull模式与后端服务进行交互,为了便于业务的接口使用,我们进行了封装,大家可以看到其差别在于Push在启动时初始化了一个线程组,来完成主动的数据拉取操作。需要注意的地方在于:</p>
 <ul>
 <li>
@@ -38,16 +36,12 @@
 </li>
 </ul>
 <hr>
-<h3><strong>2. 接口调用示例:</strong></h3>
-<h4><strong>a) 环境准备:</strong></h4>
+<h2>2 接口调用示例:</h2>
+<h3>2.1 环境准备:</h3>
 <p>TubeMQ开源包org.apache.inlong.tubemq.example里提供了生产和消费的具体代码示例,这里我们通过一个实际的例子来介绍如何填参和调用对应接口。首先我们搭建一个带3个Master节点的TubeMQ集群,3个Master地址及端口分别为test_1.domain.com,test_2.domain.com,test_3.domain.com,端口均为8080,在该集群里我们建立了若干个Broker,并且针对Broker我们创建了3个topic:topic_1,topic_2,topic_3等Topic配置;然后我们启动对应的Broker等待Consumer和Producer的创建。</p>
-<h4><strong>b) 创建Consumer:</strong></h4>
+<h3>2.2 创建Consumer:</h3>
 <p>见包org.apache.inlong.tubemq.example.MessageConsumerExample类文件,Consumer是一个包含网络交互协调的客户端对象,需要做初始化并且长期驻留内存重复使用的模型,它不适合单次拉起消费的场景。如下图示,我们定义了MessageConsumerExample封装类,在该类中定义了进行网络交互的会话工厂MessageSessionFactory类,以及用来做Push消费的PushMessageConsumer类:</p>
-<ul>
-<li>
-<h6><strong>i.初始化MessageConsumerExample类:</strong></h6>
-</li>
-</ul>
+<h4>2.2.1 初始化MessageConsumerExample类:</h4>
 <ol>
 <li>
 <p>首先构造一个ConsumerConfig类,填写初始化信息,包括本机IP V4地址,Master集群地址,消费组组名信息,这里Master地址信息传入值为:”test_1.domain.com:8080,test_2.domain.com:8080,test_3.domain.com:8080”;</p>
@@ -93,9 +87,7 @@
     }
 }
 </code></pre>
-<ul>
-<li><strong>ii.订阅Topic:</strong></li>
-</ul>
+<h4>2.2.2 订阅Topic:</h4>
 <p>我们没有采用指定Offset消费的模式进行订阅,也没有过滤需求,因而我们在如下代码里只做了Topic的指定,对应的过滤项集合我们传的是null值,同时,对于不同的Topic,我们可以传递不同的消息回调处理函数;我们这里订阅了3个topic,topic_1,topic_2,topic_3,每个topic分别调用subscribe函数进行对应参数设置:</p>
 <pre><code class="language-java"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">subscribe</span><span class="hljs-params">(<span class="hljs-keyword">final</span> Map&lt;String, TreeSet&lt;String&gt;&gt; topicTidsMap)</span>
     <span class="hljs-keyword">throws</span> TubeClientException </span>{
@@ -107,9 +99,7 @@
     messageConsumer.completeSubscribe();
 }
 </code></pre>
-<ul>
-<li><strong>iii.进行消费:</strong></li>
-</ul>
+<h4>2.2.3 进行消费:</h4>
 <p>到此,对集群里对应topic的订阅就已完成,系统运行开始后,回调函数里数据将不断的通过回调函数推送到业务层进行处理:</p>
 <pre><code class="language-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">DefaultMessageListener</span> <span class="hljs-keyword">implements</span> <span class="hljs-title">MessageListener</span> </span>{
 
@@ -134,11 +124,9 @@
     }
 }
 </code></pre>
-<h4><strong>c) 创建Producer:</strong></h4>
+<h3>2.3 创建Producer:</h3>
 <p>现网环境中业务的数据都是通过代理层来做接收汇聚,包装了比较多的异常处理,大部分的业务都没有也不会接触到TubeSDK的Producer类,考虑到业务自己搭建集群使用TubeMQ的场景,这里提供对应的使用demo,见包org.apache.inlong.tubemq.example.MessageProducerExample类文件供参考,<strong>需要注意</strong>的是,业务除非使用数据平台的TubeMQ集群做MQ服务,否则仍要按照现网的接入流程使用代理层来进行数据生产:</p>
-<ul>
-<li><strong>i. 初始化MessageProducerExample类:</strong></li>
-</ul>
+<h4>2.3.1 初始化MessageProducerExample类:</h4>
 <p>和Consumer的初始化类似,也是构造了一个封装类,定义了一个会话工厂,以及一个Producer类,生产端的会话工厂初始化通过TubeClientConfig类进行,如之前所介绍的,ConsumerConfig类是TubeClientConfig类的子类,虽然传入参数不同,但会话工厂是通过TubeClientConfig类完成的初始化处理:</p>
 <pre><code class="language-java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">final</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">MessageProducerExample</span> </span>{
 
@@ -164,16 +152,12 @@
     }
 }
 </code></pre>
-<ul>
-<li><strong>ii. 发布Topic:</strong></li>
-</ul>
+<h4>2.3.2 发布Topic:</h4>
 <pre><code class="language-java"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">publishTopics</span><span class="hljs-params">(List&lt;String&gt; topicList)</span> <span class="hljs-keyword">throws</span> TubeClientException </span>{
     <span class="hljs-keyword">this</span>.messageProducer.publish(<span class="hljs-keyword">new</span> TreeSet&lt;String&gt;(topicList));
 }
 </code></pre>
-<ul>
-<li><strong>iii. 进行数据生产:</strong></li>
-</ul>
+<h4>2.3.3 进行数据生产:</h4>
 <p>如下所示,则为具体的数据构造和发送逻辑,构造一个Message对象后调用sendMessage()函数发送即可,有同步接口和异步接口选择,依照业务要求选择不同接口;需要注意的是该业务根据不同消息调用message.putSystemHeader()函数设置消息的过滤属性和发送时间,便于系统进行消息过滤消费,以及指标统计用。完成这些,一条消息即被发送出去,如果返回结果为成功,则消息被成功的接纳并且进行消息处理,如果返回失败,则业务根据具体错误码及错误提示进行判断处理,相关错误详情见《TubeMQ错误信息介绍.xlsx》:</p>
 <pre><code class="language-java"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">sendMessageAsync</span><span class="hljs-params">(<span class="hljs-keyword">int</span> id, <span class="hljs-keyword">long</span> currtime,
                              String topic, <span class="hljs-keyword">byte</span>[] body,
@@ -197,9 +181,7 @@
     }
 }
 </code></pre>
-<ul>
-<li><strong>iv. Producer不同类MAMessageProducerExample关注点:</strong></li>
-</ul>
+<h4>2.3.4 Producer不同类MAMessageProducerExample关注点:</h4>
 <p>该类初始化与MessageProducerExample类不同,采用的是TubeMultiSessionFactory多会话工厂类进行的连接初始化,该demo提供了如何使用多会话工厂类的特性,可以用于通过多个物理连接提升系统吞吐量的场景(TubeMQ通过连接复用模式来减少物理连接资源的使用),恰当使用可以提升系统的生产性能。在Consumer侧也可以通过多会话工厂进行初始化,但考虑到消费是长时间过程处理,对连接资源的占用比较小,消费场景不推荐使用。</p>
 <p>自此,整个生产和消费的示例已经介绍完,大家可以直接下载对应的代码编译跑一边,看看是不是就是这么简单😊</p>
 <hr>
diff --git a/zh-cn/docs/modules/tubemq/clients_java.json b/zh-cn/docs/modules/tubemq/clients_java.json
index 4a7353b..63a5a3c 100644
--- a/zh-cn/docs/modules/tubemq/clients_java.json
+++ b/zh-cn/docs/modules/tubemq/clients_java.json
@@ -1,8 +1,8 @@
 {
   "filename": "clients_java.md",
-  "__html": "<h2><strong>Apache InLong TubeMQ模块 Lib</strong> <strong>接口使用</strong></h2>\n<hr>\n<h3><strong>1. 基础对象接口介绍:</strong></h3>\n<h4><strong>a) MessageSessionFactory(消息会话工厂):</strong></h4>\n<p>TubeMQ 采用MessageSessionFactory(消息会话工厂)来管理网络连接,又根据业务不同客户端是否复用连接细分为TubeSingleSessionFactory(单连接会话工厂)类和TubeMultiSessionFactory(多连接会话工厂)类2个部分,其实现逻辑大家可以从代码可以看到,单连接会话通过定义clientFactory静态类,实现了进程内不同客户端连接相同目标服务器时底层物理连接只建立一条的特征,多连接会话里定义的clientFactory为非静态类,从而实现同进程内通过不同会话工厂,创建的客户端所属的连接会话不同建立不同的物理连接。通过这种构造 [...]
+  "__html": "<h2>1 基础对象接口介绍:</h2>\n<h3>1.1 MessageSessionFactory(消息会话工厂):</h3>\n<p>TubeMQ 采用MessageSessionFactory(消息会话工厂)来管理网络连接,又根据业务不同客户端是否复用连接细分为TubeSingleSessionFactory(单连接会话工厂)类和TubeMultiSessionFactory(多连接会话工厂)类2个部分,其实现逻辑大家可以从代码可以看到,单连接会话通过定义clientFactory静态类,实现了进程内不同客户端连接相同目标服务器时底层物理连接只建立一条的特征,多连接会话里定义的clientFactory为非静态类,从而实现同进程内通过不同会话工厂,创建的客户端所属的连接会话不同建立不同的物理连接。通过这种构造解决连接创建过多的问题,业务可以根据自身需要可以选择不同的消息会话工厂类,一般情况下我们使用单连接会话工厂类。</p>\n<h3>1.2 MasterInfo:</h3>\n<p>Tub
 eMQ的Master地址信息对象,该对象的特点 [...]
   "link": "/zh-cn/docs/modules/tubemq/clients_java.html",
   "meta": {
-    "title": "JAVA SDK API介绍 - Apache InLong TubeMQ模块"
+    "JAVA SDK API介绍 - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/clients_java.md b/zh-cn/docs/modules/tubemq/clients_java.md
index c41a50c..25d499a 100644
--- a/zh-cn/docs/modules/tubemq/clients_java.md
+++ b/zh-cn/docs/modules/tubemq/clients_java.md
@@ -1,52 +1,47 @@
 ---
-title: JAVA SDK API介绍 - Apache InLong TubeMQ模块
+JAVA SDK API介绍 - Apache InLong TubeMQ模块
 ---
 
-## **Apache InLong TubeMQ模块 Lib** **接口使用**
-
-------
-
 
+## 1 基础对象接口介绍:
 
-### **1. 基础对象接口介绍:**
-
-#### **a) MessageSessionFactory(消息会话工厂):**
+### 1.1 MessageSessionFactory(消息会话工厂):
 
 TubeMQ 采用MessageSessionFactory(消息会话工厂)来管理网络连接,又根据业务不同客户端是否复用连接细分为TubeSingleSessionFactory(单连接会话工厂)类和TubeMultiSessionFactory(多连接会话工厂)类2个部分,其实现逻辑大家可以从代码中看到,单连接会话通过定义clientFactory静态类,实现了进程内不同客户端连接相同目标服务器时底层物理连接只建立一条的特征,多连接会话里定义的clientFactory为非静态类,从而实现同进程内通过不同会话工厂,创建的客户端所属的连接会话不同建立不同的物理连接。通过这种构造解决连接创建过多的问题,业务可以根据自身需要选择不同的消息会话工厂类,一般情况下我们使用单连接会话工厂类。
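
两种会话工厂的选用方式示意如下(Master地址串为示例值,其格式见下面MasterInfo的说明):

```java
TubeClientConfig clientConfig =
        new TubeClientConfig("test_1.domain.com:8080,test_2.domain.com:8080");
// 一般场景:单连接会话工厂,进程内到同一目标服务器只建立一条物理连接
MessageSessionFactory singleFactory = new TubeSingleSessionFactory(clientConfig);
// 需要多条物理连接提升吞吐时:每个多连接会话工厂实例各自建立物理连接
MessageSessionFactory multiFactory = new TubeMultiSessionFactory(clientConfig);
```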
 
  
 
-#### **b) MasterInfo:**
+### 1.2 MasterInfo:
 
 TubeMQ的Master地址信息对象,该对象的特点是支持配置多个Master地址,由于TubeMQ Master借助BDB的存储能力进行元数据管理,以及服务HA热切能力,Master的地址相应地就需要配置多条信息。该配置信息支持IP、域名两种模式,由于TubeMQ的HA是热切模式,客户端要保证到各个Master地址都是连通的。该信息在初始化TubeClientConfig类对象和ConsumerConfig类对象时使用,考虑到配置的方便性,我们将多条Master地址构造成“ip1:port1,ip2:port2,ip3:port3”格式并进行解析。
 
  
 
-#### **c) TubeClientConfig:**
+### 1.3 TubeClientConfig:
 
 MessageSessionFactory(消息会话工厂)初始化类,用来携带创建网络连接信息、客户端控制参数信息的对象类,包括RPC时长设置、Socket属性设置、连接质量检测参数设置、TLS参数设置、认证授权信息设置等信息。
 
  
 
-#### **d) ConsumerConfig:**
+### 1.4 ConsumerConfig:
 
 ConsumerConfig类是TubeClientConfig类的子类,它是在TubeClientConfig类基础上增加了Consumer类对象初始化时候的参数携带,因而在一个既有Producer又有Consumer的MessageSessionFactory(消息会话工厂)类对象里,会话工厂类的相关设置以MessageSessionFactory类初始化的内容为准,Consumer类对象按照创建时传递的初始化类对象为准。在consumer里又根据消费行为的不同分为Pull消费者和Push消费者两种,两种特有的参数通过参数接口携带“pull”或“push”不同特征进行区分。
 
  
 
-#### **e) Message:**
+### 1.5 Message:
 
 Message类是TubeMQ里传递的消息对象类,业务设置的data会从生产端原样传递给消息接收端,attribute内容是与TubeMQ系统共用的字段,业务填写的内容不会丢失和改写,但该字段有可能会新增TubeMQ系统填写的内容,并在后续的版本中,新增的TubeMQ系统内容有可能去掉而不被通知。该部分需要注意的是Message.putSystemHeader(final String msgType, final String msgTime)接口,该接口用来设置消息的消息类型和消息发送时间,msgType用于消费端过滤用,msgTime用做TubeMQ进行数据收发统计时消息时间统计维度用。
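
putSystemHeader的调用方式示意如下(topic、消息体与过滤消息类型均为示例值;msgTime这里沿用示例代码中yyyyMMddHHmm的格式写法):

```java
Message message = new Message("topic_1",
        "hello tubemq".getBytes(java.nio.charset.StandardCharsets.UTF_8));
// msgType供消费端过滤使用,msgTime作为数据收发统计的消息时间维度
message.putSystemHeader("msg_type_1",
        new java.text.SimpleDateFormat("yyyyMMddHHmm").format(new java.util.Date()));
```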
 
  
 
-#### **f) MessageProducer:**
+### 1.6 MessageProducer:
 
 消息生产者类,该类完成消息的生产,消息发送分为同步发送和异步发送两种接口,目前消息采用Round Robin方式发往后端服务器,后续这块将考虑按照业务指定的算法进行后端服务器选择方式进行生产。该类使用时需要注意的是,我们支持在初始化时候全量Topic指定的publish,也支持在生产过程中临时增加对新的Topic的publish,但临时增加的Topic不会立即生效,因而在使用新增Topic前,要先调用isTopicCurAcceptPublish接口查询该Topic是否已publish并且被服务器接受,否则有可能消息发送失败。
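
临时新增Topic发布后的建议用法示意如下(topic名为示例值,省略异常处理):

```java
messageProducer.publish("topic_new");
// 新增的Topic不会立即生效,先确认其已被服务器接受再发送
while (!messageProducer.isTopicCurAcceptPublish("topic_new")) {
    Thread.sleep(100L); // 等待Master下发该Topic的分区元数据
}
// 此后再构造Message并调用sendMessage()发送,避免消息发送失败
```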
 
  
 
-#### **g) MessageConsumer:**
+### 1.7 MessageConsumer:
 
 该类有两个子类PullMessageConsumer、PushMessageConsumer,通过这两个子类的包装,完成了对业务侧的Pull和Push语义。实际上TubeMQ是采用Pull模式与后端服务进行交互,为了便于业务的接口使用,我们进行了封装,大家可以看到其差别在于Push在启动时初始化了一个线程组,来完成主动的数据拉取操作。需要注意的地方在于:
 
@@ -60,19 +55,19 @@ Message类是TubeMQ里传递的消息对象类,业务设置的data会从生产
 
 
 
-### **2. 接口调用示例:**
+## 2 接口调用示例:
 
-#### **a) 环境准备:**
+### 2.1 环境准备:
 
 TubeMQ开源包org.apache.inlong.tubemq.example里提供了生产和消费的具体代码示例,这里我们通过一个实际的例子来介绍如何填参和调用对应接口。首先我们搭建一个带3个Master节点的TubeMQ集群,3个Master地址及端口分别为test_1.domain.com,test_2.domain.com,test_3.domain.com,端口均为8080,在该集群里我们建立了若干个Broker,并且针对Broker我们创建了3个topic:topic_1,topic_2,topic_3等Topic配置;然后我们启动对应的Broker等待Consumer和Producer的创建。
 
  
 
-#### **b) 创建Consumer:**
+### 2.2 创建Consumer:
 
 见包org.apache.inlong.tubemq.example.MessageConsumerExample类文件,Consumer是一个包含网络交互协调的客户端对象,需要做初始化并且长期驻留内存重复使用的模型,它不适合单次拉起消费的场景。如下图示,我们定义了MessageConsumerExample封装类,在该类中定义了进行网络交互的会话工厂MessageSessionFactory类,以及用来做Push消费的PushMessageConsumer类:
 
-- ###### **i.初始化MessageConsumerExample类:**
+#### 2.2.1 初始化MessageConsumerExample类:
 
 1. 首先构造一个ConsumerConfig类,填写初始化信息,包括本机IP V4地址,Master集群地址,消费组组名信息,这里Master地址信息传入值为:”test_1.domain.com:8080,test_2.domain.com:8080,test_3.domain.com:8080”;
 
@@ -116,7 +111,7 @@ public final class MessageConsumerExample {
 
 
 
-- **ii.订阅Topic:**
+#### 2.2.2 订阅Topic:
 
 我们没有采用指定Offset消费的模式进行订阅,也没有过滤需求,因而我们在如下代码里只做了Topic的指定,对应的过滤项集合我们传的是null值,同时,对于不同的Topic,我们可以传递不同的消息回调处理函数;我们这里订阅了3个topic,topic_1,topic_2,topic_3,每个topic分别调用subscribe函数进行对应参数设置:
 
@@ -133,8 +128,7 @@ public void subscribe(final Map<String, TreeSet<String>> topicTidsMap)
 ```
 
 
-
-- **iii.进行消费:**
+#### 2.2.3 进行消费:
 
 到此,对集群里对应topic的订阅就已完成,系统运行开始后,回调函数里数据将不断的通过回调函数推送到业务层进行处理:
 
@@ -165,11 +159,11 @@ public class DefaultMessageListener implements MessageListener {
 
 
 
-#### **c) 创建Producer:**
+### 2.3 创建Producer:
 
 现网环境中业务的数据都是通过代理层来做接收汇聚,包装了比较多的异常处理,大部分的业务都没有也不会接触到TubeSDK的Producer类,考虑到业务自己搭建集群使用TubeMQ的场景,这里提供对应的使用demo,见包org.apache.inlong.tubemq.example.MessageProducerExample类文件供参考,**需要注意**的是,业务除非使用数据平台的TubeMQ集群做MQ服务,否则仍要按照现网的接入流程使用代理层来进行数据生产:
 
-- **i. 初始化MessageProducerExample类:**
+#### 2.3.1 初始化MessageProducerExample类:
 
 和Consumer的初始化类似,也是构造了一个封装类,定义了一个会话工厂,以及一个Producer类,生产端的会话工厂初始化通过TubeClientConfig类进行,如之前所介绍的,ConsumerConfig类是TubeClientConfig类的子类,虽然传入参数不同,但会话工厂是通过TubeClientConfig类完成的初始化处理:
 
@@ -201,7 +195,7 @@ public final class MessageProducerExample {
 
 
 
-- **ii. 发布Topic:**
+#### 2.3.2 发布Topic:
 
 ```java
 public void publishTopics(List<String> topicList) throws TubeClientException {
@@ -211,7 +205,7 @@ public void publishTopics(List<String> topicList) throws TubeClientException {
 
 
 
-- **iii. 进行数据生产:**
+#### 2.3.3 进行数据生产:
 
 如下所示,则为具体的数据构造和发送逻辑,构造一个Message对象后调用sendMessage()函数发送即可,有同步接口和异步接口选择,依照业务要求选择不同接口;需要注意的是该业务根据不同消息调用message.putSystemHeader()函数设置消息的过滤属性和发送时间,便于系统进行消息过滤消费,以及指标统计用。完成这些,一条消息即被发送出去,如果返回结果为成功,则消息被成功的接纳并且进行消息处理,如果返回失败,则业务根据具体错误码及错误提示进行判断处理,相关错误详情见《TubeMQ错误信息介绍.xlsx》:
 
@@ -241,7 +235,7 @@ public void sendMessageAsync(int id, long currtime,
 
 
 
-- **iv. Producer不同类MAMessageProducerExample关注点:**
+#### 2.3.4 Producer不同类MAMessageProducerExample关注点:
 
 该类初始化与MessageProducerExample类不同,采用的是TubeMultiSessionFactory多会话工厂类进行的连接初始化,该demo提供了如何使用多会话工厂类的特性,可以用于通过多个物理连接提升系统吞吐量的场景(TubeMQ通过连接复用模式来减少物理连接资源的使用),恰当使用可以提升系统的生产性能。在Consumer侧也可以通过多会话工厂进行初始化,但考虑到消费是长时间过程处理,对连接资源的占用比较小,消费场景不推荐使用。
 
diff --git a/zh-cn/docs/modules/tubemq/configure_introduction.html b/zh-cn/docs/modules/tubemq/configure_introduction.html
index b51fe3f..4d90d4c 100644
--- a/zh-cn/docs/modules/tubemq/configure_introduction.html
+++ b/zh-cn/docs/modules/tubemq/configure_introduction.html
@@ -7,20 +7,20 @@
 	<meta name="keywords" content="configure_introduction" />
 	<meta name="description" content="configure_introduction" />
 	<!-- 网页标签标题 -->
-	<title>配置参数介绍 - Apache InLong TubeMQ模块</title>
+	<title>configure_introduction</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>TubeMQ服务端包括Master和Broker共2个模块,Master又包含供外部页面访问的Web前端模块(该部分存放在resources中),考虑到实际部署时2个模块常常部署在同1台机器中,TubeMQ将2个模块3个部分的内容打包在一起交付给运维使用;客户端则不包含服务端部分的lib包单独交付给业务使用。</p>
 <p>Master与Broker采用ini配置文件格式,相关配置文件分别放置在tubemq-server-3.9.0/conf/目录的master.ini和broker.ini文件中:</p>
 <p><img src="img/configure/conf_ini_pos.png" alt=""></p>
 <p>他们的配置是按照配置单元集合来定义的,Master配置由必选的[master]、[zookeeper]、[bdbStore]和可选的[tlsSetting]一共4个配置单元组成,Broker配置由必选的[broker]、[zookeeper]和可选的[tlsSetting]一共3个配置单元组成;实际使用时,大家也可将两个配置文件内容合并放置为一个ini文件。</p>
 <p>Master除了后端系统配置文件外,还在resources里存放了Web前端页面模块,resources的根目录velocity.properties文件为Master的Web前端页面配置文件。</p>
 <p><img src="img/configure/conf_velocity_pos.png" alt=""></p>
-<h2>配置项详情:</h2>
-<h3>master.ini文件中关键配置内容说明:</h3>
+<h2>2 配置项详情:</h2>
+<h3>2.1 master.ini文件中关键配置内容说明:</h3>
 <table>
 <thead>
 <tr>
@@ -510,7 +510,7 @@
 </tr>
 </tbody>
 </table>
-<h3>Master的前台配置文件velocity.properties中关键配置内容说明:</h3>
+<h3>2.2 Master的前台配置文件velocity.properties中关键配置内容说明:</h3>
 <table>
 <thead>
 <tr>
@@ -538,7 +538,7 @@
 </tr>
 </tbody>
 </table>
-<h3>broker.ini文件中关键配置内容说明:</h3>
+<h3>2.3 broker.ini文件中关键配置内容说明:</h3>
 <table>
 <thead>
 <tr>
diff --git a/zh-cn/docs/modules/tubemq/configure_introduction.json b/zh-cn/docs/modules/tubemq/configure_introduction.json
index 077aca4..5616b4c 100644
--- a/zh-cn/docs/modules/tubemq/configure_introduction.json
+++ b/zh-cn/docs/modules/tubemq/configure_introduction.json
@@ -1,8 +1,8 @@
 {
   "filename": "configure_introduction.md",
-  "__html": "<h1>TubeMQ服务端配置文件说明:</h1>\n<p>TubeMQ服务端包括Master和Broker共2个模块,Master又包含供外部页面访问的Web前端模块(该部分存放在resources中),考虑到实际部署时2个模块常常部署在同1台机器中,TubeMQ将2个模块3个部分的内容打包在一起交付给运维使用;客户端则不包含服务端部分的lib包单独交付给业务使用。</p>\n<p>Master与Broker采用ini配置文件格式,相关配置文件分别放置在tubemq-server-3.9.0/conf/目录的master.ini和broker.ini文件中:</p>\n<p><img src=\"img/configure/conf_ini_pos.png\" alt=\"\"></p>\n<p>他们的配置是按照配置单元集合来定义的,Master配置由必选的[master]、[zookeeper]、[bdbStore]和可选的[tlsSetting]一共4个配置单元组成,Broker配置由必选的[broker]、[zookeeper]和可选的 [...]
+  "__html": "<h2>1 TubeMQ服务端配置文件说明:</h2>\n<p>TubeMQ服务端包括Master和Broker共2个模块,Master又包含供外部页面访问的Web前端模块(该部分存放在resources中),考虑到实际部署时2个模块常常部署在同1台机器中,TubeMQ将2个模块3个部分的内容打包在一起交付给运维使用;客户端则不包含服务端部分的lib包单独交付给业务使用。</p>\n<p>Master与Broker采用ini配置文件格式,相关配置文件分别放置在tubemq-server-3.9.0/conf/目录的master.ini和broker.ini文件中:</p>\n<p><img src=\"img/configure/conf_ini_pos.png\" alt=\"\"></p>\n<p>他们的配置是按照配置单元集合来定义的,Master配置由必选的[master]、[zookeeper]、[bdbStore]和可选的[tlsSetting]一共4个配置单元组成,Broker配置由必选的[broker]、[zookeeper]和可 [...]
   "link": "/zh-cn/docs/modules/tubemq/configure_introduction.html",
   "meta": {
-    "title": "配置参数介绍 - Apache InLong TubeMQ模块"
+    "配置参数介绍 - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/configure_introduction.md b/zh-cn/docs/modules/tubemq/configure_introduction.md
index aa08d31..1d40bbe 100644
--- a/zh-cn/docs/modules/tubemq/configure_introduction.md
+++ b/zh-cn/docs/modules/tubemq/configure_introduction.md
@@ -1,8 +1,8 @@
 ---
-title: 配置参数介绍 - Apache InLong TubeMQ模块
+配置参数介绍 - Apache InLong TubeMQ模块
 ---
 
-# TubeMQ服务端配置文件说明:
+## 1 TubeMQ服务端配置文件说明:
 
 TubeMQ服务端包括Master和Broker共2个模块,Master又包含供外部页面访问的Web前端模块(该部分存放在resources中),考虑到实际部署时2个模块常常部署在同1台机器中,TubeMQ将2个模块3个部分的内容打包在一起交付给运维使用;客户端则不包含服务端部分的lib包单独交付给业务使用。
 
@@ -17,9 +17,9 @@ Master除了后端系统配置文件外,还在resources里存放了Web前端
 ![](img/configure/conf_velocity_pos.png)
 
 
-## 配置项详情:
+## 2 配置项详情:
 
-### master.ini文件中关键配置内容说明:
+### 2.1 master.ini文件中关键配置内容说明:
 
 | 配置单元 | 配置项 | 是否必选 | 值类型 | 配置说明 |
 | --- | --- | --- | --- | --- |
@@ -92,14 +92,14 @@ Master除了后端系统配置文件外,还在resources里存放了Web前端
 | tlsTrustStorePath | 否 | String | TLS的TrustStore文件的绝对存储路径+TrustStore文件名,在启动TLS功能且启用双向认证时,该字段必填且不能为空 |
 | tlsTrustStorePassword | 否 | String | TLS的TrustStorePassword文件的绝对存储路径+TrustStorePassword文件名,在启动TLS功能且启用双向认证时,该字段必填且不能为空 |
 
-### Master的前台配置文件velocity.properties中关键配置内容说明:
+### 2.2 Master的前台配置文件velocity.properties中关键配置内容说明:
 
 | 配置单元 | 配置项 | 是否必选 | 值类型 | 配置说明 |
 | --- | --- | --- | --- | --- |
 | | file.resource.loader.path | 是 | String | Master的Web的模板绝对路径,该部分为实际部署Master时的工程绝对路径+/resources/templates,该配置要与实际部署相吻合,配置失败会导致Master前端页面访问失败。 |
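
一个填写示意如下(部署路径为假设值,需替换为实际部署的工程绝对路径):

```
# velocity.properties中的模板路径配置(路径为假设值)
file.resource.loader.path=/opt/tubemq-server/resources/templates
```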
 
-### broker.ini文件中关键配置内容说明:
+### 2.3 broker.ini文件中关键配置内容说明:
 
 | 配置单元 | 配置项 | 是否必选 | 值类型 | 配置说明 |
 | --- | --- | --- | --- | --- |
diff --git a/zh-cn/docs/modules/tubemq/console_introduction.html b/zh-cn/docs/modules/tubemq/console_introduction.html
index d78f192..bf653de 100644
--- a/zh-cn/docs/modules/tubemq/console_introduction.html
+++ b/zh-cn/docs/modules/tubemq/console_introduction.html
@@ -7,35 +7,34 @@
 	<meta name="keywords" content="console_introduction" />
 	<meta name="description" content="console_introduction" />
 	<!-- 网页标签标题 -->
-	<title>管控台操作指引 - Apache InLong TubeMQ模块</title>
+	<title>console_introduction</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h2>管控台关系</h2>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>​        TubeMQ管控台是管理TubeMQ集群的简单运营工具,包括集群里的Master、Broker,以及Broker上部署的Topic元数据等与TubeMQ系统相关的运营数据及操作。需要说明的是,当前提供的TubeMQ前台所提供的功能没有涵盖TubeMQ所提供的功能范围,大家可以参照《TubeMQ HTTP访问接口定义.xls》定义自行实现符合业务需要的管控前台。TubeMQ管控台的访问地址为http://portal:webport/config/topic_list.htm:
 <img src="img/console/1568169770714.png" alt="">
 ​       其中portal为该集群中任意的主、备Master的IP地址,webport为配置的Master的Web端口。</p>
-<h2>TubeMQ管控台各版面介绍</h2>
+<h2>2 TubeMQ管控台各版面介绍</h2>
 <p>​        管控台一共3项内容:分发查询,配置管理,集群管理;配置管理又分为Broker列表,Topic列表2个部分,我们先介绍简单的分发查询和集群管理,然后再介绍复杂的配置管理。</p>
-<h3>分发查询</h3>
+<h3>2.1 分发查询</h3>
 <p>​        点分发查询,我们会看到如下的列表信息,这是当前TubeMQ集群里已注册的消费组信息,包括具体的消费组组名,消费的Topic,以及该组总的消费分区数简介信息,如下图示:
 <img src="img/console/1568169796122.png" alt="">
 ​       点击记录,可以看到选中的消费组里的消费者成员,及对应消费的Broker及Partition分区信息,如下图示:
 <img src="img/console/1568169806810.png" alt=""></p>
 <p>​       这个页面可以供我们查询,输入Topic或者消费组名,就可以很快确认系统里有哪些消费组在消费Topic,以及每个消费组的消费目标是怎样这些信息。</p>
-<h3>集群管理</h3>
+<h3>2.2 集群管理</h3>
 <p>​        集群管理主要管理Master的HA,在这个页面上我们可以看到当前Master的各个节点及节点状态,同时,我们可以通过“切换”操作来改变节点的主备状态。
 <img src="img/console/1568169823675.png" alt=""></p>
-<h3>配置管理</h3>
+<h3>2.3 配置管理</h3>
 <p>​        配置管理版面既包含了Broker、Topic元数据的管理,还包含了Broker和Topic的上线发布以及下线操作,有2层含义,比如Broker列表里,展示的是当前集群里已配置的Broker元数据,包括未上线处于草稿状态、已上线、已下线的Broker记录信息:
 <img src="img/console/1568169839931.png" alt=""></p>
 <p>​        从页面信息我们也可以看到,除了Broker的记录信息外,还有Broker在该集群里的管理信息,包括是否已上线,是否处于命令处理中,是否可读,是否可写,配置是否做了更改,是否已加载变更的配置信息。</p>
 <p>​        点单个新增,会弹框如下,这个表示待新增Broker的元数据信息,包括BrokerID,BrokerIP,BrokerPort,以及该Broker里部署的Topic的缺省配置信息,相关的字段详情见《TubeMQ HTTP访问接口定义.xls》
 <img src="img/console/1568169851085.png" alt=""></p>
 <p>​        所有TubeMQ管控台的变更操作,或者改变操作,都会要求输入操作授权码,该信息由运维通过Master的配置文件master.ini的confModAuthToken字段进行定义:如果你知道这个集群的密码,你就可以进行该项操作,比如你是管理员,你是授权人员,或者你能登陆这个master的机器拿到这个密码,都认为你是有权操作该项功能。</p>
-<h2>TubeMQ管控台上涉及的操作及注意事项</h2>
+<h3>2.4 TubeMQ管控台上涉及的操作及注意事项</h3>
 <p>​       如上所说,TubeMQ管控台是运营Tube MQ集群的,套件负责包括Master、Broker这类TubeMQ集群节点管理,包括自动部署和安装等,因此,如下几点需要注意:</p>
 <p>​       1. <strong>TubeMQ集群做扩缩容增、减Broker节点时,要先在TubeMQ管控台上做相应的节点新增、上线,以及下线、删除等操作后才能在物理环境上做对应Broker节点的增删处理</strong>:</p>
 <p>​       TubeMQ集群对Broker按照状态机管理,如上图示涉及到[draft,online,read-only,write-only,offline] 等状态,记录增加还没生效时是draft状态,确定上线后是online态;节点删除首先要由online状态转为offline状态,然后再通过删除操作清理系统内保存的该节点记录;draft、online和offline是为了区分各个节点所处的环节,Master只将online状态的Broker分发给对应的producer和consumer进行生产和消费;read-only,write-only是Broker处于online状态的子状态,表示只能读或者只能写Broker上的数据;相关的状态及操作见页面详情,增加一条记录即可明白其中的关系。TubeMQ管控台上增加这些记录后,我们就可以进行Broker节点的部署及启动,这个时候Tube集群环境的页面会显示节点运行状态,如果为unregister状态,如下图示,则表示节点注册失败,需要到对应broker节点上检查日志,确认原因。目前该部分已经很成熟,出错信息会提 [...]
@@ -52,8 +51,8 @@
 <p>​       重载完成后Topic才能对外使用,我们会发现如下配置变更部分在重启完成后已改变状态:
 <img src="img/console/1568169916091.png" alt=""></p>
 <p>​       这个时候我们就可以针对该Topic进行生产和消费处理。</p>
-<h2>3.对于Topic的元数据进行变更后的操作注意事项:</h2>
-<p><strong>a.如何自行配置Topic参数:</strong></p>
+<h2>3 对于Topic的元数据进行变更后的操作注意事项:</h2>
+<h3>3.1 如何自行配置Topic参数:</h3>
 <p>​       大家点击Topic列表里任意Topic后,会弹出如下框,里面是该Topic的相关元数据信息,其决定了这个Topic在该Broker上,设置了多少个分区,当前读写状态,数据刷盘频率,数据老化周期和时间等信息:
 <img src="img/console/1568169925657.png" alt=""></p>
 <p>​       这些信息由系统管理员设置好默认值后直接定义的,一般不会改变,若业务有特殊需求,比如想增加消费的并行度增多分区,或者想减少刷盘频率,怎么操作?如下图示,各个页面的字段含义及作用如下表:</p>
@@ -170,10 +169,10 @@
 <p>其作用是:a. 选择涉及该Topic元数据修改的Broker节点集合;b. 提供变更操作的授权信息码。</p>
 <p><strong>特别提醒:大家还需要注意的是,输入授权码修改后,数据变更要刷新后才会生效,同时生效的Broker要按比例进行操作。</strong>
 <img src="img/console/1568169954746.png" alt=""></p>
-<p><strong>b.Topic变更注意事项:</strong></p>
+<h3>3.2 Topic变更注意事项:</h3>
 <p>​       如上图示,选择变更Topic元数据后,之前选中的Broker集合会在<strong>配置是否已变更</strong>上出现是的提示。我们还需要对变更进行重载刷新操作,选择Broker集合,然后选择刷新操作,可以批量也可以单条,但是一定要注意的是:操作要分批进行,上一批操作的Broker当前运行状态为running后才能进入下一批的配置刷新操作;如果有节点处于online状态,但长期不进入running状态(缺省最大2分钟),则需要停止刷新,排查问题原因后再继续操作。</p>
 <p>​       进行分批操作原因是,我们系统在变更时,会对指定的Broker做停读停写操作,如果将全量的Broker统一做重载,很明显,集群整体会出现服务不可读或者不可写的情况,从而接入出现不该有的异常。</p>
-<p><strong>c.对于Topic的删除处理:</strong></p>
+<h3>3.3 对于Topic的删除处理:</h3>
 <p>​       页面上进行的删除是软删除处理,如果要彻底删除该topic需要通过API接口进行硬删除操作处理才能实现(避免业务误操作)。</p>
 <p>​       完成如上内容后,Topic元数据就变更完成。</p>
 <hr>
diff --git a/zh-cn/docs/modules/tubemq/console_introduction.json b/zh-cn/docs/modules/tubemq/console_introduction.json
index 13fd392..93a75cb 100644
--- a/zh-cn/docs/modules/tubemq/console_introduction.json
+++ b/zh-cn/docs/modules/tubemq/console_introduction.json
@@ -1,8 +1,8 @@
 {
   "filename": "console_introduction.md",
-  "__html": "<h1>TubeMQ管控台操作指引</h1>\n<h2>管控台关系</h2>\n<p>​        TubeMQ管控台是管理TubeMQ集群的简单运营工具,包括集群里的Master、Broker,以及Broker上部署的Topic元数据等与TubeMQ系统相关的运营数据及操作。需要说明的是,当前提供的TubeMQ前台所提供的功能没有涵盖TubeMQ所提供的功能范围,大家可以参照《TubeMQ HTTP访问接口定义.xls》定义自行实现符合业务需要的管控前台。TubeMQ管控台的访问地址为http://portal:webport/config/topic_list.htm:\n<img src=\"img/console/1568169770714.png\" alt=\"\">\n​       其中portal为该集群中任意的主、备Master的IP地址,webport为配置的Master的Web端口。</p>\n<h2>TubeMQ管控台各版面介绍</h2>\n<p>​        管控台一共3项内容:分发查询,配置管理,集群管理; [...]
+  "__html": "<h2>1 管控台关系</h2>\n<p>​        TubeMQ管控台是管理TubeMQ集群的简单运营工具,包括集群里的Master、Broker,以及Broker上部署的Topic元数据等与TubeMQ系统相关的运营数据及操作。需要说明的是,当前提供的TubeMQ前台所提供的功能没有涵盖TubeMQ所提供的功能范围,大家可以参照《TubeMQ HTTP访问接口定义.xls》定义自行实现符合业务需要的管控前台。TubeMQ管控台的访问地址为http://portal:webport/config/topic_list.htm:\n<img src=\"img/console/1568169770714.png\" alt=\"\">\n​       其中portal为该集群中任意的主、备Master的IP地址,webport为配置的Master的Web端口。</p>\n<h2>2 TubeMQ管控台各版面介绍</h2>\n<p>​        管控台一共3项内容:分发查询,配置管理,集群管理;配置管理又分为Broker列表,Topi [...]
   "link": "/zh-cn/docs/modules/tubemq/console_introduction.html",
   "meta": {
-    "title": "管控台操作指引 - Apache InLong TubeMQ模块"
+    "TubeMQ管控台操作指引 - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/console_introduction.md b/zh-cn/docs/modules/tubemq/console_introduction.md
index 4bd104b..7c0540a 100644
--- a/zh-cn/docs/modules/tubemq/console_introduction.md
+++ b/zh-cn/docs/modules/tubemq/console_introduction.md
@@ -1,21 +1,19 @@
 ---
-title: 管控台操作指引 - Apache InLong TubeMQ模块
+TubeMQ管控台操作指引 - Apache InLong TubeMQ模块
 ---
 
-# TubeMQ管控台操作指引
-
-## 管控台关系
+## 1 管控台关系
 
 ​        TubeMQ管控台是管理TubeMQ集群的简单运营工具,包括集群里的Master、Broker,以及Broker上部署的Topic元数据等与TubeMQ系统相关的运营数据及操作。需要说明的是,当前提供的TubeMQ前台所提供的功能没有涵盖TubeMQ所提供的功能范围,大家可以参照《TubeMQ HTTP访问接口定义.xls》定义自行实现符合业务需要的管控前台。TubeMQ管控台的访问地址为http://portal:webport/config/topic_list.htm:
 ![](img/console/1568169770714.png)
 ​       其中portal为该集群中任意的主、备Master的IP地址,webport为配置的Master的Web端口。
 
 
-## TubeMQ管控台各版面介绍
+## 2 TubeMQ管控台各版面介绍
 
 ​        管控台一共3项内容:分发查询,配置管理,集群管理;配置管理又分为Broker列表,Topic列表2个部分,我们先介绍简单的分发查询和集群管理,然后再介绍复杂的配置管理。
 
-### 分发查询
+### 2.1 分发查询
 
 ​        点分发查询,我们会看到如下的列表信息,这是当前TubeMQ集群里已注册的消费组信息,包括具体的消费组组名,消费的Topic,以及该组总的消费分区数简介信息,如下图示:
 ![](img/console/1568169796122.png)
@@ -24,12 +22,12 @@ title: 管控台操作指引 - Apache InLong TubeMQ模块
 
 ​       这个页面可以供我们查询,输入Topic或者消费组名,就可以很快确认系统里有哪些消费组在消费Topic,以及每个消费组的消费目标是怎样这些信息。
 
-### 集群管理
+### 2.2 集群管理
 
 ​        集群管理主要管理Master的HA,在这个页面上我们可以看到当前Master的各个节点及节点状态,同时,我们可以通过“切换”操作来改变节点的主备状态。
 ![](img/console/1568169823675.png)
 
-### 配置管理
+### 2.3 配置管理
 
 ​        配置管理版面既包含了Broker、Topic元数据的管理,还包含了Broker和Topic的上线发布以及下线操作,有2层含义,比如Broker列表里,展示的是当前集群里已配置的Broker元数据,包括未上线处于草稿状态、已上线、已下线的Broker记录信息:
 ![](img/console/1568169839931.png)
@@ -41,7 +39,7 @@ title: 管控台操作指引 - Apache InLong TubeMQ模块
 
 ​        所有TubeMQ管控台的变更操作,或者改变操作,都会要求输入操作授权码,该信息由运维通过Master的配置文件master.ini的confModAuthToken字段进行定义:如果你知道这个集群的密码,你就可以进行该项操作,比如你是管理员,你是授权人员,或者你能登陆这个master的机器拿到这个密码,都认为你是有权操作该项功能。
 
-## TubeMQ管控台上涉及的操作及注意事项
+### 2.4 TubeMQ管控台上涉及的操作及注意事项
 
 ​       如上所说,TubeMQ管控台是运营Tube MQ集群的,套件负责包括Master、Broker这类TubeMQ集群节点管理,包括自动部署和安装等,因此,如下几点需要注意:
 
@@ -68,9 +66,9 @@ title: 管控台操作指引 - Apache InLong TubeMQ模块
 
 ​       这个时候我们就可以针对该Topic进行生产和消费处理。
 
-## 3.对于Topic的元数据进行变更后的操作注意事项:
+## 3 对于Topic的元数据进行变更后的操作注意事项:
 
-**a.如何自行配置Topic参数:**
+### 3.1 如何自行配置Topic参数:
 
 ​       大家点击Topic列表里任意Topic后,会弹出如下框,里面是该Topic的相关元数据信息,其决定了这个Topic在该Broker上,设置了多少个分区,当前读写状态,数据刷盘频率,数据老化周期和时间等信息:
 ![](img/console/1568169925657.png)
@@ -104,13 +102,13 @@ title: 管控台操作指引 - Apache InLong TubeMQ模块
 **特别提醒:大家还需要注意的是,输入授权码修改后,数据变更要刷新后才会生效,同时生效的Broker要按比例进行操作。**
 ![](img/console/1568169954746.png)
 
-**b.Topic变更注意事项:**
+### 3.2 Topic变更注意事项:
 
 ​       如上图示,选择变更Topic元数据后,之前选中的Broker集合会在**配置是否已变更**上出现是的提示。我们还需要对变更进行重载刷新操作,选择Broker集合,然后选择刷新操作,可以批量也可以单条,但是一定要注意的是:操作要分批进行,上一批操作的Broker当前运行状态为running后才能进入下一批的配置刷新操作;如果有节点处于online状态,但长期不进入running状态(缺省最大2分钟),则需要停止刷新,排查问题原因后再继续操作。
 
 ​       进行分批操作原因是,我们系统在变更时,会对指定的Broker做停读停写操作,如果将全量的Broker统一做重载,很明显,集群整体会出现服务不可读或者不可写的情况,从而接入出现不该有的异常。
 
-**c.对于Topic的删除处理:**
+### 3.3 对于Topic的删除处理:
 
 ​       页面上进行的删除是软删除处理,如果要彻底删除该topic需要通过API接口进行硬删除操作处理才能实现(避免业务误操作)。
 
diff --git a/zh-cn/docs/modules/tubemq/consumer_example.html b/zh-cn/docs/modules/tubemq/consumer_example.html
index 204f0d5..3ff895f 100644
--- a/zh-cn/docs/modules/tubemq/consumer_example.html
+++ b/zh-cn/docs/modules/tubemq/consumer_example.html
@@ -7,14 +7,14 @@
 	<meta name="keywords" content="consumer_example" />
 	<meta name="description" content="consumer_example" />
 	<!-- 网页标签标题 -->
-	<title>消费者示例 - Apache InLong TubeMQ模块</title>
+	<title>consumer_example</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>TubeMQ 提供了两种方式来消费消息: PullConsumer 和 PushConsumer。</p>
-<h3>PullConsumer</h3>
+<h3>1.1 PullConsumer</h3>
 <pre><code class="language-java"> <span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">PullConsumerExample</span> </span>{
 
      <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Throwable </span>{
@@ -45,42 +45,46 @@
 
  }
 </code></pre>
-<h3>PushConsumer</h3>
-<pre><code class="language-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">PushConsumerExample</span> </span>{
+<h3>1.2 PushConsumer</h3>
+<pre><code class="language-java">public class PushConsumerExample {
 
-     <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">test</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Throwable </span>{
-         <span class="hljs-keyword">final</span> String masterHostAndPort = <span class="hljs-string">"localhost:8000"</span>;
-         <span class="hljs-keyword">final</span> String topic = <span class="hljs-string">"test"</span>;
-         <span class="hljs-keyword">final</span> String group = <span class="hljs-string">"test-group"</span>;
-         <span class="hljs-keyword">final</span> ConsumerConfig consumerConfig = <span class="hljs-keyword">new</span> ConsumerConfig(masterHostAndPort, group);
+     public static void test(String[] args) throws Throwable {
+         final String masterHostAndPort = "localhost:8000";
+         final String topic = "test";
+         final String group = "test-group";
+         final ConsumerConfig consumerConfig = new ConsumerConfig(masterHostAndPort, group);
          consumerConfig.setConsumePosition(ConsumePosition.CONSUMER_FROM_LATEST_OFFSET);
-         <span class="hljs-keyword">final</span> MessageSessionFactory messageSessionFactory = <span class="hljs-keyword">new</span> TubeSingleSessionFactory(consumerConfig);
-         <span class="hljs-keyword">final</span> PushMessageConsumer pushConsumer = messageSessionFactory.createPushConsumer(consumerConfig);
-         pushConsumer.subscribe(topic, <span class="hljs-keyword">null</span>, <span class="hljs-keyword">new</span> MessageListener() {
+         final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+         final PushMessageConsumer pushConsumer = messageSessionFactory.createPushConsumer(consumerConfig);
+         pushConsumer.subscribe(topic, null, new MessageListener() {
 
-             <span class="hljs-meta">@Override</span>
-             <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">receiveMessages</span><span class="hljs-params">(PeerInfo peerInfo, List&lt;Message&gt; messages)</span> <span class="hljs-keyword">throws</span> InterruptedException </span>{
-                 <span class="hljs-keyword">for</span> (Message message : messages) {
-                     System.out.println(<span class="hljs-string">"received message : "</span> + <span class="hljs-keyword">new</span> String(message.getData()));
+             @Override
+             public void receiveMessages(PeerInfo peerInfo, List&lt;Message&gt; messages) throws InterruptedException {
+                 for (Message message : messages) {
+                     System.out.println("received message : " + new String(message.getData()));
                  }
              }
 
-             <span class="hljs-meta">@Override</span>
-             <span class="hljs-function"><span class="hljs-keyword">public</span> Executor <span class="hljs-title">getExecutor</span><span class="hljs-params">()</span> </span>{
-                 <span class="hljs-keyword">return</span> <span class="hljs-keyword">null</span>;
+             @Override
+             public Executor getExecutor() {
+                 return null;
              }
 
-             <span class="hljs-meta">@Override</span>
-             <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">stop</span><span class="hljs-params">()</span> </span>{
-                 <span class="hljs-comment">//</span>
+             @Override
+             public void stop() {
+                 //
              }
          });
          pushConsumer.completeSubscribe();
-         CountDownLatch latch = <span class="hljs-keyword">new</span> CountDownLatch(<span class="hljs-number">1</span>);
-         latch.await(<span class="hljs-number">10</span>, TimeUnit.MINUTES);
+         CountDownLatch latch = new CountDownLatch(1);
+         latch.await(10, TimeUnit.MINUTES);
      }
  }
  ```
 </code></pre>
+<hr>
+<p><a href="#top">Back to top</a></p>
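In the listener above, `getExecutor()` returns null, which appears to leave message dispatch on the client's default executor. If callbacks should run on a dedicated pool instead, a sketch looks like this (java.util.concurrent imports omitted; verify the null-fallback behaviour against the tubemq-client version you deploy):

```java
// Sketch: a dedicated thread pool for receiveMessages() callbacks;
// returning null (as in the example above) keeps the client default.
private final ExecutorService messageExecutor = Executors.newFixedThreadPool(4);

@Override
public Executor getExecutor() {
    return messageExecutor;
}
```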
 </div></section><footer class="footer-container"><div class="footer-body"><img src="/img/incubator-logo.svg"/><div class="cols-container"><div class="col col-24"><p>Apache InLong (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with  [...]
 	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
diff --git a/zh-cn/docs/modules/tubemq/consumer_example.json b/zh-cn/docs/modules/tubemq/consumer_example.json
index c0f1add..564e5ac 100644
--- a/zh-cn/docs/modules/tubemq/consumer_example.json
+++ b/zh-cn/docs/modules/tubemq/consumer_example.json
@@ -1,8 +1,8 @@
 {
   "filename": "consumer_example.md",
-  "__html": "<h2>Consumer 示例</h2>\n<p>TubeMQ 提供了两种方式来消费消息: PullConsumer and PushConsumer。</p>\n<h3>PullConsumer</h3>\n<pre><code class=\"language-java\"> <span class=\"hljs-keyword\">public</span> <span class=\"hljs-class\"><span class=\"hljs-keyword\">class</span> <span class=\"hljs-title\">PullConsumerExample</span> </span>{\n\n     <span class=\"hljs-function\"><span class=\"hljs-keyword\">public</span> <span class=\"hljs-keyword\">static</span> <span class=\"hljs-keyword\">void</span [...]
+  "__html": "<h2>1 Consumer 示例</h2>\n<p>TubeMQ 提供了两种方式来消费消息: PullConsumer and PushConsumer。</p>\n<h3>1.1 PullConsumer</h3>\n<pre><code class=\"language-java\"> <span class=\"hljs-keyword\">public</span> <span class=\"hljs-class\"><span class=\"hljs-keyword\">class</span> <span class=\"hljs-title\">PullConsumerExample</span> </span>{\n\n     <span class=\"hljs-function\"><span class=\"hljs-keyword\">public</span> <span class=\"hljs-keyword\">static</span> <span class=\"hljs-keyword\">void [...]
   "link": "/zh-cn/docs/modules/tubemq/consumer_example.html",
   "meta": {
-    "title": "消费者示例 - Apache InLong TubeMQ模块"
+    "消费者示例 - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/consumer_example.md b/zh-cn/docs/modules/tubemq/consumer_example.md
index b16b087..575118e 100644
--- a/zh-cn/docs/modules/tubemq/consumer_example.md
+++ b/zh-cn/docs/modules/tubemq/consumer_example.md
@@ -1,12 +1,12 @@
 ---
-title: 消费者示例 - Apache InLong TubeMQ模块
+消费者示例 - Apache InLong TubeMQ模块
 ---
 
-## Consumer 示例
+## 1 Consumer 示例
   TubeMQ 提供了两种方式来消费消息: PullConsumer and PushConsumer。
 
 
-### PullConsumer 
+### 1.1 PullConsumer 
    ```java
     public class PullConsumerExample {
 
@@ -39,7 +39,7 @@ title: 消费者示例 - Apache InLong TubeMQ模块
     }
    ``` 
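For reference, the heart of a pull consumer is a fetch-then-confirm cycle. Below is a condensed sketch of that cycle (imports and session-factory setup omitted, matching the example style above; method names follow the TubeMQ Java client, so verify them against the client version you deploy):

```java
// Sketch: fetch a batch, process it, then confirm it so the partition
// is released for the next fetch.
private static void consumeOnce(PullMessageConsumer messagePullConsumer) throws TubeClientException {
    ConsumerResult result = messagePullConsumer.getMessage();
    if (result.isSuccess()) {
        for (Message message : result.getMessageList()) {
            System.out.println("received message : " + new String(message.getData()));
        }
        // 'true' marks the batch as successfully consumed
        messagePullConsumer.confirmConsume(result.getConfirmContext(), true);
    }
}
```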
    
-### PushConsumer
+### 1.2 PushConsumer
    ```java
    public class PushConsumerExample {
    
@@ -76,3 +76,7 @@ title: 消费者示例 - Apache InLong TubeMQ模块
         }
     }
     ```
+
+---
+
+<a href="#top">Back to top</a>
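Both examples above simply wait on a latch and then exit; in long-lived services the client objects should also be shut down so registrations and connections are released. A minimal sketch, assuming the `shutdown()` methods exposed by the TubeMQ Java client:

```java
// Sketch: orderly exit after the await() above returns.
pushConsumer.shutdown();            // deregister from Master, stop callbacks
messageSessionFactory.shutdown();   // release connections to Master/Brokers
```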
diff --git a/zh-cn/docs/modules/tubemq/deployment.html b/zh-cn/docs/modules/tubemq/deployment.html
index 8c89976..347bbee 100644
--- a/zh-cn/docs/modules/tubemq/deployment.html
+++ b/zh-cn/docs/modules/tubemq/deployment.html
@@ -7,25 +7,24 @@
 	<meta name="keywords" content="deployment" />
 	<meta name="description" content="deployment" />
 	<!-- 网页标签标题 -->
-	<title>部署指引 - Apache InLong TubeMQ模块</title>
+	<title>deployment</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h2>工程编译打包:</h2>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>进入工程根目录,执行命令:</p>
 <pre><code>mvn clean package -Dmaven.test.skip
 </code></pre>
 <p>例如将TubeMQ源码包放在E盘根目录,按照如下方式执行上述命令,当各个子目录都编译成功时工程编译完成:</p>
 <p><img src="img/sysdeployment/sys_compile.png" alt=""></p>
 <p>大家也可以进入各个子目录进行单独编译,编译过程与普通的工程编译处理过程一致。</p>
-<p><strong>部署服务端:</strong>
-如上例子,进入..\InLong\inlong-tubemq\tubemq-server\target目录,服务侧的相关内容如下,其中apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz为完整的服务端安装包,里面包括执行脚本,配置文件,依赖包,以及前端的源码;apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT.jar为服务端处理逻辑包,包含于完整工程安装包的lib里,单独提出是考虑到日常变更升级时改动点多在服务器处理逻辑上,升级的时候只需要单独替换该jar包即可:</p>
+<h2>2 部署服务端:</h2>
+<p>如上例子,进入..\InLong\inlong-tubemq\tubemq-server\target目录,服务侧的相关内容如下,其中apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz为完整的服务端安装包,里面包括执行脚本,配置文件,依赖包,以及前端的源码;apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT.jar为服务端处理逻辑包,包含于完整工程安装包的lib里,单独提出是考虑到日常变更升级时改动点多在服务器处理逻辑上,升级的时候只需要单独替换该jar包即可:</p>
 <p><img src="img/sysdeployment/sys_package.png" alt=""></p>
 <p>这里我们是全新安装,将上述完整的工程安装包部署到待安装机器上,我们这里是放置在/data/inlong目录下:</p>
 <p><img src="img/sysdeployment/sys_package_list.png" alt=""></p>
-<p><strong>配置系统:</strong></p>
+<h2>3 配置系统:</h2>
 <p>服务包里打包了3种角色:Master、Broker、Tools,业务使用时可以将Master和Broker放置在一起,也可以单独分开不同机器放置,依照业务对机器的规划进行处理。我们通过如下3台机器搭建一个完整的有2台Master的生产、消费环境:</p>
 <table>
 <thead>
@@ -116,14 +115,15 @@
 <p>然后是配置9.23.28.24:</p>
 <p><img src="img/sysdeployment/sys_configure_2.png" alt=""></p>
 <p>要注意的是右上角的配置为Master的Web前台配置信息,需要根据Master的安装路径修改/resources/velocity.properties里的file.resource.loader.path信息。</p>
-<p><strong>启动Master</strong>:</p>
+<h2>4 运行节点</h2>
+<h3>4.1 启动Master:</h3>
 <p>完成如上配置设置后,首先进入主备Master所在的TubeMQ环境的bin目录,进行服务启动操作:</p>
 <p><img src="img/sysdeployment/sys_master_start.png" alt=""></p>
 <p>我们首先启动9.23.27.24,然后启动9.23.28.24上的Master,如下打印可以表示主备Master都已启动成功并开启了对外服务端口:</p>
 <p><img src="img/sysdeployment/sys_master_startted.png" alt=""></p>
 <p>访问Master的管控台(<a href="http://9.23.27.24:8080">http://9.23.27.24:8080</a> ),点击页面可以查看到如下集群信息,则表示master已成功启动:</p>
 <p><img src="img/sysdeployment/sys_master_console.png" alt=""></p>
-<p><strong>启动Broker</strong>:</p>
+<h3>4.2 启动Broker:</h3>
 <p>启动Broker和启动master有些差别:Master负责管理整个TubeMQ集群,包括Broker节点运行管理以及节点上部署的Topic配置管理,还有生产和消费管理等,因此,实体的Broker启动前,首先要在Master上配置Broker元数据,增加Broker相关的管理信息,如下图示:</p>
 <p><img src="img/sysdeployment/sys_broker_configure.png" alt=""></p>
 <p>点击确认后形成一个草稿的Broker记录:</p>
@@ -141,7 +141,8 @@
 <p><img src="img/sysdeployment/sys_broker_restart_2.png" alt=""></p>
 <p>查看Master管控台,broker已经注册成功:</p>
 <p><img src="img/sysdeployment/sys_broker_finished.png" alt=""></p>
-<p><strong>配置及生效Topic</strong>:</p>
+<h2>5 数据生产和消费</h2>
+<h3>5.1 配置及生效Topic:</h3>
 <p>配置Topic和配置Broker信息类似,都需要先在Master上新增元数据信息,然后才能开始使用,要不生产和消费时候会报topic不存在错误,如我们用安装包里的example对不存在的Topic名test进行生产:
 <img src="img/sysdeployment/test_sendmessage.png" alt=""></p>
 <p>Demo实例会报如下错误信息:</p>
@@ -154,7 +155,7 @@
 <p>重载完成后Topic才能对外使用,我们会发现如下配置变更部分在重启完成后已改变状态:</p>
 <p><img src="img/sysdeployment/sys_topic_finished.png" alt=""></p>
 <p><strong>大家需要注意的是:</strong> 我们在重载的时候,要对待重载的Broker集合分批次进行。我们的重载通过状态机进行控制,会先进行不可读写—〉只读操作—〉可读写—〉上线运行各个子状态处理,如果所有待重启Broker全量重载,会使得已在线对外服务的Topic对外出现短暂的不可读写状况,使得生产、消费,特别是生产发送失败。</p>
-<p><strong>数据生产和消费</strong>:</p>
+<h3>5.2 数据生产和消费:</h3>
 <p>在安装包里,我们打包了example的测试Demo,业务也可以直接使用tubemq-client-0.9.0-incubating-SNAPSHOT.jar封装自己的生产和消费逻辑,总的形式是类似,我们先执行生产者的Demo,我们可以看到Broker上已开始有数据接收:
 <img src="img/sysdeployment/test_sendmessage_2.png" alt=""></p>
 <p><img src="img/sysdeployment/sys_node_status.png" alt=""></p>
@@ -163,6 +164,8 @@
 <p>在Broker的生产和消费指标日志里,相关数据已经存在:</p>
 <p><img src="img/sysdeployment/sys_node_log.png" alt=""></p>
 <p>在这里,已经完成了TubeMQ的编译,部署,系统配置,启动,生产和消费。如果需要了解更深入的内容,就需要查看《TubeMQ HTTP API》里的相关内容,进行相应的配置设置。</p>
+<hr>
+<p><a href="#top">Back to top</a></p>
 </div></section><footer class="footer-container"><div class="footer-body"><img src="/img/incubator-logo.svg"/><div class="cols-container"><div class="col col-24"><p>Apache InLong (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with  [...]
 	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
 	<script src="https://f.alicdn.com/react/15.4.1/react-dom.min.js"></script>
diff --git a/zh-cn/docs/modules/tubemq/deployment.json b/zh-cn/docs/modules/tubemq/deployment.json
index 5ca7832..290a50c 100644
--- a/zh-cn/docs/modules/tubemq/deployment.json
+++ b/zh-cn/docs/modules/tubemq/deployment.json
@@ -1,8 +1,8 @@
 {
   "filename": "deployment.md",
-  "__html": "<h1>TubeMQ编译、部署及简单使用:</h1>\n<h2>工程编译打包:</h2>\n<p>进入工程根目录,执行命令:</p>\n<pre><code>mvn clean package -Dmaven.test.skip\n</code></pre>\n<p>例如将TubeMQ源码包放在E盘根目录,按照如下方式执行上述命令,当各个子目录都编译成功时工程编译完成:</p>\n<p><img src=\"img/sysdeployment/sys_compile.png\" alt=\"\"></p>\n<p>大家也可以进入各个子目录进行单独编译,编译过程与普通的工程编译处理过程一致。</p>\n<p><strong>部署服务端:</strong>\n如上例子,进入..\\InLong\\inlong-tubemq\\tubemq-server\\target目录,服务侧的相关内容如下,其中apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz为完整的服务端安装包,里 [...]
+  "__html": "<h2>1 工程编译打包:</h2>\n<p>进入工程根目录,执行命令:</p>\n<pre><code>mvn clean package -Dmaven.test.skip\n</code></pre>\n<p>例如将TubeMQ源码包放在E盘根目录,按照如下方式执行上述命令,当各个子目录都编译成功时工程编译完成:</p>\n<p><img src=\"img/sysdeployment/sys_compile.png\" alt=\"\"></p>\n<p>大家也可以进入各个子目录进行单独编译,编译过程与普通的工程编译处理过程一致。</p>\n<h2>2 部署服务端:</h2>\n<p>如上例子,进入..\\InLong\\inlong-tubemq\\tubemq-server\\target目录,服务侧的相关内容如下,其中apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz为完整的服务端安装包,里面包括执行脚本,配置文件,依赖包,以及前端的源码;apache- [...]
   "link": "/zh-cn/docs/modules/tubemq/deployment.html",
   "meta": {
-    "title": "部署指引 - Apache InLong TubeMQ模块"
+    "TubeMQ编译、部署及简单使用 - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/deployment.md b/zh-cn/docs/modules/tubemq/deployment.md
index ab381c5..9ac8211 100644
--- a/zh-cn/docs/modules/tubemq/deployment.md
+++ b/zh-cn/docs/modules/tubemq/deployment.md
@@ -1,10 +1,8 @@
 ---
-title: 部署指引 - Apache InLong TubeMQ模块
+TubeMQ编译、部署及简单使用 - Apache InLong TubeMQ模块
 ---
 
-# TubeMQ编译、部署及简单使用:
-
-## 工程编译打包:
+## 1 工程编译打包:
 
 进入工程根目录,执行命令:
 
@@ -18,7 +16,7 @@ mvn clean package -Dmaven.test.skip
 
 大家也可以进入各个子目录进行单独编译,编译过程与普通的工程编译处理过程一致。
 
-**部署服务端:**
+## 2 部署服务端:
 如上例子,进入..\InLong\inlong-tubemq\tubemq-server\target目录,服务侧的相关内容如下,其中apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT-bin.tar.gz为完整的服务端安装包,里面包括执行脚本,配置文件,依赖包,以及前端的源码;apache-inlong-tubemq-server-0.9.0-incubating-SNAPSHOT.jar为服务端处理逻辑包,包含于完整工程安装包的lib里,单独提出是考虑到日常变更升级时改动点多在服务器处理逻辑上,升级的时候只需要单独替换该jar包即可:
 
 ![](img/sysdeployment/sys_package.png)
@@ -28,7 +26,7 @@ mvn clean package -Dmaven.test.skip
 ![](img/sysdeployment/sys_package_list.png)
 
 
-**配置系统:**
+## 3 配置系统:
 
 服务包里打包了3种角色:Master、Broker、Tools,业务使用时可以将Master和Broker放置在一起,也可以单独分开不同机器放置,依照业务对机器的规划进行处理。我们通过如下3台机器搭建一个完整的有2台Master的生产、消费环境:
 
@@ -59,7 +57,8 @@ mvn clean package -Dmaven.test.skip
 
 要注意的是右上角的配置为Master的Web前台配置信息,需要根据Master的安装路径修改/resources/velocity.properties里的file.resource.loader.path信息。
 
-**启动Master**:
+## 4 运行节点
+### 4.1 启动Master:
 
 完成如上配置设置后,首先进入主备Master所在的TubeMQ环境的bin目录,进行服务启动操作:
 
@@ -73,7 +72,7 @@ mvn clean package -Dmaven.test.skip
 
 ![](img/sysdeployment/sys_master_console.png)
 
-**启动Broker**:
+### 4.2 启动Broker:
 
 启动Broker和启动master有些差别:Master负责管理整个TubeMQ集群,包括Broker节点运行管理以及节点上部署的Topic配置管理,还有生产和消费管理等,因此,实体的Broker启动前,首先要在Master上配置Broker元数据,增加Broker相关的管理信息,如下图示:
 
@@ -110,8 +109,8 @@ Master上所有的变更操作在点击确认的时候,都会弹出如上输
 
 ![](img/sysdeployment/sys_broker_finished.png)
 
-
-**配置及生效Topic**:
+## 5 数据生产和消费
+### 5.1 配置及生效Topic:
 
 配置Topic和配置Broker信息类似,都需要先在Master上新增元数据信息,然后才能开始使用,要不生产和消费时候会报topic不存在错误,如我们用安装包里的example对不存在的Topic名test进行生产:
 ![](img/sysdeployment/test_sendmessage.png)
@@ -137,7 +136,7 @@ Demo实例会报如下错误信息:
 
 **大家需要注意的是:** 我们在重载的时候,要对待重载的Broker集合分批次进行。我们的重载通过状态机进行控制,会先进行不可读写—〉只读操作—〉可读写—〉上线运行各个子状态处理,如果所有待重启Broker全量重载,会使得已在线对外服务的Topic对外出现短暂的不可读写状况,使得生产、消费,特别是生产发送失败。
 
-**数据生产和消费**:
+### 5.2 数据生产和消费:
 
 在安装包里,我们打包了example的测试Demo,业务也可以直接使用tubemq-client-0.9.0-incubating-SNAPSHOT.jar封装自己的生产和消费逻辑,总的形式是类似,我们先执行生产者的Demo,我们可以看到Broker上已开始有数据接收:
 ![](img/sysdeployment/test_sendmessage_2.png)
@@ -152,4 +151,7 @@ Demo实例会报如下错误信息:
 
 ![](img/sysdeployment/sys_node_log.png)
 
-在这里,已经完成了TubeMQ的编译,部署,系统配置,启动,生产和消费。如果需要了解更深入的内容,就需要查看《TubeMQ HTTP API》里的相关内容,进行相应的配置设置。
\ No newline at end of file
+在这里,已经完成了TubeMQ的编译,部署,系统配置,启动,生产和消费。如果需要了解更深入的内容,就需要查看《TubeMQ HTTP API》里的相关内容,进行相应的配置设置。
+
+---
+<a href="#top">Back to top</a>
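Section 5.2 notes that a business can wrap its own produce/consume logic around tubemq-client instead of the bundled Demo. A hedged, minimal producer sketch against the cluster configured in this guide (the Master address/port and topic name are illustrative; substitute the values from your master.ini and topic configuration):

```java
public final class DeploymentDemoProducer {

    public static void main(String[] args) throws Throwable {
        // one of the Masters configured above; the port comes from master.ini
        final TubeClientConfig clientConfig = new TubeClientConfig("9.23.27.24:8000");
        final MessageSessionFactory sessionFactory = new TubeSingleSessionFactory(clientConfig);
        final MessageProducer producer = sessionFactory.createProducer();
        producer.publish("test");                      // the topic configured above
        Message message = new Message("test", "hello tubemq".getBytes());
        MessageSentResult result = producer.sendMessage(message);
        if (!result.isSuccess()) {
            System.out.println("send failed: " + result.getErrMsg());
        }
        producer.shutdown();
        sessionFactory.shutdown();
    }
}
```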
diff --git a/zh-cn/docs/modules/tubemq/error_code.html b/zh-cn/docs/modules/tubemq/error_code.html
index 8974427..5523de1 100644
--- a/zh-cn/docs/modules/tubemq/error_code.html
+++ b/zh-cn/docs/modules/tubemq/error_code.html
@@ -7,14 +7,14 @@
 	<meta name="keywords" content="error_code" />
 	<meta name="description" content="error_code" />
 	<!-- 网页标签标题 -->
-	<title>错误码 - Apache InLong TubeMQ模块</title>
+	<title>error_code</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>​        TubeMQ采用的是 错误码(errCode) + 错误详情(errMsg) 相结合的方式返回具体的操作结果。首先根据错误码确定是哪类问题,然后根据错误详情来确定具体的错误原因。表格汇总了所有的错误码以及运行中大家可能遇到的错误详情的相关对照。</p>
-<h2>错误码</h2>
+<h2>2 错误码</h2>
 <table>
 <thead>
 <tr>
@@ -182,7 +182,7 @@
 </tr>
 </tbody>
 </table>
-<h2>常见错误信息</h2>
+<h2>3 常见错误信息</h2>
 <table>
 <thead>
 <tr>
diff --git a/zh-cn/docs/modules/tubemq/error_code.json b/zh-cn/docs/modules/tubemq/error_code.json
index f3a9075..00fa73a 100644
--- a/zh-cn/docs/modules/tubemq/error_code.json
+++ b/zh-cn/docs/modules/tubemq/error_code.json
@@ -1,8 +1,8 @@
 {
   "filename": "error_code.md",
-  "__html": "<h1>TubeMQ错误信息介绍</h1>\n<p>​        TubeMQ采用的是 错误码(errCode) + 错误详情(errMsg) 相结合的方式返回具体的操作结果。首先根据错误码确定是哪类问题,然后根据错误详情来确定具体的错误原因。表格汇总了所有的错误码以及运行中大家可能遇到的错误详情的相关对照。</p>\n<h2>错误码</h2>\n<table>\n<thead>\n<tr>\n<th>错误类别</th>\n<th>错误码</th>\n<th>错误标记</th>\n<th>含义</th>\n<th>备注</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>成功操作</td>\n<td>200</td>\n<td>SUCCESS</td>\n<td>操作成功</td>\n<td></td>\n</tr>\n<tr>\n<td>成功操作</td>\n<td>201</td>\n<td>NOT_READY</td>\n<td>请求已接纳,但服务器还没有ready,服务还没有运行</td>\n<td> [...]
+  "__html": "<h2>1 TubeMQ错误信息介绍</h2>\n<p>​        TubeMQ采用的是 错误码(errCode) + 错误详情(errMsg) 相结合的方式返回具体的操作结果。首先根据错误码确定是哪类问题,然后根据错误详情来确定具体的错误原因。表格汇总了所有的错误码以及运行中大家可能遇到的错误详情的相关对照。</p>\n<h2>2 错误码</h2>\n<table>\n<thead>\n<tr>\n<th>错误类别</th>\n<th>错误码</th>\n<th>错误标记</th>\n<th>含义</th>\n<th>备注</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>成功操作</td>\n<td>200</td>\n<td>SUCCESS</td>\n<td>操作成功</td>\n<td></td>\n</tr>\n<tr>\n<td>成功操作</td>\n<td>201</td>\n<td>NOT_READY</td>\n<td>请求已接纳,但服务器还没有ready,服务还没有运行</td>\n [...]
   "link": "/zh-cn/docs/modules/tubemq/error_code.html",
   "meta": {
-    "title": "错误码 - Apache InLong TubeMQ模块"
+    "错误码 - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/error_code.md b/zh-cn/docs/modules/tubemq/error_code.md
index 136a2e2..328160e 100644
--- a/zh-cn/docs/modules/tubemq/error_code.md
+++ b/zh-cn/docs/modules/tubemq/error_code.md
@@ -1,12 +1,12 @@
 ---
-title: 错误码 - Apache InLong TubeMQ模块
+错误码 - Apache InLong TubeMQ模块
 ---
 
-# TubeMQ错误信息介绍
+## 1 TubeMQ错误信息介绍
 
 ​        TubeMQ采用的是 错误码(errCode) + 错误详情(errMsg) 相结合的方式返回具体的操作结果。首先根据错误码确定是哪类问题,然后根据错误详情来确定具体的错误原因。表格汇总了所有的错误码以及运行中大家可能遇到的错误详情的相关对照。
 
-## 错误码
+## 2 错误码
 
 | 错误类别     | 错误码                            | 错误标记                                                     | 含义                                                         | 备注                                           |
 | ------------ | --------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------------------------------------------- |
@@ -33,7 +33,7 @@ title: 错误码 - Apache InLong TubeMQ模块
 | 服务器侧异常| 503          | SERVICE_UNAVILABLE                | 业务临时禁读或者禁写                                         | 继续重试处理,如果持续的出现该类错误,需要联系管理员处理     |
 | 服务器侧异常| 510          | INTERNAL_SERVER_ERROR_MSGSET_NULL | 读取不到消息集合                                             | 继续重试处理,如果持续的出现该类错误,需要联系管理员处理     |
 
-## 常见错误信息
+## 3 常见错误信息
 
 | 记录号 | 错误信息                                                     | 含义                                                         | 备注                                                         |
 | ------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
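Since client calls return the errCode/errMsg pair described above, callers can branch on the code and log the detail. A sketch on the produce path (the getter names are assumed from the TubeMQ Java client; the 503/510 handling follows the "keep retrying, contact an administrator if it persists" guidance in the table):

```java
// Sketch: map a send result onto the error table above.
MessageSentResult result = messageProducer.sendMessage(message);
if (!result.isSuccess()) {
    switch (result.getErrCode()) {
        case 503:   // SERVICE_UNAVILABLE: reads/writes temporarily forbidden
        case 510:   // INTERNAL_SERVER_ERROR_MSGSET_NULL: message set not readable
            System.out.println("retryable error: " + result.getErrMsg());
            break;
        default:
            System.out.println("failed (" + result.getErrCode() + "): " + result.getErrMsg());
    }
}
```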
diff --git a/zh-cn/docs/modules/tubemq/http_access_api.html b/zh-cn/docs/modules/tubemq/http_access_api.html
index 3d20f65..fe028d3 100644
--- a/zh-cn/docs/modules/tubemq/http_access_api.html
+++ b/zh-cn/docs/modules/tubemq/http_access_api.html
@@ -7,13 +7,12 @@
 	<meta name="keywords" content="http_access_api" />
 	<meta name="description" content="http_access_api" />
 	<!-- 网页标签标题 -->
-	<title>HTTP API介绍 - Apache InLong TubeMQ模块</title>
+	<title>http_access_api</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<p>HTTP API是Master或者Broker对外功能暴露的接口,管控台的各项操作都是基于这些API进行;如果有最新的功能,或者管控台没有涵盖的功能,业务都可以直接通过调用HTTP API接口完成。</p>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>该部分接口一共有4个部分:</p>
 <ul>
 <li>Master元数据配置相关的操作接口,接口数量 24个</li>
diff --git a/zh-cn/docs/modules/tubemq/http_access_api.json b/zh-cn/docs/modules/tubemq/http_access_api.json
index e082264..e033670 100644
--- a/zh-cn/docs/modules/tubemq/http_access_api.json
+++ b/zh-cn/docs/modules/tubemq/http_access_api.json
@@ -1,8 +1,8 @@
 {
   "filename": "http_access_api.md",
-  "__html": "<h1>HTTP API定义</h1>\n<p>HTTP API是Master或者Broker对外功能暴露的接口,管控台的各项操作都是基于这些API进行;如果有最新的功能,或者管控台没有涵盖的功能,业务都可以直接通过调用HTTP API接口完成。</p>\n<p>该部分接口一共有4个部分:</p>\n<ul>\n<li>Master元数据配置相关的操作接口,接口数量 24个</li>\n<li>Master消费权限操作接口,接口数量 33个</li>\n<li>Master订阅关系接口,接口数量 2个</li>\n<li>Broker相关操作接口定义,接口数量 6个\n<img src=\"img/api_interface/http-api.png\" alt=\"\"></li>\n</ul>\n<p>由于接口众多且参数繁杂,md格式不能比较好的表达,因而以excel附件形式提供给到大家:\n<a href=\"appendixfiles/http_access_api_definition_cn.xls\" target=\"_blank [...]
+  "__html": "<p>HTTP API是Master或者Broker对外功能暴露的接口,管控台的各项操作都是基于这些API进行;如果有最新的功能,或者管控台没有涵盖的功能,业务都可以直接通过调用HTTP API接口完成。</p>\n<p>该部分接口一共有4个部分:</p>\n<ul>\n<li>Master元数据配置相关的操作接口,接口数量 24个</li>\n<li>Master消费权限操作接口,接口数量 33个</li>\n<li>Master订阅关系接口,接口数量 2个</li>\n<li>Broker相关操作接口定义,接口数量 6个\n<img src=\"img/api_interface/http-api.png\" alt=\"\"></li>\n</ul>\n<p>由于接口众多且参数繁杂,md格式不能比较好的表达,因而以excel附件形式提供给到大家:\n<a href=\"appendixfiles/http_access_api_definition_cn.xls\" target=\"_blank\">TubeMQ HTTP API</a [...]
   "link": "/zh-cn/docs/modules/tubemq/http_access_api.html",
   "meta": {
-    "title": "HTTP API介绍 - Apache InLong TubeMQ模块"
+    "HTTP API介绍 - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/http_access_api.md b/zh-cn/docs/modules/tubemq/http_access_api.md
index fcc7247..d420ad3 100644
--- a/zh-cn/docs/modules/tubemq/http_access_api.md
+++ b/zh-cn/docs/modules/tubemq/http_access_api.md
@@ -1,8 +1,7 @@
 ---
-title: HTTP API介绍 - Apache InLong TubeMQ模块
+HTTP API介绍 - Apache InLong TubeMQ模块
 ---
 
-# HTTP API定义
 HTTP API是Master或者Broker对外功能暴露的接口,管控台的各项操作都是基于这些API进行;如果有最新的功能,或者管控台没有涵盖的功能,业务都可以直接通过调用HTTP API接口完成。
 
 该部分接口一共有4个部分:
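Because these endpoints are plain HTTP calls, any language can drive them. The sketch below queries topic metadata from a Master using only JDK classes; the `/webapi.htm` path and the `admin_query_topic_info` method follow the TubeMQ HTTP API convention, but both should be verified against the attached excel definition before use:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MasterHttpApiDemo {
    public static void main(String[] args) throws Exception {
        // Master web port is 8080 elsewhere in these docs; adjust as needed
        URL url = new URL("http://YOUR_MASTER_IP:8080/webapi.htm"
                + "?type=op_query&method=admin_query_topic_info&topicName=test");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // JSON body carrying errCode/errMsg/data
            }
        }
    }
}
```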
diff --git a/zh-cn/docs/modules/tubemq/producer_example.html b/zh-cn/docs/modules/tubemq/producer_example.html
index 13358c8..81c1cd3 100644
--- a/zh-cn/docs/modules/tubemq/producer_example.html
+++ b/zh-cn/docs/modules/tubemq/producer_example.html
@@ -7,19 +7,19 @@
 	<meta name="keywords" content="producer_example" />
 	<meta name="description" content="producer_example" />
 	<!-- 网页标签标题 -->
-	<title>生产者示例 - Apache InLong TubeMQ模块</title>
+	<title>producer_example</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
 <p>TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactory 和 TubeMultiSessionFactory。</p>
 <ul>
 <li>TubeSingleSessionFactory 在整个生命周期只会创建一个 session</li>
 <li>TubeMultiSessionFactory 每次调用都会创建一个session</li>
 </ul>
-<h3>TubeSingleSessionFactory</h3>
-<h4>Send Message Synchronously</h4>
+<h3>1.1 TubeSingleSessionFactory</h3>
+<h4>1.1.1 Send Message Synchronously</h4>
 <pre><code> ```java
  public final class SyncProducerExample {
 
@@ -42,7 +42,7 @@
 }
 ```
 </code></pre>
-<h4>Send Message Asynchronously</h4>
+<h4>1.1.2 Send Message Asynchronously</h4>
 <pre><code> ```java
  public final class AsyncProducerExample {
  
@@ -76,7 +76,7 @@
 }
 ```
 </code></pre>
-<h4>Send Message With Attributes</h4>
+<h4>1.1.3 Send Message With Attributes</h4>
 <pre><code> ```java
  public final class ProducerWithAttributeExample {
  
@@ -102,7 +102,7 @@
 }
 ```
 </code></pre>
-<h3>TubeMultiSessionFactory</h3>
+<h3>1.2 TubeMultiSessionFactory</h3>
 <pre><code>```java
 public class MultiSessionProducerExample {
     
@@ -157,6 +157,8 @@ public class MultiSessionProducerExample {
 }
 ```
 </code></pre>
+<hr>
+<p><a href="#top">Back to top</a></p>
 </div></section><footer class="footer-container"><div class="footer-body"><img src="/img/incubator-logo.svg"/><div class="cols-container"><div class="col col-24"><p>Apache InLong (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with  [...]
 	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
 	<script src="https://f.alicdn.com/react/15.4.1/react-dom.min.js"></script>
diff --git a/zh-cn/docs/modules/tubemq/producer_example.json b/zh-cn/docs/modules/tubemq/producer_example.json
index dcc9ccd..6fe1d48 100644
--- a/zh-cn/docs/modules/tubemq/producer_example.json
+++ b/zh-cn/docs/modules/tubemq/producer_example.json
@@ -1,8 +1,8 @@
 {
   "filename": "producer_example.md",
-  "__html": "<h2>Producer 示例</h2>\n<p>TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactory 和 TubeMultiSessionFactory。</p>\n<ul>\n<li>TubeSingleSessionFactory 在整个生命周期只会创建一个 session</li>\n<li>TubeMultiSessionFactory 每次调用都会创建一个session</li>\n</ul>\n<h3>TubeSingleSessionFactory</h3>\n<h4>Send Message Synchronously</h4>\n<pre><code> ```java\n public final class SyncProducerExample {\n\n    public static void main(String[] args) throws Throwable {\n        final String masterHostAndPort  [...]
+  "__html": "<h2>1 Producer 示例</h2>\n<p>TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactory 和 TubeMultiSessionFactory。</p>\n<ul>\n<li>TubeSingleSessionFactory 在整个生命周期只会创建一个 session</li>\n<li>TubeMultiSessionFactory 每次调用都会创建一个session</li>\n</ul>\n<h3>1.1 TubeSingleSessionFactory</h3>\n<h4>1.1.1 Send Message Synchronously</h4>\n<pre><code> ```java\n public final class SyncProducerExample {\n\n    public static void main(String[] args) throws Throwable {\n        final String master [...]
   "link": "/zh-cn/docs/modules/tubemq/producer_example.html",
   "meta": {
-    "title": "生产者示例 - Apache InLong TubeMQ模块"
+    "生产者示例 - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/producer_example.md b/zh-cn/docs/modules/tubemq/producer_example.md
index 639c7c5..e3c381a 100644
--- a/zh-cn/docs/modules/tubemq/producer_example.md
+++ b/zh-cn/docs/modules/tubemq/producer_example.md
@@ -1,14 +1,14 @@
 ---
-title: 生产者示例 - Apache InLong TubeMQ模块
+生产者示例 - Apache InLong TubeMQ模块
 ---
 
-## Producer 示例
+## 1 Producer 示例
 TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactory 和 TubeMultiSessionFactory。
   - TubeSingleSessionFactory 在整个生命周期只会创建一个 session
   - TubeMultiSessionFactory 每次调用都会创建一个session
 
-### TubeSingleSessionFactory
-   #### Send Message Synchronously
+### 1.1 TubeSingleSessionFactory
+   #### 1.1.1 Send Message Synchronously
      ```java
      public final class SyncProducerExample {
     
@@ -31,7 +31,7 @@ TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactor
     }
     ```
      
-   #### Send Message Asynchronously
+   #### 1.1.2 Send Message Asynchronously
      ```java
      public final class AsyncProducerExample {
      
@@ -65,7 +65,7 @@ TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactor
     }
     ```
      
-   #### Send Message With Attributes
+   #### 1.1.3 Send Message With Attributes
      ```java
      public final class ProducerWithAttributeExample {
      
@@ -91,7 +91,7 @@ TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactor
     }
     ```
      
-### TubeMultiSessionFactory
+### 1.2 TubeMultiSessionFactory
 
     ```java
     public class MultiSessionProducerExample {
@@ -146,3 +146,5 @@ TubeMQ提供了两种方式来初始化 session factory: TubeSingleSessionFactor
         }
     }
     ```
+---
+<a href="#top">Back to top</a>
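The "Send Message With Attributes" example pairs naturally with filtered consumption on the subscriber side. A hedged sketch (the stream id "streamA" is illustrative; `putSystemHeader` and the filter-set `subscribe` overload are assumed from the TubeMQ Java client, and imports of `TreeSet`/`Collections`/`SimpleDateFormat` are omitted):

```java
// producer side: tag the message with a filterable stream id
Message message = new Message("test", "hello tubemq".getBytes());
message.putSystemHeader("streamA",
        new SimpleDateFormat("yyyyMMddHHmm").format(new Date()));
messageProducer.sendMessage(message);

// consumer side: ask the server to deliver only messages tagged "streamA"
TreeSet<String> filters = new TreeSet<>(Collections.singleton("streamA"));
messagePushConsumer.subscribe("test", filters, messageListener);
```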
diff --git a/zh-cn/docs/modules/tubemq/quick_start.html b/zh-cn/docs/modules/tubemq/quick_start.html
index 9ffe142..ae36a22 100644
--- a/zh-cn/docs/modules/tubemq/quick_start.html
+++ b/zh-cn/docs/modules/tubemq/quick_start.html
@@ -7,18 +7,18 @@
 	<meta name="keywords" content="quick_start" />
 	<meta name="description" content="quick_start" />
 	<!-- 网页标签标题 -->
-	<title>快速开始 - Apache InLong TubeMQ模块</title>
+	<title>quick_start</title>
 	<link rel="shortcut icon" href="/img/apache.ico"/>
 	<link rel="stylesheet" href="/build/documentation.css" />
 </head>
 <body>
-	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h3>准备工作</h3>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
+<h3>1.1 准备工作</h3>
 <ul>
 <li>Java JDK 1.8</li>
 <li>Maven 3.3+</li>
 </ul>
-<h3>从源码包构建</h3>
+<h3>1.2 从源码包构建</h3>
 <ul>
 <li>编译和打包:</li>
 </ul>
@@ -38,7 +38,7 @@ mvn <span class="hljs-built_in">test</span>
 </code></pre>
 <p>构建完成之后,在 <code>tubemq-server/target</code> 目录下会有 <strong>apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz</strong> 文件。
 这是 TubeMQ 的部署包,包含了脚本、配置文件、依赖以及 web GUI相关的内容。</p>
-<h3>配置IDE开发环境</h3>
+<h3>1.3 配置IDE开发环境</h3>
 <p>在IDE中构建和调试源码,需要先运行以下命令:</p>
 <pre><code class="language-bash">mvn compile
 </code></pre>
@@ -49,8 +49,8 @@ mvn <span class="hljs-built_in">test</span>
     <span class="hljs-tag">&lt;<span class="hljs-name">protocExecutable</span>&gt;</span>/usr/local/bin/protoc<span class="hljs-tag">&lt;/<span class="hljs-name">protocExecutable</span>&gt;</span>
 <span class="hljs-tag">&lt;/<span class="hljs-name">configuration</span>&gt;</span>
 </code></pre>
-<h2>部署运行</h2>
-<h3>配置示例</h3>
+<h2>2 部署运行</h2>
+<h3>2.1 配置示例</h3>
 <p>TubeMQ 集群包含有两个组件: <strong>Master</strong> 和 <strong>Broker</strong>. Master 和 Broker 可以部署在相同或者不同的节点上,依照业务对机器的规划进行处理。我们通过如下3台机器搭建有2台Master的生产、消费的集群进行配置示例:</p>
 <table>
 <thead>
@@ -86,7 +86,7 @@ mvn <span class="hljs-built_in">test</span>
 </tr>
 </tbody>
 </table>
-<h3>准备工作</h3>
+<h3>2.2 准备工作</h3>
 <ul>
 <li>ZooKeeper集群</li>
 <li><a href="download/download.md">apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz</a>安装包</li>
@@ -99,7 +99,7 @@ mvn <span class="hljs-built_in">test</span>
 ├── logs
 └── resources
 </code></pre>
-<h3>配置Master</h3>
+<h3>2.3 配置Master</h3>
 <p>编辑<code>conf/master.ini</code>,根据集群信息变更以下配置项</p>
 <ul>
 <li>Master IP和端口</li>
@@ -160,7 +160,7 @@ zkServerAddr=localhost:2181              // 指向zookeeper集群,多个地址
 </tbody>
 </table>
 <p><strong>注意</strong>:需保证Master所有节点之间的时钟同步</p>
-<h3>配置Broker</h3>
+<h3>2.4 配置Broker</h3>
 <p>编辑<code>conf/broker.ini</code>,根据集群信息变更以下配置项</p>
 <ul>
 <li>Broker IP和端口</li>
@@ -188,13 +188,13 @@ zkServerAddr=localhost:2181              // 指向zookeeper集群,多个地址
 zkNodeRoot=/tubemq                      
 zkServerAddr=localhost:2181             // 指向zookeeper集群,多个地址逗号分开
 </code></pre>
-<h3>启动Master</h3>
+<h3>2.5 启动Master</h3>
 <p>进入Master节点的 <code>bin</code> 目录下,启动服务:</p>
 <pre><code class="language-bash">./tubemq.sh master start
 </code></pre>
 <p>访问Master的管控台 <code>http://YOUR_MASTER_IP:8080</code> ,页面可查则表示master已成功启动:
 <img src="img/tubemq-console-gui.png" alt="TubeMQ Console GUI"></p>
-<h4>配置Broker元数据</h4>
+<h4>2.5.1 配置Broker元数据</h4>
 <p>Broker启动前,首先要在Master上配置Broker元数据,增加Broker相关的管理信息。在<code>Broker List</code> 页面,  <code>Add Single Broker</code>,然后填写相关信息:</p>
 <p><img src="img/tubemq-add-broker-1.png" alt="Add Broker 1"></p>
 <p>需要填写的内容包括:</p>
@@ -204,14 +204,14 @@ zkServerAddr=localhost:2181             // 指向zookeeper集群,多个地址
 </ol>
 <p>然后上线Broker:
 <img src="img/tubemq-add-broker-2.png" alt="Add Broker 2"></p>
-<h3>启动Broker</h3>
+<h3>2.6 启动Broker</h3>
 <p>进入broker节点的 <code>bin</code> 目录下,执行以下命令启动Broker服务:</p>
 <pre><code class="language-bash">./tubemq.sh broker start
 </code></pre>
 <p>刷新页面可以看到 Broker 已经注册,当 <code>当前运行子状态</code> 为 <code>idle</code> 时, 可以增加topic:
 <img src="img/tubemq-add-broker-3.png" alt="Add Broker 3"></p>
-<h2>快速使用</h2>
-<h3>新增 Topic</h3>
+<h2>3 快速使用</h2>
+<h3>3.1 新增 Topic</h3>
 <p>可以通过 web GUI 添加 Topic, 在 <code>Topic列表</code>页面添加,需要填写相关信息,比如增加<code>demo</code> topic:
 <img src="img/tubemq-add-topic-1.png" alt="Add Topic 1"></p>
 <p>然后选择部署 Topic 的 Broker
@@ -223,29 +223,26 @@ zkServerAddr=localhost:2181             // 指向zookeeper集群,多个地址
 <p><img src="img/tubemq-add-topic-3.png" alt="Add Topic 3"></p>
 <p>之后就可以在页面查看Topic信息。</p>
 <p><img src="img/tubemq-add-topic-4.png" alt="Add Topic 4"></p>
-<h3>运行Example</h3>
+<h3>3.2 运行Example</h3>
 <p>可以通过上面创建的<code>demo</code> topic来测试集群。</p>
-<ul>
-<li>生产消息
-将 <code>YOUR_MASTER_IP:port</code> 替换为实际的IP和端口,然后运行producer:</li>
-</ul>
+<h4>3.2.1 生产消息</h4>
+<p>将 <code>YOUR_MASTER_IP:port</code> 替换为实际的IP和端口,然后运行producer:</p>
 <pre><code class="language-bash"><span class="hljs-built_in">cd</span> /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 ./bin/tubemq-producer-test.sh --master-servers YOUR_MASTER_IP1:port,YOUR_MASTER_IP2:port --topicName demo
 </code></pre>
 <p>如果能观察到如下日志,则表示数据发送成功:
 <img src="img/tubemq-send-message.png" alt="Demo 1"></p>
-<ul>
-<li>消费消息
-将 <code>YOUR_MASTER_IP:port</code> 替换为实际的IP和端口,然后运行Consumer:</li>
-</ul>
+<h4>3.2.2 消费消息</h4>
+<p>将 <code>YOUR_MASTER_IP:port</code> 替换为实际的IP和端口,然后运行Consumer:</p>
 <pre><code class="language-bash"><span class="hljs-built_in">cd</span> /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 ./bin/tubemq-consumer-test.sh --master-servers YOUR_MASTER_IP1:port,YOUR_MASTER_IP2:port --topicName demo --groupName test_consume
 </code></pre>
 <p>如果能观察到如下日志,则表示数据被消费者消费到:</p>
 <p><img src="img/tubemq-consume-message.png" alt="Demo 2"></p>
-<h2>结束</h2>
+<h2>4 结束</h2>
 <p>在这里,已经完成了TubeMQ的编译,部署,系统配置,启动,生产和消费。如果需要了解更深入的内容,请查看《TubeMQ HTTP API》里的相关内容,进行相应的配置设置。</p>
 <hr>
+<p><a href="#top">Back to top</a></p>
 </div></section><footer class="footer-container"><div class="footer-body"><img src="/img/incubator-logo.svg"/><div class="cols-container"><div class="col col-24"><p>Apache InLong (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with  [...]
 	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
 	<script src="https://f.alicdn.com/react/15.4.1/react-dom.min.js"></script>
diff --git a/zh-cn/docs/modules/tubemq/quick_start.json b/zh-cn/docs/modules/tubemq/quick_start.json
index 4f8cf72..ca3d404 100644
--- a/zh-cn/docs/modules/tubemq/quick_start.json
+++ b/zh-cn/docs/modules/tubemq/quick_start.json
@@ -1,8 +1,8 @@
 {
   "filename": "quick_start.md",
-  "__html": "<h2>编译和构建</h2>\n<h3>准备工作</h3>\n<ul>\n<li>Java JDK 1.8</li>\n<li>Maven 3.3+</li>\n</ul>\n<h3>从源码包构建</h3>\n<ul>\n<li>编译和打包:</li>\n</ul>\n<pre><code class=\"language-bash\">mvn clean package -DskipTests\n</code></pre>\n<ul>\n<li>单元测试:</li>\n</ul>\n<pre><code class=\"language-bash\">mvn <span class=\"hljs-built_in\">test</span>\n</code></pre>\n<ul>\n<li>单独对每个 module 进行构建:</li>\n</ul>\n<pre><code class=\"language-bash\">mvn clean install\n<span class=\"hljs-built_in\">cd</span> m [...]
+  "__html": "<h2>1 编译和构建</h2>\n<h3>1.1 准备工作</h3>\n<ul>\n<li>Java JDK 1.8</li>\n<li>Maven 3.3+</li>\n</ul>\n<h3>1.2 从源码包构建</h3>\n<ul>\n<li>编译和打包:</li>\n</ul>\n<pre><code class=\"language-bash\">mvn clean package -DskipTests\n</code></pre>\n<ul>\n<li>单元测试:</li>\n</ul>\n<pre><code class=\"language-bash\">mvn <span class=\"hljs-built_in\">test</span>\n</code></pre>\n<ul>\n<li>单独对每个 module 进行构建:</li>\n</ul>\n<pre><code class=\"language-bash\">mvn clean install\n<span class=\"hljs-built_in\">c [...]
   "link": "/zh-cn/docs/modules/tubemq/quick_start.html",
   "meta": {
-    "title": "快速开始 - Apache InLong TubeMQ模块"
+    "快速开始 - Apache InLong TubeMQ模块": ""
   }
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/quick_start.md b/zh-cn/docs/modules/tubemq/quick_start.md
index 01404ad..40eb4d3 100644
--- a/zh-cn/docs/modules/tubemq/quick_start.md
+++ b/zh-cn/docs/modules/tubemq/quick_start.md
@@ -1,14 +1,14 @@
 ---
-title: 快速开始 - Apache InLong TubeMQ模块
+快速开始 - Apache InLong TubeMQ模块
 ---
 
-## 编译和构建
+## 1 编译和构建
 
-### 准备工作
+### 1.1 准备工作
 - Java JDK 1.8
 - Maven 3.3+
 
-### 从源码包构建
+### 1.2 从源码包构建
 - 编译和打包:
 ```bash
 mvn clean package -DskipTests
@@ -29,7 +29,7 @@ mvn test
 构建完成之后,在 `tubemq-server/target` 目录下会有 **apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz** 文件。
 这是 TubeMQ 的部署包,包含了脚本、配置文件、依赖以及 web GUI相关的内容。
 
-### 配置IDE开发环境
+### 1.3 配置IDE开发环境
 在IDE中构建和调试源码,需要先运行以下命令:
 ```bash
 mvn compile
@@ -44,9 +44,9 @@ mvn compile
 </configuration>
 ```
 
-## 部署运行
+## 2 部署运行
 
-### 配置示例
+### 2.1 配置示例
 TubeMQ 集群包含有两个组件: **Master** 和 **Broker**. Master 和 Broker 可以部署在相同或者不同的节点上,依照业务对机器的规划进行处理。我们通过如下3台机器搭建有2台Master的生产、消费的集群进行配置示例:
 | 所属角色 | TCP端口 | TLS端口 | WEB端口 | 备注 |
 | --- | --- | --- | --- | --- |
@@ -54,7 +54,7 @@ TubeMQ 集群包含有两个组件: **Master** 和 **Broker**. Master 和 Broker
 | Broker | 8123 | 8124 | 8081 | 消息储存在`/stage/msg_data` |
 | ZooKeeper | 2181 | | | Offset储存在根目录`/tubemq` |
 
-### 准备工作
+### 2.2 准备工作
 - ZooKeeper集群
 - [apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz](download/download.md)安装包
 
@@ -68,7 +68,7 @@ TubeMQ 集群包含有两个组件: **Master** 和 **Broker**. Master 和 Broker
 └── resources
 ```
 
-### 配置Master
+### 2.3 配置Master
 编辑`conf/master.ini`,根据集群信息变更以下配置项
 
 - Master IP和端口
@@ -111,7 +111,7 @@ repHelperHost=FIRST_MASTER_NODE_IP:9001  // helperHost用于创建master集群
 **注意**:需保证Master所有节点之间的时钟同步
 
 
-### 配置Broker
+### 2.4 配置Broker
 编辑`conf/broker.ini`,根据集群信息变更以下配置项
 - Broker IP和端口
 ```ini
@@ -139,7 +139,7 @@ zkNodeRoot=/tubemq
 zkServerAddr=localhost:2181             // 指向zookeeper集群,多个地址逗号分开
 ```
 
-### 启动Master
+### 2.5 启动Master
 进入Master节点的 `bin` 目录下,启动服务:
 ```bash
 ./tubemq.sh master start
@@ -148,7 +148,7 @@ zkServerAddr=localhost:2181             // 指向zookeeper集群,多个地址
 ![TubeMQ Console GUI](img/tubemq-console-gui.png)
 
 
-#### 配置Broker元数据
+#### 2.5.1 配置Broker元数据
 Broker启动前,首先要在Master上配置Broker元数据,增加Broker相关的管理信息。在`Broker List` 页面,  `Add Single Broker`,然后填写相关信息:
 
 ![Add Broker 1](img/tubemq-add-broker-1.png)
@@ -160,7 +160,7 @@ Broker启动前,首先要在Master上配置Broker元数据,增加Broker相
 然后上线Broker:
 ![Add Broker 2](img/tubemq-add-broker-2.png)
 
-### 启动Broker
+### 2.6 启动Broker
 进入broker节点的 `bin` 目录下,执行以下命令启动Broker服务:
 
 ```bash
@@ -170,8 +170,8 @@ Broker启动前,首先要在Master上配置Broker元数据,增加Broker相
 刷新页面可以看到 Broker 已经注册,当 `当前运行子状态` 为 `idle` 时, 可以增加topic:
 ![Add Broker 3](img/tubemq-add-broker-3.png)
 
-## 快速使用
-### 新增 Topic
+## 3 快速使用
+### 3.1 新增 Topic
 
 可以通过 web GUI 添加 Topic, 在 `Topic列表`页面添加,需要填写相关信息,比如增加`demo` topic:
 ![Add Topic 1](img/tubemq-add-topic-1.png)
@@ -192,10 +192,10 @@ Broker启动前,首先要在Master上配置Broker元数据,增加Broker相
 ![Add Topic 4](img/tubemq-add-topic-4.png)
 
 
-### 运行Example
+### 3.2 运行Example
 可以通过上面创建的`demo` topic来测试集群。
 
-- 生产消息
+#### 3.2.1 生产消息
 将 `YOUR_MASTER_IP:port` 替换为实际的IP和端口,然后运行producer:
 ```bash
 cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
@@ -205,7 +205,7 @@ cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 如果能观察到如下日志,则表示数据发送成功:
 ![Demo 1](img/tubemq-send-message.png)
 
-- 消费消息
+#### 3.2.2 消费消息
 将 `YOUR_MASTER_IP:port` 替换为实际的IP和端口,然后运行Consumer:
 ```bash
 cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
@@ -217,9 +217,11 @@ cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
 ![Demo 2](img/tubemq-consume-message.png)
 
 
-## 结束
+## 4 结束
 在这里,已经完成了TubeMQ的编译,部署,系统配置,启动,生产和消费。如果需要了解更深入的内容,请查看《TubeMQ HTTP API》里的相关内容,进行相应的配置设置。
 
 ---
+<a href="#top">Back to top</a>
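One detail worth calling out from the commands above: clients take the full Master list, which is what lets them fail over between the two Masters configured in section 2. A configuration sketch (addresses keep the placeholders used above):

```java
// both Masters go into one comma-separated list, as in the shell examples
String masters = "YOUR_MASTER_IP1:port,YOUR_MASTER_IP2:port";

// producer side
TubeClientConfig producerConfig = new TubeClientConfig(masters);

// consumer side, matching the test_consume group used above
ConsumerConfig consumerConfig = new ConsumerConfig(masters, "test_consume");
consumerConfig.setConsumePosition(ConsumePosition.CONSUMER_FROM_LATEST_OFFSET);
```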
+
 
 
diff --git a/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.html b/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.html
index b19e6b3..21e3eed 100644
--- a/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.html
+++ b/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.html
@@ -13,13 +13,13 @@
 </head>
 <body>
 	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/zh-cn/index.html"><a href="//www.apache.org"><img class="logo apache" style="width:120px" src="/img/asf_logo.svg"/></a><div class="logo-split"></div><img class="logo tube" style="width:120px;top:12px;position:absolute" src="/img/Tube logo.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class= [...]
-<h2>背景</h2>
+<h2>1 背景</h2>
 <p>TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思想源于<a href="http://kafka.apache.org/">Apache Kafka</a>。在实现上,则完全采取自适应的方式,结合实战做了很多优化及研发工作,如分区管理、分配机制和全新节点通讯流程,自主开发高性能的底层RPC通讯模块等。
 这些实现使得TubeMQ在保证实时性和一致性的前提下,具有很好的健壮性及更高的吞吐能力。结合目前主流消息中间件使用情况,以Kafka为参照做性能对比测试,对比常规应用场景下两套系统性能。</p>
-<h2>测试场景方案</h2>
+<h2>2 测试场景方案</h2>
 <p>如下是我们根据实际应用场景设计的测试方案:
 <img src="img/perf_scheme.png" alt=""></p>
-<h2>测试结论</h2>
+<h2>3 测试结论</h2>
 <p>用&quot;复仇者联盟&quot;里的角色来形容:</p>
 <table>
 <thead>
@@ -59,8 +59,8 @@
 <li>在过滤消费时,TubeMQ可以极大地降低服务端的网络出流量,同时还会因过滤消费消耗的资源少于全量消费,反过来促进TubeMQ吞吐量提升;Kafka无服务端过滤,出流量与全量消费一致,流量无明显的节约;</li>
 <li>资源消耗方面各有差异:TubeMQ由于采用顺序写随机读,CPU消耗很大,Kafka采用顺序写块读,CPU消耗很小,但其他资源,如文件句柄、网络连接等消耗非常的大。在实际的SAAS模式下的运营环境里,Kafka会因为zookeeper依赖出现系统瓶颈,会因生产、消费、Broker众多,受限制的地方会更多,比如文件句柄、网络连接数等,资源消耗会更大;</li>
 </ol>
-<h2>测试环境及配置</h2>
-<p>###【软件版本及部署环境】</p>
+<h2>4 测试环境及配置</h2>
+<h3>4.1 【软件版本及部署环境】</h3>
 <table>
 <thead>
 <tr>
@@ -102,7 +102,7 @@
 </tr>
 </tbody>
 </table>
-<p>###【Broker硬件机型配置】</p>
+<h3>4.2 【Broker硬件机型配置】</h3>
 <table>
 <thead>
 <tr>
@@ -129,7 +129,7 @@
 </tr>
 </tbody>
 </table>
-<p>###【Broker系统配置】</p>
+<h3>4.3 【Broker系统配置】</h3>
 <table>
 <thead>
 <tr>
@@ -161,21 +161,21 @@
 </tr>
 </tbody>
 </table>
-<h2>测试场景及结论</h2>
-<h3>场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能</h3>
+<h2>5 测试场景及结论</h2>
+<h3>5.1 场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能</h3>
 <p><img src="img/perf_scenario_1.png" alt=""></p>
-<p>####【结论】</p>
+<h4>5.1.1 【结论】</h4>
 <p>在单topic不同分区的情况下:</p>
 <ol>
 <li>TubeMQ吞吐量不随分区变化而变化,同时TubeMQ属于顺序写随机读模式,单实例情况下吞吐量要低于Kafka,CPU要高于Kafka;</li>
 <li>Kafka随着分区增多吞吐量略有下降,CPU使用率很低;</li>
 <li>TubeMQ分区由于是逻辑分区,增加分区不影响吞吐量;Kafka分区为物理文件的增加,但增加分区入出流量反而会下降;</li>
 </ol>
-<p>####【指标】
-<img src="img/perf_scenario_1_index.png" alt=""></p>
+<h4>5.1.2 【指标】</h4>
+<p><img src="img/perf_scenario_1_index.png" alt=""></p>
-<h3>场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况</h3>
+<h3>5.2 场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况</h3>
 <p><img src="img/perf_scenario_2.png" alt=""></p>
-<p>####【结论】</p>
+<h4>5.2.1 【结论】</h4>
 <p>从场景一和场景二的测试数据结合来看:</p>
 <ol>
 <li>TubeMQ随着实例数增多,吞吐量增长,在4个实例的时候吞吐量与Kafka持平,磁盘IO使用率比Kafka低,CPU使用率比Kafka高;</li>
@@ -184,14 +184,14 @@
 <li>TubeMQ按照Kafka等同的增加实例(物理文件)后,吞吐量随之提升,在4个实例的时候测试效果达到并超过Kafka
 5个分区的状态;TubeMQ可以根据业务或者系统配置需要,调整数据读取方式,可以动态提升系统的吞吐量;Kafka随着分区增加,入流量有下降;</li>
 </ol>
-<p>####【指标】</p>
+<h4>5.2.2 【指标】</h4>
 <p><strong>注1 :</strong> 如下场景中,均为单Topic测试下不同分区或实例、不同读取模式场景下的测试,单条消息包长均为1K;</p>
 <p><strong>注2 :</strong>
 读取模式通过admin_upd_def_flow_control_rule设置qryPriorityId为对应值.
 <img src="img/perf_scenario_2_index.png" alt=""></p>
-<h3>场景三:多topic场景,固定消息包大小、实例及分区数,考察100、200、500、1000个topic场景下TubeMQ和Kafka性能情况</h3>
+<h3>5.3 场景三:多topic场景,固定消息包大小、实例及分区数,考察100、200、500、1000个topic场景下TubeMQ和Kafka性能情况</h3>
 <p><img src="img/perf_scenario_3.png" alt=""></p>
-<p>####【结论】</p>
+<h4>5.3.1 【结论】</h4>
 <p>按照多Topic场景下测试:</p>
 <ol>
 <li>TubeMQ随着Topic数增加,生产和消费性能维持在一个均线上,没有特别大的流量波动,占用的文件句柄、内存量、网络连接数不多(1k
@@ -201,20 +201,20 @@ topic下文件句柄约7500个,网络连接150个),但CPU占用比较大
 Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题;</li>
 <li>数据对比来看,TubeMQ相比Kafka运行更稳定,吞吐量以稳定形势呈现,长时间跑吞吐量不下降,资源占用少,但CPU的占用需要后续版本解决;</li>
 </ol>
-<p>####【指标】</p>
+<h4>5.3.2 【指标】</h4>
 <p><strong>注:</strong> 如下场景中,包长均为1K,分区数均为10。
 <img src="img/perf_scenario_3_index.png" alt=""></p>
-<h3>场景四:100个topic,一入一全量出五份部分过滤出:一份全量Topic的Pull消费;过滤消费采用5个不同的消费组,从同样的20个Topic中过滤出10%消息内容</h3>
-<p>####【结论】</p>
+<h3>5.4 场景四:100个topic,一入一全量出五份部分过滤出:一份全量Topic的Pull消费;过滤消费采用5个不同的消费组,从同样的20个Topic中过滤出10%消息内容</h3>
+<h4>5.4.1 【结论】</h4>
 <ol>
 <li>TubeMQ采用服务端过滤的模式,出流量指标与入流量存在明显差异;</li>
 <li>TubeMQ服务端过滤提供了更多的资源给到生产,生产性能比非过滤情况有提升;</li>
 <li>Kafka采用客户端过滤模式,入流量没有提升,出流量差不多是入流量的2倍,同时入出流量不稳定;</li>
 </ol>
-<p>####【指标】</p>
+<h4>5.4.2 【指标】</h4>
 <p><strong>注:</strong> 如下场景中,topic为100,包长均为1K,分区数均为10
 <img src="img/perf_scenario_4_index.png" alt=""></p>
-<h3>场景五:TubeMQ、Kafka数据消费时延比对</h3>
+<h3>5.5 场景五:TubeMQ、Kafka数据消费时延比对</h3>
 <table>
 <thead>
 <tr>
@@ -237,27 +237,27 @@ Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题
 </tbody>
 </table>
 <p>备注:TubeMQ的消费端存在一个等待队列处理消息追平生产时的数据未找到的情况,缺省有200ms的等待时延。测试该项时,TubeMQ消费端要调整拉取时延(ConsumerConfig.setMsgNotFoundWaitPeriodMs())为10ms,或者设置频控策略为10ms。</p>
-<h3>场景六:调整Topic配置的内存缓存大小(memCacheMsgSizeInMB)对吞吐量的影响</h3>
-<p>####【结论】</p>
+<h3>5.6 场景六:调整Topic配置的内存缓存大小(memCacheMsgSizeInMB)对吞吐量的影响</h3>
+<h4>5.6.1 【结论】</h4>
 <ol>
 <li>TubeMQ调整Topic的内存缓存大小能对吞吐量形成正面影响,实际使用时可以根据机器情况合理调整;</li>
 <li>从实际使用情况看,内存大小设置并不是越大越好,需要合理设置该值;</li>
 </ol>
-<p>####【指标】</p>
+<h4>5.6.2 【指标】</h4>
 <p><strong>注:</strong> 如下场景中,消费方式均为读取内存(301)的PULL消费,单条消息包长均为1K
 <img src="img/perf_scenario_6_index.png" alt=""></p>
-<h3>场景七:消费严重滞后情况下两系统的表现</h3>
-<p>####【结论】</p>
+<h3>5.7 场景七:消费严重滞后情况下两系统的表现</h3>
+<h4>5.7.1 【结论】</h4>
 <ol>
 <li>消费严重滞后情况下,TubeMQ和Kafka都会因磁盘IO飙升使得生产消费受阻;</li>
 <li>在带SSD系统里,TubeMQ可以通过SSD转存储消费来换取部分生产和消费入流量;</li>
 <li>按照版本计划,目前TubeMQ的SSD消费转存储特性不是最终实现,后续版本中将进一步改进,使其达到最合适的运行方式;</li>
 </ol>
-<p>####【指标】
-<img src="img/perf_scenario_7.png" alt=""></p>
-<h3>场景八:评估多机型情况下两系统的表现</h3>
+<h4>5.7.2 【指标】</h4>
+<p><img src="img/perf_scenario_7.png" alt=""></p>
+<h3>5.8 场景八:评估多机型情况下两系统的表现</h3>
 <p><img src="img/perf_scenario_8.png" alt=""></p>
-<p>####【结论】</p>
+<h4>5.8.1 【结论】</h4>
 <ol>
 <li>TubeMQ在BX1机型下较TS60机型有更高的吞吐量,同时因IO util达到瓶颈无法再提升,吞吐量在CG1机型下又较BX1达到更高的指标值;</li>
 <li>Kafka在BX1机型下系统吞吐量不稳定,且较TS60下测试的要低,在CG1机型下系统吞吐量达到最高,万兆网卡跑满;</li>
@@ -265,24 +265,25 @@ Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题
 <li>在SSD盘存储条件下,Kafka性能指标达到最好,TubeMQ指标不及Kafka;</li>
 <li>CG1机型数据存储盘较小(仅2.2T),RAID 10配置下90分钟以内磁盘即被写满,无法测试两系统长时间运行情况。</li>
 </ol>
-<p>####【指标】</p>
+<h4>5.8.2 【指标】</h4>
 <p><strong>注1:</strong> 如下场景Topic数均配置500个topic,10个分区,消息包大小为1K字节;</p>
 <p><strong>注2:</strong> TubeMQ采用的是301内存读取模式消费;
 <img src="img/perf_scenario_8_index.png" alt=""></p>
-<h2>附录1 不同机型下资源占用情况图:</h2>
-<p>###【BX1机型测试】
-<img src="img/perf_appendix_1_bx1_1.png" alt="">
+<h2>6 附录</h2>
+<h3>6.1 附录1 不同机型下资源占用情况图:</h3>
+<h4>6.1.1 【BX1机型测试】</h4>
+<p><img src="img/perf_appendix_1_bx1_1.png" alt="">
 <img src="img/perf_appendix_1_bx1_2.png" alt="">
 <img src="img/perf_appendix_1_bx1_3.png" alt="">
 <img src="img/perf_appendix_1_bx1_4.png" alt=""></p>
-<p>###【CG1机型测试】
-<img src="img/perf_appendix_1_cg1_1.png" alt="">
+<h4>6.1.2 【CG1机型测试】</h4>
+<p><img src="img/perf_appendix_1_cg1_1.png" alt="">
 <img src="img/perf_appendix_1_cg1_2.png" alt="">
 <img src="img/perf_appendix_1_cg1_3.png" alt="">
 <img src="img/perf_appendix_1_cg1_4.png" alt=""></p>
-<h2>附录2 多Topic测试时的资源占用情况图:</h2>
-<p>###【100个topic】
-<img src="img/perf_appendix_2_topic_100_1.png" alt="">
+<h3>6.2 附录2 多Topic测试时的资源占用情况图:</h3>
+<h4>6.2.1 【100个topic】</h4>
+<p><img src="img/perf_appendix_2_topic_100_1.png" alt="">
 <img src="img/perf_appendix_2_topic_100_2.png" alt="">
 <img src="img/perf_appendix_2_topic_100_3.png" alt="">
 <img src="img/perf_appendix_2_topic_100_4.png" alt="">
@@ -291,8 +292,8 @@ Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题
 <img src="img/perf_appendix_2_topic_100_7.png" alt="">
 <img src="img/perf_appendix_2_topic_100_8.png" alt="">
 <img src="img/perf_appendix_2_topic_100_9.png" alt=""></p>
-<p>###【200个topic】
-<img src="img/perf_appendix_2_topic_200_1.png" alt="">
+<h4>6.2.2 【200个topic】</h4>
+<p><img src="img/perf_appendix_2_topic_200_1.png" alt="">
 <img src="img/perf_appendix_2_topic_200_2.png" alt="">
 <img src="img/perf_appendix_2_topic_200_3.png" alt="">
 <img src="img/perf_appendix_2_topic_200_4.png" alt="">
@@ -301,8 +302,8 @@ Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题
 <img src="img/perf_appendix_2_topic_200_7.png" alt="">
 <img src="img/perf_appendix_2_topic_200_8.png" alt="">
 <img src="img/perf_appendix_2_topic_200_9.png" alt=""></p>
-<p>###【500个topic】
-<img src="img/perf_appendix_2_topic_500_1.png" alt="">
+<h4>6.2.3 【500个topic】</h4>
+<p><img src="img/perf_appendix_2_topic_500_1.png" alt="">
 <img src="img/perf_appendix_2_topic_500_2.png" alt="">
 <img src="img/perf_appendix_2_topic_500_3.png" alt="">
 <img src="img/perf_appendix_2_topic_500_4.png" alt="">
@@ -311,8 +312,8 @@ Topic配置时,网络连接达到了1.2W,文件句柄达到了4.5W)等问题
 <img src="img/perf_appendix_2_topic_500_7.png" alt="">
 <img src="img/perf_appendix_2_topic_500_8.png" alt="">
 <img src="img/perf_appendix_2_topic_500_9.png" alt=""></p>
-<p>###【1000个topic】
-<img src="img/perf_appendix_2_topic_1000_1.png" alt="">
+<h4>6.2.4 【1000个topic】</h4>
+<p><img src="img/perf_appendix_2_topic_1000_1.png" alt="">
 <img src="img/perf_appendix_2_topic_1000_2.png" alt="">
 <img src="img/perf_appendix_2_topic_1000_3.png" alt="">
 <img src="img/perf_appendix_2_topic_1000_4.png" alt="">
diff --git a/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.json b/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.json
index c3e05b7..047e08b 100644
--- a/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.json
+++ b/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.json
@@ -1,6 +1,6 @@
 {
   "filename": "tubemq_perf_test_vs_Kafka_cn.md",
-  "__html": "<h1>TubeMQ VS Kafka性能对比测试总结</h1>\n<h2>背景</h2>\n<p>TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思想源于<a href=\"http://kafka.apache.org/\">Apache Kafka</a>。在实现上,则完全采取自适应的方式,结合实战做了很多优化及研发工作,如分区管理、分配机制和全新节点通讯流程,自主开发高性能的底层RPC通讯模块等。\n这些实现使得TubeMQ在保证实时性和一致性的前提下,具有很好的健壮性及更高的吞吐能力。结合目前主流消息中间件使用情况,以Kafka为参照做性能对比测试,对比常规应用场景下两套系统性能。</p>\n<h2>测试场景方案</h2>\n<p>如下是我们根据实际应用场景设计的测试方案:\n<img src=\"img/perf_scheme.png\" alt=\"\"></p>\n<h2>测试结论</h2>\n<p>用&quot;复仇者联盟&quot;里的角色来形容:</p>\n<table>\n<thead>\n<tr>\n<th  [...]
+  "__html": "<h1>TubeMQ VS Kafka性能对比测试总结</h1>\n<h2>1 背景</h2>\n<p>TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思想源于<a href=\"http://kafka.apache.org/\">Apache Kafka</a>。在实现上,则完全采取自适应的方式,结合实战做了很多优化及研发工作,如分区管理、分配机制和全新节点通讯流程,自主开发高性能的底层RPC通讯模块等。\n这些实现使得TubeMQ在保证实时性和一致性的前提下,具有很好的健壮性及更高的吞吐能力。结合目前主流消息中间件使用情况,以Kafka为参照做性能对比测试,对比常规应用场景下两套系统性能。</p>\n<h2>2 测试场景方案</h2>\n<p>如下是我们根据实际应用场景设计的测试方案:\n<img src=\"img/perf_scheme.png\" alt=\"\"></p>\n<h2>3 测试结论</h2>\n<p>用&quot;复仇者联盟&quot;里的角色来形容:</p>\n<table>\n<thead>\n<tr> [...]
   "link": "/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md b/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
index 45916f6..321a82d 100644
--- a/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
+++ b/zh-cn/docs/modules/tubemq/tubemq_perf_test_vs_Kafka_cn.md
@@ -1,14 +1,14 @@
 # TubeMQ VS Kafka性能对比测试总结
 
-## 背景
+## 1 背景
 TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思想源于[Apache Kafka](http://kafka.apache.org/)。在实现上,则完全采取自适应的方式,结合实战做了很多优化及研发工作,如分区管理、分配机制和全新节点通讯流程,自主开发高性能的底层RPC通讯模块等。
 这些实现使得TubeMQ在保证实时性和一致性的前提下,具有很好的健壮性及更高的吞吐能力。结合目前主流消息中间件使用情况,以Kafka为参照做性能对比测试,对比常规应用场景下两套系统性能。
 
-## 测试场景方案
+## 2 测试场景方案
 如下是我们根据实际应用场景设计的测试方案:
 ![](img/perf_scheme.png)
 
-## 测试结论
+## 3 测试结论
 用"复仇者联盟"里的角色来形容:
 
 角色|测试场景|要点
@@ -24,8 +24,8 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 3. 在过滤消费时,TubeMQ可以极大地降低服务端的网络出流量,同时还会因过滤消费消耗的资源少于全量消费,反过来促进TubeMQ吞吐量提升;Kafka无服务端过滤,出流量与全量消费一致,流量无明显的节约;
 4. 资源消耗方面各有差异:TubeMQ由于采用顺序写随机读,CPU消耗很大,Kafka采用顺序写块读,CPU消耗很小,但其他资源,如文件句柄、网络连接等消耗非常的大。在实际的SAAS模式下的运营环境里,Kafka会因为zookeeper依赖出现系统瓶颈,会因生产、消费、Broker众多,受限制的地方会更多,比如文件句柄、网络连接数等,资源消耗会更大;
 
-## 测试环境及配置
-###【软件版本及部署环境】
+## 4 测试环境及配置
+### 4.1 【软件版本及部署环境】
 
 **角色**|**TubeMQ**|**Kafka**
 :---:|---|---
@@ -36,7 +36,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 **Producer**|1台M10 + 1台CG1|1台M10 + 1台CG1
 **Consumer**|6台TS50万兆机|6台TS50万兆机
 
-###【Broker硬件机型配置】
+### 4.2 【Broker硬件机型配置】
 
 **机型**|配置|**备注**
 :---:|---|---
@@ -44,7 +44,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 **BX1-10G**|SA5212M5(6133\*2/16G\*16/4T\*12/10GE\*2) Pcs|                                     
 **CG1-10G**|CG1-10G\_6.0.2.12\_RM760-FX(6133\*2/16G\*16/5200-480G\*6 RAID/10GE\*2)-ODM Pcs |  
 
-###【Broker系统配置】
+### 4.3 【Broker系统配置】
 
 | **配置项**            | **TubeMQ Broker**     | **Kafka Broker**      |
 |:---:|---|---|
@@ -53,25 +53,25 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 | **配置文件**          | 在tubemq-3.8.0版本broker.ini配置文件上改动: consumerRegTimeoutMs=35000<br>tcpWriteServiceThread=50<br>tcpReadServiceThread=50<br>primaryPath为SATA盘日志目录|kafka_2.11-0.10.2.0版本server.properties配置文件上改动:<br>log.flush.interval.messages=5000<br>log.flush.interval.ms=10000<br>log.dirs为SATA盘日志目录<br>socket.send.buffer.bytes=1024000<br>socket.receive.buffer.bytes=1024000<br>socket.request.max.bytes=2147483600<br>log.segment.bytes=1073741824<br>num.network.threads=25<br>num.io.threads=48< [...]
 | **其它**             | 除测试用例里特别指定,每个topic创建时设置:<br>memCacheMsgSizeInMB=5<br>memCacheFlushIntvl=20000<br>memCacheMsgCntInK=10 <br>unflushThreshold=5000<br>unflushInterval=10000<br>unFlushDataHold=5000 | 客户端代码里设置:<br>生产端:<br>props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br>props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");<br>props.put("linger.ms", "200");<br>props.put("block.on.buffer.full", false);<br>props.pu [...]
               
-## 测试场景及结论
+## 5 测试场景及结论
 
-### 场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能
+### 5.1 场景一:基础场景,单topic情况,一入两出模型,分别使用不同的消费模式、不同大小的消息包,分区逐步做横向扩展,对比TubeMQ和Kafka性能
  ![](img/perf_scenario_1.png)
 
-####【结论】
+#### 5.1.1 【结论】
 
 在单topic不同分区的情况下:
 1. TubeMQ吞吐量不随分区变化而变化,同时TubeMQ属于顺序写随机读模式,单实例情况下吞吐量要低于Kafka,CPU要高于Kafka;
 2. Kafka随着分区增多吞吐量略有下降,CPU使用率很低;
 3. TubeMQ分区由于是逻辑分区,增加分区不影响吞吐量;Kafka分区为物理文件的增加,但增加分区入出流量反而会下降;
 
-####【指标】
+#### 5.1.2 【指标】
  ![](img/perf_scenario_1_index.png)
 
-### 场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况
+### 5.2 场景二:单topic情况,一入两出模型,固定消费包大小,横向扩展实例数,对比TubeMQ和Kafka性能情况
  ![](img/perf_scenario_2.png)
 
-####【结论】
+#### 5.2.1 【结论】
 
 从场景一和场景二的测试数据结合来看:
 
@@ -81,7 +81,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 4. TubeMQ按照Kafka等同的增加实例(物理文件)后,吞吐量随之提升,在4个实例的时候测试效果达到并超过Kafka
     5个分区的状态;TubeMQ可以根据业务或者系统配置需要,调整数据读取方式,可以动态提升系统的吞吐量;Kafka随着分区增加,入流量有下降;
 
-####【指标】
+#### 5.2.2 【指标】
 
 **注1 :** 如下场景中,均为单Topic测试下不同分区或实例、不同读取模式场景下的测试,单条消息包长均为1K;
 
@@ -89,10 +89,10 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 读取模式通过admin\_upd\_def\_flow\_control\_rule设置qryPriorityId为对应值.
  ![](img/perf_scenario_2_index.png)
 
-### Scenario 3: Multi-topic scenario with fixed message size, instance count, and partition count; examine TubeMQ and Kafka performance with 100, 200, 500, and 1000 topics
+### 5.3 Scenario 3: Multi-topic scenario with fixed message size, instance count, and partition count; examine TubeMQ and Kafka performance with 100, 200, 500, and 1000 topics
  ![](img/perf_scenario_3.png)
 
-#### [Conclusions]
+#### 5.3.1 [Conclusions]

 From the multi-topic tests:
 
@@ -103,25 +103,25 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
     topics were configured, network connections reached 12,000 and file handles 45,000), and similar problems;
 4.  Comparing the data, TubeMQ runs more stably than Kafka: throughput holds steady and does not drop over long runs, with low resource usage, though its CPU usage needs to be addressed in a later version;
 
-#### [Metrics]
+#### 5.3.2 [Metrics]

 **Note:** In the cases below, the message size is 1KB and the partition count is 10.
  ![](img/perf_scenario_3_index.png)
 
-### Scenario 4: 100 topics, one inbound stream, one full outbound stream, and five partially filtered outbound streams: one full-topic Pull consumption; filtered consumption uses 5 different consumer groups, each filtering out 10% of the message content from the same 20 topics
+### 5.4 Scenario 4: 100 topics, one inbound stream, one full outbound stream, and five partially filtered outbound streams: one full-topic Pull consumption; filtered consumption uses 5 different consumer groups, each filtering out 10% of the message content from the same 20 topics
 
-#### [Conclusions]
+#### 5.4.1 [Conclusions]

 1.  TubeMQ filters on the server side, so its outbound traffic differs markedly from its inbound traffic (see the subscribe sketch after this list);
 2.  TubeMQ's server-side filtering frees up more resources for production, so production performs better than in the unfiltered case;
 3.  Kafka filters on the client side: inbound traffic shows no improvement, outbound traffic is roughly twice the inbound traffic, and both are unstable;
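 For context, TubeMQ's server-side filtering is driven by filter conditions that the consumer supplies at subscribe time, so non-matching messages are dropped on the broker and never cross the network. The sketch below is illustrative, not the test code: the master address, consumer group, topic, and filter value are placeholders, and the package path assumes the Apache TubeMQ Java client.

 ```java
 import java.util.TreeSet;

 import org.apache.tubemq.client.config.ConsumerConfig;
 import org.apache.tubemq.client.consumer.PullMessageConsumer;
 import org.apache.tubemq.client.factory.MessageSessionFactory;
 import org.apache.tubemq.client.factory.TubeSingleSessionFactory;

 public class FilterConsumerSketch {
     public static void main(String[] args) throws Exception {
         ConsumerConfig consumerConfig = new ConsumerConfig("master-host:8715", "test_group");
         MessageSessionFactory sessionFactory = new TubeSingleSessionFactory(consumerConfig);
         PullMessageConsumer consumer = sessionFactory.createPullConsumer(consumerConfig);
         // Filter conditions are evaluated on the broker, so messages that do not
         // match are never sent over the network to this consumer.
         TreeSet<String> filterConds = new TreeSet<>();
         filterConds.add("tid_1"); // placeholder filter value
         consumer.subscribe("test_topic", filterConds);
         consumer.completeSubscribe();
         // Messages are then pulled in a loop via consumer.getMessage() (omitted here).
     }
 }
 ```
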
 
-#### [Metrics]
+#### 5.4.2 [Metrics]

 **Note:** In the cases below, there are 100 topics, the message size is 1KB, and the partition count is 10
  ![](img/perf_scenario_4_index.png)
 
-### Scenario 5: TubeMQ vs. Kafka data consumption latency
+### 5.5 Scenario 5: TubeMQ vs. Kafka data consumption latency

 | Type   | Latency            | Ping Latency                |
 |---|---|---|
@@ -130,35 +130,35 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 
 Note: TubeMQ's consumer keeps a wait queue for the case where consumption has caught up with production and no data is found, with a default wait of 200ms. For this test, the TubeMQ consumer's pull wait (ConsumerConfig.setMsgNotFoundWaitPeriodMs()) must be lowered to 10ms, or the flow-control policy set to 10ms.
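 A minimal sketch of that adjustment is shown below (not the test code: the master address and consumer group are placeholders, and the package path assumes the Apache TubeMQ Java client).

 ```java
 import org.apache.tubemq.client.config.ConsumerConfig;

 public class LatencyTestTuning {
     public static void main(String[] args) {
         // Placeholder master address and consumer group.
         ConsumerConfig consumerConfig = new ConsumerConfig("master-host:8715", "test_group");
         // By default, a consumer that has caught up with production waits up to
         // 200ms when no data is found; lower the wait to 10ms so it does not
         // dominate the measured end-to-end latency.
         consumerConfig.setMsgNotFoundWaitPeriodMs(10L);
     }
 }
 ```
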
 
-### Scenario 6: Effect of adjusting a topic's memory cache size (memCacheMsgSizeInMB) on throughput
+### 5.6 Scenario 6: Effect of adjusting a topic's memory cache size (memCacheMsgSizeInMB) on throughput
 
-#### [Conclusions]
+#### 5.6.1 [Conclusions]

 1.  Adjusting a topic's memory cache size has a positive effect on TubeMQ's throughput, and it can be tuned to the machine in actual use (see the sketch after this list);
 2.  In practice, bigger is not always better; the value needs to be set sensibly;
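 Since memCacheMsgSizeInMB is a per-topic attribute, tuning it means changing the topic's configuration on the master. A hypothetical console call might look like the line below; the host and token are placeholders, and the method and parameter names (beyond memCacheMsgSizeInMB itself) are assumptions to be verified against the HTTP access API documentation.

 ```
 http://{master_ip}:8080/webapi.htm?type=op_modify&method=admin_modify_topic_info&topicName=test_topic&memCacheMsgSizeInMB=5&modifyUser=admin&confModAuthToken={token}
 ```
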
 
-#### [Metrics]
+#### 5.6.2 [Metrics]

  **Note:** In the cases below, consumption is Pull consumption reading from memory (mode 301), and the message size is 1KB
  ![](img/perf_scenario_6_index.png)
  
 
-### Scenario 7: Behavior of the two systems under severe consumption lag
+### 5.7 Scenario 7: Behavior of the two systems under severe consumption lag
 
-#### [Conclusions]
+#### 5.7.1 [Conclusions]

 1.  Under severe consumption lag, both TubeMQ and Kafka see production and consumption stall as disk I/O spikes;
 2.  On systems with SSDs, TubeMQ can offload lagging consumption to SSD storage to recover some production and consumption inbound traffic;
 3.  Per the release plan, TubeMQ's current SSD offload feature is not the final implementation; later versions will improve it toward the most suitable mode of operation;
 
-#### [Metrics]
+#### 5.7.2 [Metrics]
  ![](img/perf_scenario_7.png)
 
 
-### Scenario 8: Evaluating the two systems across machine types
+### 5.8 Scenario 8: Evaluating the two systems across machine types
  ![](img/perf_scenario_8.png)
       
-#### [Conclusions]
+#### 5.8.1 [Conclusions]
 
 1.  TubeMQ achieves higher throughput on BX1 than on TS60, then cannot rise further once I/O util hits its bottleneck; on CG1 it reaches a still higher figure than on BX1;
 2.  Kafka's throughput on BX1 is unstable and lower than on TS60; on CG1 its throughput peaks, saturating the 10GE NIC;
@@ -166,29 +166,30 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 4.  With SSD storage, Kafka posts its best numbers, and TubeMQ's figures fall short of Kafka's;
 5.  CG1 machines have small data disks (only 2.2T); under RAID 10 the disks fill within 90 minutes, so long-run behavior of the two systems could not be tested.
 
-#### [Metrics]
+#### 5.8.2 [Metrics]

 **Note 1:** The cases below all use 500 topics with 10 partitions and a message size of 1KB;

 **Note 2:** TubeMQ consumes in memory-read mode 301;
  ![](img/perf_scenario_8_index.png)
 
-## Appendix 1: Resource usage charts by machine type:
-### [BX1 tests]
+## 6 Appendix
+### 6.1 Appendix 1: Resource usage charts by machine type
+#### 6.1.1 [BX1 tests]
 ![](img/perf_appendix_1_bx1_1.png)
 ![](img/perf_appendix_1_bx1_2.png)
 ![](img/perf_appendix_1_bx1_3.png)
 ![](img/perf_appendix_1_bx1_4.png)
 
-### [CG1 tests]
+#### 6.1.2 [CG1 tests]
 ![](img/perf_appendix_1_cg1_1.png)
 ![](img/perf_appendix_1_cg1_2.png)
 ![](img/perf_appendix_1_cg1_3.png)
 ![](img/perf_appendix_1_cg1_4.png)
 
-## Appendix 2: Resource usage charts for the multi-topic tests:
+### 6.2 Appendix 2: Resource usage charts for the multi-topic tests
 
-### [100 topics]
+#### 6.2.1 [100 topics]
 ![](img/perf_appendix_2_topic_100_1.png)
 ![](img/perf_appendix_2_topic_100_2.png)
 ![](img/perf_appendix_2_topic_100_3.png)
@@ -199,7 +200,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_100_8.png)
 ![](img/perf_appendix_2_topic_100_9.png)
  
-### [200 topics]
+#### 6.2.2 [200 topics]
 ![](img/perf_appendix_2_topic_200_1.png)
 ![](img/perf_appendix_2_topic_200_2.png)
 ![](img/perf_appendix_2_topic_200_3.png)
@@ -210,7 +211,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_200_8.png)
 ![](img/perf_appendix_2_topic_200_9.png)
 
-### [500 topics]
+#### 6.2.3 [500 topics]
 ![](img/perf_appendix_2_topic_500_1.png)
 ![](img/perf_appendix_2_topic_500_2.png)
 ![](img/perf_appendix_2_topic_500_3.png)
@@ -221,7 +222,7 @@ TubeMQ是腾讯大数据自研的分布式消息中间件。其系统架构思
 ![](img/perf_appendix_2_topic_500_8.png)
 ![](img/perf_appendix_2_topic_500_9.png)
 
-### [1000 topics]
+#### 6.2.4 [1000 topics]
 ![](img/perf_appendix_2_topic_1000_1.png)
 ![](img/perf_appendix_2_topic_1000_2.png)
 ![](img/perf_appendix_2_topic_1000_3.png)