Posted to commits@tubemq.apache.org by za...@apache.org on 2020/05/29 08:27:32 UTC

[incubator-tubemq-website] branch master updated: [TUBEMQ-142] Organize documents (#6)

This is an automated email from the ASF dual-hosted git repository.

zakwu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tubemq-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 3ea81a3  [TUBEMQ-142] Organize documents (#6)
3ea81a3 is described below

commit 3ea81a3c240c789d21502c89da98ac2f4c65e5b8
Author: Tboy <gu...@immomo.com>
AuthorDate: Fri May 29 16:27:25 2020 +0800

    [TUBEMQ-142] Organize documents (#6)
---
 docs/en-us/architecture.md     |  26 +++++
 docs/en-us/consumer_example.md |  96 +++++++++++++++++
 docs/en-us/contact.md          |  26 ++---
 docs/en-us/contribution.md     |  19 ++--
 docs/en-us/deployment.md       | 161 +++++++++++++++++++++++++++++
 docs/en-us/producer_example.md | 148 ++++++++++++++++++++++++++
 docs/en-us/quick_start.md      | 230 +++++++++++++++++++++++++++++++++++++++++
 docs/zh-cn/architecture.md     |  26 +++++
 docs/zh-cn/consumer_example.md |  96 +++++++++++++++++
 docs/zh-cn/contact.md          |  26 ++---
 docs/zh-cn/contribution.md     |  19 ++--
 docs/zh-cn/deployment.md       | 161 +++++++++++++++++++++++++++++
 docs/zh-cn/producer_example.md | 148 ++++++++++++++++++++++++++
 docs/zh-cn/quick_start.md      | 230 +++++++++++++++++++++++++++++++++++++++++
 site_config/docs.js            | 114 +++++++++++++-------
 15 files changed, 1434 insertions(+), 92 deletions(-)

diff --git a/docs/en-us/architecture.md b/docs/en-us/architecture.md
new file mode 100644
index 0000000..7f0e3de
--- /dev/null
+++ b/docs/en-us/architecture.md
@@ -0,0 +1,26 @@
+## TubeMQ Architecture: ##
+After years of evolution, the TubeMQ cluster is divided into the following 5 parts: 
+![](img/sys_structure.png)
+
+- **Portal:** The Portal part is responsible for external interaction and maintenance operations, and includes the API and the Web. The API connects to management systems outside the cluster; the Web is a page-level encapsulation of daily operation and maintenance functions built on top of the API;
+
+- **Master:** The Control part of the cluster, composed of one or more Master nodes. Master HA performs heartbeat keep-alive and real-time hot-standby switching between Master nodes (this is why you need to fill in the addresses of all Master nodes of the cluster when using the TubeMQ Lib). The active Master is responsible for managing the state of the entire cluster, resource scheduling, permission checking, metadata queries, etc.;
+
+- **Broker:** The Store part, responsible for data storage and composed of independent Broker nodes. Each Broker node manages the set of Topics on that node, including adding, deleting, modifying, and querying Topics. It is also responsible for message storage, consumption, aging, partition expansion, and consumption offset records on those topics, and the external capabilities of the cluster, including the number of topics, throughput, and capacity, are complete [...]
+
+- **Client:** The Client part, responsible for data production and consumption and provided in the form of a Lib. The consumer is the most commonly used part. Compared with earlier versions, the consumer now supports both Push and Pull data-fetch modes, and consumption supports both ordered and filtered consumption. For the Pull mode, the service supports resetting a precise offset through the client so that the business can achieve exactly-once consumption. At the same time, the cons [...]
+
+- **Zookeeper:** The ZooKeeper part, responsible for offset storage. Its role has been reduced to persistent storage of offsets only; the module is kept for now with the upcoming multi-node replication feature in mind;
+
+## Broker File Storage Scheme Improvement: ##
+Systems that use disks as the medium for data persistence face a variety of performance issues caused by the disks, and TubeMQ is no exception. Performance improvement largely comes down to how message data is read, written, and stored. In this respect, TubeMQ has made the following improvements:
+
+1. **File structure and organization adjustment:** TubeMQ's disk storage scheme is similar to Kafka's, but not identical, as shown in the figure below. A storage instance consists of an index file and a data file; each topic can be allocated one or more storage instances, and each topic separately maintains its storage-instance management settings, including aging cycle, number of partitions, whether it is readable or writable, etc.
+![](img/store_file.png)
+
+2. **Memory block cache:** On top of file storage, we add a memory cache block for each storage instance, i.e. a piece of memory in front of the original disk write path to isolate the slowness of the hard disk. Data is first flushed into memory, and the memory control block then flushes the data to disk files in bulk.
+![](img/store_mem.png)
+
+3. **SSD Auxiliary Storage:** For servers that have SSD hardware in addition to disk storage, we add a layer of SSD auxiliary storage. This differs from the common practice in which systems write data to SSD first and then transfer it from SSD to disk: according to our analysis, for normal sequential disk access the performance is sufficient to meet the needs of data persistence. When disk IO reaches 100%, the performance degradation is mainly due to lagged consumption, [...]
+![](img/store_ssd.png)
+
diff --git a/docs/en-us/consumer_example.md b/docs/en-us/consumer_example.md
new file mode 100644
index 0000000..62236a1
--- /dev/null
+++ b/docs/en-us/consumer_example.md
@@ -0,0 +1,96 @@
+## Consumer Example
+  TubeMQ provides two ways to consume messages, PullConsumer and PushConsumer:
+
+1. PullConsumer 
+   ```
+   public class PullConsumerExample {
+   
+       public static void main(String[] args) throws Throwable {
+           final String localHostIP = "127.0.0.1";
+           final String masterHostAndPort = "localhost:8000";
+           final String topic = "test";
+           final String group = "test-group";
+           final ConsumerConfig consumerConfig = new ConsumerConfig(localHostIP, masterHostAndPort, group);
+           /* consumeModel
+            *  Set the start position of the consumer group. The value can be [-1, 0, 1]. Default value is 0.
+            * -1: Start from 0 for the first time. Otherwise start from last consume position.
+            *  0: Start from the latest position for the first time. Otherwise start from last consume position.
+            *  1: Start from the latest consume position.
+           */
+           consumerConfig.setConsumeModel(0);
+           final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+           final PullMessageConsumer messagePullConsumer = messageSessionFactory.createPullConsumer(consumerConfig);
+           messagePullConsumer.subscribe(topic, null);
+           messagePullConsumer.completeSubscribe();
+           // wait for client to join the exact consumer queue that consumer group allocated
+           while (!messagePullConsumer.isPartitionsReady(1000)) {
+               ThreadUtils.sleep(1000);
+           }
+           while(true){
+               ConsumerResult result = messagePullConsumer.getMessage();
+               if (result.isSuccess()) {
+                   List<Message> messageList = result.getMessageList();
+                   for (Message message : messageList) {
+                       System.out.println("received message : " + message);
+                   }
+                   messagePullConsumer.confirmConsume(result.getConfirmContext(), true);
+               } else{
+                   if (result.getErrCode() == 400) {
+                       ThreadUtils.sleep(100);
+                   } else {
+                       if (result.getErrCode() != 404) {
+                           System.out.println(String.format("Receive messages errorCode is %d, Error message is %s", result.getErrCode(), result.getErrMsg()));
+                       }
+                   }
+               }
+           }
+       }
+   }
+   ``` 
+   
+2. PushConsumer
+   ```
+   public class PushConsumerExample {
+   
+       public static void main(String[] args) throws Throwable {
+           final String localHostIP = "127.0.0.1";
+           final String masterHostAndPort = "localhost:8000";
+           final String topic = "test";
+           final String group = "test-group";
+           final ConsumerConfig consumerConfig = new ConsumerConfig(localHostIP, masterHostAndPort, group);
+           /* consumeModel
+            *  Set the start position of the consumer group. The value can be [-1, 0, 1]. Default value is 0.
+            * -1: Start from 0 for the first time. Otherwise start from last consume position.
+            *  0: Start from the latest position for the first time. Otherwise start from last consume position.
+            *  1: Start from the latest consume position.
+           */
+           consumerConfig.setConsumeModel(0);
+           final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+           final PushMessageConsumer pushConsumer = messageSessionFactory.createPushConsumer(consumerConfig);
+           pushConsumer.subscribe(topic, null, new MessageListener() {
+   
+               @Override
+               public void receiveMessages(List<Message> messages) throws InterruptedException {
+                   for (Message message : messages) {
+                       System.out.println("received message : " + new String(message.getData()));
+                   }
+               }
+   
+               @Override
+               public Executor getExecutor() {
+                   return null;
+               }
+   
+               @Override
+               public void stop() {
+                   //
+               }
+           });
+           pushConsumer.completeSubscribe();
+           CountDownLatch latch = new CountDownLatch(1);
+           latch.await(10, TimeUnit.MINUTES);
+       }
+   }
+   ```
+
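+Both examples above subscribe with `null` as the second argument of `subscribe`, i.e. without filter conditions. The sketch below shows a filtered subscription, assuming this second parameter takes the set of filter values that producers attach via `putSystemHeader` (see the producer example); it reuses the variables from the PullConsumer example above and is illustrative rather than authoritative:
+
+```java
+// Only consume messages whose msgType matches one of these filter values.
+// "test" is the msgType used by the producer attribute example via putSystemHeader.
+TreeSet<String> filters = new TreeSet<>();   // java.util.TreeSet
+filters.add("test");
+messagePullConsumer.subscribe(topic, filters);
+messagePullConsumer.completeSubscribe();
+```
+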
+
diff --git a/docs/en-us/contact.md b/docs/en-us/contact.md
index aae87de..d605c82 100644
--- a/docs/en-us/contact.md
+++ b/docs/en-us/contact.md
@@ -1,28 +1,16 @@
-Apache TubeMQ
-==============================================
-[![Build Status](https://travis-ci.org/apache/incubator-tubemq.svg?branch=master)](https://travis-ci.org/apache/incubator-tubemq)
-
-Apache TubeMQ (incubating) is a trillion-records-scale distributed messaging queue (MQ) system, focuses on data transmission and storage under massive data. Compared to many open source MQ projects, TubeMQ has unique advantages in terms of stability, performance, and low cost.
-
-
-Contact
+Contact us
 -------
 
-
 - Mailing lists
 
-| Name                                                                          | Scope                           |                                                                 |                                                                     |                                                                              |
-|:------------------------------------------------------------------------------|:--------------------------------|:----------------------------------------------------------------|:--------------------------------------------------------------------|:-----------------------------------------------------------------------------|
-| [dev@tubemq.apache.org](mailto:dev@tubemq.apache.org)     | Development-related discussions | [Subscribe](mailto:dev-subscribe@tubemq.apache.org)   | [Unsubscribe](mailto:dev-unsubscribe@tubemq.apache.org)   | [Archives](http://mail-archives.apache.org/mod_mbox/tubemq-dev/)   |
+    | Name                                                   | Scope                           | Subscribe                                            | Unsubscribe                                              | Archives                                                          |
+    |:-------------------------------------------------------|:--------------------------------|:------------------------------------------------------|:-----------------------------------------------------------|:--------------------------------------------------------------------|
+    | [dev@tubemq.apache.org](mailto:dev@tubemq.apache.org)     | Development-related discussions | [Subscribe](mailto:dev-subscribe@tubemq.apache.org)   | [Unsubscribe](mailto:dev-unsubscribe@tubemq.apache.org)   | [Archives](http://mail-archives.apache.org/mod_mbox/tubemq-dev/)   |
 
+- Home page: https://tubemq.apache.org
+- Docs: https://tubemq.apache.org/en-us/docs/tubemq_user_guide.html
+- Issues: https://issues.apache.org/jira/browse/TubeMQ
 
-- Issue management
-  [See JIRA](https://issues.apache.org/jira/browse/TubeMQ)
-
-
-Build and Deploy
--------
-- [See user guide](./tubemq_user_guide.md)
 
 
 License
diff --git a/docs/en-us/contribution.md b/docs/en-us/contribution.md
index 64ec4ed..5f46e0e 100644
--- a/docs/en-us/contribution.md
+++ b/docs/en-us/contribution.md
@@ -40,14 +40,6 @@ To avoid potential frustration during the code review cycle, we encourage you to
 
 We are using "TubeMQ Improvement Proposals" for managing major changes to TubeMQ. The list of all proposals is maintained in the TubeMQ wiki at [this page](https://cwiki.apache.org/confluence/display/TUBEMQ/TubeMQ+Improvement+Proposals).
 
-## Code
-
-TBD
-
-## Review
-
-TBD
-
 ## Commit (committers only)
 
 Once the code has been peer reviewed by a committer, the next step is for the committer to merge it into the Github repo.
@@ -56,3 +48,14 @@ Pull requests should not be merged before the review has approved from at least
 
 For more about merging pull request, please refer to [this page](https://cwiki.apache.org/confluence/display/TUBEMQ/Merging+Pull+Requests)
 
+## Website Contributor List
+We are very pleased to acknowledge some contributors here. They have contributed a great deal to the translation of TubeMQ. Thanks again to the following participants.
+ - deepEvolution
+ - missy
+ - min.yang
+ - goson
+ - stillcoolme
+ - tboy
+ - viviel
+ - yuecai.liu
+
diff --git a/docs/en-us/deployment.md b/docs/en-us/deployment.md
new file mode 100644
index 0000000..87c9215
--- /dev/null
+++ b/docs/en-us/deployment.md
@@ -0,0 +1,161 @@
+## Deployment
+  The TubeMQ server consists of two modules, the Master and the Broker. The Master also includes a web front-end module for external page access (stored in the resources directory). Since in actual deployments the two modules are often deployed on the same machine, TubeMQ packages the contents of these three parts together and delivers them to operations as one package; the client lib does not include the server part and is delivered to users separately.
+   The Master and the Broker use the ini configuration file format; the relevant configuration files are the master.ini and broker.ini files in the tubemq-server-3.8.0/conf/ directory.
+   Their configuration is defined by a set of configuration units. The Master configuration consists of the mandatory [master], [zookeeper], and [bdbStore] units plus the optional [tlsSetting] unit; the Broker configuration consists of the mandatory [broker] and [zookeeper] units plus the optional [tlsSetting] unit. In actual use, you can also combine the contents of the two configuration files into a single ini file.
+   In addition to the back-end system configuration files, the Master also stores the web front-end page module under resources; the velocity.properties file in the root of resources is the configuration file for the Master's web front-end pages.
+
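+The following skeleton illustrates the unit layout described above (a sketch only: the values shown are placeholders, and the individual fields are explained in the tables below):
+
+```ini
+; master.ini -- mandatory units [master], [zookeeper], [bdbStore]; optional unit [tlsSetting]
+[master]
+hostName=YOUR_MASTER_IP
+port=8715
+webResourcePath=/opt/tubemq-server/resources
+
+[zookeeper]
+zkServerAddr=localhost:2181
+
+[bdbStore]
+bdbRepGroupName=tubemqMasterGroup
+bdbNodeName=tubemqMasterGroupNode1
+bdbEnvHome=/stage/metadata
+bdbHelperHost=YOUR_MASTER_IP:9001
+
+; broker.ini -- mandatory units [broker], [zookeeper]; optional unit [tlsSetting]
+[broker]
+brokerId=0
+hostName=YOUR_BROKER_IP
+masterAddressList=YOUR_MASTER_IP:8715
+primaryPath=/stage/msgdata
+
+[zookeeper]
+zkServerAddr=localhost:2181
+```
+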
+### Master
+  In a real production environment, you need to run multiple Master services on different servers for high availability. The
+  availability levels are described in the following table.
+  
+  | HA Level | Master Number | Description |
+  | -------- | ------------- | ----------- |
+  | High     | 3 masters     | After any one master crashes, the cluster metadata is still readable and writable and new producers/consumers can be accepted. |
+  | Medium   | 2 masters     | After one master crashes, the cluster metadata becomes read-only. Existing producers and consumers are not affected. |
+  | Minimum  | 1 master      | After the master crashes, existing producers and consumers are not affected. |
+  
+  Please note that the clocks of the master servers must be synchronized.
+  
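+For example, in a 3-master setup each master.ini shares the same bdbRepGroupName, uses a unique bdbNodeName, and points bdbHelperHost at the node chosen as the primary at startup (a sketch with assumed addresses 192.168.1.1-192.168.1.3; the fields are described in the [bdbStore] unit below):
+
+```ini
+; master.ini fragment on the second of three masters (192.168.1.2)
+; the [master], [zookeeper] and other units are omitted for brevity
+[bdbStore]
+bdbRepGroupName=tubemqMasterGroup
+bdbNodeName=tubemqMasterGroupNode2
+bdbNodePort=9001
+bdbEnvHome=/stage/metadata
+bdbHelperHost=192.168.1.1:9001
+```
+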
+### Master Configuration item details:
+ 
+ ### master.ini file:
+ [master]
 + > The main configuration unit for the Master system; required; the unit name is fixed to "[master]"
+ 
+ | Name                          | Required                          | Type                          | Description                                                  |
+ | ----------------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
 + | hostName                      | yes      | string  | Host address on which the Master provides its external service; required; must be an address configured on an enabled, non-loopback NIC of this machine and cannot be 127.0.0.1 |
 + | port                          | no       | int     | Master listening port, optional, default is 8715             |
 + | webPort                       | no       | int     | Master web console access port, the default value is 8080    |
 + | webResourcePath               | yes      | string  | Absolute path where the Master web resources are deployed; required. If the value is set incorrectly, the web pages will not display properly. |
 + | confModAuthToken              | no       | string  | Authorization token the operator must provide when performing change operations (adding, deleting, or modifying configuration, or changing the state of the Master and managed Brokers) through the Master's web console or API; optional; default "ASDFGHJKL". |
 + | firstBalanceDelayAfterStartMs | no       | long    | Delay between Master startup and the first rebalance; optional; default 30000 milliseconds |
 + | consumerBalancePeriodMs       | no       | long    | Period at which the Master rebalances consumer groups; default 60000 milliseconds. Increase this value when the cluster is large. |
 + | consumerHeartbeatTimeoutMs    | no       | long    | Consumer heartbeat timeout; optional; default 30000 milliseconds. Increase this value when the cluster is large. |
 + | producerHeartbeatTimeoutMs    | no       | long    | Producer heartbeat timeout; optional; default 30000 milliseconds. Increase this value when the cluster is large. |
 + | brokerHeartbeatTimeoutMs      | no       | long    | Broker heartbeat timeout; optional; default 30000 milliseconds. Increase this value when the cluster is large. |
 + | socketRecvBuffer              | no       | long    | Size of the socket receive buffer (SO_RCVBUF) in bytes; a negative value means the OS default is used |
 + | socketSendBuffer              | no       | long    | Size of the socket send buffer (SO_SNDBUF) in bytes; a negative value means the OS default is used |
 + | maxAutoForbiddenCnt           | no       | int     | Maximum number of Brokers the Master is allowed to take offline automatically when Brokers report IO failures; optional; default 5. It is recommended that the value does not exceed 10% of the total number of Brokers in the cluster. |
 + | startOffsetResetCheck         | no       | boolean | Whether to enable checking of the client offset reset function, optional, the default is false |
 + | needBrokerVisitAuth           | no       | boolean | Whether to enable Broker access authentication, the default is false. If true, messages reported by the Broker must carry the correct username and signature information. |
 + | visitName                     | no       | string  | Username for Broker access authentication. The default is an empty string. This value must be set when needBrokerVisitAuth is true and must match the visitName field in broker.ini. |
 + | visitPassword                 | no       | string  | Password for Broker access authentication. The default is an empty string. This value must be set when needBrokerVisitAuth is true and must match the visitPassword field in broker.ini. |
 + | startVisitTokenCheck      | no       | boolean | Whether to enable client visitToken checking, the default is false |
 + | startProduceAuthenticate      | no       | boolean | Whether to enable producer-side user authentication, the default is false |
 + | startProduceAuthorize         | no       | boolean | Whether to enable producer-side production authorization, the default is false |
 + | startConsumeAuthenticate      | no       | boolean | Whether to enable consumer-side user authentication, the default is false |
 + | startConsumeAuthorize         | no       | boolean | Whether to enable consumer-side consumption authorization, the default is false |
 + | maxGroupBrokerConsumeRate     | no       | int     | Maximum ratio of the number of Brokers in the cluster to the number of members in a consumer group; default 50, i.e. in a 50-Broker cluster a consumer group is allowed to start with as few as one client. |
+ 
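 + For example, when Broker access authentication is enabled, the credentials must be configured consistently on both sides (a sketch; the user name and password shown are placeholders):
 + 
 + ```ini
 + ; master.ini
 + [master]
 + needBrokerVisitAuth=true
 + visitName=tubeAdmin
 + visitPassword=tubeAdminPwd
 + 
 + ; broker.ini
 + [broker]
 + visitMasterAuth=true
 + visitName=tubeAdmin
 + visitPassword=tubeAdminPwd
 + ```
 + 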
+ [zookeeper]
 + >Information about the ZooKeeper cluster that stores offsets for the TubeMQ cluster this Master belongs to; required; the unit name is fixed to "[zookeeper]".
+ 
+ | Name                  | Required                          | Type                          | Description                                                  |
+ | --------------------- |  -----------------------------|  ----------------------------- | ------------------------------------------------------------ |
+ | zkServerAddr          | no       | string | Zk server address, optional configuration, defaults to "localhost:2181" |
+ | zkNodeRoot            | no       | string | The root path of the node on zk, optional configuration. The default is "/tube". |
+ | zkSessionTimeoutMs    | no       | long   | Zk heartbeat timeout, in milliseconds, default 30 seconds    |
+ | zkConnectionTimeoutMs | no       | long   | Zk connection timeout, in milliseconds, default 30 seconds   |
+ | zkSyncTimeMs          | no       | long   | Zk data synchronization time, in milliseconds, default 5 seconds |
+ | zkCommitPeriodMs      | no       | long   | The interval at which the Master cache data is flushed to zk, in milliseconds, default 5 seconds. |
+ 
+ [bdbStore]
 + >Configuration of the BDB cluster that this Master belongs to. The Master uses BDB for metadata storage and multi-node hot standby; required; the unit name is fixed to "[bdbStore]".
+ 
+ | Name                    | Required                          | Type                          | Description                                                  |
+ | ----------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
 + | bdbRepGroupName         | yes      | string | BDB cluster name; the value must be identical on the active and standby Master nodes; required field |
 + | bdbNodeName             | yes      | string | Name of this Master node within the BDB cluster. The value must be unique across BDB nodes. Required field. |
 + | bdbNodePort             | no       | int    | BDB node communication port, optional field, default is 9001 |
 + | bdbEnvHome              | yes      | string | BDB data storage path, required field                        |
 + | bdbHelperHost           | yes      | string | Primary node when the BDB cluster starts, required field     |
 + | bdbLocalSync            | no       | int    | Local persistence mode of the BDB data node; value range [1, 2, 3]; default 1. 1: data is written to disk; 2: data is kept only in memory; 3: data is written only to the file system buffer without being flushed. |
 + | bdbReplicaSync          | no       | int    | Persistence mode used when synchronizing data to replica nodes; value range [1, 2, 3]; default 1, with the same meanings as bdbLocalSync. |
 + | bdbReplicaAck           | no       | int    | Acknowledgement policy for BDB data synchronization; value range [1, 2, 3]; default 1. 1: more than half of the nodes must acknowledge; 2: all nodes must acknowledge; 3: no node acknowledgement is required. |
+ | bdbStatusCheckTimeoutMs | no       | long   | BDB status check interval, optional field, in milliseconds, defaults to 10 seconds |
+ 
+ [tlsSetting]
 + >The Master uses TLS to encrypt transport-layer data. When TLS is enabled, this unit provides the related settings; optional; the unit name is fixed to "[tlsSetting]".
+ 
+ | Name                  | Required                          | Type                          | Description                                                  |
+ | --------------------- |  -----------------------------|  ----------------------------- | ------------------------------------------------------------ |
+ | tlsEnable             | no       | boolean | Whether to enable TLS function, optional configuration, default is false |
+ | tlsPort               | no       | int     | Master TLS port number, optional configuration, default is 8716 |
 + | tlsKeyStorePath       | no       | string  | Absolute path of the TLS keyStore file, including the file name. This field is required and cannot be empty when TLS is enabled. |
 + | tlsKeyStorePassword   | no       | string  | Absolute path of the TLS keyStorePassword file, including the file name. This field is required and cannot be empty when TLS is enabled. |
 + | tlsTwoWayAuthEnable   | no       | boolean | Whether to enable TLS mutual authentication, optional configuration, the default is false |
 + | tlsTrustStorePath     | no       | string  | Absolute path of the TLS trustStore file, including the file name. This field is required and cannot be empty when TLS and mutual authentication are enabled. |
 + | tlsTrustStorePassword | no       | string  | Absolute path of the TLS trustStorePassword file, including the file name. This field is required and cannot be empty when TLS and mutual authentication are enabled. |
+ 
+ ### velocity.properties file:
+ 
+ | Name                      | Required                          | Type                          | Description                                                  |
+ | ------------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
 + | file.resource.loader.path | yes      | string | Absolute path of the Master web templates, i.e. the project deployment path plus /resources/templates. This must match the actual deployment; otherwise the Master front-end pages cannot be accessed. |
+
+ 
+### Broker
+  In a real production environment, you need to run at least 2 Broker services on different servers for high availability.
+
+### Broker Configuration item details:
+
+### broker.ini file:
+
+[broker]
+>The main configuration unit for the Broker system; required; the unit name is fixed to "[broker]"
+
+| Name                  | Required                          | Type                          | Description                                                  |
+| --------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| brokerId              | yes      | int     | Unique identifier of the server; required; may be set to 0, in which case the system derives the value from the local IP |
+| hostName              | yes      | string  | Host address on which the Broker provides its external service; required; must be an address configured on an enabled, non-loopback NIC of this machine and cannot be 127.0.0.1 |
+| port                  | no       | int     | Broker listening port, optional, default is 8123             |
+| webPort               | no       | int     | Broker's http management access port, optional, default is 8081 |
+| masterAddressList     | yes      | string  | Master address list of the cluster to which the Broker belongs; required; the format must be ip1:port1,ip2:port2,ip3:port3 |
+| primaryPath           | yes      | string  | Absolute path where the Broker stores messages, mandatory field |
+| maxSegmentSize        | no       | int     | File size of a message data segment, optional field, default 512M, maximum 1G |
+| maxIndexSegmentSize   | no       | int     | File size of a message index segment, optional field, default 18M, roughly 700,000 messages per file |
+| transferSize          | no       | int     | Maximum amount of message content the Broker transmits to a client per request, optional field, default is 512K |
+| consumerRegTimeoutMs  | no       | long    | Consumer heartbeat timeout, optional, in milliseconds, default 30 seconds |
+| socketRecvBuffer      | no       | long    | Size of the socket receive buffer (SO_RCVBUF) in bytes; a negative value means not set and the OS default is used |
+| socketSendBuffer      | no       | long    | Size of the socket send buffer (SO_SNDBUF) in bytes; a negative value means not set and the OS default is used |
+| secondDataPath        | no       | string  | SSD storage path on the Broker machine, optional field. Empty (the default) means the machine has no SSD. |
+| maxSSDTotalFileCnt    | no       | int     | Maximum number of data files allowed on the Broker's SSD, optional field, default 70 |
+| maxSSDTotalFileSizes  | no       | long    | Maximum total size of data files allowed on the Broker's SSD, optional field, default 32G |
+| tcpWriteServiceThread | no       | int     | Number of socket worker threads for the Broker's TCP produce service, optional field, defaults to 2 times the number of CPUs of the machine |
+| tcpReadServiceThread  | no       | int     | Number of socket worker threads for the Broker's TCP consume service, optional field, defaults to 2 times the number of CPUs of the machine |
+| logClearupDurationMs  | no       | long    | Aging cleanup period of message files, in milliseconds; a cleanup runs every 30 minutes by default; the minimum is 30 minutes |
+| logFlushDiskDurMs     | no       | long    | Period of the batch check that persists messages to file, in milliseconds; a full check and flush every 20 seconds by default |
+| visitTokenCheckInValidTimeMs       | no       | long | Delay before visitToken checking starts after the Broker registers, in milliseconds; default 120000; value range [60000, 300000] |
+| visitMasterAuth       | no       | boolean | Whether authentication towards the Master is enabled, the default is false. If true, username and signature information are added to the signaling reported to the Master. |
+| visitName             | no       | string  | Username for accessing the Master. The default is an empty string. This value must be set when visitMasterAuth is true and must match the visitName field in master.ini. |
+| visitPassword         | no       | string  | Password for accessing the Master. The default is an empty string. This value must be set when visitMasterAuth is true and must match the visitPassword field in master.ini. |
+| logFlushMemDurMs      | no       | long    | Period of the batch check that persists messages from memory to file, in milliseconds; a full check and flush every 10 seconds by default |
+
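+For example, to enable the SSD auxiliary storage described above (a sketch; the SSD mount point is a placeholder):
+
+```ini
+; broker.ini fragment
+[broker]
+primaryPath=/stage/msgdata
+; pointing secondDataPath at an SSD mount enables the SSD auxiliary storage
+secondDataPath=/ssd/tubemq/msgdata
+```
+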
+[zookeeper]
+>Information about the ZooKeeper cluster that stores offsets for the TubeMQ cluster this Broker belongs to; required; the unit name is fixed to "[zookeeper]".
+
+
+| Name                  | Required                          | Type                          | Description                                                  |
+| --------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| zkServerAddr          | no       | string | Zk server address, optional configuration, defaults to "localhost:2181" |
+| zkNodeRoot            | no       | string | The root path of the node on zk, optional configuration. The default is "/tube". |
+| zkSessionTimeoutMs    | no       | long   | Zk heartbeat timeout, in milliseconds, default 30 seconds    |
+| zkConnectionTimeoutMs | no       | long   | Zk connection timeout, in milliseconds, default 30 seconds   |
+| zkSyncTimeMs          | no       | long   | Zk data synchronization time, in milliseconds, default 5 seconds |
+| zkCommitPeriodMs      | no       | long   | The interval at which the broker cache data is flushed to zk, in milliseconds, default 5 seconds |
+| zkCommitFailRetries   | no       | int    | Maximum number of retries after the Broker fails to flush cached data to zk |
+
+[tlsSetting]
+>The Broker uses TLS to encrypt transport-layer data. When TLS is enabled, this unit provides the related settings; optional; the unit name is fixed to "[tlsSetting]".
+
+
+| Name                  | Required                          | Type                           | Description                                                  |
+| --------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| tlsEnable             | no       | boolean | Whether to enable TLS function, optional configuration, default is false |
+| tlsPort               | no       | int     | Broker TLS port number, optional configuration, default is 8124 |
+| tlsKeyStorePath       | no       | string  | Absolute path of the TLS keyStore file, including the file name. This field is required and cannot be empty when TLS is enabled. |
+| tlsKeyStorePassword   | no       | string  | Absolute path of the TLS keyStorePassword file, including the file name. This field is required and cannot be empty when TLS is enabled. |
+| tlsTwoWayAuthEnable   | no       | boolean | Whether to enable TLS mutual authentication, optional configuration, the default is false |
+| tlsTrustStorePath     | no       | string  | Absolute path of the TLS trustStore file, including the file name. This field is required and cannot be empty when TLS and mutual authentication are enabled. |
+| tlsTrustStorePassword | no       | string  | Absolute path of the TLS trustStorePassword file, including the file name. This field is required and cannot be empty when TLS and mutual authentication are enabled. |
diff --git a/docs/en-us/producer_example.md b/docs/en-us/producer_example.md
new file mode 100644
index 0000000..4c9902b
--- /dev/null
+++ b/docs/en-us/producer_example.md
@@ -0,0 +1,148 @@
+## Producer Example
+  TubeMQ provides two ways to initialize a session factory, TubeSingleSessionFactory and TubeMultiSessionFactory:
+  - TubeSingleSessionFactory creates only one session during its lifecycle; this is very useful in streaming scenarios.
+  - TubeMultiSessionFactory creates a new session on each call.
+
+1. TubeSingleSessionFactory
+   - Send Message Synchronously
+     ```
+     public final class SyncProducerExample {
+    
+        public static void main(String[] args) throws Exception{
+            final String localHostIP = "127.0.0.1";
+            final String masterHostAndPort = "localhost:8000";
+            final TubeClientConfig clientConfig = new TubeClientConfig(localHostIP, masterHostAndPort);
+            final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+            final MessageProducer messageProducer = messageSessionFactory.createProducer();
+            final String topic = "test";
+            final String body = "This is a test message from single-session-factory!";
+            byte[] bodyData = StringUtils.getBytesUtf8(body);
+            messageProducer.publish(topic);
+            Message message = new Message(topic, bodyData);
+            MessageSentResult result = messageProducer.sendMessage(message);
+            if (result.isSuccess()) {
+                System.out.println("sync send message : " + message);
+            }
+            messageProducer.shutdown();
+        }
+     }
+     ```
+     
+   - Send Message Asynchronously
+     ```
+     public final class AsyncProducerExample {
+     
+         public static void main(String[] args) throws Throwable {
+             final String localHostIP = "127.0.0.1";
+             final String masterHostAndPort = "localhost:8000";
+             final TubeClientConfig clientConfig = new TubeClientConfig(localHostIP, masterHostAndPort);
+             final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+             final MessageProducer messageProducer = messageSessionFactory.createProducer();
+             final String topic = "test";
+             final String body = "async send message from single-session-factory!";
+             byte[] bodyData = StringUtils.getBytesUtf8(body);
+             messageProducer.publish(topic);
+             Message message = new Message(topic, bodyData);
+             messageProducer.sendMessage(message, new MessageSentCallback(){
+                 @Override
+                 public void onMessageSent(MessageSentResult result) {
+                     if (result.isSuccess()) {
+                         System.out.println("async send message : " + message);
+                     } else {
+                         System.out.println("async send message failed : " + result.getErrMsg());
+                     }
+                 }
+                 @Override
+                 public void onException(Throwable e) {
+                     System.out.println("async send message error : " + e);
+                 }
+             });
+             messageProducer.shutdown();
+         }
+     }
+     ```
+     
+   - Send Message With Attributes
+     ```
+     public final class ProducerWithAttributeExample {
+     
+         public static void main(String[] args) throws Throwable {
+             final String localHostIP = "127.0.0.1";
+             final String masterHostAndPort = "localhost:8000";
+             final TubeClientConfig clientConfig = new TubeClientConfig(localHostIP, masterHostAndPort);
+             final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+             final MessageProducer messageProducer = messageSessionFactory.createProducer();
+             final String topic = "test";
+             final String body = "send message with attribute from single-session-factory!";
+             byte[] bodyData = StringUtils.getBytesUtf8(body);
+             messageProducer.publish(topic);
+             Message message = new Message(topic, bodyData);
+             //set attribute
+             message.setAttrKeyVal("test_key", "test value");
+             // msgType is used for consumer-side filtering, and msgTime (accurate to the minute) is used as the time dimension for send/receive statistics
+             SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMddHHmm");
+             message.putSystemHeader("test", sdf.format(new Date()));
+             messageProducer.sendMessage(message);
+             messageProducer.shutdown();
+         }
+     }
+     ```  
+     
+- TubeMultiSessionFactory
+
+    ```
+    public class MultiSessionProducerExample {
+        
+        public static void main(String[] args) throws Exception{
+            final int SESSION_FACTORY_NUM = 10;
+            final String localHostIP = "127.0.0.1";
+            final String masterHostAndPort = "localhost:8000";
+            final TubeClientConfig clientConfig = new TubeClientConfig(localHostIP, masterHostAndPort);
+            final List<MessageSessionFactory> sessionFactoryList = new ArrayList<>(SESSION_FACTORY_NUM);
+            final ExecutorService sendExecutorService = Executors.newFixedThreadPool(SESSION_FACTORY_NUM);
+            final CountDownLatch latch = new CountDownLatch(SESSION_FACTORY_NUM);
+            for (int i = 0; i < SESSION_FACTORY_NUM; i++) {
+                TubeMultiSessionFactory tubeMultiSessionFactory = new TubeMultiSessionFactory(clientConfig);
+                sessionFactoryList.add(tubeMultiSessionFactory);
+                MessageProducer producer = tubeMultiSessionFactory.createProducer();
+                Sender sender = new Sender(producer, latch);
+                sendExecutorService.submit(sender);
+            }
+            latch.await();
+            sendExecutorService.shutdownNow();
+            for(MessageSessionFactory sessionFactory : sessionFactoryList){
+                sessionFactory.shutdown();
+            }
+        }
+    
+        private static class Sender implements Runnable {
+            
+            private MessageProducer producer;
+            
+            private CountDownLatch latch;
+    
+            public Sender(MessageProducer producer, CountDownLatch latch) {
+                this.producer = producer;
+                this.latch = latch;
+            }
+    
+            @Override
+            public void run() {
+                final String topic = "test";
+                try {
+                    producer.publish(topic);
+                    final byte[] bodyData = StringUtils.getBytesUtf8("This is a test message from multi-session factory");
+                    Message message = new Message(topic, bodyData);
+                    producer.sendMessage(message);
+                    producer.shutdown();
+                } catch (Throwable ex) {
+                    System.out.println("send message error : " + ex);
+                } finally {
+                    latch.countDown();
+                }
+            }
+        }
+    }
+    ```
+
+
diff --git a/docs/en-us/quick_start.md b/docs/en-us/quick_start.md
new file mode 100644
index 0000000..18e6acc
--- /dev/null
+++ b/docs/en-us/quick_start.md
@@ -0,0 +1,230 @@
+## Prerequisites
+
+- Java 1.7 or 1.8 (Java 9 and above haven't been verified yet)
+- Maven
+- [protoc 2.5.0](https://github.com/protocolbuffers/protobuf/releases/tag/v2.5.0)
+
+## Build
+
+### Build distribution tarball
+Go to the project root, and run
+```bash
+mvn clean package -DskipTests
+```
+If you want to build each module of the project separately, you need to run `mvn install` in the project root first.
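+
+For example (a sketch; the module name `tubemq-client` is an assumption and may differ in your checkout):
+
+```bash
+# install all modules into the local Maven repository once
+mvn clean install -DskipTests
+
+# then build a single module from its own directory
+cd tubemq-client
+mvn clean package -DskipTests
+```
+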
+### Build source code
+If you want to build and debug source code in IDE, go to the project root, and run
+
+```bash
+mvn compile
+```
+
+This command generates the Java source files from the `protoc` files; the generated files are located in `target/generated-sources`.
+
+When this command has finished, you can import the project into your IDE as a Maven project.
+
+## Deploy
+After the build, please go to `tubemq-server/target`. You can find the
+**tubemq-server-x.x.x-bin.tar.gz** file. It is the server deployment package, which includes
+scripts, configuration files, dependency jars and web GUI code.
+
+For a first-time deployment, we just need to extract the package file. For example, we put these
+files into `/opt/tubemq-server`; here's the folder structure.
+```
+/opt/tubemq-server
+├── bin
+├── conf
+├── lib
+├── logs
+└── resources
+```
+## Configure
+There are two roles in the cluster: **Master** and **Broker**. The Master and Broker
+can be deployed on the same server or on different servers. In this example, we set up our cluster
+as follows, with all services running on the same node. ZooKeeper should also be set up in your environment.
+
+| Role | TCP Port | TLS Port | Web Port | Comment |
+| ---- | -------- | -------- | -------- | ------- |
+| Master | 8099 | 8199 | 8080 | Meta data is stored at /stage/metadata |
+| Broker | 8123 | 8124 | 8081 | Message is stored at /stage/msgdata |
+| Zookeeper | 2181 | | | Offset is stored at /tubemq |
+
+You can follow the example below to update the corresponding config files. Please note that **YOUR_SERVER_IP** should
+be replaced with your server IP.
+
+##### conf/master.ini
+```ini
+[master]
+hostName=YOUR_SERVER_IP
+port=8000
+webPort=8080
+consumerBalancePeriodMs=30000
+firstBalanceDelayAfterStartMs=60000
+consumerHeartbeatTimeoutMs=30000
+producerHeartbeatTimeoutMs=45000
+brokerHeartbeatTimeoutMs=25000
+confModAuthToken=abc
+webResourcePath=/opt/tubemq-server/resources
+
+[zookeeper]
+zkNodeRoot=/tubemq
+zkServerAddr=localhost:2181
+zkSessionTimeoutMs=30000
+zkConnectionTimeoutMs=30000
+zkSyncTimeMs=5000
+zkCommitPeriodMs=5000
+
+[bdbStore]
+bdbRepGroupName=tubemqMasterGroup
+bdbNodeName=tubemqMasterGroupNode1
+bdbNodePort=9001
+bdbEnvHome=/stage/metadata
+bdbHelperHost=9.134.8.170:9001
+bdbLocalSync= 1
+bdbReplicaSync= 3
+bdbReplicaAck= 1
+bdbStatusCheckTimeoutMs=10000
+```
+
+##### resources/velocity.properties
+```properties
+resource.loader=file
+file.resource.loader.description=Velocity File Resource Loader
+file.resource.loader.class=org.apache.velocity.runtime.resource.loader.FileResourceLoader
+file.resource.loader.path=/opt/tubemq-server/resources/templates
+file.resource.loader.cache=false
+file.resource.loader.modificationCheckInterval=2
+string.resource.loader.description=Velocity String Resource Loader
+string.resource.loader.class=org.apache.velocity.runtime.resource.loader.StringResourceLoader
+input.encoding=UTF-8
+output.encoding=UTF-8
+```
+
+##### conf/broker.ini
+```ini
+[broker]
+brokerId=0
+hostName=YOUR_SERVER_IP
+port=8123
+webPort=8081
+masterAddressList=YOUR_SERVER_IP:8000
+primaryPath=/stage/msgdata
+maxSegmentSize=1073741824
+maxIndexSegmentSize=22020096
+transferSize= 524288
+loadMessageStoresInParallel=true
+consumerRegTimeoutMs=35000
+
+[zookeeper]
+zkNodeRoot=/tubemq
+zkServerAddr=localhost:2181
+zkSessionTimeoutMs=30000
+zkConnectionTimeoutMs=30000
+zkSyncTimeMs=5000
+zkCommitPeriodMs=5000
+zkCommitFailRetries=10
+
+```
+
+You also need to update your `/etc/hosts` file on the master servers. Add the other master
+servers' IPs in this way, assuming the IP is `192.168.1.2`:
+##### /etc/hosts
+```
+192.168.1.2 192-168-1-2
+```
+
+## Start Master
+After updating the config files, go to the `bin` folder and run this command to start
+the master service.
+```bash
+./master.sh start
+```
+You should be able to access `http://your-master-ip:8080/config/topic_list.htm` to see the
+web GUI now.
+
+![TubeMQ Console GUI](img/tubemq-console-gui.png)
+
+## Start Broker
+Before we start a broker service, we need to configure it on the master web GUI first.
+
+Go to the `Broker List` page, click `Add Single Broker`, and input the new broker 
+information.
+
+![Add Broker 1](img/tubemq-add-broker-1.png)
+
+In this example, we only need to input broker IP and authToken:
+1. broker IP: the broker server IP
+2. authToken: A token pre-configured in the `conf/master.ini` file. Please check the
+`confModAuthToken` field in your `master.ini` file.
+
+Click the online link to activate the newly added broker.
+
+![Add Broker 2](img/tubemq-add-broker-2.png)
+
+Go to the broker server and, under the `bin` folder, run this command to start the broker service:
+```bash
+./broker.sh start
+```
+
+Refresh the GUI broker list page; you can see that the broker is now registered.
+
+After the sub-state of the broker changes to `idle`, we can add topics to that broker.
+
+![Add Broker 3](img/tubemq-add-broker-3.png)
+
+## Add Topic
+We can add or manage the cluster topics on the web GUI. To add a new topic, go to the
+topic list page and click the add new topic button.
+
+![Add Topic 1](img/tubemq-add-topic-1.png)
+
+Then select the brokers to which you want to deploy the topic.
+
+![Add Topic 5](img/tubemq-add-topic-5.png)
+
+We can see that the publish and subscribe state of the newly added topic is still grey. We need
+to go to the broker list page to reload the broker configuration.
+
+![Add Topic 6](img/tubemq-add-topic-6.png)
+
+![Add Topic 2](img/tubemq-add-topic-2.png)
+
+When the broker sub-state has changed to idle, go to the topic list page. We can see
+that the topic's publish/subscribe state is now active.
+
+![Add Topic 3](img/tubemq-add-topic-3.png)
+
+![Add Topic 4](img/tubemq-add-topic-4.png)
+
+Now we can use the topic to send messages.
+
+## Demo
+Now we can run the example to test our cluster. First let's run the produce data demo. Please don't
+forget to replace `YOUR_SERVER_IP` with your server IP.
+```bash
+java -Dlog4j.configuration=file:/opt/tubemq-server/conf/tools.log4j.properties  -Djava.net.preferIPv4Stack=true -cp  /opt/tubemq-server/lib/*:/opt/tubemq-server/conf/*: com.tencent.tubemq.example.MessageProducerExample YOUR_SERVER_IP YOUR_SERVER_IP:8000 demo 10000000
+```
+From the log, we can see the message is sent out.
+```bash
+[2019-09-11 16:09:08,287] INFO Send demo 1000 message, keyCount is 268 (com.tencent.tubemq.example.MessageProducerExample)
+[2019-09-11 16:09:08,505] INFO Send demo 2000 message, keyCount is 501 (com.tencent.tubemq.example.MessageProducerExample)
+[2019-09-11 16:09:08,958] INFO Send demo 3000 message, keyCount is 755 (com.tencent.tubemq.example.MessageProducerExample)
+[2019-09-11 16:09:09,085] INFO Send demo 4000 message, keyCount is 1001 (com.tencent.tubemq.example.MessageProducerExample)
+```
+
+Then we run the consume data demo. Also replace the server IP:
+```bash
+java -Xmx512m -Dlog4j.configuration=file:/opt/tubemq-server/conf/tools.log4j.properties -Djava.net.preferIPv4Stack=true -cp /opt/tubemq-server/lib/*:/opt/tubemq-server/conf/*: com.tencent.tubemq.example.MessageConsumerExample YOUR_SERVER_IP YOUR_SERVER_IP:8000 demo demoGroup 3 1 1
+```
+From the log, we can see the message received by the consumer.
+
+```bash
+[2019-09-11 16:09:29,720] INFO Receive messages:2500 (com.tencent.tubemq.example.MsgRecvStats)
+[2019-09-11 16:09:30,059] INFO Receive messages:5000 (com.tencent.tubemq.example.MsgRecvStats)
+[2019-09-11 16:09:34,493] INFO Receive messages:10000 (com.tencent.tubemq.example.MsgRecvStats)
+[2019-09-11 16:09:34,783] INFO Receive messages:12500 (com.tencent.tubemq.example.MsgRecvStats)
+```
+
+---
+
+
diff --git a/docs/zh-cn/architecture.md b/docs/zh-cn/architecture.md
new file mode 100644
index 0000000..7f0e3de
--- /dev/null
+++ b/docs/zh-cn/architecture.md
@@ -0,0 +1,26 @@
+## TubeMQ Architecture: ##
+After years of evolution, the TubeMQ cluster is divided into the following 5 parts: 
+![](img/sys_structure.png)
+
+- **Portal:** The Portal part is responsible for external interaction and maintenance operations, and includes the API and the Web. The API connects to management systems outside the cluster; the Web is a page-level encapsulation of daily operation and maintenance functions built on top of the API;
+
+- **Master:** The Control part of the cluster, composed of one or more Master nodes. Master HA performs heartbeat keep-alive and real-time hot-standby switching between Master nodes (this is why you need to fill in the addresses of all Master nodes of the cluster when using the TubeMQ Lib). The active Master is responsible for managing the state of the entire cluster, resource scheduling, permission checking, metadata queries, etc.;
+
+- **Broker:** The Store part, responsible for data storage and composed of independent Broker nodes. Each Broker node manages the set of Topics on that node, including adding, deleting, modifying, and querying Topics. It is also responsible for message storage, consumption, aging, partition expansion, and consumption offset records on those topics, and the external capabilities of the cluster, including the number of topics, throughput, and capacity, are complete [...]
+
+- **Client:** The Client part, responsible for data production and consumption and provided in the form of a Lib. The consumer is the most commonly used part. Compared with earlier versions, the consumer now supports both Push and Pull data-fetch modes, and consumption supports both ordered and filtered consumption. For the Pull mode, the service supports resetting a precise offset through the client so that the business can achieve exactly-once consumption. At the same time, the cons [...]
+
+- **Zookeeper:** The ZooKeeper part, responsible for offset storage. Its role has been reduced to persistent storage of offsets only; the module is kept for now with the upcoming multi-node replication feature in mind;
+
+## Broker File Storage Scheme Improvement: ##
+Systems that use disks as the medium for data persistence face a variety of performance issues caused by the disks, and TubeMQ is no exception. Performance improvement largely comes down to how message data is read, written, and stored. In this respect, TubeMQ has made the following improvements:
+
+1. **File structure and organization adjustment:** TubeMQ's disk storage scheme is similar to Kafka's, but not identical, as shown in the figure below. A storage instance consists of an index file and a data file; each topic can be allocated one or more storage instances, and each topic separately maintains its storage-instance management settings, including aging cycle, number of partitions, whether it is readable or writable, etc.
+![](img/store_file.png)
+
+2. **Memory block cache:** On top of file storage, we add a memory cache block for each storage instance, i.e. a piece of memory in front of the original disk write path to isolate the slowness of the hard disk. Data is first flushed into memory, and the memory control block then flushes the data to disk files in bulk.
+![](img/store_mem.png)
+
+3. **SSD Auxiliary Storage:** For servers that have SSD hardware in addition to disk storage, we add a layer of SSD auxiliary storage. This differs from the common practice in which systems write data to SSD first and then transfer it from SSD to disk: according to our analysis, for normal sequential disk access the performance is sufficient to meet the needs of data persistence. When disk IO reaches 100%, the performance degradation is mainly due to lagged consumption, [...]
+![](img/store_ssd.png)
+
diff --git a/docs/zh-cn/consumer_example.md b/docs/zh-cn/consumer_example.md
new file mode 100644
index 0000000..62236a1
--- /dev/null
+++ b/docs/zh-cn/consumer_example.md
@@ -0,0 +1,96 @@
+## Consumer Example
+  TubeMQ provides two ways to consume messages, PullConsumer and PushConsumer:
+
+1. PullConsumer 
+   ```
+   public class PullConsumerExample {
+   
+       public static void main(String[] args) throws Throwable {
+           final String localHostIP = "127.0.0.1";
+           final String masterHostAndPort = "localhost:8000";
+           final String topic = "test";
+           final String group = "test-group";
+           final ConsumerConfig consumerConfig = new ConsumerConfig(localHostIP, masterHostAndPort, group);
+           /* consumeModel
+            *  Set the start position of the consumer group. The value can be [-1, 0, 1]. Default value is 0.
+            * -1: Start from 0 for the first time. Otherwise start from last consume position.
+            *  0: Start from the latest position for the first time. Otherwise start from last consume position.
+            *  1: Start from the latest consume position.
+           */
+           consumerConfig.setConsumeModel(0);
+           final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+           final PullMessageConsumer messagePullConsumer = messageSessionFactory.createPullConsumer(consumerConfig);
+           messagePullConsumer.subscribe(topic, null);
+           messagePullConsumer.completeSubscribe();
+           // wait for client to join the exact consumer queue that consumer group allocated
+           while (!messagePullConsumer.isPartitionsReady(1000)) {
+               ThreadUtils.sleep(1000);
+           }
+           while(true){
+               ConsumerResult result = messagePullConsumer.getMessage();
+               if (result.isSuccess()) {
+                   List<Message> messageList = result.getMessageList();
+                   for (Message message : messageList) {
+                       System.out.println("received message : " + message);
+                   }
+                   messagePullConsumer.confirmConsume(result.getConfirmContext(), true);
+               } else{
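+                   // back off briefly on error code 400, ignore 404, and print any other error code and message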
+                   if (result.getErrCode() == 400) {
+                       ThreadUtils.sleep(100);
+                   } else {
+                       if (result.getErrCode() != 404) {
+                           System.out.println(String.format("Receive messages errorCode is %d, Error message is %s", result.getErrCode(), result.getErrMsg()));
+                       }
+                   }
+               }
+           }
+       }
+   }
+   ``` 
+   
+2. PushConsumer
+   ```
+   public class PushConsumerExample {
+   
+       public static void main(String[] args) throws Throwable {
+           final String localHostIP = "127.0.0.1";
+           final String masterHostAndPort = "localhost:8000";
+           final String topic = "test";
+           final String group = "test-group";
+           final ConsumerConfig consumerConfig = new ConsumerConfig(localHostIP, masterHostAndPort, group);
+           /* consumeModel
+            *  Set the start position of the consumer group. The value can be [-1, 0, 1]. Default value is 0.
+            * -1: Start from 0 for the first time. Otherwise start from last consume position.
+            *  0: Start from the latest position for the first time. Otherwise start from last consume position.
+            *  1: Start from the latest consume position.
+           */
+           consumerConfig.setConsumeModel(0);
+           final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(consumerConfig);
+           final PushMessageConsumer pushConsumer = messageSessionFactory.createPushConsumer(consumerConfig);
+           pushConsumer.subscribe(topic, null, new MessageListener() {
+   
+               @Override
+               public void receiveMessages(List<Message> messages) throws InterruptedException {
+                   for (Message message : messages) {
+                       System.out.println("received message : " + new String(message.getData()));
+                   }
+               }
+   
+               @Override
+               public Executor getExecutor() {
+                   return null;
+               }
+   
+               @Override
+               public void stop() {
+                   //
+               }
+           });
+           pushConsumer.completeSubscribe();
+           CountDownLatch latch = new CountDownLatch(1);
+           latch.await(10, TimeUnit.MINUTES);
+       }
+   }
+   ```
+
+
diff --git a/docs/zh-cn/contact.md b/docs/zh-cn/contact.md
index 8bca60c..d605c82 100644
--- a/docs/zh-cn/contact.md
+++ b/docs/zh-cn/contact.md
@@ -1,28 +1,16 @@
-Apache TubeMQ
-==============================================
-[![Build Status](https://travis-ci.org/apache/incubator-tubemq.svg?branch=master)](https://travis-ci.org/apache/incubator-tubemq)
-
-Apache TubeMQ (incubating) is a trillion-records-scale distributed messaging queue (MQ) system, focuses on data transmission and storage under massive data. Compared to many open source MQ projects, TubeMQ has unique advantages in terms of stability, performance, and low cost.
-
-
-Contact
+Contact us
 -------
 
-
 - Mailing lists
 
-| Name                                                                          | Scope                           |                                                                 |                                                                     |                                                                              |
-|:------------------------------------------------------------------------------|:--------------------------------|:----------------------------------------------------------------|:--------------------------------------------------------------------|:-----------------------------------------------------------------------------|
-| [dev@tubemq.apache.org](mailto:dev@tubemq.apache.org)     | Development-related discussions | [Subscribe](mailto:dev-subscribe@tubemq.apache.org)   | [Unsubscribe](mailto:dev-unsubscribe@tubemq.apache.org)   | [Archives](http://mail-archives.apache.org/mod_mbox/tubemq-dev/)   |
+    | Name                                                                          | Scope                           |                                                                 |                                                                     |                                                                              |
+    |:------------------------------------------------------------------------------|:--------------------------------|:----------------------------------------------------------------|:--------------------------------------------------------------------|:-----------------------------------------------------------------------------|
+    | [dev@tubemq.apache.org](mailto:dev@tubemq.apache.org)     | Development-related discussions | [Subscribe](mailto:dev-subscribe@tubemq.apache.org)   | [Unsubscribe](mailto:dev-unsubscribe@tubemq.apache.org)   | [Archives](http://mail-archives.apache.org/mod_mbox/tubemq-dev/)   |
 
+- Home page: https://tubemq.apache.org
+- Docs: https://tubemq.apache.org/en-us/docs/tubemq_user_guide.html
+- Issues: https://issues.apache.org/jira/browse/TubeMQ
 
-- Issue management
-  [See JIRA](https://issues.apache.org/jira/browse/TubeMQ)
-
-
-Build and Deploy
--------
-- [See user guide](./docs/tubemq_user_guide.md)
 
 
 License
diff --git a/docs/zh-cn/contribution.md b/docs/zh-cn/contribution.md
index 64ec4ed..5f46e0e 100644
--- a/docs/zh-cn/contribution.md
+++ b/docs/zh-cn/contribution.md
@@ -40,14 +40,6 @@ To avoid potential frustration during the code review cycle, we encourage you to
 
 We are using "TubeMQ Improvement Proposals" for managing major changes to TubeMQ. The list of all proposals is maintained in the TubeMQ wiki at [this page](https://cwiki.apache.org/confluence/display/TUBEMQ/TubeMQ+Improvement+Proposals).
 
-## Code
-
-TBD
-
-## Review
-
-TBD
-
 ## Commit (committers only)
 
 Once the code has been peer reviewed by a committer, the next step is for the committer to merge it into the Github repo.
@@ -56,3 +48,14 @@ Pull requests should not be merged before the review has approved from at least
 
 For more about merging pull request, please refer to [this page](https://cwiki.apache.org/confluence/display/TUBEMQ/Merging+Pull+Requests)
 
+## Website Contributor List
+We are very pleased to acknowledge some contributors here. They have contributed a lot to the translation of TubeMQ documentation. Thanks again to the following participants.
+ - deepEvolution
+ - missy
+ - min.yang
+ - goson
+ - stillcoolme
+ - tboy
+ - viviel
+ - yuecai.liu
+
diff --git a/docs/zh-cn/deployment.md b/docs/zh-cn/deployment.md
new file mode 100644
index 0000000..87c9215
--- /dev/null
+++ b/docs/zh-cn/deployment.md
@@ -0,0 +1,161 @@
+## Deployment
+  The TubeMQ server consists of two modules, the Master and the Broker. The Master also includes a Web front-end module for external page access (this part is stored under resources). Since in actual deployments the two modules are often deployed on the same machine, TubeMQ packages the contents of these three parts together and delivers them to operations; the client does not include the server-side lib packages and is delivered to users separately.
+   Master and Broker use the ini configuration file format; the relevant configuration files are the master.ini and broker.ini files in the tubemq-server-3.8.0/conf/ directory.
+   Their configuration is defined by a set of configuration units. The Master configuration consists of four units: the mandatory [master], [zookeeper] and [bdbStore] units plus the optional [tlsSetting] unit. The Broker configuration consists of three units: the mandatory [broker] and [zookeeper] units plus the optional [tlsSetting] unit. In actual use, you can also combine the contents of the two configuration files into one ini file (an example sketch follows this introduction).
+   In addition to the back-end system configuration files, the Master also stores the Web front-end page module under resources; the velocity.properties file in the root directory of resources is the configuration file of the Master's Web front-end pages.
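+   For example, a node that runs both the Master and the Broker could use a single combined ini like the sketch below (placeholder values; every item is explained in the configuration tables that follow):
+
+```ini
+[master]
+hostName=YOUR_SERVER_IP
+port=8715
+webPort=8080
+webResourcePath=/opt/tubemq-server/resources
+
+[bdbStore]
+bdbRepGroupName=tubemqMasterGroup
+bdbNodeName=tubemqMasterGroupNode1
+bdbEnvHome=/stage/metadata
+bdbHelperHost=YOUR_SERVER_IP:9001
+
+[broker]
+brokerId=0
+hostName=YOUR_SERVER_IP
+port=8123
+webPort=8081
+masterAddressList=YOUR_SERVER_IP:8715
+primaryPath=/stage/msgdata
+
+[zookeeper]
+zkServerAddr=localhost:2181
+zkNodeRoot=/tube
+```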
+
+### Master
+  In a real production environment, you need to run multiple master services on different servers for high availability. Here's
+  an introduction to the availability levels.
+  
+  | HA Level | Master Number | Description |
+  | -------- | ------------- | ----------- |
+  | High     | 3 masters     | After any one master crashes, the cluster metadata is still readable and writable, and new producers/consumers can be accepted. |
+  | Medium   | 2 masters     | After one master crashes, the cluster metadata is read-only. There is no effect on existing producers and consumers. |
+  | Minimum  | 1 master      | After the master crashes, there is no effect on existing producers and consumers. |
+  
+  Please notice that the master servers should be clock synchronized.
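+
+  For example, for the High level (3 masters), all three master.ini files could share the same bdbRepGroupName while each uses its own bdbNodeName (a minimal sketch assuming the hosts 192.168.1.1, 192.168.1.2 and 192.168.1.3; the fields are described in the [bdbStore] table below):
+
+```ini
+# [bdbStore] unit of the master on 192.168.1.1
+# bdbRepGroupName is identical on all three masters; bdbNodeName must be unique
+# (for example tubemqMasterGroupNode2 on 192.168.1.2 and tubemqMasterGroupNode3 on 192.168.1.3)
+[bdbStore]
+bdbRepGroupName=tubemqMasterGroup
+bdbNodeName=tubemqMasterGroupNode1
+bdbNodePort=9001
+bdbEnvHome=/stage/metadata
+bdbHelperHost=192.168.1.1:9001
+```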
+  
+### Master Configuration item details:
+ 
+ ### master.ini file:
+ [master]
+ > The main configuration unit for running the Master system, required unit, the value is fixed to "[master]"
+ 
+ | Name                          | Required                          | Type                          | Description                                                  |
+ | ----------------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+ | hostName                      | yes      | string  | The host address on which the Master provides external services, required; it must be an address configured on an enabled, non-loopback NIC and cannot be 127.0.0.1 |
+ | port                          | no       | int     | Master listening port, optional, default is 8715             |
+ | webPort                       | no       | int     | Master web console access port, the default value is 8080    |
+ | webResourcePath               | yes      | string  | Absolute path where the Master Web resources are deployed, required. If the value is set incorrectly, the web pages will not display properly. |
+ | confModAuthToken              | no       | string  | The authorization Token provided by the operator when the change operation (including adding, deleting, changing configuration, and changing the master and managed Broker status) is performed by the Master's Web or API. The value is optional. The default is "ASDFGHJKL". |
+ | firstBalanceDelayAfterStartMs | no       | long    | Interval from Master startup to the first Rebalance, optional, default 30000 milliseconds |
+ | consumerBalancePeriodMs       | no       | long    | Period at which the Master rebalances consumer groups, optional, default 60000 milliseconds. Increase the value when the cluster size is large. |
+ | consumerHeartbeatTimeoutMs    | no       | long    | Consumer heartbeat timeout period, optional, default 30000 milliseconds, when the cluster size is large, please increase the value |
+ | producerHeartbeatTimeoutMs    | no       | long    | Producer heartbeat timeout period, optional, default 30000 milliseconds, when the cluster size is large, please increase the value |
+ | brokerHeartbeatTimeoutMs      | no       | long    | Broker heartbeat timeout period, optional, default 30000 milliseconds, when the cluster size is large, please increase the value |
+ | socketRecvBuffer              | no       | long    | Size of the socket receive buffer SO_RCVBUF, in bytes; a negative value means the system default is used |
+ | socketSendBuffer              | no       | long    | Size of the socket send buffer SO_SNDBUF, in bytes; a negative value means the system default is used |
+ | maxAutoForbiddenCnt           | no       | int     | Maximum number of Brokers the Master is allowed to take offline automatically when they encounter IO failures, optional, default 5. It is recommended that the value does not exceed 10% of the total number of brokers in the cluster. |
+ | startOffsetResetCheck         | no       | boolean | Whether to enable the check function of the client Offset reset function, optional, the default is false |
+ | needBrokerVisitAuth           | no       | boolean | Whether to enable Broker access authentication, the default is false. If true, the message reported by the broker must carry the correct username and signature information. |
+ | visitName                     | no       | string  | The username of the Broker access authentication. The default is an empty string. This value must exist when needBrokerVisitAuth is true. This value must be the same as the value of the visitName field in broker.ini. |
+ | visitPassword                 | no       | string  | The password for the Broker access authentication. The default is an empty string. This value must exist when needBrokerVisitAuth is true. This value must be the same as the value of the visitPassword field in broker.ini. |
+ | startVisitTokenCheck      | no       | boolean | Whether to enable client visitToken check, the default is false |
+ | startProduceAuthenticate      | no       | boolean | Whether to enable producer user authentication, the default is false |
+ | startProduceAuthorize         | no       | boolean | Whether to enable producer production authorization, the default is false |
+ | startConsumeAuthenticate      | no       | boolean | Whether to enable consumer user authentication, the default is false |
+ | startConsumeAuthorize         | no       | boolean | Whether to enable consumer consumption authorization, the default is false |
+ | maxGroupBrokerConsumeRate     | no       | int     | Maximum allowed ratio of the number of Brokers in the cluster to the number of clients in a consumer group, optional, default 50. In a 50-broker cluster this allows a consumer group to start with as few as one client; a 100-broker cluster would then require at least two clients. |
+ 
+ [zookeeper]
+ >Information about the ZooKeeper cluster in which the TubeMQ cluster corresponding to this Master stores Offsets. Required unit, the value is fixed to "[zookeeper]".
+ 
+ | Name                  | Required                          | Type                          | Description                                                  |
+ | --------------------- |  -----------------------------|  ----------------------------- | ------------------------------------------------------------ |
+ | zkServerAddr          | no       | string | Zk server address, optional configuration, defaults to "localhost:2181" |
+ | zkNodeRoot            | no       | string | The root path of the node on zk, optional configuration. The default is "/tube". |
+ | zkSessionTimeoutMs    | no       | long   | Zk heartbeat timeout, in milliseconds, default 30 seconds    |
+ | zkConnectionTimeoutMs | no       | long   | Zk connection timeout, in milliseconds, default 30 seconds   |
+ | zkSyncTimeMs          | no       | long   | Zk data synchronization time, in milliseconds, default 5 seconds |
+ | zkCommitPeriodMs      | no       | long   | The interval at which the Master cache data is flushed to zk, in milliseconds, default 5 seconds. |
+ 
+ [bdbStore]
+ >Master configuration of the BDB cluster to which the master belongs. The master uses BDB for metadata storage and multi-node hot standby. The required unit has a fixed value of "[bdbStore]".
+ 
+ | Name                    | Required                          | Type                          | Description                                                  |
+ | ----------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+ | bdbRepGroupName         | yes      | string | BDB cluster name, the primary and backup master node values must be the same, required field |
+ | bdbNodeName             | yes      | string | The name of the node of the master in the BDB cluster. The value of each BDB node must not be repeated. Required field. |
+ | bdbNodePort             | no       | int    | BDB node communication port, optional field, default is 9001 |
+ | bdbEnvHome              | yes      | string | BDB data storage path, required field                        |
+ | bdbHelperHost           | yes      | string | Primary node when the BDB cluster starts, required field     |
+ | bdbLocalSync            | no       | int    | Local persistence mode of the BDB data on this node, the value range of this field is [1, 2, 3], default 1: 1 means data is flushed to disk, 2 means data is kept in memory only, 3 means data is written to the file system buffer but not flushed to disk |
+ | bdbReplicaSync          | no       | int    | Persistence mode of the BDB data replicated from other nodes, the value range of this field is [1, 2, 3], default 1: 1 means data is flushed to disk, 2 means data is kept in memory only, 3 means data is written to the file system buffer but not flushed to disk |
+ | bdbReplicaAck           | no       | int    | Acknowledgement policy of BDB data replication, the value range of this field is [1, 2, 3], default 1: 1 means acknowledgements from more than half of the nodes are required, 2 means all nodes must acknowledge, 3 means no node acknowledgement is required |
+ | bdbStatusCheckTimeoutMs | no       | long   | BDB status check interval, optional field, in milliseconds, defaults to 10 seconds |
+ 
+ [tlsSetting]
+ >The Master uses TLS to encrypt the transport layer data. When TLS is enabled, the configuration unit provides related settings. The optional unit has a fixed value of "[tlsSetting]".
+ 
+ | Name                  | Required                          | Type                          | Description                                                  |
+ | --------------------- |  -----------------------------|  ----------------------------- | ------------------------------------------------------------ |
+ | tlsEnable             | no       | boolean | Whether to enable TLS function, optional configuration, default is false |
+ | tlsPort               | no       | int     | Master TLS port number, optional configuration, default is 8716 |
+ | tlsKeyStorePath       | no       | string  | The absolute storage path of the TLS keyStore file + the name of the keyStore file. This field is required and cannot be empty when the TLS function is enabled. |
+ | tlsKeyStorePassword   | no       | string  | The absolute storage path of the TLS keyStorePassword file + the name of the keyStorePassword file. This field is required and cannot be empty when the TLS function is enabled. |
+ | tlsTwoWayAuthEnable   | no       | boolean | Whether to enable TLS mutual authentication, optional configuration, the default is false |
+ | tlsTrustStorePath     | no       | string  | The absolute storage path of the TLS TrustStore file + the TrustStore file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
+ | tlsTrustStorePassword | no       | string  | The absolute storage path of the TLS TrustStorePassword file + the TrustStorePassword file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
+ 
+ ### velocity.properties file:
+ 
+ | Name                      | Required                          | Type                          | Description                                                  |
+ | ------------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+ | file.resource.loader.path | yes      | string | The absolute path of the master web template. This part is the absolute path plus /resources/templates of the project when the master is deployed. The configuration is consistent with the actual deployment. If the configuration fails, the master front page access fails. |
+
+ 
+### Broker
+  In a real production environment, you need to run at least 2 broker services on different servers for high availability.
+
+### Broker Configuration item details:
+
+### broker.ini file:
+
+[broker]
+>The main configuration unit for running the Broker system, required unit, the value is fixed to "[broker]"
+
+| Name                  | Required                          | Type                          | Description                                                  |
+| --------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| brokerId              | yes      | int     | Unique identifier of the server, required field, can be set to 0; when set to 0, the system by default converts the local IP to an int value and uses it as the id |
+| hostName              | yes      | string  | The host address on which the Broker provides external services, required; it must be an address configured on an enabled, non-loopback NIC and cannot be 127.0.0.1 |
+| port                  | no       | int     | Broker listening port, optional, default is 8123             |
+| webPort               | no       | int     | Broker's http management access port, optional, default is 8081 |
+| masterAddressList     | yes      | string  | Master address list of the cluster to which the broker belongs. Required fields. The format must be ip1:port1, ip2:port2, ip3:port3. |
+| primaryPath           | yes      | string  | Broker stores the absolute path of the message, mandatory field |
+| maxSegmentSize        | no       | int     | Broker stores the file size of the message data content, optional field, default 512M, maximum 1G |
+| maxIndexSegmentSize   | no       | int     | Broker stores the file size of the message Index content, optional field, default 18M, about 70W messages per file |
+| transferSize          | no       | int     | Broker allows the maximum message content size to be transmitted to the client each time, optional field, default is 512K |
+| consumerRegTimeoutMs  | no       | long    | Consumer heartbeat timeout, optional, in milliseconds, default 30 seconds |
+| socketRecvBuffer      | no       | long    | Size of the socket receive buffer SO_RCVBUF, in bytes; a negative value means it is not set and the system default is used |
+| socketSendBuffer      | no       | long    | Size of the socket send buffer SO_SNDBUF, in bytes; a negative value means it is not set and the system default is used |
+| secondDataPath        | no       | string  | SSD storage location on the Broker's machine, optional field. The default is blank, indicating that the machine has no SSD. |
+| maxSSDTotalFileCnt    | no       | int     | The maximum number of Data files allowed by the SSD where the Broker is located, optional field, default 70 |
+| maxSSDTotalFileSizes  | no       | long    | Maximum total size of data files allowed on the SSD where the Broker is located, optional field, default 32G. |
+| tcpWriteServiceThread | no       | int     | Broker supports the number of socket worker threads for TCP production services, optional fields, and defaults to 2 times the number of CPUs of the machine. |
+| tcpReadServiceThread  | no       | int     | Broker supports the number of socket worker threads for TCP consumer services, optional fields, defaults to 2 times the number of CPUs of the machine |
+| logClearupDurationMs  | no       | long    | Aging cleanup period of the message files, in milliseconds. By default a cleanup operation runs every 30 minutes; the minimum is 30 minutes. |
+| logFlushDiskDurMs     | no       | long    | Period for batch-checking message persistence to file, in milliseconds; by default a full check and flush is performed every 20 seconds |
+| visitTokenCheckInValidTimeMs       | no       | long | The length of the delay check for the visitToken check since the Broker is registered, in ms, the default is 120000, the value range [60000, 300000]. |
+| visitMasterAuth       | no       | boolean | Whether authentication towards the master is enabled, the default is false. If true, the username and signature information are added to the signaling reported to the master (see the example after this table). |
+| visitName             | no       | string  | User name of the access master. The default is an empty string. This value must exist when visitMasterAuth is true. The value must be the same as the value of the visitName field in master.ini. |
+| visitPassword         | no       | string  | The password for accessing the master. The default is an empty string. This value must exist when visitMasterAuth is true. The value must be the same as the value of the visitPassword field in master.ini. |
+| logFlushMemDurMs      | no       | long    | Period for batch-checking the persistence of in-memory messages to file, in milliseconds; by default a full check and flush is performed every 10 seconds |
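+
+The access-authentication fields must be paired across the two configuration files. For example (a minimal sketch with placeholder credentials):
+
+```ini
+# master.ini: require Brokers to authenticate themselves
+[master]
+needBrokerVisitAuth=true
+visitName=br_visit_user
+visitPassword=br_visit_pwd
+
+# broker.ini: present the same credentials to the Master
+[broker]
+visitMasterAuth=true
+visitName=br_visit_user
+visitPassword=br_visit_pwd
+```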
+
+[zookeeper]
+>Information about the ZooKeeper cluster in which the TubeMQ cluster corresponding to this Broker stores Offsets. Required unit, the value is fixed to "[zookeeper]".
+
+
+| Name                  | Required                          | Type                          | Description                                                  |
+| --------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| zkServerAddr          | no       | string | Zk server address, optional configuration, defaults to "localhost:2181" |
+| zkNodeRoot            | no       | string | The root path of the node on zk, optional configuration. The default is "/tube". |
+| zkSessionTimeoutMs    | no       | long   | Zk heartbeat timeout, in milliseconds, default 30 seconds    |
+| zkConnectionTimeoutMs | no       | long   | Zk connection timeout, in milliseconds, default 30 seconds   |
+| zkSyncTimeMs          | no       | long   | Zk data synchronization time, in milliseconds, default 5 seconds |
+| zkCommitPeriodMs      | no       | long   | The interval at which the broker cache data is flushed to zk, in milliseconds, default 5 seconds |
+| zkCommitFailRetries   | no       | int    | The maximum number of re-brushings after Broker fails to flush cached data to Zk |
+
+[tlsSetting]
+>The Broker uses TLS to encrypt the transport layer data. When TLS is enabled, this configuration unit provides the related settings. Optional unit, the value is fixed to "[tlsSetting]".
+
+
+| Name                  | Required                          | Type                           | Description                                                  |
+| --------------------- |  ----------------------------- |  ----------------------------- | ------------------------------------------------------------ |
+| tlsEnable             | no       | boolean | Whether to enable TLS function, optional configuration, default is false |
+| tlsPort               | no       | int     | Broker TLS port number, optional configuration, default is 8124 |
+| tlsKeyStorePath       | no       | string  | The absolute storage path of the TLS keyStore file + the name of the keyStore file. This field is required and cannot be empty when the TLS function is enabled. |
+| tlsKeyStorePassword   | no       | string  | The absolute storage path of the TLS keyStorePassword file + the name of the keyStorePassword file. This field is required and cannot be empty when the TLS function is enabled. |
+| tlsTwoWayAuthEnable   | no       | boolean | Whether to enable TLS mutual authentication, optional configuration, the default is false |
+| tlsTrustStorePath     | no       | string  | The absolute storage path of the TLS TrustStore file + the TrustStore file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
+| tlsTrustStorePassword | no       | string  | The absolute storage path of the TLS TrustStorePassword file + the TrustStorePassword file name. This field is required and cannot be empty when the TLS function is enabled and mutual authentication is enabled. |
diff --git a/docs/zh-cn/producer_example.md b/docs/zh-cn/producer_example.md
new file mode 100644
index 0000000..4c9902b
--- /dev/null
+++ b/docs/zh-cn/producer_example.md
@@ -0,0 +1,148 @@
+## Producer Example
+  TubeMQ provides two ways to initialize the session factory, TubeSingleSessionFactory and TubeMultiSessionFactory:
+  - TubeSingleSessionFactory creates only one session in its lifecycle, which is very useful in streaming scenarios.
+  - TubeMultiSessionFactory creates a new session on every call.
+
+1. TubeSingleSessionFactory
+   - Send Message Synchronously
+     ```
+     public final class SyncProducerExample {
+    
+        public static void main(String[] args) throws Exception{
+            final String localHostIP = "127.0.0.1";
+            final String masterHostAndPort = "localhost:8000";
+            final TubeClientConfig clientConfig = new TubeClientConfig(localHostIP, masterHostAndPort);
+            final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+            final MessageProducer messageProducer = messageSessionFactory.createProducer();
+            final String topic = "test";
+            final String body = "This is a test message from single-session-factory!";
+            byte[] bodyData = StringUtils.getBytesUtf8(body);
+            messageProducer.publish(topic);
+            Message message = new Message(topic, bodyData);
+            MessageSentResult result = messageProducer.sendMessage(message);
+            if (result.isSuccess()) {
+                System.out.println("sync send message : " + message);
+            }
+            messageProducer.shutdown();
+        }
+     }
+     ```
+     
+   - Send Message Asynchronously
+     ```
+     public final class AsyncProducerExample {
+     
+         public static void main(String[] args) throws Throwable {
+             final String localHostIP = "127.0.0.1";
+             final String masterHostAndPort = "localhost:8000";
+             final TubeClientConfig clientConfig = new TubeClientConfig(localHostIP, masterHostAndPort);
+             final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+             final MessageProducer messageProducer = messageSessionFactory.createProducer();
+             final String topic = "test";
+             final String body = "async send message from single-session-factory!";
+             byte[] bodyData = StringUtils.getBytesUtf8(body);
+             messageProducer.publish(topic);
+             Message message = new Message(topic, bodyData);
+             messageProducer.sendMessage(message, new MessageSentCallback(){
+                 @Override
+                 public void onMessageSent(MessageSentResult result) {
+                     if (result.isSuccess()) {
+                         System.out.println("async send message : " + message);
+                     } else {
+                         System.out.println("async send message failed : " + result.getErrMsg());
+                     }
+                 }
+                 @Override
+                 public void onException(Throwable e) {
+                     System.out.println("async send message error : " + e);
+                 }
+             });
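+             // Note: in a real application, wait for the callback to fire before shutting down;
+             // shutting down immediately after an asynchronous send may lose the in-flight message.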
+             messageProducer.shutdown();
+         }
+     }
+     ```
+     
+   - Send Message With Attributes
+     ```
+     public final class ProducerWithAttributeExample {
+     
+         public static void main(String[] args) throws Throwable {
+             final String localHostIP = "127.0.0.1";
+             final String masterHostAndPort = "localhost:8000";
+             final TubeClientConfig clientConfig = new TubeClientConfig(localHostIP, masterHostAndPort);
+             final MessageSessionFactory messageSessionFactory = new TubeSingleSessionFactory(clientConfig);
+             final MessageProducer messageProducer = messageSessionFactory.createProducer();
+             final String topic = "test";
+             final String body = "send message with attribute from single-session-factory!";
+             byte[] bodyData = StringUtils.getBytesUtf8(body);
+             messageProducer.publish(topic);
+             Message message = new Message(topic, bodyData);
+             //set attribute
+             message.setAttrKeyVal("test_key", "test value");
+             //msgType is used for consumer filtering, and msgTime(accurate to minute) is used as the pipe to send and receive statistics
+             SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMddHHmm");
+             message.putSystemHeader("test", sdf.format(new Date()));
+             messageProducer.sendMessage(message);
+             messageProducer.shutdown();
+         }
+     }
+     ```  
+     
+2. TubeMultiSessionFactory
+
+    ```
+    public class MultiSessionProducerExample {
+        
+        public static void main(String[] args) throws Exception{
+            final int SESSION_FACTORY_NUM = 10;
+            final String localHostIP = "127.0.0.1";
+            final String masterHostAndPort = "localhost:8000";
+            final TubeClientConfig clientConfig = new TubeClientConfig(localHostIP, masterHostAndPort);
+            final List<MessageSessionFactory> sessionFactoryList = new ArrayList<>(SESSION_FACTORY_NUM);
+            final ExecutorService sendExecutorService = Executors.newFixedThreadPool(SESSION_FACTORY_NUM);
+            final CountDownLatch latch = new CountDownLatch(SESSION_FACTORY_NUM);
+            for (int i = 0; i < SESSION_FACTORY_NUM; i++) {
+                TubeMultiSessionFactory tubeMultiSessionFactory = new TubeMultiSessionFactory(clientConfig);
+                sessionFactoryList.add(tubeMultiSessionFactory);
+                MessageProducer producer = tubeMultiSessionFactory.createProducer();
+                Sender sender = new Sender(producer, latch);
+                sendExecutorService.submit(sender);
+            }
+            latch.await();
+            sendExecutorService.shutdownNow();
+            for(MessageSessionFactory sessionFactory : sessionFactoryList){
+                sessionFactory.shutdown();
+            }
+        }
+    
+        private static class Sender implements Runnable {
+            
+            private MessageProducer producer;
+            
+            private CountDownLatch latch;
+    
+            public Sender(MessageProducer producer, CountDownLatch latch) {
+                this.producer = producer;
+                this.latch = latch;
+            }
+    
+            @Override
+            public void run() {
+                final String topic = "test";
+                try {
+                    producer.publish(topic);
+                    final byte[] bodyData = StringUtils.getBytesUtf8("This is a test message from multi-session factory");
+                    Message message = new Message(topic, bodyData);
+                    producer.sendMessage(message);
+                    producer.shutdown();
+                } catch (Throwable ex) {
+                    System.out.println("send message error : " + ex);
+                } finally {
+                    latch.countDown();
+                }
+            }
+        }
+    }
+    ```
+
+
diff --git a/docs/zh-cn/quick_start.md b/docs/zh-cn/quick_start.md
new file mode 100644
index 0000000..18e6acc
--- /dev/null
+++ b/docs/zh-cn/quick_start.md
@@ -0,0 +1,230 @@
+## Prerequisites
+
+- Java 1.7 or 1.8 (Java 9 and above haven't been verified yet)
+- Maven
+- [protoc 2.5.0](https://github.com/protocolbuffers/protobuf/releases/tag/v2.5.0)
+
+## Build
+
+### Build distribution tarball
+Go to the project root, and run
+```bash
+mvn clean package -DskipTests
+```
+If you want to build each module of the project separately, you need to run `mvn install` in the project root first.
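+For example (illustrated here with the `tubemq-server` module):
+```bash
+# install all modules into the local Maven repository once
+mvn clean install -DskipTests
+# afterwards a single module can be rebuilt on its own
+cd tubemq-server
+mvn clean package -DskipTests
+```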
+### Build source code
+If you want to build and debug source code in IDE, go to the project root, and run
+
+```bash
+mvn compile
+```
+
+This command will generate the Java source files from the `protoc` files, and the generated files are located in `target/generated-sources`.
+
+When this command finishes, you can import the project into your IDE as a Maven project.
+
+## Deploy
+After the build, please go to `tubemq-server/target`. You can find the
+**tubemq-server-x.x.x-bin.tar.gz** file. It is the server deployment package, which includes
+scripts, configuration files, dependency jars and web GUI code.
+
+For a first-time deployment, we just need to extract the package file. For example, we put these
+files into `/opt/tubemq-server`; here's the folder structure.
+```
+/opt/tubemq-server
+├── bin
+├── conf
+├── lib
+├── logs
+└── resources
+```
+## Configure
+There're two roles in the cluster: **Master** and **Broker**. Master and Broker
+can be deployed on the same server or on different servers. In this example, we set up our cluster
+as follows, and all services run on the same node. ZooKeeper should also be set up in your environment.
+
+| Role | TCP Port | TLS Port | Web Port | Comment |
+| ---- | -------- | -------- | -------- | ------- |
+| Master | 8099 | 8199 | 8080 | Meta data is stored at /stage/metadata |
+| Broker | 8123 | 8124 | 8081 | Message is stored at /stage/msgdata |
+| Zookeeper | 2181 | | | Offset is stored at /tubemq |
+
+You can follow the example below to update the corresponding config files. Please notice that the **YOUR_SERVER_IP** should
+be replaced with your server IP.
+
+##### conf/master.ini
+```ini
+[master]
+hostName=YOUR_SERVER_IP
+port=8000
+webPort=8080
+consumerBalancePeriodMs=30000
+firstBalanceDelayAfterStartMs=60000
+consumerHeartbeatTimeoutMs=30000
+producerHeartbeatTimeoutMs=45000
+brokerHeartbeatTimeoutMs=25000
+confModAuthToken=abc
+webResourcePath=/opt/tubemq-server/resources
+
+[zookeeper]
+zkNodeRoot=/tubemq
+zkServerAddr=localhost:2181
+zkSessionTimeoutMs=30000
+zkConnectionTimeoutMs=30000
+zkSyncTimeMs=5000
+zkCommitPeriodMs=5000
+
+[bdbStore]
+bdbRepGroupName=tubemqMasterGroup
+bdbNodeName=tubemqMasterGroupNode1
+bdbNodePort=9001
+bdbEnvHome=/stage/metadata
+bdbHelperHost=YOUR_SERVER_IP:9001
+bdbLocalSync=1
+bdbReplicaSync=3
+bdbReplicaAck=1
+bdbStatusCheckTimeoutMs=10000
+```
+
+##### resources/velocity.properties
+```properties
+resource.loader=file
+file.resource.loader.description=Velocity File Resource Loader
+file.resource.loader.class=org.apache.velocity.runtime.resource.loader.FileResourceLoader
+file.resource.loader.path=/opt/tubemq-server/resources/templates
+file.resource.loader.cache=false
+file.resource.loader.modificationCheckInterval=2
+string.resource.loader.description=Velocity String Resource Loader
+string.resource.loader.class=org.apache.velocity.runtime.resource.loader.StringResourceLoader
+input.encoding=UTF-8
+output.encoding=UTF-8
+```
+
+##### conf/broker.ini
+```ini
+[broker]
+brokerId=0
+hostName=YOUR_SERVER_IP
+port=8123
+webPort=8081
+masterAddressList=YOUR_SERVER_IP:8000
+primaryPath=/stage/msgdata
+maxSegmentSize=1073741824
+maxIndexSegmentSize=22020096
+transferSize=524288
+loadMessageStoresInParallel=true
+consumerRegTimeoutMs=35000
+
+[zookeeper]
+zkNodeRoot=/tubemq
+zkServerAddr=localhost:2181
+zkSessionTimeoutMs=30000
+zkConnectionTimeoutMs=30000
+zkSyncTimeMs=5000
+zkCommitPeriodMs=5000
+zkCommitFailRetries=10
+
+```
+
+You also need to update the `/etc/hosts` file on the master servers. Add the other master
+server IPs in this way, assuming the IP is `192.168.1.2`:
+##### /etc/hosts
+```
+192.168.1.2 192-168-1-2
+```
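+
+Before starting the Master, make sure the ZooKeeper service configured in `zkServerAddr` is running. With a standalone ZooKeeper distribution this is typically started as follows:
+```bash
+# run from the ZooKeeper installation directory
+bin/zkServer.sh start
+```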
+
+## Start Master
+After updating the config files, go to the `bin` folder and run this command to start
+the master service.
+```bash
+./master.sh start
+```
+You should be able to access `http://your-master-ip:8080/config/topic_list.htm` to see the
+web GUI now.
+
+![TubeMQ Console GUI](img/tubemq-console-gui.png)
+
+## Start Broker
+Before we start a broker service, we need to configure it on the master web GUI first.
+
+Go to the `Broker List` page, click `Add Single Broker`, and input the new broker 
+information.
+
+![Add Broker 1](img/tubemq-add-broker-1.png)
+
+In this example, we only need to input broker IP and authToken:
+1. broker IP: broker server ip
+2. authToken: A token pre-configured in the `conf/master.ini` file. Please check the
+`confModAuthToken` field in your `master.ini` file.
+
+Click the online link to activate the newly added broker.
+
+![Add Broker 2](img/tubemq-add-broker-2.png)
+
+Go to the broker server and, under the `bin` folder, run this command to start the broker service:
+```bash
+./broker.sh start
+```
+
+Refresh the broker list page in the GUI; you can see that the broker is now registered.
+
+After the sub-state of the broker changes to `idle`, we can add topics to that broker.
+
+![Add Broker 3](img/tubemq-add-broker-3.png)
+
+## Add Topic
+We can add or manage the cluster topics on the web GUI. To add a new topic, go to the
+topic list page and click the add new topic button.
+
+![Add Topic 1](img/tubemq-add-topic-1.png)
+
+Then select the brokers to which you want to deploy the topic.
+
+![Add Topic 5](img/tubemq-add-topic-5.png)
+
+We can see that the publish and subscribe state of the newly added topic is still grey. We need
+to go to the broker list page to reload the broker configuration.
+
+![Add Topic 6](img/tubemq-add-topic-6.png)
+
+![Add Topic 2](img/tubemq-add-topic-2.png)
+
+When the broker sub-state changes to idle, go to the topic list page. We can see
+that the topic publish/subscribe state is active now.
+
+![Add Topic 3](img/tubemq-add-topic-3.png)
+
+![Add Topic 4](img/tubemq-add-topic-4.png)
+
+Now we can use the topic to send messages.
+
+## Demo
+Now we can run the example to test our cluster. First let's run the producer demo. Please don't
+forget to replace `YOUR_SERVER_IP` with your server IP.
+```bash
+java -Dlog4j.configuration=file:/opt/tubemq-server/conf/tools.log4j.properties  -Djava.net.preferIPv4Stack=true -cp  /opt/tubemq-server/lib/*:/opt/tubemq-server/conf/*: com.tencent.tubemq.example.MessageProducerExample YOUR_SERVER_IP YOUR_SERVER_IP:8000 demo 10000000
+```
+From the log, we can see the message is sent out.
+```bash
+[2019-09-11 16:09:08,287] INFO Send demo 1000 message, keyCount is 268 (com.tencent.tubemq.example.MessageProducerExample)
+[2019-09-11 16:09:08,505] INFO Send demo 2000 message, keyCount is 501 (com.tencent.tubemq.example.MessageProducerExample)
+[2019-09-11 16:09:08,958] INFO Send demo 3000 message, keyCount is 755 (com.tencent.tubemq.example.MessageProducerExample)
+[2019-09-11 16:09:09,085] INFO Send demo 4000 message, keyCount is 1001 (com.tencent.tubemq.example.MessageProducerExample)
+```
+
+Then we run the consumer demo. Also replace the server IP:
+```bash
+java -Xmx512m -Dlog4j.configuration=file:/opt/tubemq-server/conf/tools.log4j.properties -Djava.net.preferIPv4Stack=true -cp /opt/tubemq-server/lib/*:/opt/tubemq-server/conf/*: com.tencent.tubemq.example.MessageConsumerExample YOUR_SERVER_IP YOUR_SERVER_IP:8000 demo demoGroup 3 1 1
+```
+From the log, we can see the messages received by the consumer.
+
+```bash
+[2019-09-11 16:09:29,720] INFO Receive messages:2500 (com.tencent.tubemq.example.MsgRecvStats)
+[2019-09-11 16:09:30,059] INFO Receive messages:5000 (com.tencent.tubemq.example.MsgRecvStats)
+[2019-09-11 16:09:34,493] INFO Receive messages:10000 (com.tencent.tubemq.example.MsgRecvStats)
+[2019-09-11 16:09:34,783] INFO Receive messages:12500 (com.tencent.tubemq.example.MsgRecvStats)
+```
+
+---
+
+
diff --git a/site_config/docs.js b/site_config/docs.js
index 3f4faac..fcab333 100644
--- a/site_config/docs.js
+++ b/site_config/docs.js
@@ -1,50 +1,88 @@
 export default {
-  'en-us': {
-    sidemenu: [
-      {
-        title: 'guide',
-        children: [
-          {
-            title: 'user guide',
-            link: '/en-us/docs/tubemq_user_guide.html',
-          },
-          {
-            title: 'contact',
-            link: '/en-us/docs/contact.html'
-          },
-          {
-            title: 'contribution',
-            link: '/en-us/docs/contribution.html'
-          }
-        ],
-      },
-      {
-          title: 'user guide',
-          link: '/en-us/docs/xx.html',
-      }
+    'en-us': {
+        sidemenu: [
+            {
+                title: 'User Guide',
+                children: [
+                    {
+                        title: 'Quick Start',
+                        link: '/en-us/docs/quick_start.html',
+                    },
+                    {
+                        title: 'Producer Example',
+                        link: '/en-us/docs/producer_example.html',
+                    },
+                    {
+                        title: 'Consumer Example',
+                        link: '/en-us/docs/consumer_example.html',
+                    },
+                ],
+            },
+            {
+                title: 'Architecture & Deployment',
+                children: [
+                    {
+                        title: 'Architecture',
+                        link: '/en-us/docs/architecture.html',
+                    },
+                    {
+                        title: 'Deployment',
+                        link: '/en-us/docs/deployment.html',
+                    },
+                ],
+            },
+            {
+                title: 'Contact',
+                link: '/en-us/docs/contact.html',
+            },
+            {
+                title: 'Contribution',
+                link: '/en-us/docs/contribution.html',
+            }
     ],
     barText: 'Documentation',
   },
   'zh-cn': {
     sidemenu: [
-      {
-        title: '引导',
-        children: [
-          {
-            title: '用户指南',
-            link: '/zh-cn/docs/tubemq_user_guide.html',
-          },
-          {
+        {
+            title: '引导',
+            children: [
+                {
+                    title: '快速开始',
+                    link: '/zh-cn/docs/quick_start.html',
+                },
+                {
+                    title: '生产者示例',
+                    link: '/zh-cn/docs/producer_example.html',
+                },
+                {
+                    title: '消费者示例',
+                    link: '/zh-cn/docs/consumer_example.html',
+                },
+            ],
+        },
+        {
+            title: '架构和部署',
+            children: [
+                {
+                    title: '架构',
+                    link: '/zh-cn/docs/architecture.html',
+                },
+                {
+                    title: '部署',
+                    link: '/zh-cn/docs/deployment.html',
+                },
+            ],
+        },
+        {
             title: '联系我们',
             link: '/zh-cn/docs/contact.html'
-          },
-          {
+        },
+        {
             title: '如何贡献',
             link: '/zh-cn/docs/contribution.html'
-          }
-        ],
-      },
+        }
     ],
-    barText: '文档',
-  },
+        barText: '文档',
+    },
 };