Posted to commits@hugegraph.apache.org by ji...@apache.org on 2022/09/15 05:27:42 UTC

[incubator-hugegraph-doc] branch master updated: add rank api & fix typo

This is an automated email from the ASF dual-hosted git repository.

jin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git


The following commit(s) were added to refs/heads/master by this push:
     new 06499b02 add rank api & fix typo
06499b02 is described below

commit 06499b02a4315f860ff2c41f825fc1487c949573
Author: imbajin <ji...@apache.org>
AuthorDate: Thu Sep 15 12:59:59 2022 +0800

    add rank api & fix typo
---
 .gitignore                                         |   9 +
 content/en/docs/BUILDING.md                        |  24 --
 content/en/docs/clients/hugegraph-client.md        | 209 +++++-----
 content/en/docs/clients/restful-api/rank.md        | 137 +++---
 content/en/docs/clients/restful-api/task.md        |  10 +-
 content/en/docs/config/config-option.md            | 460 ++++++++++-----------
 .../en/docs/contribution-guidelines/contribute.md  |   6 +-
 .../en/docs/contribution-guidelines/subscribe.md   |   4 +-
 content/en/docs/download/download.md               |  34 +-
 content/en/docs/guides/custom-plugin.md            |   6 +-
 content/en/docs/guides/faq.md                      |   2 +-
 content/en/docs/introduction/README.md             |   4 +-
 content/en/docs/language/hugegraph-example.md      |  24 +-
 content/en/docs/language/hugegraph-gremlin.md      | 180 ++++----
 .../docs/performance/hugegraph-benchmark-0.4.4.md  |  78 ++--
 .../docs/performance/hugegraph-benchmark-0.5.6.md  |  70 ++--
 content/en/docs/quickstart/hugegraph-client.md     |   6 +-
 content/en/docs/quickstart/hugegraph-hubble.md     |   4 +-
 content/en/docs/quickstart/hugegraph-loader.md     |  79 ++--
 content/en/docs/quickstart/hugegraph-server.md     |   2 +-
 content/en/docs/quickstart/hugegraph-spark.md      |   4 +-
 content/en/docs/quickstart/hugegraph-studio.md     |  41 +-
 content/en/docs/quickstart/hugegraph-tools.md      |  20 +-
 23 files changed, 705 insertions(+), 708 deletions(-)

diff --git a/.gitignore b/.gitignore
index af1b6467..b507d685 100644
--- a/.gitignore
+++ b/.gitignore
@@ -5,3 +5,12 @@ package-lock.json
 .hugo_build.lock
 nohup.out
 *.log
+
+# Default ignored files
+/shelf/
+/workspace.xml
+# Editor-based HTTP Client requests
+/httpRequests/
+# Datasource local storage ignored files
+/dataSources/
+/dataSources.local.xml
diff --git a/content/en/docs/BUILDING.md b/content/en/docs/BUILDING.md
deleted file mode 100644
index 0607141a..00000000
--- a/content/en/docs/BUILDING.md
+++ /dev/null
@@ -1,24 +0,0 @@
-HugeDoc Installation
-
-HugeDoc use [GitBook](https://github.com/GitbookIO/gitbook) to convert markdown to static website, 
-and use GitBook with NodeJs to server as web server.
-
-### How To use
-
-Install GitBook is via **NPM**:
-
-```
-$ npm install gitbook-cli -g
-```
-
-Preview and serve your book using:
-
-```
-$ gitbook serve
-```
-
-Or build the static website using:
-
-```
-$ gitbook build
-```
diff --git a/content/en/docs/clients/hugegraph-client.md b/content/en/docs/clients/hugegraph-client.md
index 5e7133ee..7525c399 100644
--- a/content/en/docs/clients/hugegraph-client.md
+++ b/content/en/docs/clients/hugegraph-client.md
@@ -57,39 +57,38 @@ The constraint information that PropertyKey allows to define includes: name, dat
 
 - name: The name of the property, used to distinguish different PropertyKeys, PropertyKeys with the same name are not allowed.
 
-interface                | param | must set
------------------------- | ----- | --------
-propertyKey(String name) | name  | y
+| interface                | param | must set |
+|--------------------------|-------|----------|
+| propertyKey(String name) | name  | y        |
 
 - datatype: property value type, you must select an explicit setting from the following table that conforms to the specific business scenario:
 
-interface     | Java Class
-------------- | ----------
-asText()      | String
-asInt()       | Integer
-asDate()      | Date
-asUuid()      | UUID
-asBoolean()   | Boolean
-asByte()      | Byte
-asBlob()      | Byte[]
-asDouble()    | Double
-asFloat()     | Float
-asLong()      | Long
-
-- cardinality: Whether the property value is single-valued or multi-valued, in the case of multi-valued, it is divided into allowing-duplicate values and not-allowing-duplicate values. This item is single by default. If necessary, you can select a setting from the following table:
-
-interface     | cardinality | description
-------------- | ----------- | -------------------------------------------
-valueSingle() | single      | single value
-valueList()   | list        | multi-values that allow duplicate value
-valueSet()    | set         | multi-values that not allow duplicate value
+| interface   | Java Class |
+|-------------|------------|
+| asText()    | String     |
+| asInt()     | Integer    |
+| asDate()    | Date       |
+| asUuid()    | UUID       |
+| asBoolean() | Boolean    |
+| asByte()    | Byte       |
+| asBlob()    | Byte[]     |
+| asDouble()  | Double     |
+| asFloat()   | Float      |
+| asLong()    | Long       |
+
+- cardinality: Whether the property value is single-valued or multivalued; in the multivalued case, it can either allow or disallow duplicate values. This item is single by default. If necessary, you can select a setting from the following table:
+
+| interface     | cardinality | description                              |
+|---------------|-------------|------------------------------------------|
+| valueSingle() | single      | single value                             |
+| valueList()   | list        | multiple values, duplicates allowed      |
+| valueSet()    | set         | multiple values, duplicates not allowed  |
 
 - userdata: Users can add some constraints or additional information by themselves, and then check whether the incoming properties satisfy the constraints, or extract additional information when necessary:
 
-interface                          | description
----------------------------------- | ----------------------------------------------
-userdata(String key, Object value) | The same key, the latter will cover the former
-
+| interface                          | description                                                   |
+|------------------------------------|---------------------------------------------------------------|
+| userdata(String key, Object value) | For duplicate keys, the later value overrides the earlier one |
 
 ##### 2.2.2 Create PropertyKey
 
@@ -105,7 +104,7 @@ graph.schema().propertyKey("name").asText().valueSet().ifNotExist().create()
 
 In the following examples, the syntax of `gremlin` and `java` is exactly the same, so we won't repeat them.
 
-- ifNotExist(): Add a judgment mechanism for create, if the current PropertyKey already exists, it will not be created, otherwise the property will be created. If no ifNotExist() is added, an exception will be thrown if a properkey with the same name already exists. The same as below, and will not be repeated there.
+- ifNotExist(): Adds an existence check to create: if the PropertyKey already exists, it will not be created; otherwise it will be. Without ifNotExist(), an exception is thrown if a PropertyKey with the same name already exists. The same applies below and will not be repeated.
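+
+For instance, the datatype, cardinality and userdata calls above can be chained into a single definition (a sketch; the "min" userdata key is purely illustrative, not a built-in constraint):
+
+```java
+// An Integer-valued, single-value property carrying a user-defined hint
+schema.propertyKey("age")
+      .asInt()
+      .valueSingle()
+      .userdata("min", 0)   // "min" is a user-chosen key, checked by user code
+      .ifNotExist()
+      .create();
+```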
 
 ##### 2.2.3 Delete PropertyKey
 
@@ -136,57 +135,57 @@ The constraint information that VertexLabel allows to define include: name, idSt
 
 - name: The name of the VertexLabel, used to distinguish different VertexLabels, VertexLabels with the same name are not allowed.
 
-interface                | param | must set
------------------------- | ----- | --------
-vertexLabel(String name) | name  | y
+| interface                | param | must set |
+|--------------------------|-------|----------|
+| vertexLabel(String name) | name  | y        |
 
-- idStrategy: Each VertexLabel can choose its own Id strategy. There are currently three strategies to choose from, namely Automatic (automatically generated), Customize (user input) and PrimaryKey (primary attribute key). Among them, Automatic uses the Snowflake algorithm to generate Id, Customize requires the user to pass in the Id of string or number type, and PrimaryKey allows the user to select several properties of VertexLabel as the basis for differentiation. HugeGraph will be spl [...]
+- idStrategy: Each VertexLabel can choose its own ID strategy. There are currently three strategies to choose from, namely Automatic (automatically generated), Customize (user input) and PrimaryKey (primary attribute key). Among them, Automatic uses the Snowflake algorithm to generate Id, Customize requires the user to pass in the Id of string or number type, and PrimaryKey allows the user to select several properties of VertexLabel as the basis for differentiation. HugeGraph will be spl [...]
 
-interface             | idStrategy        | description
---------------------- | ----------------- | ------------------------------------------------------
-useAutomaticId        | AUTOMATIC         | generate id automaticly by Snowflake algorithom
-useCustomizeStringId  | CUSTOMIZE_STRING  | passed id by user, must be string type
-useCustomizeNumberId  | CUSTOMIZE_NUMBER  | passed id by user, must be number type
-usePrimaryKeyId       | PRIMARY_KEY       | choose some important prop as primary key to splice id
+| interface            | idStrategy       | description                                             |
+|----------------------|------------------|---------------------------------------------------------|
+| useAutomaticId       | AUTOMATIC        | generate id automatically by Snowflake algorithm        |
+| useCustomizeStringId | CUSTOMIZE_STRING | passed id by user, must be string type                  |
+| useCustomizeNumberId | CUSTOMIZE_NUMBER | passed id by user, must be number type                  |
+| usePrimaryKeyId      | PRIMARY_KEY      | choose some important prop as primary key to splice id  |
 
 - properties: define the properties of the vertex, the incoming parameter is the name of the PropertyKey.
 
-interface                        | description
--------------------------------- | -------------------------
-properties(String... properties) | allow to pass multi properties
+| interface                        | description                    |
+|----------------------------------|--------------------------------|
+| properties(String... properties) | allow to pass multi properties |
 
-- primaryKeys: When the user selects the Id strategy of PrimaryKey, several primary properties need to be selected from the properties of VertexLabel as the basis for differentiation;
+- primaryKeys: When the user selects the ID strategy of PrimaryKey, several primary properties need to be selected from the properties of VertexLabel as the basis for differentiation;
 
-interface                   | description
---------------------------- | -----------------------------------------
-primaryKeys(String... keys) | allow to choose multi prop as primaryKeys
+| interface                   | description                               |
+|-----------------------------|-------------------------------------------|
+| primaryKeys(String... keys) | allow to choose multi prop as primaryKeys |
 
-Note that the selection of the Id strategy and the setting of primaryKeys have some mutual constraints, which cannot be called at will. The constraints are shown in the following table:
+Note that the selection of the ID strategy and the setting of primaryKeys have some mutual constraints, which cannot be called at will. The constraints are shown in the following table:
 
-|                   | useAutomaticId | useCustomizeStringId | useCustomizeNumberId | usePrimaryKeyId
-| ----------------- | -------------- | -------------------- | -------------------- | ---------------
-| unset primaryKeys | AUTOMATIC      | CUSTOMIZE_STRING     | CUSTOMIZE_NUMBER     | ERROR
-| set primaryKeys   | ERROR          | ERROR                | ERROR                | PRIMARY_KEY
+|                   | useAutomaticId | useCustomizeStringId | useCustomizeNumberId | usePrimaryKeyId |
+|-------------------|----------------|----------------------|----------------------|-----------------|
+| unset primaryKeys | AUTOMATIC      | CUSTOMIZE_STRING     | CUSTOMIZE_NUMBER     | ERROR           |
+| set primaryKeys   | ERROR          | ERROR                | ERROR                | PRIMARY_KEY     |
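+
+For instance (a sketch; the `person` and `book` labels and their `name`/`age` properties are assumed to be defined beforehand):
+
+```java
+// PRIMARY_KEY strategy: primaryKeys must be set and drawn from properties()
+schema.vertexLabel("person")
+      .properties("name", "age")
+      .usePrimaryKeyId()
+      .primaryKeys("name")   // the id will be spliced from the "name" value
+      .ifNotExist()
+      .create();
+
+// AUTOMATIC strategy: setting primaryKeys here would raise an ERROR
+schema.vertexLabel("book")
+      .properties("name")
+      .useAutomaticId()
+      .ifNotExist()
+      .create();
+```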
 
 - nullableKeys: For properties set by the properties(...) method, all of them are non-nullable by default, that is, the property must be assigned a value when creating a vertex, which may impose too strict integrity requirements on user data. In order to avoid such strong constraints, the user can set some properties to be nullable through this method, so that the properties can be unassigned when adding vertices.
 
-interface                          | description
----------------------------------- | -------------------------
-nullableKeys(String... properties) | allow to pass multi props
+| interface                          | description               |
+|------------------------------------|---------------------------|
+| nullableKeys(String... properties) | allow to pass multi props |
 
 Note: primaryKeys and nullableKeys cannot intersect, because a property cannot be both primary and nullable.
 
 - enableLabelIndex: The user can specify whether to create an index for the label. If you don't create it, you can't globally search for the vertices and edges of the specified label. If you create it, you can search globally, like `g.V().hasLabel('person'), g.E().has('label', 'person')` query, but the performance will be slower when inserting data, and it will take up more storage space. This defaults to true.
 
-interface                          | description
----------------------------------- | -------------------------------
-enableLabelIndex(boolean enable)   | Whether to create a label index
+| interface                        | description                     |
+|----------------------------------|---------------------------------|
+| enableLabelIndex(boolean enable) | Whether to create a label index |
 
 - userdata: Users can add some constraints or additional information by themselves, and then check whether the incoming properties meet the constraints, or extract additional information when necessary.
 
-interface                          | description
----------------------------------- | ----------------------------------------------
-userdata(String key, Object value) | The same key, the latter will cover the former
+| interface                          | description                                                   |
+|------------------------------------|---------------------------------------------------------------|
+| userdata(String key, Object value) | For duplicate keys, the later value overrides the earlier one |
 
 ##### 2.3.2 Create VertexLabel
 
@@ -244,37 +243,37 @@ The constraint information that EdgeLabel allows to define include: name, source
 
 - name: The name of the EdgeLabel, used to distinguish different EdgeLabels, EdgeLabels with the same name are not allowed.
 
-interface              | param | must set
----------------------- | ----- | --------
-edgeLabel(String name) | name  | y
+| interface              | param | must set |
+|------------------------|-------|----------|
+| edgeLabel(String name) | name  | y        |
 
 - sourceLabel: The name of the source vertex type of the edge link, only one is allowed;
 
 - targetLabel: The name of the target vertex type of the edge link, only one is allowed;
 
-interface                 | param | must set
-------------------------- | ----- | --------
-sourceLabel(String label) | label | y
-targetLabel(String label) | label | y
+| interface                 | param | must set |
+|---------------------------|-------|----------|
+| sourceLabel(String label) | label | y        |
+| targetLabel(String label) | label | y        |
 
 - frequency: Indicating the number of times a relationship occurs between two specific vertices, which can be single (single) or multiple (frequency), the default is single.
 
-interface    | frequency | description
------------- | --------- | -----------------------------------
-singleTime() | single    | a relationship can only occur once
-multiTimes() | multiple  | a relationship can occur many times
+| interface    | frequency | description                         |
+|--------------|-----------|-------------------------------------|
+| singleTime() | single    | a relationship can only occur once  |
+| multiTimes() | multiple  | a relationship can occur many times |
 
 - properties: Define the properties of the edge.
 
-interface                        | description
--------------------------------- | -------------------------
-properties(String... properties) | allow to pass multi props
+| interface                        | description               |
+|----------------------------------|---------------------------|
+| properties(String... properties) | allow to pass multi props |
 
 - sortKeys: When the frequency of EdgeLabel is multiple, some properties are needed to distinguish the multiple relationships, so sortKeys (sorted keys) is introduced;
 
-interface                | description
------------------------- | --------------------------------------
-sortKeys(String... keys) | allow to choose multi prop as sortKeys
+| interface                | description                            |
+|--------------------------|----------------------------------------|
+| sortKeys(String... keys) | allow to choose multi prop as sortKeys |
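+
+For instance, parallel edges between the same pair of vertices can be distinguished by a sortKey (a sketch; the `rating` edge label and the `date`/`score` properties are assumptions):
+
+```java
+// Two "rating" edges between the same user and movie are told apart by "date"
+schema.edgeLabel("rating")
+      .sourceLabel("user")
+      .targetLabel("movie")
+      .properties("date", "score")
+      .multiTimes()
+      .sortKeys("date")
+      .ifNotExist()
+      .create();
+```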
 
 - nullableKeys: Consistent with the concept of nullableKeys in vertices.
 
@@ -284,9 +283,9 @@ Note: sortKeys and nullableKeys also cannot intersect.
 
 - userdata: Users can add some constraints or additional information by themselves, and then check whether the incoming properties meet the constraints, or extract additional information when necessary.
 
-interface                          | description
----------------------------------- | ----------------------------------------------
-userdata(String key, Object value) | The same key, the latter will cover the former
+| interface                          | description                                                   |
+|------------------------------------|---------------------------------------------------------------|
+| userdata(String key, Object value) | For duplicate keys, the later value overrides the earlier one |
 
 ##### 2.4.2 Create EdgeLabel
 
@@ -307,7 +306,7 @@ schema.edgeLabel("knows").properties("price").nullableKeys("price").append();
 schema.edgeLabel("knows").remove();
 ```
 
-##### 2.4.5 Qeury EdgeLabel
+##### 2.4.5 Query EdgeLabel
 
 ```java
 // Get EdgeLabel
@@ -330,33 +329,33 @@ schema.getEdgeLabel("knows").userdata()
 
 IndexLabel is used to define the index type and describe the constraint information of the index, mainly for the convenience of query.
 
-The constraint information that IndexLabel allows to define include: name, baseType, baseValue, indexFeilds, indexType, which are introduced one by one below.
+The constraint information that IndexLabel allows to define include: name, baseType, baseValue, indexFields, indexType, which are introduced one by one below.
 
 - name: The name of the IndexLabel, used to distinguish different IndexLabels, IndexLabels with the same name are not allowed.
 
-interface               | param | must set
------------------------ | ----- | --------
-indexLabel(String name) | name  | y
+| interface               | param | must set |
+|-------------------------|-------|----------|
+| indexLabel(String name) | name  | y        |
 
 - baseType: Indicates whether to index VertexLabel or EdgeLabel, used in conjunction with the baseValue below.
 
 - baseValue: Specifies the name of the VertexLabel or EdgeLabel to be indexed.
 
-interface             | param     | description
---------------------- | --------- | ----------------------------------------
-onV(String baseValue) | baseValue | build index for VertexLabel: 'baseValue'
-onE(String baseValue) | baseValue | build index for EdgeLabel: 'baseValue'
+| interface             | param     | description                              |
+|-----------------------|-----------|------------------------------------------|
+| onV(String baseValue) | baseValue | build index for VertexLabel: 'baseValue' |
+| onE(String baseValue) | baseValue | build index for EdgeLabel: 'baseValue'   |
 
 - indexFields: on which fields to index, it can be a joint index for multiple columns.
 
-interface            | param | description
--------------------- | ----- | ---------------------------------------------------------
-by(String... fields) | files | allow to build index for multi fields for secondary index
+| interface            | param  | description                                                    |
+|----------------------|--------|----------------------------------------------------------------|
+| by(String... fields) | fields | allows building an index over multiple fields for a secondary index |
 
 - indexType: There are currently five types of indexes established, namely Secondary, Range, Search, Shard and Unique.
     - Secondary Index supports exact matching secondary index, allow to build joint index, joint index supports index prefix search
         - Single Property Secondary Index, support equality query, for example: the secondary index of the city property of the person vertex, you can use `g.V().has("city", "Beijing")` to query all the vertices with "city attribute value is Beijing"
-        - Joint Secondary Index, supports prefix query and equality query, such as: joint index of city and street properties of person vertex, you can use `g.V().has("city", "Beijing").has('street', 'Zhongguancun street ')` to query all vertices of "city property value is Beijing and street property value is Zhongguancun", or `g.V().has("city", "Beijing")` to query all vertices of "city property value is Beijing".
+        - Joint Secondary Index, supports prefix query and equality query, such as: a joint index of the city and street properties of the person vertex; you can use `g.V().has("city", "Beijing").has('street', 'Zhongguancun street ')` to query all vertices of "city property value is Beijing and street property value is Zhongguancun", or `g.V().has("city", "Beijing")` to query all vertices of "city property value is Beijing".
         > The query of Secondary Index is based on the query condition of "yes" or "equal", and does not support "partial matching".
     - Range Index supports for range queries of numeric types
         - Must be a single number or date attribute, for example: the range index of the age property of the person vertex, you can use `g.V().has("age", P.gt(18))` to query the vertices with "age property value greater than 18" . In addition to `P.gt()`, also supports `P.gte()`, `P.lte()`, `P.lt()`, `P.eq()`, `P.between() `, `P.inside()` and `P.outside()` etc.
@@ -371,13 +370,13 @@ by(String... fields) | files | allow to build index for multi fields for seconda
     - Unique Index supports properties uniqueness constraints, that is, the value of properties can be limited to not repeat, and joint indexing is allowed, but querying is not supported now
         - The unique index of single or multiple properties cannot be used for query, only the value of the property can be limited, and an error will be reported when there is a duplicate value.
 
-interface   | indexType | description
------------ | --------- | ---------------------------------------
-secondary() | Secondary | support prefix search
-range()     | Range     | support range(numeric or date type) search
-search()    | Search    | support full text search
-shard()     | Shard     | support prefix + range(numeric or date type) search
-unique()    | Unique    | support unique props value, not support search
+| interface   | indexType | description                                         |
+|-------------|-----------|-----------------------------------------------------|
+| secondary() | Secondary | support prefix search                               |
+| range()     | Range     | support range(numeric or date type) search          |
+| search()    | Search    | support full text search                            |
+| shard()     | Shard     | support prefix + range(numeric or date type) search |
+| unique()    | Unique    | support unique props value, not support search      |
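+
+For instance (a sketch; the `person` vertex label and its `city`, `age` and `bio` properties are assumptions):
+
+```java
+// One IndexLabel per index type on the "person" vertex label
+schema.indexLabel("personByCity").onV("person").by("city").secondary().ifNotExist().create();
+schema.indexLabel("personByAge").onV("person").by("age").range().ifNotExist().create();
+schema.indexLabel("personByBio").onV("person").by("bio").search().ifNotExist().create();
+```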
 
 ##### 2.5.2 Create IndexLabel
 
@@ -422,10 +421,10 @@ Vertex lop = graph.addVertex(T.label, "software", "name", "lop", "lang", "java",
 
 - The key to adding vertices is the vertex properties. The number of parameters of the vertex adding function must be an even number and satisfy the order of `key1 -> val1, key2 -> val2 ...`, and the order between key-value pairs is free .
 - The parameter must contain a special key-value pair, namely `T.label -> "val"`, which is used to define the category of the vertex, so that the program can obtain the schema definition of the VertexLabel from the cache or backend, and then do subsequent constraint checks. The label in the example is defined as person.
-- If the vertex type's Id policy is `AUTOMATIC`, users are not allowed to pass in id key-value pairs.
-- If the Id policy of the vertex type is `CUSTOMIZE_STRING`, the user needs to pass in the value of the id of the String type. The key-value pair is like: `"T.id", "123456"`.
-- If the Id policy of the vertex type is `CUSTOMIZE_NUMBER`, the user needs to pass in the value of the id of the Number type. The key-value pair is like: `"T.id", 123456`.
-- If the Id policy of the vertex type is `PRIMARY_KEY`, the parameters must also contain the name and value of the properties corresponding to the `primaryKeys`, if not set an exception will be thrown. For example, the `primaryKeys` of `person` is `name`, in the example, the value of `name` is set to `marko`.
+- If the vertex type's ID policy is `AUTOMATIC`, users are not allowed to pass in id key-value pairs.
+- If the ID policy of the vertex type is `CUSTOMIZE_STRING`, the user needs to pass in the value of the id of the String type. The key-value pair is like: `"T.id", "123456"`.
+- If the ID policy of the vertex type is `CUSTOMIZE_NUMBER`, the user needs to pass in the value of the id of the Number type. The key-value pair is like: `"T.id", 123456`.
+- If the ID policy of the vertex type is `PRIMARY_KEY`, the parameters must also contain the name and value of the properties corresponding to the `primaryKeys`; if they are not set, an exception will be thrown. For example, the `primaryKeys` of `person` is `name`; in the example, the value of `name` is set to `marko` (see the sketch after this list).
 - For properties that are not nullableKeys, a value must be assigned.
 - The remaining parameters are the settings of other properties of the vertex, but they are not required.
 - After calling the `addVertex` method, the vertices are inserted into the backend storage system immediately.
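+
+A sketch of two of the ID policies above (the `book` label with a `CUSTOMIZE_STRING` policy and the "ISBN-0001" id are hypothetical, for illustration only):
+
+```java
+// PRIMARY_KEY policy: the id is spliced from the primaryKey property "name"
+Vertex marko = graph.addVertex(T.label, "person", "name", "marko");
+
+// CUSTOMIZE_STRING policy: a string id must be passed in by the user
+Vertex book = graph.addVertex(T.label, "book", "T.id", "ISBN-0001", "name", "gremlin-guide");
+```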
@@ -438,8 +437,8 @@ After added vertices, edges are also needed to form a complete graph. Here is an
 Edge knows1 = marko.addEdge("knows", vadas, "city", "Beijing");
 ```
 
-- The function `addEdge()` of the (source) vertex is to add a edge(relationship) between itself and another vertex. The first parameter of the function is the label of the edge, and the second parameter is the target vertex. The position and order of these two parameters are fixed. The subsequent parameters are the order of `key1 -> val1, key2 -> val2 ...`, set the properties of the edge, and the key-value pair order is free.
-- The source and target vertices must conform to the definitions of sourcelabel and targetlabel in EdgeLabel, and cannot be added arbitrarily.
+- The function `addEdge()` of the (source) vertex adds an edge (relationship) between itself and another vertex. The first parameter of the function is the label of the edge, and the second parameter is the target vertex; the position and order of these two parameters are fixed. The subsequent parameters follow the order `key1 -> val1, key2 -> val2 ...` to set the properties of the edge, and the order of the key-value pairs is free.
+- The source and target vertices must conform to the definitions of sourceLabel and targetLabel in EdgeLabel, and cannot be added arbitrarily.
 - For properties that are not nullableKeys, a value must be assigned.
 
 
diff --git a/content/en/docs/clients/restful-api/rank.md b/content/en/docs/clients/restful-api/rank.md
index 8ef45c7f..6a7525fd 100644
--- a/content/en/docs/clients/restful-api/rank.md
+++ b/content/en/docs/clients/restful-api/rank.md
@@ -4,28 +4,35 @@ linkTitle: "Rank"
 weight: 10
 ---
 
-### 4.1 rank API 概述
+### 4.1 Rank API overview
 
-HugeGraphServer 除了上一节提到的遍历(traverser)方法,还提供了一类专门做推荐的方法,我们称为`rank API`,
-可在图中为一个点推荐与其关系密切的其它点。
+In addition to the traversal (traverser) methods described in the previous section, HugeGraph-Server also provides a family of recommendation methods called the `Rank API`.
+You can use it to recommend vertices that are closely related to a given vertex in the graph.
 
-### 4.2 rank API 详解
+
+### 4.2 Details of Rank API
 
 #### 4.2.1 Personal Rank API
 
-Personal Rank 算法典型场景是用于推荐应用中, 根据某个点现有的出边, 推荐具有相近 / 相同关系的其他点,
-比如根据某个人的阅读记录 / 习惯, 向它推荐其他可能感兴趣的书, 或潜在的书友, 举例如下:
-1. 假设给定 1个 Person 点 是 tom, 它喜欢 `a,b,c,d,e` 5本书, 我们的想给 tom 推荐一些书友, 以及一些书, 最容易的想法就是看看还有哪些人喜欢过这些书 (共同兴趣)
-2. 那么此时, 需要有其它的 Person 点比如 neo, 他喜欢 `b,d,f` 3本书, 以及 jay, 它喜欢 `c,d,e,g` 4本书, lee 它喜欢 `a,d,e,f` 4本书
-3. 由于 tom 已经看过的书不需要重复推荐, 所以返回结果里应该期望推荐有共同喜好的其他书友看过, 但 tom 没看过的书, 比如推荐 "f"  和 "g" 书, 且优先级 f > g
-4. 此时再计算 tom 的个性化 rank 值, 就会返回排序后 TopN 推荐的 书友 + 书 的结果了 (如果只需要推荐的书, 选择 OTHER_LABEL 即可)
+A typical scenario for the `Personal Rank` algorithm is recommendation: based on the out-edges of a vertex,
+recommend other vertices that have the same or similar edges.
+
+Here is a use case:
+Based on someone's reading history or habits, we can recommend books they may be interested in, or potential book pals.
+
+For example:
+1. Suppose we have a Person vertex named tom who likes 5 books `a,b,c,d,e`. If we want to recommend some book pals and books for tom, the easiest idea is to check who else has liked these books (shared interests).
+2. So we need some other Person vertices: neo, who likes 3 books `b,d,f`; jay, who likes 4 books `c,d,e,g`; and lee, who likes 4 books `a,d,e,f`.
+3. Since books tom has already read should not be recommended again, the result should contain books that tom's book pals have read but tom has not, such as books "f" and "g", with priority f > g.
+4. Computing tom's personalized rank values then returns a sorted TopN recommendation list of book pals + books. (Choose OTHER_LABEL if only books are wanted.)
+
+
+##### 4.2.1.0 Data Preparation
 
-##### 4.2.1.0 数据准备
+The case above is a simple example. Here we also provide a public 1MB test dataset [MovieLens](https://grouplens.org/datasets/movielens/) as a use case.
+Download the dataset, then load it into HugeGraph with HugeGraph-Loader. To keep things simple, we ignore all property data of the user and movie vertices; the id field alone is enough. We also ignore the concrete score of the rating edge.
 
-上面是一个简单的例子, 这里再提供一个公开的 1MB 测试数据集 [MovieLens](https://grouplens.org/datasets/movielens/) 为例,
-用户需下载该数据集,然后使用 HugeGraph-Loader 导入到 HugeGraph 中,简单起见,数据中顶点 user 
-和 movie 的属性都忽略,仅使用 id 字段即可,边 rating 的具体评分值也忽略。loader 使用的元数据
-文件和输入源映射文件内容如下:
+The schema file and the input-source mapping file used by the loader are as follows:
 
 ```groovy
 ////////////////////////////////////////////////////////////
@@ -112,42 +119,45 @@ schema.edgeLabel("rating")
 }
 ```
 
-> 注意将映射文件中`input.path`的值修改为自己本地的路径。
+> Note: modify the value of `input.path` in the mapping file to your own local path.
 
-##### 4.2.1.1 功能介绍
+##### 4.2.1.1 Function Introduction
 
-适用于二分图,给出所有源顶点相关的其他顶点及其相关性组成的列表。
+Suitable for bipartite graphs: returns a list of the other vertices related to the source vertex, together with their relevance scores.
 
-> 二分图:也称二部图,是图论里的一种特殊模型,也是一种特殊的网络流。其最大的特点在于,可以将图里的顶点分为两个集合,两个集合之间的点有边相连,但集合内的点之间没有直接关联。
 
-假设有一个用户和物品的二分图,基于随机游走的 PersonalRank 算法步骤如下:
+> A bipartite graph is a special model in graph theory, as well as a special kind of network flow. Its defining feature is that the vertices can be split into two sets: vertices within the same set are not connected to each other, while vertices from the two different sets may be connected by edges.
 
-1. 选定一个起点用户 u,其初始权重为 1.0,从 Vu 开始游走(有 alpha 的概率走到邻居点,1 - alpha 的概率停留);
-2. 如果决定向外游走, 那么会选取某一个类型的出边, 例如 `rating` 来查找共同的打分人:
-   1. 那就从当前节点的邻居节点中按照均匀分布随机选择一个,并且按照均匀分布划分权重值;
-   2. 给源顶点补偿权重 1 - alpha;
-   3. 重复步骤2;
-3. 达到一定步数或达到精度后收敛,得到推荐列表。
+Suppose we have a bipartite graph of users and items.
+The random-walk-based PersonalRank algorithm works as follows:
+
+1. Choose a user u as the start vertex and set its initial weight to 1.0. From Vu, walk to a neighbor vertex with probability alpha, or stay with probability (1 - alpha).
+2. If we decide to walk outward, choose a type of out-edge, such as `rating`, to find common raters:
+   1. Pick one of the current vertex's neighbors uniformly at random, and distribute the weight uniformly;
+   2. Compensate the source vertex's weight with (1 - alpha);
+   3. Repeat step 2;
+3. The walk converges after a certain number of steps or once a precision threshold is reached, and we obtain the recommendation list.
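+
+Below is a minimal, self-contained Java sketch of this walk, reusing the tom/neo/jay/lee example above (the class and method names are illustrative; the real computation is performed server-side by the Rank API):
+
+```java
+import java.util.*;
+
+public class PersonalRankSketch {
+
+    static Map<String, Double> personalRank(Map<String, List<String>> graph,
+                                            String source, double alpha, int maxDepth) {
+        Map<String, Double> rank = new HashMap<>();
+        rank.put(source, 1.0);
+        for (int i = 0; i < maxDepth; i++) {
+            Map<String, Double> next = new HashMap<>();
+            for (Map.Entry<String, Double> e : rank.entrySet()) {
+                List<String> neighbors = graph.getOrDefault(e.getKey(), List.of());
+                // Walk outward with probability alpha, splitting weight uniformly
+                for (String n : neighbors) {
+                    next.merge(n, alpha * e.getValue() / neighbors.size(), Double::sum);
+                }
+            }
+            // Compensate the source vertex with the (1 - alpha) stay probability
+            next.merge(source, 1 - alpha, Double::sum);
+            rank = next;
+        }
+        rank.remove(source);   // never recommend the start vertex itself
+        return rank;
+    }
+
+    public static void main(String[] args) {
+        Map<String, List<String>> g = new HashMap<>();
+        g.put("tom", List.of("a", "b", "c", "d", "e"));
+        g.put("neo", List.of("b", "d", "f"));
+        g.put("jay", List.of("c", "d", "e", "g"));
+        g.put("lee", List.of("a", "d", "e", "f"));
+        // Books point back to their readers: the walk crosses both directions
+        for (String person : List.copyOf(g.keySet())) {
+            for (String book : g.get(person)) {
+                g.computeIfAbsent(book, k -> new ArrayList<>()).add(person);
+            }
+        }
+        // Prints all ranked vertices (BOTH_LABEL), highest rank first; among
+        // tom's unread books, "f" outranks "g", matching the reasoning above
+        personalRank(g, "tom", 0.85, 5).entrySet().stream()
+                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
+                .forEach(e -> System.out.println(e.getKey() + " " + e.getValue()));
+    }
+}
+```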
 
 ###### Params
 
-**必填项**:
-- source: 源顶点 id
-- label: 源点出发的某类边 label,须连接两类不同顶点
+**Required**:
+- source: the id of the source vertex
+- label: the label of an edge type going out from the source vertex; it must connect two different types of vertices
 
-**选填项**:
-- alpha:每轮迭代时从某个点往外走的概率,与 PageRank 算法中的 alpha 类似,取值区间为 (0, 1], 默认值 `0.85` 
-- max_degree: 查询过程中,单个顶点遍历的最大邻接边数目,默认为 `10000`
-- max_depth: 迭代次数,取值区间为 [2, 50], 默认值 `5`
-- with_label:筛选结果中保留哪些结果,可选以下三类, 默认为 `BOTH_LABEL`
-    - SAME_LABEL:仅保留与源顶点相同类别的顶点
-    - OTHER_LABEL:仅保留与源顶点不同类别(二分图的另一端)的顶点
-    - BOTH_LABEL:同时保留与源顶点相同和相反类别的顶点
-- limit: 返回的顶点的最大数目,默认为 `100`
-- max_diff: 提前收敛的精度差, 默认为 `0.0001` (*后续实现*)  
-- sorted:返回的结果是否根据 rank 排序,为 true 时降序排列,反之不排序,默认为 `true`
+**Optional**:
+- alpha: the probability that the walk moves out from a vertex in each iteration, similar to the alpha of PageRank, value range (0, 1], default `0.85`
+- max_degree: the maximum number of adjacent edges traversed per vertex during the query, default `10000`
+- max_depth: the number of iterations, value range [2, 50], default `5`
+- with_label: which results to keep, default `BOTH_LABEL`, options as follows:
+    - SAME_LABEL: only keep vertices of the same type as the source vertex
+    - OTHER_LABEL: only keep vertices of a different type from the source vertex (the other side of the bipartite graph)
+    - BOTH_LABEL: keep vertices of both types
+- limit: the maximum number of vertices to return, default `100`
+- max_diff: the precision difference for early convergence, default `0.0001` (*to be implemented*)
+- sorted: whether to sort the results by rank, true for descending order, false for unsorted, default `true`
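+
+The request shown in the Usage section below can also be issued programmatically. Here is a sketch using Java's built-in HttpClient (Java 11+; the text block needs Java 15+). The vertex id `"1:marko"` and the `rating` label are placeholders for values from your own graph:
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class PersonalRankRequest {
+    public static void main(String[] args) throws Exception {
+        String body = """
+                {
+                  "source": "1:marko",
+                  "label": "rating",
+                  "alpha": 0.85,
+                  "max_depth": 5,
+                  "with_label": "OTHER_LABEL",
+                  "sorted": true
+                }""";
+        HttpRequest request = HttpRequest.newBuilder()
+                .uri(URI.create("http://localhost:8080/graphs/hugegraph/traversers/personalrank"))
+                .header("Content-Type", "application/json")
+                .POST(HttpRequest.BodyPublishers.ofString(body))
+                .build();
+        HttpResponse<String> response = HttpClient.newHttpClient()
+                .send(request, HttpResponse.BodyHandlers.ofString());
+        System.out.println(response.body());   // ranked vertices as JSON
+    }
+}
+```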
 
-##### 4.2.1.2 使用方法
+##### 4.2.1.2 Usage
 
 ###### Method & Url
 
@@ -192,17 +202,16 @@ POST http://localhost:8080/graphs/hugegraph/traversers/personalrank
 }
 ```
 
-##### 4.2.1.3 适用场景
+##### 4.2.1.3 Suitable Scenario
 
-两类不同顶点连接形成的二分图中,给某个点推荐相关性最高的其他顶点,例如:
-
-- 阅读推荐: 找出优先给某人推荐的其他**书籍**, 也可以同时推荐共同喜好最高的**书友** (例: 微信 "你的好友也在看 xx 文章" 功能)
-- 社交推荐: 找出拥有相同关注话题的其他**博主**, 也可以推荐可能感兴趣的**新闻/消息** (例: Weibo 中的 "热点推荐" 功能)
-- 商品推荐: 通过某人现在的购物习惯, 找出应优先推给它的**商品列表**, 也可以给它推荐**带货**播主 (例: TaoBao 的 "猜你喜欢" 功能)
+In a bipartite graph built from two different types of vertices, recommend the most related other vertices for a given vertex. For example:
+- Reading recommendation: find the **books** that should be recommended to someone first; **book pals** with the most shared preferences can be recommended at the same time (like WeChat's "your friends are also reading xx" feature)
+- Social recommendation: find other **posters** interested in the same topics, or **news/messages** someone may be interested in (such as Weibo's "hot recommendation" feature)
+- Commodity recommendation: based on someone's shopping habits, find a **commodity list** to recommend first; online **salesmen** may also be recommended (such as TaoBao's "You May Like" feature)
 
 #### 4.2.2 Neighbor Rank API
 
-##### 4.2.2.0 数据准备
+##### 4.2.2.0 Data Preparation
 
 ```java
 public class Loader {
@@ -286,23 +295,25 @@ public class Loader {
 }
 ```
 
-##### 4.2.2.1 功能介绍
+##### 4.2.2.1 Function Introduction
+
+In a general graph structure, find the top N vertices in each layer that are most correlated with a given starting point, together with their relevance.
 
-在一般图结构中,找出每一层与给定起点相关性最高的前 N 个顶点及其相关度,用图的语义理解就是:从起点往外走,
-走到各层各个顶点的概率。
+In graph terms: walk outward from the starting point and obtain the probability of reaching each vertex in each layer.
 
 ###### Params
 
-- source: 源顶点 id,必填项
-- alpha:每轮迭代时从某个点往外走的概率,与 PageRank 算法中的 alpha 类似,必填项,取值区间为 (0, 1] 
-- steps: 表示从起始顶点走过的路径规则,是一组 Step 的列表,每个 Step 对应结果中的一层,必填项。每个 Step 的结构如下:
-	- direction:表示边的方向(OUT, IN, BOTH),默认是 BOTH
-	- labels:边的类型列表,多个边类型取并集
-	- max_degree:查询过程中,单个顶点遍历的最大邻接边数目,默认为 10000 (注: 0.12版之前 step 内仅支持 degree 作为参数名, 0.12开始统一使用 max_degree, 并向下兼容 degree 写法)
-	- top:在结果中每一层只保留权重最高的前 N 个结果,默认为 100,最大值为 1000
-- capacity: 遍历过程中最大的访问的顶点数目,选填项,默认为10000000
+- source: the id of the source vertex, required
+- alpha: the probability that the walk moves out from a vertex in each iteration, similar to the alpha of PageRank, required, value range (0, 1]
+- steps: the path rules for the walk from the source vertex; a list of Steps, each Step mapping to one layer of the result, required. The structure of each Step is as follows:
+	- direction: the direction of the edge (OUT, IN, BOTH), default BOTH
+	- labels: a list of edge types; multiple edge types are unioned
+	- max_degree: the maximum number of adjacent edges traversed per vertex during the query, default `10000`
+        (Note: before v0.12, step only supported `degree` as the parameter name; since v0.12, `max_degree` is used, with `degree` still accepted for backward compatibility)
+	- top: keep only the top N results with the highest weight in each layer, default 100, maximum 1000
+- capacity: the maximum number of vertices visited during the traversal, optional, default 10000000
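+
+A corresponding sketch for this API, following the same HttpClient pattern as the Personal Rank sketch above (the source id `"1:marko"` and the `friend`/`like` edge labels are placeholders):
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class NeighborRankRequest {
+    public static void main(String[] args) throws Exception {
+        // One Step per layer: first hop to friends, second hop to what they like
+        String body = """
+                {
+                  "source": "1:marko",
+                  "alpha": 0.9,
+                  "steps": [
+                    {"direction": "OUT", "labels": ["friend"], "max_degree": 10000, "top": 100},
+                    {"direction": "OUT", "labels": ["like"], "max_degree": 10000, "top": 100}
+                  ],
+                  "capacity": 10000000
+                }""";
+        HttpRequest request = HttpRequest.newBuilder()
+                .uri(URI.create("http://localhost:8080/graphs/hugegraph/traversers/neighborrank"))
+                .header("Content-Type", "application/json")
+                .POST(HttpRequest.BodyPublishers.ofString(body))
+                .build();
+        HttpResponse<String> response = HttpClient.newHttpClient()
+                .send(request, HttpResponse.BodyHandlers.ofString());
+        System.out.println(response.body());   // per-layer ranked vertices as JSON
+    }
+}
+```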
 
-##### 4.2.2.2 使用方法
+##### 4.2.2.2 Usage
 
 ###### Method & Url
 
@@ -383,8 +394,8 @@ POST http://localhost:8080/graphs/hugegraph/traversers/neighborrank
 }
 ```
 
-##### 4.2.2.3 适用场景
+##### 4.2.2.3 Suitable Scenario
 
-为给定的起点在不同的层中找到最应该推荐的顶点。
+Find the vertices in each layer that should be most recommended for a given starting point.
 
-- 比如:在观众、朋友、电影、导演的四层图结构中,根据某个观众的朋友们喜欢的电影,为这个观众推荐电影;或者根据这些电影是谁拍的,为其推荐导演。
+- For example, in a four-layer graph of audiences, friends, movies, and directors: based on the movies that a certain audience's friends like, recommend movies for that audience; or recommend directors for those movies based on who made them.
\ No newline at end of file
diff --git a/content/en/docs/clients/restful-api/task.md b/content/en/docs/clients/restful-api/task.md
index ee1888c1..2477de2c 100644
--- a/content/en/docs/clients/restful-api/task.md
+++ b/content/en/docs/clients/restful-api/task.md
@@ -10,7 +10,7 @@ weight: 13
 
 ##### Params
 
-- status: the status of asynTasks
+- status: the status of asyncTasks
 - limit:the max number of tasks to return
 
 ##### Method & Url
@@ -77,7 +77,7 @@ GET http://localhost:8080/graphs/hugegraph/tasks/2
 }
 ```
 
-#### 7.1.3 Delete task infomation of an async task,**won't delete the task itself**
+#### 7.1.3 Delete task information of an async task, **won't delete the task itself**
 
 ##### Method & Url
 
@@ -91,7 +91,7 @@ DELETE http://localhost:8080/graphs/hugegraph/tasks/2
 204
 ```
 
-#### 7.1.4 取消某个异步任务,**该异步任务必须具有处理中断的能力**
+#### 7.1.4 Cancel an async task, **the task must be able to handle interruption**
 
 If you already created an async task via [Gremlin API](../gremlin) as follows:
 
@@ -112,7 +112,7 @@ If you already created an async task via [Gremlin API](../gremlin) as follows:
 ```
 PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
 ```
-> cancel it in 10s. if more than 10s,the task may already finished,then can't be cancelled.
+> Cancel it within 10s. If more than 10s have passed, the task may already have finished and can no longer be cancelled.
 
 ##### Response Status
 
@@ -128,4 +128,4 @@ PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
 }
 ```
 
-At this point, the number of vertices whose label is man must be less than 10.
+At this point, the number of vertices whose label is man must be less than 10.
\ No newline at end of file
diff --git a/content/en/docs/config/config-option.md b/content/en/docs/config/config-option.md
index ec511f67..5e09df89 100644
--- a/content/en/docs/config/config-option.md
+++ b/content/en/docs/config/config-option.md
@@ -8,269 +8,269 @@ weight: 2
 
 Corresponding configuration file `gremlin-server.yaml`
 
-config option           | default value                                                                                                | descrition
------------------------ | ------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------
-host                    | 127.0.0.1                                                                                                    | The host or ip of Gremlin Server.
-port                    | 8182                                                                                                         | The listening port of Gremlin Server.
-graphs                  | hugegraph: conf/hugegraph.properties                                                                         | The map of graphs with name and config file path.
-scriptEvaluationTimeout | 30000                                                                                                        | The timeout for gremlin script execution(millisecond).
-channelizer             | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer                                                  | Indicates the protocol which the Gremlin Server provides service.
-authentication          | authenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism.
+| config option           | default value                                                                                                | description                                                                      |
+|-------------------------|--------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------|
+| host                    | 127.0.0.1                                                                                                    | The host or ip of Gremlin Server.                                               |
+| port                    | 8182                                                                                                         | The listening port of Gremlin Server.                                           |
+| graphs                  | hugegraph: conf/hugegraph.properties                                                                         | The map of graphs with name and config file path.                               |
+| scriptEvaluationTimeout | 30000                                                                                                        | The timeout for gremlin script execution(millisecond).                          |
+| channelizer             | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer                                                  | Indicates the protocol which the Gremlin Server provides service.               |
+| authentication          | authenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism. |
 
 ### Rest Server & API Config Options
 
 Corresponding configuration file `rest-server.properties`
 
-config option                      | default value                                    | descrition
----------------------------------- | ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------
-graphs                             | [hugegraph:conf/hugegraph.properties]            | The map of graphs' name and config file.
-server.id                          | server-1                                         | The id of rest server, used for license verification.
-server.role                        | master                                           | The role of nodes in the cluster, available types are [master, worker, computer]
-restserver.url                     | http://127.0.0.1:8080                            | The url for listening of rest server.
-ssl.keystore_file                  | server.keystore                                  | The path of server keystore file used when https protocol is enabled.
-ssl.keystore_password              |                                                  | The password of the path of the server keystore file used when the https protocol is enabled.
-restserver.max_worker_threads      | 2 * CPUs                                         | The maximum worker threads of rest server.
-restserver.min_free_memory         | 64                                               | The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value.
-restserver.request_timeout         | 30                                               | The time in seconds within which a request must complete, -1 means no timeout.
-restserver.connection_idle_timeout | 30                                               | The time in seconds to keep an inactive connection alive, -1 means no timeout.
-restserver.connection_max_requests | 256                                              | The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited.
-gremlinserver.url                  | http://127.0.0.1:8182                            | The url of gremlin server.
-gremlinserver.max_route            | 8                                                | The max route number for gremlin server.
-gremlinserver.timeout              | 30                                               | The timeout in seconds of waiting for gremlin server.
-batch.max_edges_per_batch          | 500                                              | The maximum number of edges submitted per batch.
-batch.max_vertices_per_batch       | 500                                              | The maximum number of vertices submitted per batch.
-batch.max_write_ratio              | 50                                               | The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0.
-batch.max_write_threads            | 0                                                | The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads.
-auth.authenticator                 |                                                  | The class path of authenticator implemention. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator.
-auth.admin_token                   | 162f7848-0b6d-4faf-b557-3a0797869c55             | Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
-auth.graph_store                   | hugegraph                                        | The name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator.
-auth.user_tokens                   | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
-auth.audit_log_rate                | 1000.0                                           | The max rate of audit log output per user, default value is 1000 records per second.
-auth.cache_capacity                | 10240                                            | The max cache capacity of each auth cache item.
-auth.cache_expire                  | 600                                              | The expiration time in seconds of vertex cache.
-auth.remote_url                    |                                                  | If the address is empty, it provide auth service, otherwise it is auth client and also provide auth service through rpc forwarding. The remote url can be set to multiple addresses, which are concat by ','.
-auth.token_expire                  | 86400                                            | The expiration time in seconds after token created
-auth.token_secret                  | FXQXbJtbCLxODc6tGci732pkH1cyf8Qg                 | Secret key of HS256 algorithm.
-exception.allow_trace              | false                                            | Whether to allow exception trace stack.
+| config option                      | default value                                    | description                                                                                                                                                                                                    |
+|------------------------------------|--------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| graphs                             | [hugegraph:conf/hugegraph.properties]            | The map of graphs' name and config file.                                                                                                                                                                      |
+| server.id                          | server-1                                         | The id of rest server, used for license verification.                                                                                                                                                         |
+| server.role                        | master                                           | The role of nodes in the cluster, available types are [master, worker, computer]                                                                                                                              |
+| restserver.url                     | http://127.0.0.1:8080                            | The url for listening of rest server.                                                                                                                                                                         |
+| ssl.keystore_file                  | server.keystore                                  | The path of server keystore file used when https protocol is enabled.                                                                                                                                         |
+| ssl.keystore_password              |                                                  | The password of the path of the server keystore file used when the https protocol is enabled.                                                                                                                 |
+| restserver.max_worker_threads      | 2 * CPUs                                         | The maximum worker threads of rest server.                                                                                                                                                                    |
+| restserver.min_free_memory         | 64                                               | The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value.                                                                           |
+| restserver.request_timeout         | 30                                               | The time in seconds within which a request must complete, -1 means no timeout.                                                                                                                                |
+| restserver.connection_idle_timeout | 30                                               | The time in seconds to keep an inactive connection alive, -1 means no timeout.                                                                                                                                |
+| restserver.connection_max_requests | 256                                              | The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited.                                                                                                     |
+| gremlinserver.url                  | http://127.0.0.1:8182                            | The url of gremlin server.                                                                                                                                                                                    |
+| gremlinserver.max_route            | 8                                                | The max route number for gremlin server.                                                                                                                                                                      |
+| gremlinserver.timeout              | 30                                               | The timeout in seconds of waiting for gremlin server.                                                                                                                                                         |
+| batch.max_edges_per_batch          | 500                                              | The maximum number of edges submitted per batch.                                                                                                                                                              |
+| batch.max_vertices_per_batch       | 500                                              | The maximum number of vertices submitted per batch.                                                                                                                                                           |
+| batch.max_write_ratio              | 50                                               | The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0.                                                                                                             |
+| batch.max_write_threads            | 0                                                | The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads.                                                              |
+| auth.authenticator                 |                                                  | The class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator.                                                          |
+| auth.admin_token                   | 162f7848-0b6d-4faf-b557-3a0797869c55             | Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator.                                                                                                                    |
+| auth.graph_store                   | hugegraph                                        | The name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator.                                                                              |
+| auth.user_tokens                   | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator.                                                                                                         |
+| auth.audit_log_rate                | 1000.0                                           | The max rate of audit log output per user, default value is 1000 records per second.                                                                                                                          |
+| auth.cache_capacity                | 10240                                            | The max cache capacity of each auth cache item.                                                                                                                                                               |
+| auth.cache_expire                  | 600                                              | The expiration time in seconds of each auth cache item.                                                                                                                                                       |
+| auth.remote_url                    |                                                  | If the address is empty, it provides the auth service; otherwise it acts as an auth client and also provides the auth service through rpc forwarding. The remote url can be set to multiple addresses, which are concatenated by ','. |
+| auth.token_expire                  | 86400                                            | The expiration time in seconds after a token is created.                                                                                                                                                      |
+| auth.token_secret                  | FXQXbJtbCLxODc6tGci732pkH1cyf8Qg                 | Secret key of HS256 algorithm.                                                                                                                                                                                |
+| exception.allow_trace              | false                                            | Whether to allow exception trace stack.                                                                                                                                                                       |
 
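+As a minimal sketch, the auth options above go into `rest-server.properties`. The token values below are illustrative placeholders, not real defaults:
+
+```
+# Hypothetical rest-server.properties fragment: token-based auth via ConfigAuthenticator
+auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
+auth.admin_token=<your-admin-token>
+auth.user_tokens=[alice:<alice-token>]
+# the gremlin server this REST server talks to
+gremlinserver.url=http://127.0.0.1:8182
+```
+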
 ### Basic Config Options
 
Basic Config Options and Backend Config Options correspond to the configuration file {graph-name}.properties, such as `hugegraph.properties`.
 
-config option                    | default value                   | descrition
--------------------------------- | ------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
-gremlin.graph	                 | com.baidu.hugegraph.HugeFactory | Gremlin entrance to create graph.
-backend                          | rocksdb                         | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql].
-serializer                       | binary                          | The serializer for backend store, available values are [text, binary, cassandra, hbase, mysql].
-store                            | hugegraph                       | The database name like Cassandra Keyspace.
-store.connection_detect_interval | 600                             | The interval in seconds for detecting connections, if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time.
-store.graph                      | g                               | The graph table name, which store vertex, edge and property.
-store.schema                     | m                               | The schema table name, which store meta data.
-store.system                     | s                               | The system table name, which store system data.
-schema.illegal_name_regex	     | .*\s+$&#124;~.*	               | The regex specified the illegal format for schema name.
-schema.cache_capacity            | 10000                           | The max cache size(items) of schema cache.
-vertex.cache_type                | l2                              | The type of vertex cache, allowed values are [l1, l2].
-vertex.cache_capacity            | 10000000                        | The max cache size(items) of vertex cache.
-vertex.cache_expire              | 600                             | The expire time in seconds of vertex cache.
-vertex.check_customized_id_exist | false                           | Whether to check the vertices exist for those using customized id strategy.
-vertex.default_label             | vertex                          | The default vertex label.
-vertex.tx_capacity               | 10000                           | The max size(items) of vertices(uncommitted) in transaction.
-vertex.check_adjacent_vertex_exist | false                         | Whether to check the adjacent vertices of edges exist.
-vertex.lazy_load_adjacent_vertex | true                            | Whether to lazy load adjacent vertices of edges.
-vertex.part_edge_commit_size     | 5000                            | Whether to enable the mode to commit part of edges of vertex, enabled if commit size > 0, 0 means disabled.
-vertex.encode_primary_key_number | true                            | Whether to encode number value of primary key in vertex id.
-vertex.remove_left_index_at_overwrite | false                      | Whether remove left index at overwrite.
-edge.cache_type                  | l2                              | The type of edge cache, allowed values are [l1, l2].
-edge.cache_capacity              | 1000000                         | The max cache size(items) of edge cache.
-edge.cache_expire                | 600                             | The expiration time in seconds of edge cache.
-edge.tx_capacity                 | 10000                           | The max size(items) of edges(uncommitted) in transaction.
-query.page_size                  | 500                             | The size of each page when querying by paging.
-query.batch_size                 | 1000                            | The size of each batch when querying by batch.
-query.ignore_invalid_data        | true                            | Whether to ignore invalid data of vertex or edge.
-query.index_intersect_threshold  | 1000                            | The maximum number of intermediate results to intersect indexes when querying by multiple single index properties.
-query.ramtable_edges_capacity    | 20000000                        | The maximum number of edges in ramtable, include OUT and IN edges.
-query.ramtable_enable            | false                           | Whether to enable ramtable for query of adjacent edges.
-query.ramtable_vertices_capacity | 10000000                        | The maximum number of vertices in ramtable, generally the largest vertex id is used as capacity.
-query.optimize_aggregate_by_index| false                           | Whether to optimize aggregate query(like count) by index.
-oltp.concurrent_depth            | 10                              | The min depth to enable concurrent oltp algorithm.
-oltp.concurrent_threads          | 10                              | Thread number to concurrently execute oltp algorithm.
-oltp.collection_type             | EC                              | The implementation type of collections used in oltp algorithm.
-rate_limit.read                  | 0                               | The max rate(times/s) to execute query of vertices/edges.
-rate_limit.write                 | 0                               | The max rate(items/s) to add/update/delete vertices/edges.
-task.wait_timeout                | 10                              | Timeout in seconds for waiting for the task to complete,such as when truncating or clearing the backend.
-task.input_size_limit            | 16777216                        | The job input size limit in bytes.
-task.result_size_limit           | 16777216                        | The job result size limit in bytes.
-task.sync_deletion               | false                           | Whether to delete schema or expired data synchronously.
-task.ttl_delete_batch            | 1                               | The batch size used to delete expired data.
-computer.config                  | /conf/computer.yaml             | The config file path of computer job.
-search.text_analyzer             | ikanalyzer                      | Choose a text analyzer for searching the vertex/edge properties, available type are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer].
-search.text_analyzer_mode        | smart                           | Specify the mode for the text analyzer, the available mode of analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nSho [...]
-snowflake.datecenter_id          | 0                               | The datacenter id of snowflake id generator.
-snowflake.force_string           | false                           | Whether to force the snowflake long id to be a string.
-snowflake.worker_id              | 0                               | The worker id of snowflake id generator.
-raft.mode                        | false                           | Whether the backend storage works in raft mode.
-raft.safe_read                   | false                           | Whether to use linearly consistent read.
-raft.use_snapshot                | false                           | Whether to use snapshot.
-raft.endpoint                    | 127.0.0.1:8281                  | The peerid of current raft node.
-raft.group_peers                 | 127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283 | The peers of current raft group.
-raft.path                        | ./raft-log                      | The log path of current raft node.
-raft.use_replicator_pipeline     | true                            | Whether to use replicator line, when turned on it multiple logs can be sent in parallel, and the next log doesn't have to wait for the ack message of the current log to be sent.
-raft.election_timeout            | 10000                           | Timeout in milliseconds to launch a round of election.
-raft.snapshot_interval           | 3600                            | The interval in seconds to trigger snapshot save.
-raft.backend_threads             | current CPU vcores              | The thread number used to apply task to bakcend.
-raft.read_index_threads          | 8                               | The thread number used to execute reading index.
-raft.apply_batch                 | 1                               | The apply batch size to trigger disruptor event handler.
-raft.queue_size                  | 16384                           | The disruptor buffers size for jraft RaftNode, StateMachine and LogManager.
-raft.queue_publish_timeout       | 60                              | The timeout in second when publish event into disruptor.
-raft.rpc_threads                 | 80                              | The rpc threads for jraft RPC layer.
-raft.rpc_connect_timeout         | 5000                            | The rpc connect timeout for jraft rpc.
-raft.rpc_timeout                 | 60000                           | The rpc timeout for jraft rpc.
-raft.rpc_buf_low_water_mark      | 10485760                        | The ChannelOutboundBuffer's low water mark of netty, when buffer size less than this size, the method ChannelOutboundBuffer.isWritable() will return true, it means that low downstream pressure or good network.
-raft.rpc_buf_high_water_mark     | 20971520                        | The ChannelOutboundBuffer's high water mark of netty, only when buffer size exceed this size, the method ChannelOutboundBuffer.isWritable() will return false, it means that the downstream pressure is too great to process the request or network is very congestion, upstream needs to limit rate at this time.
-raft.read_strategy               | ReadOnlyLeaseBased              | The linearizability of read strategy.
+| config option                         | default value                                | description                                                                                                                                                                                                                                                                                                                                                                                                         [...]
+|---------------------------------------|----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| gremlin.graph	                        | com.baidu.hugegraph.HugeFactory              | Gremlin entrance to create graph.                                                                                                                                                                                                                                                                                                                                                                                   [...]
+| backend                               | rocksdb                                      | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql].                                                                                                                                                                                                                                                                                                                     [...]
+| serializer                            | binary                                       | The serializer for backend store, available values are [text, binary, cassandra, hbase, mysql].                                                                                                                                                                                                                                                                                                                     [...]
+| store                                 | hugegraph                                    | The database name like Cassandra Keyspace.                                                                                                                                                                                                                                                                                                                                                                          [...]
+| store.connection_detect_interval      | 600                                          | The interval in seconds for detecting connections, if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time.                                                                                                                                                                                                                         [...]
+| store.graph                           | g                                            | The graph table name, which stores vertex, edge and property data.                           [...]
+| store.schema                          | m                                            | The schema table name, which stores meta data.                                               [...]
+| store.system                          | s                                            | The system table name, which stores system data.                                             [...]
+| schema.illegal_name_regex             | .*\s+$&#124;~.*                              | The regex that specifies the illegal format for schema names.                                [...]
+| schema.cache_capacity                 | 10000                                        | The max cache size(items) of schema cache.                                                                                                                                                                                                                                                                                                                                                                          [...]
+| vertex.cache_type                     | l2                                           | The type of vertex cache, allowed values are [l1, l2].                                                                                                                                                                                                                                                                                                                                                              [...]
+| vertex.cache_capacity                 | 10000000                                     | The max cache size(items) of vertex cache.                                                                                                                                                                                                                                                                                                                                                                          [...]
+| vertex.cache_expire                   | 600                                          | The expire time in seconds of vertex cache.                                                                                                                                                                                                                                                                                                                                                                         [...]
+| vertex.check_customized_id_exist      | false                                        | Whether to check the vertices exist for those using customized id strategy.                                                                                                                                                                                                                                                                                                                                         [...]
+| vertex.default_label                  | vertex                                       | The default vertex label.                                                                                                                                                                                                                                                                                                                                                                                           [...]
+| vertex.tx_capacity                    | 10000                                        | The max size(items) of vertices(uncommitted) in transaction.                                                                                                                                                                                                                                                                                                                                                        [...]
+| vertex.check_adjacent_vertex_exist    | false                                        | Whether to check the adjacent vertices of edges exist.                                                                                                                                                                                                                                                                                                                                                              [...]
+| vertex.lazy_load_adjacent_vertex      | true                                         | Whether to lazy load adjacent vertices of edges.                                                                                                                                                                                                                                                                                                                                                                    [...]
+| vertex.part_edge_commit_size          | 5000                                         | Whether to enable committing the edges of a vertex in parts; enabled if the commit size > 0, where 0 means disabled.                                                                                         [...]
+| vertex.encode_primary_key_number      | true                                         | Whether to encode number value of primary key in vertex id.                                                                                                                                                                                                                                                                                                                                                         [...]
+| vertex.remove_left_index_at_overwrite | false                                        | Whether to remove the leftover index when overwriting.                                                                                                                                                       [...]
+| edge.cache_type                       | l2                                           | The type of edge cache, allowed values are [l1, l2].                                                                                                                                                                                                                                                                                                                                                                [...]
+| edge.cache_capacity                   | 1000000                                      | The max cache size(items) of edge cache.                                                                                                                                                                                                                                                                                                                                                                            [...]
+| edge.cache_expire                     | 600                                          | The expiration time in seconds of edge cache.                                                                                                                                                                                                                                                                                                                                                                       [...]
+| edge.tx_capacity                      | 10000                                        | The max size(items) of edges(uncommitted) in transaction.                                                                                                                                                                                                                                                                                                                                                           [...]
+| query.page_size                       | 500                                          | The size of each page when querying by paging.                                                                                                                                                                                                                                                                                                                                                                      [...]
+| query.batch_size                      | 1000                                         | The size of each batch when querying by batch.                                                                                                                                                                                                                                                                                                                                                                      [...]
+| query.ignore_invalid_data             | true                                         | Whether to ignore invalid data of vertex or edge.                                                                                                                                                                                                                                                                                                                                                                   [...]
+| query.index_intersect_threshold       | 1000                                         | The maximum number of intermediate results to intersect indexes when querying by multiple single index properties.                                                                                                                                                                                                                                                                                                  [...]
+| query.ramtable_edges_capacity         | 20000000                                     | The maximum number of edges in ramtable, including OUT and IN edges.                                                                                                                                         [...]
+| query.ramtable_enable                 | false                                        | Whether to enable ramtable for query of adjacent edges.                                                                                                                                                                                                                                                                                                                                                             [...]
+| query.ramtable_vertices_capacity      | 10000000                                     | The maximum number of vertices in ramtable, generally the largest vertex id is used as capacity.                                                                                                                                                                                                                                                                                                                    [...]
+| query.optimize_aggregate_by_index     | false                                        | Whether to optimize aggregate query(like count) by index.                                                                                                                                                                                                                                                                                                                                                           [...]
+| oltp.concurrent_depth                 | 10                                           | The min depth to enable concurrent oltp algorithm.                                                                                                                                                                                                                                                                                                                                                                  [...]
+| oltp.concurrent_threads               | 10                                           | Thread number to concurrently execute oltp algorithm.                                                                                                                                                                                                                                                                                                                                                               [...]
+| oltp.collection_type                  | EC                                           | The implementation type of collections used in oltp algorithm.                                                                                                                                                                                                                                                                                                                                                      [...]
+| rate_limit.read                       | 0                                            | The max rate(times/s) to execute query of vertices/edges.                                                                                                                                                                                                                                                                                                                                                           [...]
+| rate_limit.write                      | 0                                            | The max rate(items/s) to add/update/delete vertices/edges.                                                                                                                                                                                                                                                                                                                                                          [...]
+| task.wait_timeout                     | 10                                           | Timeout in seconds for waiting for the task to complete, such as when truncating or clearing the backend.                                                                                                    [...]
+| task.input_size_limit                 | 16777216                                     | The job input size limit in bytes.                                                                                                                                                                                                                                                                                                                                                                                  [...]
+| task.result_size_limit                | 16777216                                     | The job result size limit in bytes.                                                                                                                                                                                                                                                                                                                                                                                 [...]
+| task.sync_deletion                    | false                                        | Whether to delete schema or expired data synchronously.                                                                                                                                                                                                                                                                                                                                                             [...]
+| task.ttl_delete_batch                 | 1                                            | The batch size used to delete expired data.                                                                                                                                                                                                                                                                                                                                                                         [...]
+| computer.config                       | /conf/computer.yaml                          | The config file path of computer job.                                                                                                                                                                                                                                                                                                                                                                               [...]
+| search.text_analyzer                  | ikanalyzer                                   | Choose a text analyzer for searching the vertex/edge properties, available type are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer].                                                                                                                                                                                                                                                                [...]
+| search.text_analyzer_mode             | smart                                        | Specify the mode for the text analyzer, the available mode of analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standa [...]
+| snowflake.datacenter_id               | 0                                            | The datacenter id of snowflake id generator.                                                                                                                                                                                                                                                                                                                                                                        [...]
+| snowflake.force_string                | false                                        | Whether to force the snowflake long id to be a string.                                                                                                                                                                                                                                                                                                                                                              [...]
+| snowflake.worker_id                   | 0                                            | The worker id of snowflake id generator.                                                                                                                                                                                                                                                                                                                                                                            [...]
+| raft.mode                             | false                                        | Whether the backend storage works in raft mode.                                                                                                                                                                                                                                                                                                                                                                     [...]
+| raft.safe_read                        | false                                        | Whether to use linearly consistent read.                                                                                                                                                                                                                                                                                                                                                                            [...]
+| raft.use_snapshot                     | false                                        | Whether to use snapshot.                                                                                                                                                                                                                                                                                                                                                                                            [...]
+| raft.endpoint                         | 127.0.0.1:8281                               | The peerid of current raft node.                                                                                                                                                                                                                                                                                                                                                                                    [...]
+| raft.group_peers                      | 127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283 | The peers of current raft group.                                                                                                                                                                                                                                                                                                                                                                                    [...]
+| raft.path                             | ./raft-log                                   | The log path of current raft node.                                                                                                                                                                                                                                                                                                                                                                                  [...]
+| raft.use_replicator_pipeline          | true                                         | Whether to use the replicator pipeline; when turned on, multiple logs can be sent in parallel, and the next log doesn't have to wait for the ack message of the current log.                                 [...]
+| raft.election_timeout                 | 10000                                        | Timeout in milliseconds to launch a round of election.                                                                                                                                                                                                                                                                                                                                                              [...]
+| raft.snapshot_interval                | 3600                                         | The interval in seconds to trigger snapshot save.                                                                                                                                                                                                                                                                                                                                                                   [...]
+| raft.backend_threads                  | current CPU v-cores                          | The thread number used to apply task to backend.                                                                                                                                                                                                                                                                                                                                                                    [...]
+| raft.read_index_threads               | 8                                            | The thread number used to execute reading index.                                                                                                                                                                                                                                                                                                                                                                    [...]
+| raft.apply_batch                      | 1                                            | The apply batch size to trigger disruptor event handler.                                                                                                                                                                                                                                                                                                                                                            [...]
+| raft.queue_size                       | 16384                                        | The disruptor buffers size for jraft RaftNode, StateMachine and LogManager.                                                                                                                                                                                                                                                                                                                                         [...]
+| raft.queue_publish_timeout            | 60                                           | The timeout in seconds when publishing an event into the disruptor.                                                                                                                                          [...]
+| raft.rpc_threads                      | 80                                           | The rpc threads for jraft RPC layer.                                                                                                                                                                                                                                                                                                                                                                                [...]
+| raft.rpc_connect_timeout              | 5000                                         | The rpc connect timeout for jraft rpc.                                                                                                                                                                                                                                                                                                                                                                              [...]
+| raft.rpc_timeout                      | 60000                                        | The rpc timeout for jraft rpc.                                                                                                                                                                                                                                                                                                                                                                                      [...]
+| raft.rpc_buf_low_water_mark           | 10485760                                     | The ChannelOutboundBuffer's low water mark of netty; when the buffer size is less than this value, ChannelOutboundBuffer.isWritable() returns true, which means low downstream pressure or a good network.   [...]
+| raft.rpc_buf_high_water_mark          | 20971520                                     | The ChannelOutboundBuffer's high water mark of netty; only when the buffer size exceeds this value does ChannelOutboundBuffer.isWritable() return false, which means the downstream pressure is too great to process requests or the network is very congested, so the upstream needs to limit its rate.     [...]
+| raft.read_strategy                    | ReadOnlyLeaseBased                           | The linearizability of read strategy.                                                                                                                                                                                                                                                                                                                                                                               [...]
 
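+To tie these together, a minimal `hugegraph.properties` for a RocksDB-backed graph might look like this sketch (the cache sizes and rate limit are illustrative, not recommendations):
+
+```
+# Hypothetical {graph-name}.properties fragment
+gremlin.graph=com.baidu.hugegraph.HugeFactory
+backend=rocksdb
+serializer=binary
+store=hugegraph
+# shrink the vertex cache and its expiry for a memory-constrained machine
+vertex.cache_capacity=1000000
+vertex.cache_expire=300
+# cap writes at 10000 items/s; the default 0 disables rate limiting
+rate_limit.write=10000
+```
+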
### RPC Server Config Options
 
-config option                  | default value  | descrition
------------------------------- | -------------- | ------------------------------------------------------------------
-rpc.client_connect_timeout     | 20             | The timeout(in seconds) of rpc client connect to rpc server.
-rpc.client_load_balancer       | consistentHash | The rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is 'consistentHash', means forwording by request parameters.
-rpc.client_read_timeout        | 40             | The timeout(in seconds) of rpc client read from rpc server.
-rpc.client_reconnect_period    | 10             | The period(in seconds) of rpc client reconnect to rpc server.
-rpc.client_retries             | 3              | Failed retry number of rpc client calls to rpc server.
-rpc.config_order               | 999            | Sofa rpc configuration file loading order, the larger the more later loading.
-rpc.logger_impl                | com.alipay.sofa.rpc.log.SLF4JLoggerImpl | Sofa rpc log implementation class.
-rpc.protocol                   | bolt           | Rpc communication protocol, client and server need to be specified the same value.
-rpc.remote_url                 |                | The remote urls of rpc peers, it can be set to multiple addresses, which are concat by ',', empty value means not enabled.
-rpc.server_adaptive_port       | false          | Whether the bound port is adaptive, if it's enabled, when the port is in use, automatically +1 to detect the next available port. Note that this process is not atomic, so there may still be port conflicts.
-rpc.server_host                |                | The hosts/ips bound by rpc server to provide services, empty value means not enabled.
-rpc.server_port                | 8090           | The port bound by rpc server to provide services.
-rpc.server_timeout             | 30             | The timeout(in seconds) of rpc server execution.
+| config option               | default value                           | description                                                                                                                                                                                                    |
+|-----------------------------|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| rpc.client_connect_timeout  | 20                                      | The timeout(in seconds) of rpc client connect to rpc server.                                                                                                                                                  |
+| rpc.client_load_balancer    | consistentHash                          | The rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is 'consistentHash', means forwarding by request parameters.                                      |
+| rpc.client_read_timeout     | 40                                      | The timeout(in seconds) of rpc client read from rpc server.                                                                                                                                                   |
+| rpc.client_reconnect_period | 10                                      | The period(in seconds) of rpc client reconnect to rpc server.                                                                                                                                                 |
+| rpc.client_retries          | 3                                       | Failed retry number of rpc client calls to rpc server.                                                                                                                                                        |
+| rpc.config_order            | 999                                     | Sofa rpc configuration file loading order; the larger the value, the later it is loaded.                                                                                                                      |
+| rpc.logger_impl             | com.alipay.sofa.rpc.log.SLF4JLoggerImpl | Sofa rpc log implementation class.                                                                                                                                                                            |
+| rpc.protocol                | bolt                                    | Rpc communication protocol; client and server need to specify the same value.                                                                                                                                 |
+| rpc.remote_url              |                                         | The remote urls of rpc peers; it can be set to multiple addresses, which are concatenated by ',', and an empty value means not enabled.                                                                       |
+| rpc.server_adaptive_port    | false                                   | Whether the bound port is adaptive, if it's enabled, when the port is in use, automatically +1 to detect the next available port. Note that this process is not atomic, so there may still be port conflicts. |
+| rpc.server_host             |                                         | The hosts/ips bound by rpc server to provide services, empty value means not enabled.                                                                                                                         |
+| rpc.server_port             | 8090                                    | The port bound by rpc server to provide services.                                                                                                                                                             |
+| rpc.server_timeout          | 30                                      | The timeout(in seconds) of rpc server execution.                                                                                                                                                              |
 
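+As a sketch, a node in a three-node cluster could expose its rpc server and list its peers like this (hosts are placeholders):
+
+```
+# Hypothetical rpc fragment for one node
+rpc.server_host=192.168.0.1
+rpc.server_port=8090
+# the other nodes in the cluster, concatenated by ','
+rpc.remote_url=192.168.0.2:8090,192.168.0.3:8090
+```
+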
 ### Cassandra Backend Config Options
 
-config option                  | default value  | descrition
------------------------------- | -------------- | ------------------------------------------------------------------
-backend                        |                | Must be set to `cassandra`.
-serializer                     |                | Must be set to `cassandra`.
-cassandra.host                 | localhost      | The seeds hostname or ip address of cassandra cluster.
-cassandra.port                 | 9042           | The seeds port address of cassandra cluster.
-cassandra.connect_timeout      | 5              | The cassandra driver connect server timeout(seconds).
-cassandra.read_timeout         | 20             | The cassandra driver read from server timeout(seconds).
-cassandra.keyspace.strategy    | SimpleStrategy | The replication strategy of keyspace, valid value is SimpleStrategy or NetworkTopologyStrategy.
-cassandra.keyspace.replication | [3]            | The keyspace replication factor of SimpleStrategy, like '[3]'.Or replicas in each datacenter of NetworkTopologyStrategy, like '[dc1:2,dc2:1]'.
-cassandra.username             |                | The username to use to login to cassandra cluster.
-cassandra.password             |                | The password corresponding to cassandra.username.
-cassandra.compression_type     | none           | The compression algorithm of cassandra transport: none/snappy/lz4.
-cassandra.jmx_port=7199        | 7199           | The port of JMX API service for cassandra.
-cassandra.aggregation_timeout  | 43200          | The timeout in seconds of waiting for aggregation.
+| config option                  | default value  | description                                                                                                                                     |
+|--------------------------------|----------------|------------------------------------------------------------------------------------------------------------------------------------------------|
+| backend                        |                | Must be set to `cassandra`.                                                                                                                    |
+| serializer                     |                | Must be set to `cassandra`.                                                                                                                    |
+| cassandra.host                 | localhost      | The seeds hostname or ip address of cassandra cluster.                                                                                         |
+| cassandra.port                 | 9042           | The seeds port address of cassandra cluster.                                                                                                   |
+| cassandra.connect_timeout      | 5              | The timeout(seconds) for the cassandra driver to connect to the server.                                                                        |
+| cassandra.read_timeout         | 20             | The timeout(seconds) for the cassandra driver to read from the server.                                                                         |
+| cassandra.keyspace.strategy    | SimpleStrategy | The replication strategy of keyspace, valid value is SimpleStrategy or NetworkTopologyStrategy.                                                |
+| cassandra.keyspace.replication | [3]            | The keyspace replication factor of SimpleStrategy, like '[3]'.Or replicas in each datacenter of NetworkTopologyStrategy, like '[dc1:2,dc2:1]'. |
+| cassandra.username             |                | The username to use to login to cassandra cluster.                                                                                             |
+| cassandra.password             |                | The password corresponding to cassandra.username.                                                                                              |
+| cassandra.compression_type     | none           | The compression algorithm of cassandra transport: none/snappy/lz4.                                                                             |
+| cassandra.jmx_port=7199        | 7199           | The port of JMX API service for cassandra.                                                                                                     |
+| cassandra.aggregation_timeout  | 43200          | The timeout in seconds of waiting for aggregation.                                                                                             |
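+
+A minimal, illustrative sketch for the Cassandra backend, built only from the options in the table above and assumed to live in the graph's properties file (e.g. `conf/hugegraph.properties`); the host and credentials are placeholders:
+
+```properties
+backend=cassandra
+serializer=cassandra
+# seed hosts of the cassandra cluster
+cassandra.host=localhost
+cassandra.port=9042
+# leave empty if authentication is disabled on the cluster
+cassandra.username=
+cassandra.password=
+```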
 
 ### ScyllaDB Backend Config Options
 
-config option                  | default value | descrition
------------------------------- | ------------- | ------------------------------------------------------------------------------------------------
-backend                        |               | Must be set to `scylladb`.
-serializer                     |               | Must be set to `scylladb`.
+| config option | default value | description                 |
+|---------------|---------------|----------------------------|
+| backend       |               | Must be set to `scylladb`. |
+| serializer    |               | Must be set to `scylladb`. |
 
 Other options are consistent with the Cassandra backend.
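+
+In other words, a hedged ScyllaDB sketch only swaps the `backend` and `serializer` values and keeps the `cassandra.*` option names (hosts are placeholders):
+
+```properties
+backend=scylladb
+serializer=scylladb
+# scylladb reuses the cassandra.* options
+cassandra.host=localhost
+cassandra.port=9042
+```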
 
 ### RocksDB Backend Config Options
 
-config option                                   | default value                                                                                                                        | descrition
------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-backend                                         |                                                                                                                                      | Must be set to `rocksdb`.
-serializer                                      |                                                                                                                                      | Must be set to `binary`.
-rocksdb.data_disks                              | []                                                                                                                                   | The optimized disks for storing data of RocksDB. The format of each element: `STORE/TABLE: /path/disk`.Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_i [...]
-rocksdb.data_path                               | rocksdb-data                                                                                                                         | The path for storing data of RocksDB.
-rocksdb.wal_path                                | rocksdb-data                                                                                                                         | The path for storing WAL of RocksDB.
-rocksdb.allow_mmap_reads                        | false                                                                                                                                | Allow the OS to mmap file for reading sst tables.
-rocksdb.allow_mmap_writes                       | false                                                                                                                                | Allow the OS to mmap file for writing.
-rocksdb.block_cache_capacity                    | 8388608                                                                                                                              | The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache.
-rocksdb.bloom_filter_bits_per_key               | -1                                                                                                                                   | The bits per key in bloom filter, a good value is 10, which yields a filter with ~ 1% false positive rate, -1 means no bloom filter.
-rocksdb.bloom_filter_block_based_mode           | false                                                                                                                                | Use block based filter rather than full filter.
-rocksdb.bloom_filter_whole_key_filtering        | true                                                                                                                                 | True if place whole keys in the bloom filter, else place the prefix of keys.
-rocksdb.bottommost_compression                  | NO_COMPRESSION                                                                                                                       | The compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
-rocksdb.bulkload_mode                           | false                                                                                                                                | Switch to the mode to bulk load data into RocksDB.
-rocksdb.cache_index_and_filter_blocks           | false                                                                                                                                | Indicating if we'd put index/filter blocks to the block cache.
-rocksdb.compaction_style                        | LEVEL                                                                                                                                | Set compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO.
-rocksdb.compression                             | SNAPPY_COMPRESSION                                                                                                                   | The compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
-rocksdb.compression_per_level                   | [NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION] | The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
-rocksdb.delayed_write_rate                      | 16777216                                                                                                                             | The rate limit in bytes/s of user write requests when need to slow down if the compaction gets behind.
-rocksdb.log_level                               | INFO                                                                                                                                 | The info log level of RocksDB.
-rocksdb.max_background_jobs                     | 8                                                                                                                                    | Maximum number of concurrent background jobs, including flushes and compactions.
-rocksdb.level_compaction_dynamic_level_bytes    | false                                                                                                                                | Whether to enable level_compaction_dynamic_level_bytes, if it's enabled we give max_bytes_for_level_multiplier a priority against max_bytes_for_level_base, the bytes of base level is dynamic for a more predictable LSM tree, it is useful to limit worse case space amplification. Turning this feature on/off f [...]
-rocksdb.max_bytes_for_level_base                | 536870912                                                                                                                            | The upper-bound of the total size of level-1 files in bytes.
-rocksdb.max_bytes_for_level_multiplier          | 10.0                                                                                                                                 | The ratio between the total size of level (L+1) files and the total size of level L files for all L.
-rocksdb.max_open_files                          | -1                                                                                                                                   | The maximum number of open files that can be cached by RocksDB, -1 means no limit.
-rocksdb.max_subcompactions                      | 4                                                                                                                                    | The value represents the maximum number of threads per compaction job.
-rocksdb.max_write_buffer_number                 | 6                                                                                                                                    | The maximum number of write buffers that are built up in memory.
-rocksdb.max_write_buffer_number_to_maintain     | 0                                                                                                                                    | The total maximum number of write buffers to maintain in memory.
-rocksdb.min_write_buffer_number_to_merge        | 2                                                                                                                                    | The minimum number of write buffers that will be merged together.
-rocksdb.num_levels                              | 7                                                                                                                                    | Set the number of levels for this database.
-rocksdb.optimize_filters_for_hits               | false                                                                                                                                | This flag allows us to not store filters for the last level.
-rocksdb.optimize_mode                           | true                                                                                                                                 | Optimize for heavy workloads and big datasets.
-rocksdb.pin_l0_filter_and_index_blocks_in_cache | false                                                                                                                                | Indicating if we'd put index/filter blocks to the block cache.
-rocksdb.sst_path                                |                                                                                                                                      | The path for ingesting SST file into RocksDB.
-rocksdb.target_file_size_base                   | 67108864                                                                                                                             | The target file size for compaction in bytes.
-rocksdb.target_file_size_multiplier             | 1                                                                                                                                    | The size ratio between a level L file and a level (L+1) file.
-rocksdb.use_direct_io_for_flush_and_compaction  | false                                                                                                                                | Enable the OS to use direct read/writes in flush and compaction.
-rocksdb.use_direct_reads                        | false                                                                                                                                | Enable the OS to use direct I/O for reading sst tables.
-rocksdb.write_buffer_size                       | 134217728                                                                                                                            | Amount of data in bytes to build up in memory.
-rocksdb.max_manifest_file_size                  | 104857600                                                                                                                            | The max size of manifest file in bytes.
-rocksdb.skip_stats_update_on_db_open            | false                                                                                                                                | Whether to skip statistics update when opening the database, setting this flag true allows us to not update statistics.
-rocksdb.max_file_opening_threads                | 16                                                                                                                                   | The max number of threads used to open files.
-rocksdb.max_total_wal_size                      | 0                                                                                                                                    | Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of column families related, 0 means no limit.
-rocksdb.db_write_buffer_size                    | 0                                                                                                                                    | Total size of write buffers in bytes across all column families, 0 means no limit.
-rocksdb.delete_obsolete_files_period            | 21600                                                                                                                                | The periodicity in seconds when obsolete files get deleted, 0 means always do full purge.
-rocksdb.hard_pending_compaction_bytes_limit     | 274877906944                                                                                                                         | The hard limit to impose on pending compaction in bytes.
-rocksdb.level0_file_num_compaction_trigger      | 2                                                                                                                                    | Number of files to trigger level-0 compaction.
-rocksdb.level0_slowdown_writes_trigger          | 20                                                                                                                                   | Soft limit on number of level-0 files for slowing down writes.
-rocksdb.level0_stop_writes_trigger              | 36                                                                                                                                   | Hard limit on number of level-0 files for stopping writes.
-rocksdb.soft_pending_compaction_bytes_limit     | 68719476736                                                                                                                          | The soft limit to impose on pending compaction in bytes.
+| config option                                   | default value                                                                                                                        | description                                                                                                                                                                                                                                                                                                       [...]
+|-------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ [...]
+| backend                                         |                                                                                                                                      | Must be set to `rocksdb`.                                                                                                                                                                                                                                                                                         [...]
+| serializer                                      |                                                                                                                                      | Must be set to `binary`.                                                                                                                                                                                                                                                                                          [...]
+| rocksdb.data_disks                              | []                                                                                                                                   | The optimized disks for storing data of RocksDB. The format of each element: `STORE/TABLE: /path/disk`. Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search [...]
+| rocksdb.data_path                               | rocksdb-data                                                                                                                         | The path for storing data of RocksDB.                                                                                                                                                                                                                                                                             [...]
+| rocksdb.wal_path                                | rocksdb-data                                                                                                                         | The path for storing WAL of RocksDB.                                                                                                                                                                                                                                                                              [...]
+| rocksdb.allow_mmap_reads                        | false                                                                                                                                | Allow the OS to mmap file for reading sst tables.                                                                                                                                                                                                                                                                 [...]
+| rocksdb.allow_mmap_writes                       | false                                                                                                                                | Allow the OS to mmap file for writing.                                                                                                                                                                                                                                                                            [...]
+| rocksdb.block_cache_capacity                    | 8388608                                                                                                                              | The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache.                                                                                                                                                                                                                          [...]
+| rocksdb.bloom_filter_bits_per_key               | -1                                                                                                                                   | The bits per key in bloom filter, a good value is 10, which yields a filter with ~ 1% false positive rate, -1 means no bloom filter.                                                                                                                                                                              [...]
+| rocksdb.bloom_filter_block_based_mode           | false                                                                                                                                | Use block based filter rather than full filter.                                                                                                                                                                                                                                                                   [...]
+| rocksdb.bloom_filter_whole_key_filtering        | true                                                                                                                                 | True if place whole keys in the bloom filter, else place the prefix of keys.                                                                                                                                                                                                                                      [...]
+| rocksdb.bottommost_compression                  | NO_COMPRESSION                                                                                                                       | The compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.                                                                                                                                                                                      [...]
+| rocksdb.bulkload_mode                           | false                                                                                                                                | Switch to the mode to bulk load data into RocksDB.                                                                                                                                                                                                                                                                [...]
+| rocksdb.cache_index_and_filter_blocks           | false                                                                                                                                | Indicating if we'd put index/filter blocks to the block cache.                                                                                                                                                                                                                                                    [...]
+| rocksdb.compaction_style                        | LEVEL                                                                                                                                | Set compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO.                                                                                                                                                                                                                                                           [...]
+| rocksdb.compression                             | SNAPPY_COMPRESSION                                                                                                                   | The compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.                                                                                                                                                                                        [...]
+| rocksdb.compression_per_level                   | [NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION] | The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.                                                                                                                                                                                         [...]
+| rocksdb.delayed_write_rate                      | 16777216                                                                                                                             | The rate limit in bytes/s of user write requests when need to slow down if the compaction gets behind.                                                                                                                                                                                                            [...]
+| rocksdb.log_level                               | INFO                                                                                                                                 | The info log level of RocksDB.                                                                                                                                                                                                                                                                                    [...]
+| rocksdb.max_background_jobs                     | 8                                                                                                                                    | Maximum number of concurrent background jobs, including flushes and compactions.                                                                                                                                                                                                                                  [...]
+| rocksdb.level_compaction_dynamic_level_bytes    | false                                                                                                                                | Whether to enable level_compaction_dynamic_level_bytes, if it's enabled we give max_bytes_for_level_multiplier a priority against max_bytes_for_level_base, the bytes of base level is dynamic for a more predictable LSM tree, it is useful to limit worse case space amplification. Turning this feature on/off [...]
+| rocksdb.max_bytes_for_level_base                | 536870912                                                                                                                            | The upper-bound of the total size of level-1 files in bytes.                                                                                                                                                                                                                                                      [...]
+| rocksdb.max_bytes_for_level_multiplier          | 10.0                                                                                                                                 | The ratio between the total size of level (L+1) files and the total size of level L files for all L.                                                                                                                                                                                                              [...]
+| rocksdb.max_open_files                          | -1                                                                                                                                   | The maximum number of open files that can be cached by RocksDB, -1 means no limit.                                                                                                                                                                                                                                [...]
+| rocksdb.max_subcompactions                      | 4                                                                                                                                    | The value represents the maximum number of threads per compaction job.                                                                                                                                                                                                                                            [...]
+| rocksdb.max_write_buffer_number                 | 6                                                                                                                                    | The maximum number of write buffers that are built up in memory.                                                                                                                                                                                                                                                  [...]
+| rocksdb.max_write_buffer_number_to_maintain     | 0                                                                                                                                    | The total maximum number of write buffers to maintain in memory.                                                                                                                                                                                                                                                  [...]
+| rocksdb.min_write_buffer_number_to_merge        | 2                                                                                                                                    | The minimum number of write buffers that will be merged together.                                                                                                                                                                                                                                                 [...]
+| rocksdb.num_levels                              | 7                                                                                                                                    | Set the number of levels for this database.                                                                                                                                                                                                                                                                       [...]
+| rocksdb.optimize_filters_for_hits               | false                                                                                                                                | This flag allows us to not store filters for the last level.                                                                                                                                                                                                                                                      [...]
+| rocksdb.optimize_mode                           | true                                                                                                                                 | Optimize for heavy workloads and big datasets.                                                                                                                                                                                                                                                                    [...]
+| rocksdb.pin_l0_filter_and_index_blocks_in_cache | false                                                                                                                                | Indicating if we'd put index/filter blocks to the block cache.                                                                                                                                                                                                                                                    [...]
+| rocksdb.sst_path                                |                                                                                                                                      | The path for ingesting SST file into RocksDB.                                                                                                                                                                                                                                                                     [...]
+| rocksdb.target_file_size_base                   | 67108864                                                                                                                             | The target file size for compaction in bytes.                                                                                                                                                                                                                                                                     [...]
+| rocksdb.target_file_size_multiplier             | 1                                                                                                                                    | The size ratio between a level L file and a level (L+1) file.                                                                                                                                                                                                                                                     [...]
+| rocksdb.use_direct_io_for_flush_and_compaction  | false                                                                                                                                | Enable the OS to use direct read/writes in flush and compaction.                                                                                                                                                                                                                                                  [...]
+| rocksdb.use_direct_reads                        | false                                                                                                                                | Enable the OS to use direct I/O for reading sst tables.                                                                                                                                                                                                                                                           [...]
+| rocksdb.write_buffer_size                       | 134217728                                                                                                                            | Amount of data in bytes to build up in memory.                                                                                                                                                                                                                                                                    [...]
+| rocksdb.max_manifest_file_size                  | 104857600                                                                                                                            | The max size of manifest file in bytes.                                                                                                                                                                                                                                                                           [...]
+| rocksdb.skip_stats_update_on_db_open            | false                                                                                                                                | Whether to skip statistics update when opening the database, setting this flag true allows us to not update statistics.                                                                                                                                                                                           [...]
+| rocksdb.max_file_opening_threads                | 16                                                                                                                                   | The max number of threads used to open files.                                                                                                                                                                                                                                                                     [...]
+| rocksdb.max_total_wal_size                      | 0                                                                                                                                    | Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of column families related, 0 means no limit.                                                                                                                                                                       [...]
+| rocksdb.db_write_buffer_size                    | 0                                                                                                                                    | Total size of write buffers in bytes across all column families, 0 means no limit.                                                                                                                                                                                                                                [...]
+| rocksdb.delete_obsolete_files_period            | 21600                                                                                                                                | The periodicity in seconds when obsolete files get deleted, 0 means always do full purge.                                                                                                                                                                                                                         [...]
+| rocksdb.hard_pending_compaction_bytes_limit     | 274877906944                                                                                                                         | The hard limit to impose on pending compaction in bytes.                                                                                                                                                                                                                                                          [...]
+| rocksdb.level0_file_num_compaction_trigger      | 2                                                                                                                                    | Number of files to trigger level-0 compaction.                                                                                                                                                                                                                                                                    [...]
+| rocksdb.level0_slowdown_writes_trigger          | 20                                                                                                                                   | Soft limit on number of level-0 files for slowing down writes.                                                                                                                                                                                                                                                    [...]
+| rocksdb.level0_stop_writes_trigger              | 36                                                                                                                                   | Hard limit on number of level-0 files for stopping writes.                                                                                                                                                                                                                                                        [...]
+| rocksdb.soft_pending_compaction_bytes_limit     | 68719476736                                                                                                                          | The soft limit to impose on pending compaction in bytes.                                                                                                                                                                                                                                                          [...]
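+
+A minimal, illustrative RocksDB sketch (paths are placeholders; the tuning options above keep their defaults unless there is a measured reason to change them):
+
+```properties
+backend=rocksdb
+serializer=binary
+rocksdb.data_path=rocksdb-data
+rocksdb.wal_path=rocksdb-data
+# optionally map hot tables to dedicated disks, element format `STORE/TABLE: /path/disk`
+rocksdb.data_disks=[g/vertex: /path/disk1, g/edge_out: /path/disk2]
+```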
 
 ### HBase Backend Config Options
 
-config option            | default value               | descrition
------------------------- | --------------------------- | -------------------------------------------------------------------------------
-backend                  |                             | Must be set to `hbase`.
-serializer               |                             | Must be set to `hbase`.
-hbase.hosts              | localhost                   | The hostnames or ip addresses of HBase zookeeper, separated with commas.
-hbase.port               | 2181                        | The port address of HBase zookeeper.
-hbase.threads_max        | 64                          | The max threads num of hbase connections.
-hbase.znode_parent       | /hbase                      | The znode parent path of HBase zookeeper.
-hbase.zk_retry           | 3                           | The recovery retry times of HBase zookeeper.
-hbase.aggregation_timeout |  43200                     | The timeout in seconds of waiting for aggregation.
-hbase.kerberos_enable    |  false                      | Is Kerberos authentication enabled for HBase.
-hbase.kerberos_keytab    |                             | The HBase's key tab file for kerberos authentication.
-hbase.kerberos_principal |                             | The HBase's principal for kerberos authentication.
-hbase.krb5_conf          |  etc/krb5.conf              | Kerberos configuration file, including KDC IP, default realm, etc.
-hbase.hbase_site         | /etc/hbase/conf/hbase-site.xml| The HBase's configuration file
-hbase.enable_partition   | true                           | Is pre-split partitions enabled for HBase.
-hbase.vertex_partitions  | 10                             | The number of partitions of the HBase vertex table.
-hbase.edge_partitions    | 30                             | The number of partitions of the HBase edge table.
+| config option             | default value                  | description                                                               |
+|---------------------------|--------------------------------|--------------------------------------------------------------------------|
+| backend                   |                                | Must be set to `hbase`.                                                  |
+| serializer                |                                | Must be set to `hbase`.                                                  |
+| hbase.hosts               | localhost                      | The hostnames or ip addresses of HBase zookeeper, separated with commas. |
+| hbase.port                | 2181                           | The port address of HBase zookeeper.                                     |
+| hbase.threads_max         | 64                             | The maximum number of threads for hbase connections.                     |
+| hbase.znode_parent        | /hbase                         | The znode parent path of HBase zookeeper.                                |
+| hbase.zk_retry            | 3                              | The number of recovery retries for HBase zookeeper.                      |
+| hbase.aggregation_timeout | 43200                          | The timeout in seconds of waiting for aggregation.                       |
+| hbase.kerberos_enable     | false                          | Whether Kerberos authentication is enabled for HBase.                    |
+| hbase.kerberos_keytab     |                                | The HBase keytab file for kerberos authentication.                       |
+| hbase.kerberos_principal  |                                | The HBase principal for kerberos authentication.                         |
+| hbase.krb5_conf           | etc/krb5.conf                  | Kerberos configuration file, including KDC IP, default realm, etc.       |
+| hbase.hbase_site          | /etc/hbase/conf/hbase-site.xml | The path of the HBase configuration file.                                |
+| hbase.enable_partition    | true                           | Whether pre-split partitions are enabled for HBase.                      |
+| hbase.vertex_partitions   | 10                             | The number of partitions of the HBase vertex table.                      |
+| hbase.edge_partitions     | 30                             | The number of partitions of the HBase edge table.                        |
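+
+A minimal, illustrative HBase sketch (the zookeeper hosts are placeholders; the partition counts are the defaults from the table above):
+
+```properties
+backend=hbase
+serializer=hbase
+# zookeeper quorum of the hbase cluster, comma-separated
+hbase.hosts=zk1,zk2,zk3
+hbase.port=2181
+hbase.znode_parent=/hbase
+# pre-split partitions to spread load across region servers
+hbase.enable_partition=true
+hbase.vertex_partitions=10
+hbase.edge_partitions=30
+```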
 
 ### MySQL & PostgreSQL Backend Config Options
 
-config option            | default value               | descrition
------------------------- | --------------------------- | -------------------------------------------------------------------------------
-backend                  |                             | Must be set to `mysql`.
-serializer               |                             | Must be set to `mysql`.
-jdbc.driver              | com.mysql.jdbc.Driver       | The JDBC driver class to connect database.
-jdbc.url                 | jdbc:mysql://127.0.0.1:3306 | The url of database in JDBC format.
-jdbc.username            | root                        | The username to login database.
-jdbc.password            | ******                      | The password corresponding to jdbc.username.
-jdbc.ssl_mode            | false                       | The SSL mode of connections with database.
-jdbc.reconnect_interval  | 3                           | The interval(seconds) between reconnections when the database connection fails.
-jdbc.reconnect_max_times | 3                           | The reconnect times when the database connection fails.
-jdbc.storage_engine      | InnoDB                      | The storage engine of backend store database, like InnoDB/MyISAM/RocksDB for MySQL.
-jdbc.postgresql.connect_database | template1           | The database used to connect when init store, drop store or check store exist.
+| config option                    | default value               | description                                                                          |
+|----------------------------------|-----------------------------|-------------------------------------------------------------------------------------|
+| backend                          |                             | Must be set to `mysql`.                                                             |
+| serializer                       |                             | Must be set to `mysql`.                                                             |
+| jdbc.driver                      | com.mysql.jdbc.Driver       | The JDBC driver class used to connect to the database.                               |
+| jdbc.url                         | jdbc:mysql://127.0.0.1:3306 | The url of the database in JDBC format.                                              |
+| jdbc.username                    | root                        | The username used to log in to the database.                                         |
+| jdbc.password                    | ******                      | The password corresponding to jdbc.username.                                         |
+| jdbc.ssl_mode                    | false                       | The SSL mode of connections with the database.                                       |
+| jdbc.reconnect_interval          | 3                           | The interval (in seconds) between reconnections when the database connection fails.  |
+| jdbc.reconnect_max_times         | 3                           | The maximum number of reconnect attempts when the database connection fails.         |
+| jdbc.storage_engine              | InnoDB                      | The storage engine of the backend store database, like InnoDB/MyISAM/RocksDB for MySQL. |
+| jdbc.postgresql.connect_database | template1                   | The database used to connect when initializing, dropping, or checking the existence of a store. |
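+
+A minimal, illustrative MySQL sketch built only from the table above (the url, username and password are placeholders):
+
+```properties
+backend=mysql
+serializer=mysql
+jdbc.driver=com.mysql.jdbc.Driver
+jdbc.url=jdbc:mysql://127.0.0.1:3306
+jdbc.username=root
+jdbc.password=your-password
+```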
 
 ### PostgreSQL Backend Config Options
 
-config option            | default value               | descrition
------------------------- | --------------------------- | -------------------------------------------------------------------------------
-backend                  |                             | Must be set to `postgresql`.
-serializer               |                             | Must be set to `postgresql`.
+| config option | default value | description                   |
+|---------------|---------------|------------------------------|
+| backend       |               | Must be set to `postgresql`. |
+| serializer    |               | Must be set to `postgresql`. |
 
 Other options are consistent with the MySQL backend.
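+
+Accordingly, a hedged PostgreSQL sketch swaps the backend, serializer, driver and url; note that the driver class `org.postgresql.Driver` and port 5432 are standard PostgreSQL assumptions, not values taken from the tables above:
+
+```properties
+backend=postgresql
+serializer=postgresql
+jdbc.driver=org.postgresql.Driver
+jdbc.url=jdbc:postgresql://127.0.0.1:5432/
+jdbc.postgresql.connect_database=template1
+```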
 
diff --git a/content/en/docs/contribution-guidelines/contribute.md b/content/en/docs/contribution-guidelines/contribute.md
index 2ce93249..3ec12ef1 100644
--- a/content/en/docs/contribution-guidelines/contribute.md
+++ b/content/en/docs/contribution-guidelines/contribute.md
@@ -4,7 +4,7 @@ linkTitle: "How to Contribute to HugeGraph"
 weight: 1
 ---
 
-Thanks for taking the time to contribute! As an open source project, HugeGraph is looking forward to be contributed from everyone, and we are also grateful to all of the contributors.
+Thanks for taking the time to contribute! As an open source project, HugeGraph welcomes contributions from everyone, and we are grateful to all the contributors.
 
 The following is a contribution guide for HugeGraph:
 
@@ -50,7 +50,7 @@ If you encounter bugs or have any questions, please go to [GitHub Issues](https:
 
 #### 3.1 Create a new branch
 
-Please don't use master branch for development. Instead we should create a new branch:
+Please don't use the master branch for development. Create a new branch instead:
 
 ```shell
 # checkout master branch
@@ -132,7 +132,7 @@ Please click on "Details" to find the problem if any check does not pass.
 
 If there are checks not passed or changes requested, then continue to modify the code and push again.
 
-## 6. Further changes after review 
+## 6. More changes after review 
 
 If we have not passed the review, don't be discouraged. Usually a commit needs to be reviewed several times before being accepted! Please follow the review comments and make further changes.
 
diff --git a/content/en/docs/contribution-guidelines/subscribe.md b/content/en/docs/contribution-guidelines/subscribe.md
index 2256f106..aa34219e 100644
--- a/content/en/docs/contribution-guidelines/subscribe.md
+++ b/content/en/docs/contribution-guidelines/subscribe.md
@@ -8,7 +8,7 @@ It is highly recommended to subscribe to the development mailing list to keep up
 
 In the process of using HugeGraph, if you have any questions or ideas, suggestions, you can participate in the HugeGraph community building through the Apache mailing list. Sending a subscription email is also very simple, the steps are as follows:
 
-1. Send an email to dev-subscribe@hugegraph.apache.org with your own email address, subject and content are arbitrary.
+1. Email dev-subscribe@hugegraph.apache.org from your own email address; the subject and content are arbitrary.
 
 2. Receive confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@hugegraph.apache.org (if not received, please confirm whether the email is automatically classified as spam, promotion email, subscription email, etc.) . Then reply directly to the email, or click on the link in the email to reply quickly, the subject and content are arbitrary.
 
@@ -20,7 +20,7 @@ If you do not need to know what's going on with HugeGraph, you can unsubscribe f
 
 Unsubscribe from the mailing list steps are as follows:
 
-1. Send an email to dev-unsubscribe@hugegraph.apache.org with your subscribed email address, subject and content are arbitrary.
+1. Email dev-unsubscribe@hugegraph.apache.org with your subscribed email address, subject and content are arbitrary.
 
 2. Receive confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@hugegraph.apache.org (if not received, please confirm whether the email is automatically classified as spam, promotion email, subscription email, etc.) . Then reply directly to the email, or click on the link in the email to reply quickly, the subject and content are arbitrary.
 
diff --git a/content/en/docs/download/download.md b/content/en/docs/download/download.md
index e323dd12..f9f0f8f2 100644
--- a/content/en/docs/download/download.md
+++ b/content/en/docs/download/download.md
@@ -8,26 +8,26 @@ weight: 2
 
 The latest HugeGraph: **0.12.0**, released on _2021-12-31_.
 
-components       | description          | download
----------------- | -------------------- | ----------------------------------------------------------------------------------------------------------------
-HugeGraph-Server | The main program of HugeGraph      | [0.12.0](https://github.com/hugegraph/hugegraph/releases/download/v0.12.0/hugegraph-0.12.0.tar.gz)
-HugeGraph-Hubble | Web-based Visual Graphical Interface  | [1.6.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.6.0/hugegraph-hubble-1.6.0.tar.gz)
-HugeGraph-Loader | Data import tool            | [0.12.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.12.0/hugegraph-loader-0.12.0.tar.gz)
-HugeGraph-Tools  | Command line toolset            | [1.6.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.6.0/hugegraph-tools-1.6.0.tar.gz)
+| components       | description                          | download                                                                                                         |
+|------------------|--------------------------------------|------------------------------------------------------------------------------------------------------------------|
+| HugeGraph-Server | The main program of HugeGraph        | [0.12.0](https://github.com/hugegraph/hugegraph/releases/download/v0.12.0/hugegraph-0.12.0.tar.gz)               |
+| HugeGraph-Hubble | Web-based Visual Graphical Interface | [1.6.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.6.0/hugegraph-hubble-1.6.0.tar.gz)    |
+| HugeGraph-Loader | Data import tool                     | [0.12.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.12.0/hugegraph-loader-0.12.0.tar.gz) |
+| HugeGraph-Tools  | Command line toolset                 | [1.6.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.6.0/hugegraph-tools-1.6.0.tar.gz)      |
 
 ### Versions mapping
 
-server                                                                                           | client | loader                                                                                                                                                                      | hubble                                                                                                             | common | tools |
------------------------------------------------------------------------------------------------- | ------ | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ | -----  | -----------------------------------------------------------------------------------------------------------
-[0.12.0](https://github.com/hugegraph/hugegraph/releases/download/v0.12.0/hugegraph-0.12.0.tar.gz)  | [2.0.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/2.0.1)  | [0.12.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.12.0/hugegraph-loader-0.12.0.tar.gz)   | [1.6.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.6.0/hugegraph-hubble-1.6.0.tar.gz)       | [2.0.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hu [...]
-[0.11.2](https://github.com/hugegraph/hugegraph/releases/download/v0.11.2/hugegraph-0.11.2.tar.gz)  | [1.9.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.9.1)  | [0.11.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.11.1/hugegraph-loader-0.11.1.tar.gz)   | [1.5.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.5.0/hugegraph-hubble-1.5.0.tar.gz)       | [1.8.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hu [...]
-[0.10.4](https://github.com/hugegraph/hugegraph/releases/download/v0.10.4/hugegraph-0.10.4.tar.gz)  | [1.8.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.8.0)  | [0.10.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.10.1/hugegraph-loader-0.10.1.tar.gz)   | [0.10.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.10.0/hugegraph-studio-0.10.0.tar.gz)      | [1.6.16](https://mvnrepository.com/artifact/com.baidu.hugegraph [...]
-[0.9.2](https://github.com/hugegraph/hugegraph/releases/download/v0.9.2/hugegraph-0.9.2.tar.gz)  | [1.7.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.7.0)  | [0.9.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.9.0/hugegraph-loader-0.9.0.tar.gz)   | [0.9.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.9.0/hugegraph-studio-0.9.0.tar.gz)               | [1.6.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/ [...]
-[0.8.0](https://github.com/hugegraph/hugegraph/releases/download/v0.8.0/hugegraph-0.8.0.tar.gz)  | [1.6.4](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.6.4)  | [0.8.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.8.0/hugegraph-loader-0.8.0.tar.gz)   | [0.8.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.8.0/hugegraph-studio-0.8.0.tar.gz)               | [1.5.3](https://mvnrepository.com/artifact/com.baidu.hugegraph/ [...]
-[0.7.4](https://github.com/hugegraph/hugegraph/releases/download/v0.7.4/hugegraph-0.7.4.tar.gz)  | [1.5.8](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.8)  | [0.7.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.7.0/hugegraph-loader-0.7.0.tar.gz)   | [0.7.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.7.0/hugegraph-studio-0.7.0.tar.gz)               | [1.4.9](https://mvnrepository.com/artifact/com.baidu.hugegraph/ [...]
-[0.6.1](https://github.com/hugegraph/hugegraph/releases/download/v0.6.1/hugegraph-0.6.1.tar.gz)  | [1.5.6](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.6)  | [0.6.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.6.1/hugegraph-loader-0.6.1.tar.gz)   |  [0.6.1](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.6.1/hugegraph-studio-0.6.1.tar.gz)               | [1.4.3](https://mvnrepository.com/artifact/com.baidu.hugegraph [...]
-[0.5.6](https://hugegraph.github.io/hugegraph-downloads/hugegraph-release-0.5.6-SNAPSHOT.tar.gz) | [1.5.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.0)  | [0.5.6](https://hugegraph.github.io/hugegraph-downloads/hugegraph-loader/hugegraph-loader-0.5.6-bin.tar.gz)     |  [0.5.0](https://hugegraph.github.io/hugegraph-downloads/hugegraph-studio/hugestudio-release-0.5.0-SNAPSHOT.tar.gz) | 1.4.0  |
-[0.4.5](https://hugegraph.github.io/hugegraph-downloads/hugegraph-release-0.4.5-SNAPSHOT.tar.gz) | [1.4.7](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.4.7)  | [0.2.2](https://hugegraph.github.io/hugegraph-downloads/hugegraph-loader/hugegraph-loader-0.2.2-bin.tar.gz)     | [0.4.1](https://hugegraph.github.io/hugegraph-downloads/hugegraph-studio/hugestudio-release-0.4.1-SNAPSHOT.tar.gz) | 1.3.12 |
+| server                                                                                             | client                                                                                 | loader                                                                                                           | hubble                                                                                                             | common                                                               [...]
+|----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------- [...]
+| [0.12.0](https://github.com/hugegraph/hugegraph/releases/download/v0.12.0/hugegraph-0.12.0.tar.gz) | [2.0.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/2.0.1) | [0.12.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.12.0/hugegraph-loader-0.12.0.tar.gz) | [1.6.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.6.0/hugegraph-hubble-1.6.0.tar.gz)      | [2.0.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugeg [...]
+| [0.11.2](https://github.com/hugegraph/hugegraph/releases/download/v0.11.2/hugegraph-0.11.2.tar.gz) | [1.9.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.9.1) | [0.11.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.11.1/hugegraph-loader-0.11.1.tar.gz) | [1.5.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.5.0/hugegraph-hubble-1.5.0.tar.gz)      | [1.8.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugeg [...]
+| [0.10.4](https://github.com/hugegraph/hugegraph/releases/download/v0.10.4/hugegraph-0.10.4.tar.gz) | [1.8.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.8.0) | [0.10.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.10.1/hugegraph-loader-0.10.1.tar.gz) | [0.10.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.10.0/hugegraph-studio-0.10.0.tar.gz)   | [1.6.16](https://mvnrepository.com/artifact/com.baidu.hugegraph/huge [...]
+| [0.9.2](https://github.com/hugegraph/hugegraph/releases/download/v0.9.2/hugegraph-0.9.2.tar.gz)    | [1.7.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.7.0) | [0.9.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.9.0/hugegraph-loader-0.9.0.tar.gz)    | [0.9.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.9.0/hugegraph-studio-0.9.0.tar.gz)      | [1.6.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugeg [...]
+| [0.8.0](https://github.com/hugegraph/hugegraph/releases/download/v0.8.0/hugegraph-0.8.0.tar.gz)    | [1.6.4](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.6.4) | [0.8.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.8.0/hugegraph-loader-0.8.0.tar.gz)    | [0.8.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.8.0/hugegraph-studio-0.8.0.tar.gz)      | [1.5.3](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugeg [...]
+| [0.7.4](https://github.com/hugegraph/hugegraph/releases/download/v0.7.4/hugegraph-0.7.4.tar.gz)    | [1.5.8](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.8) | [0.7.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.7.0/hugegraph-loader-0.7.0.tar.gz)    | [0.7.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.7.0/hugegraph-studio-0.7.0.tar.gz)      | [1.4.9](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugeg [...]
+| [0.6.1](https://github.com/hugegraph/hugegraph/releases/download/v0.6.1/hugegraph-0.6.1.tar.gz)    | [1.5.6](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.6) | [0.6.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.6.1/hugegraph-loader-0.6.1.tar.gz)    | [0.6.1](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.6.1/hugegraph-studio-0.6.1.tar.gz)      | [1.4.3](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugeg [...]
+| [0.5.6](https://hugegraph.github.io/hugegraph-downloads/hugegraph-release-0.5.6-SNAPSHOT.tar.gz)   | [1.5.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.0) | [0.5.6](https://hugegraph.github.io/hugegraph-downloads/hugegraph-loader/hugegraph-loader-0.5.6-bin.tar.gz)      | [0.5.0](https://hugegraph.github.io/hugegraph-downloads/hugegraph-studio/hugestudio-release-0.5.0-SNAPSHOT.tar.gz) | 1.4.0                                                                [...]
+| [0.4.5](https://hugegraph.github.io/hugegraph-downloads/hugegraph-release-0.4.5-SNAPSHOT.tar.gz)   | [1.4.7](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.4.7) | [0.2.2](https://hugegraph.github.io/hugegraph-downloads/hugegraph-loader/hugegraph-loader-0.2.2-bin.tar.gz)      | [0.4.1](https://hugegraph.github.io/hugegraph-downloads/hugegraph-studio/hugestudio-release-0.4.1-SNAPSHOT.tar.gz) | 1.3.12                                                               [...]
 
 > Note: The latest graph analysis and display platform is Hubble, which supports server v0.10 +.
 
diff --git a/content/en/docs/guides/custom-plugin.md b/content/en/docs/guides/custom-plugin.md
index 59a750a1..7a74ae71 100644
--- a/content/en/docs/guides/custom-plugin.md
+++ b/content/en/docs/guides/custom-plugin.md
@@ -259,7 +259,7 @@ public class SpaceAnalyzer implements Analyzer {
 }
 ```
  
-#### 3 Implement the plugin interface and register it
+#### 3. Implement the plugin interface and register it
 
 The plugin registration entry point is `HugeGraphPlugin.register()`; a custom plugin must implement this interface method and register the extensions defined above inside it.
 The interface `com.baidu.hugegraph.plugin.HugeGraphPlugin` is defined as follows:
@@ -304,13 +304,13 @@ public class DemoPlugin implements HugeGraphPlugin {
 }
 ```
 
-#### 4 Configure the SPI entry
+#### 4. Configure the SPI entry
 
 1. Make sure the services directory exists: hugegraph-plugin-demo/resources/META-INF/services
 2. Create a text file in the services directory: com.baidu.hugegraph.plugin.HugeGraphPlugin
 3. The file content is as follows: com.baidu.hugegraph.plugin.DemoPlugin
 
-#### 5 Build the Jar package
+#### 5. Build the Jar package
 
 Package with Maven by running `mvn package` in the project directory; the Jar file will be generated under the target directory.
 To use it, copy the Jar to the `plugins` directory and restart the service for it to take effect.
\ No newline at end of file
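After step 4, the SPI wiring can be sanity-checked with a plain `java.util.ServiceLoader` lookup. This is a minimal sketch, assuming the plugin Jar is on the classpath and that `HugeGraphPlugin` exposes a `name()` method as in its definition above; the `PluginCheck` wrapper class is illustrative only:

```java
import java.util.ServiceLoader;

import com.baidu.hugegraph.plugin.HugeGraphPlugin;

public class PluginCheck {

    public static void main(String[] args) {
        // Discovers every implementation listed in
        // META-INF/services/com.baidu.hugegraph.plugin.HugeGraphPlugin
        ServiceLoader<HugeGraphPlugin> plugins =
                ServiceLoader.load(HugeGraphPlugin.class);
        for (HugeGraphPlugin plugin : plugins) {
            System.out.println("Discovered plugin: " + plugin.name());
        }
    }
}
```

If the plugin prints here but is not picked up by the server, check that the Jar was copied to the `plugins` directory and the service restarted.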
diff --git a/content/en/docs/guides/faq.md b/content/en/docs/guides/faq.md
index 750493fd..b0d4c8bf 100644
--- a/content/en/docs/guides/faq.md
+++ b/content/en/docs/guides/faq.md
@@ -38,7 +38,7 @@ weight: 5
 
 - After the service starts successfully, querying all vertices with `curl` returns garbled output
 
-  The batch vertices/edges returned by the server are gzip-compressed. You can pipe the output to gunzip to decompress it (`curl http://example | gunzip`), or send the request with `Firefox`'s `postman` or the `Chrome` browser's `restlet` plugin, which will decompress the response data automatically.
+  The batch vertices/edges returned by the server are gzip-compressed. You can pipe the output to gunzip to decompress it (`curl http://example | gunzip`), or send the request with `Firefox`'s `postman` or the `Chrome` browser's `restlet` plugin, which will decompress the response data automatically.
 
 - Querying a vertex by Id via the `RESTful API` returns empty, but the vertex does exist
 
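For the gzip question above, client code can also decompress the batch response in-process instead of piping through gunzip. A minimal Java sketch, assuming Java 11+ and a locally running server; the endpoint URL is a placeholder for your own graph:

```java
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.zip.GZIPInputStream;

public class GunzipVertices {

    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; batch vertex/edge responses come back gzip-compressed
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://127.0.0.1:8080/apis/graphs/hugegraph/graph/vertices"))
                .header("Accept-Encoding", "gzip")
                .build();
        HttpResponse<InputStream> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofInputStream());
        // Wrap the raw body in a GZIPInputStream to decompress it
        try (InputStream in = new GZIPInputStream(response.body())) {
            System.out.println(new String(in.readAllBytes()));
        }
    }
}
```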
diff --git a/content/en/docs/introduction/README.md b/content/en/docs/introduction/README.md
index 558df1c2..a6e134e2 100644
--- a/content/en/docs/introduction/README.md
+++ b/content/en/docs/introduction/README.md
@@ -44,7 +44,7 @@ The functions of this system include but are not limited to:
 
 ### Modules
 
-- [HugeGraph-Server](/docs/quickstart/hugegraph-server): HugeGraph-Server is the core part of the HugeGraph project, including sub-modules such as Core, Backend, and API;
+- [HugeGraph-Server](/docs/quickstart/hugegraph-server): HugeGraph-Server is the core part of the HugeGraph project, including submodules such as Core, Backend, and API;
   - Core: Graph engine implementation, connecting the Backend module downward and supporting the API module upward;
   - Backend: Realize the storage of graph data to the backend. The supported backends include: Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL and PostgreSQL. Users can choose one according to the actual situation;
   - API: Built-in REST Server, provides RESTful API to users, and is fully compatible with Gremlin query.
@@ -55,6 +55,6 @@ The functions of this system include but are not limited to:
 - [HugeGraph-Tools](/docs/quickstart/hugegraph-tools): HugeGraph-Tools is HugeGraph's deployment and management tools, including functions such as managing graphs, backup/restore, Gremlin execution, etc.
 
 ### Contact Us
-- [Github Issues](https://github.com/apache/incubator-hugegraph/issues): Feedback on usage issues and functional requirements (priority)
+- [GitHub Issues](https://github.com/apache/incubator-hugegraph/issues): Feedback on usage issues and functional requirements (priority)
 - Feedback Email: [hugegraph@googlegroups.com](mailto:hugegraph@googlegroups.com)
 - WeChat public account: HugeGraph
\ No newline at end of file
diff --git a/content/en/docs/language/hugegraph-example.md b/content/en/docs/language/hugegraph-example.md
index a103665e..050b8cad 100644
--- a/content/en/docs/language/hugegraph-example.md
+++ b/content/en/docs/language/hugegraph-example.md
@@ -35,20 +35,20 @@ Compared with TitanDB, the main features of HugeGraph are as follows:
 
 There are two kinds of vertices in this relationship graph, character and location, as shown in the table below:
 
-Name      | Type   | Properties
---------- | ------ | -------------
-character | vertex | name,age,type
-location  | vertex | name
+| Name      | Type   | Properties    |
+|-----------|--------|---------------|
+| character | vertex | name,age,type |
+| location  | vertex | name          |
 
 There are six kinds of relationships: father, mother, brother, battled, lives, and pet. The details of the relationship graph are as follows:
 
-Name    | Type | source vertex label | target vertex label | Properties
-------- | ---- | ------------------- | ------------------- | ------
-father  | edge | character           | character           | -
-mother  | edge | character           | character           | -
-brother | edge | character           | character           | -
-pet     | edge | character           | character           | -
-lives   | edge | character           | location            | reason
+| Name    | Type | source vertex label | target vertex label | Properties |
+|---------|------|---------------------|---------------------|------------|
+| father  | edge | character           | character           | -          |
+| mother  | edge | character           | character           | -          |
+| brother | edge | character           | character           | -          |
+| pet     | edge | character           | character           | -          |
+| lives   | edge | character           | location            | reason     |
 
 In HugeGraph, each edge label can bind to only one pair of source vertex label and target vertex label. That is, if a graph defines a relationship father connecting character to character, then father cannot connect any other vertex labels.
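As a concrete illustration of this constraint, here is a minimal sketch of declaring the schema from the two tables above through the Java client's schema API (assuming `schema` is the SchemaManager of an open HugeClient; only `father` and `lives` are shown, the remaining edge labels follow the same pattern):

```java
// Property keys used by the two vertex labels and the lives edge
schema.propertyKey("name").asText().ifNotExist().create();
schema.propertyKey("age").asInt().ifNotExist().create();
schema.propertyKey("type").asText().ifNotExist().create();
schema.propertyKey("reason").asText().ifNotExist().create();

schema.vertexLabel("character").properties("name", "age", "type")
      .primaryKeys("name").ifNotExist().create();
schema.vertexLabel("location").properties("name")
      .primaryKeys("name").ifNotExist().create();

// Each edge label is bound to exactly one source/target pair:
// father links character -> character and nothing else
schema.edgeLabel("father").sourceLabel("character").targetLabel("character")
      .ifNotExist().create();
schema.edgeLabel("lives").sourceLabel("character").targetLabel("location")
      .properties("reason").ifNotExist().create();
```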
 
@@ -125,7 +125,7 @@ HugeGraph generates Ids automatically by default; if the user specifies `VertexL
 
 #### 3.1 Traversal Query
 
-**1\. Find the grand father of hercules**
+**1\. Find the grandfather of hercules**
 
 ```groovy
 g.V().hasLabel('character').has('name','hercules').out('father').out('father')
diff --git a/content/en/docs/language/hugegraph-gremlin.md b/content/en/docs/language/hugegraph-gremlin.md
index d231ba33..47b9ac68 100644
--- a/content/en/docs/language/hugegraph-gremlin.md
+++ b/content/en/docs/language/hugegraph-gremlin.md
@@ -18,107 +18,107 @@ HugeGraph implements the TinkerPop framework, but does not implement all TinkerP
 
 ### Graph Features
 
-Name                 | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                   | Support
--------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------
-Computer             | Determines if the {@code Graph} implementation supports {@link GraphComputer} based processing                                                                                                                                                                                                                                                                                                                                                                | false
-Transactions         | Determines if the {@code Graph} implementations supports transactions.                                                                                                                                                                                                                                                                                                                                                                                        | true
-Persistence          | Determines if the {@code Graph} implementation supports persisting it's contents natively to disk.This feature does not refer to every graph's ability to write to disk via the Gremlin IO packages(.e.g. GraphML), unless the graph natively persists to disk via those options somehow. For example,TinkerGraph does not support this feature as it is a pure in-sideEffects graph.                                                                         | true
-ThreadedTransactions | Determines if the {@code Graph} implementation supports threaded transactions which allow a transactionto be executed across multiple threads via {@link Transaction#createThreadedTx()}.                                                                                                                                                                                                                                                                     | false
-ConcurrentAccess     | Determines if the {@code Graph} implementation supports more than one connection to the same instance at the same time. For example, Neo4j embedded does not support this feature because concurrent access to the same database files by multiple instances is not possible. However, Neo4j HA could support this feature as each new {@code Graph} instance coordinates with the Neo4j cluster allowing multiple instances to operate on the same database. | false
+| Name                 | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                   | Support |
+|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| Computer             | Determines if the {@code Graph} implementation supports {@link GraphComputer} based processing                                                                                                                                                                                                                                                                                                                                                                | false   |
+| Transactions         | Determines if the {@code Graph} implementation supports transactions.                                                                                                                                                                                                                                                                                                                                                                                         | true    |
+| Persistence          | Determines if the {@code Graph} implementation supports persisting its contents natively to disk. This feature does not refer to every graph's ability to write to disk via the Gremlin IO packages (e.g. GraphML), unless the graph natively persists to disk via those options somehow. For example, TinkerGraph does not support this feature as it is a pure in-memory graph.                                                                              | true    |
+| ThreadedTransactions | Determines if the {@code Graph} implementation supports threaded transactions which allow a transaction to be executed across multiple threads via {@link Transaction#createThreadedTx()}.                                                                                                                                                                                                                                                                    | false   |
+| ConcurrentAccess     | Determines if the {@code Graph} implementation supports more than one connection to the same instance at the same time. For example, Neo4j embedded does not support this feature because concurrent access to the same database files by multiple instances is not possible. However, Neo4j HA could support this feature as each new {@code Graph} instance coordinates with the Neo4j cluster allowing multiple instances to operate on the same database. | false   |
 
 ### Vertex Features
 
-Name                     | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                       [...]
------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
-UserSuppliedIds          | Determines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier datat ype that the {@link Graph [...]
-NumericIds               | Determines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                            [...]
-StringIds                | Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                             [...]
-UuidIds                  | Determines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                           [...]
-CustomIds                | Determines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB's {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                           [...]
-AnyIds                   | Determines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.                          [...]
-AddProperty              | Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting "data types" and refers to support of calls to {@link Element#property(String, Object)}.                                                                                                                                                                                                                                                                        [...]
-RemoveProperty           | Determines if an {@link Element} allows properties to be removed.                                                                                                                                                                                                                                                                                                                                                                                                                 [...]
-AddVertices              | Determines if a {@link Vertex} can be added to the {@code Graph}.                                                                                                                                                                                                                                                                                                                                                                                                                 [...]
-MultiProperties          | Determines if a {@link Vertex} can support multiple properties with the same key.                                                                                                                                                                                                                                                                                                                                                                                                 [...]
-DuplicateMultiProperties | Determines if a {@link Vertex} can support non-unique values on the same key. For this valueto be {@code true}, then {@link #supportsMetaProperties()} must also return true. By default this method, just returns what {@link #supportsMultiProperties()} returns.                                                                                                                                                                                                               [...]
-MetaProperties           | Determines if a {@link Vertex} can support properties on vertex properties. It is assumed that a graph will support all the same data types for meta-properties that are supported for regular properties.                                                                                                                                                                                                                                                                        [...]
-RemoveVertices           | Determines if a {@link Vertex} can be removed from the {@code Graph}.                                                                                                                                                                                                                                                                                                                                                                                                             [...]
+| Name                     | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                     [...]
+|--------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| UserSuppliedIds          | Determines if an {@link Element} can have a user defined identifier. Implementations that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x}, then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Gra [...]
+| NumericIds               | Determines if an {@link Element} has numeric identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a numeric value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                            [...]
+| StringIds                | Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                              [...]
+| UuidIds                  | Determines if an {@link Element} has UUID identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a {@link UUID} value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                          [...]
+| CustomIds                | Determines if an {@link Element} has a specific custom object as their internal representation. In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB's {@code Rid}, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                          [...]
+| AnyIds                   | Determines if, for an {@link Element}, any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.                   [...]
+| AddProperty              | Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting "data types" and refers to support of calls to {@link Element#property(String, Object)}.                                                                                                                                                                                                                                                                      [...]
+| RemoveProperty           | Determines if an {@link Element} allows properties to be removed.                                                                                                                                                                                                                                                                                                                                                                                                               [...]
+| AddVertices              | Determines if a {@link Vertex} can be added to the {@code Graph}.                                                                                                                                                                                                                                                                                                                                                                                                               [...]
+| MultiProperties          | Determines if a {@link Vertex} can support multiple properties with the same key.                                                                                                                                                                                                                                                                                                                                                                                               [...]
+| DuplicateMultiProperties | Determines if a {@link Vertex} can support non-unique values on the same key. For this value to be {@code true}, {@link #supportsMetaProperties()} must also return true. By default, this method just returns what {@link #supportsMultiProperties()} returns.                                                                                                                                                                                                                  [...]
+| MetaProperties           | Determines if a {@link Vertex} can support properties on vertex properties. It is assumed that a graph will support all the same data types for meta-properties that are supported for regular properties.                                                                                                                                                                                                                                                                      [...]
+| RemoveVertices           | Determines if a {@link Vertex} can be removed from the {@code Graph}.                                                                                                                                                                                                                                                                                                                                                                                                           [...]
 
 ### Edge Features
 
-Name            | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                [...]
---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ [...]
-UserSuppliedIds | Determines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier datat ype that the {@link Graph} will ac [...]
-NumericIds      | Determines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                     [...]
-StringIds       | Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                      [...]
-UuidIds         | Determines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                    [...]
-CustomIds       | Determines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB's {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                    [...]
-AnyIds          | Determines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.                                   [...]
-AddProperty     | Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting "data types" and refers to support of calls to {@link Element#property(String, Object)}.                                                                                                                                                                                                                                                                                 [...]
-RemoveProperty  | Determines if an {@link Element} allows properties to be removed.                                                                                                                                                                                                                                                                                                                                                                                                                          [...]
-AddEdges        | Determines if an {@link Edge} can be added to a {@code Vertex}.                                                                                                                                                                                                                                                                                                                                                                                                                            [...]
-RemoveEdges     | Determines if an {@link Edge} can be removed from a {@code Vertex}.                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
+| Name            | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                              [...]
+|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| UserSuppliedIds | Determines if an {@link Element} can have a user defined identifier. Implementations that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x}, then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will  [...]
+| NumericIds      | Determines if an {@link Element} has numeric identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a numeric value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                     [...]
+| StringIds       | Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                       [...]
+| UuidIds         | Determines if an {@link Element} has UUID identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a {@link UUID} value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                   [...]
+| CustomIds       | Determines if an {@link Element} has a specific custom object as their internal representation. In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB's {@code Rid}, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                   [...]
+| AnyIds          | Determines if, for an {@link Element}, any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.                            [...]
+| AddProperty     | Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting "data types" and refers to support of calls to {@link Element#property(String, Object)}.                                                                                                                                                                                                                                                                               [...]
+| RemoveProperty  | Determines if an {@link Element} allows properties to be removed.                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
+| AddEdges        | Determines if an {@link Edge} can be added to a {@code Vertex}.                                                                                                                                                                                                                                                                                                                                                                                                                          [...]
+| RemoveEdges     | Determines if an {@link Edge} can be removed from a {@code Vertex}.                                                                                                                                                                                                                                                                                                                                                                                                                      [...]
 
 ### Data Type Features
 
-Name               | Description                                                                                                                                                                                                                                                         | Support
------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------
-BooleanValues      |                                                                                                                                                                                                                                                                     | true
-ByteValues         |                                                                                                                                                                                                                                                                     | true
-DoubleValues       |                                                                                                                                                                                                                                                                     | true
-FloatValues        |                                                                                                                                                                                                                                                                     | true
-IntegerValues      |                                                                                                                                                                                                                                                                     | true
-LongValues         |                                                                                                                                                                                                                                                                     | true
-MapValues          | Supports setting of a {@code Map} value. The assumption is that the {@code Map} can containarbitrary serializable values that may or may not be defined as a feature itself                                                                                         | false
-MixedListValues    | Supports setting of a {@code List} value. The assumption is that the {@code List} can containarbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is "mixed" it does not need to contain objects of the same type. | false
-BooleanArrayValues |                                                                                                                                                                                                                                                                     | false
-ByteArrayValues    |                                                                                                                                                                                                                                                                     | true
-DoubleArrayValues  |                                                                                                                                                                                                                                                                     | false
-FloatArrayValues   |                                                                                                                                                                                                                                                                     | false
-IntegerArrayValues |                                                                                                                                                                                                                                                                     | false
-LongArrayValues    |                                                                                                                                                                                                                                                                     | false
-SerializableValues |                                                                                                                                                                                                                                                                     | false
-StringArrayValues  |                                                                                                                                                                                                                                                                     | false
-StringValues       |                                                                                                                                                                                                                                                                     | true
-UniformListValues  | Supports setting of a {@code List} value. The assumption is that the {@code List} can containarbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is "uniform" it must contain objects of the same type.           | false
+| Name               | Description                                                                                                                                                                                                                                                         | Support |
+|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| BooleanValues      |                                                                                                                                                                                                                                                                     | true    |
+| ByteValues         |                                                                                                                                                                                                                                                                     | true    |
+| DoubleValues       |                                                                                                                                                                                                                                                                     | true    |
+| FloatValues        |                                                                                                                                                                                                                                                                     | true    |
+| IntegerValues      |                                                                                                                                                                                                                                                                     | true    |
+| LongValues         |                                                                                                                                                                                                                                                                     | true    |
+| MapValues          | Supports setting of a {@code Map} value. The assumption is that the {@code Map} can contain arbitrary serializable values that may or may not be defined as a feature itself.                                                                                       | false   |
+| MixedListValues    | Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this {@code List} is "mixed" it does not need to contain objects of the same type. | false   |
+| BooleanArrayValues |                                                                                                                                                                                                                                                                     | false   |
+| ByteArrayValues    |                                                                                                                                                                                                                                                                     | true    |
+| DoubleArrayValues  |                                                                                                                                                                                                                                                                     | false   |
+| FloatArrayValues   |                                                                                                                                                                                                                                                                     | false   |
+| IntegerArrayValues |                                                                                                                                                                                                                                                                     | false   |
+| LongArrayValues    |                                                                                                                                                                                                                                                                     | false   |
+| SerializableValues |                                                                                                                                                                                                                                                                     | false   |
+| StringArrayValues  |                                                                                                                                                                                                                                                                     | false   |
+| StringValues       |                                                                                                                                                                                                                                                                     | true    |
+| UniformListValues  | Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this {@code List} is "uniform" it must contain objects of the same type.           | false   |
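+
+These flags can also be queried at runtime through TinkerPop's feature API. Below is a minimal Java sketch, assuming an already-opened TinkerPop `Graph` instance (for example server side or in the Gremlin Console; HugeGraph-Client itself talks to the server over HTTP and does not expose this interface):
+
+```java
+import org.apache.tinkerpop.gremlin.structure.Graph;
+
+public class FeatureCheck {
+
+    // Prints a few of the vertex property value features listed in the table above
+    public static void printValueFeatures(Graph graph) {
+        Graph.Features.VertexPropertyFeatures features =
+                graph.features().vertex().properties();
+        System.out.println("StringValues:      " + features.supportsStringValues());
+        System.out.println("MapValues:         " + features.supportsMapValues());
+        System.out.println("UniformListValues: " + features.supportsUniformListValues());
+    }
+}
+```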
 
 ### Gremlin Steps
 
 HugeGraph supports all Gremlin steps. For complete reference information on Gremlin, please refer to the [official Gremlin documentation](http://tinkerpop.apache.org/docs/current/reference/).
 
-步骤         | 说明                                                                                              | 文档
----------- | ----------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------
-addE       | 在两个顶点之间添加边                                                                                      | [addE step](http://tinkerpop.apache.org/docs/current/reference/#addedge-step)
-addV       | 将顶点添加到图形                                                                                        | [addV step](http://tinkerpop.apache.org/docs/current/reference/#addvertex-step)
-and        | 确保所有遍历都返回值                                                                                      | [and step](http://tinkerpop.apache.org/docs/current/reference/#add-step)
-as         | 用于向步骤的输出分配变量的步骤调制器                                                                              | [as step](http://tinkerpop.apache.org/docs/current/reference/#as-step)
-by         | 与`group`和`order`配合使用的步骤调制器                                                                      | [by step](http://tinkerpop.apache.org/docs/current/reference/#by-step)
-coalesce   | 返回第一个返回结果的遍历                                                                                    | [coalesce step](http://tinkerpop.apache.org/docs/current/reference/#coalesce-step)
-constant   | 返回常量值。 与`coalesce`配合使用                                                                          | [constant step](http://tinkerpop.apache.org/docs/current/reference/#constant-step)
-count      | 从遍历返回计数                                                                                         | [count step](http://tinkerpop.apache.org/docs/current/reference/#addedge-step)
-dedup      | 返回已删除重复内容的值                                                                                     | [dedup step](http://tinkerpop.apache.org/docs/current/reference/#dedup-step)
-drop       | 丢弃值(顶点/边缘)                                                                                      | [drop step](http://tinkerpop.apache.org/docs/current/reference/#drop-step)
-fold       | 充当用于计算结果聚合值的屏障                                                                                  | [fold step](http://tinkerpop.apache.org/docs/current/reference/#fold-step)
-group      | 根据指定的标签将值分组                                                                                     | [group step](http://tinkerpop.apache.org/docs/current/reference/#group-step)
-has        | 用于筛选属性、顶点和边缘。 支持`hasLabel`、`hasId`、`hasNot` 和 `has` 变体                                          | [has step](http://tinkerpop.apache.org/docs/current/reference/#has-step)
-inject     | 将值注入流中                                                                                          | [inject step](http://tinkerpop.apache.org/docs/current/reference/#inject-step)
-is         | 用于通过布尔表达式执行筛选器                                                                                  | [is step](http://tinkerpop.apache.org/docs/current/reference/#is-step)
-limit      | 用于限制遍历中的项数                                                                                      | [limit step](http://tinkerpop.apache.org/docs/current/reference/#limit-step)
-local      | 本地包装遍历的某个部分,类似于子查询                                                                              | [local step](http://tinkerpop.apache.org/docs/current/reference/#local-step)
-not        | 用于生成筛选器的求反结果                                                                                    | [not step](http://tinkerpop.apache.org/docs/current/reference/#not-step)
-optional   | 如果生成了某个结果,则返回指定遍历的结果,否则返回调用元素                                                                   | [optional step](http://tinkerpop.apache.org/docs/current/reference/#optional-step)
-or         | 确保至少有一个遍历会返回值                                                                                   | [or step](http://tinkerpop.apache.org/docs/current/reference/#or-step)
-order      | 按指定的排序顺序返回结果                                                                                    | [order step](http://tinkerpop.apache.org/docs/current/reference/#order-step)
-path       | 返回遍历的完整路径                                                                                       | [path step](http://tinkerpop.apache.org/docs/current/reference/#addedge-step)
-project    | 将属性投影为映射                                                                                        | [project step](http://tinkerpop.apache.org/docs/current/reference/#project-step)
-properties | 返回指定标签的属性                                                                                       | [properties step](http://tinkerpop.apache.org/docs/current/reference/#properties-step)
-range      | 根据指定的值范围进行筛选                                                                                    | [range step](http://tinkerpop.apache.org/docs/current/reference/#range-step)
-repeat     | 将步骤重复指定的次数。 用于循环                                                                                | [repeat step](http://tinkerpop.apache.org/docs/current/reference/#repeat-step)
-sample     | 用于对遍历返回的结果采样                                                                                    | [sample step](http://tinkerpop.apache.org/docs/current/reference/#sample-step)
-select     | 用于投影遍历返回的结果                                                                                     | [select step](http://tinkerpop.apache.org/docs/current/reference/#select-step)
-store      | 用于遍历返回的非阻塞聚合                                                                                    | [store step](http://tinkerpop.apache.org/docs/current/reference/#store-step)
-tree       | 将顶点中的路径聚合到树中                                                                                    | [tree step](http://tinkerpop.apache.org/docs/current/reference/#tree-step)
-unfold     | 将迭代器作为步骤展开                                                                                      | [unfold step](http://tinkerpop.apache.org/docs/current/reference/#unfold-step)
-union      | 合并多个遍历返回的结果                                                                                     | [union step](http://tinkerpop.apache.org/docs/current/reference/#union-step)
-V          | 包括顶点与边之间的遍历所需的步骤:`V`、`E`、`out`、`in`、`both`、`outE`、`inE`、`bothE`、`outV`、`inV`、`bothV` 和 `otherV` | [order step](http://tinkerpop.apache.org/docs/current/reference/#vertex-steps)
-where      | 用于筛选遍历返回的结果。 支持 `eq`、`neq`、`lt`、`lte`、`gt`、`gte` 和 `between` 运算符                                | [where step](http://tinkerpop.apache.org/docs/current/reference/#where-step)
+| Step       | Description                                                                                                                                            | Documentation                                                                           |
+|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------|
+| addE       | Adds an edge between two vertices                                                                                                                      | [addE step](http://tinkerpop.apache.org/docs/current/reference/#addedge-step)          |
+| addV       | Adds a vertex to the graph                                                                                                                             | [addV step](http://tinkerpop.apache.org/docs/current/reference/#addvertex-step)        |
+| and        | Ensures that all traversals return a value                                                                                                             | [and step](http://tinkerpop.apache.org/docs/current/reference/#and-step)               |
+| as         | A step modulator to assign a variable to the output of a step                                                                                          | [as step](http://tinkerpop.apache.org/docs/current/reference/#as-step)                 |
+| by         | A step modulator used with `group` and `order`                                                                                                         | [by step](http://tinkerpop.apache.org/docs/current/reference/#by-step)                 |
+| coalesce   | Returns the first traversal that yields a result                                                                                                       | [coalesce step](http://tinkerpop.apache.org/docs/current/reference/#coalesce-step)     |
+| constant   | Returns a constant value; used with `coalesce`                                                                                                         | [constant step](http://tinkerpop.apache.org/docs/current/reference/#constant-step)     |
+| count      | Returns a count from the traversal                                                                                                                     | [count step](http://tinkerpop.apache.org/docs/current/reference/#count-step)           |
+| dedup      | Returns values with duplicates removed                                                                                                                 | [dedup step](http://tinkerpop.apache.org/docs/current/reference/#dedup-step)           |
+| drop       | Drops values (vertices/edges)                                                                                                                          | [drop step](http://tinkerpop.apache.org/docs/current/reference/#drop-step)             |
+| fold       | Acts as a barrier that computes an aggregate of the results                                                                                            | [fold step](http://tinkerpop.apache.org/docs/current/reference/#fold-step)             |
+| group      | Groups values by the specified labels                                                                                                                  | [group step](http://tinkerpop.apache.org/docs/current/reference/#group-step)           |
+| has        | Filters properties, vertices and edges; supports the `hasLabel`, `hasId`, `hasNot` and `has` variants                                                  | [has step](http://tinkerpop.apache.org/docs/current/reference/#has-step)               |
+| inject     | Injects values into the stream                                                                                                                         | [inject step](http://tinkerpop.apache.org/docs/current/reference/#inject-step)         |
+| is         | Filters by means of a boolean expression                                                                                                               | [is step](http://tinkerpop.apache.org/docs/current/reference/#is-step)                 |
+| limit      | Limits the number of items in the traversal                                                                                                            | [limit step](http://tinkerpop.apache.org/docs/current/reference/#limit-step)           |
+| local      | Wraps part of a traversal locally, similar to a subquery                                                                                               | [local step](http://tinkerpop.apache.org/docs/current/reference/#local-step)           |
+| not        | Produces the negation of a filter                                                                                                                      | [not step](http://tinkerpop.apache.org/docs/current/reference/#not-step)               |
+| optional   | Returns the result of the specified traversal if it yields one; otherwise returns the calling element                                                  | [optional step](http://tinkerpop.apache.org/docs/current/reference/#optional-step)     |
+| or         | Ensures that at least one traversal returns a value                                                                                                    | [or step](http://tinkerpop.apache.org/docs/current/reference/#or-step)                 |
+| order      | Returns results in the specified sort order                                                                                                            | [order step](http://tinkerpop.apache.org/docs/current/reference/#order-step)           |
+| path       | Returns the full path of the traversal                                                                                                                 | [path step](http://tinkerpop.apache.org/docs/current/reference/#path-step)             |
+| project    | Projects properties as a map                                                                                                                           | [project step](http://tinkerpop.apache.org/docs/current/reference/#project-step)       |
+| properties | Returns the properties with the specified labels                                                                                                       | [properties step](http://tinkerpop.apache.org/docs/current/reference/#properties-step) |
+| range      | Filters by the specified range of values                                                                                                               | [range step](http://tinkerpop.apache.org/docs/current/reference/#range-step)           |
+| repeat     | Repeats a step a specified number of times; used for looping                                                                                           | [repeat step](http://tinkerpop.apache.org/docs/current/reference/#repeat-step)         |
+| sample     | Samples the results returned by the traversal                                                                                                          | [sample step](http://tinkerpop.apache.org/docs/current/reference/#sample-step)         |
+| select     | Projects the results returned by the traversal                                                                                                         | [select step](http://tinkerpop.apache.org/docs/current/reference/#select-step)         |
+| store      | Non-blocking aggregation of the results returned by the traversal                                                                                      | [store step](http://tinkerpop.apache.org/docs/current/reference/#store-step)           |
+| tree       | Aggregates the paths from the vertices into a tree                                                                                                     | [tree step](http://tinkerpop.apache.org/docs/current/reference/#tree-step)             |
+| unfold     | Unrolls an iterator as a step                                                                                                                          | [unfold step](http://tinkerpop.apache.org/docs/current/reference/#unfold-step)         |
+| union      | Merges the results returned by multiple traversals                                                                                                     | [union step](http://tinkerpop.apache.org/docs/current/reference/#union-step)           |
+| V          | The steps needed for traversing between vertices and edges: `V`, `E`, `out`, `in`, `both`, `outE`, `inE`, `bothE`, `outV`, `inV`, `bothV` and `otherV` | [vertex steps](http://tinkerpop.apache.org/docs/current/reference/#vertex-steps)       |
+| where      | Filters the results returned by the traversal; supports the `eq`, `neq`, `lt`, `lte`, `gt`, `gte` and `between` operators                              | [where step](http://tinkerpop.apache.org/docs/current/reference/#where-step)           |
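+
+As a quick illustration of chaining several of these steps, here is a small sketch against the familiar person/software example graph (assuming the sample schema with `age` and `name` properties used elsewhere in this document):
+
+```gremlin
+// Up to 10 distinct software created by people older than 29, ordered by name
+g.V().hasLabel('person').has('age', gt(29)).
+  out('created').hasLabel('software').
+  dedup().order().by('name').limit(10).
+  values('name')
+```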
diff --git a/content/en/docs/performance/hugegraph-benchmark-0.4.4.md b/content/en/docs/performance/hugegraph-benchmark-0.4.4.md
index 2e4bd584..08e951db 100644
--- a/content/en/docs/performance/hugegraph-benchmark-0.4.4.md
+++ b/content/en/docs/performance/hugegraph-benchmark-0.4.4.md
@@ -2,9 +2,9 @@
 
 #### 1.1 Hardware Information
 
-CPU                                          | Memory | 网卡        | 磁盘
--------------------------------------------- | ------ | --------- | ---------
-48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G   | 10000Mbps | 750GB SSD
+| CPU                                          | Memory | NIC       | Disk      |
+|----------------------------------------------|--------|-----------|-----------|
+| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G   | 10000Mbps | 750GB SSD |
 
 #### 1.2 Software Information
 
@@ -40,16 +40,16 @@ CPU                                          | Memory | 网卡        | 磁盘
 
 ###### Dataset sizes used in this test
 
-名称                      | vertex数目  | edge数目    | 文件大小
------------------------ | --------- | --------- | ------
-email-enron.txt         | 36,691    | 367,661   | 4MB
-com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
-amazon0601.txt          | 403,393   | 3,387,388 | 47.9MB
+| Name                    | Number of vertices | Number of edges | File size |
+|-------------------------|--------------------|-----------------|-----------|
+| email-enron.txt         | 36,691             | 367,661         | 4MB       |
+| com-youtube.ungraph.txt | 1,157,806          | 2,987,624       | 38.7MB    |
+| amazon0601.txt          | 403,393            | 3,387,388       | 47.9MB    |
 
 #### 1.3 Service Configuration
 
 - HugeGraph version: 0.4.4; RestServer, Gremlin Server and the backends are all on the same server
-- Cassandra版本:cassandra-3.10,commitlog和data共用SSD
+- Cassandra version: cassandra-3.10; the commit log and data share the same SSD
 - RocksDB version: rocksdbjni-5.8.6
 - Titan version: 0.5.4, using thrift+Cassandra mode
 
@@ -59,12 +59,12 @@ amazon0601.txt          | 403,393   | 3,387,388 | 47.9MB
 
 #### 2.1 Batch Insert Performance
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
---------- | ---------------- | ---------------- | -------------------------
-Titan     | 9.516            | 88.123           | 111.586
-RocksDB   | 2.345            | 14.076           | 16.636
-Cassandra | 11.930           | 108.709          | 101.959
-Memory    | 3.077            | 15.204           | 13.841
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
+|-----------|------------------|------------------|---------------------------|
+| Titan     | 9.516            | 88.123           | 111.586                   |
+| RocksDB   | 2.345            | 14.076           | 16.636                    |
+| Cassandra | 11.930           | 108.709          | 101.959                   |
+| Memory    | 3.077            | 15.204           | 13.841                    |
 
 _Notes_
 
@@ -86,12 +86,12 @@ _说明_
 
 ##### 2.2.2 FN Performance
 
-Backend   | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w)
---------- | ----------------- | --------------- | -------------------------
-Titan     | 7.724             | 70.935          | 128.884
-RocksDB   | 8.876             | 65.852          | 63.388
-Cassandra | 13.125            | 126.959         | 102.580
-Memory    | 22.309            | 207.411         | 165.609
+| Backend   | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) |
+|-----------|-------------------|-----------------|---------------------------|
+| Titan     | 7.724             | 70.935          | 128.884                   |
+| RocksDB   | 8.876             | 65.852          | 63.388                    |
+| Cassandra | 13.125            | 126.959         | 102.580                   |
+| Memory    | 22.309            | 207.411         | 165.609                   |
 
 _Notes_
 
@@ -101,12 +101,12 @@ _说明_
 
 ##### 2.2.3 FA Performance
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
---------- | ---------------- | ---------------- | -------------------------
-Titan     | 7.119            | 63.353           | 115.633
-RocksDB   | 6.032            | 64.526           | 52.721
-Cassandra | 9.410            | 102.766          | 94.197
-Memory    | 12.340           | 195.444          | 140.89
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
+|-----------|------------------|------------------|---------------------------|
+| Titan     | 7.119            | 63.353           | 115.633                   |
+| RocksDB   | 6.032            | 64.526           | 52.721                    |
+| Cassandra | 9.410            | 102.766          | 94.197                    |
+| Memory    | 12.340           | 195.444          | 140.89                    |
 
 _Notes_
 
@@ -128,12 +128,12 @@ _说明_
 
 ##### FS Performance
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
---------- | ---------------- | ---------------- | -------------------------
-Titan     | 11.333           | 0.313            | 376.06
-RocksDB   | 44.391           | 2.221            | 268.792
-Cassandra | 39.845           | 3.337            | 331.113
-Memory    | 35.638           | 2.059            | 388.987
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
+|-----------|------------------|------------------|---------------------------|
+| Titan     | 11.333           | 0.313            | 376.06                    |
+| RocksDB   | 44.391           | 2.221            | 268.792                   |
+| Cassandra | 39.845           | 3.337            | 331.113                   |
+| Memory    | 35.638           | 2.059            | 388.987                   |
 
 _Notes_
 
@@ -180,12 +180,12 @@ _说明_
 
 #### 2.4 Comprehensive Graph Performance Test (CW)
 
-数据库             | 规模1000 | 规模5000   | 规模10000  | 规模20000
---------------- | ------ | -------- | -------- | --------
-Titan           | 45.943 | 849.168  | 2737.117 | 9791.46
-Memory(core)    | 41.077 | 1825.905 | *        | *
-Cassandra(core) | 39.783 | 862.744  | 2423.136 | 6564.191
-RcoksDB(core)   | 33.383 | 199.894  | 763.869  | 1677.813
+| Database        | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000 |
+|-----------------|------------|------------|-------------|-------------|
+| Titan           | 45.943     | 849.168    | 2737.117    | 9791.46     |
+| Memory(core)    | 41.077     | 1825.905   | *           | *           |
+| Cassandra(core) | 39.783     | 862.744    | 2423.136    | 6564.191    |
+| RocksDB(core)   | 33.383     | 199.894    | 763.869     | 1677.813    |
 
 _Notes_
 
diff --git a/content/en/docs/performance/hugegraph-benchmark-0.5.6.md b/content/en/docs/performance/hugegraph-benchmark-0.5.6.md
index d8a0ed6b..bb3db47c 100644
--- a/content/en/docs/performance/hugegraph-benchmark-0.5.6.md
+++ b/content/en/docs/performance/hugegraph-benchmark-0.5.6.md
@@ -8,9 +8,9 @@ weight: 1
 
 #### 1.1 Hardware Information
 
-CPU                                          | Memory | 网卡        | 磁盘
--------------------------------------------- | ------ | --------- | ---------
-48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G   | 10000Mbps | 750GB SSD
+| CPU                                          | Memory | NIC       | Disk      |
+|----------------------------------------------|--------|-----------|-----------|
+| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G   | 10000Mbps | 750GB SSD |
 
 #### 1.2 Software Information
 
@@ -46,12 +46,12 @@ CPU                                          | Memory | 网卡        | 磁盘
 
 ###### Dataset sizes used in this test
 
-名称                      | vertex数目  | edge数目    | 文件大小
------------------------ | --------- | --------- | ------
-email-enron.txt         | 36,691    | 367,661   | 4MB
-com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
-amazon0601.txt          | 403,393   | 3,387,388 | 47.9MB
-com-lj.ungraph.txt      | 3997961   | 34681189  | 479MB
+| Name                    | Number of vertices | Number of edges | File size |
+|-------------------------|--------------------|-----------------|-----------|
+| email-enron.txt         | 36,691             | 367,661         | 4MB       |
+| com-youtube.ungraph.txt | 1,157,806          | 2,987,624       | 38.7MB    |
+| amazon0601.txt          | 403,393            | 3,387,388       | 47.9MB    |
+| com-lj.ungraph.txt      | 3,997,961          | 34,681,189      | 479MB     |
 
 #### 1.3 Service Configuration
 
@@ -61,7 +61,7 @@ com-lj.ungraph.txt      | 3997961   | 34681189  | 479MB
 
 - Titan version: 0.5.4, using thrift+Cassandra mode
 
-  - Cassandra版本:cassandra-3.10,commitlog和data共用SSD
+  - Cassandra version: cassandra-3.10; the commit log and data share the same SSD
 
 - Neo4j version: 2.0.1
 
@@ -71,11 +71,11 @@ com-lj.ungraph.txt      | 3997961   | 34681189  | 479MB
 
 #### 2.1 Batch Insert Performance
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
---------- | ---------------- | ---------------- | ------------------------- | ---------------------
-HugeGraph | 0.629            | 5.711            | 5.243                     | 67.033
-Titan     | 10.15            | 108.569          | 150.266                   | 1217.944
-Neo4j     | 3.884            | 18.938           | 24.890                    | 281.537
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
+|-----------|------------------|------------------|---------------------------|-----------------------|
+| HugeGraph | 0.629            | 5.711            | 5.243                     | 67.033                |
+| Titan     | 10.15            | 108.569          | 150.266                   | 1217.944              |
+| Neo4j     | 3.884            | 18.938           | 24.890                    | 281.537               |
 
 _Notes_
 
@@ -96,11 +96,11 @@ _说明_
 
 ##### 2.2.2 FN Performance
 
-Backend   | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w)
---------- | ----------------- | --------------- | ------------------------- | ---------------------
-HugeGraph | 4.072             | 45.118          | 66.006     | 609.083
-Titan     | 8.084             | 92.507          | 184.543    | 1099.371
-Neo4j     | 2.424             | 10.537          | 11.609     | 106.919
+| Backend   | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w) |
+|-----------|-------------------|-----------------|---------------------------|----------------------|
+| HugeGraph | 4.072             | 45.118          | 66.006                    | 609.083              |
+| Titan     | 8.084             | 92.507          | 184.543                   | 1099.371             |
+| Neo4j     | 2.424             | 10.537          | 11.609                    | 106.919              |
 
 _Notes_
 
@@ -110,11 +110,11 @@ _说明_
 
 ##### 2.2.3 FA Performance
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
---------- | ---------------- | ---------------- | ------------------------- | ---------------------
-HugeGraph | 1.540             | 10.764          | 11.243     | 151.271
-Titan     | 7.361             | 93.344          | 169.218    | 1085.235
-Neo4j     | 1.673             | 4.775           | 4.284      | 40.507
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
+|-----------|------------------|------------------|---------------------------|-----------------------|
+| HugeGraph | 1.540            | 10.764           | 11.243                    | 151.271               |
+| Titan     | 7.361            | 93.344           | 169.218                   | 1085.235              |
+| Neo4j     | 1.673            | 4.775            | 4.284                     | 40.507                |
 
 _Notes_
 
@@ -136,11 +136,11 @@ _说明_
 
 ##### FS Performance
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
---------- | ---------------- | ---------------- | ------------------------- | ---------------------
-HugeGraph | 0.494            | 0.103            | 3.364      | 8.155
-Titan     | 11.818           | 0.239            | 377.709    | 575.678
-Neo4j     | 1.719            | 1.800            | 1.956      | 8.530
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
+|-----------|------------------|------------------|---------------------------|-----------------------|
+| HugeGraph | 0.494            | 0.103            | 3.364                     | 8.155                 |
+| Titan     | 11.818           | 0.239            | 377.709                   | 575.678               |
+| Neo4j     | 1.719            | 1.800            | 1.956                     | 8.530                 |
 
 _Notes_
 
@@ -187,11 +187,11 @@ _说明_
 
 #### 2.4 Comprehensive Graph Performance Test (CW)
 
-数据库             | 规模1000 | 规模5000   | 规模10000  | 规模20000
---------------- | ------ | -------- | -------- | --------
-HugeGraph(core) | 20.804 | 242.099  |  744.780 | 1700.547
-Titan           | 45.790 | 820.633  | 2652.235 | 9568.623
-Neo4j           |  5.913 |  50.267  |  142.354 |  460.880
+| Database        | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000 |
+|-----------------|------------|------------|-------------|-------------|
+| HugeGraph(core) | 20.804     | 242.099    | 744.780     | 1700.547    |
+| Titan           | 45.790     | 820.633    | 2652.235    | 9568.623    |
+| Neo4j           | 5.913      | 50.267     | 142.354     | 460.880     |
 
 _Notes_
 
diff --git a/content/en/docs/quickstart/hugegraph-client.md b/content/en/docs/quickstart/hugegraph-client.md
index bfa053d8..268843f9 100644
--- a/content/en/docs/quickstart/hugegraph-client.md
+++ b/content/en/docs/quickstart/hugegraph-client.md
@@ -10,8 +10,8 @@ HugeGraph-Client sends HTTP request to HugeGraph-Server to obtain and parse the
 
 ### 2 What You Need
 
-- JDK1.8
-- Maven-3.3.9
+- JDK 1.8
+- Maven 3.3.9+
 
 ### 3 How To Use
 
@@ -19,7 +19,7 @@ The basic steps to use HugeGraph-Client are as follows:
 
 - Build a new Maven project with IDEA or Eclipse
 - Add the HugeGraph-Client dependency in the pom file;
-- Create a object to invoke the interface of HugeGraph-Client
+- Create an object to invoke the interface of HugeGraph-Client
 
 See the complete example in the following section for the detail.
 
diff --git a/content/en/docs/quickstart/hugegraph-hubble.md b/content/en/docs/quickstart/hugegraph-hubble.md
index 796cd3c0..d5c46231 100644
--- a/content/en/docs/quickstart/hugegraph-hubble.md
+++ b/content/en/docs/quickstart/hugegraph-hubble.md
@@ -26,7 +26,7 @@ Data import is to convert the user's business data into the vertices and edges o
 
 ##### Graph Analysis
 
-By inputting the graph traversal language Gremlin, high-performance general analysis of graph data can be realized, and functions such as customized multi-dimensional path query of vertices can be provided, and three kinds of graph result display methods are provided, including: graph form, table form, Json form, and multi-dimensional display. The data form meets the needs of various scenarios used by users. It provides functions such as running records and collection of common statement [...]
+By entering statements in the graph traversal language Gremlin, users can perform high-performance, general-purpose analysis of graph data, including customized multidimensional path queries on vertices. Three kinds of result display are provided (graph form, table form and JSON form) along with multidimensional display, so the data presentation meets the needs of various user scenarios. It also provides functions such as running records and collection of common statements, [...]
 
 ##### Task Management
 
@@ -59,7 +59,7 @@ Create graph by filling in the content as follows::
 
 
 ##### 3.1.2	Graph Access
-Realize the information access of the graph space. After entering, you can perform operations such as multi-dimensional query analysis, metadata management, data import, and algorithm analysis of the graph.
+Provides access to the graph space. After entering a graph, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis on it.
 
 <center>
   <img src="/docs/images/images-hubble/312图访问.png" alt="image">
diff --git a/content/en/docs/quickstart/hugegraph-loader.md b/content/en/docs/quickstart/hugegraph-loader.md
index b737aac5..738ad483 100644
--- a/content/en/docs/quickstart/hugegraph-loader.md
+++ b/content/en/docs/quickstart/hugegraph-loader.md
@@ -6,7 +6,7 @@ weight: 2
 
 ### 1 HugeGraph-Loader Overview
 
-HugeGraph-Loader is the data import component of HugeGragh, which can convert data from various data sources into graph vertices and edges and import them into the graph database in batches.
+HugeGraph-Loader is the data import component of HugeGraph, which can convert data from various data sources into graph vertices and edges and import them into the graph database in batches.
 
 Currently supported data sources include:
 - Local disk file or directory, supports TEXT, CSV and JSON formats, as well as compressed files
@@ -188,7 +188,7 @@ id | name | lang | price
 id | p_id | s_id | date
 ```
 
-If the id strategy of person or software is specified as PRIMARY_KEY when modeling (schema), choose name as the primary key (note: this is the concept of vertexlabel in hugegraph), when importing edge data, the source vertex and target need to be spliced ​​out. For the id of the vertex, you must go to the person/software table with p_id/s_id to find the corresponding name. In the case of the schema that requires additional query, the loader does not support it temporarily. In this case,  [...]
+If the id strategy of person or software is specified as PRIMARY_KEY in the schema, with name chosen as the primary key (note: this is the vertex-label concept in HugeGraph), then when importing edge data the ids of the source and target vertices have to be spliced together, and p_id/s_id must first be looked up in the person/software table to find the corresponding name. The loader does not yet support schemas that require such an additional query. In this case, [...]
 
 1. The id strategy of person and software is still specified as PRIMARY_KEY, but the id column of the person table and software table is used as the primary key attribute of the vertex, so that the id can be generated by directly splicing p_id and s_id with the label of the vertex when importing an edge;
 2. Specify the id policy of person and software as CUSTOMIZE, and then directly use the id column of the person table and the software table as the vertex id, so that p_id and s_id can be used directly when importing edges;
@@ -218,7 +218,7 @@ Office,388
 
 ###### 3.2.2.2 Edge data
 
-The edge data file consists of data line by line. Generally, each line is used as an edge. Some of the columns are used as the IDs of the source and target vertices, and other columns are used as edge attributes. The following uses JSON format as an example.
+An edge data file consists of lines of data; generally each line represents one edge. Some columns serve as the IDs of the source and target vertices, and the other columns serve as edge properties. The following uses the JSON format as an example.
 
 - knows edge data
 
@@ -573,7 +573,7 @@ The nodes and meanings of the above `local file input source` are basically appl
 
 - type: input source type, must fill in hdfs or HDFS, required;
 - path: the path of the HDFS file or directory, it must be the absolute path of HDFS, required;
-- core_site_path: the path of the core-site.xml file of the HDFS cluster, the key point is to specify the address of the namenode (fs.default.name) and the implementation of the file system (fs.hdfs.impl);
+- core_site_path: the path of the core-site.xml file of the HDFS cluster, the key point is to specify the address of the NameNode (`fs.default.name`) and the implementation of the file system (`fs.hdfs.impl`);
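+
+For reference, a minimal core-site.xml sketch covering just these two keys (the NameNode host and port are placeholders for your own cluster):
+
+```xml
+<configuration>
+  <!-- Address of the NameNode; replace host/port with your cluster's values -->
+  <property>
+    <name>fs.default.name</name>
+    <value>hdfs://namenode-host:9000</value>
+  </property>
+  <!-- File system implementation used to read HDFS paths -->
+  <property>
+    <name>fs.hdfs.impl</name>
+    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
+  </property>
+</configuration>
+```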
 
 ###### 3.3.2.3 JDBC input source
 
@@ -713,37 +713,37 @@ The import process is controlled by commands submitted by the user, and the user
 
 ##### 3.4.1 Parameter description
 
-Parameter | Default value | Required or not | Description
-------------------- | ------------ | ------- | -----------------------
--f or --file | | Y | path to configure script
--g or --graph | | Y | graph dbspace
--s or --schema | | Y | schema file path
--h or --host | localhost | | address of HugeGraphServer
--p or --port | 8080 | | port number of HugeGraphServer
---username | null | | When HugeGraphServer enables permission authentication, the username of the current graph
---token | null | | When HugeGraphServer has enabled authorization authentication, the token of the current graph
---protocol | http | | Protocol for sending requests to the server, optional http or https
---trust-store-file | | | When the request protocol is https, the client's certificate file path
---trust-store-password | | | When the request protocol is https, the client certificate password
---clear-all-data | false | | Whether to clear the original data on the server before importing data
---clear-timeout | 240 | | Timeout for clearing the original data on the server before importing data
---incremental-mode | false | | Whether to use the breakpoint resume mode, only the input source is FILE and HDFS support this mode, enabling this mode can start the import from the place where the last import stopped
---failure-mode | false | | When the failure mode is true, the data that failed before will be imported. Generally speaking, the failed data file needs to be manually corrected and edited, and then imported again
---batch-insert-threads | CPUs | | Batch insert thread pool size (CPUs is the number of **logical cores** available to the current OS)
---single-insert-threads | 8 | | Size of single insert thread pool
---max-conn | 4 * CPUs | | The maximum number of HTTP connections between HugeClient and HugeGraphServer, it is recommended to adjust this when **adjusting threads**
---max-conn-per-route| 2 * CPUs | | The maximum number of HTTP connections for each route between HugeClient and HugeGraphServer, it is recommended to adjust this item at the same time when **adjusting the thread**
---batch-size | 500 | | The number of data items in each batch when importing data
---max-parse-errors | 1 | | The maximum number of lines of data parsing errors allowed, and the program exits when this value is reached
---max-insert-errors | 500 | | The maximum number of rows of data insertion errors allowed, and the program exits when this value is reached
---timeout | 60 | | Timeout (seconds) for inserting results to return
---shutdown-timeout | 10 | | Waiting time for multithreading to stop (seconds)
---retry-times | 0 | | Number of retries when a specific exception occurs
---retry-interval | 10 | | interval before retry (seconds)
---check-vertex | false | | Whether to check whether the vertex connected by the edge exists when inserting the edge
---print-progress | true | | Whether to print the number of imported items in the console in real time
---dry-run | false | | Turn on this mode, only parsing but not importing, usually used for testing
---help | false | | print help information
+| Parameter               | Default value | Required or not | Description                                                                                                                                                                               |
+|-------------------------|---------------|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| -f or --file            |               | Y               | path of the configuration script                                                                                                                                                          |
+| -g or --graph           |               | Y               | graph space name                                                                                                                                                                          |
+| -s or --schema          |               | Y               | schema file path                                                                                                                                                                          |
+| -h or --host            | localhost     |                 | address of HugeGraphServer                                                                                                                                                                |
+| -p or --port            | 8080          |                 | port number of HugeGraphServer                                                                                                                                                            |
+| --username              | null          |                 | When HugeGraphServer enables permission authentication, the username of the current graph                                                                                                 |
+| --token                 | null          |                 | When HugeGraphServer has enabled authorization authentication, the token of the current graph                                                                                             |
+| --protocol              | http          |                 | Protocol for sending requests to the server, optional http or https                                                                                                                       |
+| --trust-store-file      |               |                 | When the request protocol is https, the client's certificate file path                                                                                                                    |
+| --trust-store-password  |               |                 | When the request protocol is https, the client certificate password                                                                                                                       |
+| --clear-all-data        | false         |                 | Whether to clear the original data on the server before importing data                                                                                                                    |
+| --clear-timeout         | 240           |                 | Timeout for clearing the original data on the server before importing data                                                                                                                |
+| --incremental-mode      | false         |                 | Whether to use breakpoint-resume mode; only the FILE and HDFS input sources support it. With this mode enabled, an import can resume from where the last import stopped                   |
+| --failure-mode          | false         |                 | When failure mode is true, the data that failed previously will be imported. Generally, the failed data file needs to be corrected and edited manually, then imported again               |
+| --batch-insert-threads  | CPUs          |                 | Batch insert thread pool size (CPUs is the number of **logical cores** available to the current OS)                                                                                       |
+| --single-insert-threads | 8             |                 | Size of single insert thread pool                                                                                                                                                         |
+| --max-conn              | 4 * CPUs      |                 | The maximum number of HTTP connections between HugeClient and HugeGraphServer, it is recommended to adjust this when **adjusting threads**                                                |
+| --max-conn-per-route    | 2 * CPUs      |                 | The maximum number of HTTP connections for each route between HugeClient and HugeGraphServer, it is recommended to adjust this item at the same time when **adjusting the thread**        |
+| --batch-size            | 500           |                 | The number of data items in each batch when importing data                                                                                                                                |
+| --max-parse-errors      | 1             |                 | The maximum number of lines with data parsing errors allowed; the program exits when this value is reached                                                                                |
+| --max-insert-errors     | 500           |                 | The maximum number of rows with data insertion errors allowed; the program exits when this value is reached                                                                               |
+| --timeout               | 60            |                 | Timeout (seconds) for inserting results to return                                                                                                                                         |
+| --shutdown-timeout      | 10            |                 | Waiting time for multithreading to stop (seconds)                                                                                                                                         |
+| --retry-times           | 0             |                 | Number of retries when a specific exception occurs                                                                                                                                        |
+| --retry-interval        | 10            |                 | Interval before retry (seconds)                                                                                                                                                           |
+| --check-vertex          | false         |                 | Whether to check whether the vertex connected by the edge exists when inserting the edge                                                                                                  |
+| --print-progress        | true          |                 | Whether to print the number of imported items in the console in real time                                                                                                                 |
+| --dry-run               | false         |                 | If this mode is enabled, the data is only parsed, not imported; usually used for testing                                                                                                  |
+| --help                  | false         |                 | Print help information                                                                                                                                                                    |
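+
+For example, a tuned invocation that combines several of these options might look like the following sketch (all values are illustrative; see section 3.4.4 for the basic command):
+
+```bash
+# Illustrative only: scale insert threads and HTTP connections together,
+# tolerate a few parse errors, and retry transient failures twice
+bin/hugegraph-loader -g hugegraph -f ./struct.json -s ./schema.groovy \
+     -h 127.0.0.1 -p 8080 \
+     --batch-insert-threads 16 --max-conn 64 --max-conn-per-route 32 \
+     --max-parse-errors 10 --retry-times 2 --retry-interval 5
+```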
 
 ##### 3.4.2 Breakpoint Continuation Mode
 
@@ -764,13 +764,12 @@ In the failed file, after the user modifies the data lines in the failed file, s
 Of course, if there is still a problem with the modified data line, it will be logged again to the failure file (don't worry about duplicate lines).
 
 Each vertex map or edge map will generate its own failure file when data insertion fails. The failure file is divided into a parsing failure file (suffix .parse-error) and an insertion failure file (suffix .insert-error).
-They are stored in the `${struct}/current` directory. For example, there is a vertex mapping person and an edge mapping knows in the mapping file, each of which has some error lines. When the Loader exits, in the
-You will see the following files in the `${struct}/current` directory:
+They are stored in the `${struct}/current` directory. For example, there is a vertex mapping person and an edge mapping knows in the mapping file, each of which has some error lines. When the Loader exits, you will see the following files in the `${struct}/current` directory:
 
 - person-b4cd32ab.parse-error: Vertex map person parses wrong data
 - person-b4cd32ab.insert-error: Vertex map person inserts wrong data
-- knows-eb6b2bac.parse-error: edgemap knows parses wrong data
-- knows-eb6b2bac.insert-error: edgemap knows inserts wrong data
+- knows-eb6b2bac.parse-error: edge map knows parses wrong data
+- knows-eb6b2bac.insert-error: edge map knows inserts wrong data
 
 > .parse-error and .insert-error do not always exist together. Only lines with parsing errors will have .parse-error files, and only lines with insertion errors will have .insert-error files.
 
@@ -780,7 +779,7 @@ The log and error data during program execution will be written into hugegraph-l
 
 ##### 3.4.4 Execute command
 
-Run bin/hugeloader and pass in parameters
+Run bin/hugegraph-loader and pass in the parameters:
 
 ```bash
 bin/hugegraph-loader -g {GRAPH_NAME} -f ${INPUT_DESC_FILE} -s ${SCHEMA_FILE} -h {HOST} -p {PORT}
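+
+# For example (illustrative values only; adjust the graph name, file paths,
+# host and port to your own environment):
+bin/hugegraph-loader -g hugegraph -f ./struct.json -s ./schema.groovy -h localhost -p 8080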
diff --git a/content/en/docs/quickstart/hugegraph-server.md b/content/en/docs/quickstart/hugegraph-server.md
index 0df4be1d..d835de1b 100644
--- a/content/en/docs/quickstart/hugegraph-server.md
+++ b/content/en/docs/quickstart/hugegraph-server.md
@@ -8,7 +8,7 @@ weight: 1
 
 HugeGraph-Server is the core part of the HugeGraph Project; it contains submodules such as Core, Backend, and API.
 
-The Core Module is an implementation of the Tinkerpop interface; The Backend module is used to save the graph data to the data store, currently supported backends include:Memory、Cassandra、ScyllaDB、RocksDB; The API Module provides HTTP Server, which converts Client's HTTP request into a call to Core Moudle.
+The Core Module is an implementation of the TinkerPop interface; the Backend module is used to save the graph data to the data store, and currently supported backends include Memory, Cassandra, ScyllaDB, and RocksDB; the API Module provides the HTTP Server, which converts the Client's HTTP requests into calls to the Core Module.
 
 > There will be two spellings HugeGraph-Server and HugeGraphServer in the document, and other modules are similar. There is no big difference in the meaning of these two ways of writing, which can be distinguished as follows: `HugeGraph-Server` represents the code of server-related components, `HugeGraphServer` represents the service process.
 
diff --git a/content/en/docs/quickstart/hugegraph-spark.md b/content/en/docs/quickstart/hugegraph-spark.md
index 0a6f9a5a..4ca1d1af 100644
--- a/content/en/docs/quickstart/hugegraph-spark.md
+++ b/content/en/docs/quickstart/hugegraph-spark.md
@@ -5,9 +5,9 @@ draft: true
 weight: 7
 ---
 
-### 1 HugeGraph-Spark概述
+### 1 HugeGraph-Spark Overview (Deprecated)
 
-HugeGraph-Spark 是一个连接 HugeGraph 和 Spark GraphX 的工具,能够读取 HugeGraph 中的数据并转换成 Spark GraphX 的 RDD,然后执行 GraphX 中的各种图算法。
+HugeGraph-Spark is a tool that connects HugeGraph with Spark GraphX; it can read data from HugeGraph, convert it into Spark GraphX RDDs, and then run the various graph algorithms provided by GraphX. (WARNING: deprecated now! Use HugeGraph-Computer instead)
 
 ### 2 Environment Dependencies
 
diff --git a/content/en/docs/quickstart/hugegraph-studio.md b/content/en/docs/quickstart/hugegraph-studio.md
index 81c05b7e..7d1359e5 100644
--- a/content/en/docs/quickstart/hugegraph-studio.md
+++ b/content/en/docs/quickstart/hugegraph-studio.md
@@ -5,7 +5,9 @@ draft: true
 weight: 5
 ---
 
-### 1 HugeGraph-Studio概述
+### 1 HugeGraph-Studio Overview (Deprecated)
+
+(WARNING: deprecated now! Use HugeGraph-Hubble instead)
 
 HugeGraph-Studio is the front-end presentation tool of HugeGraph, a web-based graphical IDE environment.
 With HugeGraph-Studio, users can execute Gremlin statements and get graphical results immediately.
@@ -242,23 +244,23 @@ HugeGraph-Studio不仅支持通过graph的方式展示数据,还支持表格
 
 ##### 4.4.1 Customize VertexLabel Styles
 
-属性                         | 默认值       | 类型     | 说明
-:------------------------- | :-------- | :----- | :--------------------------------------------------------------------------------------------------------------
-`vis.size`                 | `25`      | number | 顶点大小
-`vis.scaling.min`          | `10`      | number | 根据标签内容调整节点大小,优先级比vis.size高
-`vis.scaling.max`          | `30`      | number | 根据标签内容调整节点大小,优先级比vis.size高
-`vis.shape`                | dot       | string | 形状,包括ellipse, circle, database, box, text,diamond, dot, star, triangle, triangleDown, hexagon, square and icon.
-`vis.border`               | #00ccff   | string | 顶点边框颜色
-`vis.background`           | #00ccff   | string | 顶点背景颜色
-`vis.hover.border`         | #00ccff   | string | 鼠标悬浮时,顶点边框颜色
-`vis.hover.background`     | #ec3112   | string | 鼠标悬浮时,顶点背景颜色
-`vis.highlight.border`     | #fb6a02   | string | 选中时,顶点边框颜色
-`vis.highlight.background` | #fb6a02   | string | 选中时,顶点背景颜色
-`vis.font.color`           | #343434   | string | 顶点类型字体颜色
-`vis.font.size`            | `12`      | string | 顶点类型字体大小
-`vis.icon.code`            | `\uf111`  | string | FontAwesome 图标编码,目前支持4.7.5版本的图标
-`vis.icon.color`           | `#2B7CE9` | string | 图标颜色,优先级比vis.background高
-`vis.icon.size`            | 50        | string | icon大小,优先级比vis.size高
+| Property                   | Default   | Type   | Description                                                                                                              |
+|:---------------------------|:----------|:-------|:-------------------------------------------------------------------------------------------------------------------------|
+| `vis.size`                 | `25`      | number | Vertex size                                                                                                              |
+| `vis.scaling.min`          | `10`      | number | Scales the node size according to the label content; takes precedence over vis.size                                     |
+| `vis.scaling.max`          | `30`      | number | Scales the node size according to the label content; takes precedence over vis.size                                     |
+| `vis.shape`                | dot       | string | Shape, one of ellipse, circle, database, box, text, diamond, dot, star, triangle, triangleDown, hexagon, square and icon |
+| `vis.border`               | #00ccff   | string | Vertex border color                                                                                                      |
+| `vis.background`           | #00ccff   | string | Vertex background color                                                                                                  |
+| `vis.hover.border`         | #00ccff   | string | Vertex border color on mouse hover                                                                                       |
+| `vis.hover.background`     | #ec3112   | string | Vertex background color on mouse hover                                                                                   |
+| `vis.highlight.border`     | #fb6a02   | string | Vertex border color when selected                                                                                        |
+| `vis.highlight.background` | #fb6a02   | string | Vertex background color when selected                                                                                    |
+| `vis.font.color`           | #343434   | string | Font color of the vertex label text                                                                                      |
+| `vis.font.size`            | `12`      | string | Font size of the vertex label text                                                                                       |
+| `vis.icon.code`            | `\uf111`  | string | FontAwesome icon code; currently version 4.7.5 icons are supported                                                       |
+| `vis.icon.color`           | `#2B7CE9` | string | Icon color; takes precedence over vis.background                                                                         |
+| `vis.icon.size`            | 50        | string | Icon size; takes precedence over vis.size                                                                                |
 
 Example:
 
@@ -284,7 +286,8 @@ graph.schema().vertexLabel("software")
 
 <div align="center">
 
-Color code examples:
+Color code examples:
+
 <table style="BORDER-COLLAPSE: collapse" bordercolor="#111111" cellpadding="2" width="740" border="0">
 <tbody><tr><td align="middle" width="10%" bgcolor="#fffff" height="16"><font face="MS Sans Serif" size="2" color="#000000">#ffffff </font></td><td align="middle" width="10%" bgcolor="#ffffcc" height="16"><font face="MS Sans Serif" size="2" color="#000000">#ffffcc </font></td><td align="middle" width="10%" bgcolor="#cccccc" height="16"><font face="MS Sans Serif" size="2" color="#000000">#cccccc </font></td><td align="middle" width="10%" bgcolor="#999999" height="16"><font face="MS Sans Se [...]
 </table>
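
For reference, here is a minimal sketch of how the `vis.*` keys above could be attached to a vertex label. It assumes, in line with the surrounding `graph.schema().vertexLabel(...)` examples, that the keys are passed through the schema's userdata mechanism; the label name, colors and icon code below are illustrative only, not values from this document:

```groovy
// A sketch only: assumes vis.* keys are set as schema userdata;
// the label name and all values here are illustrative.
graph.schema().vertexLabel("software")
     .userdata("vis.shape", "icon")          // draw an icon instead of the default dot
     .userdata("vis.icon.code", "\uf1c0")    // a FontAwesome 4.7.5 code (hypothetical choice)
     .userdata("vis.icon.color", "#2B7CE9")  // takes precedence over vis.background
     .userdata("vis.icon.size", 50)          // takes precedence over vis.size
     .ifNotExist()
     .create()
```

Because the `vis.icon.*` keys take precedence over `vis.background` and `vis.size`, an icon-styled label only needs the icon keys set.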
diff --git a/content/en/docs/quickstart/hugegraph-tools.md b/content/en/docs/quickstart/hugegraph-tools.md
index bda90c9c..5f860d31 100644
--- a/content/en/docs/quickstart/hugegraph-tools.md
+++ b/content/en/docs/quickstart/hugegraph-tools.md
@@ -6,7 +6,7 @@ weight: 3
 
 ### 1 HugeGraph-Tools Overview
 
-HugeGraph-Tools is the automated deployment, management and backup/restore component of HugeGragh.
+HugeGraph-Tools is the automated deployment, management and backup/restore component of HugeGraph.
 
 ### 2 Get HugeGraph-Tools
 
@@ -73,15 +73,15 @@ Usage: hugegraph [options] [command] [command options]
 The global variables above can also be set via environment variables. One way is to use export on the command line to set temporary environment variables, which remain effective until that shell session is closed
 
 
-Global variable | Environment variable  | Example
------------- | --------------------- | ------------------------------------------
---url        | HUGEGRAPH_URL         | export HUGEGRAPH_URL=http://127.0.0.1:8080
---graph      | HUGEGRAPH_GRAPH       | export HUGEGRAPH_GRAPH=hugegraph 
---user       | HUGEGRAPH_USERNAME    | export HUGEGRAPH_USERNAME=admin
---password   | HUGEGRAPH_PASSWORD    | export HUGEGRAPH_PASSWORD=test
---timeout    | HUGEGRAPH_TIMEOUT     | export HUGEGRAPH_TIMEOUT=30
---trust-store-file | HUGEGRAPH_TRUST_STORE_FILE | export HUGEGRAPH_TRUST_STORE_FILE=/tmp/trust-store
---trust-store-password | HUGEGRAPH_TRUST_STORE_PASSWORD | export HUGEGRAPH_TRUST_STORE_PASSWORD=xxxx
+| Global variable        | Environment variable           | Example                                            |
+|------------------------|--------------------------------|----------------------------------------------------|
+| --url                  | HUGEGRAPH_URL                  | export HUGEGRAPH_URL=http://127.0.0.1:8080         |
+| --graph                | HUGEGRAPH_GRAPH                | export HUGEGRAPH_GRAPH=hugegraph                   |
+| --user                 | HUGEGRAPH_USERNAME             | export HUGEGRAPH_USERNAME=admin                    |
+| --password             | HUGEGRAPH_PASSWORD             | export HUGEGRAPH_PASSWORD=test                     |
+| --timeout              | HUGEGRAPH_TIMEOUT              | export HUGEGRAPH_TIMEOUT=30                        |
+| --trust-store-file     | HUGEGRAPH_TRUST_STORE_FILE     | export HUGEGRAPH_TRUST_STORE_FILE=/tmp/trust-store |
+| --trust-store-password | HUGEGRAPH_TRUST_STORE_PASSWORD | export HUGEGRAPH_TRUST_STORE_PASSWORD=xxxx         |
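
As a quick illustration of the export-based approach, the following shell snippet, assuming a default local server, sets the two most common variables for the current session and then invokes hugegraph-tools without repeating `--url`/`--graph` (the `graph-list` subcommand is shown purely as an example invocation):

```bash
# Temporary variables: effective only until this shell session is closed.
export HUGEGRAPH_URL=http://127.0.0.1:8080   # replaces --url
export HUGEGRAPH_GRAPH=hugegraph             # replaces --graph

# Later invocations pick the values up automatically:
bin/hugegraph graph-list
```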
 
 Another way is to set the environment variables in the bin/hugegraph script: