Posted to commits@iotdb.apache.org by ro...@apache.org on 2021/07/29 08:59:33 UTC

[iotdb] branch master updated: [IOTDB-1517][IOTDB-1521] Refactor TsFile Index for Vector (multi-variable timeseries) (#3627)

This is an automated email from the ASF dual-hosted git repository.

rong pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/iotdb.git


The following commit(s) were added to refs/heads/master by this push:
     new ca3a4c2  [IOTDB-1517][IOTDB-1521] Refactor TsFile Index for Vector (multi-variable timeseries) (#3627)
ca3a4c2 is described below

commit ca3a4c277c4182d0abe9a97fde9792743ee6db00
Author: Chen YZ <43...@users.noreply.github.com>
AuthorDate: Thu Jul 29 16:59:11 2021 +0800

    [IOTDB-1517][IOTDB-1521] Refactor TsFile Index for Vector (multi-variable timeseries) (#3627)
---
 docs/SystemDesign/TsFile/Format.md                 | 477 ++++++++---------
 docs/zh/SystemDesign/TsFile/Format.md              | 454 ++++++++--------
 docs/zh/UserGuide/Data-Concept/Encoding.md         |   4 +-
 .../iotdb/HybridTimeseriesSessionExample.java      | 129 +++++
 .../iotdb/tsfile/TsFileWriteVectorWithTablet.java  |  89 ++--
 .../db/engine/cache/TimeSeriesMetadataCache.java   |  71 ++-
 .../apache/iotdb/db/tools/TsFileSketchTool.java    | 583 +++++++++++++++------
 .../db/engine/memtable/MemTableFlushTaskTest.java  |  22 +-
 .../db/engine/memtable/MemTableTestUtils.java      |   2 +-
 .../iotdb/db/tools/TsFileSketchToolTest.java       |   4 +-
 .../apache/iotdb/session/IoTDBSessionVectorIT.java | 213 ++++++++
 .../file/metadata/MetadataIndexConstructor.java    |  52 +-
 .../iotdb/tsfile/read/TsFileSequenceReader.java    |  17 +-
 .../org/apache/iotdb/tsfile/read/common/Path.java  |   4 +
 .../tsfile/write/chunk/ChunkGroupWriterImpl.java   |  77 ++-
 .../tsfile/write/chunk/VectorChunkWriterImpl.java  |   5 +-
 .../iotdb/tsfile/write/writer/TsFileIOWriter.java  |  10 +
 .../tsfile/write/MetadataIndexConstructorTest.java | 478 +++++++++++++++++
 .../write/writer/VectorChunkWriterImplTest.java    |  34 +-
 .../write/writer/VectorMeasurementSchemaStub.java  |   2 +-
 20 files changed, 1941 insertions(+), 786 deletions(-)

diff --git a/docs/SystemDesign/TsFile/Format.md b/docs/SystemDesign/TsFile/Format.md
index 47f9f09..172c6c2 100644
--- a/docs/SystemDesign/TsFile/Format.md
+++ b/docs/SystemDesign/TsFile/Format.md
@@ -7,9 +7,9 @@
     to you under the Apache License, Version 2.0 (the
     "License"); you may not use this file except in compliance
     with the License.  You may obtain a copy of the License at
-
+    
         http://www.apache.org/licenses/LICENSE-2.0
-
+    
     Unless required by applicable law or agreed to in writing,
     software distributed under the License is distributed on an
     "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@@ -27,15 +27,6 @@
 
 ### 1.1 Variable Storage
 
-- **Big Endian**
-       
-  - For Example, the `int` `0x8` will be stored as `00 00 00 08`, replace by `08 00 00 00`
-- **String with Variable Length**
-  - The format is `int size` plus `String literal`. Size can be zero.
-  - Size equals the number of bytes this string will take, and it may not equal to the length of the string. 
-  - For example "sensor_1" will be stored as `00 00 00 08` plus the encoding(ASCII) of "sensor_1".
-  - Note that for the file signature "TsFile000001" (`MAGIC STRING` + `Version Number`), the size(12) and encoding(ASCII)
-    is fixed so there is no need to put the size before this string literal.
 - **Data Type Hardcode**
   - 0: BOOLEAN
   - 1: INT32 (`int`)
@@ -43,25 +34,55 @@
   - 3: FLOAT
   - 4: DOUBLE
   - 5: TEXT (`String`)
+  
 - **Encoding Type Hardcode**
-  - 0: PLAIN
-  - 1: DICTIONARY
-  - 2: RLE
-  - 3: DIFF
-  - 4: TS_2DIFF
-  - 5: BITMAP
-  - 6: GORILLA_V1
-  - 7: REGULAR 
-  - 8: GORILLA
-- **Compressing Type Hardcode**
-  - 0: UNCOMPRESSED
-  - 1: SNAPPY
-  - 2: GZIP
-  - 3: LZO
-  - 4: SDT
-  - 5: PAA
-  - 6: PLA
-  - 7: LZ4
+
+  To improve storage efficiency, data is encoded as it is written, reducing the amount of disk space used. Encoding also reduces the amount of data involved in I/O operations during both writing and reading, which improves performance. IoTDB supports the following encoding methods for different data types:
+
+  - **0: PLAIN**
+
+    - PLAIN encoding is the default mode, i.e., no encoding. It supports multiple data types and has high compression and decompression (time) efficiency, but low storage space efficiency.
+
+  - **1: DICTIONARY**
+
+    - DICTIONARY encoding is lossless. It is suitable for TEXT data with low cardinality (i.e., a small number of distinct values) and is not recommended for high-cardinality data.
+
+  - **2: RLE**
+
+    - Run-length encoding is suitable for sequences in which the same value appears in long runs, and is not recommended for sequence data whose values differ most of the time.
+
+    - Run-length encoding can also encode floating-point numbers, but the number of reserved decimal digits (MAX_POINT_NUMBER) must be specified when the timeseries is created. It is more suitable for sequence data where floating-point values repeat, or increase or decrease monotonically, and is not suitable for data that requires high precision after the decimal point or fluctuates widely.
+
+      > TS_2DIFF and RLE have a precision limit for the float and double data types. By default, two decimal places are reserved. GORILLA is recommended instead.
+
+  - **3: DIFF**
+
+  - **4: TS_2DIFF**
+
+  	- Second-order differential encoding is more suitable for encoding monotonically increasing or decreasing sequence data, and is not recommended for sequence data with large fluctuations.
+
+  - **5: BITMAP**
+
+  - **6: GORILLA_V1**
+
+    - GORILLA encoding is lossless. It is more suitable for numerical sequences whose adjacent values are close, and is not recommended for sequence data with large fluctuations.
+    - Currently there are two versions of the GORILLA encoding implementation; use `GORILLA` instead of `GORILLA_V1` (deprecated).
+    - Usage restrictions: when using GORILLA to encode INT32 data, make sure the sequence contains no data point with the value `Integer.MIN_VALUE`; when encoding INT64 data, make sure it contains no data point with the value `Long.MIN_VALUE`.
+
+  - **7: REGULAR** 
+
+  - **8: GORILLA**
+
+- **The correspondence between the data type and its supported encodings** (a short schema sketch follows the table)
+
+	| Data Type |      Supported Encoding       |
+	| :-------: | :---------------------------: |
+	|  BOOLEAN  |          PLAIN, RLE           |
+	|   INT32   | PLAIN, RLE, TS_2DIFF, GORILLA |
+	|   INT64   | PLAIN, RLE, TS_2DIFF, GORILLA |
+	|   FLOAT   | PLAIN, RLE, TS_2DIFF, GORILLA |
+	|  DOUBLE   | PLAIN, RLE, TS_2DIFF, GORILLA |
+	|   TEXT    |       PLAIN, DICTIONARY       |
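+
+For illustration, here is a minimal sketch of declaring a timeseries schema with an explicit data type and encoding. The `MeasurementSchema` constructor shown is from the 0.12-era TsFile API and should be treated as an assumption rather than the exact API at this commit:
+
+```java
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
+import org.apache.iotdb.tsfile.write.schema.MeasurementSchema;
+
+public class EncodingChoiceSketch {
+  public static void main(String[] args) {
+    // INT64 supports PLAIN, RLE, TS_2DIFF and GORILLA (see the table above);
+    // a TEXT measurement would only accept PLAIN or DICTIONARY.
+    MeasurementSchema schema =
+        new MeasurementSchema("s1", TSDataType.INT64, TSEncoding.TS_2DIFF);
+    System.out.println(schema.getMeasurementId() + " -> " + schema.getEncodingType());
+  }
+}
+```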
 
 ### 1.2 TsFile Overview
 
@@ -216,34 +237,70 @@ IndexEntry has members as below:
 
 All IndexNode forms an **index tree (secondary index)** like a B+ tree, which consists of two levels: entity index level and measurement index level. The IndexNodeType has four enums: `INTERNAL_ENTITY`, `LEAF_ENTITY`, `INTERNAL_MEASUREMENT`, `LEAF_MEASUREMENT`, which indicates the internal or leaf node of entity index level and measurement index level respectively. Only the `LEAF_MEASUREMENT` nodes point to `TimeseriesIndex`.
 
-Here are four detailed examples.
+With the introduction of multi-variable timeseries, each multi-variable timeseries is called a vector and has a `TimeColumn`. For example, the multi-variable timeseries *vector1* belongs to the entity *d1* and has two measurements *s1* and *s2*, i.e. `d1.vector1.(s1,s2)`; we call *vector1* the `TimeColumn`. In storage, an extra Chunk of vector1 needs to be stored.
+
+Except for the `TimeColumn`, the measurements of a multi-variable timeseries are concatenated with the `TimeColumn` name when constructing the `IndexOfTimeseriesIndex`; e.g. `vector1.s1` is recorded as a measurement.
+
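+Below is a hypothetical sketch of writing one such vector (an extra `TimeColumn` chunk plus its sub-measurements). It is modeled loosely on the `TsFileWriteVectorWithTablet` example added by this commit; every class and method signature used here (`VectorMeasurementSchema`, `registerTimeseries`, `Tablet`) is an assumption, so consult that example for the authoritative API:
+
+```java
+import java.io.File;
+import java.util.Collections;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.write.TsFileWriter;
+import org.apache.iotdb.tsfile.write.record.Tablet;
+import org.apache.iotdb.tsfile.write.schema.IMeasurementSchema;
+import org.apache.iotdb.tsfile.write.schema.VectorMeasurementSchema;
+
+public class VectorWriteSketch {
+  public static void main(String[] args) throws Exception {
+    TsFileWriter writer = new TsFileWriter(new File("vector.tsfile"));
+    // One vector: the TimeColumn "vector1" plus sub-measurements s1 and s2.
+    IMeasurementSchema vector =
+        new VectorMeasurementSchema(
+            "vector1",
+            new String[] {"s1", "s2"},
+            new TSDataType[] {TSDataType.INT64, TSDataType.INT64});
+    writer.registerTimeseries(new Path("root.sg.d1"), vector);
+
+    Tablet tablet = new Tablet("root.sg.d1", Collections.singletonList(vector));
+    for (long ts = 0; ts < 10; ts++) {
+      int row = tablet.rowSize++;
+      tablet.addTimestamp(row, ts); // the timestamp column is stored once, as the TimeColumn chunk
+      tablet.addValue("vector1.s1", row, ts + 1); // sub-measurement ids are assumptions
+      tablet.addValue("vector1.s2", row, ts + 2);
+    }
+    writer.write(tablet);
+    writer.close();
+  }
+}
+```
+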
+> From v0.13, IoTDB supports [Multi-variable Timeseries](https://iotdb.apache.org/UserGuide/Master/Data-Concept/Data-Model-and-Terminology.html). A group of multi-variable measurements of an entity corresponds to one multi-variable timeseries. These timeseries are called **multi-variable timeseries**, also called **aligned timeseries**.
+>
+> Multi-variable timeseries must be created, inserted into, and deleted at the same time. However, when querying, each sub-measurement can be queried separately.
+>
+> With multi-variable timeseries, the timestamp column of a group of sub-measurements needs to be stored only once in memory and on disk when inserting data, instead of once per timeseries.
+>
+> ![img](https://cwiki.apache.org/confluence/download/attachments/184617773/image-20210720151044629.png?version=1&modificationDate=1626773824000&api=v2)
+
+Here are seven detailed examples.
 
 The degree of the index tree (that is, the max number of each node's children) could be configured by users, and is 256 by default. In the examples below, we assume `max_degree_of_index_node = 10`.
 
-* Example 1: 5 entities with 5 measurements each
+Note that within each type of node (ENTITY, MEASUREMENT) of the index tree, keys are arranged in dictionary order. In the following examples, we assume the dictionary order **di < dj** whenever i < j. (Otherwise, the actual dictionary order of [d1,d2,...,d10] would be [d1,d10,d2,...,d9], as the snippet below confirms.)
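+
+As a quick check of that caveat, sorting the keys with plain lexicographic order in Java reproduces the parenthesized ordering above:
+
+```java
+import java.util.Arrays;
+
+public class DictOrderCheck {
+  public static void main(String[] args) {
+    String[] keys = new String[10];
+    for (int i = 1; i <= 10; i++) {
+      keys[i - 1] = "d" + i;
+    }
+    Arrays.sort(keys); // lexicographic (dictionary) order
+    // Prints: [d1, d10, d2, d3, d4, d5, d6, d7, d8, d9]
+    System.out.println(Arrays.toString(keys));
+  }
+}
+```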
+
+Examples 1\~4 cover single-variable timeseries.
+
+Examples 5\~6 cover multi-variable timeseries.
+Example 7 is a comprehensive example.
 
-<img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/19167280/125254013-9d2d7400-e32c-11eb-9f95-1663e14cffbb.png">
+* **Example 1: 5 entities with 5 measurements each**
 
-In the case of 5 entities with 5 measurements each: Since the numbers of entities and measurements are both no more than `max_degree_of_index_node`, the tree has only measurement index level by default. In this level, each IndexNode is composed of no more than 10 index entries. The root node is `INTERNAL_ENTITY` type, and the 5 index entries point to index nodes of related entities. These nodes point to  `TimeseriesIndex` directly, as they are `LEAF_MEASUREMENT` type.
+  <img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/19167280/125254013-9d2d7400-e32c-11eb-9f95-1663e14cffbb.png">
+  In the case of 5 entities with 5 measurements each: Since the numbers of entities and measurements are both no more than `max_degree_of_index_node`, the tree has only the measurement index level by default. In this level, each IndexNode is composed of no more than 10 index entries. The root node is of `LEAF_ENTITY` type, and its 5 index entries point to the index nodes of the related entities. These nodes point to `TimeseriesIndex` directly, as they are of `LEAF_MEASUREMENT` type.
 
-* Example 2: 1 entity with 150 measurements
+* **Example 2: 1 entity with 150 measurements**
 
 <img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/19167280/125254022-a0c0fb00-e32c-11eb-8fd1-462936358288.png">
 
 In the case of 1 entity with 150 measurements: The number of measurements exceeds `max_degree_of_index_node`, so the tree has only the measurement index level by default. In this level, each IndexNode is composed of no more than 10 index entries. The nodes that point to `TimeseriesIndex` directly are of `LEAF_MEASUREMENT` type. Other nodes are not leaf nodes of the measurement index level, so they are of `INTERNAL_MEASUREMENT` type. The root node is of `LEAF_ENTITY` type.
 
-* Example 3: 150 entities with 1 measurement each
+* **Example 3: 150 entities with 1 measurement each**
 
 <img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/19167280/122771008-9a64d380-d2d8-11eb-9044-5ac794dd38f7.png">
 
 In the case of 150 entities with 1 measurement each: The number of entities exceeds `max_degree_of_index_node`, so the entity index level and measurement index level of the tree are both formed. In these two levels, each IndexNode is composed of no more than 10 index entries. The nodes that point to `TimeseriesIndex` directly are `LEAF_MEASUREMENT` type. The root nodes of measurement index level are also the leaf nodes of entity index level, which are `LEAF_ENTITY` type. Other nodes and  [...]
 
-* Example 4: 150 entities with 150 measurements each
+* **Example 4: 150 entities with 150 measurements each**
 
 <img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/19167280/122677241-1a753580-d214-11eb-817f-17bcf797251f.png">
 
 In the case of 150 entities with 150 measurements each: The numbers of entities and measurements both exceed `max_degree_of_index_node`, so the entity index level and measurement index level are both formed. In these two levels, each IndexNode is composed of no more than 10 index entries. As is described before, from the root node to the leaf nodes of entity index level, their types are `INTERNAL_ENTITY` and `LEAF_ENTITY`; each leaf node of entity index level can be seen as the root node [...]
 
+* **Example 5: 1 entity with 2 vectors, 9 measurements in each vector**
+
+	![img](https://cwiki.apache.org/confluence/download/attachments/184617773/tsFileVectorIndexCase5.png?version=2&modificationDate=1626952911868&api=v2)
+
+* **Example 6: 1 entity with 2 vectors, 15 measurements in each vector**
+
+	![img](https://cwiki.apache.org/confluence/download/attachments/184617773/tsFileVectorIndexCase6.png?version=2&modificationDate=1626952911054&api=v2)
+
+* **Example 7: 2 entities, whose measurements are shown in the following table**
+
+	| entity: d0                                         | entity: d1                                          |
+	| :------------------------------------------------- | :-------------------------------------------------- |
+	| [Single-variable Timeseries] s0, s1, ..., s4       | [Single-variable Timeseries] s0, s1, ..., s14       |
+	| [Multi-variable Timeseries] v0.(s5, s6, ..., s14)  | [Multi-variable Timeseries] v0.(s15, s16, ..., s18) |
+	| [Single-variable Timeseries] z15, z16, ..., z18    |                                                      |
+
+	![img](https://cwiki.apache.org/confluence/download/attachments/184617773/tsFileVectorIndexCase7.png?version=2&modificationDate=1626952910746&api=v2)
+
 The IndexTree is designed as a tree structure so that not all of the `TimeseriesIndex` entries need to be read when the number of entities or measurements is too large. Reading only the IndexTree nodes required by a query reduces I/O and speeds up the query. The reading process of TsFile is described in more detail in the last section of this chapter.
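+
+The descent itself can be pictured with a small, self-contained sketch (hypothetical code, not IoTDB's implementation): at each node, follow the entry with the greatest key no larger than the target, as in a B+ tree, until a `LEAF_MEASUREMENT` entry yields the file offset from which to scan the serialized `TimeseriesIndex` list.
+
+```java
+import java.util.TreeMap;
+
+class IndexNodeSketch {
+  enum Type { INTERNAL_ENTITY, LEAF_ENTITY, INTERNAL_MEASUREMENT, LEAF_MEASUREMENT }
+
+  Type type;
+  // Each entry records the first key of a child subtree (or of a group of
+  // TimeseriesIndex records) and the position of that child or group.
+  TreeMap<String, Object> entries = new TreeMap<>();
+
+  static long locate(IndexNodeSketch node, String entity, String measurement) {
+    // Entity-level nodes are keyed by entity, measurement-level nodes by measurement.
+    String key =
+        (node.type == Type.INTERNAL_ENTITY || node.type == Type.LEAF_ENTITY)
+            ? entity
+            : measurement;
+    Object child = node.entries.floorEntry(key).getValue();
+    if (node.type == Type.LEAF_MEASUREMENT) {
+      return (Long) child; // offset at which to start scanning TimeseriesIndex
+    }
+    return locate((IndexNodeSketch) child, entity, measurement);
+  }
+}
+```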
 
 
@@ -380,220 +437,127 @@ For Linux or MacOs:
 
 An example on macOS:
 
-```shell
+```
 /iotdb/server/target/iotdb-server-{version}/tools/tsfileToolSet$ ./print-tsfile-sketch.sh test.tsfile
-|````````````````````````
+|````````````````````````
 Starting Printing the TsFile Sketch
-|````````````````````````
+|````````````````````````
 TsFile path:test.tsfile
 Sketch save path:TsFile_sketch_view.txt
 -------------------------------- TsFile Sketch --------------------------------
 file path: test.tsfile
-file length: 33436
-
-            POSITION| CONTENT
-            --------  -------
-                   0| [magic head] TsFile
-                   6| [version number] 000002
-||||||||||||||||||||| [Chunk Group] of root.group_12.d2, num of Chunks:3
-                  12| [Chunk] of s_INT64e_RLE, numOfPoints:10000, time range:[1,10000], tsDataType:INT64, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   2 pages
-                 677| [Chunk] of s_INT64e_TS_2DIFF, numOfPoints:10000, time range:[1,10000], tsDataType:INT64, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-                1349| [Chunk] of s_INT64e_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:INT64, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   2 pages
-                5766| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d2
-                    |   [dataSize] 5754
-                    |   [num of chunks] 3
-||||||||||||||||||||| [Chunk Group] of root.group_12.d2 ends
-                5799| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-||||||||||||||||||||| [Chunk Group] of root.group_12.d1, num of Chunks:3
-                5808| [Chunk] of s_INT32e_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:INT32, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-                8231| [Chunk] of s_INT32e_TS_2DIFF, numOfPoints:10000, time range:[1,10000], tsDataType:INT32, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-                8852| [Chunk] of s_INT32e_RLE, numOfPoints:10000, time range:[1,10000], tsDataType:INT32, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-                9399| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d1
-                    |   [dataSize] 3591
-                    |   [num of chunks] 3
-||||||||||||||||||||| [Chunk Group] of root.group_12.d1 ends
-                9432| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-||||||||||||||||||||| [Chunk Group] of root.group_12.d0, num of Chunks:2
-                9441| [Chunk] of s_BOOLEANe_RLE, numOfPoints:10000, time range:[1,10000], tsDataType:BOOLEAN, 
-                      startTime: 1 endTime: 10000 count: 10000 [firstValue:true,lastValue:true]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-                9968| [Chunk] of s_BOOLEANe_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:BOOLEAN, 
-                      startTime: 1 endTime: 10000 count: 10000 [firstValue:true,lastValue:true]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               10961| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d0
-                    |   [dataSize] 1520
-                    |   [num of chunks] 2
-||||||||||||||||||||| [Chunk Group] of root.group_12.d0 ends
-               10994| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-||||||||||||||||||||| [Chunk Group] of root.group_12.d5, num of Chunks:1
-               11003| [Chunk] of s_TEXTe_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:TEXT, 
-                      startTime: 1 endTime: 10000 count: 10000 [firstValue:version_test,lastValue:version_test]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   3 pages
-               19278| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d5
-                    |   [dataSize] 8275
-                    |   [num of chunks] 1
-||||||||||||||||||||| [Chunk Group] of root.group_12.d5 ends
-               19311| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-||||||||||||||||||||| [Chunk Group] of root.group_12.d4, num of Chunks:4
-               19320| [Chunk] of s_DOUBLEe_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:DOUBLE, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00000000123]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   2 pages
-               23740| [Chunk] of s_DOUBLEe_TS_2DIFF, numOfPoints:10000, time range:[1,10000], tsDataType:DOUBLE, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000002045]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               24414| [Chunk] of s_DOUBLEe_GORILLA, numOfPoints:10000, time range:[1,10000], tsDataType:DOUBLE, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000002045]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               25054| [Chunk] of s_DOUBLEe_RLE, numOfPoints:10000, time range:[1,10000], tsDataType:DOUBLE, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000001224]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   2 pages
-               25717| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d4
-                    |   [dataSize] 6397
-                    |   [num of chunks] 4
-||||||||||||||||||||| [Chunk Group] of root.group_12.d4 ends
-               25750| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-||||||||||||||||||||| [Chunk Group] of root.group_12.d3, num of Chunks:4
-               25759| [Chunk] of s_FLOATe_GORILLA, numOfPoints:10000, time range:[1,10000], tsDataType:FLOAT, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               26375| [Chunk] of s_FLOATe_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:FLOAT, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               28796| [Chunk] of s_FLOATe_RLE, numOfPoints:10000, time range:[1,10000], tsDataType:FLOAT, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               29343| [Chunk] of s_FLOATe_TS_2DIFF, numOfPoints:10000, time range:[1,10000], tsDataType:FLOAT, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               29967| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d3
-                    |   [dataSize] 4208
-                    |   [num of chunks] 4
-||||||||||||||||||||| [Chunk Group] of root.group_12.d3 ends
-               30000| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-               30009| [marker] 2
-               30010| [ChunkIndexList] of root.group_12.d0.s_BOOLEANe_PLAIN, tsDataType:BOOLEAN
-                    | [startTime: 1 endTime: 10000 count: 10000 [firstValue:true,lastValue:true]] 
-               30066| [ChunkIndexList] of root.group_12.d0.s_BOOLEANe_RLE, tsDataType:BOOLEAN
-                    | [startTime: 1 endTime: 10000 count: 10000 [firstValue:true,lastValue:true]] 
-               30120| [ChunkIndexList] of root.group_12.d1.s_INT32e_PLAIN, tsDataType:INT32
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30196| [ChunkIndexList] of root.group_12.d1.s_INT32e_RLE, tsDataType:INT32
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30270| [ChunkIndexList] of root.group_12.d1.s_INT32e_TS_2DIFF, tsDataType:INT32
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30349| [ChunkIndexList] of root.group_12.d2.s_INT64e_PLAIN, tsDataType:INT64
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30441| [ChunkIndexList] of root.group_12.d2.s_INT64e_RLE, tsDataType:INT64
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30531| [ChunkIndexList] of root.group_12.d2.s_INT64e_TS_2DIFF, tsDataType:INT64
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30626| [ChunkIndexList] of root.group_12.d3.s_FLOATe_GORILLA, tsDataType:FLOAT
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]] 
-               30704| [ChunkIndexList] of root.group_12.d3.s_FLOATe_PLAIN, tsDataType:FLOAT
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]] 
-               30780| [ChunkIndexList] of root.group_12.d3.s_FLOATe_RLE, tsDataType:FLOAT
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]] 
-               30854| [ChunkIndexList] of root.group_12.d3.s_FLOATe_TS_2DIFF, tsDataType:FLOAT
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]] 
-               30933| [ChunkIndexList] of root.group_12.d4.s_DOUBLEe_GORILLA, tsDataType:DOUBLE
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000002045]] 
-               31028| [ChunkIndexList] of root.group_12.d4.s_DOUBLEe_PLAIN, tsDataType:DOUBLE
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00000000123]] 
-               31121| [ChunkIndexList] of root.group_12.d4.s_DOUBLEe_RLE, tsDataType:DOUBLE
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000001224]] 
-               31212| [ChunkIndexList] of root.group_12.d4.s_DOUBLEe_TS_2DIFF, tsDataType:DOUBLE
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000002045]] 
-               31308| [ChunkIndexList] of root.group_12.d5.s_TEXTe_PLAIN, tsDataType:TEXT
-                    | [startTime: 1 endTime: 10000 count: 10000 [firstValue:version_test,lastValue:version_test]] 
-               32840| [MetadataIndex] of root.group_12.d0
-               32881| [MetadataIndex] of root.group_12.d1
-               32920| [MetadataIndex] of root.group_12.d2
-               32959| [MetadataIndex] of root.group_12.d3
-               33000| [MetadataIndex] of root.group_12.d4
-               33042| [MetadataIndex] of root.group_12.d5
-               33080| [IndexOfTimeseriesIndex]
-                    |   [num of devices] 6
-                    |   6 key&TsMetadataIndex
-                    |   [totalChunkNum] 17
-                    |   [invalidChunkNum] 0
-                    |   [bloom filter bit vector byte array length] 32
-                    |   [bloom filter bit vector byte array] 
-                    |   [bloom filter number of bits] 256
-                    |   [bloom filter number of hash functions] 5
-               33426| [IndexOfTimeseriesIndexSize] 346
-               33430| [magic tail] TsFile
-               33436| END of TsFile
-
+file length: 15462
+14:40:55.619 [main] INFO org.apache.iotdb.tsfile.read.TsFileSequenceReader - Start reading file test.tsfile metadata from 15356, length 96
+
+            POSITION|	CONTENT
+            -------- 	-------
+                   0|	[magic head] TsFile
+                   6|	[version number] 3
+|||||||||||||||||||||	[Chunk Group] of root.sg_1.d1, num of Chunks:4
+                   7|	[Chunk Group Header]
+                    |		[marker] 0
+                    |		[deviceID] root.sg_1.d1
+                  21|	[Chunk] of s6, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:6,maxValue:9996,firstValue:6,lastValue:9996,sumValue:5001000.0]
+                    |		[chunk header] marker=5, measurementId=s6, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+                1856|	[Chunk] of s4, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:4,maxValue:9994,firstValue:4,lastValue:9994,sumValue:4999000.0]
+                    |		[chunk header] marker=5, measurementId=s4, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+                3691|	[Chunk] of s2, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:3,maxValue:9993,firstValue:3,lastValue:9993,sumValue:4998000.0]
+                    |		[chunk header] marker=5, measurementId=s2, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+                5526|	[Chunk] of s5, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:5,maxValue:9995,firstValue:5,lastValue:9995,sumValue:5000000.0]
+                    |		[chunk header] marker=5, measurementId=s5, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+|||||||||||||||||||||	[Chunk Group] of root.sg_1.d1 ends
+|||||||||||||||||||||	[Chunk Group] of root.sg_1.d2, num of Chunks:4
+                7361|	[Chunk Group Header]
+                    |		[marker] 0
+                    |		[deviceID] root.sg_1.d2
+                7375|	[Chunk] of s2, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:3,maxValue:9993,firstValue:3,lastValue:9993,sumValue:4998000.0]
+                    |		[chunk header] marker=5, measurementId=s2, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+                9210|	[Chunk] of s4, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:4,maxValue:9994,firstValue:4,lastValue:9994,sumValue:4999000.0]
+                    |		[chunk header] marker=5, measurementId=s4, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+               11045|	[Chunk] of s6, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:6,maxValue:9996,firstValue:6,lastValue:9996,sumValue:5001000.0]
+                    |		[chunk header] marker=5, measurementId=s6, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+               12880|	[Chunk] of s5, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:5,maxValue:9995,firstValue:5,lastValue:9995,sumValue:5000000.0]
+                    |		[chunk header] marker=5, measurementId=s5, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+|||||||||||||||||||||	[Chunk Group] of root.sg_1.d2 ends
+               14715|	[marker] 2
+               14716|	[TimeseriesIndex] of root.sg_1.d1.s2, tsDataType:INT64
+                    |		[ChunkIndex] s2, offset=3691
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:3,maxValue:9993,firstValue:3,lastValue:9993,sumValue:4998000.0]] 
+               14788|	[TimeseriesIndex] of root.sg_1.d1.s4, tsDataType:INT64
+                    |		[ChunkIndex] s4, offset=1856
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:4,maxValue:9994,firstValue:4,lastValue:9994,sumValue:4999000.0]] 
+               14860|	[TimeseriesIndex] of root.sg_1.d1.s5, tsDataType:INT64
+                    |		[ChunkIndex] s5, offset=5526
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:5,maxValue:9995,firstValue:5,lastValue:9995,sumValue:5000000.0]] 
+               14932|	[TimeseriesIndex] of root.sg_1.d1.s6, tsDataType:INT64
+                    |		[ChunkIndex] s6, offset=21
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:6,maxValue:9996,firstValue:6,lastValue:9996,sumValue:5001000.0]] 
+               15004|	[TimeseriesIndex] of root.sg_1.d2.s2, tsDataType:INT64
+                    |		[ChunkIndex] s2, offset=7375
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:3,maxValue:9993,firstValue:3,lastValue:9993,sumValue:4998000.0]] 
+               15076|	[TimeseriesIndex] of root.sg_1.d2.s4, tsDataType:INT64
+                    |		[ChunkIndex] s4, offset=9210
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:4,maxValue:9994,firstValue:4,lastValue:9994,sumValue:4999000.0]] 
+               15148|	[TimeseriesIndex] of root.sg_1.d2.s5, tsDataType:INT64
+                    |		[ChunkIndex] s5, offset=12880
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:5,maxValue:9995,firstValue:5,lastValue:9995,sumValue:5000000.0]] 
+               15220|	[TimeseriesIndex] of root.sg_1.d2.s6, tsDataType:INT64
+                    |		[ChunkIndex] s6, offset=11045
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:6,maxValue:9996,firstValue:6,lastValue:9996,sumValue:5001000.0]] 
+|||||||||||||||||||||
+               15292|	[IndexOfTimerseriesIndex Node] type=LEAF_MEASUREMENT
+                    |		<s2, 14716>
+                    |		<s6, 14932>
+                    |		<endOffset, 15004>
+               15324|	[IndexOfTimerseriesIndex Node] type=LEAF_MEASUREMENT
+                    |		<s2, 15004>
+                    |		<s6, 15220>
+                    |		<endOffset, 15292>
+               15356|	[TsFileMetadata]
+                    |		[meta offset] 14715
+                    |		[num of devices] 2
+                    |		2 key&TsMetadataIndex
+                    |		[bloom filter bit vector byte array length] 32
+                    |		[bloom filter bit vector byte array] 
+                    |		[bloom filter number of bits] 256
+                    |		[bloom filter number of hash functions] 5
+               15452|	[TsFileMetadataSize] 96
+               15456|	[magic tail] TsFile
+               15462|	END of TsFile
+---------------------------- IndexOfTimerseriesIndex Tree -----------------------------
+	[MetadataIndex:LEAF_DEVICE]
+	└───[root.sg_1.d1,15292]
+			[MetadataIndex:LEAF_MEASUREMENT]
+			└───[s2,14716]
+			└───[s6,14932]
+	└───[root.sg_1.d2,15324]
+			[MetadataIndex:LEAF_MEASUREMENT]
+			└───[s2,15004]
+			└───[s6,15220]
 ---------------------------------- TsFile Sketch End ----------------------------------
 
 ```
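+
+For programmatic access to the same information, a minimal sketch using `TsFileSequenceReader` follows; the method names (`readFileMetadata`, `getMetaOffset`, `getAllDevices`) are taken from the 0.12-era reader API and should be treated as assumptions here:
+
+```java
+import org.apache.iotdb.tsfile.file.metadata.TsFileMetadata;
+import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
+
+public class ReadMetadataSketch {
+  public static void main(String[] args) throws Exception {
+    TsFileSequenceReader reader = new TsFileSequenceReader("test.tsfile");
+    // The tail metadata that the sketch above prints at position 15356.
+    TsFileMetadata metadata = reader.readFileMetadata();
+    System.out.println("meta offset: " + metadata.getMetaOffset()); // 14715 above
+    System.out.println("devices: " + reader.getAllDevices());
+    reader.close();
+  }
+}
+```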
@@ -774,3 +738,28 @@ Plot results:
 ![3](https://user-images.githubusercontent.com/33376433/123760418-66e70200-d8f3-11eb-8701-437afd73ac4c.png)
 ![4](https://user-images.githubusercontent.com/33376433/123760424-69e1f280-d8f3-11eb-9f45-571496685a6e.png)
 ![5](https://user-images.githubusercontent.com/33376433/123760433-6cdce300-d8f3-11eb-8ecd-da04a475af41.png)
+
+
+
+## Appendix
+
+- **Big Endian** (a byte-level sketch of these conventions appears at the end of this appendix)
+	- For example, the `int` `0x8` will be stored as `00 00 00 08`, rather than `08 00 00 00`.
+- **String with Variable Length**
+	- The format is `int size` plus `String literal`. Size can be zero.
+	- Size equals the number of bytes this string takes, which may not equal the length of the string.
+	- For example, "sensor_1" will be stored as `00 00 00 08` plus the encoding (ASCII) of "sensor_1".
+	- Note that for the file signature "TsFile000001" (`MAGIC STRING` + `Version Number`), the size (12) and encoding (ASCII)
+		are fixed, so there is no need to put the size before this string literal.
+- **Compressing Type Hardcode**
+	- 0: UNCOMPRESSED
+	- 1: SNAPPY
+	- 2: GZIP
+	- 3: LZO
+	- 4: SDT
+	- 5: PAA
+	- 6: PLA
+	- 7: LZ4
+
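+A minimal sketch of the big-endian and length-prefixed-string conventions above, using `java.nio.ByteBuffer` (big-endian by default); the class name is illustrative only:
+
+```java
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+
+public class VarStorageSketch {
+  public static void main(String[] args) {
+    byte[] str = "sensor_1".getBytes(StandardCharsets.US_ASCII);
+    ByteBuffer buf = ByteBuffer.allocate(4 + 4 + str.length);
+    buf.putInt(0x8);        // big-endian int: 00 00 00 08
+    buf.putInt(str.length); // size prefix of the string: 00 00 00 08
+    buf.put(str);           // the string literal itself
+    // Prints: 00 00 00 08 00 00 00 08 73 65 6E 73 6F 72 5F 31
+    for (byte b : buf.array()) {
+      System.out.printf("%02X ", b);
+    }
+  }
+}
+```
+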
diff --git a/docs/zh/SystemDesign/TsFile/Format.md b/docs/zh/SystemDesign/TsFile/Format.md
index 561d4fc..1d8d867 100644
--- a/docs/zh/SystemDesign/TsFile/Format.md
+++ b/docs/zh/SystemDesign/TsFile/Format.md
@@ -7,9 +7,9 @@
     to you under the Apache License, Version 2.0 (the
     "License"); you may not use this file except in compliance
     with the License.  You may obtain a copy of the License at
-
+    
         http://www.apache.org/licenses/LICENSE-2.0
-
+    
     Unless required by applicable law or agreed to in writing,
     software distributed under the License is distributed on an
     "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@@ -27,39 +27,63 @@
 
 ### 1.1 变量的存储
 
-- **大端存储**
-  - 比如: `int` `0x8` 将会被存储为 `00 00 00 08`, 而不是 `08 00 00 00`
-- **可变长的字符串类型**
-  - 存储的方式是以一个 `int` 类型的 `Size` + 字符串组成。`Size` 的值可以为 0。
-  - `Size` 指的是字符串所占的字节数,它并不一定等于字符串的长度。 
-  - 举例来说,"sensor_1" 这个字符串将被存储为 `00 00 00 08` + "sensor_1" (ASCII 编码)。
-  - 另外需要注意的一点是文件签名 "TsFile000001" (`Magic String` + `Version`), 因为他的 `Size(12)` 和 ASCII 编码值是固定的,所以没有必要在这个字符串前的写入 `Size` 值。
 - **数据类型**
+  
   - 0: BOOLEAN
   - 1: INT32 (`int`)
   - 2: INT64 (`long`)
   - 3: FLOAT
   - 4: DOUBLE
   - 5: TEXT (`String`)
+  
 - **编码类型**
-  - 0: PLAIN
-  - 1: DICTIONARY
-  - 2: RLE
-  - 3: DIFF
-  - 4: TS_2DIFF
-  - 5: BITMAP
-  - 6: GORILLA_V1
-  - 7: REGULAR 
-  - 8: GORILLA 
-- **压缩类型**
-  - 0: UNCOMPRESSED
-  - 1: SNAPPY
-  - 2: GZIP
-  - 3: LZO
-  - 4: SDT
-  - 5: PAA
-  - 6: PLA
-  - 7: LZ4
+
+	为了提高数据的存储效率,需要在数据写入的过程中对数据进行编码,从而减少磁盘空间的使用量。在写数据以及读数据的过程中都能够减少I/O操作的数据量从而提高性能。IoTDB支持多种针对不同类型的数据的编码方法:
+
+	- **0: PLAIN**
+
+		- PLAIN编码,默认的编码方式,即不编码,支持多种数据类型,压缩和解压缩的时间效率较高,但空间存储效率较低。
+
+	- **1: DICTIONARY**
+
+		- 字典编码是一种无损编码。它适合编码基数小的数据(即数据去重后唯一值数量小)。不推荐用于基数大的数据。
+
+	- **2: RLE**
+
+		- 游程编码,比较适合存储某些整数值连续出现的序列,不适合编码大部分情况下前后值不一样的序列数据。
+
+		- 游程编码也可用于对浮点数进行编码,但在创建时间序列的时候需指定保留小数位数(MAX_POINT_NUMBER,具体指定方式参见[SQL 参考文档](../../UserGuide/Appendix/SQL-Reference.md))。比较适合存储某些浮点数值连续出现的序列数据,不适合存储对小数点后精度要求较高以及前后波动较大的序列数据。
+
+			> 游程编码(RLE)和二阶差分编码(TS_2DIFF)对 float 和 double 的编码是有精度限制的,默认保留2位小数。推荐使用 GORILLA。
+
+	- **3: DIFF**
+
+	- **4: TS_2DIFF**
+
+		- 二阶差分编码,比较适合编码单调递增或者递减的序列数据,不适合编码波动较大的数据。
+
+	- **5: BITMAP**
+
+	- **6: GORILLA_V1**
+
+		- GORILLA编码是一种无损编码,它比较适合编码前后值比较接近的数值序列,不适合编码前后波动较大的数据。
+		- 当前系统中存在两个版本的GORILLA编码实现,推荐使用`GORILLA`,不推荐使用`GORILLA_V1`(已过时)。
+		- 使用限制:使用Gorilla编码INT32数据时,需要保证序列中不存在值为`Integer.MIN_VALUE`的数据点;使用Gorilla编码INT64数据时,需要保证序列中不存在值为`Long.MIN_VALUE`的数据点。
+
+	- **7: REGULAR**
+
+	- **8: GORILLA**
+
+- **数据类型与支持编码的对应关系**
+
+	| 数据类型 |          支持的编码           |
+	| :------: | :---------------------------: |
+	| BOOLEAN  |          PLAIN, RLE           |
+	|  INT32   | PLAIN, RLE, TS_2DIFF, GORILLA |
+	|  INT64   | PLAIN, RLE, TS_2DIFF, GORILLA |
+	|  FLOAT   | PLAIN, RLE, TS_2DIFF, GORILLA |
+	|  DOUBLE  | PLAIN, RLE, TS_2DIFF, GORILLA |
+	|   TEXT   |       PLAIN, DICTIONARY       |
 
 ### 1.2 TsFile 概述
 
@@ -217,34 +241,64 @@ PageHeader 结构:
 
 所有的索引节点构成一棵类 B+树结构的**索引树(二级索引)**,这棵树由两部分组成:实体索引部分和物理量索引部分。索引节点类型有四种,分别是`INTERNAL_ENTITY`、`LEAF_ENTITY`、`INTERNAL_MEASUREMENT`、`LEAF_MEASUREMENT`,分别对应实体索引部分的中间节点和叶子节点,和物理量索引部分的中间节点和叶子节点。 只有物理量索引部分的叶子节点 (`LEAF_MEASUREMENT`) 指向 `TimeseriesIndex`。
 
-下面,我们使用四个例子来加以详细说明。
+考虑多元时间序列的引入,每个多元时间序列称为一个vector,有一个`TimeColumn`,例如d1实体下的多元时间序列vector1,有s1、s2两个物理量,即`d1.vector1.(s1,s2)`,我们称vector1为`TimeColumn`,在存储的时候需要多存储一个vector1的Chunk。
+
+构建`IndexOfTimeseriesIndex`时,对于多元时间序列的非`TimeValue`的物理量,使用与`TimeValue`拼接的方式,例如`vector1.s1`视为“物理量”。
+
+> 注:从0.13起,系统支持[多元时间序列](https://iotdb.apache.org/zh/UserGuide/Master/Data-Concept/Data-Model-and-Terminology.html)(Multi-variable timeseries 或 Aligned timeseries),一个实体的一个多元物理量对应一个多元时间序列。这些时间序列称为**多元时间序列**,也叫**对齐时间序列**。多元时间序列需要被同时创建、同时插入值,删除时也必须同时删除,一组多元序列的时间戳列在内存和磁盘中仅需存储一次,而不是每个时间序列存储一次。
+>
+> ![img](https://cwiki.apache.org/confluence/download/attachments/184617773/image-20210720151044629.png?version=1&modificationDate=1626773824000&api=v2)
+
+下面,我们使用七个例子来加以详细说明。
 
 索引树节点的度(即每个节点的最大子节点个数)可以由用户进行配置,配置项为`max_degree_of_index_node`,其默认值为 256。在以下例子中,我们假定 `max_degree_of_index_node = 10`。
 
-* 例 1:5 个实体,每个实体有 5 个物理量
+需要注意的是,在索引树的每类节点(ENTITY、MEASUREMENT)中,键按照字典序排列。在下面的例子中,若i<j,假设字典序di<dj。(否则,实际上[d1,d2,...d10]的字典序排列应该为[d1,d10,d2,...d9])
+
+其中,例1\~4为一元时间序列的例子,例5\~6为多元时间序列的例子,例7为综合例子。
+
+* **例 1:5 个实体,每个实体有 5 个物理量**
 
 <img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/19167280/125254013-9d2d7400-e32c-11eb-9f95-1663e14cffbb.png">
 
 在 5 个实体,每个实体有 5 个物理量的情况下,由于实体数和物理量数均不超过 `max_degree_of_index_node`,因此索引树只有默认的物理量部分。在这部分中,每个 IndexNode 最多由 10 个 IndexEntry 组成。根节点是 `LEAF_ENTITY` 类型,其中的 5 个 IndexEntry 指向对应的实体的 IndexNode,这些节点直接指向 `TimeseriesIndex`,是 `LEAF_MEASUREMENT`。
 
-* 例 2:1 个实体,150 个物理量
+* **例 2:1 个实体,150 个物理量**
 
 <img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/19167280/125254022-a0c0fb00-e32c-11eb-8fd1-462936358288.png">
 
 在 1 个实体,实体中有 150 个物理量的情况下,物理量个数超过了 `max_degree_of_index_node`,索引树有默认的物理量层级。在这个层级里,每个 IndexNode 最多由 10 个 IndexEntry 组成。直接指向 `TimeseriesIndex`的节点类型均为 `LEAF_MEASUREMENT`;而后续产生的中间节点不是物理量索引层级的叶子节点,这些节点是 `INTERNAL_MEASUREMENT`;根节点是 `LEAF_ENTITY` 类型。
 
-* 例 3:150 个实体,每个实体有 1 个物理量
+* **例 3:150 个实体,每个实体有 1 个物理量**
 
 <img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/19167280/122771008-9a64d380-d2d8-11eb-9044-5ac794dd38f7.png">
 
 在 150 个实体,每个实体中有 1 个物理量的情况下,实体个数超过了 `max_degree_of_index_node`,形成索引树的物理量层级和实体索引层级。在这两个层级里,每个 IndexNode 最多由 10 个 IndexEntry 组成。直接指向 `TimeseriesIndex` 的节点类型为 `LEAF_MEASUREMENT`,物理量索引层级的根节点同时作为实体索引层级的叶子节点,其节点类型为 `LEAF_ENTITY`;而后续产生的中间节点和根节点不是实体索引层级的叶子节点,因此节点类型为 `INTERNAL_ENTITY`。
 
-* 例 4:150 个实体,每个实体有 150 个物理量
+* **例 4:150 个实体,每个实体有 150 个物理量**
 
 <img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/19167280/122677241-1a753580-d214-11eb-817f-17bcf797251f.png">
 
 在 150 个实体,每个实体中有 150 个物理量的情况下,物理量和实体个数均超过了 `max_degree_of_index_node`,形成索引树的物理量层级和实体索引层级。在这两个层级里,每个 IndexNode 均最多由 10 个 IndexEntry 组成。如前所述,从根节点到实体索引层级的叶子节点,类型分别为`INTERNAL_ENTITY` 和 `LEAF_ENTITY`,而每个实体索引层级的叶子节点都是物理量索引层级的根节点,从这里到物理量索引层级的叶子节点,类型分别为`INTERNAL_MEASUREMENT` 和 `LEAF_MEASUREMENT`。
 
+- **例 5:1 个实体,18 个物理量,2 个多元时间序列组,每个多元时间序列组分别有 9 个物理量**
+
+![img](https://cwiki.apache.org/confluence/download/attachments/184617773/tsFileVectorIndexCase5.png?version=2&modificationDate=1626952911868&api=v2)
+
+- **例 6:1 个实体,30 个物理量,2 个多元时间序列组,每个多元时间序列组分别有 15 个物理量**
+
+![img](https://cwiki.apache.org/confluence/download/attachments/184617773/tsFileVectorIndexCase6.png?version=2&modificationDate=1626952911054&api=v2)
+
+- **例 7:2 个实体,每个实体的物理量如下表所示**
+
+| d0                                 | d1                                 |
+| :--------------------------------- | :--------------------------------- |
+| 【一元时间序列】s0,s1...,s4        | 【一元时间序列】s0,s1,...s14       |
+| 【多元时间序列】v0.(s5,s6,...,s14) | 【多元时间序列】v0.(s15,s16,..s18) |
+| 【一元时间序列】z15,z16,..,z18     |                                    |
+
+![img](https://cwiki.apache.org/confluence/download/attachments/184617773/tsFileVectorIndexCase7.png?version=2&modificationDate=1626952910746&api=v2)
+
 索引采用树形结构进行设计的目的是在实体数或者物理量数量过大时,可以不用一次读取所有的 `TimeseriesIndex`,只需要根据所读取的物理量定位对应的节点,从而减少 I/O,加快查询速度。有关 TsFile 的读流程将在本章最后一节加以详细说明。
 
 #### 1.2.4 Magic String
@@ -385,211 +439,118 @@ TsFile path:test.tsfile
 Sketch save path:TsFile_sketch_view.txt
 -------------------------------- TsFile Sketch --------------------------------
 file path: test.tsfile
-file length: 33436
-
-            POSITION| CONTENT
-            --------  -------
-                   0| [magic head] TsFile
-                   6| [version number] 000002
-||||||||||||||||||||| [Chunk Group] of root.group_12.d2, num of Chunks:3
-                  12| [Chunk] of s_INT64e_RLE, numOfPoints:10000, time range:[1,10000], tsDataType:INT64, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   2 pages
-                 677| [Chunk] of s_INT64e_TS_2DIFF, numOfPoints:10000, time range:[1,10000], tsDataType:INT64, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-                1349| [Chunk] of s_INT64e_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:INT64, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   2 pages
-                5766| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d2
-                    |   [dataSize] 5754
-                    |   [num of chunks] 3
-||||||||||||||||||||| [Chunk Group] of root.group_12.d2 ends
-                5799| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-||||||||||||||||||||| [Chunk Group] of root.group_12.d1, num of Chunks:3
-                5808| [Chunk] of s_INT32e_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:INT32, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-                8231| [Chunk] of s_INT32e_TS_2DIFF, numOfPoints:10000, time range:[1,10000], tsDataType:INT32, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-                8852| [Chunk] of s_INT32e_RLE, numOfPoints:10000, time range:[1,10000], tsDataType:INT32, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-                9399| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d1
-                    |   [dataSize] 3591
-                    |   [num of chunks] 3
-||||||||||||||||||||| [Chunk Group] of root.group_12.d1 ends
-                9432| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-||||||||||||||||||||| [Chunk Group] of root.group_12.d0, num of Chunks:2
-                9441| [Chunk] of s_BOOLEANe_RLE, numOfPoints:10000, time range:[1,10000], tsDataType:BOOLEAN, 
-                      startTime: 1 endTime: 10000 count: 10000 [firstValue:true,lastValue:true]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-                9968| [Chunk] of s_BOOLEANe_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:BOOLEAN, 
-                      startTime: 1 endTime: 10000 count: 10000 [firstValue:true,lastValue:true]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               10961| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d0
-                    |   [dataSize] 1520
-                    |   [num of chunks] 2
-||||||||||||||||||||| [Chunk Group] of root.group_12.d0 ends
-               10994| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-||||||||||||||||||||| [Chunk Group] of root.group_12.d5, num of Chunks:1
-               11003| [Chunk] of s_TEXTe_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:TEXT, 
-                      startTime: 1 endTime: 10000 count: 10000 [firstValue:version_test,lastValue:version_test]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   3 pages
-               19278| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d5
-                    |   [dataSize] 8275
-                    |   [num of chunks] 1
-||||||||||||||||||||| [Chunk Group] of root.group_12.d5 ends
-               19311| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-||||||||||||||||||||| [Chunk Group] of root.group_12.d4, num of Chunks:4
-               19320| [Chunk] of s_DOUBLEe_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:DOUBLE, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00000000123]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   2 pages
-               23740| [Chunk] of s_DOUBLEe_TS_2DIFF, numOfPoints:10000, time range:[1,10000], tsDataType:DOUBLE, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000002045]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               24414| [Chunk] of s_DOUBLEe_GORILLA, numOfPoints:10000, time range:[1,10000], tsDataType:DOUBLE, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000002045]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               25054| [Chunk] of s_DOUBLEe_RLE, numOfPoints:10000, time range:[1,10000], tsDataType:DOUBLE, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000001224]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   2 pages
-               25717| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d4
-                    |   [dataSize] 6397
-                    |   [num of chunks] 4
-||||||||||||||||||||| [Chunk Group] of root.group_12.d4 ends
-               25750| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-||||||||||||||||||||| [Chunk Group] of root.group_12.d3, num of Chunks:4
-               25759| [Chunk] of s_FLOATe_GORILLA, numOfPoints:10000, time range:[1,10000], tsDataType:FLOAT, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               26375| [Chunk] of s_FLOATe_PLAIN, numOfPoints:10000, time range:[1,10000], tsDataType:FLOAT, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               28796| [Chunk] of s_FLOATe_RLE, numOfPoints:10000, time range:[1,10000], tsDataType:FLOAT, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               29343| [Chunk] of s_FLOATe_TS_2DIFF, numOfPoints:10000, time range:[1,10000], tsDataType:FLOAT, 
-                      startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]
-                    |   [marker] 1
-                    |   [ChunkHeader]
-                    |   1 pages
-               29967| [Chunk Group Footer]
-                    |   [marker] 0
-                    |   [deviceID] root.group_12.d3
-                    |   [dataSize] 4208
-                    |   [num of chunks] 4
-||||||||||||||||||||| [Chunk Group] of root.group_12.d3 ends
-               30000| [Version Info]
-                    |   [marker] 3
-                    |   [version] 102
-               30009| [marker] 2
-               30010| [ChunkMetadataList] of root.group_12.d0.s_BOOLEANe_PLAIN, tsDataType:BOOLEAN
-                    | [startTime: 1 endTime: 10000 count: 10000 [firstValue:true,lastValue:true]] 
-               30066| [ChunkMetadataList] of root.group_12.d0.s_BOOLEANe_RLE, tsDataType:BOOLEAN
-                    | [startTime: 1 endTime: 10000 count: 10000 [firstValue:true,lastValue:true]] 
-               30120| [ChunkMetadataList] of root.group_12.d1.s_INT32e_PLAIN, tsDataType:INT32
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30196| [ChunkMetadataList] of root.group_12.d1.s_INT32e_RLE, tsDataType:INT32
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30270| [ChunkMetadataList] of root.group_12.d1.s_INT32e_TS_2DIFF, tsDataType:INT32
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30349| [ChunkMetadataList] of root.group_12.d2.s_INT64e_PLAIN, tsDataType:INT64
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30441| [ChunkMetadataList] of root.group_12.d2.s_INT64e_RLE, tsDataType:INT64
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30531| [ChunkMetadataList] of root.group_12.d2.s_INT64e_TS_2DIFF, tsDataType:INT64
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1,maxValue:1,firstValue:1,lastValue:1,sumValue:10000.0]] 
-               30626| [ChunkMetadataList] of root.group_12.d3.s_FLOATe_GORILLA, tsDataType:FLOAT
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]] 
-               30704| [ChunkMetadataList] of root.group_12.d3.s_FLOATe_PLAIN, tsDataType:FLOAT
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]] 
-               30780| [ChunkMetadataList] of root.group_12.d3.s_FLOATe_RLE, tsDataType:FLOAT
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]] 
-               30854| [ChunkMetadataList] of root.group_12.d3.s_FLOATe_TS_2DIFF, tsDataType:FLOAT
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00023841858]] 
-               30933| [ChunkMetadataList] of root.group_12.d4.s_DOUBLEe_GORILLA, tsDataType:DOUBLE
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000002045]] 
-               31028| [ChunkMetadataList] of root.group_12.d4.s_DOUBLEe_PLAIN, tsDataType:DOUBLE
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.00000000123]] 
-               31121| [ChunkMetadataList] of root.group_12.d4.s_DOUBLEe_RLE, tsDataType:DOUBLE
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000001224]] 
-               31212| [ChunkMetadataList] of root.group_12.d4.s_DOUBLEe_TS_2DIFF, tsDataType:DOUBLE
-                    | [startTime: 1 endTime: 10000 count: 10000 [minValue:1.1,maxValue:1.1,firstValue:1.1,lastValue:1.1,sumValue:11000.000000002045]] 
-               31308| [ChunkMetadataList] of root.group_12.d5.s_TEXTe_PLAIN, tsDataType:TEXT
-                    | [startTime: 1 endTime: 10000 count: 10000 [firstValue:version_test,lastValue:version_test]] 
-               32840| [MetadataIndex] of root.group_12.d0
-               32881| [MetadataIndex] of root.group_12.d1
-               32920| [MetadataIndex] of root.group_12.d2
-               32959| [MetadataIndex] of root.group_12.d3
-               33000| [MetadataIndex] of root.group_12.d4
-               33042| [MetadataIndex] of root.group_12.d5
-               33080| [TsFileMetadata]
-                    |   [num of devices] 6
-                    |   6 key&TsMetadataIndex
-                    |   [totalChunkNum] 17
-                    |   [invalidChunkNum] 0
-                    |   [bloom filter bit vector byte array length] 32
-                    |   [bloom filter bit vector byte array] 
-                    |   [bloom filter number of bits] 256
-                    |   [bloom filter number of hash functions] 5
-               33426| [TsFileMetadataSize] 346
-               33430| [magic tail] TsFile
-               33436| END of TsFile
-
+file length: 15462
+14:40:55.619 [main] INFO org.apache.iotdb.tsfile.read.TsFileSequenceReader - Start reading file test.tsfile metadata from 15356, length 96
+
+            POSITION|	CONTENT
+            -------- 	-------
+                   0|	[magic head] TsFile
+                   6|	[version number] 3
+|||||||||||||||||||||	[Chunk Group] of root.sg_1.d1, num of Chunks:4
+                   7|	[Chunk Group Header]
+                    |		[marker] 0
+                    |		[deviceID] root.sg_1.d1
+                  21|	[Chunk] of s6, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:6,maxValue:9996,firstValue:6,lastValue:9996,sumValue:5001000.0]
+                    |		[chunk header] marker=5, measurementId=s6, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+                1856|	[Chunk] of s4, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:4,maxValue:9994,firstValue:4,lastValue:9994,sumValue:4999000.0]
+                    |		[chunk header] marker=5, measurementId=s4, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+                3691|	[Chunk] of s2, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:3,maxValue:9993,firstValue:3,lastValue:9993,sumValue:4998000.0]
+                    |		[chunk header] marker=5, measurementId=s2, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+                5526|	[Chunk] of s5, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:5,maxValue:9995,firstValue:5,lastValue:9995,sumValue:5000000.0]
+                    |		[chunk header] marker=5, measurementId=s5, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+|||||||||||||||||||||	[Chunk Group] of root.sg_1.d1 ends
+|||||||||||||||||||||	[Chunk Group] of root.sg_1.d2, num of Chunks:4
+                7361|	[Chunk Group Header]
+                    |		[marker] 0
+                    |		[deviceID] root.sg_1.d2
+                7375|	[Chunk] of s2, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:3,maxValue:9993,firstValue:3,lastValue:9993,sumValue:4998000.0]
+                    |		[chunk header] marker=5, measurementId=s2, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+                9210|	[Chunk] of s4, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:4,maxValue:9994,firstValue:4,lastValue:9994,sumValue:4999000.0]
+                    |		[chunk header] marker=5, measurementId=s4, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+               11045|	[Chunk] of s6, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:6,maxValue:9996,firstValue:6,lastValue:9996,sumValue:5001000.0]
+                    |		[chunk header] marker=5, measurementId=s6, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+               12880|	[Chunk] of s5, numOfPoints:1000, time range:[0,999], tsDataType:INT64, 
+                     	startTime: 0 endTime: 999 count: 1000 [minValue:5,maxValue:9995,firstValue:5,lastValue:9995,sumValue:5000000.0]
+                    |		[chunk header] marker=5, measurementId=s5, dataSize=1826, serializedSize=9
+                    |		[chunk] java.nio.HeapByteBuffer[pos=0 lim=1826 cap=1826]
+                    |		[page]  CompressedSize:1822, UncompressedSize:1951
+|||||||||||||||||||||	[Chunk Group] of root.sg_1.d2 ends
+               14715|	[marker] 2
+               14716|	[TimeseriesIndex] of root.sg_1.d1.s2, tsDataType:INT64
+                    |		[ChunkIndex] s2, offset=3691
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:3,maxValue:9993,firstValue:3,lastValue:9993,sumValue:4998000.0]] 
+               14788|	[TimeseriesIndex] of root.sg_1.d1.s4, tsDataType:INT64
+                    |		[ChunkIndex] s4, offset=1856
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:4,maxValue:9994,firstValue:4,lastValue:9994,sumValue:4999000.0]] 
+               14860|	[TimeseriesIndex] of root.sg_1.d1.s5, tsDataType:INT64
+                    |		[ChunkIndex] s5, offset=5526
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:5,maxValue:9995,firstValue:5,lastValue:9995,sumValue:5000000.0]] 
+               14932|	[TimeseriesIndex] of root.sg_1.d1.s6, tsDataType:INT64
+                    |		[ChunkIndex] s6, offset=21
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:6,maxValue:9996,firstValue:6,lastValue:9996,sumValue:5001000.0]] 
+               15004|	[TimeseriesIndex] of root.sg_1.d2.s2, tsDataType:INT64
+                    |		[ChunkIndex] s2, offset=7375
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:3,maxValue:9993,firstValue:3,lastValue:9993,sumValue:4998000.0]] 
+               15076|	[TimeseriesIndex] of root.sg_1.d2.s4, tsDataType:INT64
+                    |		[ChunkIndex] s4, offset=9210
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:4,maxValue:9994,firstValue:4,lastValue:9994,sumValue:4999000.0]] 
+               15148|	[TimeseriesIndex] of root.sg_1.d2.s5, tsDataType:INT64
+                    |		[ChunkIndex] s5, offset=12880
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:5,maxValue:9995,firstValue:5,lastValue:9995,sumValue:5000000.0]] 
+               15220|	[TimeseriesIndex] of root.sg_1.d2.s6, tsDataType:INT64
+                    |		[ChunkIndex] s6, offset=11045
+                    |		[startTime: 0 endTime: 999 count: 1000 [minValue:6,maxValue:9996,firstValue:6,lastValue:9996,sumValue:5001000.0]] 
+|||||||||||||||||||||
+               15292|	[IndexOfTimeseriesIndex Node] type=LEAF_MEASUREMENT
+                    |		<s2, 14716>
+                    |		<s6, 14932>
+                    |		<endOffset, 15004>
+               15324|	[IndexOfTimeseriesIndex Node] type=LEAF_MEASUREMENT
+                    |		<s2, 15004>
+                    |		<s6, 15220>
+                    |		<endOffset, 15292>
+               15356|	[TsFileMetadata]
+                    |		[meta offset] 14715
+                    |		[num of devices] 2
+                    |		2 key&TsMetadataIndex
+                    |		[bloom filter bit vector byte array length] 32
+                    |		[bloom filter bit vector byte array] 
+                    |		[bloom filter number of bits] 256
+                    |		[bloom filter number of hash functions] 5
+               15452|	[TsFileMetadataSize] 96
+               15456|	[magic tail] TsFile
+               15462|	END of TsFile
+---------------------------- IndexOfTimeseriesIndex Tree -----------------------------
+	[MetadataIndex:LEAF_DEVICE]
+	└───[root.sg_1.d1,15292]
+			[MetadataIndex:LEAF_MEASUREMENT]
+			└───[s2,14716]
+			└───[s6,14932]
+	└───[root.sg_1.d2,15324]
+			[MetadataIndex:LEAF_MEASUREMENT]
+			└───[s2,15004]
+			└───[s6,15220]
 ---------------------------------- TsFile Sketch End ----------------------------------
 ```
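+
+The sketch above can be regenerated with the refactored tool. A minimal sketch of the invocation follows (the file names are only examples; running the tool's `main` with `{path, outFile}` arguments behaves the same way):
+
+```java
+import org.apache.iotdb.db.tools.TsFileSketchTool;
+
+import java.io.IOException;
+
+public class SketchToolExample {
+  public static void main(String[] args) throws IOException {
+    // prints the sketch to stdout and also writes it to sketch.out
+    new TsFileSketchTool("test.tsfile", "sketch.out").run();
+  }
+}
+```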
 
@@ -765,3 +726,22 @@ title("draw(timeMap,countMap,{'root.vehicle.d0.s0','root.vehicle.d0.s1'},true)")
 ![3](https://user-images.githubusercontent.com/33376433/123760418-66e70200-d8f3-11eb-8701-437afd73ac4c.png)
 ![4](https://user-images.githubusercontent.com/33376433/123760424-69e1f280-d8f3-11eb-9f45-571496685a6e.png)
 ![5](https://user-images.githubusercontent.com/33376433/123760433-6cdce300-d8f3-11eb-8ecd-da04a475af41.png)
+
+## Appendix
+
+- **Big Endian**
+	- For example, the `int` `0x8` is stored as `00 00 00 08`, not as `08 00 00 00`.
+- **Variable-Length String**
+	- The storage format is an `int` `Size` followed by the string itself. `Size` may be 0.
+	- `Size` is the number of bytes the string occupies; it is not necessarily equal to the length of the string.
+	- For example, the string "sensor_1" is stored as `00 00 00 08` plus the ASCII encoding of "sensor_1" (a serialization sketch follows this appendix).
+	- Note that for the file signature "TsFile000001" (`Magic String` + `Version`), the `Size(12)` and the ASCII encoding are fixed, so there is no need to write the `Size` before that string.
+- **Compression Type**
+	- 0: UNCOMPRESSED
+	- 1: SNAPPY
+	- 2: GZIP
+	- 3: LZO
+	- 4: SDT
+	- 5: PAA
+	- 6: PLA
+	- 7: LZ4
\ No newline at end of file
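+
+As an illustration of the big-endian and variable-length string rules above, here is a minimal sketch that is not taken from the TsFile codebase (the class name is hypothetical; `java.nio.ByteBuffer` is big-endian by default):
+
+```java
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+
+public class VarLengthStringExample {
+
+  // serialize a string as [int size][string bytes]; ByteBuffer writes the
+  // 4-byte size in big-endian order by default
+  public static byte[] serialize(String s) {
+    byte[] bytes = s.getBytes(StandardCharsets.US_ASCII);
+    ByteBuffer buffer = ByteBuffer.allocate(Integer.BYTES + bytes.length);
+    buffer.putInt(bytes.length); // e.g. "sensor_1" -> 00 00 00 08
+    buffer.put(bytes);
+    return buffer.array();
+  }
+}
+```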
diff --git a/docs/zh/UserGuide/Data-Concept/Encoding.md b/docs/zh/UserGuide/Data-Concept/Encoding.md
index 8cb97b9..d3f22a5 100644
--- a/docs/zh/UserGuide/Data-Concept/Encoding.md
+++ b/docs/zh/UserGuide/Data-Concept/Encoding.md
@@ -35,7 +35,7 @@ PLAIN 编码,默认的编码方式,即不编码,支持多种数据类型
 
 Run-length encoding is well suited to sequences in which some integer value occurs consecutively, and poorly suited to sequence data whose adjacent values mostly differ.
 
-Run-length encoding can also be used to encode floating-point numbers, but the number of decimal digits to keep (MAX_POINT_NUMBER; see the [SQL Reference](../Operation%20Manual/SQL%20Reference.md) of this document for how to specify it) must be given when creating the time series. It is well suited to sequence data in which some floating-point value occurs consecutively, and poorly suited to sequence data that needs high precision after the decimal point or fluctuates widely.
+Run-length encoding can also be used to encode floating-point numbers, but the number of decimal digits to keep (MAX_POINT_NUMBER; see the [SQL Reference](../Appendix/SQL-Reference.md) of this document for how to specify it) must be given when creating the time series. It is well suited to sequence data in which some floating-point value occurs consecutively, and poorly suited to sequence data that needs high precision after the decimal point or fluctuates widely.
 
 > Run-length encoding (RLE) and second-order differential encoding (TS_2DIFF) have limited precision for float and double, keeping 2 decimal places by default. GORILLA is recommended instead.
 
@@ -66,4 +66,4 @@ GORILLA 编码是一种无损编码,它比较适合编码前后值比较接近
 |DOUBLE	|PLAIN, RLE, TS_2DIFF, GORILLA|
 |TEXT	|PLAIN, DICTIONARY|
 
-</div>
+</div>
\ No newline at end of file
diff --git a/example/session/src/main/java/org/apache/iotdb/HybridTimeseriesSessionExample.java b/example/session/src/main/java/org/apache/iotdb/HybridTimeseriesSessionExample.java
new file mode 100644
index 0000000..993fcb3
--- /dev/null
+++ b/example/session/src/main/java/org/apache/iotdb/HybridTimeseriesSessionExample.java
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb;
+
+import org.apache.iotdb.rpc.IoTDBConnectionException;
+import org.apache.iotdb.rpc.StatementExecutionException;
+import org.apache.iotdb.session.Session;
+import org.apache.iotdb.session.SessionDataSet;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.write.record.Tablet;
+import org.apache.iotdb.tsfile.write.schema.IMeasurementSchema;
+import org.apache.iotdb.tsfile.write.schema.VectorMeasurementSchema;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * This example shows how to insert and select hybrid timeseries through a Session. A hybrid
+ * timeseries set includes both aligned timeseries and normal timeseries.
+ */
+public class HybridTimeseriesSessionExample {
+
+  private static Session session;
+  private static final String ROOT_SG1_D1_VECTOR1 = "root.sg_1.d1.vector";
+  private static final String ROOT_SG1_D1 = "root.sg_1.d1";
+  private static final String ROOT_SG1_D2 = "root.sg_1.d2";
+
+  public static void main(String[] args)
+      throws IoTDBConnectionException, StatementExecutionException {
+    session = new Session("127.0.0.1", 6667, "root", "root");
+    session.open(false);
+
+    // set session fetchSize
+    session.setFetchSize(10000);
+
+    insertRecord(ROOT_SG1_D2, 0, 100);
+    insertTabletWithAlignedTimeseriesMethod(0, 100);
+    insertRecord(ROOT_SG1_D1, 0, 100);
+    session.executeNonQueryStatement("flush");
+    selectTest();
+
+    session.close();
+  }
+
+  private static void selectTest() throws StatementExecutionException, IoTDBConnectionException {
+    SessionDataSet dataSet = session.executeQueryStatement("select * from root.sg_1.d1");
+    System.out.println(dataSet.getColumnNames());
+    while (dataSet.hasNext()) {
+      System.out.println(dataSet.next());
+    }
+
+    dataSet.closeOperationHandle();
+  }
+  /** Insert a tablet with aligned timeseries */
+  private static void insertTabletWithAlignedTimeseriesMethod(int minTime, int maxTime)
+      throws IoTDBConnectionException, StatementExecutionException {
+    // The schema of measurements of one device
+    // only the measurementId and data type in MeasurementSchema take effect in a Tablet
+    List<IMeasurementSchema> schemaList = new ArrayList<>();
+    schemaList.add(
+        new VectorMeasurementSchema(
+            "vector",
+            new String[] {"s1", "s2"},
+            new TSDataType[] {TSDataType.INT64, TSDataType.INT32}));
+
+    Tablet tablet = new Tablet(ROOT_SG1_D1_VECTOR1, schemaList);
+    tablet.setAligned(true);
+    long timestamp = minTime;
+
+    for (long row = minTime; row < maxTime; row++) {
+      int rowIndex = tablet.rowSize++;
+      tablet.addTimestamp(rowIndex, timestamp);
+      tablet.addValue(
+          schemaList.get(0).getValueMeasurementIdList().get(0), rowIndex, row * 10 + 1L);
+      tablet.addValue(
+          schemaList.get(0).getValueMeasurementIdList().get(1), rowIndex, (int) (row * 10 + 2));
+
+      if (tablet.rowSize == tablet.getMaxRowNumber()) {
+        session.insertTablet(tablet, true);
+        tablet.reset();
+      }
+      timestamp++;
+    }
+
+    if (tablet.rowSize != 0) {
+      session.insertTablet(tablet);
+      tablet.reset();
+    }
+  }
+
+  private static void insertRecord(String deviceId, int minTime, int maxTime)
+      throws IoTDBConnectionException, StatementExecutionException {
+    List<String> measurements = new ArrayList<>();
+    List<TSDataType> types = new ArrayList<>();
+    measurements.add("s2");
+    measurements.add("s4");
+    measurements.add("s5");
+    measurements.add("s6");
+    types.add(TSDataType.INT64);
+    types.add(TSDataType.INT64);
+    types.add(TSDataType.INT64);
+    types.add(TSDataType.INT64);
+
+    for (long time = minTime; time < maxTime; time++) {
+      List<Object> values = new ArrayList<>();
+      values.add(time * 10 + 3L);
+      values.add(time * 10 + 4L);
+      values.add(time * 10 + 5L);
+      values.add(time * 10 + 6L);
+      session.insertRecord(deviceId, time, measurements, types, values);
+    }
+  }
+}
diff --git a/server/src/test/java/org/apache/iotdb/db/tools/TsFileSketchToolTest.java b/example/tsfile/src/main/java/org/apache/iotdb/tsfile/TsFileWriteVectorWithTablet.java
similarity index 59%
copy from server/src/test/java/org/apache/iotdb/db/tools/TsFileSketchToolTest.java
copy to example/tsfile/src/main/java/org/apache/iotdb/tsfile/TsFileWriteVectorWithTablet.java
index 8a9a143..b8b4a13 100644
--- a/server/src/test/java/org/apache/iotdb/db/tools/TsFileSketchToolTest.java
+++ b/example/tsfile/src/main/java/org/apache/iotdb/tsfile/TsFileWriteVectorWithTablet.java
@@ -17,61 +17,64 @@
  * under the License.
  */
 
-package org.apache.iotdb.db.tools;
+package org.apache.iotdb.tsfile;
 
+import org.apache.iotdb.tsfile.common.conf.TSFileDescriptor;
 import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
-import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
 import org.apache.iotdb.tsfile.fileSystem.FSFactoryProducer;
 import org.apache.iotdb.tsfile.read.common.Path;
 import org.apache.iotdb.tsfile.write.TsFileWriter;
 import org.apache.iotdb.tsfile.write.record.Tablet;
 import org.apache.iotdb.tsfile.write.schema.IMeasurementSchema;
-import org.apache.iotdb.tsfile.write.schema.MeasurementSchema;
 import org.apache.iotdb.tsfile.write.schema.Schema;
+import org.apache.iotdb.tsfile.write.schema.VectorMeasurementSchema;
 
-import org.apache.commons.io.FileUtils;
-import org.junit.After;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.File;
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
 
-public class TsFileSketchToolTest {
-  String path = "test.tsfile";
-  String sketchOut = "sketch.out";
-  String device = "root.device_0";
+/** An example of writing a vector-type timeseries with a Tablet */
+public class TsFileWriteVectorWithTablet {
 
-  @Before
-  public void setUp() throws Exception {
+  private static final Logger logger = LoggerFactory.getLogger(TsFileWriteVectorWithTablet.class);
+
+  public static void main(String[] args) throws IOException {
     try {
+      String path = "test.tsfile";
       File f = FSFactoryProducer.getFSFactory().getFile(path);
       if (f.exists() && !f.delete()) {
         throw new RuntimeException("can not delete " + f.getAbsolutePath());
       }
+      TSFileDescriptor.getInstance().getConfig().setMaxDegreeOfIndexNode(3);
 
       Schema schema = new Schema();
 
+      String device = Constant.DEVICE_PREFIX + 1;
       String sensorPrefix = "sensor_";
+      String vectorName = "vector1";
       // the number of rows to include in the tablet
-      int rowNum = 1000000;
-      // the number of values to include in the tablet
-      int sensorNum = 10;
+      int rowNum = 10000;
+      // the number of sub-measurements in the vector
+      int multiSensorNum = 10;
+
+      String[] measurementNames = new String[multiSensorNum];
+      TSDataType[] dataTypes = new TSDataType[multiSensorNum];
 
       List<IMeasurementSchema> measurementSchemas = new ArrayList<>();
       // add measurements into file schema (all with INT64 data type)
-      for (int i = 0; i < sensorNum; i++) {
-        IMeasurementSchema measurementSchema =
-            new MeasurementSchema(sensorPrefix + (i + 1), TSDataType.INT64, TSEncoding.TS_2DIFF);
-        measurementSchemas.add(measurementSchema);
-        schema.registerTimeseries(
-            new Path(device, sensorPrefix + (i + 1)),
-            new MeasurementSchema(sensorPrefix + (i + 1), TSDataType.INT64, TSEncoding.TS_2DIFF));
+      for (int i = 0; i < multiSensorNum; i++) {
+        measurementNames[i] = sensorPrefix + (i + 1);
+        dataTypes[i] = TSDataType.INT64;
       }
-
+      // vector schema
+      IMeasurementSchema vectorMeasurementSchema =
+          new VectorMeasurementSchema(vectorName, measurementNames, dataTypes);
+      measurementSchemas.add(vectorMeasurementSchema);
+      schema.registerTimeseries(new Path(device, vectorName), vectorMeasurementSchema);
       // add measurements into TSFileWriter
       try (TsFileWriter tsFileWriter = new TsFileWriter(f, schema)) {
 
@@ -87,9 +90,13 @@ public class TsFileSketchToolTest {
         for (int r = 0; r < rowNum; r++, value++) {
           int row = tablet.rowSize++;
           timestamps[row] = timestamp++;
-          for (int i = 0; i < sensorNum; i++) {
-            long[] sensor = (long[]) values[i];
-            sensor[row] = value;
+          for (int i = 0; i < measurementSchemas.size(); i++) {
+            IMeasurementSchema measurementSchema = measurementSchemas.get(i);
+            if (measurementSchema instanceof VectorMeasurementSchema) {
+              for (String valueName : measurementSchema.getValueMeasurementIdList()) {
+                tablet.addValue(valueName, row, value);
+              }
+            }
           }
           // write Tablet to TsFile
           if (tablet.rowSize == tablet.getMaxRowNumber()) {
@@ -103,31 +110,9 @@ public class TsFileSketchToolTest {
           tablet.reset();
         }
       }
-    } catch (Exception e) {
-      throw new Exception("meet error in TsFileWrite with tablet", e);
-    }
-  }
 
-  @Test
-  public void tsFileSketchToolTest() {
-    TsFileSketchTool tool = new TsFileSketchTool();
-    String args[] = new String[2];
-    args[0] = path;
-    args[1] = sketchOut;
-    try {
-      tool.main(args);
-    } catch (IOException e) {
-      Assert.fail(e.getMessage());
-    }
-  }
-
-  @After
-  public void tearDown() {
-    try {
-      FileUtils.forceDelete(new File(path));
-      FileUtils.forceDelete(new File(sketchOut));
-    } catch (IOException e) {
-      Assert.fail(e.getMessage());
+    } catch (Exception e) {
+      logger.error("meet error in TsFileWrite with tablet", e);
     }
   }
 }
diff --git a/server/src/main/java/org/apache/iotdb/db/engine/cache/TimeSeriesMetadataCache.java b/server/src/main/java/org/apache/iotdb/db/engine/cache/TimeSeriesMetadataCache.java
index fdcea1b..1443cbf 100644
--- a/server/src/main/java/org/apache/iotdb/db/engine/cache/TimeSeriesMetadataCache.java
+++ b/server/src/main/java/org/apache/iotdb/db/engine/cache/TimeSeriesMetadataCache.java
@@ -24,8 +24,10 @@ import org.apache.iotdb.db.conf.IoTDBConstant;
 import org.apache.iotdb.db.conf.IoTDBDescriptor;
 import org.apache.iotdb.db.query.control.FileReaderManager;
 import org.apache.iotdb.db.utils.TestOnly;
+import org.apache.iotdb.tsfile.common.constant.TsFileConstant;
 import org.apache.iotdb.tsfile.file.metadata.ChunkMetadata;
 import org.apache.iotdb.tsfile.file.metadata.TimeseriesMetadata;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
 import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
 import org.apache.iotdb.tsfile.read.common.Path;
 import org.apache.iotdb.tsfile.utils.BloomFilter;
@@ -40,14 +42,7 @@ import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.lang.ref.WeakReference;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Objects;
-import java.util.Set;
-import java.util.WeakHashMap;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.WeakHashMap;
 import java.util.concurrent.atomic.AtomicLong;
 
 /**
@@ -251,6 +246,9 @@ public class TimeSeriesMetadataCache {
       boolean debug)
       throws IOException {
     // put all sub sensors into allSensors
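+    // each subSensor is first expanded to its full measurementId "vector.subSensor",
+    // which is how sub-measurements of a vector are identified in the TsFile index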
+    for (int i = 0; i < subSensorList.size(); i++) {
+      subSensorList.set(i, key.measurement + TsFileConstant.PATH_SEPARATOR + subSensorList.get(i));
+    }
     allSensors.addAll(subSensorList);
     if (!CACHE_ENABLE) {
       // bloom filter part
@@ -260,7 +258,7 @@ public class TimeSeriesMetadataCache {
           && !bloomFilter.contains(key.device + IoTDBConstant.PATH_SEPARATOR + key.measurement)) {
         return Collections.emptyList();
       }
-      return reader.readTimeseriesMetadata(new Path(key.device, key.measurement), subSensorList);
+      return readTimeseriesMetadataForVector(reader, key, subSensorList, allSensors);
     }
 
     List<TimeseriesMetadata> res = new ArrayList<>();
@@ -287,10 +285,11 @@ public class TimeSeriesMetadataCache {
             if (debug) {
               DEBUG_LOGGER.info("TimeSeries meta data {} is filter by bloomFilter!", key);
             }
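+            // remove the prefixed subSensors again so the shared allSensors set is left unchanged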
+            allSensors.removeAll(subSensorList);
             return Collections.emptyList();
           }
           List<TimeseriesMetadata> timeSeriesMetadataList =
-              reader.readTimeseriesMetadata(path, allSensors);
+              readTimeseriesMetadataForVector(reader, key, subSensorList, allSensors);
           Map<TimeSeriesMetadataCacheKey, TimeseriesMetadata> map = new HashMap<>();
           // put TimeSeriesMetadata of all sensors used in this query into cache
           timeSeriesMetadataList.forEach(
@@ -315,6 +314,7 @@ public class TimeSeriesMetadataCache {
       if (debug) {
         DEBUG_LOGGER.info("The file doesn't have this time series {}.", key);
       }
+      allSensors.removeAll(subSensorList);
       return Collections.emptyList();
     } else {
       if (debug) {
@@ -328,11 +328,61 @@ public class TimeSeriesMetadataCache {
       for (int i = 0; i < res.size(); i++) {
         res.set(i, new TimeseriesMetadata(res.get(i)));
       }
+      allSensors.removeAll(subSensorList);
       return res;
     }
   }
 
   /**
+   * Vector support, extracted from the common logic of `get`
+   *
+   * @param key the vector's full path, e.g. root.sg1.d1.vector
+   * @param subSensorList all subSensors of this vector used in one query, e.g. [s1, s2, s3]
+   * @param allSensors all sensors of the device used in one query; for a vector this should
+   *     contain both the vector's name and the subSensors' names, e.g. [vector, s1, s2, s3]
+   * @param reader the TsFileSequenceReader created from the file
+   */
+  private List<TimeseriesMetadata> readTimeseriesMetadataForVector(
+      TsFileSequenceReader reader,
+      TimeSeriesMetadataCacheKey key,
+      List<String> subSensorList,
+      Set<String> allSensors)
+      throws IOException {
+    Path path = new Path(key.device, key.measurement);
+    List<TimeseriesMetadata> timeSeriesMetadataList =
+        reader.readTimeseriesMetadata(path, allSensors);
+    // with the new index tree implementation, the subSensors may not all be stored in one leaf;
+    // in that case we must make sure the TimeseriesMetadata of every subSensor is added to the list
+    TreeSet<String> subSensorsSet = new TreeSet<>(subSensorList);
+    for (int i = 0; i < timeSeriesMetadataList.size(); i++) {
+      TimeseriesMetadata tsMetadata = timeSeriesMetadataList.get(i);
+      if (tsMetadata.getTSDataType().equals(TSDataType.VECTOR)
+          && tsMetadata.getMeasurementId().equals(key.measurement)) {
+        for (int j = i + 1; j < timeSeriesMetadataList.size(); j++) {
+          tsMetadata = timeSeriesMetadataList.get(j);
+          if (!subSensorsSet.isEmpty() && subSensorsSet.contains(tsMetadata.getMeasurementId())) {
+            subSensorsSet.remove(tsMetadata.getMeasurementId());
+          }
+        }
+        break;
+      }
+    }
+    while (!subSensorsSet.isEmpty()) {
+      Path subPath =
+          new Path(
+              key.device, key.measurement + TsFileConstant.PATH_SEPARATOR + subSensorsSet.first());
+      List<TimeseriesMetadata> subList = reader.readTimeseriesMetadata(subPath, allSensors);
+      for (TimeseriesMetadata tsMetadata : subList) {
+        if (!subSensorsSet.isEmpty() && subSensorsSet.contains(tsMetadata.getMeasurementId())) {
+          subSensorsSet.remove(tsMetadata.getMeasurementId());
+        }
+      }
+      timeSeriesMetadataList.addAll(subList);
+    }
+    return timeSeriesMetadataList;
+  }
+
+  /**
    * !!!Attention!!!
    *
    * <p>For a vector, e.g. root.sg1.d1.vector1(s1, s2) TimeSeriesMetadataCacheKey for vector1 should
@@ -356,7 +406,6 @@ public class TimeSeriesMetadataCache {
         if (timeseriesMetadata != null) {
           res.add(timeseriesMetadata);
         } else {
-          res.clear();
           break;
         }
       }
diff --git a/server/src/main/java/org/apache/iotdb/db/tools/TsFileSketchTool.java b/server/src/main/java/org/apache/iotdb/db/tools/TsFileSketchTool.java
index d36f736..947c1b7 100644
--- a/server/src/main/java/org/apache/iotdb/db/tools/TsFileSketchTool.java
+++ b/server/src/main/java/org/apache/iotdb/db/tools/TsFileSketchTool.java
@@ -22,11 +22,9 @@ package org.apache.iotdb.db.tools;
 import org.apache.iotdb.tsfile.common.conf.TSFileConfig;
 import org.apache.iotdb.tsfile.file.MetaMarker;
 import org.apache.iotdb.tsfile.file.header.ChunkGroupHeader;
-import org.apache.iotdb.tsfile.file.metadata.ChunkGroupMetadata;
-import org.apache.iotdb.tsfile.file.metadata.ChunkMetadata;
-import org.apache.iotdb.tsfile.file.metadata.MetadataIndexEntry;
-import org.apache.iotdb.tsfile.file.metadata.TimeseriesMetadata;
-import org.apache.iotdb.tsfile.file.metadata.TsFileMetadata;
+import org.apache.iotdb.tsfile.file.header.PageHeader;
+import org.apache.iotdb.tsfile.file.metadata.*;
+import org.apache.iotdb.tsfile.file.metadata.enums.MetadataIndexNodeType;
 import org.apache.iotdb.tsfile.fileSystem.FSFactoryProducer;
 import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
 import org.apache.iotdb.tsfile.read.common.Chunk;
@@ -37,6 +35,8 @@ import org.apache.iotdb.tsfile.utils.Pair;
 import java.io.FileWriter;
 import java.io.IOException;
 import java.io.PrintWriter;
+import java.nio.BufferOverflowException;
+import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
@@ -44,202 +44,359 @@ import java.util.TreeMap;
 
 public class TsFileSketchTool {
 
+  private String filename;
+  private PrintWriter pw;
+  private TsFileSketchToolReader reader;
+  private String splitStr; // separator used to split the different parts of a TsFile
+
   public static void main(String[] args) throws IOException {
     Pair<String, String> fileNames = checkArgs(args);
     String filename = fileNames.left;
     String outFile = fileNames.right;
     System.out.println("TsFile path:" + filename);
     System.out.println("Sketch save path:" + outFile);
-    try (PrintWriter pw = new PrintWriter(new FileWriter(outFile))) {
-      long length = FSFactoryProducer.getFSFactory().getFile(filename).length();
+    new TsFileSketchTool(filename, outFile).run();
+  }
+
+  /**
+   * construct TsFileSketchTool
+   *
+   * @param filename input file path
+   * @param outFile output file path
+   */
+  public TsFileSketchTool(String filename, String outFile) {
+    try {
+      this.filename = filename;
+      pw = new PrintWriter(new FileWriter(outFile));
+      reader = new TsFileSketchToolReader(filename);
+      StringBuilder str1 = new StringBuilder();
+      for (int i = 0; i < 21; i++) {
+        str1.append("|");
+      }
+      splitStr = str1.toString();
+    } catch (IOException e) {
+      e.printStackTrace();
+    }
+  }
+
+  /** entry point of the tool */
+  public void run() throws IOException {
+    long length = FSFactoryProducer.getFSFactory().getFile(filename).length();
+    printlnBoth(
+        pw, "-------------------------------- TsFile Sketch --------------------------------");
+    printlnBoth(pw, "file path: " + filename);
+    printlnBoth(pw, "file length: " + length);
+
+    // get metadata information
+    TsFileMetadata tsFileMetaData = reader.readFileMetadata();
+    List<ChunkGroupMetadata> allChunkGroupMetadata = new ArrayList<>();
+    reader.selfCheck(null, allChunkGroupMetadata, false);
+
+    // print file information
+    printFileInfo();
+
+    // print chunk
+    printChunk(allChunkGroupMetadata);
+
+    // metadata begins
+    if (tsFileMetaData.getMetadataIndex().getChildren().isEmpty()) {
+      printlnBoth(pw, String.format("%20s", reader.getFileMetadataPos() - 1) + "|\t[marker] 2");
+    } else {
       printlnBoth(
-          pw, "-------------------------------- TsFile Sketch --------------------------------");
-      printlnBoth(pw, "file path: " + filename);
-      printlnBoth(pw, "file length: " + length);
-
-      // get metadata information
-      try (TsFileSequenceReader reader = new TsFileSequenceReader(filename)) {
-        TsFileMetadata tsFileMetaData = reader.readFileMetadata();
-        List<ChunkGroupMetadata> allChunkGroupMetadata = new ArrayList<>();
-        reader.selfCheck(null, allChunkGroupMetadata, false);
-
-        // begin print
-        StringBuilder str1 = new StringBuilder();
-        for (int i = 0; i < 21; i++) {
-          str1.append("|");
-        }
+          pw, String.format("%20s", reader.readFileMetadata().getMetaOffset()) + "|\t[marker] 2");
+    }
+    // get all timeseries index
+    Map<Long, Pair<Path, TimeseriesMetadata>> timeseriesMetadataMap =
+        reader.getAllTimeseriesMetadataWithOffset();
 
-        printlnBoth(pw, "");
-        printlnBoth(pw, String.format("%20s", "POSITION") + "|\tCONTENT");
-        printlnBoth(pw, String.format("%20s", "--------") + " \t-------");
-        printlnBoth(pw, String.format("%20d", 0) + "|\t[magic head] " + reader.readHeadMagic());
-        printlnBoth(
-            pw,
-            String.format("%20d", TSFileConfig.MAGIC_STRING.getBytes().length)
-                + "|\t[version number] "
-                + reader.readVersionNumber());
-        long nextChunkGroupHeaderPos =
-            (long) TSFileConfig.MAGIC_STRING.getBytes().length + Byte.BYTES;
-        // ChunkGroup begins
-        for (ChunkGroupMetadata chunkGroupMetadata : allChunkGroupMetadata) {
-          printlnBoth(
-              pw,
-              str1
-                  + "\t[Chunk Group] of "
-                  + chunkGroupMetadata.getDevice()
-                  + ", num of Chunks:"
-                  + chunkGroupMetadata.getChunkMetadataList().size());
-          // chunkGroupHeader begins
-          printlnBoth(
-              pw, String.format("%20s", nextChunkGroupHeaderPos) + "|\t[Chunk Group Header]");
-          ChunkGroupHeader chunkGroupHeader =
-              reader.readChunkGroupHeader(nextChunkGroupHeaderPos, false);
-          printlnBoth(pw, String.format("%20s", "") + "|\t\t[marker] 0");
-          printlnBoth(
-              pw, String.format("%20s", "") + "|\t\t[deviceID] " + chunkGroupHeader.getDeviceID());
-          // chunk begins
-          for (ChunkMetadata chunkMetadata : chunkGroupMetadata.getChunkMetadataList()) {
-            Chunk chunk = reader.readMemChunk(chunkMetadata);
-            printlnBoth(
-                pw,
-                String.format("%20d", chunkMetadata.getOffsetOfChunkHeader())
-                    + "|\t[Chunk] of "
-                    + chunkMetadata.getMeasurementUid()
-                    + ", numOfPoints:"
-                    + chunkMetadata.getNumOfPoints()
-                    + ", time range:["
-                    + chunkMetadata.getStartTime()
-                    + ","
-                    + chunkMetadata.getEndTime()
-                    + "], tsDataType:"
-                    + chunkMetadata.getDataType()
-                    + ", \n"
-                    + String.format("%20s", "")
-                    + " \t"
-                    + chunkMetadata.getStatistics());
-            printlnBoth(
-                pw,
-                String.format("%20s", "") + "|\t\t[marker] " + chunk.getHeader().getChunkType());
-            nextChunkGroupHeaderPos =
-                chunkMetadata.getOffsetOfChunkHeader()
-                    + chunk.getHeader().getSerializedSize()
-                    + chunk.getHeader().getDataSize();
-          }
-          reader.position(nextChunkGroupHeaderPos);
-          byte marker = reader.readMarker();
-          switch (marker) {
-            case MetaMarker.CHUNK_GROUP_HEADER:
-              // do nothing
-              break;
-            case MetaMarker.OPERATION_INDEX_RANGE:
-              // skip the PlanIndex
-              nextChunkGroupHeaderPos += 16;
-              break;
-          }
+    // print timeseries index
+    printTimeseriesIndex(timeseriesMetadataMap);
 
-          printlnBoth(pw, str1 + "\t[Chunk Group] of " + chunkGroupMetadata.getDevice() + " ends");
-        }
+    MetadataIndexNode metadataIndexNode = tsFileMetaData.getMetadataIndex();
+    TreeMap<Long, MetadataIndexNode> metadataIndexNodeMap = new TreeMap<>();
+    List<String> treeOutputStringBuffer = new ArrayList<>();
+    loadIndexTree(metadataIndexNode, metadataIndexNodeMap, treeOutputStringBuffer, 0);
 
-        // metadata begins
-        if (tsFileMetaData.getMetadataIndex().getChildren().isEmpty()) {
-          printlnBoth(pw, String.format("%20s", reader.getFileMetadataPos() - 1) + "|\t[marker] 2");
-        } else {
-          printlnBoth(
-              pw,
-              String.format("%20s", reader.readFileMetadata().getMetaOffset()) + "|\t[marker] 2");
-        }
+    // print IndexOfTimeseriesIndex
+    printIndexOfTimeseriesIndex(metadataIndexNodeMap);
 
-        Map<String, List<TimeseriesMetadata>> allTimeseriesMetadata =
-            reader.getAllTimeseriesMetadata();
-        Map<String, Pair<Path, TimeseriesMetadata>> timeseriesMetadataMap = new TreeMap<>();
+    // print TsFile Metadata
+    printTsFileMetadata(tsFileMetaData);
 
-        for (Map.Entry<String, List<TimeseriesMetadata>> entry : allTimeseriesMetadata.entrySet()) {
-          String device = entry.getKey();
-          List<TimeseriesMetadata> seriesMetadataList = entry.getValue();
-          for (TimeseriesMetadata seriesMetadata : seriesMetadataList) {
-            timeseriesMetadataMap.put(
-                seriesMetadata.getMeasurementId(),
-                new Pair<>(new Path(device, seriesMetadata.getMeasurementId()), seriesMetadata));
-          }
-        }
-        for (Map.Entry<String, Pair<Path, TimeseriesMetadata>> entry :
-            timeseriesMetadataMap.entrySet()) {
-          printlnBoth(
-              pw,
-              entry.getKey()
-                  + "|\t[ChunkMetadataList] of "
-                  + entry.getValue().left
-                  + ", tsDataType:"
-                  + entry.getValue().right.getTSDataType());
-          printlnBoth(
-              pw,
-              String.format("%20s", "") + "|\t[" + entry.getValue().right.getStatistics() + "] ");
-        }
+    printlnBoth(pw, String.format("%20s", length) + "|\tEND of TsFile");
+    printlnBoth(
+        pw,
+        "---------------------------- IndexOfTimerseriesIndex Tree -----------------------------");
+    // print index tree
+    for (String str : treeOutputStringBuffer) {
+      printlnBoth(pw, str);
+    }
+    printlnBoth(
+        pw,
+        "---------------------------------- TsFile Sketch End ----------------------------------");
+    pw.close();
+  }
 
-        for (MetadataIndexEntry metadataIndex : tsFileMetaData.getMetadataIndex().getChildren()) {
-          printlnBoth(
-              pw,
-              String.format("%20s", metadataIndex.getOffset())
-                  + "|\t[MetadataIndex] of "
-                  + metadataIndex.getName());
-        }
+  private void printTsFileMetadata(TsFileMetadata tsFileMetaData) {
+    try {
+      printlnBoth(pw, String.format("%20s", reader.getFileMetadataPos()) + "|\t[TsFileMetadata]");
+      printlnBoth(
+          pw, String.format("%20s", "") + "|\t\t[meta offset] " + tsFileMetaData.getMetaOffset());
+      printlnBoth(
+          pw,
+          String.format("%20s", "")
+              + "|\t\t[num of devices] "
+              + tsFileMetaData.getMetadataIndex().getChildren().size());
+      printlnBoth(
+          pw,
+          String.format("%20s", "")
+              + "|\t\t"
+              + tsFileMetaData.getMetadataIndex().getChildren().size()
+              + " key&TsMetadataIndex");
+      // bloom filter
+      BloomFilter bloomFilter = tsFileMetaData.getBloomFilter();
+      printlnBoth(
+          pw,
+          String.format("%20s", "")
+              + "|\t\t[bloom filter bit vector byte array length] "
+              + bloomFilter.serialize().length);
+      printlnBoth(pw, String.format("%20s", "") + "|\t\t[bloom filter bit vector byte array] ");
+      printlnBoth(
+          pw,
+          String.format("%20s", "")
+              + "|\t\t[bloom filter number of bits] "
+              + bloomFilter.getSize());
+      printlnBoth(
+          pw,
+          String.format("%20s", "")
+              + "|\t\t[bloom filter number of hash functions] "
+              + bloomFilter.getHashFunctionSize());
 
-        printlnBoth(pw, String.format("%20s", reader.getFileMetadataPos()) + "|\t[TsFileMetadata]");
-        printlnBoth(
-            pw,
-            String.format("%20s", "")
-                + "|\t\t[num of devices] "
-                + tsFileMetaData.getMetadataIndex().getChildren().size());
+      printlnBoth(
+          pw,
+          String.format("%20s", (reader.getFileMetadataPos() + reader.getFileMetadataSize()))
+              + "|\t[TsFileMetadataSize] "
+              + reader.getFileMetadataSize());
+
+      printlnBoth(
+          pw,
+          String.format("%20s", reader.getFileMetadataPos() + reader.getFileMetadataSize() + 4)
+              + "|\t[magic tail] "
+              + reader.readTailMagic());
+    } catch (IOException e) {
+      e.printStackTrace();
+    }
+  }
+
+  private void printIndexOfTimeseriesIndex(TreeMap<Long, MetadataIndexNode> metadataIndexNodeMap) {
+    for (Map.Entry<Long, MetadataIndexNode> entry : metadataIndexNodeMap.entrySet()) {
+      printlnBoth(
+          pw,
+          String.format("%20s", entry.getKey())
+              + "|\t[IndexOfTimerseriesIndex Node] type="
+              + entry.getValue().getNodeType());
+      for (MetadataIndexEntry metadataIndexEntry : entry.getValue().getChildren()) {
         printlnBoth(
             pw,
             String.format("%20s", "")
-                + "|\t\t"
-                + tsFileMetaData.getMetadataIndex().getChildren().size()
-                + " key&TsMetadataIndex");
+                + "|\t\t<"
+                + metadataIndexEntry.getName()
+                + ", "
+                + metadataIndexEntry.getOffset()
+                + ">");
+      }
+      printlnBoth(
+          pw,
+          String.format("%20s", "") + "|\t\t<endOffset, " + entry.getValue().getEndOffset() + ">");
+    }
+  }
+
+  private void printFileInfo() {
+    try {
+      printlnBoth(pw, "");
+      printlnBoth(pw, String.format("%20s", "POSITION") + "|\tCONTENT");
+      printlnBoth(pw, String.format("%20s", "--------") + " \t-------");
+      printlnBoth(pw, String.format("%20d", 0) + "|\t[magic head] " + reader.readHeadMagic());
+      printlnBoth(
+          pw,
+          String.format("%20d", TSFileConfig.MAGIC_STRING.getBytes().length)
+              + "|\t[version number] "
+              + reader.readVersionNumber());
+    } catch (IOException e) {
+      e.printStackTrace();
+    }
+  }
 
-        // bloom filter
-        BloomFilter bloomFilter = tsFileMetaData.getBloomFilter();
+  private void printChunk(List<ChunkGroupMetadata> allChunkGroupMetadata) {
+    try {
+      long nextChunkGroupHeaderPos =
+          (long) TSFileConfig.MAGIC_STRING.getBytes().length + Byte.BYTES;
+      // ChunkGroup begins
+      for (ChunkGroupMetadata chunkGroupMetadata : allChunkGroupMetadata) {
         printlnBoth(
             pw,
-            String.format("%20s", "")
-                + "|\t\t[bloom filter bit vector byte array length] "
-                + bloomFilter.serialize().length);
-        printlnBoth(pw, String.format("%20s", "") + "|\t\t[bloom filter bit vector byte array] ");
+            splitStr
+                + "\t[Chunk Group] of "
+                + chunkGroupMetadata.getDevice()
+                + ", num of Chunks:"
+                + chunkGroupMetadata.getChunkMetadataList().size());
+        // chunkGroupHeader begins
+        printlnBoth(pw, String.format("%20s", nextChunkGroupHeaderPos) + "|\t[Chunk Group Header]");
+        ChunkGroupHeader chunkGroupHeader =
+            reader.readChunkGroupHeader(nextChunkGroupHeaderPos, false);
+        printlnBoth(pw, String.format("%20s", "") + "|\t\t[marker] 0");
         printlnBoth(
-            pw,
-            String.format("%20s", "")
-                + "|\t\t[bloom filter number of bits] "
-                + bloomFilter.getSize());
+            pw, String.format("%20s", "") + "|\t\t[deviceID] " + chunkGroupHeader.getDeviceID());
+        // chunk begins
+        for (ChunkMetadata chunkMetadata : chunkGroupMetadata.getChunkMetadataList()) {
+          Chunk chunk = reader.readMemChunk(chunkMetadata);
+          printlnBoth(
+              pw,
+              String.format("%20d", chunkMetadata.getOffsetOfChunkHeader())
+                  + "|\t[Chunk] of "
+                  + chunkMetadata.getMeasurementUid()
+                  + ", numOfPoints:"
+                  + chunkMetadata.getNumOfPoints()
+                  + ", time range:["
+                  + chunkMetadata.getStartTime()
+                  + ","
+                  + chunkMetadata.getEndTime()
+                  + "], tsDataType:"
+                  + chunkMetadata.getDataType()
+                  + ", \n"
+                  + String.format("%20s", "")
+                  + " \t"
+                  + chunkMetadata.getStatistics());
+          printlnBoth(
+              pw,
+              String.format("%20s", "")
+                  + "|\t\t[chunk header] "
+                  + "marker="
+                  + chunk.getHeader().getChunkType()
+                  + ", measurementId="
+                  + chunk.getHeader().getMeasurementID()
+                  + ", dataSize="
+                  + chunk.getHeader().getDataSize()
+                  + ", serializedSize="
+                  + chunk.getHeader().getSerializedSize());
+
+          printlnBoth(pw, String.format("%20s", "") + "|\t\t[chunk] " + chunk.getData());
+          PageHeader pageHeader;
+          if (((byte) (chunk.getHeader().getChunkType() & 0x3F))
+              == MetaMarker.ONLY_ONE_PAGE_CHUNK_HEADER) {
+            pageHeader = PageHeader.deserializeFrom(chunk.getData(), chunkMetadata.getStatistics());
+          } else {
+            pageHeader =
+                PageHeader.deserializeFrom(chunk.getData(), chunk.getHeader().getDataType());
+          }
+          printlnBoth(
+              pw,
+              String.format("%20s", "")
+                  + "|\t\t[page] "
+                  + " CompressedSize:"
+                  + pageHeader.getCompressedSize()
+                  + ", UncompressedSize:"
+                  + pageHeader.getUncompressedSize());
+          nextChunkGroupHeaderPos =
+              chunkMetadata.getOffsetOfChunkHeader()
+                  + chunk.getHeader().getSerializedSize()
+                  + chunk.getHeader().getDataSize();
+        }
+        reader.position(nextChunkGroupHeaderPos);
+        byte marker = reader.readMarker();
+        switch (marker) {
+          case MetaMarker.CHUNK_GROUP_HEADER:
+            // do nothing
+            break;
+          case MetaMarker.OPERATION_INDEX_RANGE:
+            // skip the PlanIndex
+            nextChunkGroupHeaderPos += 16;
+            break;
+        }
+
         printlnBoth(
-            pw,
-            String.format("%20s", "")
-                + "|\t\t[bloom filter number of hash functions] "
-                + bloomFilter.getHashFunctionSize());
+            pw, splitStr + "\t[Chunk Group] of " + chunkGroupMetadata.getDevice() + " ends");
+      }
+    } catch (IOException e) {
+      e.printStackTrace();
+    }
+  }
 
+  private void printTimeseriesIndex(
+      Map<Long, Pair<Path, TimeseriesMetadata>> timeseriesMetadataMap) {
+    try {
+      for (Map.Entry<Long, Pair<Path, TimeseriesMetadata>> entry :
+          timeseriesMetadataMap.entrySet()) {
         printlnBoth(
             pw,
-            String.format("%20s", (reader.getFileMetadataPos() + reader.getFileMetadataSize()))
-                + "|\t[TsFileMetadataSize] "
-                + reader.getFileMetadataSize());
-
+            String.format("%20s", entry.getKey())
+                + "|\t[TimeseriesIndex] of "
+                + entry.getValue().left
+                + ", tsDataType:"
+                + entry.getValue().right.getTSDataType());
+        for (IChunkMetadata chunkMetadata : reader.getChunkMetadataList(entry.getValue().left)) {
+          printlnBoth(
+              pw,
+              String.format("%20s", "")
+                  + "|\t\t[ChunkIndex] "
+                  + chunkMetadata.getMeasurementUid()
+                  + ", offset="
+                  + chunkMetadata.getOffsetOfChunkHeader());
+        }
         printlnBoth(
             pw,
-            String.format("%20s", reader.getFileMetadataPos() + reader.getFileMetadataSize() + 4)
-                + "|\t[magic tail] "
-                + reader.readTailMagic());
-
-        printlnBoth(pw, String.format("%20s", length) + "|\tEND of TsFile");
+            String.format("%20s", "") + "|\t\t[" + entry.getValue().right.getStatistics() + "] ");
+      }
+      printlnBoth(pw, splitStr);
+    } catch (IOException e) {
+      e.printStackTrace();
+    }
+  }
 
-        printlnBoth(pw, "");
+  /**
+   * Load the index tree by DFS and sort the nodes by offset in a TreeMap
+   *
+   * @param metadataIndexNode current node
+   * @param metadataIndexNodeMap result map, keyed by node offset
+   * @param treeOutputStringBuffer result list of lines describing the index tree
+   * @param deep current depth in the tree
+   */
+  private void loadIndexTree(
+      MetadataIndexNode metadataIndexNode,
+      TreeMap<Long, MetadataIndexNode> metadataIndexNodeMap,
+      List<String> treeOutputStringBuffer,
+      int deep)
+      throws IOException {
+    StringBuilder tableWriter = new StringBuilder("\t");
+    for (int i = 0; i < deep; i++) {
+      tableWriter.append("\t\t");
+    }
+    treeOutputStringBuffer.add(
+        tableWriter.toString() + "[MetadataIndex:" + metadataIndexNode.getNodeType() + "]");
+    for (int i = 0; i < metadataIndexNode.getChildren().size(); i++) {
+      MetadataIndexEntry metadataIndexEntry = metadataIndexNode.getChildren().get(i);
 
-        printlnBoth(
-            pw,
-            "---------------------------------- TsFile Sketch End ----------------------------------");
+      treeOutputStringBuffer.add(
+          tableWriter.toString()
+              + "└──────["
+              + metadataIndexEntry.getName()
+              + ","
+              + metadataIndexEntry.getOffset()
+              + "]");
+      if (!metadataIndexNode.getNodeType().equals(MetadataIndexNodeType.LEAF_MEASUREMENT)) {
+        long endOffset = metadataIndexNode.getEndOffset();
+        if (i != metadataIndexNode.getChildren().size() - 1) {
+          endOffset = metadataIndexNode.getChildren().get(i + 1).getOffset();
+        }
+        MetadataIndexNode subNode =
+            reader.getMetadataIndexNode(metadataIndexEntry.getOffset(), endOffset);
+        metadataIndexNodeMap.put(metadataIndexEntry.getOffset(), subNode);
+        loadIndexTree(subNode, metadataIndexNodeMap, treeOutputStringBuffer, deep + 1);
       }
     }
   }
 
-  private static void printlnBoth(PrintWriter pw, String str) {
+  private void printlnBoth(PrintWriter pw, String str) {
     System.out.println(str);
     pw.println(str);
   }
@@ -255,4 +412,94 @@ public class TsFileSketchTool {
     }
     return new Pair<>(filename, outFile);
   }
+
+  private class TsFileSketchToolReader extends TsFileSequenceReader {
+    public TsFileSketchToolReader(String file) throws IOException {
+      super(file);
+    }
+    /**
+     * Traverse the metadata index from a MetadataIndexEntry to collect all TimeseriesMetadata
+     *
+     * @param startOffset file offset at which this entry's content starts
+     * @param metadataIndex the MetadataIndexEntry to traverse
+     * @param buffer byte buffer holding the entry's content
+     * @param deviceId current device, or null if it has not been determined yet
+     * @param type node type of the parent index node
+     * @param timeseriesMetadataMap result map: file offset -> (path, TimeseriesMetadata)
+     * @param needChunkMetadata whether to deserialize the chunk metadata list
+     */
+    private void generateMetadataIndexWithOffset(
+        long startOffset,
+        MetadataIndexEntry metadataIndex,
+        ByteBuffer buffer,
+        String deviceId,
+        MetadataIndexNodeType type,
+        Map<Long, Pair<Path, TimeseriesMetadata>> timeseriesMetadataMap,
+        boolean needChunkMetadata)
+        throws IOException {
+      try {
+        if (type.equals(MetadataIndexNodeType.LEAF_MEASUREMENT)) {
+          while (buffer.hasRemaining()) {
+            long pos = startOffset + buffer.position();
+            TimeseriesMetadata timeseriesMetadata =
+                TimeseriesMetadata.deserializeFrom(buffer, needChunkMetadata);
+            timeseriesMetadataMap.put(
+                pos,
+                new Pair<>(
+                    new Path(deviceId, timeseriesMetadata.getMeasurementId()), timeseriesMetadata));
+          }
+        } else {
+          // deviceId should be determined by LEAF_DEVICE node
+          if (type.equals(MetadataIndexNodeType.LEAF_DEVICE)) {
+            deviceId = metadataIndex.getName();
+          }
+          MetadataIndexNode metadataIndexNode = MetadataIndexNode.deserializeFrom(buffer);
+          int metadataIndexListSize = metadataIndexNode.getChildren().size();
+          for (int i = 0; i < metadataIndexListSize; i++) {
+            long endOffset = metadataIndexNode.getEndOffset();
+            if (i != metadataIndexListSize - 1) {
+              endOffset = metadataIndexNode.getChildren().get(i + 1).getOffset();
+            }
+            ByteBuffer nextBuffer =
+                readData(metadataIndexNode.getChildren().get(i).getOffset(), endOffset);
+            generateMetadataIndexWithOffset(
+                metadataIndexNode.getChildren().get(i).getOffset(),
+                metadataIndexNode.getChildren().get(i),
+                nextBuffer,
+                deviceId,
+                metadataIndexNode.getNodeType(),
+                timeseriesMetadataMap,
+                needChunkMetadata);
+          }
+        }
+      } catch (BufferOverflowException e) {
+        // a BufferOverflowException here means the byte range handed to this call was
+        // inconsistent with the index tree, i.e. the file is truncated or corrupted
+        throw e;
+      }
+    }
+
+    public Map<Long, Pair<Path, TimeseriesMetadata>> getAllTimeseriesMetadataWithOffset()
+        throws IOException {
+      if (tsFileMetaData == null) {
+        readFileMetadata();
+      }
+      MetadataIndexNode metadataIndexNode = tsFileMetaData.getMetadataIndex();
+      Map<Long, Pair<Path, TimeseriesMetadata>> timeseriesMetadataMap = new TreeMap<>();
+      List<MetadataIndexEntry> metadataIndexEntryList = metadataIndexNode.getChildren();
+      for (int i = 0; i < metadataIndexEntryList.size(); i++) {
+        MetadataIndexEntry metadataIndexEntry = metadataIndexEntryList.get(i);
+        long endOffset = tsFileMetaData.getMetadataIndex().getEndOffset();
+        if (i != metadataIndexEntryList.size() - 1) {
+          endOffset = metadataIndexEntryList.get(i + 1).getOffset();
+        }
+        ByteBuffer buffer = readData(metadataIndexEntry.getOffset(), endOffset);
+        generateMetadataIndexWithOffset(
+            metadataIndexEntry.getOffset(),
+            metadataIndexEntry,
+            buffer,
+            null,
+            metadataIndexNode.getNodeType(),
+            timeseriesMetadataMap,
+            false);
+      }
+      return timeseriesMetadataMap;
+    }
+  }
 }
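
For readers tracing the sketch tool: both loadIndexTree and generateMetadataIndexWithOffset above lean on the same layout rule, namely that a node's children are serialized back to back, so a child's end offset is the next sibling's start offset and the last child runs to the parent's endOffset. A minimal sketch of that rule, using only the getters that appear in this patch:

import java.util.List;

import org.apache.iotdb.tsfile.file.metadata.MetadataIndexEntry;
import org.apache.iotdb.tsfile.file.metadata.MetadataIndexNode;

class ChildRangeSketch {
  /** Returns {start, end} of the i-th child's serialized byte range. */
  static long[] childRange(MetadataIndexNode node, int i) {
    List<MetadataIndexEntry> children = node.getChildren();
    long start = children.get(i).getOffset();
    long end =
        i == children.size() - 1
            ? node.getEndOffset() // the last child runs to the parent's end offset
            : children.get(i + 1).getOffset(); // others end where the next sibling starts
    return new long[] {start, end};
  }
}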
diff --git a/server/src/test/java/org/apache/iotdb/db/engine/memtable/MemTableFlushTaskTest.java b/server/src/test/java/org/apache/iotdb/db/engine/memtable/MemTableFlushTaskTest.java
index af52254..d682c2f 100644
--- a/server/src/test/java/org/apache/iotdb/db/engine/memtable/MemTableFlushTaskTest.java
+++ b/server/src/test/java/org/apache/iotdb/db/engine/memtable/MemTableFlushTaskTest.java
@@ -111,20 +111,23 @@ public class MemTableFlushTaskTest {
     MemTableFlushTask memTableFlushTask = new MemTableFlushTask(memTable, writer, storageGroup);
     assertTrue(
         writer
-            .getVisibleMetadataList(MemTableTestUtils.deviceId0, "sensor0", TSDataType.BOOLEAN)
+            .getVisibleMetadataList(
+                MemTableTestUtils.deviceId0, "vectorName.sensor0", TSDataType.BOOLEAN)
             .isEmpty());
     memTableFlushTask.syncFlushMemTable();
     writer.makeMetadataVisible();
     assertEquals(
         1,
         writer
-            .getVisibleMetadataList(MemTableTestUtils.deviceId0, "sensor0", TSDataType.BOOLEAN)
+            .getVisibleMetadataList(
+                MemTableTestUtils.deviceId0, "vectorName.sensor0", TSDataType.BOOLEAN)
             .size());
     ChunkMetadata chunkMetaData =
         writer
-            .getVisibleMetadataList(MemTableTestUtils.deviceId0, "sensor0", TSDataType.BOOLEAN)
+            .getVisibleMetadataList(
+                MemTableTestUtils.deviceId0, "vectorName.sensor0", TSDataType.BOOLEAN)
             .get(0);
-    assertEquals("sensor0", chunkMetaData.getMeasurementUid());
+    assertEquals("vectorName.sensor0", chunkMetaData.getMeasurementUid());
     assertEquals(startTime, chunkMetaData.getStartTime());
     assertEquals(endTime, chunkMetaData.getEndTime());
     assertEquals(TSDataType.BOOLEAN, chunkMetaData.getDataType());
@@ -138,20 +141,23 @@ public class MemTableFlushTaskTest {
     MemTableFlushTask memTableFlushTask = new MemTableFlushTask(memTable, writer, storageGroup);
     assertTrue(
         writer
-            .getVisibleMetadataList(MemTableTestUtils.deviceId0, "sensor0", TSDataType.BOOLEAN)
+            .getVisibleMetadataList(
+                MemTableTestUtils.deviceId0, "vectorName.sensor0", TSDataType.BOOLEAN)
             .isEmpty());
     memTableFlushTask.syncFlushMemTable();
     writer.makeMetadataVisible();
     assertEquals(
         1,
         writer
-            .getVisibleMetadataList(MemTableTestUtils.deviceId0, "sensor0", TSDataType.BOOLEAN)
+            .getVisibleMetadataList(
+                MemTableTestUtils.deviceId0, "vectorName.sensor0", TSDataType.BOOLEAN)
             .size());
     ChunkMetadata chunkMetaData =
         writer
-            .getVisibleMetadataList(MemTableTestUtils.deviceId0, "sensor0", TSDataType.BOOLEAN)
+            .getVisibleMetadataList(
+                MemTableTestUtils.deviceId0, "vectorName.sensor0", TSDataType.BOOLEAN)
             .get(0);
-    assertEquals("sensor0", chunkMetaData.getMeasurementUid());
+    assertEquals("vectorName.sensor0", chunkMetaData.getMeasurementUid());
     assertEquals(startTime, chunkMetaData.getStartTime());
     assertEquals(endTime, chunkMetaData.getEndTime());
     assertEquals(TSDataType.BOOLEAN, chunkMetaData.getDataType());
diff --git a/server/src/test/java/org/apache/iotdb/db/engine/memtable/MemTableTestUtils.java b/server/src/test/java/org/apache/iotdb/db/engine/memtable/MemTableTestUtils.java
index 59bfa48..2c3777b 100644
--- a/server/src/test/java/org/apache/iotdb/db/engine/memtable/MemTableTestUtils.java
+++ b/server/src/test/java/org/apache/iotdb/db/engine/memtable/MemTableTestUtils.java
@@ -87,7 +87,7 @@ public class MemTableTestUtils {
 
     MeasurementMNode[] mNodes = new MeasurementMNode[2];
     IMeasurementSchema schema =
-        new VectorMeasurementSchema("$#$0", measurements, dataTypes, encodings);
+        new VectorMeasurementSchema("vectorName", measurements, dataTypes, encodings);
     mNodes[0] = new MeasurementMNode(null, "sensor0", schema, null);
     mNodes[1] = new MeasurementMNode(null, "sensor1", schema, null);
 
diff --git a/server/src/test/java/org/apache/iotdb/db/tools/TsFileSketchToolTest.java b/server/src/test/java/org/apache/iotdb/db/tools/TsFileSketchToolTest.java
index 8a9a143..69b2903 100644
--- a/server/src/test/java/org/apache/iotdb/db/tools/TsFileSketchToolTest.java
+++ b/server/src/test/java/org/apache/iotdb/db/tools/TsFileSketchToolTest.java
@@ -110,12 +110,12 @@ public class TsFileSketchToolTest {
 
   @Test
   public void tsFileSketchToolTest() {
-    TsFileSketchTool tool = new TsFileSketchTool();
-    String args[] = new String[2];
-    args[0] = path;
-    args[1] = sketchOut;
+    TsFileSketchTool tool = new TsFileSketchTool(path, sketchOut);
     try {
-      tool.main(args);
+      tool.run();
     } catch (IOException e) {
       Assert.fail(e.getMessage());
     }
diff --git a/session/src/test/java/org/apache/iotdb/session/IoTDBSessionVectorIT.java b/session/src/test/java/org/apache/iotdb/session/IoTDBSessionVectorIT.java
new file mode 100644
index 0000000..f3f781a
--- /dev/null
+++ b/session/src/test/java/org/apache/iotdb/session/IoTDBSessionVectorIT.java
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.session;
+
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.utils.EnvironmentUtils;
+import org.apache.iotdb.rpc.IoTDBConnectionException;
+import org.apache.iotdb.rpc.StatementExecutionException;
+import org.apache.iotdb.tsfile.common.conf.TSFileDescriptor;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.write.record.Tablet;
+import org.apache.iotdb.tsfile.write.schema.IMeasurementSchema;
+import org.apache.iotdb.tsfile.write.schema.VectorMeasurementSchema;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+/** Black-box integration test: insert and select vector (aligned) timeseries through the session interface */
+public class IoTDBSessionVectorIT {
+  private static final String ROOT_SG1_D1_VECTOR1 = "root.sg_1.d1.vector";
+  private static final String ROOT_SG1_D1 = "root.sg_1.d1";
+  private static final String ROOT_SG1_D2 = "root.sg_1.d2";
+
+  private Session session;
+
+  @Before
+  public void setUp() throws Exception {
+    System.setProperty(IoTDBConstant.IOTDB_CONF, "src/test/resources/");
+    EnvironmentUtils.closeStatMonitor();
+    TSFileDescriptor.getInstance().getConfig().setMaxDegreeOfIndexNode(3);
+    EnvironmentUtils.envSetUp();
+    session = new Session("127.0.0.1", 6667, "root", "root");
+    session.open();
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    session.close();
+    EnvironmentUtils.cleanEnv();
+  }
+
+  @Test
+  public void alignedTest() {
+    try {
+      insertTabletWithAlignedTimeseriesMethod();
+      session.executeNonQueryStatement("flush");
+      SessionDataSet dataSet = selectTest("select * from root.sg_1.d1");
+      assertEquals(dataSet.getColumnNames().size(), 3);
+      assertEquals(dataSet.getColumnNames().get(0), "Time");
+      assertEquals(dataSet.getColumnNames().get(1), ROOT_SG1_D1_VECTOR1 + ".s1");
+      assertEquals(dataSet.getColumnNames().get(2), ROOT_SG1_D1_VECTOR1 + ".s2");
+      long time = 0;
+      while (dataSet.hasNext()) {
+        RowRecord rowRecord = dataSet.next();
+        assertEquals(time, rowRecord.getTimestamp());
+        assertEquals(time * 10 + 1, rowRecord.getFields().get(0).getLongV());
+        assertEquals(time * 10 + 2, rowRecord.getFields().get(1).getIntV());
+        time++;
+      }
+
+      dataSet.closeOperationHandle();
+    } catch (IoTDBConnectionException | StatementExecutionException e) {
+      e.printStackTrace();
+      fail(e.getMessage());
+    }
+  }
+
+  @Test
+  public void nonAlignedSingleSelectTest() {
+    try {
+      insertRecord(ROOT_SG1_D1);
+      insertTabletWithAlignedTimeseriesMethod();
+      insertRecord(ROOT_SG1_D2);
+      session.executeNonQueryStatement("flush");
+      SessionDataSet dataSet = selectTest("select * from root.sg_1.d1.vector.s2");
+      assertEquals(dataSet.getColumnNames().size(), 2);
+      assertEquals(dataSet.getColumnNames().get(0), "Time");
+      assertEquals(dataSet.getColumnNames().get(1), ROOT_SG1_D1_VECTOR1 + ".s2");
+      long time = 0;
+      while (dataSet.hasNext()) {
+        RowRecord rowRecord = dataSet.next();
+        assertEquals(time, rowRecord.getTimestamp());
+        assertEquals(time * 10 + 2, rowRecord.getFields().get(0).getIntV());
+        time++;
+      }
+
+      dataSet.closeOperationHandle();
+    } catch (IoTDBConnectionException | StatementExecutionException e) {
+      e.printStackTrace();
+      fail(e.getMessage());
+    }
+  }
+
+  @Test
+  public void nonAlignedVectorSelectTest() {
+    try {
+      insertRecord(ROOT_SG1_D1);
+      insertTabletWithAlignedTimeseriesMethod();
+      insertRecord(ROOT_SG1_D2);
+      session.executeNonQueryStatement("flush");
+      SessionDataSet dataSet = selectTest("select * from root.sg_1.d1.vector");
+      assertEquals(dataSet.getColumnNames().size(), 3);
+      assertEquals(dataSet.getColumnNames().get(0), "Time");
+      assertEquals(dataSet.getColumnNames().get(1), ROOT_SG1_D1_VECTOR1 + ".s1");
+      assertEquals(dataSet.getColumnNames().get(2), ROOT_SG1_D1_VECTOR1 + ".s2");
+      long time = 0;
+      while (dataSet.hasNext()) {
+        RowRecord rowRecord = dataSet.next();
+        assertEquals(time, rowRecord.getTimestamp());
+        assertEquals(time * 10 + 1, rowRecord.getFields().get(0).getLongV());
+        assertEquals(time * 10 + 2, rowRecord.getFields().get(1).getIntV());
+        time++;
+      }
+
+      dataSet.closeOperationHandle();
+    } catch (IoTDBConnectionException | StatementExecutionException e) {
+      e.printStackTrace();
+      fail(e.getMessage());
+    }
+  }
+
+  private SessionDataSet selectTest(String sql)
+      throws StatementExecutionException, IoTDBConnectionException {
+    SessionDataSet dataSet = session.executeQueryStatement(sql);
+    System.out.println(dataSet.getColumnNames());
+    // do not iterate here: the callers verify the rows themselves, and consuming
+    // the rows here would hand them back an exhausted data set
+    return dataSet;
+  }
+
+  /** insert a tablet that contains one aligned (vector) timeseries */
+  private void insertTabletWithAlignedTimeseriesMethod()
+      throws IoTDBConnectionException, StatementExecutionException {
+    // the schema of the measurements of one device;
+    // only measurementId and data type in MeasurementSchema take effect in a Tablet
+    List<IMeasurementSchema> schemaList = new ArrayList<>();
+    schemaList.add(
+        new VectorMeasurementSchema(
+            "vector",
+            new String[] {"s1", "s2"},
+            new TSDataType[] {TSDataType.INT64, TSDataType.INT32}));
+
+    Tablet tablet = new Tablet(ROOT_SG1_D1_VECTOR1, schemaList);
+    tablet.setAligned(true);
+    long timestamp = 0;
+
+    for (long row = 0; row < 100; row++) {
+      int rowIndex = tablet.rowSize++;
+      tablet.addTimestamp(rowIndex, timestamp);
+      tablet.addValue(
+          schemaList.get(0).getValueMeasurementIdList().get(0), rowIndex, row * 10 + 1L);
+      tablet.addValue(
+          schemaList.get(0).getValueMeasurementIdList().get(1), rowIndex, (int) (row * 10 + 2));
+
+      if (tablet.rowSize == tablet.getMaxRowNumber()) {
+        session.insertTablet(tablet, true);
+        tablet.reset();
+      }
+      timestamp++;
+    }
+
+    if (tablet.rowSize != 0) {
+      session.insertTablet(tablet);
+      tablet.reset();
+    }
+  }
+
+  private void insertRecord(String deviceId)
+      throws IoTDBConnectionException, StatementExecutionException {
+    List<String> measurements = new ArrayList<>();
+    List<TSDataType> types = new ArrayList<>();
+    measurements.add("s2");
+    measurements.add("s4");
+    measurements.add("s5");
+    measurements.add("s6");
+    types.add(TSDataType.INT64);
+    types.add(TSDataType.INT64);
+    types.add(TSDataType.INT64);
+    types.add(TSDataType.INT64);
+
+    for (long time = 0; time < 100; time++) {
+      List<Object> values = new ArrayList<>();
+      values.add(time * 10 + 3L);
+      values.add(time * 10 + 4L);
+      values.add(time * 10 + 5L);
+      values.add(time * 10 + 6L);
+      session.insertRecord(deviceId, time, measurements, types, values);
+    }
+  }
+}
diff --git a/tsfile/src/main/java/org/apache/iotdb/tsfile/file/metadata/MetadataIndexConstructor.java b/tsfile/src/main/java/org/apache/iotdb/tsfile/file/metadata/MetadataIndexConstructor.java
index 57a9b25..de20f40 100644
--- a/tsfile/src/main/java/org/apache/iotdb/tsfile/file/metadata/MetadataIndexConstructor.java
+++ b/tsfile/src/main/java/org/apache/iotdb/tsfile/file/metadata/MetadataIndexConstructor.java
@@ -50,6 +50,7 @@ public class MetadataIndexConstructor {
   public static MetadataIndexNode constructMetadataIndex(
       Map<String, List<TimeseriesMetadata>> deviceTimeseriesMetadataMap, TsFileOutput out)
       throws IOException {
+
     Map<String, MetadataIndexNode> deviceMetadataIndexMap = new TreeMap<>();
 
     // for timeseriesMetadata of each device
@@ -64,47 +65,18 @@ public class MetadataIndexConstructor {
       int serializedTimeseriesMetadataNum = 0;
       for (int i = 0; i < entry.getValue().size(); i++) {
         timeseriesMetadata = entry.getValue().get(i);
-        if (timeseriesMetadata.isTimeColumn()) {
-          // calculate the number of value columns in this vector
-          int numOfValueColumns = 0;
-          for (int j = i + 1; j < entry.getValue().size(); j++) {
-            if (entry.getValue().get(j).isValueColumn()) {
-              numOfValueColumns++;
-            } else {
-              break;
-            }
+        if (serializedTimeseriesMetadataNum == 0
+            || serializedTimeseriesMetadataNum >= config.getMaxDegreeOfIndexNode()) {
+          if (currentIndexNode.isFull()) {
+            addCurrentIndexNodeToQueue(currentIndexNode, measurementMetadataIndexQueue, out);
+            currentIndexNode = new MetadataIndexNode(MetadataIndexNodeType.LEAF_MEASUREMENT);
           }
-
-          // for each vector, add time column of vector into LEAF_MEASUREMENT node
           currentIndexNode.addEntry(
               new MetadataIndexEntry(timeseriesMetadata.getMeasurementId(), out.getPosition()));
           serializedTimeseriesMetadataNum = 0;
-
-          timeseriesMetadata.serializeTo(out.wrapAsStream());
-          serializedTimeseriesMetadataNum++;
-          for (int j = 0; j < numOfValueColumns; j++) {
-            i++;
-            timeseriesMetadata = entry.getValue().get(i);
-            // value columns of vector should not be added into LEAF_MEASUREMENT node
-            timeseriesMetadata.serializeTo(out.wrapAsStream());
-            serializedTimeseriesMetadataNum++;
-          }
-        } else {
-          // when constructing from leaf node, every "degree number of nodes" are related to an
-          // entry
-          if (serializedTimeseriesMetadataNum == 0
-              || serializedTimeseriesMetadataNum >= config.getMaxDegreeOfIndexNode()) {
-            if (currentIndexNode.isFull()) {
-              addCurrentIndexNodeToQueue(currentIndexNode, measurementMetadataIndexQueue, out);
-              currentIndexNode = new MetadataIndexNode(MetadataIndexNodeType.LEAF_MEASUREMENT);
-            }
-            currentIndexNode.addEntry(
-                new MetadataIndexEntry(timeseriesMetadata.getMeasurementId(), out.getPosition()));
-            serializedTimeseriesMetadataNum = 0;
-          }
-          timeseriesMetadata.serializeTo(out.wrapAsStream());
-          serializedTimeseriesMetadataNum++;
         }
+        timeseriesMetadata.serializeTo(out.wrapAsStream());
+        serializedTimeseriesMetadataNum++;
       }
       addCurrentIndexNodeToQueue(currentIndexNode, measurementMetadataIndexQueue, out);
       deviceMetadataIndexMap.put(
@@ -127,21 +99,21 @@ public class MetadataIndexConstructor {
     }
 
     // else, build level index for devices
-    Queue<MetadataIndexNode> deviceMetadaIndexQueue = new ArrayDeque<>();
+    Queue<MetadataIndexNode> deviceMetadataIndexQueue = new ArrayDeque<>();
     MetadataIndexNode currentIndexNode = new MetadataIndexNode(MetadataIndexNodeType.LEAF_DEVICE);
 
     for (Map.Entry<String, MetadataIndexNode> entry : deviceMetadataIndexMap.entrySet()) {
       // when constructing from internal node, each node is related to an entry
       if (currentIndexNode.isFull()) {
-        addCurrentIndexNodeToQueue(currentIndexNode, deviceMetadaIndexQueue, out);
+        addCurrentIndexNodeToQueue(currentIndexNode, deviceMetadataIndexQueue, out);
         currentIndexNode = new MetadataIndexNode(MetadataIndexNodeType.LEAF_DEVICE);
       }
       currentIndexNode.addEntry(new MetadataIndexEntry(entry.getKey(), out.getPosition()));
       entry.getValue().serializeTo(out.wrapAsStream());
     }
-    addCurrentIndexNodeToQueue(currentIndexNode, deviceMetadaIndexQueue, out);
+    addCurrentIndexNodeToQueue(currentIndexNode, deviceMetadataIndexQueue, out);
     MetadataIndexNode deviceMetadataIndexNode =
-        generateRootNode(deviceMetadaIndexQueue, out, MetadataIndexNodeType.INTERNAL_DEVICE);
+        generateRootNode(deviceMetadataIndexQueue, out, MetadataIndexNodeType.INTERNAL_DEVICE);
     deviceMetadataIndexNode.setEndOffset(out.getPosition());
     return deviceMetadataIndexNode;
   }
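
After this simplification, a LEAF_MEASUREMENT entry is opened for a device's first TimeseriesMetadata and then again after every maxDegreeOfIndexNode serialized ones, whether or not a series belongs to a vector. A small sketch of the resulting grouping arithmetic (hypothetical helper, not part of the patch):

class LeafEntrySketch {
  /** Index of the leaf entry that the n-th serialized TimeseriesMetadata (0-based) falls under. */
  static int entryIndexFor(int n, int maxDegreeOfIndexNode) {
    return n / maxDegreeOfIndexNode;
  }

  public static void main(String[] args) {
    // with maxDegreeOfIndexNode = 10: metadata 0..9 share entry 0, 10..19 share entry 1, ...
    for (int n : new int[] {0, 9, 10, 19, 20}) {
      System.out.println("metadata " + n + " -> entry " + entryIndexFor(n, 10));
    }
  }
}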
diff --git a/tsfile/src/main/java/org/apache/iotdb/tsfile/read/TsFileSequenceReader.java b/tsfile/src/main/java/org/apache/iotdb/tsfile/read/TsFileSequenceReader.java
index a6253f4..5b5e23a 100644
--- a/tsfile/src/main/java/org/apache/iotdb/tsfile/read/TsFileSequenceReader.java
+++ b/tsfile/src/main/java/org/apache/iotdb/tsfile/read/TsFileSequenceReader.java
@@ -647,7 +647,10 @@ public class TsFileSequenceReader implements AutoCloseable {
             .computeIfAbsent(deviceId, k -> new ArrayList<>())
             .addAll(timeseriesMetadataList);
       } else {
-        deviceId = metadataIndex.getName();
+        // deviceId should be determined by LEAF_DEVICE node
+        if (type.equals(MetadataIndexNodeType.LEAF_DEVICE)) {
+          deviceId = metadataIndex.getName();
+        }
         MetadataIndexNode metadataIndexNode = MetadataIndexNode.deserializeFrom(buffer);
         int metadataIndexListSize = metadataIndexNode.getChildren().size();
         for (int i = 0; i < metadataIndexListSize; i++) {
@@ -1267,6 +1270,18 @@ public class TsFileSequenceReader implements AutoCloseable {
   }
 
   /**
+   * Get the MetadataIndexNode stored between the given offsets.
+   *
+   * @param startOffset start read offset
+   * @param endOffset end read offset
+   * @return MetadataIndexNode
+   */
+  public MetadataIndexNode getMetadataIndexNode(long startOffset, long endOffset)
+      throws IOException {
+    return MetadataIndexNode.deserializeFrom(readData(startOffset, endOffset));
+  }
+
+  /**
    * Check if the device has at least one Chunk in this partition
    *
    * @param seriesMetadataMap chunkMetaDataList of each measurement
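
A hypothetical usage sketch of the new accessor, descending one level from the index root (readFileMetadata and the getters are the ones already used elsewhere in this patch):

import java.io.IOException;

import org.apache.iotdb.tsfile.file.metadata.MetadataIndexNode;
import org.apache.iotdb.tsfile.read.TsFileSequenceReader;

class ReadChildNodeSketch {
  static MetadataIndexNode firstChild(TsFileSequenceReader reader) throws IOException {
    MetadataIndexNode root = reader.readFileMetadata().getMetadataIndex();
    long start = root.getChildren().get(0).getOffset();
    long end =
        root.getChildren().size() > 1
            ? root.getChildren().get(1).getOffset() // bounded by the next sibling
            : root.getEndOffset(); // or by the parent if it is the only child
    return reader.getMetadataIndexNode(start, end);
  }
}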
diff --git a/tsfile/src/main/java/org/apache/iotdb/tsfile/read/common/Path.java b/tsfile/src/main/java/org/apache/iotdb/tsfile/read/common/Path.java
index 86c00a7..2ebae9d 100644
--- a/tsfile/src/main/java/org/apache/iotdb/tsfile/read/common/Path.java
+++ b/tsfile/src/main/java/org/apache/iotdb/tsfile/read/common/Path.java
@@ -133,6 +133,10 @@ public class Path implements Serializable, Comparable<Path> {
     throw new IllegalArgumentException("doesn't alias in TSFile Path");
   }
 
+  public void setMeasurement(String measurement) {
+    this.measurement = measurement;
+  }
+
   @Override
   public int hashCode() {
     return fullPath.hashCode();
diff --git a/tsfile/src/main/java/org/apache/iotdb/tsfile/write/chunk/ChunkGroupWriterImpl.java b/tsfile/src/main/java/org/apache/iotdb/tsfile/write/chunk/ChunkGroupWriterImpl.java
index 10e3044..65bc9c7 100644
--- a/tsfile/src/main/java/org/apache/iotdb/tsfile/write/chunk/ChunkGroupWriterImpl.java
+++ b/tsfile/src/main/java/org/apache/iotdb/tsfile/write/chunk/ChunkGroupWriterImpl.java
@@ -26,6 +26,8 @@ import org.apache.iotdb.tsfile.utils.Binary;
 import org.apache.iotdb.tsfile.write.record.Tablet;
 import org.apache.iotdb.tsfile.write.record.datapoint.DataPoint;
 import org.apache.iotdb.tsfile.write.schema.IMeasurementSchema;
+import org.apache.iotdb.tsfile.write.schema.MeasurementSchema;
+import org.apache.iotdb.tsfile.write.schema.VectorMeasurementSchema;
 import org.apache.iotdb.tsfile.write.writer.TsFileIOWriter;
 
 import org.slf4j.Logger;
@@ -53,7 +55,13 @@ public class ChunkGroupWriterImpl implements IChunkGroupWriter {
   @Override
   public void tryToAddSeriesWriter(IMeasurementSchema schema, int pageSizeThreshold) {
     if (!chunkWriters.containsKey(schema.getMeasurementId())) {
-      IChunkWriter seriesWriter = new ChunkWriterImpl(schema);
+      IChunkWriter seriesWriter = null;
+      // initialize depending on the schema type
+      if (schema instanceof VectorMeasurementSchema) {
+        seriesWriter = new VectorChunkWriterImpl(schema);
+      } else if (schema instanceof MeasurementSchema) {
+        seriesWriter = new ChunkWriterImpl(schema);
+      }
       this.chunkWriters.put(schema.getMeasurementId(), seriesWriter);
     }
   }
@@ -79,10 +87,75 @@ public class ChunkGroupWriterImpl implements IChunkGroupWriter {
       if (!chunkWriters.containsKey(measurementId)) {
         throw new NoMeasurementException("measurement id" + measurementId + " not found!");
       }
-      writeByDataType(tablet, measurementId, dataType, i);
+      if (dataType.equals(TSDataType.VECTOR)) {
+        writeVectorDataType(tablet, measurementId, i);
+      } else {
+        writeByDataType(tablet, measurementId, dataType, i);
+      }
+    }
+  }
+
+  /**
+   * Write values whose data type is VECTOR. For every row, this method writes the n value
+   * columns that belong to one vector, followed by the row's timestamp.
+   *
+   * @param tablet tablet containing the timestamps and values
+   * @param measurement name of the vector measurement
+   * @param index index of the vector measurement's schema in the tablet
+   */
+  private void writeVectorDataType(Tablet tablet, String measurement, int index) {
+    // reference: MemTableFlushTask.java
+    int batchSize = tablet.rowSize;
+    VectorMeasurementSchema vectorMeasurementSchema =
+        (VectorMeasurementSchema) tablet.getSchemas().get(index);
+    List<TSDataType> valueDataTypes = vectorMeasurementSchema.getValueTSDataTypeList();
+    IChunkWriter vectorChunkWriter = chunkWriters.get(measurement);
+    for (int row = 0; row < batchSize; row++) {
+      long time = tablet.timestamps[row];
+      for (int columnIndex = 0; columnIndex < valueDataTypes.size(); columnIndex++) {
+        boolean isNull = false;
+        // check isNull by bitMap in tablet
+        if (tablet.bitMaps != null
+            && tablet.bitMaps[columnIndex] != null
+            && tablet.bitMaps[columnIndex].isMarked(row)) {
+          isNull = true;
+        }
+        switch (valueDataTypes.get(columnIndex)) {
+          case BOOLEAN:
+            vectorChunkWriter.write(time, ((boolean[]) tablet.values[columnIndex])[row], isNull);
+            break;
+          case INT32:
+            vectorChunkWriter.write(time, ((int[]) tablet.values[columnIndex])[row], isNull);
+            break;
+          case INT64:
+            vectorChunkWriter.write(time, ((long[]) tablet.values[columnIndex])[row], isNull);
+            break;
+          case FLOAT:
+            vectorChunkWriter.write(time, ((float[]) tablet.values[columnIndex])[row], isNull);
+            break;
+          case DOUBLE:
+            vectorChunkWriter.write(time, ((double[]) tablet.values[columnIndex])[row], isNull);
+            break;
+          case TEXT:
+            vectorChunkWriter.write(time, ((Binary[]) tablet.values[columnIndex])[row], isNull);
+            break;
+          default:
+            throw new UnSupportedDataTypeException(
+                String.format("Data type %s is not supported.", valueDataTypes.get(columnIndex)));
+        }
+      }
+      vectorChunkWriter.write(time);
     }
   }
 
+  /**
+   * Write values of a primitive data type. The dataType must not be VECTOR; vector columns are
+   * handled by writeVectorDataType.
+   *
+   * @param tablet tablet containing the timestamps and values
+   * @param measurementId current measurement
+   * @param dataType current data type
+   * @param index index of the column whose values should be written
+   */
   private void writeByDataType(
       Tablet tablet, String measurementId, TSDataType dataType, int index) {
     int batchSize = tablet.rowSize;
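
The isNull flag in writeVectorDataType above is driven by Tablet.bitMaps. A minimal sketch of how a caller could mark a null cell before handing the tablet over, assuming the public bitMaps field read by the patch and a BitMap(int)/mark(int) API:

import org.apache.iotdb.tsfile.utils.BitMap;
import org.apache.iotdb.tsfile.write.record.Tablet;

class NullMarkingSketch {
  /** Marks (columnIndex, row) as null; isMarked(row) will then report true for that column. */
  static void markNull(Tablet tablet, int columnCount, int columnIndex, int row) {
    if (tablet.bitMaps == null) {
      tablet.bitMaps = new BitMap[columnCount];
    }
    if (tablet.bitMaps[columnIndex] == null) {
      tablet.bitMaps[columnIndex] = new BitMap(tablet.getMaxRowNumber());
    }
    tablet.bitMaps[columnIndex].mark(row);
  }
}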
diff --git a/tsfile/src/main/java/org/apache/iotdb/tsfile/write/chunk/VectorChunkWriterImpl.java b/tsfile/src/main/java/org/apache/iotdb/tsfile/write/chunk/VectorChunkWriterImpl.java
index 8f1e907..83faad4 100644
--- a/tsfile/src/main/java/org/apache/iotdb/tsfile/write/chunk/VectorChunkWriterImpl.java
+++ b/tsfile/src/main/java/org/apache/iotdb/tsfile/write/chunk/VectorChunkWriterImpl.java
@@ -18,6 +18,7 @@
  */
 package org.apache.iotdb.tsfile.write.chunk;
 
+import org.apache.iotdb.tsfile.common.constant.TsFileConstant;
 import org.apache.iotdb.tsfile.encoding.encoder.Encoder;
 import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
 import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
@@ -53,7 +54,9 @@ public class VectorChunkWriterImpl implements IChunkWriter {
     for (int i = 0; i < valueMeasurementIdList.size(); i++) {
       valueChunkWriterList.add(
           new ValueChunkWriter(
-              valueMeasurementIdList.get(i),
+              schema.getMeasurementId()
+                  + TsFileConstant.PATH_SEPARATOR
+                  + valueMeasurementIdList.get(i),
               schema.getCompressor(),
               valueTSDataTypeList.get(i),
               valueTSEncodingList.get(i),
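
The effect of this change is that a value chunk is stored under its full sub-path instead of the bare sub-measurement name. A one-method sketch of the id construction (TsFileConstant.PATH_SEPARATOR is "."):

import org.apache.iotdb.tsfile.common.constant.TsFileConstant;

class ValueChunkIdSketch {
  /** e.g. valueChunkId("vectorName", "s1") -> "vectorName.s1" */
  static String valueChunkId(String vectorId, String subMeasurementId) {
    return vectorId + TsFileConstant.PATH_SEPARATOR + subMeasurementId;
  }
}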
diff --git a/tsfile/src/main/java/org/apache/iotdb/tsfile/write/writer/TsFileIOWriter.java b/tsfile/src/main/java/org/apache/iotdb/tsfile/write/writer/TsFileIOWriter.java
index fb43fe4..0c293af 100644
--- a/tsfile/src/main/java/org/apache/iotdb/tsfile/write/writer/TsFileIOWriter.java
+++ b/tsfile/src/main/java/org/apache/iotdb/tsfile/write/writer/TsFileIOWriter.java
@@ -318,6 +318,9 @@ public class TsFileIOWriter {
   /**
    * Flush TsFileMetadata, including ChunkMetadataList and TimeseriesMetaData
    *
+   * @param chunkMetadataListMap chunkMetadata of plain (non-vector) series, i.e. Path.mask == 0
+   * @param vectorToPathsMap key is the vector's time-column Path, value is the
+   *     chunkMetadataListMap of its value columns
    * @return MetadataIndexEntry list in TsFileMetadata
    */
   private MetadataIndexNode flushMetadataIndex(
@@ -337,6 +340,13 @@ public class TsFileIOWriter {
     return MetadataIndexConstructor.constructMetadataIndex(deviceTimeseriesMetadataMap, out);
   }
 
+  /**
+   * Flush one chunkMetadata
+   *
+   * @param path Path of the chunk
+   * @param chunkMetadataList list of chunkMetadata for the given path
+   * @param vectorToPathsMap key is the vector's time-column Path, value is the
+   *     chunkMetadataListMap of its value columns
+   */
   private void flushOneChunkMetadata(
       Path path,
       List<IChunkMetadata> chunkMetadataList,
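
For the two javadocs above, the shapes being described are roughly the following (types assumed from the parameter names; they are not spelled out in this hunk):

import java.util.List;
import java.util.Map;

import org.apache.iotdb.tsfile.file.metadata.IChunkMetadata;
import org.apache.iotdb.tsfile.read.common.Path;

class FlushInputShapesSketch {
  // plain (non-vector) series, i.e. Path.mask == 0
  Map<Path, List<IChunkMetadata>> chunkMetadataListMap;
  // vector time-column Path -> the chunkMetadataListMap of its value columns
  Map<Path, Map<Path, List<IChunkMetadata>>> vectorToPathsMap;
}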
diff --git a/tsfile/src/test/java/org/apache/iotdb/tsfile/write/MetadataIndexConstructorTest.java b/tsfile/src/test/java/org/apache/iotdb/tsfile/write/MetadataIndexConstructorTest.java
new file mode 100644
index 0000000..caf80c2
--- /dev/null
+++ b/tsfile/src/test/java/org/apache/iotdb/tsfile/write/MetadataIndexConstructorTest.java
@@ -0,0 +1,478 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.tsfile.write;
+
+import org.apache.iotdb.tsfile.common.conf.TSFileConfig;
+import org.apache.iotdb.tsfile.common.conf.TSFileDescriptor;
+import org.apache.iotdb.tsfile.common.constant.TsFileConstant;
+import org.apache.iotdb.tsfile.constant.TestConstant;
+import org.apache.iotdb.tsfile.file.metadata.MetadataIndexEntry;
+import org.apache.iotdb.tsfile.file.metadata.MetadataIndexNode;
+import org.apache.iotdb.tsfile.file.metadata.TimeseriesMetadata;
+import org.apache.iotdb.tsfile.file.metadata.TsFileMetadata;
+import org.apache.iotdb.tsfile.file.metadata.enums.MetadataIndexNodeType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
+import org.apache.iotdb.tsfile.fileSystem.FSFactoryProducer;
+import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.write.record.TSRecord;
+import org.apache.iotdb.tsfile.write.record.Tablet;
+import org.apache.iotdb.tsfile.write.record.datapoint.DataPoint;
+import org.apache.iotdb.tsfile.write.record.datapoint.LongDataPoint;
+import org.apache.iotdb.tsfile.write.schema.IMeasurementSchema;
+import org.apache.iotdb.tsfile.write.schema.MeasurementSchema;
+import org.apache.iotdb.tsfile.write.schema.Schema;
+import org.apache.iotdb.tsfile.write.schema.VectorMeasurementSchema;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.testcontainers.shaded.org.apache.commons.lang.text.StrBuilder;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/** test for MetadataIndexConstructor */
+public class MetadataIndexConstructorTest {
+  private static final Logger logger = LoggerFactory.getLogger(MetadataIndexConstructorTest.class);
+  private final TSFileConfig conf = TSFileDescriptor.getInstance().getConfig();
+  private static final String FILE_PATH =
+      TestConstant.BASE_OUTPUT_PATH.concat("MetadataIndexConstructorTest.tsfile");
+
+  private static final String measurementPrefix = "sensor_";
+  private static final String vectorPrefix = "vector_";
+  private int maxDegreeOfIndexNode;
+
+  @Before
+  public void before() {
+    maxDegreeOfIndexNode = conf.getMaxDegreeOfIndexNode();
+    conf.setMaxDegreeOfIndexNode(10);
+  }
+
+  @After
+  public void after() {
+    conf.setMaxDegreeOfIndexNode(maxDegreeOfIndexNode);
+  }
+
+  /** Example 1: 5 entities with 5 measurements each */
+  @Test
+  public void singleIndexTest1() {
+    int deviceNum = 5;
+    int measurementNum = 5;
+    String[] devices = new String[deviceNum];
+    int[][] vectorMeasurement = new int[deviceNum][];
+    String[][] singleMeasurement = new String[deviceNum][];
+    for (int i = 0; i < deviceNum; i++) {
+      devices[i] = "d" + i;
+      vectorMeasurement[i] = new int[0];
+      singleMeasurement[i] = new String[measurementNum];
+      for (int j = 0; j < measurementNum; j++) {
+        singleMeasurement[i][j] = measurementPrefix + generateIndexString(j, measurementNum);
+      }
+    }
+    test(devices, vectorMeasurement, singleMeasurement);
+  }
+
+  /** Example 2: 1 entity with 150 measurements */
+  @Test
+  public void singleIndexTest2() {
+    int deviceNum = 1;
+    int measurementNum = 150;
+    String[] devices = new String[deviceNum];
+    int[][] vectorMeasurement = new int[deviceNum][];
+    String[][] singleMeasurement = new String[deviceNum][];
+    for (int i = 0; i < deviceNum; i++) {
+      devices[i] = "d" + i;
+      vectorMeasurement[i] = new int[0];
+      singleMeasurement[i] = new String[measurementNum];
+      for (int j = 0; j < measurementNum; j++) {
+        singleMeasurement[i][j] = measurementPrefix + generateIndexString(j, measurementNum);
+      }
+    }
+    test(devices, vectorMeasurement, singleMeasurement);
+  }
+
+  /** Example 3: 150 entities with 1 measurement each */
+  @Test
+  public void singleIndexTest3() {
+    int deviceNum = 150;
+    int measurementNum = 1;
+    String[] devices = new String[deviceNum];
+    int[][] vectorMeasurement = new int[deviceNum][];
+    String[][] singleMeasurement = new String[deviceNum][];
+    for (int i = 0; i < deviceNum; i++) {
+      devices[i] = "d" + generateIndexString(i, deviceNum);
+      vectorMeasurement[i] = new int[0];
+      singleMeasurement[i] = new String[measurementNum];
+      for (int j = 0; j < measurementNum; j++) {
+        singleMeasurement[i][j] = measurementPrefix + generateIndexString(j, measurementNum);
+      }
+    }
+    test(devices, vectorMeasurement, singleMeasurement);
+  }
+
+  /** Example 4: 150 entities with 150 measurements each */
+  @Test
+  public void singleIndexTest4() {
+    int deviceNum = 150;
+    int measurementNum = 150;
+    String[] devices = new String[deviceNum];
+    int[][] vectorMeasurement = new int[deviceNum][];
+    String[][] singleMeasurement = new String[deviceNum][];
+    for (int i = 0; i < deviceNum; i++) {
+      devices[i] = "d" + i;
+      vectorMeasurement[i] = new int[0];
+      singleMeasurement[i] = new String[measurementNum];
+      for (int j = 0; j < measurementNum; j++) {
+        singleMeasurement[i][j] = measurementPrefix + generateIndexString(j, measurementNum);
+      }
+    }
+    test(devices, vectorMeasurement, singleMeasurement);
+  }
+
+  /** Example 5: 1 entity with 2 vectors, 9 measurements in each vector */
+  @Test
+  public void vectorIndexTest1() {
+    String[] devices = {"d0"};
+    int[][] vectorMeasurement = {{9, 9}};
+    test(devices, vectorMeasurement, null);
+  }
+
+  /** Example 6: 1 entity with 2 vectors, 15 measurements in each vector */
+  @Test
+  public void vectorIndexTest2() {
+    String[] devices = {"d0"};
+    int[][] vectorMeasurement = {{15, 15}};
+    test(devices, vectorMeasurement, null);
+  }
+
+  /**
+   * Example 7: 2 entities whose measurements are:
+   *
+   * <p>d0: s0~s4, v0.(s0~s8), z0~z3
+   *
+   * <p>d1: s00~s14, v0.(s0~s3)
+   */
+  @Test
+  public void compositeIndexTest() {
+    String[] devices = {"d0", "d1"};
+    int[][] vectorMeasurement = {{9}, {4}};
+    String[][] singleMeasurement = {
+      {"s0", "s1", "s2", "s3", "s4", "z0", "z1", "z2", "z3"},
+      {
+        "s00", "s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08", "s09", "s10", "s11", "s12",
+        "s13", "s14"
+      }
+    };
+    test(devices, vectorMeasurement, singleMeasurement);
+  }
+
+  /**
+   * run the test against the given layout
+   *
+   * @param devices device names
+   * @param vectorMeasurement vectorMeasurement[i][j] is the number of sub-measurements of the
+   *     j-th vector of device i
+   * @param singleMeasurement non-vector measurement names of each device; set null if not needed
+   */
+  private void test(String[] devices, int[][] vectorMeasurement, String[][] singleMeasurement) {
+    // 1. generate file
+    generateFile(devices, vectorMeasurement, singleMeasurement);
+    // 2. read metadata from file
+    List<String> actualDevices = new ArrayList<>(); // contains all devices in order
+    List<List<String>> actualMeasurements =
+        new ArrayList<>(); // contains all measurements grouped by device
+    readMetaDataDFS(actualDevices, actualMeasurements);
+    // 3. generate correct result
+    List<String> correctDevices = new ArrayList<>(); // contains all devices in order
+    List<List<String>> correctFirstMeasurements =
+        new ArrayList<>(); // contains the first measurement of every leaf, grouped by device
+    generateCorrectResult(
+        correctDevices, correctFirstMeasurements, devices, vectorMeasurement, singleMeasurement);
+    // 4. compare correct result with TsFile's metadata
+    Arrays.sort(devices);
+    // 4.1 make sure devices are in order
+    assertEquals(correctDevices.size(), devices.length);
+    assertEquals(actualDevices.size(), correctDevices.size());
+    for (int i = 0; i < actualDevices.size(); i++) {
+      assertEquals(actualDevices.get(i), correctDevices.get(i));
+    }
+    // 4.2 make sure timeseries are in order
+    try (TsFileSequenceReader reader = new TsFileSequenceReader(FILE_PATH)) {
+      Map<String, List<TimeseriesMetadata>> allTimeseriesMetadata =
+          reader.getAllTimeseriesMetadata();
+      for (int j = 0; j < actualDevices.size(); j++) {
+        for (int i = 0; i < actualMeasurements.get(j).size(); i++) {
+          assertEquals(
+              allTimeseriesMetadata.get(actualDevices.get(j)).get(i).getMeasurementId(),
+              correctFirstMeasurements.get(j).get(i));
+        }
+      }
+    } catch (IOException e) {
+      e.printStackTrace();
+      fail(e.getMessage());
+    }
+    // 4.3 make sure leaves are split correctly
+    for (int j = 0; j < actualDevices.size(); j++) {
+      for (int i = 0; i < actualMeasurements.get(j).size(); i++) {
+        assertEquals(
+            actualMeasurements.get(j).get(i),
+            correctFirstMeasurements.get(j).get(i * conf.getMaxDegreeOfIndexNode()));
+      }
+    }
+  }
+
+  /**
+   * read TsFile metadata and load the actual devices and measurements
+   *
+   * @param devices output: actual devices
+   * @param measurements output: actual measurements (first entry of every leaf)
+   */
+  private void readMetaDataDFS(List<String> devices, List<List<String>> measurements) {
+    try (TsFileSequenceReader reader = new TsFileSequenceReader(FILE_PATH)) {
+      TsFileMetadata tsFileMetaData = reader.readFileMetadata();
+      MetadataIndexNode metadataIndexNode = tsFileMetaData.getMetadataIndex();
+      deviceDFS(devices, measurements, reader, metadataIndexNode);
+    } catch (IOException e) {
+      e.printStackTrace();
+      fail(e.getMessage());
+    }
+  }
+
+  /** DFS at the device level to load the actual devices */
+  private void deviceDFS(
+      List<String> devices,
+      List<List<String>> measurements,
+      TsFileSequenceReader reader,
+      MetadataIndexNode node) {
+    try {
+      assertTrue(
+          node.getNodeType().equals(MetadataIndexNodeType.LEAF_DEVICE)
+              || node.getNodeType().equals(MetadataIndexNodeType.INTERNAL_DEVICE));
+      for (int i = 0; i < node.getChildren().size(); i++) {
+        MetadataIndexEntry metadataIndexEntry = node.getChildren().get(i);
+        long endOffset = node.getEndOffset();
+        if (i != node.getChildren().size() - 1) {
+          endOffset = node.getChildren().get(i + 1).getOffset();
+        }
+        MetadataIndexNode subNode =
+            reader.getMetadataIndexNode(metadataIndexEntry.getOffset(), endOffset);
+        if (node.getNodeType().equals(MetadataIndexNodeType.LEAF_DEVICE)) {
+          devices.add(metadataIndexEntry.getName());
+          measurements.add(new ArrayList<>());
+          measurementDFS(devices.size() - 1, measurements, reader, subNode);
+        } else if (node.getNodeType().equals(MetadataIndexNodeType.INTERNAL_DEVICE)) {
+          deviceDFS(devices, measurements, reader, subNode);
+        }
+      }
+    } catch (IOException e) {
+      e.printStackTrace();
+      fail(e.getMessage());
+    }
+  }
+
+  /** DFS at the measurement level to load the actual measurements */
+  private void measurementDFS(
+      int deviceIndex,
+      List<List<String>> measurements,
+      TsFileSequenceReader reader,
+      MetadataIndexNode node) {
+
+    try {
+      assertTrue(
+          node.getNodeType().equals(MetadataIndexNodeType.LEAF_MEASUREMENT)
+              || node.getNodeType().equals(MetadataIndexNodeType.INTERNAL_MEASUREMENT));
+      for (int i = 0; i < node.getChildren().size(); i++) {
+        MetadataIndexEntry metadataIndexEntry = node.getChildren().get(i);
+        long endOffset = node.getEndOffset();
+        if (i != node.getChildren().size() - 1) {
+          endOffset = node.getChildren().get(i + 1).getOffset();
+        }
+        if (node.getNodeType().equals(MetadataIndexNodeType.LEAF_MEASUREMENT)) {
+          // record the first measurement of every leaf node
+          measurements.get(deviceIndex).add(metadataIndexEntry.getName());
+        } else if (node.getNodeType().equals(MetadataIndexNodeType.INTERNAL_MEASUREMENT)) {
+          MetadataIndexNode subNode =
+              reader.getMetadataIndexNode(metadataIndexEntry.getOffset(), endOffset);
+          measurementDFS(deviceIndex, measurements, reader, subNode);
+        }
+      }
+    } catch (IOException e) {
+      e.printStackTrace();
+      fail(e.getMessage());
+    }
+  }
+
+  /**
+   * generate the correct devices and measurements for the test. Note that if the metadata index
+   * tree is re-designed, this function may need to be modified as well.
+   *
+   * @param correctDevices output
+   * @param correctMeasurements output
+   * @param devices input
+   * @param vectorMeasurement input
+   * @param singleMeasurement input
+   */
+  private void generateCorrectResult(
+      List<String> correctDevices,
+      List<List<String>> correctMeasurements,
+      String[] devices,
+      int[][] vectorMeasurement,
+      String[][] singleMeasurement) {
+    for (int i = 0; i < devices.length; i++) {
+      String device = devices[i];
+      correctDevices.add(device);
+      // generate measurement and sort
+      List<String> measurements = new ArrayList<>();
+      // single-variable measurement
+      if (singleMeasurement != null) {
+        measurements.addAll(Arrays.asList(singleMeasurement[i]));
+      }
+      // multi-variable measurement
+      for (int vectorIndex = 0; vectorIndex < vectorMeasurement[i].length; vectorIndex++) {
+        String vectorName =
+            vectorPrefix + generateIndexString(vectorIndex, vectorMeasurement.length);
+        measurements.add(vectorName);
+        int measurementNum = vectorMeasurement[i][vectorIndex];
+        for (int measurementIndex = 0; measurementIndex < measurementNum; measurementIndex++) {
+          String measurementName =
+              measurementPrefix + generateIndexString(measurementIndex, measurementNum);
+          measurements.add(vectorName + TsFileConstant.PATH_SEPARATOR + measurementName);
+        }
+      }
+      Collections.sort(measurements);
+      correctMeasurements.add(measurements);
+    }
+    Collections.sort(correctDevices);
+  }
+
+  /**
+   * generate a TsFile with the given layout
+   *
+   * @param devices device names
+   * @param vectorMeasurement vectorMeasurement[i][j] is the number of sub-measurements of the
+   *     j-th vector of device i
+   * @param singleMeasurement non-vector measurement names of each device; set null if not needed
+   */
+  private void generateFile(
+      String[] devices, int[][] vectorMeasurement, String[][] singleMeasurement) {
+    File f = FSFactoryProducer.getFSFactory().getFile(FILE_PATH);
+    if (f.exists() && !f.delete()) {
+      fail("can not delete " + f.getAbsolutePath());
+    }
+    Schema schema = new Schema();
+    try (TsFileWriter tsFileWriter = new TsFileWriter(f, schema)) {
+      // write single-variable timeseries
+      if (singleMeasurement != null) {
+        for (int i = 0; i < singleMeasurement.length; i++) {
+          String device = devices[i];
+          for (String measurement : singleMeasurement[i]) {
+            tsFileWriter.registerTimeseries(
+                new Path(device, measurement),
+                new MeasurementSchema(measurement, TSDataType.INT64, TSEncoding.RLE));
+          }
+          // the number of record rows
+          int rowNum = 10;
+          for (int row = 0; row < rowNum; row++) {
+            TSRecord tsRecord = new TSRecord(row, device);
+            for (String measurement : singleMeasurement[i]) {
+              DataPoint dPoint = new LongDataPoint(measurement, row);
+              tsRecord.addTuple(dPoint);
+            }
+            if (tsRecord.dataPointList.size() > 0) {
+              tsFileWriter.write(tsRecord);
+            }
+          }
+        }
+      }
+
+      // write multi-variable timeseries
+      for (int i = 0; i < devices.length; i++) {
+        String device = devices[i];
+        logger.info("generating device {}...", device);
+        // the number of rows to include in the tablet
+        int rowNum = 10;
+        for (int vectorIndex = 0; vectorIndex < vectorMeasurement[i].length; vectorIndex++) {
+          String vectorName =
+              vectorPrefix + generateIndexString(vectorIndex, vectorMeasurement.length);
+          logger.info("generating vector {}...", vectorName);
+          List<IMeasurementSchema> measurementSchemas = new ArrayList<>();
+          int measurementNum = vectorMeasurement[i][vectorIndex];
+          String[] measurementNames = new String[measurementNum];
+          TSDataType[] dataTypes = new TSDataType[measurementNum];
+          for (int measurementIndex = 0; measurementIndex < measurementNum; measurementIndex++) {
+            String measurementName =
+                measurementPrefix + generateIndexString(measurementIndex, measurementNum);
+            logger.info("generating vector measurement {}...", measurementName);
+            // add measurements into file schema (all with INT64 data type)
+            measurementNames[measurementIndex] = measurementName;
+            dataTypes[measurementIndex] = TSDataType.INT64;
+          }
+          IMeasurementSchema measurementSchema =
+              new VectorMeasurementSchema(vectorName, measurementNames, dataTypes);
+          measurementSchemas.add(measurementSchema);
+          schema.registerTimeseries(new Path(device, vectorName), measurementSchema);
+          // add measurements into TSFileWriter
+          // construct the tablet
+          Tablet tablet = new Tablet(device, measurementSchemas);
+          long[] timestamps = tablet.timestamps;
+          Object[] values = tablet.values;
+          long timestamp = 1;
+          long value = 1000000L;
+          for (int r = 0; r < rowNum; r++, value++) {
+            int row = tablet.rowSize++;
+            timestamps[row] = timestamp++;
+            for (int j = 0; j < measurementNum; j++) {
+              long[] sensor = (long[]) values[j];
+              sensor[row] = value;
+            }
+            // write Tablet to TsFile
+            if (tablet.rowSize == tablet.getMaxRowNumber()) {
+              tsFileWriter.write(tablet);
+              tablet.reset();
+            }
+          }
+          // write Tablet to TsFile
+          if (tablet.rowSize != 0) {
+            tsFileWriter.write(tablet);
+            tablet.reset();
+          }
+        }
+      }
+    } catch (Exception e) {
+      logger.error("meet error in TsFileWrite with tablet", e);
+      fail(e.getMessage());
+    }
+  }
+
+  /**
+   * generate the string of curIndex, left-padded with "0" to the width of maxIndex so that
+   * lexicographic order matches numeric order
+   *
+   * @param curIndex current index
+   * @param maxIndex max index
+   * @return curIndex as a zero-padded string
+   */
+  private String generateIndexString(int curIndex, int maxIndex) {
+    StrBuilder res = new StrBuilder(String.valueOf(curIndex));
+    String target = String.valueOf(maxIndex);
+    while (res.length() < target.length()) {
+      res.insert(0, "0");
+    }
+    return res.toString();
+  }
+}
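
generateIndexString pads on the left so that lexicographic order matches numeric order ("007" < "010" < "100"), which keeps the generated names sorted the same way TsFile sorts them. An equivalent sketch using String.format:

class IndexStringSketch {
  /** e.g. generateIndexString(7, 150) -> "007" */
  static String generateIndexString(int curIndex, int maxIndex) {
    int width = String.valueOf(maxIndex).length();
    return String.format("%0" + width + "d", curIndex);
  }
}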
diff --git a/tsfile/src/test/java/org/apache/iotdb/tsfile/write/writer/VectorChunkWriterImplTest.java b/tsfile/src/test/java/org/apache/iotdb/tsfile/write/writer/VectorChunkWriterImplTest.java
index 3ca81b1..4beadb8 100644
--- a/tsfile/src/test/java/org/apache/iotdb/tsfile/write/writer/VectorChunkWriterImplTest.java
+++ b/tsfile/src/test/java/org/apache/iotdb/tsfile/write/writer/VectorChunkWriterImplTest.java
@@ -50,9 +50,11 @@ public class VectorChunkWriterImplTest {
     }
 
     chunkWriter.sealCurrentPage();
-    // time chunk: 14 + 4 + 160; value chunk 1: 8 + 2 + 4 + 3 + 80; value chunk 2: 8 + 2 + 4 + 3 +
-    // 20; value chunk 3: 9 + 4 + 7 + 20 * 8;
-    assertEquals(492L, chunkWriter.getCurrentChunkSize());
+    // time chunk: 17 + 4 + 160;
+    // value chunk 1: 19 + 2 + 4 + 3 + 80;
+    // value chunk 2: 19 + 2 + 4 + 3 + 20;
+    // value chunk 3: 20 + 4 + 7 + 20 * 8;
+    assertEquals(528, chunkWriter.getCurrentChunkSize());
 
     try {
       TestTsFileOutput testTsFileOutput = new TestTsFileOutput();
@@ -63,7 +65,7 @@ public class VectorChunkWriterImplTest {
       // time chunk
       assertEquals(
           (byte) (0x80 | MetaMarker.ONLY_ONE_PAGE_CHUNK_HEADER), ReadWriteIOUtils.readByte(buffer));
-      assertEquals("s1.time", ReadWriteIOUtils.readVarIntString(buffer));
+      assertEquals("vectorName", ReadWriteIOUtils.readVarIntString(buffer));
       assertEquals(164, ReadWriteForEncodingUtils.readUnsignedVarInt(buffer));
       assertEquals(TSDataType.VECTOR.serialize(), ReadWriteIOUtils.readByte(buffer));
       assertEquals(CompressionType.UNCOMPRESSED.serialize(), ReadWriteIOUtils.readByte(buffer));
@@ -72,7 +74,7 @@ public class VectorChunkWriterImplTest {
 
       // value chunk 1
       assertEquals(0x40 | MetaMarker.ONLY_ONE_PAGE_CHUNK_HEADER, ReadWriteIOUtils.readByte(buffer));
-      assertEquals("s1", ReadWriteIOUtils.readVarIntString(buffer));
+      assertEquals("vectorName.s1", ReadWriteIOUtils.readVarIntString(buffer));
       assertEquals(89, ReadWriteForEncodingUtils.readUnsignedVarInt(buffer));
       assertEquals(TSDataType.FLOAT.serialize(), ReadWriteIOUtils.readByte(buffer));
       assertEquals(CompressionType.UNCOMPRESSED.serialize(), ReadWriteIOUtils.readByte(buffer));
@@ -81,7 +83,7 @@ public class VectorChunkWriterImplTest {
 
       // value chunk 2
       assertEquals(0x40 | MetaMarker.ONLY_ONE_PAGE_CHUNK_HEADER, ReadWriteIOUtils.readByte(buffer));
-      assertEquals("s2", ReadWriteIOUtils.readVarIntString(buffer));
+      assertEquals("vectorName.s2", ReadWriteIOUtils.readVarIntString(buffer));
       assertEquals(29, ReadWriteForEncodingUtils.readUnsignedVarInt(buffer));
       assertEquals(TSDataType.INT32.serialize(), ReadWriteIOUtils.readByte(buffer));
       assertEquals(CompressionType.UNCOMPRESSED.serialize(), ReadWriteIOUtils.readByte(buffer));
@@ -90,7 +92,7 @@ public class VectorChunkWriterImplTest {
 
       // value chunk 3
       assertEquals(0x40 | MetaMarker.ONLY_ONE_PAGE_CHUNK_HEADER, ReadWriteIOUtils.readByte(buffer));
-      assertEquals("s3", ReadWriteIOUtils.readVarIntString(buffer));
+      assertEquals("vectorName.s3", ReadWriteIOUtils.readVarIntString(buffer));
       assertEquals(171, ReadWriteForEncodingUtils.readUnsignedVarInt(buffer));
       assertEquals(TSDataType.DOUBLE.serialize(), ReadWriteIOUtils.readByte(buffer));
       assertEquals(CompressionType.UNCOMPRESSED.serialize(), ReadWriteIOUtils.readByte(buffer));
@@ -122,11 +124,11 @@ public class VectorChunkWriterImplTest {
     }
     chunkWriter.sealCurrentPage();
 
-    // time chunk: 14 + (4 + 17 + 160) * 2
-    // value chunk 1: 9 + (2 + 41 + 4 + 3 + 80) * 2
-    // value chunk 2: 9 + (2 + 41 + 4 + 3 + 20) * 2
-    // value chunk 3: 9 + (4 + 57 + 4 + 3 + 160) * 2
-    assertEquals(1259L, chunkWriter.getCurrentChunkSize());
+    // time chunk: 17 + (4 + 17 + 160) * 2
+    // value chunk 1: 20 + (2 + 41 + 4 + 3 + 80) * 2
+    // value chunk 2: 20 + (2 + 41 + 4 + 3 + 20) * 2
+    // value chunk 3: 20 + (4 + 57 + 4 + 3 + 160) * 2
+    assertEquals(1295, chunkWriter.getCurrentChunkSize());
 
     try {
       TestTsFileOutput testTsFileOutput = new TestTsFileOutput();
@@ -136,7 +138,7 @@ public class VectorChunkWriterImplTest {
       ByteBuffer buffer = ByteBuffer.wrap(publicBAOS.getBuf(), 0, publicBAOS.size());
       // time chunk
       assertEquals((byte) (0x80 | MetaMarker.CHUNK_HEADER), ReadWriteIOUtils.readByte(buffer));
-      assertEquals("s1.time", ReadWriteIOUtils.readVarIntString(buffer));
+      assertEquals("vectorName", ReadWriteIOUtils.readVarIntString(buffer));
       assertEquals(362, ReadWriteForEncodingUtils.readUnsignedVarInt(buffer));
       assertEquals(TSDataType.VECTOR.serialize(), ReadWriteIOUtils.readByte(buffer));
       assertEquals(CompressionType.UNCOMPRESSED.serialize(), ReadWriteIOUtils.readByte(buffer));
@@ -145,7 +147,7 @@ public class VectorChunkWriterImplTest {
 
       // value chunk 1
       assertEquals(0x40 | MetaMarker.CHUNK_HEADER, ReadWriteIOUtils.readByte(buffer));
-      assertEquals("s1", ReadWriteIOUtils.readVarIntString(buffer));
+      assertEquals("vectorName.s1", ReadWriteIOUtils.readVarIntString(buffer));
       assertEquals(260, ReadWriteForEncodingUtils.readUnsignedVarInt(buffer));
       assertEquals(TSDataType.FLOAT.serialize(), ReadWriteIOUtils.readByte(buffer));
       assertEquals(CompressionType.UNCOMPRESSED.serialize(), ReadWriteIOUtils.readByte(buffer));
@@ -154,7 +156,7 @@ public class VectorChunkWriterImplTest {
 
       // value chunk 2
       assertEquals(0x40 | MetaMarker.CHUNK_HEADER, ReadWriteIOUtils.readByte(buffer));
-      assertEquals("s2", ReadWriteIOUtils.readVarIntString(buffer));
+      assertEquals("vectorName.s2", ReadWriteIOUtils.readVarIntString(buffer));
       assertEquals(140, ReadWriteForEncodingUtils.readUnsignedVarInt(buffer));
       assertEquals(TSDataType.INT32.serialize(), ReadWriteIOUtils.readByte(buffer));
       assertEquals(CompressionType.UNCOMPRESSED.serialize(), ReadWriteIOUtils.readByte(buffer));
@@ -163,7 +165,7 @@ public class VectorChunkWriterImplTest {
 
       // value chunk 3
       assertEquals(0x40 | MetaMarker.CHUNK_HEADER, ReadWriteIOUtils.readByte(buffer));
-      assertEquals("s3", ReadWriteIOUtils.readVarIntString(buffer));
+      assertEquals("vectorName.s3", ReadWriteIOUtils.readVarIntString(buffer));
       assertEquals(456, ReadWriteForEncodingUtils.readUnsignedVarInt(buffer));
       assertEquals(TSDataType.DOUBLE.serialize(), ReadWriteIOUtils.readByte(buffer));
       assertEquals(CompressionType.UNCOMPRESSED.serialize(), ReadWriteIOUtils.readByte(buffer));
diff --git a/tsfile/src/test/java/org/apache/iotdb/tsfile/write/writer/VectorMeasurementSchemaStub.java b/tsfile/src/test/java/org/apache/iotdb/tsfile/write/writer/VectorMeasurementSchemaStub.java
index d4fbef2..0307fb4 100644
--- a/tsfile/src/test/java/org/apache/iotdb/tsfile/write/writer/VectorMeasurementSchemaStub.java
+++ b/tsfile/src/test/java/org/apache/iotdb/tsfile/write/writer/VectorMeasurementSchemaStub.java
@@ -35,7 +35,7 @@ public class VectorMeasurementSchemaStub implements IMeasurementSchema {
 
   @Override
   public String getMeasurementId() {
-    return "s1.time";
+    return "vectorName";
   }
 
   @Override