Posted to notifications@shardingsphere.apache.org by wu...@apache.org on 2022/01/11 02:43:43 UTC

[shardingsphere] branch master updated: Restore sysbench documents (#14240)

This is an automated email from the ASF dual-hosted git repository.

wuweijie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/shardingsphere.git


The following commit(s) were added to refs/heads/master by this push:
     new f2f3fab  Restore sysbench documents (#14240)
f2f3fab is described below

commit f2f3fab4f111db86fe51adaca6ab34441158064f
Author: Swastika Gupta <64...@users.noreply.github.com>
AuthorDate: Tue Jan 11 08:12:56 2022 +0530

    Restore sysbench documents (#14240)
    
    * initials apache#13825
    
    * Update docs/document/content/reference/test/performance-test/sysbench-test.en.md
    
    Co-authored-by: 吴伟杰 <wu...@apache.org>
    
    * updates
    
    * Update sysbench-test.en.md
    
    * sysbench-test.en.md updated
    
    * Delete .gitpod.yml
    
    * Update sysbench-test.en.md
    
    Co-authored-by: 吴伟杰 <wu...@apache.org>
---
 docs/document/content/reference/test/_index.en.md  |    2 +
 ...{sysbench-test.cn.md => performance-test.cn.md} |    0
 ...{sysbench-test.en.md => performance-test.en.md} |    2 +-
 .../test/performance-test/sysbench-test.en.md      | 1623 ++++++++++++++++----
 4 files changed, 1341 insertions(+), 286 deletions(-)

diff --git a/docs/document/content/reference/test/_index.en.md b/docs/document/content/reference/test/_index.en.md
index 3636e02..abfc20f 100644
--- a/docs/document/content/reference/test/_index.en.md
+++ b/docs/document/content/reference/test/_index.en.md
@@ -25,3 +25,5 @@ It includes SQL parser and SQL rewriter modules.
 ## Performance Test
 
 Provide multiple performance test methods, including Sysbench, JMH, TPCC and so on.
+
+## Sysbench Test
\ No newline at end of file
diff --git a/docs/document/content/reference/test/performance-test/sysbench-test.cn.md b/docs/document/content/reference/test/performance-test/performance-test.cn.md
similarity index 100%
rename from docs/document/content/reference/test/performance-test/sysbench-test.cn.md
rename to docs/document/content/reference/test/performance-test/performance-test.cn.md
diff --git a/docs/document/content/reference/test/performance-test/sysbench-test.en.md b/docs/document/content/reference/test/performance-test/performance-test.en.md
similarity index 99%
copy from docs/document/content/reference/test/performance-test/sysbench-test.en.md
copy to docs/document/content/reference/test/performance-test/performance-test.en.md
index ec709e1..835a87d 100644
--- a/docs/document/content/reference/test/performance-test/sysbench-test.en.md
+++ b/docs/document/content/reference/test/performance-test/performance-test.en.md
@@ -1,5 +1,5 @@
 +++
-title = "Performance Test with Sysbench"
+title = "Performance Test"
 weight = 1
 +++
 
diff --git a/docs/document/content/reference/test/performance-test/sysbench-test.en.md b/docs/document/content/reference/test/performance-test/sysbench-test.en.md
index ec709e1..0502d8f 100644
--- a/docs/document/content/reference/test/performance-test/sysbench-test.en.md
+++ b/docs/document/content/reference/test/performance-test/sysbench-test.en.md
@@ -1,121 +1,469 @@
 +++
-title = "Performance Test with Sysbench"
+title = "Sysbench Test"
 weight = 1
 +++
 
-## Target
+At least 5 machines are required:
 
-The performance of ShardingSphere-JDBC, ShardingSphere-Proxy and MySQL would be compared here. INSERT & UPDATE & DELETE which regarded as a set of associated operation and SELECT which focus on sharding optimization are used to evaluate performance for the basic scenarios (single route, readwrite-splitting & encrypt & sharding, full route). While another set of associated operation, INSERT & SELECT & DELETE, is used to evaluate performance for readwrite-splitting.
-To achieve the result better, these tests are performed with jmeter which based on a certain amount of data with 20 concurrent threads for 30 minutes, and one MySQL has been deployed on one machine, while the scenario of MySQL used for comparison is deployed on one machine with one instance.
+```
+Jenkins * 1: ${host-jenkins}
+Sysbench * 1: ${host-sysbench}
+ShardingSphere-Proxy * 1: ${host-proxy}
+MySQL Server * 2: ${host-mysql-1}, ${host-mysql-2}
+```
+
+The hardware requirements for the Jenkins and Sysbench machines can be appropriately lower.
+
+#### Software Environment
+
+```
+Jenkins: The latest version
+Sysbench: 1.0.20
+ShardingSphere-Proxy: package from master branch
+MySQL Server: 5.7.28
+```
+
+## Test Plan
+
+The configuration parameters below are based on the hardware environment described above;
+adjust them if the hardware environment changes.
+
+#### ShardingSphere-Proxy Configuration
+
+```
+Proxy runs on ${host-proxy}
+Version includes: Master branch, 4.1.1, 3.0.0
+Scenarios: config-sharding, config-readwrite-splitting, config-sharding-readwrite-splitting, config-encrypt
+Configurations: Refer to Appendix 1
+```
+
+#### MySQL Server Configuration
+
+Two MySQL instances run on the `${host-mysql-1}` and `${host-mysql-2}` machines respectively.
+```
+Create the 'sbtest' database on both instances in advance.
+Set parameter: max_prepared_stmt_count = 500000
+Set parameter: max_connections = 2000
+```
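+
+One way to apply these settings from the operations host is sketched below; connecting remotely as `root` and using `SET GLOBAL` (rather than persisting the values in `my.cnf`) are assumptions.
+
+```bash
+# Sketch: prepare one MySQL server (repeat for ${host-mysql-2}); assumes root can connect remotely.
+# SET GLOBAL takes effect immediately but is lost on restart; persist the values in my.cnf if needed.
+mysql -h ${host-mysql-1} -uroot -e "CREATE DATABASE IF NOT EXISTS sbtest;"
+mysql -h ${host-mysql-1} -uroot -e "SET GLOBAL max_prepared_stmt_count = 500000; SET GLOBAL max_connections = 2000;"
+```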
+
+#### Jenkins Configuration
+
+Create 6 Jenkins tasks, each of which triggers the next in turn (task 1 runs on the `${host-jenkins}` machine):
+```
+1. sysbench_install: Pull the latest code and package the Proxy distribution archive (see the sketch below)
+```
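+
+What the sysbench_install job runs might look like the sketch below; the Maven `release` profile used to produce the Proxy tarball is an assumption.
+
+```bash
+# Sketch: pull the latest code and build the Proxy distribution.
+# Assumption: the release profile produces apache-shardingsphere-*-shardingsphere-proxy-bin.tar.gz.
+git clone --depth 1 https://github.com/apache/shardingsphere.git
+cd shardingsphere
+mvn clean install -Prelease -DskipTests
+```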
+
+The following tasks run on a separate Sysbench load-generating machine, the `${host-sysbench}` machine, via a Jenkins slave:
+```
+2. sysbench_sharding:
+   a. Remotely deploy each Proxy version with the sharding scenario
+   b. Execute the Sysbench commands to stress test the Proxy
+   c. Execute the Sysbench commands to stress test MySQL Server
+   d. Save the Sysbench stress test results
+   e. Use the drawing scripts to generate performance curves and tables (see Appendix 2 for the drawing scripts)
+3. sysbench_master_slave:
+   a. Remotely deploy each Proxy version with the read-write splitting scenario
+   b. Execute the Sysbench commands to stress test the Proxy
+   c. Execute the Sysbench commands to stress test MySQL Server
+   d. Save the Sysbench stress test results
+   e. Use the drawing scripts to generate performance curves and tables
+4. sysbench_sharding_master_slave:
+   a. Remotely deploy each Proxy version with the sharding + read-write splitting scenario
+   b. Execute the Sysbench commands to stress test the Proxy
+   c. Execute the Sysbench commands to stress test MySQL Server
+   d. Save the Sysbench stress test results
+   e. Use the drawing scripts to generate performance curves and tables
+5. sysbench_encrypt:
+   a. Remotely deploy each Proxy version with the encryption scenario
+   b. Execute the Sysbench commands to stress test the Proxy
+   c. Execute the Sysbench commands to stress test MySQL Server
+   d. Save the Sysbench stress test results
+   e. Use the drawing scripts to generate performance curves and tables
+6. sysbench_result_aggregation:
+   a. Re-execute the drawing script on the stress test results of all tasks
+      python3 plot_graph.py sharding
+      python3 plot_graph.py ms
+      python3 plot_graph.py sharding_ms
+      python3 plot_graph.py encrypt
+   b. Use the Jenkins "Publish HTML reports" plugin to integrate all images into one HTML page
+```
+
+## Testing Process
+
+Take the sysbench_sharding task as an example (the other scenarios are similar).
+
+#### Enter the Sysbench pressure test result directory
+
+```bash
+cd /home/jenkins/sysbench_res/sharding
+```
+
+#### Create the folder for this build
+
+```bash
+mkdir $BUILD_NUMBER
+```
 
-## Test Scenarios
+#### Take the last 14 builds and save them in a hidden file
 
-### Single Route
+```bash
+ls -v | tail -n14 > .build_number.txt
+```
+
+#### Deployment and stress testing
+
+Step 1: Execute the remote deployment script to deploy the Proxy to `${host-proxy}`:
+
+`./deploy_sharding.sh`
+
+```bash
+#!/bin/sh
+rm -fr apache-shardingsphere-*-shardingsphere-proxy-bin
+tar zxvf apache-shardingsphere-*-shardingsphere-proxy-bin.tar.gz
+sh stop_proxy.sh
+cp -f prepared_conf/mysql-connector-java-5.1.47.jar apache-shardingsphere-*-shardingsphere-proxy-bin/lib
+cp -f prepared_conf/start.sh apache-shardingsphere-*-shardingsphere-proxy-bin/bin
+cp -f prepared_conf/config-sharding.yaml prepared_conf/server.yaml apache-shardingsphere-*-shardingsphere-proxy-bin/conf
+./apache-shardingsphere-*-shardingsphere-proxy-bin/bin/start.sh
+sleep 30
+```
+
+Step 2: Execute the sysbench script
+
+```bash
+# master
+cd /home/jenkins/sysbench_res/sharding
+cd $BUILD_NUMBER
+sysbench oltp_read_only --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=10 --time=3600 --threads=10 --max-requests=0 --percentile=99 --mysql-ignore-errors="all" --rand-type=uniform --range_selects=off --auto_inc=off cleanup
+sysbench oltp_read_only --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=10 --time=3600 --threads=10 --max-requests=0 --percentile=99 --mysql-ignore-errors="all" --rand-type=uniform --range_selects=off --auto_inc=off prepare
+sysbench oltp_read_only        --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=30  --time=180 --threads=256 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run
+sysbench oltp_read_only        --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=30  --time=180 --threads=256 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_read_only.master.txt
+sysbench oltp_point_select     --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=30  --time=180 --threads=256 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_point_select.master.txt
+sysbench oltp_read_write       --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=30  --time=180 --threads=256 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_read_write.master.txt
+sysbench oltp_write_only       --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=30  --time=180 --threads=256 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_write_only.master.txt
+sysbench oltp_update_index     --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=30  --time=180 --threads=256 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_index.master.txt
+sysbench oltp_update_non_index --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=30  --time=180 --threads=256 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_non_index.master.txt
+sysbench oltp_delete           --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=30  --time=180 --threads=256 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_delete.master.txt
+sysbench oltp_read_only --mysql-host=${host-proxy} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=10 --time=3600 --threads=10 --max-requests=0 --percentile=99 --mysql-ignore-errors="all" --rand-type=uniform --range_selects=off --auto_inc=off cleanup
+```
+
+Repeat steps 1 and 2 above for the three remaining scenarios: Proxy 4.1.1, Proxy 3.0.0, and direct connection to MySQL (a direct-MySQL command is sketched below).
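+
+For the direct-MySQL comparison, the same workload can be pointed at the backend database instead of the Proxy. A sketch, assuming MySQL listens on port 3306 of `${host-mysql-1}` and that the result file uses the `.mysql.txt` suffix read by the drawing script:
+
+```bash
+# Sketch: identical sysbench parameters, but targeting MySQL Server directly instead of the Proxy
+# (the backend data sources above use root with an empty password, so --mysql-password is omitted).
+sysbench oltp_read_only --mysql-host=${host-mysql-1} --mysql-port=3306 --mysql-user=root --mysql-db=sbtest --tables=10 --table-size=1000000 --report-interval=30 --time=180 --threads=256 --max-requests=0 --percentile=99 --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_read_only.mysql.txt
+```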
+
+#### Execute the stop-proxy script
+
+`./stop_proxy.sh`
+
+```bash
+#!/bin/sh
+./3.0.0_sharding-proxy/bin/stop.sh 
+./4.1.1_apache-shardingsphere-4.1.1-sharding-proxy-bin/bin/stop.sh
+./apache-shardingsphere-*-shardingsphere-proxy-bin/bin/stop.sh
+```
+
+#### Generate the stress test curve graphs
+
+```bash
+# Generate graph
+cd /home/jenkins/sysbench_res/
+python3 plot_graph.py sharding
+```
+
+#### Use the Jenkins "Publish HTML reports" plugin to publish the images to an HTML page
+
+```
+HTML directory to archive: /home/jenkins/sysbench_res/graph/
+Index page[s]: 01_sharding.html
+Report title: HTML Report
+```
+
+## Sysbench Test Case Description
+
+#### oltp_point_select
+
+```
+Prepare Statement (ID = 1): SELECT c FROM sbtest1 WHERE id=?
+Execute Statement: ID = 1
+```
 
-On the basis of one thousand data volume, four databases that are deployed on the same machine and each contains 1024 tables with `id` used for database sharding and `k` used for table sharding are designed for this scenario, single route select sql statement is chosen here.
-While as a comparison, MySQL runs with INSERT & UPDATE & DELETE statement and single route select sql statement on the basis of one thousand data volume.
+#### oltp_read_only
+
+```
+Prepare Statement (ID = 1): 'COMMIT'
+Prepare Statement (ID = 2): SELECT c FROM sbtest1 WHERE id=?
+Statement: 'BEGIN'
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 1
+```
 
-### Readwrite-splitting
+#### oltp_write_only
 
-One primary database and one replica database, which are deployed on different machines, are designed for this scenario based on ten thousand data volume.
-While as a comparison, MySQL runs with INSERT & SELECT & DELETE sql statement on the basis of ten thousand data volume.
+```
+Prepare Statement (ID = 1): 'COMMIT'
+Prepare Statement (ID = 2): UPDATE sbtest1 SET k=k+1 WHERE id=?
+Prepare Statement (ID = 3): UPDATE sbtest6 SET c=? WHERE id=?
+Prepare Statement (ID = 4): DELETE FROM sbtest1 WHERE id=?
+Prepare Statement (ID = 5): INSERT INTO sbtest1 (id, k, c, pad) VALUES (?, ?, ?, ?)
+Statement: 'BEGIN'
+Execute Statement: ID = 2
+Execute Statement: ID = 3
+Execute Statement: ID = 4
+Execute Statement: ID = 5
+Execute Statement: ID = 1
+```
 
-### Readwrite-splitting & Encrypt & Sharding
+#### oltp_read_write
 
-On the basis of one thousand data volume, four databases that are deployed on different machines and each contains 1024 tables with `id` used for database sharding, `k` used for table sharding, `c` encrypted with aes and  `pad` encrypted with md5 are designed for this scenario, single route select sql statement is chosen here.
-While as a comparison, MySQL runs with INSERT & UPDATE & DELETE statement and single route select sql statement on the basis of one thousand data volume.
+```
+Prepare Statement (ID = 1): 'COMMIT'
+Prepare Statement (ID = 2): SELECT c FROM sbtest1 WHERE id=?
+Prepare Statement (ID = 3): UPDATE sbtest3 SET k=k+1 WHERE id=?
+Prepare Statement (ID = 4): UPDATE sbtest10 SET c=? WHERE id=?
+Prepare Statement (ID = 5): DELETE FROM sbtest8 WHERE id=?
+Prepare Statement (ID = 6): INSERT INTO sbtest8 (id, k, c, pad) VALUES (?, ?, ?, ?)
+Statement: 'BEGIN'
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 2
+Execute Statement: ID = 3
+Execute Statement: ID = 4
+Execute Statement: ID = 5
+Execute Statement: ID = 6
+Execute Statement: ID = 1
+```
 
-### Full Route
+#### oltp_update_index
 
-On the basis of one thousand data volume, four databases that are deployed on different machines and each contains one table are designed for this scenario, field `id` is used for database sharding and `k` is used for table sharding, full route select sql statement is chosen here.
-While as a comparison, MySQL runs with INSERT & UPDATE & DELETE statement and full route select sql statement on the basis of one thousand data volume.
+```
+Prepare Statement (ID = 1): UPDATE sbtest1 SET k=k+1 WHERE id=?
+Execute Statement: ID = 1
+```
 
-## Testing Environment
+#### oltp_update_non_index
 
-### Table Structure of Database
+```
+Prepare Statement (ID = 1): UPDATE sbtest1 SET c=? WHERE id=?
+Execute Statement: ID = 1
+```
 
-The structure of table here refer to `sbtest` in `sysbench`
+#### oltp_delete
 
-```shell
-CREATE TABLE `tbl` (
-  `id` bigint(20) NOT NULL AUTO_INCREMENT,
-  `k` int(11) NOT NULL DEFAULT 0,
-  `c` char(120) NOT NULL DEFAULT '',
-  `pad` char(60) NOT NULL DEFAULT '',
-  PRIMARY KEY (`id`)
-);
+```
+Prepare Statement (ID = 1): DELETE FROM sbtest1 WHERE id=?
+Execute Statement: ID = 1
 ```
 
-### Test Scenarios Configuration
+## Appendix 1
 
-The same configurations are used for ShardingSphere-JDBC and ShardingSphere-Proxy, while MySQL with one database connected is designed for comparision.
-The details for these scenarios are shown as follows.
+#### Master branch version
 
-#### Single Route Configuration
+server.yaml
 
 ```yaml
-schemaName: sharding_db
+rules:
+  - !AUTHORITY
+    users:
+      - root@%:root
+      - sharding@:sharding
+    provider:
+      type: ALL_PRIVILEGES_PERMITTED
+props:
+  max-connections-size-per-query: 1
+  kernel-executor-size: 16  # Infinite by default.
+  proxy-frontend-flush-threshold: 128  # The default value is 128.
+  proxy-opentracing-enabled: false
+  proxy-hint-enabled: false
+  sql-show: false
+  check-table-metadata-enabled: false
+  show-process-list-enabled: false
+  proxy-backend-query-fetch-size: -1
+  check-duplicate-table-enabled: false
+  proxy-frontend-executor-size: 0
+  proxy-backend-executor-suitable: OLAP
+  proxy-frontend-max-connections: 0
+  sql-federation-enabled: false
+```
 
+config-sharding.yaml
+
+```yaml
+schemaName: sbtest
 dataSources:
   ds_0:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
-    password: 
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password:
     connectionTimeoutMilliseconds: 30000
     idleTimeoutMilliseconds: 60000
     maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
+    maxPoolSize: 256
+    minPoolSize: 256
   ds_1:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
-    password: 
-    connectionTimeoutMilliseconds: 30000
-    idleTimeoutMilliseconds: 60000
-    maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  ds_2:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test 
-    password: 
-    connectionTimeoutMilliseconds: 30000
-    idleTimeoutMilliseconds: 60000
-    maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  ds_3:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
+    url: jdbc:mysql://${host-mysql-2}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
     password:
     connectionTimeoutMilliseconds: 30000
     idleTimeoutMilliseconds: 60000
     maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
+    maxPoolSize: 256
+    minPoolSize: 256
 rules:
 - !SHARDING
   tables:
-    tbl:
-      actualDataNodes: ds_${0..3}.tbl${0..1023}
+    sbtest1:
+      actualDataNodes: ds_${0..1}.sbtest1_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_1
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest2:
+      actualDataNodes: ds_${0..1}.sbtest2_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_2
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest3:
+      actualDataNodes: ds_${0..1}.sbtest3_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_3
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest4:
+      actualDataNodes: ds_${0..1}.sbtest4_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_4
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest5:
+      actualDataNodes: ds_${0..1}.sbtest5_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_5
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest6:
+      actualDataNodes: ds_${0..1}.sbtest6_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_6
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest7:
+      actualDataNodes: ds_${0..1}.sbtest7_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_7
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest8:
+      actualDataNodes: ds_${0..1}.sbtest8_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_8
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest9:
+      actualDataNodes: ds_${0..1}.sbtest9_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_9
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest10:
+      actualDataNodes: ds_${0..1}.sbtest10_${0..99}
       tableStrategy:
         standard:
-          shardingColumn: k
-          shardingAlgorithmName: tbl_table_inline
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_10
       keyGenerateStrategy:
-          column: id
-          keyGeneratorName: snowflake
+        column: id
+        keyGeneratorName: snowflake
   defaultDatabaseStrategy:
-    inline:
+    standard:
       shardingColumn: id
-      shardingAlgorithmName: default_db_inline
-  defaultTableStrategy:
-    none:
+      shardingAlgorithmName: database_inline
   shardingAlgorithms:
-    tbl_table_inline:
+    database_inline:
+      type: INLINE
+      props:
+        algorithm-expression: ds_${id % 2}
+    table_inline_1:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest1_${id % 100}
+    table_inline_2:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest2_${id % 100}
+    table_inline_3:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest3_${id % 100}
+    table_inline_4:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest4_${id % 100}
+    table_inline_5:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest5_${id % 100}
+    table_inline_6:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest6_${id % 100}
+    table_inline_7:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest7_${id % 100}
+    table_inline_8:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest8_${id % 100}
+    table_inline_9:
       type: INLINE
       props:
-        algorithm-expression: tbl${k % 1024}
-    default_db_inline:
+        algorithm-expression: sbtest9_${id % 100}
+    table_inline_10:
       type: INLINE
       props:
-        algorithm-expression: ds_${id % 4}
+        algorithm-expression: sbtest10_${id % 100}
   keyGenerators:
     snowflake:
       type: SNOWFLAKE
@@ -123,313 +471,1018 @@ rules:
         worker-id: 123
 ```
 
-#### Readwrite-splitting Configuration
+config-readwrite-splitting.yaml
 
 ```yaml
-schemaName: sharding_db
-
+schemaName: sbtest
 dataSources:
-  primary_ds:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
-    password:
-    connectionTimeoutMilliseconds: 30000
-    idleTimeoutMilliseconds: 60000
-    maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  replica_ds_0:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
+  ds_0:
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
     password:
     connectionTimeoutMilliseconds: 30000
     idleTimeoutMilliseconds: 60000
     maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
+    maxPoolSize: 128
+    minPoolSize: 128
 rules:
 - !READWRITE_SPLITTING
   dataSources:
     pr_ds:
-      writeDataSourceName: primary_ds
-      readDataSourceNames:
-        - replica_ds_0
+      primaryDataSourceName: ds_0
+      replicaDataSourceNames:
+        - ds_0
+        - ds_0
 ```
 
-#### Readwrite-splitting & Encrypt & Sharding Configuration
+config-sharding-readwrite-splitting.yaml
 
 ```yaml
-schemaName: sharding_db
-
+schemaName: sbtest
 dataSources:
   primary_ds_0:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
-    password:
-    connectionTimeoutMilliseconds: 30000
-    idleTimeoutMilliseconds: 60000
-    maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  replica_ds_0:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
     password:
     connectionTimeoutMilliseconds: 30000
     idleTimeoutMilliseconds: 60000
     maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
+    maxPoolSize: 256
+    minPoolSize: 256
   primary_ds_1:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
-    password:
-    connectionTimeoutMilliseconds: 30000
-    idleTimeoutMilliseconds: 60000
-    maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  replica_ds_1:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
-    password:
-    connectionTimeoutMilliseconds: 30000
-    idleTimeoutMilliseconds: 60000
-    maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  primary_ds_2:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
-    password:
-    connectionTimeoutMilliseconds: 30000
-    idleTimeoutMilliseconds: 60000
-    maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  replica_ds_2:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
-    password:
-    connectionTimeoutMilliseconds: 30000
-    idleTimeoutMilliseconds: 60000
-    maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  primary_ds_3:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
-    password:
-    connectionTimeoutMilliseconds: 30000
-    idleTimeoutMilliseconds: 60000
-    maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  replica_ds_3:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
+    url: jdbc:mysql://${host-mysql-2}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
     password:
     connectionTimeoutMilliseconds: 30000
     idleTimeoutMilliseconds: 60000
     maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
+    maxPoolSize: 256
+    minPoolSize: 256
 rules:
 - !SHARDING
   tables:
-    tbl:
-      actualDataNodes: pr_ds_${0..3}.tbl${0..1023}
-      databaseStrategy:
+    sbtest1:
+      actualDataNodes: ds_${0..1}.sbtest1_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_1
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest2:
+      actualDataNodes: ds_${0..1}.sbtest2_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_2
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest3:
+      actualDataNodes: ds_${0..1}.sbtest3_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_3
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest4:
+      actualDataNodes: ds_${0..1}.sbtest4_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_4
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest5:
+      actualDataNodes: ds_${0..1}.sbtest5_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_5
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest6:
+      actualDataNodes: ds_${0..1}.sbtest6_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_6
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest7:
+      actualDataNodes: ds_${0..1}.sbtest7_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_7
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest8:
+      actualDataNodes: ds_${0..1}.sbtest8_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_8
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest9:
+      actualDataNodes: ds_${0..1}.sbtest9_${0..99}
+      tableStrategy:
         standard:
           shardingColumn: id
-          shardingAlgorithmName: tbl_database_inline
+          shardingAlgorithmName: table_inline_9
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest10:
+      actualDataNodes: ds_${0..1}.sbtest10_${0..99}
       tableStrategy:
         standard:
-          shardingColumn: k
-          shardingAlgorithmName: tbl_table_inline
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_10
       keyGenerateStrategy:
         column: id
         keyGeneratorName: snowflake
-  bindingTables:
-    - tbl
-  defaultDataSourceName: primary_ds_1
-  defaultTableStrategy:
-    none:
+   
+  defaultDatabaseStrategy:
+    standard:
+      shardingColumn: id
+      shardingAlgorithmName: database_inline
   shardingAlgorithms:
-    tbl_database_inline:
+    database_inline:
       type: INLINE
       props:
-        algorithm-expression: pr_ds_${id % 4}
-    tbl_table_inline:
+        algorithm-expression: ds_${id % 2}
+    table_inline_1:
       type: INLINE
       props:
-        algorithm-expression: tbl${k % 1024}
-  keyGenerators:
-    snowflake:
-      type: SNOWFLAKE
+        algorithm-expression: sbtest1_${id % 100}
+    table_inline_2:
+      type: INLINE
       props:
-          worker-id: 123
-- !READWRITE_SPLITTING
-  dataSources:
-    pr_ds_0:
-      writeDataSourceName: primary_ds_0
-      readDataSourceNames:
-        - replica_ds_0
-      loadBalancerName: round_robin
-    pr_ds_1:
-      writeDataSourceName: primary_ds_1
-      readDataSourceNames:
-        - replica_ds_1
-      loadBalancerName: round_robin
-    pr_ds_2:
-      writeDataSourceName: primary_ds_2
-      readDataSourceNames:
-        - replica_ds_2
-      loadBalancerName: round_robin
-    pr_ds_3:
-      writeDataSourceName: primary_ds_3
-      readDataSourceNames:
-        - replica_ds_3
-      loadBalancerName: round_robin
-  loadBalancers:
-    round_robin:
-      type: ROUND_ROBIN
-- !ENCRYPT:
-  encryptors:
-    aes_encryptor:
-      type: AES
+        algorithm-expression: sbtest2_${id % 100}
+    table_inline_3:
+      type: INLINE
       props:
-        aes-key-value: 123456abc
-    md5_encryptor:
-      type: MD5
+        algorithm-expression: sbtest3_${id % 100}
+    table_inline_4:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest4_${id % 100}
+    table_inline_5:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest5_${id % 100}
+    table_inline_6:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest6_${id % 100}
+    table_inline_7:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest7_${id % 100}
+    table_inline_8:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest8_${id % 100}
+    table_inline_9:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest9_${id % 100}
+    table_inline_10:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest10_${id % 100}
+  keyGenerators:
+    snowflake:
+      type: SNOWFLAKE
+      props:
+        worker-id: 123
+- !READWRITE_SPLITTING
+  dataSources:
+    ds_0:
+      primaryDataSourceName: primary_ds_0
+      replicaDataSourceNames:
+        - primary_ds_0
+        - primary_ds_0
+    ds_1:
+      name: ds_1
+      primaryDataSourceName: primary_ds_1
+      replicaDataSourceNames:
+        - primary_ds_1
+        - primary_ds_1
+```
+
+config-encrypt.yaml
+
+```yaml
+schemaName: sbtest
+dataSources:
+  ds_0:
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password:
+    connectionTimeoutMilliseconds: 30000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 256
+    minPoolSize: 256
+rules:
+- !ENCRYPT
+  encryptors:
+    md5_encryptor:
+      type: MD5
   tables:
-    sbtest:
+    sbtest1:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest2:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest3:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest4:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest5:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest6:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest7:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest8:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest9:
       columns:
-        c:
-          plainColumn: c_plain
-          cipherColumn: c_cipher
-          encryptorName: aes_encryptor
         pad:
-          cipherColumn: pad_cipher
-          encryptorName: md5_encryptor    
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest10:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+```
+
+config-database-discovery.yaml
+
+```yaml
+schemaName: sbtest
+dataSources:
+  ds_0:
+    url: jdbc:postgresql://127.0.0.1:5432/demo_primary_ds
+    username: postgres
+    password: postgres
+    connectionTimeoutMilliseconds: 3000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 50
+    minPoolSize: 1
+  ds_1:
+    url: jdbc:postgresql://127.0.0.1:5432/demo_replica_ds_0
+    username: postgres
+    password: postgres
+    connectionTimeoutMilliseconds: 3000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 50
+    minPoolSize: 1
+  ds_2:
+    url: jdbc:postgresql://127.0.0.1:5432/demo_replica_ds_1
+    username: postgres
+    password: postgres
+    connectionTimeoutMilliseconds: 3000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 50
+    minPoolSize: 1
+rules:
+- !DB_DISCOVERY
+  dataSources:
+    pr_ds:
+      dataSourceNames:
+        - ds_0
+        - ds_1
+        - ds_2
+      discoveryHeartbeatName: mgr-heartbeat
+      discoveryTypeName: mgr
+  discoveryHeartbeats:
+    mgr-heartbeat:
+      props:
+        keep-alive-cron: '0/5 * * * * ?'
+  discoveryTypes:
+    mgr:
+      type: MGR
+      props:
+        group-name: 92504d5b-6dec-11e8-91ea-246e9612aaf1
 ```
 
-#### Full Route Configuration
+#### 4.1.1 version
+
+server.yaml
 
 ```yaml
-schemaName: sharding_db
+authentication:
+  users:
+    root:
+      password: root
+    sharding:
+      password: sharding
+      authorizedSchemas: sharding_db
+props:
+  max.connections.size.per.query: 10
+  acceptor.size: 256  # The default value is available processors count * 2.
+  executor.size: 128  # Infinite by default.
+  proxy.frontend.flush.threshold: 128  # The default value is 128.
+    # LOCAL: Proxy will run with LOCAL transaction.
+    # XA: Proxy will run with XA transaction.
+    # BASE: Proxy will run with B.A.S.E transaction.
+  proxy.transaction.type: LOCAL
+  proxy.opentracing.enabled: false
+  proxy.hint.enabled: false
+  query.with.cipher.column: true
+  sql.show: false
+  allow.range.query.with.inline.sharding: false
+```
+
+config-sharding.yaml
 
+```yaml
+schemaName: sbtest
 dataSources:
   ds_0:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
     password:
     connectionTimeoutMilliseconds: 30000
     idleTimeoutMilliseconds: 60000
     maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
+    maxPoolSize: 256
   ds_1:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
+    url: jdbc:mysql://${host-mysql-2}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
     password:
     connectionTimeoutMilliseconds: 30000
     idleTimeoutMilliseconds: 60000
     maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  ds_2:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
+    maxPoolSize: 256
+shardingRule:
+  tables:
+    sbtest1:
+      actualDataNodes: ds_${0..1}.sbtest1_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest1_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest2:
+      actualDataNodes: ds_${0..1}.sbtest2_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest2_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest3:
+      actualDataNodes: ds_${0..1}.sbtest3_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest3_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest4:
+      actualDataNodes: ds_${0..1}.sbtest4_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest4_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest5:
+      actualDataNodes: ds_${0..1}.sbtest5_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest5_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest6:
+      actualDataNodes: ds_${0..1}.sbtest6_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest6_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest7:
+      actualDataNodes: ds_${0..1}.sbtest7_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest7_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest8:
+      actualDataNodes: ds_${0..1}.sbtest8_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest8_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest9:
+      actualDataNodes: ds_${0..1}.sbtest9_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest9_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest10:
+      actualDataNodes: ds_${0..1}.sbtest10_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest10_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+  defaultDatabaseStrategy:
+    inline:
+      shardingColumn: id
+      algorithmExpression: ds_${id % 2}
+```
+
+config-master_slave.yaml
+
+```yaml
+schemaName: sbtest
+dataSources:
+  ds_0:
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password:
+    connectionTimeoutMilliseconds: 30000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 256
+masterSlaveRule:
+  name: ms_ds
+  masterDataSourceName: ds_0
+  slaveDataSourceNames:
+    - ds_0
+    - ds_0
+```
+
+config-sharding-master_slave.yaml
+
+```yaml
+schemaName: sbtest
+dataSources:
+  primary_ds_0:
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
     password:
     connectionTimeoutMilliseconds: 30000
     idleTimeoutMilliseconds: 60000
     maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-  ds_3:
-    url: jdbc:mysql://***.***.***.***:****/ds?serverTimezone=UTC&useSSL=false
-    username: test
+    maxPoolSize: 256
+  primary_ds_1:
+    url: jdbc:mysql://${host-mysql-2}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
     password:
     connectionTimeoutMilliseconds: 30000
     idleTimeoutMilliseconds: 60000
     maxLifetimeMilliseconds: 1800000
-    maxPoolSize: 200
-rules:
-- !SHARDING
+    maxPoolSize: 256
+shardingRule:
   tables:
-    tbl:
-      actualDataNodes: ds_${0..3}.tbl1
+    sbtest1:
+      actualDataNodes: ds_${0..1}.sbtest1_${0..99}
       tableStrategy:
-        standard:
-          shardingColumn: k
-          shardingAlgorithmName: tbl_table_inline
-      keyGenerateStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest1_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest2:
+      actualDataNodes: ds_${0..1}.sbtest2_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest2_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest3:
+      actualDataNodes: ds_${0..1}.sbtest3_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest3_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest4:
+      actualDataNodes: ds_${0..1}.sbtest4_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest4_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest5:
+      actualDataNodes: ds_${0..1}.sbtest5_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest5_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest6:
+      actualDataNodes: ds_${0..1}.sbtest6_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest6_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest7:
+      actualDataNodes: ds_${0..1}.sbtest7_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest7_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest8:
+      actualDataNodes: ds_${0..1}.sbtest8_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest8_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest9:
+      actualDataNodes: ds_${0..1}.sbtest9_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest9_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
+        column: id
+    sbtest10:
+      actualDataNodes: ds_${0..1}.sbtest10_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest10_${id % 100}
+      keyGenerator:
+        type: SNOWFLAKE
         column: id
-        keyGeneratorName: snowflake
   defaultDatabaseStrategy:
-    standard:
+    inline:
       shardingColumn: id
-      shardingAlgorithmName: default_database_inline
-  defaultTableStrategy:
-    none:  
-  shardingAlgorithms:
-    default_database_inline:
-      type: INLINE
-      props:
-        algorithm-expression: ds_${id % 4}
-    tbl_table_inline:
-      type: INLINE
-      props:
-        algorithm-expression: tbl1    
-  keyGenerators:
-    snowflake:
-      type: SNOWFLAKE
-      props:
-        worker-id: 123  
+      algorithmExpression: ds_${id % 2}
+  masterSlaveRules:
+    ds_0:
+      masterDataSourceName: primary_ds_0
+      slaveDataSourceNames: [primary_ds_0, primary_ds_0]
+      loadBalanceAlgorithmType: ROUND_ROBIN
+    ds_1:
+      masterDataSourceName: primary_ds_1
+      slaveDataSourceNames: [primary_ds_1, primary_ds_1]
+      loadBalanceAlgorithmType: ROUND_ROBIN
 ```
 
-## Test Result Verification
+config-encrypt.yaml
 
-### SQL Statement
- 
-```shell
-INSERT+UPDATE+DELETE sql statements:
-INSERT INTO tbl(k, c, pad) VALUES(1, '###-###-###', '###-###');
-UPDATE tbl SET c='####-####-####', pad='####-####' WHERE id=?;
-DELETE FROM tbl WHERE id=?
+```yaml
+schemaName: sbtest
+dataSources:
+  ds_0:
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password:
+    connectionTimeoutMilliseconds: 30000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 256
+encryptRule:
+  encryptors:
+    encryptor_md5:
+      type: md5
+  tables:
+    sbtest1:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptor: encryptor_md5
+    sbtest2:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptor: encryptor_md5
+    sbtest3:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptor: encryptor_md5
+    sbtest4:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptor: encryptor_md5
+    sbtest5:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptor: encryptor_md5
+    sbtest6:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptor: encryptor_md5
+    sbtest7:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptor: encryptor_md5
+    sbtest8:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptor: encryptor_md5
+    sbtest9:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptor: encryptor_md5
+    sbtest10:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptor: encryptor_md5
+```
 
-SELECT sql statement for full route:
-SELECT max(id) FROM tbl WHERE id%4=1
+#### 3.0.0 version
 
-SELECT sql statement for single route:
-SELECT id, k FROM tbl ignore index(`PRIMARY`) WHERE id=1 AND k=1
+server.yaml
 
-INSERT+SELECT+DELETE sql statements:
-INSERT INTO tbl1(k, c, pad) VALUES(1, '###-###-###', '###-###');
-SELECT count(id) FROM tbl1;
-SELECT max(id) FROM tbl1 ignore index(`PRIMARY`);
-DELETE FROM tbl1 WHERE id=?
+```yaml
+authentication:
+  username: root
+  password: root
+props:
+  max.connections.size.per.query: 10
+  acceptor.size: 256  # The default value is available processors count * 2.
+  executor.size: 128  # Infinite by default.
+  proxy.frontend.flush.threshold: 128  # The default value is 128.
+    # LOCAL: Proxy will run with LOCAL transaction.
+    # XA: Proxy will run with XA transaction.
+    # BASE: Proxy will run with B.A.S.E transaction.
+  proxy.transaction.type: LOCAL
+  proxy.opentracing.enabled: false
+  sql.show: false
 ```
 
-### Jmeter Class
+config-sharding.yaml
 
-Consider the implementation of [shardingsphere-benchmark](https://github.com/apache/shardingsphere-benchmark/tree/master/shardingsphere-benchmark)
-Notes: the notes in shardingsphere-benchmark/README.md should be taken attention to
+```yaml
+schemaName: sbtest
+dataSources:
+  ds_0:
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password:
+    autoCommit: true
+    connectionTimeout: 30000
+    idleTimeout: 60000
+    maxLifetime: 1800000
+    maximumPoolSize: 256
+  ds_1:
+    url: jdbc:mysql://${host-mysql-2}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password:
+    autoCommit: true
+    connectionTimeout: 30000
+    idleTimeout: 60000
+    maxLifetime: 1800000
+    maximumPoolSize: 256
+shardingRule:
+  tables:
+    sbtest1:
+      actualDataNodes: ds_${0..1}.sbtest1_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest1_${id % 100}
+    sbtest2:
+      actualDataNodes: ds_${0..1}.sbtest2_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest2_${id % 100}
+    sbtest3:
+      actualDataNodes: ds_${0..1}.sbtest3_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest3_${id % 100}
+    sbtest4:
+      actualDataNodes: ds_${0..1}.sbtest4_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest4_${id % 100}
+    sbtest5:
+      actualDataNodes: ds_${0..1}.sbtest5_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest5_${id % 100}
+    sbtest6:
+      actualDataNodes: ds_${0..1}.sbtest6_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest6_${id % 100}
+    sbtest7:
+      actualDataNodes: ds_${0..1}.sbtest7_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest7_${id % 100}
+    sbtest8:
+      actualDataNodes: ds_${0..1}.sbtest8_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest8_${id % 100}
+    sbtest9:
+      actualDataNodes: ds_${0..1}.sbtest9_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest9_${id % 100}
+    sbtest10:
+      actualDataNodes: ds_${0..1}.sbtest10_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest10_${id % 100}
+  defaultDatabaseStrategy:
+    inline:
+      shardingColumn: id
+      algorithmExpression: ds_${id % 2}
+```
 
-### Compile & Build
+config-master_slave.yaml
 
-```shell
-git clone https://github.com/apache/shardingsphere-benchmark.git
-cd shardingsphere-benchmark/shardingsphere-benchmark
-mvn clean install
+```yaml
+schemaName: sbtest
+dataSources:
+  ds_0:
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password:
+    autoCommit: true
+    connectionTimeout: 30000
+    idleTimeout: 60000
+    maxLifetime: 1800000
+    maximumPoolSize: 256
+masterSlaveRule:
+  name: ms_ds
+  masterDataSourceName: ds_0
+  slaveDataSourceNames:
+    - ds_0
+    - ds_0
 ```
 
-### Perform Test
+config-sharding-master_slave.yaml
 
-```shell
-cp target/shardingsphere-benchmark-1.0-SNAPSHOT-jar-with-dependencies.jar apache-jmeter-4.0/lib/ext
-jmeter –n –t test_plan/test.jmx
-test.jmx example:https://github.com/apache/shardingsphere-benchmark/tree/master/report/script/test_plan/test.jmx
+```yaml
+schemaName: sbtest
+dataSources:
+  primary_ds_0:
+    url: jdbc:mysql://${host-mysql-1}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password:
+    autoCommit: true
+    connectionTimeout: 30000
+    idleTimeout: 60000
+    maxLifetime: 1800000
+    maximumPoolSize: 256
+  primary_ds_1:
+    url: jdbc:mysql://${host-mysql-2}:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password:
+    autoCommit: true
+    connectionTimeout: 30000
+    idleTimeout: 60000
+    maxLifetime: 1800000
+    maximumPoolSize: 256
+shardingRule:
+  tables:
+    sbtest1:
+      actualDataNodes: ds_${0..1}.sbtest1_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest1_${id % 100}
+    sbtest2:
+      actualDataNodes: ds_${0..1}.sbtest2_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest2_${id % 100}
+    sbtest3:
+      actualDataNodes: ds_${0..1}.sbtest3_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest3_${id % 100}
+    sbtest4:
+      actualDataNodes: ds_${0..1}.sbtest4_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest4_${id % 100}
+    sbtest5:
+      actualDataNodes: ds_${0..1}.sbtest5_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest5_${id % 100}
+    sbtest6:
+      actualDataNodes: ds_${0..1}.sbtest6_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest6_${id % 100}
+    sbtest7:
+      actualDataNodes: ds_${0..1}.sbtest7_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest7_${id % 100}
+    sbtest8:
+      actualDataNodes: ds_${0..1}.sbtest8_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest8_${id % 100}
+    sbtest9:
+      actualDataNodes: ds_${0..1}.sbtest9_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest9_${id % 100}
+    sbtest10:
+      actualDataNodes: ds_${0..1}.sbtest10_${0..99}
+      tableStrategy:
+        inline:
+          shardingColumn: id
+          algorithmExpression: sbtest10_${id % 100}
+  defaultDatabaseStrategy:
+    inline:
+      shardingColumn: id
+      algorithmExpression: ds_${id % 2}
+  masterSlaveRules:
+    ds_0:
+      masterDataSourceName: primary_ds_0
+      slaveDataSourceNames: [primary_ds_0, primary_ds_0]
+      loadBalanceAlgorithmType: ROUND_ROBIN
+    ds_1:
+      masterDataSourceName: primary_ds_1
+      slaveDataSourceNames: [primary_ds_1, primary_ds_1]
+      loadBalanceAlgorithmType: ROUND_ROBIN
 ```
 
-### Process Result Data
+config-encrypt.yaml
 
-Make sure the location of result.jtl file is correct.
-```shell
-sh shardingsphere-benchmark/report/script/gen_report.sh
+```
+Unsupported
 ```
 
-### Display of Historical Performance Test Data
+## Appendix 2
 
-In progress, please wait.
-<!--
-The data of [benchmark platform](https://shardingsphere.apache.org/benchmark/#/overview) is show daily
--->
+plot_graph.py
+
+```python
+import sys
+import matplotlib.pyplot as plt
+import numpy as np
+def generate_graph(path, case_name):
+    dataset = {
+        'build_num': [],
+        'master_version': [],
+        'master_xa': [],
+        '4.1.1_version': [],
+        '3.0.0_version': [],
+        'mysql_server': []
+    }
+    with open(path + '/.build_number.txt') as builds:
+        for line in builds:
+            dataset['build_num'].append(int(line))
+    generate_data(path, case_name, dataset)
+    print(dataset)
+    fig, ax = plt.subplots()
+    ax.grid(True)
+    plt.title(case_name)
+    data = [dataset['master_version'][-7:], dataset['master_xa'][-7:], dataset['4.1.1_version'][-7:], dataset['3.0.0_version'][-7:], dataset['mysql_server'][-7:]]
+    columns = dataset['build_num'][-7:]
+    rows = ['master', 'xa', '4.1.1', '3.0.0', 'mysql']
+    rcolors = plt.cm.BuPu(np.full(len(rows), 0.1))
+    ccolors = plt.cm.BuPu(np.full(len(columns), 0.1))
+    the_table = plt.table(cellText=data, rowLabels=rows, colLabels=columns, rowColours=rcolors, colColours=ccolors,
+                          loc='bottom', bbox=[0.0, -0.50, 1, .28])
+    plt.subplots_adjust(left=0.15, bottom=0.3, right=0.98)
+    plt.xticks(range(14))
+    ax.set_xticklabels(dataset['build_num'])
+    plt.plot(dataset['master_version'], 'o-', color='magenta', label='master_version')
+    plt.plot(dataset['master_xa'], 'o-', color='darkviolet', label='master_xa')
+    plt.plot(dataset['4.1.1_version'], 'r--', color='blue', label='4.1.1_version')
+    plt.plot(dataset['3.0.0_version'], 'r--', color='orange', label='3.0.0_version')
+    plt.plot(dataset['mysql_server'], 'r--', color='lime', label='mysql_server')
+    plt.xlim()
+    plt.legend()
+    plt.xlabel('build_num')
+    plt.ylabel('transactions per second')
+    plt.savefig('graph/' + path + '/' + case_name)
+    plt.show()
+def generate_data(path, case_name, dataset):
+    for build in dataset['build_num']:
+        fill_dataset(build, case_name, dataset, path, 'master_version', '.master.txt')
+        fill_dataset(build, case_name, dataset, path, 'master_xa', '.xa.txt')
+        fill_dataset(build, case_name, dataset, path, '4.1.1_version', '.4_1_1.txt')
+        fill_dataset(build, case_name, dataset, path, '3.0.0_version', '.3_0_0.txt')
+        fill_dataset(build, case_name, dataset, path, 'mysql_server', '.mysql.txt')
+def fill_dataset(build, case_name, dataset, path, version, suffix):
+    try:
+        with open(path + '/' + str(build) + '/' + case_name + suffix) as version_master:
+            value = 0
+            for line in version_master:
+                if 'transactions:' in line:
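+                    # sysbench prints e.g. "transactions: 118920 (3963.26 per sec.)";
+                    # split on '(' and strip the trailing " per sec.)" to keep the per-second rate.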
+                    items = line.split('(')
+                    value = float(items[1][:-10])
+            dataset[version].append(value)
+    except FileNotFoundError:
+        dataset[version].append(0)
+if __name__ == '__main__':
+    path = sys.argv[1]
+    generate_graph(path, 'oltp_point_select')
+    generate_graph(path, 'oltp_read_only')
+    generate_graph(path, 'oltp_write_only')
+    generate_graph(path, 'oltp_read_write')
+    generate_graph(path, 'oltp_update_index')
+    generate_graph(path, 'oltp_update_non_index')
+    generate_graph(path, 'oltp_delete')
+```