Posted to notifications@shardingsphere.apache.org by pa...@apache.org on 2022/11/30 11:33:47 UTC

[shardingsphere-on-cloud] branch main updated: new

This is an automated email from the ASF dual-hosted git repository.

panjuan pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/shardingsphere-on-cloud.git


The following commit(s) were added to refs/heads/main by this push:
     new c3731b1  new
     new 4710ec1  Merge pull request #125 from Mike0601/main
c3731b1 is described below

commit c3731b1aa54e192a56825e66c43f3ad3a21e95db
Author: Mike0601 <40...@users.noreply.github.com>
AuthorDate: Wed Nov 30 19:23:13 2022 +0800

    new
---
 .../_index.en.md => operation-guide/_index.cn.md}  |   2 +-
 .../{quick-start => operation-guide}/_index.en.md  |   2 +-
 .../cloudformation-multi-az/_index.cn.md           | 214 +++++++++++++++
 .../cloudformation-multi-az/_index.en.md           | 218 +++++++++++++++
 docs/content/operation-guide/helm/_index.cn.md     | 248 +++++++++++++++++
 docs/content/operation-guide/helm/_index.en.md     | 249 +++++++++++++++++
 docs/content/operation-guide/operator/_index.cn.md | 303 +++++++++++++++++++++
 docs/content/operation-guide/operator/_index.en.md | 303 +++++++++++++++++++++
 .../_index.cn.md                                   |  79 ++++++
 .../_index.en.md                                   |  79 ++++++
 docs/content/overview/_index.cn.md                 |  45 +++
 docs/content/overview/_index.en.md                 |  45 +++
 docs/content/quick-start/_index.cn.md              |  10 -
 docs/static/img/operation-guide/1.PNG              | Bin 0 -> 64650 bytes
 docs/static/img/operation-guide/10.PNG             | Bin 0 -> 430736 bytes
 docs/static/img/operation-guide/11.PNG             | Bin 0 -> 111208 bytes
 docs/static/img/operation-guide/12.PNG             | Bin 0 -> 154760 bytes
 docs/static/img/operation-guide/13.PNG             | Bin 0 -> 38669 bytes
 docs/static/img/operation-guide/2.PNG              | Bin 0 -> 72038 bytes
 docs/static/img/operation-guide/3.PNG              | Bin 0 -> 113625 bytes
 docs/static/img/operation-guide/4-1.PNG            | Bin 0 -> 35767 bytes
 docs/static/img/operation-guide/4-10.PNG           | Bin 0 -> 63036 bytes
 docs/static/img/operation-guide/4-11.PNG           | Bin 0 -> 164193 bytes
 docs/static/img/operation-guide/4-12.PNG           | Bin 0 -> 205060 bytes
 docs/static/img/operation-guide/4-13.PNG           | Bin 0 -> 113288 bytes
 docs/static/img/operation-guide/4-2.PNG            | Bin 0 -> 91053 bytes
 docs/static/img/operation-guide/4-3.PNG            | Bin 0 -> 72134 bytes
 docs/static/img/operation-guide/4-4.PNG            | Bin 0 -> 41906 bytes
 docs/static/img/operation-guide/4-5.PNG            | Bin 0 -> 88535 bytes
 docs/static/img/operation-guide/4-6.PNG            | Bin 0 -> 64559 bytes
 docs/static/img/operation-guide/4-7.PNG            | Bin 0 -> 106435 bytes
 docs/static/img/operation-guide/4-8.PNG            | Bin 0 -> 49067 bytes
 docs/static/img/operation-guide/4-9.PNG            | Bin 0 -> 51534 bytes
 docs/static/img/operation-guide/4.PNG              | Bin 0 -> 124544 bytes
 docs/static/img/operation-guide/5.PNG              | Bin 0 -> 120585 bytes
 docs/static/img/operation-guide/6.PNG              | Bin 0 -> 205486 bytes
 docs/static/img/operation-guide/7.PNG              | Bin 0 -> 131816 bytes
 docs/static/img/operation-guide/8.PNG              | Bin 0 -> 131424 bytes
 docs/static/img/operation-guide/9.PNG              | Bin 0 -> 272302 bytes
 docs/static/img/overview/operator.png              | Bin 0 -> 73873 bytes
 docs/static/img/overview/terraform.png             | Bin 0 -> 98232 bytes
 41 files changed, 1785 insertions(+), 12 deletions(-)

diff --git a/docs/content/quick-start/_index.en.md b/docs/content/operation-guide/_index.cn.md
similarity index 70%
copy from docs/content/quick-start/_index.en.md
copy to docs/content/operation-guide/_index.cn.md
index a86f651..bbbc2dc 100644
--- a/docs/content/quick-start/_index.en.md
+++ b/docs/content/operation-guide/_index.cn.md
@@ -1,6 +1,6 @@
 +++
 pre = "<b>2. </b>"
-title = "Quick Start"
+title = "操作指南"
 weight = 2
 chapter = true
 +++
diff --git a/docs/content/quick-start/_index.en.md b/docs/content/operation-guide/_index.en.md
similarity index 67%
rename from docs/content/quick-start/_index.en.md
rename to docs/content/operation-guide/_index.en.md
index a86f651..290bd2c 100644
--- a/docs/content/quick-start/_index.en.md
+++ b/docs/content/operation-guide/_index.en.md
@@ -1,6 +1,6 @@
 +++
 pre = "<b>2. </b>"
-title = "Quick Start"
+title = "Operation Guide"
 weight = 2
 chapter = true
 +++
diff --git a/docs/content/operation-guide/cloudformation-multi-az/_index.cn.md b/docs/content/operation-guide/cloudformation-multi-az/_index.cn.md
new file mode 100644
index 0000000..d3d4a29
--- /dev/null
+++ b/docs/content/operation-guide/cloudformation-multi-az/_index.cn.md
@@ -0,0 +1,214 @@
++++
+pre = "<b>2.4 </b>"
+title = "CloudFormation 部署多可用区 ShardingSphere Proxy 集群"
+weight = 4
+chapter = true
++++
+
+## 背景
+
+ShardingSphere Proxy 集群作为数据基础设施重要的一部分,集群自身的高可用性尤为重要,本部分内容将介绍使用 CloudFormation 在 Amazon 上从零搭建一套满足高可用的 ShardingSphere Proxy 集群。
+
+## 目标
+
+我们将创建如下架构图的 ShardingSphere Proxy 高可用集群:
+
+![](../../../../img/overview/terraform.png)
+
+创建的 Amazon 资源如下:
+1. 每个可用区一个 ZooKeeper 实例。
+2. 每个可用区一个 Auto Scaling Group。
+3. 每个可用区一个 Launch Template, 用于给 Auto Scaling Group 启动 ShardingSphere Proxy 实例。
+4. 一个内网 Network LoadBalancer, 给应用使用。
+
+## 快速开始
+
+### 前提条件
+
+为创建 ShardingSphere Proxy 高可用集群,您需要事先准备如下资源:
+1. 一个 ssh keypair,用于远程连接 EC2 实例。
+2. 一个 VPC。
+3. 每个可用区的 subnet。
+4. 一个 SecurityGroup, 能够放行 ZooKeeper Server 使用的 2888,3888,2181 端口。
+5. 一个内网 HostedZone。
+6. 一个通用的 AMI 镜像, Amazon linux2 即可。
+7. 最好准备好 CloudFormation [配置文件](https://raw.githubusercontent.com/apache/shardingsphere-on-cloud/main/cloudformation/multi-az/cf.json)。
+
+### 步骤
+
+1. 进入 Amazon CloudFormation 服务,创建 Stacks。
+
+![](../../../../img/operation-guide/4-1.PNG)
+
+点击 `Choose File` 按钮 上传准备好的 CloudFormation 配置。
+
+![](../../../../img/operation-guide/4-2.PNG)
+
+上传好后点击 `Next` 按钮。
+
+2. 将您准备好的资源填入以下对应的相关位置。
+
+![](../../../../img/operation-guide/4-3.PNG)
+
+![](../../../../img/operation-guide/4-4.PNG)
+
+填入相应参数后,点击 `Next`  按钮。
+
+3. 按您实际情况配置 `stack` 相关参数。
+
+![](../../../../img/operation-guide/4-5.PNG)
+
+![](../../../../img/operation-guide/4-6.PNG)
+
+配置好后点击 `Next` 按钮。
+
+4. 进行配置 `Review`。
+
+![](../../../../img/operation-guide/4-7.PNG)
+
+![](../../../../img/operation-guide/4-8.PNG)
+
+![](../../../../img/operation-guide/4-9.PNG)
+
+确认好点击 `Submit` 按钮。
+
+5. 在上述操作后,将进入创建阶段。
+
+![](../../../../img/operation-guide/4-10.PNG)
+
+![](../../../../img/operation-guide/4-11.PNG)
+
+![](../../../../img/operation-guide/4-12.PNG)
+
+6. 等待一段时间,创建完成后,进入 `Outputs` 标签页,如下图。
+
+![](../../../../img/operation-guide/4-13.PNG)
+
+其中 `ssinternaldomain` 对应的值就是我们需要的域名。
+
+默认创建的内部域名为 `proxy.shardingsphere.org`,端口为 3307,用户名和密码均为 root。
+
+## 使用手册
+
+### CloudFormation 配置
+
+#### 参数列表
+
+|名称                      |描述                                                       |类型              |默认值|
+|--------------------------|-----------------------------------------------------------|-----------------|------|
+|HostedZoneId              |内网  HostedZone Id                                        |String               | |
+|HostedZoneName            |内网 HostedZone 名称                                        |String          |shardingsphere.org|
+|ImageId                   |AMI Id, 需是Amazon Linux 2 类型或者包管理是 yum 的 Linux 系列|String          | |
+|KeyName                   |SSH 密钥对                                                  |String          | |
+|VpcId                     |VPC Id                                                     |String            | |
+|Subnets                   |VPC 中的子网列表,顺序需要和按可用区字母排序的顺序一致          |CommaDelimitedList| |
+|SecurityGroupIds          |安全组列表,需要放行 ZooKeeper Server 的 2181,2888,3888 端口|CommaDelimitedList| |
+|ShardingSphereInstanceType|ShardingSphere Proxy Server 的 EC2 实例类型                 |String            | |
+|ShardingSphereJavaMemOpts |ShardingSphere Proxy Server 的 jvm 内存参数                 |String            |-Xmx512m -Xms512m -Xmn128m|
+|ShardingSpherePort        |ShardingSphere Proxy 的端口                                 |String            |3307|
+|ShardingSphereVersion     |ShardingSphere Proxy 的版本                                 |String            |5.2.1|
+|ZookeeperHeap             |Zookeeper Server 的 jvm Heap 大小,单位为 m                  |String            |512|
+|ZookeeperInstanceType     |Zookeeper Server 的 EC2 实例类型                             |String            |t2.nano|
+|ZookeeperVersion          |Zookeeper Server 版本号                                      |String            |3.7.1|
+
+#### 输出列表
+
+|名称|描述|导出名称|值|
+|----|---|--------|--|
+|ZK1|Zookeeper Server1 信息|{'Fn::Sub': '${AWS::StackName}-Zookeeper-Server-1'}|{'Fn::Join': [':', [{'Ref': 'ZK1'}, {'Fn::GetAtt': ['ZK1', 'PrivateIp']}, {'Fn::GetAtt': ['ZK1', 'AvailabilityZone']}]]}|
+|ZK2|Zookeeper Server2 信息| {'Fn::Sub': '${AWS::StackName}-Zookeeper-Server-2'} |{'Fn::Join': [':', [{'Ref': 'ZK2'}, {'Fn::GetAtt': ['ZK2', 'PrivateIp']}, {'Fn::GetAtt': ['ZK2', 'AvailabilityZone']}]]}|
+|ZK3|Zookeeper Server3 信息|{'Fn::Sub': '${AWS::StackName}-Zookeeper-Server-3'}|{'Fn::Join': [':', [{'Ref': 'ZK3'}, {'Fn::GetAtt': ['ZK3', 'PrivateIp']}, {'Fn::GetAtt': ['ZK3', 'AvailabilityZone']}]]}|
+|zoneZK1|Zookeeper Server1 内部域名|{'Fn::Sub': '${AWS::StackName}-Zookeeper-Domain-1'}| {'Ref': 'zoneZK1'}|
+|zoneZK2|Zookeeper Server2 内部域名| {'Fn::Sub': '${AWS::StackName}-Zookeeper-Domain-2'}|{'Ref': 'zoneZK2'}|
+|zoneZK3|Zookeeper Server3 内部域名|{'Fn::Sub': '${AWS::StackName}-Zookeeper-Domain-3'}| {'Ref': 'zoneZK3'}|
+|ssinternaldomain|ShardingSphere Proxy 对外使用的内部域名|{'Fn::Sub': '${AWS::StackName}-ShardingSphere-Internal-Domain'}|{'Ref': 'ssinternaldomain'}|
+
+## 运维
+
+默认使用我们提供的 CloudFormation 创建的 ZooKeeper 和 ShardingSphere Proxy 服务可以使用 Systemd 管理。
+
+### ZooKeeper
+
+#### 启动
+
+```shell
+systemctl start zookeeper
+```
+
+#### 停止
+
+```shell
+systemctl stop zookeeper
+```
+
+#### 重启
+
+```shell
+systemctl restart zookeeper
+```
+
+### ShardingSphere Proxy
+
+#### 启动
+
+```shell
+systemctl start shardingsphere
+```
+
+#### 停止
+
+```shell
+systemctl stop shardingsphere
+```
+
+#### 重启
+
+```shell
+systemctl restart shardingsphere
+```
+
+## 开发手册
+
+此 CloudFormation 涉及以下资源列表。
+
+|资源名称        |类型|
+|----------------|----|
+|ZK1             |AWS::EC2::Instance|
+|ZK2             |AWS::EC2::Instance|
+|ZK3             |AWS::EC2::Instance|
+|zoneZK1         |AWS::Route53::RecordSet|
+|zoneZK2         |AWS::Route53::RecordSet|
+|zoneZK3         |AWS::Route53::RecordSet|
+|networkiface0   |AWS::EC2::NetworkInterface|
+|networkiface1   |AWS::EC2::NetworkInterface|
+|networkiface2   |AWS::EC2::NetworkInterface|
+|launchtemplate0 |AWS::EC2::LaunchTemplate|
+|launchtemplate1 |AWS::EC2::LaunchTemplate|
+|launchtemplate2 |AWS::EC2::LaunchTemplate|
+|ssinternallb    |AWS::ElasticLoadBalancingV2::LoadBalancer|
+|sslbtg          |AWS::ElasticLoadBalancingV2::TargetGroup|
+|autoscaling0    |AWS::AutoScaling::AutoScalingGroup |
+|autoscaling1    |AWS::AutoScaling::AutoScalingGroup |
+|autoscaling2    |AWS::AutoScaling::AutoScalingGroup |
+|sslblistener    |AWS::ElasticLoadBalancingV2::Listener|
+|ssinternaldomain|AWS::Route53::RecordSet|
+
+### 依赖
+
+我们使用 [cfndsl](https://github.com/cfndsl/cfndsl) 生成 CloudFormation 配置。
+
+您需要按照 [cfndsl](https://github.com/cfndsl/cfndsl)  提供的步骤去安装。
+
+### 步骤
+
+1. 初始化 `cfndsl`,只需运行一次。
+
+```shell
+cfndsl -u 94.0.0
+```
+
+2. 修改 `cf.rb` 配置后,运行下面命令生成 CloudFormation 配置。
+
+```shell
+cfndsl cf.rb -o cf.json --pretty
+```
diff --git a/docs/content/operation-guide/cloudformation-multi-az/_index.en.md b/docs/content/operation-guide/cloudformation-multi-az/_index.en.md
new file mode 100644
index 0000000..e02e9f9
--- /dev/null
+++ b/docs/content/operation-guide/cloudformation-multi-az/_index.en.md
@@ -0,0 +1,218 @@
++++
+pre = "<b>2.4 </b>"
+title = "CloudFormation deploys ShardingSphere Proxy Cluster in Multiple AZs"
+weight = 4
+chapter = true
++++
+
+## Background
+
+As an important part of the data infrastructure, the high availability of the ShardingSphere Proxy cluster itself is particularly important. This section introduces how to use CloudFormation to build a highly available ShardingSphere Proxy cluster on Amazon from scratch.
+
+## Goal
+
+We will create a highly available ShardingSphere Proxy cluster as shown in the following architecture diagram:
+
+![](../../../../img/overview/terraform.png)
+
+The Amazon resources created are as follows:
+
+1. Each AZ has one ZooKeeper instance.
+
+2. Each AZ has an Auto Scaling Group.
+
+3. Each AZ has a Launch Template, which is used by the Auto Scaling Group to launch ShardingSphere Proxy instances.
+
+4. An intranet Network LoadBalancer for applications.
+
+## Quick Start
+
+### Prerequisites
+
+To create a highly available ShardingSphere Proxy cluster, you need to prepare the following resources in advance:
+1. An SSH key pair used to connect to EC2 instances remotely.
+2. One VPC.
+3. A subnet in each AZ.
+4. A SecurityGroup that allows ports 2888, 3888, and 2181 used by ZooKeeper Server.
+5. An intranet HostedZone.
+6. A general-purpose AMI image; Amazon Linux 2 is sufficient.
+7. Preferably, the CloudFormation [configuration file](https://raw.githubusercontent.com/apache/shardingsphere-on-cloud/main/cloudformation/multi-az/cf.json) downloaded in advance. A sketch of preparing some of these resources with the AWS CLI is shown below.
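+
+The following is only a sketch of preparing a few of these prerequisites with the AWS CLI; the key pair name, security group name, region, CIDR, and all resource ids are placeholders that you should replace with your own values:
+
+```shell
+# Create an SSH key pair and save the private key locally (placeholder name: ss-proxy-key)
+aws ec2 create-key-pair --key-name ss-proxy-key \
+  --query 'KeyMaterial' --output text > ss-proxy-key.pem
+
+# Create a security group in your VPC and allow the ZooKeeper ports 2181, 2888 and 3888
+aws ec2 create-security-group --group-name zookeeper-sg \
+  --description "ZooKeeper ports" --vpc-id vpc-0123456789abcdef0
+aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
+  --protocol tcp --port 2181 --cidr 10.0.0.0/16
+aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
+  --protocol tcp --port 2888-3888 --cidr 10.0.0.0/16
+
+# Create a private (intranet) HostedZone associated with the VPC
+aws route53 create-hosted-zone --name shardingsphere.org \
+  --vpc VPCRegion=ap-northeast-1,VPCId=vpc-0123456789abcdef0 \
+  --caller-reference "ss-proxy-$(date +%s)"
+```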
+
+### Procedure
+
+1. Enter Amazon CloudFormation service and create Stacks.
+
+![](../../../../img/operation-guide/4-1.PNG)
+
+Click the `Choose File` button to upload the prepared CloudFormation configuration.
+
+![](../../../../img/operation-guide/4-2.PNG)
+
+Click `Next` after uploading.
+
+2. Fill in the resources you have prepared in the corresponding fields below.
+
+![](../../../../img/operation-guide/4-3.PNG)
+
+![](../../../../img/operation-guide/4-4.PNG)
+
+After filling in the corresponding parameters, click `Next`.
+
+3. Configure the `stack`-related parameters according to your actual situation.
+
+![](../../../../img/operation-guide/4-5.PNG)
+
+![](../../../../img/operation-guide/4-6.PNG)
+
+Click `Next` after configuration.
+
+4. Review the configuration.
+
+![](../../../../img/operation-guide/4-7.PNG)
+
+![](../../../../img/operation-guide/4-8.PNG)
+
+![](../../../../img/operation-guide/4-9.PNG)
+
+Confirm and click `Submit`.
+
+5. After the above operations, you will enter the creation phase.
+
+![](../../../../img/operation-guide/4-10.PNG)
+
+![](../../../../img/operation-guide/4-11.PNG)
+
+![](../../../../img/operation-guide/4-12.PNG)
+
+6. Wait for a while. After creation completes, open the `Outputs` tab, as shown in the following figure.
+
+![](../../../../img/operation-guide/4-13.PNG)
+
+The value of `ssinternaldomain` is the domain name we need.
+
+The internal domain name created by default is `proxy.shardingsphere.org`, the port is 3307, and the username and password are both root.
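+
+From a host inside the VPC, you can verify connectivity with any MySQL-compatible client using those defaults; this is only a sketch and assumes the mysql client is installed:
+
+```shell
+# Connect to the ShardingSphere Proxy cluster through the internal Network Load Balancer
+mysql -h proxy.shardingsphere.org -P 3307 -uroot -proot
+```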
+
+## User Manual
+
+### CloudFormation Configuration
+
+#### Parameter List
+
+|Name                      |Description                                                       |Type              |Default Value|
+|--------------------------|-----------------------------------------------------------|-----------------|------|
+|HostedZoneId              |Intranet HostedZone Id                                        |String               | |
+|HostedZoneName            |Intranet HostedZone Name                                        |String          |shardingsphere.org|
+|ImageId                   |AMI Id; it should be Amazon Linux 2 or a Linux distribution that uses yum as its package manager|String          | |
+|KeyName                   |SSH key pair                                                  |String          | |
+|VpcId                     |VPC Id                                                     |String            | |
+|Subnets                   |List of subnets in the VPC; the order must match the alphabetical order of the AZs          |CommaDelimitedList| |
+|SecurityGroupIds          |Security group list; it must allow ports 2181, 2888, and 3888 used by ZooKeeper Server|CommaDelimitedList| |
+|ShardingSphereInstanceType|EC2 instance type of ShardingSphere Proxy Server                 |String            | |
+|ShardingSphereJavaMemOpts |jvm memory parameters of ShardingSphere Proxy Server                 |String            |-Xmx512m -Xms512m -Xmn128m|
+|ShardingSpherePort        |Port of ShardingSphere Proxy                                 |String            |3307|
+|ShardingSphereVersion     |Version of ShardingSphere Proxy                                 |String            |5.2.1|
+|ZookeeperHeap             |JVM heap size of ZooKeeper Server, in MB                  |String            |512|
+|ZookeeperInstanceType     |EC2 instance type of ZooKeeper Server                             |String            |t2.nano|
+|ZookeeperVersion          |Version number of Zookeeper Server                                      |String            |3.7.1|
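+
+As an alternative to the console steps above, the same template can be launched from the command line. The sketch below assumes `cf.json` is in the current directory and that a local `params.json` file holds the ParameterKey/ParameterValue pairs listed in the table above:
+
+```shell
+# Create the stack from the downloaded or generated template
+aws cloudformation create-stack \
+  --stack-name shardingsphere-proxy \
+  --template-body file://cf.json \
+  --parameters file://params.json
+
+# Block until stack creation completes
+aws cloudformation wait stack-create-complete --stack-name shardingsphere-proxy
+```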
+
+#### Output List
+
+|Name |Description|Export Name|Value|
+|----|---|--------|--|
+|ZK1|Zookeeper Server1 information|{'Fn::Sub': '${AWS::StackName}-Zookeeper-Server-1'}|{'Fn::Join': [':', [{'Ref': 'ZK1'}, {'Fn::GetAtt': ['ZK1', 'PrivateIp']}, {'Fn::GetAtt': ['ZK1', 'AvailabilityZone']}]]}|
+|ZK2|Zookeeper Server2 information| {'Fn::Sub': '${AWS::StackName}-Zookeeper-Server-2'} |{'Fn::Join': [':', [{'Ref': 'ZK2'}, {'Fn::GetAtt': ['ZK2', 'PrivateIp']}, {'Fn::GetAtt': ['ZK2', 'AvailabilityZone']}]]}|
+|ZK3|Zookeeper Server3 information|{'Fn::Sub': '${AWS::StackName}-Zookeeper-Server-3'}|{'Fn::Join': [':', [{'Ref': 'ZK3'}, {'Fn::GetAtt': ['ZK3', 'PrivateIp']}, {'Fn::GetAtt': ['ZK3', 'AvailabilityZone']}]]}|
+|zoneZK1|Zookeeper Server1 Internal domain name|{'Fn::Sub': '${AWS::StackName}-Zookeeper-Domain-1'}| {'Ref': 'zoneZK1'}|
+|zoneZK2|Zookeeper Server2 Internal domain name| {'Fn::Sub': '${AWS::StackName}-Zookeeper-Domain-2'}|{'Ref': 'zoneZK2'}|
+|zoneZK3|Zookeeper Server3 Internal domain name|{'Fn::Sub': '${AWS::StackName}-Zookeeper-Domain-3'}| {'Ref': 'zoneZK3'}|
+|ssinternaldomain|Internal domain name used externally by ShardingSphere Proxy|{'Fn::Sub': '${AWS::StackName}-ShardingSphere-Internal-Domain'}|{'Ref': 'ssinternaldomain'}|
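+
+The outputs above can also be read without the console; the sketch below assumes the stack name used earlier:
+
+```shell
+# Print all stack outputs, including ssinternaldomain (the proxy's internal domain name)
+aws cloudformation describe-stacks \
+  --stack-name shardingsphere-proxy \
+  --query 'Stacks[0].Outputs' --output table
+```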
+
+## Operation and Maintenance
+
+By default, the ZooKeeper and ShardingSphere Proxy services created with our CloudFormation template can be managed with systemd.
+
+### ZooKeeper
+
+#### Start
+
+```shell
+systemctl start zookeeper
+```
+
+#### Stop
+
+```shell
+systemctl stop zookeeper
+```
+
+#### Restart
+
+```shell
+systemctl restart zookeeper
+```
+
+### ShardingSphere Proxy
+
+#### Start
+
+```shell
+systemctl start shardingsphere
+```
+
+#### Stop
+
+```shell
+systemctl stop shardingsphere
+```
+
+#### Restart
+
+```shell
+systemctl restart shardingsphere
+```
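+
+Assuming the unit names above, the status and logs of both services can be checked with standard systemd tooling:
+
+```shell
+# Check service status
+systemctl status zookeeper shardingsphere
+
+# Follow the ShardingSphere Proxy service logs
+journalctl -u shardingsphere -f
+```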
+
+## Development Manual
+
+This CloudFormation template involves the following resources.
+
+|Resource Name   |Type|
+|----------------|----|
+|ZK1             |AWS::EC2::Instance|
+|ZK2             |AWS::EC2::Instance|
+|ZK3             |AWS::EC2::Instance|
+|zoneZK1         |AWS::Route53::RecordSet|
+|zoneZK2         |AWS::Route53::RecordSet|
+|zoneZK3         |AWS::Route53::RecordSet|
+|networkiface0   |AWS::EC2::NetworkInterface|
+|networkiface1   |AWS::EC2::NetworkInterface|
+|networkiface2   |AWS::EC2::NetworkInterface|
+|launchtemplate0 |AWS::EC2::LaunchTemplate|
+|launchtemplate1 |AWS::EC2::LaunchTemplate|
+|launchtemplate2 |AWS::EC2::LaunchTemplate|
+|ssinternallb    |AWS::ElasticLoadBalancingV2::LoadBalancer|
+|sslbtg          |AWS::ElasticLoadBalancingV2::TargetGroup|
+|autoscaling0    |AWS::AutoScaling::AutoScalingGroup |
+|autoscaling1    |AWS::AutoScaling::AutoScalingGroup |
+|autoscaling2    |AWS::AutoScaling::AutoScalingGroup |
+|sslblistener    |AWS::ElasticLoadBalancingV2::Listener|
+|ssinternaldomain|AWS::Route53::RecordSet|
+
+### Dependency
+
+We use [cfndsl](https://github.com/cfndsl/cfndsl) to generate CloudFormation configurations.
+
+You need to install it following the steps provided by [cfndsl](https://github.com/cfndsl/cfndsl).
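+
+cfndsl is distributed as a Ruby gem, so a typical installation (assuming Ruby and RubyGems are already available on your machine) looks like:
+
+```shell
+# Install the cfndsl gem
+gem install cfndsl
+```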
+
+### Procedure
+
+1. Initialize `cfndsl`; this only needs to be run once.
+
+```shell
+cfndsl -u 94.0.0
+```
+
+2. After modifying the `cf.rb` configuration, run the following command to generate the CloudFormation configuration.
+
+```shell
+cfndsl cf.rb -o cf.json --pretty
+```
\ No newline at end of file
diff --git a/docs/content/operation-guide/helm/_index.cn.md b/docs/content/operation-guide/helm/_index.cn.md
new file mode 100644
index 0000000..8e93df0
--- /dev/null
+++ b/docs/content/operation-guide/helm/_index.cn.md
@@ -0,0 +1,248 @@
++++
+pre = "<b>2.1 </b>"
+title = "ShardingSphere Helm Charts 简明用户手册"
+weight = 1
+chapter = true
++++
+
+## 操作步骤
+
+### 在线安装
+
+1. 添加 ShardingSphere-Proxy 到本地 Helm 仓库:
+
+```shell
+helm repo add shardingsphere https://apache.github.io/shardingsphere-on-cloud
+helm repo update
+```
+
+2. 安装 ShardingSphere-Proxy Charts:
+
+```shell
+helm install shardingsphere-proxy shardingsphere/apache-shardingsphere-proxy-charts --version 0.1.0
+```
+
+### 源码安装
+
+1. Charts 可以使用如下命令进行默认配置安装:
+
+```shell
+cd charts/apache-shardingsphere-proxy-charts/charts/governance
+helm dependency build 
+cd ../..
+helm dependency build 
+cd ..
+helm install shardingsphere-proxy apache-shardingsphere-proxy-charts
+```
+
+注意:详情请参考下方配置说明。
+
+2. 执行 `helm list` 获取所有已安装的发布版本列表。
+
+### 卸载
+
+1. 默认删除所有的发布记录,通过添加 `--keep-history` 可以进行保留。
+
+```shell
+helm uninstall shardingsphere-proxy
+```
+
+## 参数说明
+
+### 治理节点参数
+
+| Name                 | Description                                           | Value  |
+| -------------------- | ----------------------------------------------------- | ------ |
+| `governance.enabled` | Switch to enable or disable the governance helm chart | `true` |
+
+### 治理节点 ZooKeeper 参数
+
+| Name                                             | Description                                          | Value               |
+| ------------------------------------------------ | ---------------------------------------------------- | ------------------- |
+| `governance.zookeeper.enabled`                   | Switch to enable or disable the ZooKeeper helm chart | `true`              |
+| `governance.zookeeper.replicaCount`              | Number of ZooKeeper nodes                            | `1`                 |
+| `governance.zookeeper.persistence.enabled`       | Enable persistence on ZooKeeper using PVC(s)         | `false`             |
+| `governance.zookeeper.persistence.storageClass`  | Persistent Volume storage class                      | `""`                |
+| `governance.zookeeper.persistence.accessModes`   | Persistent Volume access modes                       | `["ReadWriteOnce"]` |
+| `governance.zookeeper.persistence.size`          | Persistent Volume size                               | `8Gi`               |
+| `governance.zookeeper.resources.limits`          | The resources limits for the ZooKeeper containers    | `{}`                |
+| `governance.zookeeper.resources.requests.memory` | The requested memory for the ZooKeeper containers    | `256Mi`             |
+| `governance.zookeeper.resources.requests.cpu`    | The requested cpu for the ZooKeeper containers       | `250m`              |
+
+### 计算节点 ShardingSphere-Proxy 参数
+
+| Name                                | Description                                                  | Value                         |
+| ----------------------------------- | ------------------------------------------------------------ |-------------------------------|
+| `compute.image.repository`          | Image name of ShardingSphere-Proxy.                          | `apache/shardingsphere-proxy` |
+| `compute.image.pullPolicy`          | The policy for pulling ShardingSphere-Proxy image            | `IfNotPresent`                |
+| `compute.image.tag`                 | ShardingSphere-Proxy image tag                               | `5.2.0`                       |
+| `compute.imagePullSecrets`          | Specify docker-registry secret names as an array             | `[]`                          |
+| `compute.resources.limits`          | The resources limits for the ShardingSphere-Proxy containers | `{}`                          |
+| `compute.resources.requests.memory` | The requested memory for the ShardingSphere-Proxy containers | `2Gi`                         |
+| `compute.resources.requests.cpu`    | The requested cpu for the ShardingSphere-Proxy containers    | `200m`                        |
+| `compute.replicas`                  | Number of cluster replicas                                   | `3`                           |
+| `compute.service.type`              | ShardingSphere-Proxy network mode                            | `ClusterIP`                   |
+| `compute.service.port`              | ShardingSphere-Proxy expose port                             | `3307`                        |
+| `compute.mysqlConnector.version`    | MySQL connector version                                      | `5.1.49`                      |
+| `compute.startPort`                 | ShardingSphere-Proxy start port                              | `3307`                        |
+| `compute.serverConfig`              | Server Configuration file for ShardingSphere-Proxy            | `""`                          |
+
+## 配置示例
+
+```yaml
+#
+#  Licensed to the Apache Software Foundation (ASF) under one or more
+#  contributor license agreements.  See the NOTICE file distributed with
+#  this work for additional information regarding copyright ownership.
+#  The ASF licenses this file to You under the Apache License, Version 2.0
+#  (the "License"); you may not use this file except in compliance with
+#  the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+#
+
+## @section Governance-Node parameters
+## @param governance.enabled Switch to enable or disable the governance helm chart
+##
+governance:
+  enabled: true
+  ## @section Governance-Node ZooKeeper parameters
+  zookeeper:
+    ## @param governance.zookeeper.enabled Switch to enable or disable the ZooKeeper helm chart
+    ##
+    enabled: true
+    ## @param governance.zookeeper.replicaCount Number of ZooKeeper nodes
+    ##
+    replicaCount: 1
+    ## ZooKeeper Persistence parameters
+    ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
+    ## @param governance.zookeeper.persistence.enabled Enable persistence on ZooKeeper using PVC(s)
+    ## @param governance.zookeeper.persistence.storageClass Persistent Volume storage class
+    ## @param governance.zookeeper.persistence.accessModes Persistent Volume access modes
+    ## @param governance.zookeeper.persistence.size Persistent Volume size
+    ##
+    persistence:
+      enabled: false
+      storageClass: ""
+      accessModes:
+        - ReadWriteOnce
+      size: 8Gi
+    ## ZooKeeper's resource requests and limits
+    ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
+    ## @param governance.zookeeper.resources.limits The resources limits for the ZooKeeper containers
+    ## @param governance.zookeeper.resources.requests.memory The requested memory for the ZooKeeper containers
+    ## @param governance.zookeeper.resources.requests.cpu The requested cpu for the ZooKeeper containers
+    ##
+    resources:
+      limits: {}
+      requests:
+        memory: 256Mi
+        cpu: 250m
+
+## @section Compute-Node parameters
+## 
+compute:
+  ## @section Compute-Node ShardingSphere-Proxy parameters
+  ## ref: https://kubernetes.io/docs/concepts/containers/images/
+  ## @param compute.image.repository Image name of ShardingSphere-Proxy.
+  ## @param compute.image.pullPolicy The policy for pulling ShardingSphere-Proxy image
+  ## @param compute.image.tag ShardingSphere-Proxy image tag
+  ##
+  image:
+    repository: "apache/shardingsphere-proxy"
+    pullPolicy: IfNotPresent
+    ## Overrides the image tag whose default is the chart appVersion.
+    ##
+    tag: "5.2.0"
+  ## @param compute.imagePullSecrets Specify docker-registry secret names as an array
+  ## e.g:
+  ## imagePullSecrets:
+  ##   - name: myRegistryKeySecretName
+  ##
+  imagePullSecrets: []
+  ## ShardingSphere-Proxy resource requests and limits
+  ## ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+  ## @param compute.resources.limits The resources limits for the ShardingSphere-Proxy containers
+  ## @param compute.resources.requests.memory The requested memory for the ShardingSphere-Proxy containers
+  ## @param compute.resources.requests.cpu The requested cpu for the ShardingSphere-Proxy containers
+  ##
+  resources:
+    limits: {}
+    requests:
+      memory: 2Gi
+      cpu: 200m
+  ## ShardingSphere-Proxy Deployment Configuration
+  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
+  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/
+  ## @param compute.replicas Number of cluster replicas
+  ##
+  replicas: 3
+  ## @param compute.service.type ShardingSphere-Proxy network mode
+  ## @param compute.service.port ShardingSphere-Proxy expose port
+  ##
+  service:
+    type: ClusterIP
+    port: 3307
+  ## MySQL connector Configuration
+  ## ref: https://shardingsphere.apache.org/document/current/en/quick-start/shardingsphere-proxy-quick-start/
+  ## @param compute.mysqlConnector.version MySQL connector version
+  ##
+  mysqlConnector:
+    version: "5.1.49"
+  ## @param compute.startPort ShardingSphere-Proxy start port
+  ## ShardingSphere-Proxy start port
+  ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/startup/docker/
+  ##
+  startPort: 3307
+  ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration parameters
+  ## NOTE: If you use the sub-charts to deploy Zookeeper, the server-lists field must be "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}",
+  ## otherwise please fill in the correct zookeeper address
+  ## The server.yaml is auto-generated based on this parameter.
+  ## If it is empty, the server.yaml is also empty.
+  ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/yaml-config/mode/
+  ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/common-config/builtin-algorithm/metadata-repository/
+  ##
+  serverConfig:
+    ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration authority parameters
+    ## NOTE: It is used to set up initial user to login compute node, and authority data of storage node.
+    ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/yaml-config/authentication/
+    ## @param compute.serverConfig.authority.privilege.type authority provider for storage node, the default value is ALL_PERMITTED
+    ## @param compute.serverConfig.authority.users[0].password Password for compute node.
+    ## @param compute.serverConfig.authority.users[0].user Username,authorized host for compute node. Format: <username>@<hostname> hostname is % or empty string means do not care about authorized host
+    ##
+    authority:
+      privilege:
+        type: ALL_PRIVILEGES_PERMITTED
+      users:
+      - password: root
+        user: root@%
+    ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration mode Configuration parameters
+    ## @param compute.serverConfig.mode.type Type of mode configuration. Now only support Cluster mode
+    ## @param compute.serverConfig.mode.repository.props.namespace Namespace of registry center
+    ## @param compute.serverConfig.mode.repository.props.server-lists Server lists of registry center
+    ## @param compute.serverConfig.mode.repository.props.maxRetries Max retries of client connection
+    ## @param compute.serverConfig.mode.repository.props.operationTimeoutMilliseconds Milliseconds of operation timeout
+    ## @param compute.serverConfig.mode.repository.props.retryIntervalMilliseconds Milliseconds of retry interval
+    ## @param compute.serverConfig.mode.repository.props.timeToLiveSeconds Seconds of ephemeral data live
+    ## @param compute.serverConfig.mode.repository.type Type of persist repository. Now only support ZooKeeper
+    ## @param compute.serverConfig.mode.overwrite Whether overwrite persistent configuration with local configuration
+    ##
+    mode:
+      type: Cluster
+      repository:
+        type: ZooKeeper
+        props:
+          maxRetries: 3
+          namespace: governance_ds
+          operationTimeoutMilliseconds: 5000
+          retryIntervalMilliseconds: 500
+          server-lists: "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}"
+          timeToLiveSeconds: 60
+      overwrite: true
+```
diff --git a/docs/content/operation-guide/helm/_index.en.md b/docs/content/operation-guide/helm/_index.en.md
new file mode 100644
index 0000000..958f0ab
--- /dev/null
+++ b/docs/content/operation-guide/helm/_index.en.md
@@ -0,0 +1,249 @@
++++
+pre = "<b>2.1 </b>"
+title = "ShardingSphere Helm Charts User Manual"
+weight = 1
+chapter = true
++++
+
+## Procedure
+
+### Online Installation
+
+1. Add the ShardingSphere-Proxy repository to the local Helm repositories:
+
+```shell
+helm repo add shardingsphere https://apache.github.io/shardingsphere-on-cloud
+helm repo update
+```
+
+2. Install ShardingSphere-Proxy Charts:
+
+```shell
+helm install shardingsphere-proxy shardingsphere/apache-shardingsphere-proxy-charts --version 0.1.0
+```
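+
+After installation, you can verify that the release and its pods are up; the sketch below assumes the default namespace and the release name used above:
+
+```shell
+# Check the release status and the resources created by the chart
+helm status shardingsphere-proxy
+kubectl get pods,svc
+```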
+
+### Source Code Installation
+
+1. The charts can be installed with the default configuration using the following commands:
+
+```shell
+cd charts/apache-shardingsphere-proxy-charts/charts/governance
+helm dependency build 
+cd ../..
+helm dependency build 
+cd ..
+helm install shardingsphere-proxy apache-shardingsphere-proxy-charts
+```
+
+Note: Please refer to the configuration description below for details.
+
+2. Execute `helm list` to get the list of all installed releases.
+
+### Uninstallation
+
+1. By default, all release records are deleted; add `--keep-history` to retain them.
+
+```shell
+helm uninstall shardingsphere-proxy
+```
+
+## Parameter Description
+
+### Governance Node Parameters
+
+| Name                 | Description                                           | Value  |
+| -------------------- | ----------------------------------------------------- | ------ |
+| `governance.enabled` | Switch to enable or disable the governance helm chart | `true` |
+
+### ZooKeeper Parameters of Governance Node
+
+| Name                                             | Description                                          | Value               |
+| ------------------------------------------------ | ---------------------------------------------------- | ------------------- |
+| `governance.zookeeper.enabled`                   | Switch to enable or disable the ZooKeeper helm chart | `true`              |
+| `governance.zookeeper.replicaCount`              | Number of ZooKeeper nodes                            | `1`                 |
+| `governance.zookeeper.persistence.enabled`       | Enable persistence on ZooKeeper using PVC(s)         | `false`             |
+| `governance.zookeeper.persistence.storageClass`  | Persistent Volume storage class                      | `""`                |
+| `governance.zookeeper.persistence.accessModes`   | Persistent Volume access modes                       | `["ReadWriteOnce"]` |
+| `governance.zookeeper.persistence.size`          | Persistent Volume size                               | `8Gi`               |
+| `governance.zookeeper.resources.limits`          | The resources limits for the ZooKeeper containers    | `{}`                |
+| `governance.zookeeper.resources.requests.memory` | The requested memory for the ZooKeeper containers    | `256Mi`             |
+| `governance.zookeeper.resources.requests.cpu`    | The requested cpu for the ZooKeeper containers       | `250m`              |
+
+### ShardingSphere-Proxy Parameters of Compute Node
+
+| Name                                | Description                                                  | Value                         |
+| ----------------------------------- | ------------------------------------------------------------ |-------------------------------|
+| `compute.image.repository`          | Image name of ShardingSphere-Proxy.                          | `apache/shardingsphere-proxy` |
+| `compute.image.pullPolicy`          | The policy for pulling ShardingSphere-Proxy image            | `IfNotPresent`                |
+| `compute.image.tag`                 | ShardingSphere-Proxy image tag                               | `5.2.0`                       |
+| `compute.imagePullSecrets`          | Specify docker-registry secret names as an array             | `[]`                          |
+| `compute.resources.limits`          | The resources limits for the ShardingSphere-Proxy containers | `{}`                          |
+| `compute.resources.requests.memory` | The requested memory for the ShardingSphere-Proxy containers | `2Gi`                         |
+| `compute.resources.requests.cpu`    | The requested cpu for the ShardingSphere-Proxy containers    | `200m`                        |
+| `compute.replicas`                  | Number of cluster replicas                                   | `3`                           |
+| `compute.service.type`              | ShardingSphere-Proxy network mode                            | `ClusterIP`                   |
+| `compute.service.port`              | ShardingSphere-Proxy expose port                             | `3307`                        |
+| `compute.mysqlConnector.version`    | MySQL connector version                                      | `5.1.49`                      |
+| `compute.startPort`                 | ShardingSphere-Proxy start port                              | `3307`                        |
+| `compute.serverConfig`              | Server Configuration file for ShardingSphere-Proxy            | `""`                          |
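+
+Any of the parameters above can be overridden at install time with `--set`, or collected in a custom values file; the values below are illustrative only:
+
+```shell
+# Override the replica count and expose the proxy through a NodePort service
+helm install shardingsphere-proxy shardingsphere/apache-shardingsphere-proxy-charts \
+  --version 0.1.0 \
+  --set compute.replicas=5 \
+  --set compute.service.type=NodePort
+```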
+
+## Example
+
+```yaml
+#
+#  Licensed to the Apache Software Foundation (ASF) under one or more
+#  contributor license agreements.  See the NOTICE file distributed with
+#  this work for additional information regarding copyright ownership.
+#  The ASF licenses this file to You under the Apache License, Version 2.0
+#  (the "License"); you may not use this file except in compliance with
+#  the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+#
+
+## @section Governance-Node parameters
+## @param governance.enabled Switch to enable or disable the governance helm chart
+##
+governance:
+  enabled: true
+  ## @section Governance-Node ZooKeeper parameters
+  zookeeper:
+    ## @param governance.zookeeper.enabled Switch to enable or disable the ZooKeeper helm chart
+    ##
+    enabled: true
+    ## @param governance.zookeeper.replicaCount Number of ZooKeeper nodes
+    ##
+    replicaCount: 1
+    ## ZooKeeper Persistence parameters
+    ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
+    ## @param governance.zookeeper.persistence.enabled Enable persistence on ZooKeeper using PVC(s)
+    ## @param governance.zookeeper.persistence.storageClass Persistent Volume storage class
+    ## @param governance.zookeeper.persistence.accessModes Persistent Volume access modes
+    ## @param governance.zookeeper.persistence.size Persistent Volume size
+    ##
+    persistence:
+      enabled: false
+      storageClass: ""
+      accessModes:
+        - ReadWriteOnce
+      size: 8Gi
+    ## ZooKeeper's resource requests and limits
+    ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
+    ## @param governance.zookeeper.resources.limits The resources limits for the ZooKeeper containers
+    ## @param governance.zookeeper.resources.requests.memory The requested memory for the ZooKeeper containers
+    ## @param governance.zookeeper.resources.requests.cpu The requested cpu for the ZooKeeper containers
+    ##
+    resources:
+      limits: {}
+      requests:
+        memory: 256Mi
+        cpu: 250m
+
+## @section Compute-Node parameters
+## 
+compute:
+  ## @section Compute-Node ShardingSphere-Proxy parameters
+  ## ref: https://kubernetes.io/docs/concepts/containers/images/
+  ## @param compute.image.repository Image name of ShardingSphere-Proxy.
+  ## @param compute.image.pullPolicy The policy for pulling ShardingSphere-Proxy image
+  ## @param compute.image.tag ShardingSphere-Proxy image tag
+  ##
+  image:
+    repository: "apache/shardingsphere-proxy"
+    pullPolicy: IfNotPresent
+    ## Overrides the image tag whose default is the chart appVersion.
+    ##
+    tag: "5.2.0"
+  ## @param compute.imagePullSecrets Specify docker-registry secret names as an array
+  ## e.g:
+  ## imagePullSecrets:
+  ##   - name: myRegistryKeySecretName
+  ##
+  imagePullSecrets: []
+  ## ShardingSphere-Proxy resource requests and limits
+  ## ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+  ## @param compute.resources.limits The resources limits for the ShardingSphere-Proxy containers
+  ## @param compute.resources.requests.memory The requested memory for the ShardingSphere-Proxy containers
+  ## @param compute.resources.requests.cpu The requested cpu for the ShardingSphere-Proxy containers
+  ##
+  resources:
+    limits: {}
+    requests:
+      memory: 2Gi
+      cpu: 200m
+  ## ShardingSphere-Proxy Deployment Configuration
+  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
+  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/
+  ## @param compute.replicas Number of cluster replicas
+  ##
+  replicas: 3
+  ## @param compute.service.type ShardingSphere-Proxy network mode
+  ## @param compute.service.port ShardingSphere-Proxy expose port
+  ##
+  service:
+    type: ClusterIP
+    port: 3307
+  ## MySQL connector Configuration
+  ## ref: https://shardingsphere.apache.org/document/current/en/quick-start/shardingsphere-proxy-quick-start/
+  ## @param compute.mysqlConnector.version MySQL connector version
+  ##
+  mysqlConnector:
+    version: "5.1.49"
+  ## @param compute.startPort ShardingSphere-Proxy start port
+  ## ShardingSphere-Proxy start port
+  ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/startup/docker/
+  ##
+  startPort: 3307
+  ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration parameters
+  ## NOTE: If you use the sub-charts to deploy Zookeeper, the server-lists field must be "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}",
+  ## otherwise please fill in the correct zookeeper address
+  ## The server.yaml is auto-generated based on this parameter.
+  ## If it is empty, the server.yaml is also empty.
+  ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/yaml-config/mode/
+  ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/common-config/builtin-algorithm/metadata-repository/
+  ##
+  serverConfig:
+    ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration authority parameters
+    ## NOTE: It is used to set up initial user to login compute node, and authority data of storage node.
+    ## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/yaml-config/authentication/
+    ## @param compute.serverConfig.authority.privilege.type authority provider for storage node, the default value is ALL_PERMITTED
+    ## @param compute.serverConfig.authority.users[0].password Password for compute node.
+    ## @param compute.serverConfig.authority.users[0].user Username,authorized host for compute node. Format: <username>@<hostname> hostname is % or empty string means do not care about authorized host
+    ##
+    authority:
+      privilege:
+        type: ALL_PRIVILEGES_PERMITTED
+      users:
+      - password: root
+        user: root@%
+    ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration mode Configuration parameters
+    ## @param compute.serverConfig.mode.type Type of mode configuration. Now only support Cluster mode
+    ## @param compute.serverConfig.mode.repository.props.namespace Namespace of registry center
+    ## @param compute.serverConfig.mode.repository.props.server-lists Server lists of registry center
+    ## @param compute.serverConfig.mode.repository.props.maxRetries Max retries of client connection
+    ## @param compute.serverConfig.mode.repository.props.operationTimeoutMilliseconds Milliseconds of operation timeout
+    ## @param compute.serverConfig.mode.repository.props.retryIntervalMilliseconds Milliseconds of retry interval
+    ## @param compute.serverConfig.mode.repository.props.timeToLiveSeconds Seconds of ephemeral data live
+    ## @param compute.serverConfig.mode.repository.type Type of persist repository. Now only support ZooKeeper
+    ## @param compute.serverConfig.mode.overwrite Whether overwrite persistent configuration with local configuration
+    ##
+    mode:
+      type: Cluster
+      repository:
+        type: ZooKeeper
+        props:
+          maxRetries: 3
+          namespace: governance_ds
+          operationTimeoutMilliseconds: 5000
+          retryIntervalMilliseconds: 500
+          server-lists: "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}"
+          timeToLiveSeconds: 60
+      overwrite: true
+```
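+
+To install with a customized copy of the example above, save it as a local values file (the file name here is an assumption) and pass it to Helm:
+
+```shell
+# custom-values.yaml is a local copy of the example above with your changes applied
+helm install shardingsphere-proxy shardingsphere/apache-shardingsphere-proxy-charts \
+  --version 0.1.0 -f custom-values.yaml
+```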
+
diff --git a/docs/content/operation-guide/operator/_index.cn.md b/docs/content/operation-guide/operator/_index.cn.md
new file mode 100644
index 0000000..0969d94
--- /dev/null
+++ b/docs/content/operation-guide/operator/_index.cn.md
@@ -0,0 +1,303 @@
++++
+pre = "<b>2.2 </b>"
+title = "ShardingSphere-Operator 简明用户手册"
+weight = 2
+chapter = true
++++
+
+## 安装 ShardingSphere-Operator
+
+如下配置内容和配置文件目录为:apache-shardingsphere-operator-charts/values.yaml。
+
+### 在线安装
+
+```shell
+kubectl create ns shardingsphere-operator
+helm repo add shardingsphere https://apache.github.io/shardingsphere-on-cloud
+helm repo update
+helm install shardingsphere-operator shardingsphere/apache-shardingsphere-operator-charts --version 0.1.0 -n shardingsphere-operator
+```
+
+### 源码安装
+
+```shell
+kubectl create ns shardingsphere-operator
+cd charts/apache-shardingsphere-operator-charts/
+helm dependency build
+cd ../
+helm install shardingsphere-operator apache-shardingsphere-operator-charts -n shardingsphere-operator
+```
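+
+After either installation method, a quick check that the operator is running in the namespace created above (a sketch):
+
+```shell
+# The operator is installed into the shardingsphere-operator namespace
+kubectl get pods -n shardingsphere-operator
+helm list -n shardingsphere-operator
+```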
+
+## 安装 ShardingSphere-Proxy Cluster
+
+如下配置内容和配置文件目录为:apache-shardingsphere-operator-cluster-charts/values.yaml。
+
+### 在线安装
+
+```shell
+kubectl create ns shardingsphere
+helm repo add shardingsphere https://apache.github.io/shardingsphere-on-cloud
+helm repo update
+helm install shardingsphere shardingsphere/apache-shardingsphere-operator-cluster-charts --version 0.1.0 -n shardingsphere
+```
+
+### 源码安装
+
+```shell
+kubectl create ns shardingsphere
+cd charts/apache-shardingsphere-operator-cluster-charts
+helm dependency build
+cd ../
+helm install shardingsphere apache-shardingsphere-operator-cluster-charts -n shardingsphere
+```
+
+## 在线安装 ShardingSphere-Proxy Cluster 和 ShardingSphere-Operator
+
+```shell
+helm repo add shardingsphere https://apache.github.io/shardingsphere-on-cloud
+kubectl create ns shardingsphere-operator
+helm install shardingsphere-operator shardingsphere/apache-shardingsphere-operator-charts --version 0.1.0
+kubectl create ns shardingsphere
+helm install shardingsphere shardingsphere/apache-shardingsphere-operator-cluster-charts --version 0.1.0
+```
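+
+A quick check that both the operator and the proxy cluster came up, assuming the namespaces used above (a sketch):
+
+```shell
+kubectl get pods -n shardingsphere-operator
+kubectl get pods,svc -n shardingsphere
+```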
+
+## 参数
+
+### ShardingSphere Operator 参数
+
+| Name                     | Description                                 | Value                     |
+| ------------------------ | ------------------------------------------- | ------------------------- |
+| `replicaCount`           | operator replica count                      | `2`                       |
+| `image.repository`       | operator image name                         | `sahrdingsphere-operator` |
+| `image.pullPolicy`       | image pull policy                           | `IfNotPresent`            |
+| `image.tag`              | image tag                                   | `0.0.1`                   |
+| `imagePullSecrets`       | image pull secret of private repository     | `[]`                      |
+| `resources`              | operator Resources required by the operator | `{}`                      |
+| `webhook.port`           | operator webhook boot port                  | `9443`                    |
+| `health.healthProbePort` | operator health check port                  | `8081`                    |
+
+### ShardingSphere-Proxy Cluster 参数
+
+| Name                                | Description                                                                                                                                                                                        | Value       |
+| ----------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
+| `replicaCount`                      | Number of replicas the ShardingSphere-Proxy cluster starts with. Note: after automaticScaling is enabled, this parameter no longer takes effect                                    | `3`         |
+| `proxyVersion`                      | ShardingSphere-Proxy cluster version                                                                                                                                               | `5.2.0`     |
+| `automaticScaling.enable`           | Whether auto-scaling is enabled for the ShardingSphere-Proxy cluster                                                                                                               | `false`     |
+| `automaticScaling.scaleUpWindows`   | Stabilization window for scaling up                                                                                                                                                | `30`        |
+| `automaticScaling.scaleDownWindows` | Stabilization window for scaling down                                                                                                                                              | `30`        |
+| `automaticScaling.target`           | Auto-scaling threshold as a percentage. Note: at this stage, only CPU is supported as a scaling metric                                                                             | `20`        |
+| `automaticScaling.maxInstance`      | Maximum number of scaled-out replicas                                                                                                                                              | `4`         |
+| `automaticScaling.minInstance`      | Minimum number of replicas at startup; scale-in will not go below this number                                                                                                      | `1`         |
+| `resources`                         | Resources requested by ShardingSphere-Proxy at startup; after automaticScaling is enabled, the requested resources multiplied by the target percentage trigger the scaling action | `{}`        |
+| `service.type`                      | How the ShardingSphere-Proxy service is exposed externally                                                                                                                         | `ClusterIP` |
+| `service.port`                      | Exposed port of ShardingSphere-Proxy                                                                                                                                               | `3307`      |
+| `startPort`                         | ShardingSphere-Proxy boot port                                                                                                                                                     | `3307`      |
+| `mySQLDriver.version`               | MySQL driver version for ShardingSphere-Proxy; if empty, the driver will not be downloaded                                                                                         | `5.1.47`    |
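+
+For example, auto-scaling can be turned on at install time by overriding the parameters above; the values below are illustrative only:
+
+```shell
+helm install shardingsphere shardingsphere/apache-shardingsphere-operator-cluster-charts \
+  --version 0.1.0 -n shardingsphere \
+  --set automaticScaling.enable=true \
+  --set automaticScaling.minInstance=2 \
+  --set automaticScaling.maxInstance=6 \
+  --set automaticScaling.target=50
+```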
+
+### 计算节点 ShardingSphere-Proxy ServerConfig 权限相关参数
+
+| Name                                       | Description                                                                                                                                    | Value                      |
+| ------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- |
+| `serverConfig.authority.privilege.type`    | authority provider for storage node, the default value is ALL_PERMITTED                                                                        | `ALL_PRIVILEGES_PERMITTED` |
+| `serverConfig.authority.users[0].password` | Password for compute node.                                                                                                                     | `root`                     |
+| `serverConfig.authority.users[0].user`     | Username,authorized host for compute node. Format: <username>@<hostname> hostname is % or empty string means do not care about authorized host | `root@%`                   |
+
+### 计算节点 ShardingSphere-Proxy ServerConfig Mode 相关参数
+
+| Name                                                              | Description                                                         | Value                                                                  |
+| ----------------------------------------------------------------- | ------------------------------------------------------------------- | ---------------------------------------------------------------------- |
+| `serverConfig.mode.type`                                          | Type of mode configuration. Now only support Cluster mode           | `Cluster`                                                              |
+| `serverConfig.mode.repository.props.namespace`                    | Namespace of registry center                                        | `governance_ds`                                                        |
+| `serverConfig.mode.repository.props.server-lists`                 | Server lists of registry center                                     | `{{ printf "%s-zookeeper.%s:2181" .Release.Name .Release.Namespace }}` |
+| `serverConfig.mode.repository.props.maxRetries`                   | Max retries of client connection                                    | `3`                                                                    |
+| `serverConfig.mode.repository.props.operationTimeoutMilliseconds` | Milliseconds of operation timeout                                   | `5000`                                                                 |
+| `serverConfig.mode.repository.props.retryIntervalMilliseconds`    | Milliseconds of retry interval                                      | `500`                                                                  |
+| `serverConfig.mode.repository.props.timeToLiveSeconds`            | Seconds of ephemeral data live                                      | `600`                                                                  |
+| `serverConfig.mode.repository.type`                               | Type of persist repository; currently only ZooKeeper is supported   | `ZooKeeper`                                                            |
+| `serverConfig.mode.overwrite`                                     | Whether to overwrite the persistent configuration with the local configuration | `true`                                                                 |
+| `serverConfig.props.proxy-frontend-database-protocol-type`        | Default startup protocol                                            | `MySQL`                                                                |
+
+### ZooKeeper Chart Parameters
+
+| Name                                 | Description                                          | Value               |
+| ------------------------------------ | ---------------------------------------------------- | ------------------- |
+| `zookeeper.enabled`                  | Switch to enable or disable the ZooKeeper helm chart | `true`              |
+| `zookeeper.replicaCount`             | Number of ZooKeeper nodes                            | `1`                 |
+| `zookeeper.persistence.enabled`      | Enable persistence on ZooKeeper using PVC(s)         | `false`             |
+| `zookeeper.persistence.storageClass` | Persistent Volume storage class                      | `""`                |
+| `zookeeper.persistence.accessModes`  | Persistent Volume access modes                       | `["ReadWriteOnce"]` |
+| `zookeeper.persistence.size`         | Persistent Volume size                               | `8Gi`               |
+
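+The parameters above can be overridden at install time with `--set`. A minimal sketch, assuming the release name and namespace used in the online installation (the values shown are only illustrative):
+
+```shell
+helm install shardingsphere shardingsphere/apache-shardingsphere-operator-cluster-charts \
+  --version 0.1.0 -n shardingsphere \
+  --set service.type=NodePort \
+  --set automaticScaling.enable=true
+```
+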
+## Configuration Examples
+
+apache-shardingsphere-operator-charts/values.yaml
+
+```yaml
+## @section ShardingSphere-Proxy operator parameters
+## @param replicaCount operator  replica count
+##
+replicaCount: 2
+image:
+  ## @param image.repository operator image name
+  ##
+  repository: "sahrdingsphere-operator"
+  ## @param image.pullPolicy image pull policy
+  ##
+  pullPolicy: IfNotPresent
+  ## @param image.tag image tag
+  ##
+  tag: "0.0.1"
+## @param imagePullSecrets image pull secret of private repository
+## e.g:
+## imagePullSecrets:
+##   - name: mysecret
+##
+imagePullSecrets: []
+## @param resources operator Resources required by the operator
+## e.g:
+## resources:
+##   limits:
+##     cpu: 2
+##   requests:
+##     cpu: 2
+##
+resources: {}
+## @param webhook.port operator webhook boot port
+##
+webhook:
+  port: 9443
+## @param health.healthProbePort operator health check port
+##
+health:
+  healthProbePort: 8081
+```
+
+apache-shardingsphere-operator-cluster-charts/values.yaml
+
+```yaml
+# @section ShardingSphere-Proxy cluster parameters
+## @param replicaCount ShardingSphere-Proxy cluster starts the number of replicas, Note: After you enable automaticScaling, this parameter will no longer take effect
+## @param proxyVersion ShardingSphere-Proxy cluster version
+##
+replicaCount: "3"
+proxyVersion: "5.2.0"
+## @param automaticScaling.enable ShardingSphere-Proxy Whether the ShardingSphere-Proxy cluster has auto-scaling enabled
+## @param automaticScaling.scaleUpWindows ShardingSphere-Proxy automatically scales the stable window
+## @param automaticScaling.scaleDownWindows ShardingSphere-Proxy automatically shrinks the stabilized window
+## @param automaticScaling.target ShardingSphere-Proxy auto-scaling threshold, the value is a percentage, note: at this stage, only cpu is supported as a metric for scaling
+## @param automaticScaling.maxInstance ShardingSphere-Proxy maximum number of scaled-out replicas
+## @param automaticScaling.minInstance ShardingSphere-Proxy has a minimum number of boot replicas, and the shrinkage will not be less than this number of replicas
+##
+automaticScaling:
+  enable: false
+  scaleUpWindows: 30
+  scaleDownWindows: 30
+  target: 20
+  maxInstance: 4
+  minInstance: 1
+## @param resources ShardingSphere-Proxy starts the requirement resource, and after opening automaticScaling, the resource of the request multiplied by the percentage of target is used to trigger the scaling action
+## e.g:
+## resources:
+##   limits:
+##     cpu: 2
+##   requests:
+##     cpu: 2
+##
+resources:
+  limits:
+    cpu: '2'
+  requests:
+    cpu: '1'
+## @param service.type ShardingSphere-Proxy external exposure mode
+## @param service.port ShardingSphere-Proxy exposes  port
+##
+service:
+  type: ClusterIP
+  port: 3307
+## @param startPort ShardingSphere-Proxy boot port
+##
+startPort: 3307
+## @param mySQLDriver.version ShardingSphere-Proxy The ShardingSphere-Proxy mysql driver version will not be downloaded if it is empty
+##
+mySQLDriver:
+  version: "5.1.47"
+## @section  ShardingSphere-Proxy ServerConfiguration parameters
+## NOTE: If you use the sub-charts to deploy Zookeeper, the server-lists field must be "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}",
+## otherwise please fill in the correct zookeeper address
+## The server.yaml is auto-generated based on this parameter.
+## If it is empty, the server.yaml is also empty.
+## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/yaml-config/mode/
+## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/builtin-algorithm/metadata-repository/
+##
+serverConfig:
+  ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration authority parameters
+  ## NOTE: It is used to set up initial user to login compute node, and authority data of storage node.
+  ## @param serverConfig.authority.privilege.type authority provider for storage node, the default value is ALL_PERMITTED
+  ## @param serverConfig.authority.users[0].password Password for compute node.
+  ## @param serverConfig.authority.users[0].user Username,authorized host for compute node. Format: <username>@<hostname> hostname is % or empty string means do not care about authorized host
+  ##
+  authority:
+    privilege:
+      type: ALL_PRIVILEGES_PERMITTED
+    users:
+      - password: root
+        user: root@%
+  ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration mode Configuration parameters
+  ## @param serverConfig.mode.type Type of mode configuration. Now only support Cluster mode
+  ## @param serverConfig.mode.repository.props.namespace Namespace of registry center
+  ## @param serverConfig.mode.repository.props.server-lists Server lists of registry center
+  ## @param serverConfig.mode.repository.props.maxRetries Max retries of client connection
+  ## @param serverConfig.mode.repository.props.operationTimeoutMilliseconds Milliseconds of operation timeout
+  ## @param serverConfig.mode.repository.props.retryIntervalMilliseconds Milliseconds of retry interval
+  ## @param serverConfig.mode.repository.props.timeToLiveSeconds Seconds of ephemeral data live
+  ## @param serverConfig.mode.repository.type Type of persist repository. Now only support ZooKeeper
+  ## @param serverConfig.mode.overwrite Whether overwrite persistent configuration with local configuration
+  ##
+  mode:
+    overwrite: true
+    repository:
+      props:
+        maxRetries: 3
+        namespace: governance_ds
+        operationTimeoutMilliseconds: 5000
+        retryIntervalMilliseconds: 500
+        server-lists: "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}"
+        timeToLiveSeconds: 600
+      type: ZooKeeper
+    type: Cluster
+  props:
+    proxy-frontend-database-protocol-type: MySQL
+## @section ZooKeeper chart parameters
+
+## ZooKeeper chart configuration
+## https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
+##
+zookeeper:
+  ## @param zookeeper.enabled Switch to enable or disable the ZooKeeper helm chart
+  ##
+  enabled: true
+  ## @param zookeeper.replicaCount Number of ZooKeeper nodes
+  ##
+  replicaCount: 3
+  ## ZooKeeper Persistence parameters
+  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
+  ## @param zookeeper.persistence.enabled Enable persistence on ZooKeeper using PVC(s)
+  ## @param zookeeper.persistence.storageClass Persistent Volume storage class
+  ## @param zookeeper.persistence.accessModes Persistent Volume access modes
+  ## @param zookeeper.persistence.size Persistent Volume size
+  ##
+  persistence:
+    enabled: false
+    storageClass: ""
+    accessModes:
+      - ReadWriteOnce
+    size: 8Gi
+```
+
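+A customized copy of this file can also be applied with `-f` instead of individual `--set` flags; the file name below is only an example:
+
+```shell
+helm upgrade --install shardingsphere shardingsphere/apache-shardingsphere-operator-cluster-charts \
+  --version 0.1.0 -n shardingsphere -f my-values.yaml
+```
+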
+## Cleanup
+
+```shell
+helm uninstall shardingsphere -n shardingsphere
+helm uninstall shardingsphere-operator -n shardingsphere-operator
+kubectl delete crd shardingsphereproxies.shardingsphere.apache.org shardingsphereproxyserverconfigs.shardingsphere.apache.org
+```
diff --git a/docs/content/operation-guide/operator/_index.en.md b/docs/content/operation-guide/operator/_index.en.md
new file mode 100644
index 0000000..22790f4
--- /dev/null
+++ b/docs/content/operation-guide/operator/_index.en.md
@@ -0,0 +1,303 @@
++++
+pre = "<b>2.2 </b>"
+title = "ShardingSphere-Operator User Manual"
+weight = 2
+chapter = true
++++
+
+## ShardingSphere-Operator Installation
+
+The configuration described below corresponds to the file apache-shardingsphere-operator-charts/values.yaml.
+
+### Online Installation
+
+```shell
+kubectl create ns shardingsphere-operator
+helm repo add shardingsphere https://apache.github.io/shardingsphere-on-cloud
+helm repo update
+helm install shardingsphere-operator shardingsphere/apache-shardingsphere-operator-charts --version 0.1.0 -n shardingsphere-operator
+```
+
+### Source Code Installation
+
+```shell
+kubectl create ns shardingsphere-operator
+cd charts/apache-shardingsphere-operator-charts/
+helm dependency build
+cd ../
+helm install shardingsphere-operator apache-shardingsphere-operator-charts -n shardingsphere-operator
+```
+
+## ShardingSphere-Proxy Cluster Installation
+
+The configuration described below corresponds to the file apache-shardingsphere-operator-cluster-charts/values.yaml.
+
+### Online Installation
+
+```shell
+kubectl create ns shardingsphere
+helm repo add shardingsphere https://apache.github.io/shardingsphere-on-cloud
+helm repo update
+helm install shardingsphere shardingsphere/apache-shardingsphere-operator-cluster-charts --version 0.1.0 -n shardingsphere
+```
+
+### Source Code Installation
+
+```shell
+kubectl create ns shardingsphere
+cd charts/apache-shardingsphere-operator-cluster-charts
+helm dependency build
+cd ../
+helm install shardingsphere apache-shardingsphere-operator-cluster-charts -n shardingsphere
+```
+
+## Install ShardingSphere-Operator and ShardingSphere-Proxy Cluster Online
+
+```shell
+helm repo add shardingsphere https://apache.github.io/shardingsphere-on-cloud
+kubectl create ns shardingsphere-operator
+helm install shardingsphere-operator shardingsphere/apache-shardingsphere-operator-charts --version 0.1.0 -n shardingsphere-operator
+kubectl create ns shardingsphere
+helm install shardingsphere shardingsphere/apache-shardingsphere-operator-cluster-charts --version 0.1.0 -n shardingsphere
+```
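+
+A quick way to verify both releases, using the namespaces created above:
+
+```shell
+kubectl get pods -n shardingsphere-operator
+kubectl get pods,svc -n shardingsphere
+```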
+
+## Parameters
+
+### ShardingSphere Operator Parameters
+
+| Name                     | Description                                 | Value                     |
+| ------------------------ | ------------------------------------------- | ------------------------- |
+| `replicaCount`           | operator replica count                      | `2`                       |
+| `image.repository`       | operator image name                         | `sahrdingsphere-operator` |
+| `image.pullPolicy`       | image pull policy                           | `IfNotPresent`            |
+| `image.tag`              | image tag                                   | `0.0.1`                   |
+| `imagePullSecrets`       | image pull secret of private repository     | `[]`                      |
+| `resources`              | Resources required by the operator          | `{}`                      |
+| `webhook.port`           | operator webhook boot port                  | `9443`                    |
+| `health.healthProbePort` | operator health check port                  | `8081`                    |
+
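+These defaults can be overridden when installing the operator chart, for example (the value shown is only illustrative):
+
+```shell
+helm install shardingsphere-operator shardingsphere/apache-shardingsphere-operator-charts \
+  --version 0.1.0 -n shardingsphere-operator \
+  --set replicaCount=1
+```
+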
+### ShardingSphere-Proxy Cluster Parameters
+
+| Name                                | Description                                                                                                                                                                                        | Value       |
+| ----------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
+| `replicaCount`                      | Number of replicas started by the ShardingSphere-Proxy cluster. Note: this parameter no longer takes effect once automaticScaling is enabled          | `3`         |
+| `proxyVersion`                      | ShardingSphere-Proxy cluster version                                                                                                                  | `5.2.0`     |
+| `automaticScaling.enable`           | Whether auto-scaling is enabled for the ShardingSphere-Proxy cluster                                                                                  | `false`     |
+| `automaticScaling.scaleUpWindows`   | Stabilization window for ShardingSphere-Proxy scale-up                                                                                                | `30`        |
+| `automaticScaling.scaleDownWindows` | Stabilization window for ShardingSphere-Proxy scale-down                                                                                              | `30`        |
+| `automaticScaling.target`           | ShardingSphere-Proxy auto-scaling threshold as a percentage. Note: currently only CPU is supported as a scaling metric                                | `20`        |
+| `automaticScaling.maxInstance`      | Maximum number of ShardingSphere-Proxy replicas when scaled out                                                                                       | `4`         |
+| `automaticScaling.minInstance`      | Minimum number of ShardingSphere-Proxy replicas; scale-in will not go below this number                                                               | `1`         |
+| `resources`                         | Resources requested by ShardingSphere-Proxy at startup. When automaticScaling is enabled, the request multiplied by the target percentage triggers scaling | `{}`        |
+| `service.type`                      | How the ShardingSphere-Proxy service is exposed                                                                                                       | `ClusterIP` |
+| `service.port`                      | Port exposed by ShardingSphere-Proxy                                                                                                                  | `3307`      |
+| `startPort`                         | ShardingSphere-Proxy boot port                                                                                                                        | `3307`      |
+| `mySQLDriver.version`               | MySQL driver version for ShardingSphere-Proxy; if empty, the driver will not be downloaded                                                            | `5.1.47`    |
+
+### Compute Node ShardingSphere-Proxy ServerConfig Authority Parameters
+
+| Name                                       | Description                                                                                                                                    | Value                      |
+| ------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- |
+| `serverConfig.authority.privilege.type`    | Authority provider for the storage node; the default value is ALL_PERMITTED                                                                      | `ALL_PRIVILEGES_PERMITTED` |
+| `serverConfig.authority.users[0].password` | Password for the compute node                                                                                                                    | `root`                     |
+| `serverConfig.authority.users[0].user`     | Username and authorized host for the compute node. Format: <username>@<hostname>; a hostname of % or an empty string means any host is authorized | `root@%`                   |
+
+### Compute Node ShardingSphere-Proxy ServerConfig Mode Parameters
+
+| Name                                                              | Description                                                         | Value                                                                  |
+| ----------------------------------------------------------------- | ------------------------------------------------------------------- | ---------------------------------------------------------------------- |
+| `serverConfig.mode.type`                                          | Type of mode configuration; currently only Cluster mode is supported | `Cluster`                                                              |
+| `serverConfig.mode.repository.props.namespace`                    | Namespace of registry center                                        | `governance_ds`                                                        |
+| `serverConfig.mode.repository.props.server-lists`                 | Server lists of registry center                                     | `{{ printf "%s-zookeeper.%s:2181" .Release.Name .Release.Namespace }}` |
+| `serverConfig.mode.repository.props.maxRetries`                   | Max retries of client connection                                    | `3`                                                                    |
+| `serverConfig.mode.repository.props.operationTimeoutMilliseconds` | Milliseconds of operation timeout                                   | `5000`                                                                 |
+| `serverConfig.mode.repository.props.retryIntervalMilliseconds`    | Milliseconds of retry interval                                      | `500`                                                                  |
+| `serverConfig.mode.repository.props.timeToLiveSeconds`            | Seconds of ephemeral data live                                      | `600`                                                                  |
+| `serverConfig.mode.repository.type`                               | Type of persist repository; currently only ZooKeeper is supported   | `ZooKeeper`                                                            |
+| `serverConfig.mode.overwrite`                                     | Whether to overwrite the persistent configuration with the local configuration | `true`                                                                 |
+| `serverConfig.props.proxy-frontend-database-protocol-type`        | Default startup protocol                                            | `MySQL`                                                                |
+
+### ZooKeeper Chart Parameters
+
+| Name                                 | Description                                          | Value               |
+| ------------------------------------ | ---------------------------------------------------- | ------------------- |
+| `zookeeper.enabled`                  | Switch to enable or disable the ZooKeeper helm chart | `true`              |
+| `zookeeper.replicaCount`             | Number of ZooKeeper nodes                            | `1`                 |
+| `zookeeper.persistence.enabled`      | Enable persistence on ZooKeeper using PVC(s)         | `false`             |
+| `zookeeper.persistence.storageClass` | Persistent Volume storage class                      | `""`                |
+| `zookeeper.persistence.accessModes`  | Persistent Volume access modes                       | `["ReadWriteOnce"]` |
+| `zookeeper.persistence.size`         | Persistent Volume size                               | `8Gi`               |
+
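+The parameters above can be overridden at install time with `--set`. A minimal sketch, assuming the release name and namespace used in the online installation (the values shown are only illustrative):
+
+```shell
+helm install shardingsphere shardingsphere/apache-shardingsphere-operator-cluster-charts \
+  --version 0.1.0 -n shardingsphere \
+  --set service.type=NodePort \
+  --set automaticScaling.enable=true
+```
+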
+## Examples
+
+apache-shardingsphere-operator-charts/values.yaml
+
+```yaml
+## @section ShardingSphere-Proxy operator parameters
+## @param replicaCount operator  replica count
+##
+replicaCount: 2
+image:
+  ## @param image.repository operator image name
+  ##
+  repository: "sahrdingsphere-operator"
+  ## @param image.pullPolicy image pull policy
+  ##
+  pullPolicy: IfNotPresent
+  ## @param image.tag image tag
+  ##
+  tag: "0.0.1"
+## @param imagePullSecrets image pull secret of private repository
+## e.g:
+## imagePullSecrets:
+##   - name: mysecret
+##
+imagePullSecrets: []
+## @param resources operator Resources required by the operator
+## e.g:
+## resources:
+##   limits:
+##     cpu: 2
+##   requests:
+##     cpu: 2
+##
+resources: {}
+## @param webhook.port operator webhook boot port
+##
+webhook:
+  port: 9443
+## @param health.healthProbePort operator health check port
+##
+health:
+  healthProbePort: 8081
+```
+
+apache-shardingsphere-operator-cluster-charts/values.yaml
+
+```yaml
+# @section ShardingSphere-Proxy cluster parameters
+## @param replicaCount ShardingSphere-Proxy cluster starts the number of replicas, Note: After you enable automaticScaling, this parameter will no longer take effect
+## @param proxyVersion ShardingSphere-Proxy cluster version
+##
+replicaCount: "3"
+proxyVersion: "5.2.0"
+## @param automaticScaling.enable ShardingSphere-Proxy Whether the ShardingSphere-Proxy cluster has auto-scaling enabled
+## @param automaticScaling.scaleUpWindows ShardingSphere-Proxy automatically scales the stable window
+## @param automaticScaling.scaleDownWindows ShardingSphere-Proxy automatically shrinks the stabilized window
+## @param automaticScaling.target ShardingSphere-Proxy auto-scaling threshold, the value is a percentage, note: at this stage, only cpu is supported as a metric for scaling
+## @param automaticScaling.maxInstance ShardingSphere-Proxy maximum number of scaled-out replicas
+## @param automaticScaling.minInstance ShardingSphere-Proxy has a minimum number of boot replicas, and the shrinkage will not be less than this number of replicas
+##
+automaticScaling:
+  enable: false
+  scaleUpWindows: 30
+  scaleDownWindows: 30
+  target: 20
+  maxInstance: 4
+  minInstance: 1
+## @param resources ShardingSphere-Proxy starts the requirement resource, and after opening automaticScaling, the resource of the request multiplied by the percentage of target is used to trigger the scaling action
+## e.g:
+## resources:
+##   limits:
+##     cpu: 2
+##   requests:
+##     cpu: 2
+##
+resources:
+  limits:
+    cpu: '2'
+  requests:
+    cpu: '1'
+## @param service.type ShardingSphere-Proxy external exposure mode
+## @param service.port ShardingSphere-Proxy exposes  port
+##
+service:
+  type: ClusterIP
+  port: 3307
+## @param startPort ShardingSphere-Proxy boot port
+##
+startPort: 3307
+## @param mySQLDriver.version ShardingSphere-Proxy The ShardingSphere-Proxy mysql driver version will not be downloaded if it is empty
+##
+mySQLDriver:
+  version: "5.1.47"
+## @section  ShardingSphere-Proxy ServerConfiguration parameters
+## NOTE: If you use the sub-charts to deploy Zookeeper, the server-lists field must be "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}",
+## otherwise please fill in the correct zookeeper address
+## The server.yaml is auto-generated based on this parameter.
+## If it is empty, the server.yaml is also empty.
+## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/yaml-config/mode/
+## ref: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/builtin-algorithm/metadata-repository/
+##
+serverConfig:
+  ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration authority parameters
+  ## NOTE: It is used to set up initial user to login compute node, and authority data of storage node.
+  ## @param serverConfig.authority.privilege.type authority provider for storage node, the default value is ALL_PERMITTED
+  ## @param serverConfig.authority.users[0].password Password for compute node.
+  ## @param serverConfig.authority.users[0].user Username,authorized host for compute node. Format: <username>@<hostname> hostname is % or empty string means do not care about authorized host
+  ##
+  authority:
+    privilege:
+      type: ALL_PRIVILEGES_PERMITTED
+    users:
+      - password: root
+        user: root@%
+  ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration mode Configuration parameters
+  ## @param serverConfig.mode.type Type of mode configuration. Now only support Cluster mode
+  ## @param serverConfig.mode.repository.props.namespace Namespace of registry center
+  ## @param serverConfig.mode.repository.props.server-lists Server lists of registry center
+  ## @param serverConfig.mode.repository.props.maxRetries Max retries of client connection
+  ## @param serverConfig.mode.repository.props.operationTimeoutMilliseconds Milliseconds of operation timeout
+  ## @param serverConfig.mode.repository.props.retryIntervalMilliseconds Milliseconds of retry interval
+  ## @param serverConfig.mode.repository.props.timeToLiveSeconds Seconds of ephemeral data live
+  ## @param serverConfig.mode.repository.type Type of persist repository. Now only support ZooKeeper
+  ## @param serverConfig.mode.overwrite Whether overwrite persistent configuration with local configuration
+  ##
+  mode:
+    overwrite: true
+    repository:
+      props:
+        maxRetries: 3
+        namespace: governance_ds
+        operationTimeoutMilliseconds: 5000
+        retryIntervalMilliseconds: 500
+        server-lists: "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name .Release.Namespace }}"
+        timeToLiveSeconds: 600
+      type: ZooKeeper
+    type: Cluster
+  props:
+    proxy-frontend-database-protocol-type: MySQL
+## @section ZooKeeper chart parameters
+
+## ZooKeeper chart configuration
+## https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
+##
+zookeeper:
+  ## @param zookeeper.enabled Switch to enable or disable the ZooKeeper helm chart
+  ##
+  enabled: true
+  ## @param zookeeper.replicaCount Number of ZooKeeper nodes
+  ##
+  replicaCount: 3
+  ## ZooKeeper Persistence parameters
+  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
+  ## @param zookeeper.persistence.enabled Enable persistence on ZooKeeper using PVC(s)
+  ## @param zookeeper.persistence.storageClass Persistent Volume storage class
+  ## @param zookeeper.persistence.accessModes Persistent Volume access modes
+  ## @param zookeeper.persistence.size Persistent Volume size
+  ##
+  persistence:
+    enabled: false
+    storageClass: ""
+    accessModes:
+      - ReadWriteOnce
+    size: 8Gi
+```
+
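+A customized copy of this file can also be applied with `-f` instead of individual `--set` flags; the file name below is only an example:
+
+```shell
+helm upgrade --install shardingsphere apache-shardingsphere-operator-cluster-charts \
+  -n shardingsphere -f my-values.yaml
+```
+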
+## Cleanup
+
+```shell
+helm uninstall shardingsphere -n shardingsphere
+helm uninstall shardingsphere-operator -n shardingsphere-operator
+kubectl delete crd shardingsphereproxies.shardingsphere.apache.org shardingsphereproxyserverconfigs.shardingsphere.apache.org
+```
diff --git a/docs/content/operation-guide/using-cloudformation-to-start-proxy/_index.cn.md b/docs/content/operation-guide/using-cloudformation-to-start-proxy/_index.cn.md
new file mode 100644
index 0000000..0167031
--- /dev/null
+++ b/docs/content/operation-guide/using-cloudformation-to-start-proxy/_index.cn.md
@@ -0,0 +1,79 @@
++++
+pre = "<b>2.3 </b>"
+title = "利用 CloudFormation 启动 ShardingSphere Proxy"
+weight = 3
+chapter = true
++++
+
+AWS CloudFormation is a simple tool for configuring and launching environments and infrastructure as code. The AWS CloudFormation Stack template helps you quickly launch Apache ShardingSphere on AWS.
+
+## Prerequisites
+
+Before starting, confirm the following checklist:
+
+- [ ] The selected region is ap-north-1 (Beijing); the AMI and related components for Apache ShardingSphere Proxy are currently only available in the ap-north-1 region
+- [ ] An existing VPC in which to deploy Apache ShardingSphere Proxy
+- [ ] A planned CIDR block and corresponding subnets within that VPC
+- [ ] A security group configuration that allows application access to the database (e.g. port 3307) and control traffic (e.g. port 22)
+- [ ] A key pair that can be used to access the instance resources
+- [ ] Tags designed for the resources involved in this CloudFormation Stack
+
+## Start the ShardingSphere Proxy Cluster
+
+### 1. Create a CloudFormation stack with new resources
+
+As shown in the figures below:
+
+![](../../../../img/operation-guide/1.PNG)
+
+![](../../../../img/operation-guide/2.PNG)
+
+### 2. Upload the template file from this repository
+
+Upload the local file `cloudformation/apache-shardingsphere-5.2.0.json` to CloudFormation, then click `Next`.
+
+![](../../../../img/operation-guide/3.PNG)
+
+![](../../../../img/operation-guide/4.PNG)
+
+### 3. Specify CloudFormation stack details
+
+Fill in the blank fields on this page; the required items were prepared in the prerequisites.
+
+![](../../../../img/operation-guide/5.PNG)
+
+### 4. Configure stack options
+
+Add tags to the stack to help with later cost analysis.
+
+![](../../../../img/operation-guide/6.PNG)
+
+### 5. Review and confirm the configuration
+
+Review the configuration and confirm that everything meets expectations before submitting.
+
+![](../../../../img/operation-guide/7.PNG)
+
+### 6. Check the EC2 instances
+
+After a few minutes, the EC2 instances are up and running.
+
+![](../../../../img/operation-guide/8.PNG)
+
+### 7. Check the status of ShardingSphere Proxy and ZooKeeper
+
+Use `systemctl status shardingsphere` and `./bin/zkServer.sh status` to check the running status of the components.
+
+![](../../../../img/operation-guide/9.PNG)
+
+![](../../../../img/operation-guide/10.PNG)
+
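+The commands used in this step, collected in one place (run on the EC2 instance; `zkServer.sh` is executed from the ZooKeeper installation directory, and the port check is optional and assumes the default Proxy port 3307):
+
+```shell
+systemctl status shardingsphere
+./bin/zkServer.sh status
+ss -lntp | grep 3307
+```
+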
+### 8. Test a simple sharding example
+
+Create the database `sharding_db` and register two independent database instances as `resources`. Then create the logical table `t_order` and insert two rows of data. Check the results below:
+
+![](../../../../img/operation-guide/11.PNG)
+
+![](../../../../img/operation-guide/12.PNG)
+
+![](../../../../img/operation-guide/13.PNG)
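+
+These operations can also be reproduced from a client machine. The sketch below is illustrative only: it assumes the Proxy is reachable on port 3307 with the default `root`/`root` account, that `sharding_db` with its registered resources and the `t_order` sharding rule are already configured as described above, and that the table schema shown is hypothetical:
+
+```shell
+# Endpoint, credentials and table schema are assumptions; adjust them to your stack.
+mysql -h <proxy-ec2-ip> -P 3307 -u root -proot -D sharding_db -e "
+CREATE TABLE t_order (order_id BIGINT NOT NULL, user_id INT NOT NULL, status VARCHAR(50), PRIMARY KEY (order_id));
+INSERT INTO t_order (order_id, user_id, status) VALUES (1, 1, 'OK'), (2, 2, 'OK');
+SELECT * FROM t_order;"
+```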
diff --git a/docs/content/operation-guide/using-cloudformation-to-start-proxy/_index.en.md b/docs/content/operation-guide/using-cloudformation-to-start-proxy/_index.en.md
new file mode 100644
index 0000000..2b15781
--- /dev/null
+++ b/docs/content/operation-guide/using-cloudformation-to-start-proxy/_index.en.md
@@ -0,0 +1,79 @@
++++
+pre = "<b>2.3 </b>"
+title = "Start ShardingSphere Proxy with CloudFormation "
+weight = 3
+chapter = true
++++
+
+AWS CloudFormation is a simple tool for configuring and launching environments and infrastructure as code. The AWS CloudFormation Stack template helps you quickly launch Apache ShardingSphere on AWS.
+
+## Prerequisites
+
+Before starting, confirm the following checklist:
+
+- [ ] The selected region is ap-north-1 (Beijing); the AMI and related components for Apache ShardingSphere Proxy are currently only available in the ap-north-1 region
+- [ ] An existing VPC in which to deploy Apache ShardingSphere Proxy
+- [ ] A planned CIDR block and corresponding subnets within that VPC
+- [ ] A security group configuration that allows application access to the database (e.g. port 3307) and control traffic (e.g. port 22)
+- [ ] A key pair that can be used to access the instance resources
+- [ ] Tags designed for the resources involved in this CloudFormation Stack
+
+## Start the ShardingSphere Proxy Cluster
+
+### 1. Create a CloudFormation stack with new resources
+
+As shown in the figures below:
+
+![](../../../../img/operation-guide/1.PNG)
+
+![](../../../../img/operation-guide/2.PNG)
+
+### 2. Upload the template file from this repository
+
+Upload the local file `cloudformation/apache-shardingsphere-5.2.0.json` to CloudFormation, then click `Next`.
+
+![](../../../../img/operation-guide/3.PNG)
+
+![](../../../../img/operation-guide/4.PNG)
+
+### 3. Specify CloudFormation stack details
+
+Fill in the blank fields on this page; the required items were prepared in the prerequisites.
+
+![](../../../../img/operation-guide/5.PNG)
+
+### 4. Configure stack options
+
+Add tags to the stack to help with later cost analysis.
+
+![](../../../../img/operation-guide/6.PNG)
+
+### 5. Review and confirm configuration
+
+Review the configuration and confirm that everything meets expectations before submitting.
+
+![](../../../../img/operation-guide/7.PNG)
+
+### 6. Check EC2 instances
+
+After a few minutes, the EC2 instances are up and running.
+
+![](../../../../img/operation-guide/8.PNG)
+
+### 7. Check the status of ShardingSphere Proxy and ZooKeeper
+
+Use `systemctl status shardingsphere` and `./bin/zkServer.sh status` to check the running status of components.
+
+![](../../../../img/operation-guide/9.PNG)
+
+![](../../../../img/operation-guide/10.PNG)
+
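+The commands used in this step, collected in one place (run on the EC2 instance; `zkServer.sh` is executed from the ZooKeeper installation directory, and the port check is optional and assumes the default Proxy port 3307):
+
+```shell
+systemctl status shardingsphere
+./bin/zkServer.sh status
+ss -lntp | grep 3307
+```
+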
+### 8. Test a simple sharding example
+
+Create the database `sharding_db` and register two independent database instances as `resources`. Then create the logical table `t_order` and insert two rows of data. Check the results below:
+
+![](../../../../img/operation-guide/11.PNG)
+
+![](../../../../img/operation-guide/12.PNG)
+
+![](../../../../img/operation-guide/13.PNG)
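+
+These operations can also be reproduced from a client machine. The sketch below is illustrative only: it assumes the Proxy is reachable on port 3307 with the default `root`/`root` account, that `sharding_db` with its registered resources and the `t_order` sharding rule are already configured as described above, and that the table schema shown is hypothetical:
+
+```shell
+# Endpoint, credentials and table schema are assumptions; adjust them to your stack.
+mysql -h <proxy-ec2-ip> -P 3307 -u root -proot -D sharding_db -e "
+CREATE TABLE t_order (order_id BIGINT NOT NULL, user_id INT NOT NULL, status VARCHAR(50), PRIMARY KEY (order_id));
+INSERT INTO t_order (order_id, user_id, status) VALUES (1, 1, 'OK'), (2, 2, 'OK');
+SELECT * FROM t_order;"
+```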
diff --git a/docs/content/overview/_index.cn.md b/docs/content/overview/_index.cn.md
new file mode 100644
index 0000000..6276a3f
--- /dev/null
+++ b/docs/content/overview/_index.cn.md
@@ -0,0 +1,45 @@
++++
+pre = "<b>1. </b>"
+title = "概览"
+weight = 1
+chapter = true
++++
+
+## What is ShardingSphere-on-Cloud?
+
+The ShardingSphere-on-Cloud project is a collection of cloud solutions for Apache ShardingSphere. It includes automated deployment scripts for virtual machines on AWS, GCP, Alibaba Cloud and other cloud environments, such as CloudFormation Stack templates and one-click Terraform deployment scripts; tools for Kubernetes cloud-native environments, such as Helm Charts, an Operator and automatic horizontal scaling; and practices covering high availability, observability, security, compliance and more.
+
+## Core Concepts
+
+The terms used in this repository come from common cloud service providers and open-source projects, and the related concepts and definitions are consistent with them.
+
+- CloudFormation: a tool provided by AWS that helps you quickly create cloud resources.
+- CloudFormation Stack: a collection of AWS resources that can be managed as a single unit.
+- Terraform: an open-source infrastructure management tool. Following the idea of "infrastructure as code", it lets you efficiently build, change and version infrastructure.
+- Kubernetes: an open-source container orchestration platform that automatically deploys, manages and scales containerized applications.
+- Operator: a software extension to Kubernetes that uses [custom resources](https://kubernetes.io/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/) to manage applications and their components. Operators follow Kubernetes principles, notably the [controller](https://kubernetes.io/zh-cn/docs/concepts/architecture/controller/) pattern.
+- Helm Charts: Helm is the package manager for Kubernetes applications; a Chart is a collection of files that describes a related set of Kubernetes resources.
+
+## Architecture
+
+- ShardingSphere-Operator diagram
+
+![Operator diagram](../../../../img/overview/operator.png)
+
+- ShardingSphere-Terraform diagram
+
+![Terraform diagram](../../../../img/overview/terraform.png)
+
+## Features
+
+- One-click deployment of ShardingSphere Proxy in a Kubernetes environment based on Helm Charts
+- One-click deployment and automated operation and maintenance of ShardingSphere Proxy in a Kubernetes environment based on the Operator
+- Rapid deployment of ShardingSphere Proxy based on AWS CloudFormation
+- Rapid deployment of ShardingSphere Proxy in an AWS environment based on Terraform
+
+## Application Scenarios
+
+The deployment options provided by ShardingSphere-on-Cloud fit the following scenarios:
+
+1. If you want to quickly understand, verify or use the features of ShardingSphere Proxy and do not have a Kubernetes environment, you can deploy on demand with AWS CloudFormation or Terraform.
+2. If you want to deploy in a Kubernetes environment, you can try the Operator we provide, or skip the Operator and deploy native ShardingSphere Proxy directly through Helm Charts.
diff --git a/docs/content/overview/_index.en.md b/docs/content/overview/_index.en.md
new file mode 100644
index 0000000..ab208b5
--- /dev/null
+++ b/docs/content/overview/_index.en.md
@@ -0,0 +1,45 @@
++++
+pre = "<b>1. </b>"
+title = "Overview"
+weight = 1
+chapter = true
++++
+
+## What is ShardingSphere-on-Cloud?
+
+The ShardingSphere-on-Cloud project is a collection of cloud solutions for Apache ShardingSphere. It includes automated deployment scripts for virtual machines on AWS, GCP, Alibaba Cloud and other cloud environments, such as CloudFormation Stack templates and one-click Terraform deployment scripts; tools for Kubernetes cloud-native environments, such as Helm Charts, an Operator and automatic horizontal scaling; and practices covering high availability, observability, security, compliance and more.
+
+## Core Concepts
+
+The terms used in this repository come from common cloud service providers and open-source projects, and the related concepts and definitions are consistent with them.
+
+- CloudFormation: a tool provided by AWS that helps you quickly create cloud resources.
+- CloudFormation Stack: a collection of AWS resources that can be managed as a single unit.
+- Terraform: an open-source infrastructure management tool. Following the idea of "infrastructure as code", it lets you efficiently build, change and version infrastructure.
+- Kubernetes: an open-source container orchestration platform that automatically deploys, manages and scales containerized applications.
+- Operator: a software extension to Kubernetes that uses [custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) to manage applications and their components. Operators follow Kubernetes principles, notably the [controller](https://kubernetes.io/docs/concepts/architecture/controller/) pattern.
+- Helm Charts: Helm is the package manager for Kubernetes applications; a Chart is a collection of files that describes a related set of Kubernetes resources.
+
+## Architecture
+
+- ShardingSphere-Operator Diagram
+
+![Operator Diagram](../../../../img/overview/operator.png)
+
+- ShardingSphere-Terraform Diagram
+
+![Terraform Diagram](../../../../img/overview/terraform.png)
+
+## Feature List
+
+- One-click deployment of ShardingSphere Proxy in a Kubernetes environment based on Helm Charts
+- One-click deployment and automated operation and maintenance of ShardingSphere Proxy in a Kubernetes environment based on the Operator
+- Rapid deployment of ShardingSphere Proxy based on AWS CloudFormation
+- Rapid deployment of ShardingSphere Proxy in an AWS environment based on Terraform
+
+## Application Scenarios
+
+The deployment options provided by ShardingSphere-on-Cloud fit the following scenarios:
+
+1. If you want to quickly understand, verify or use the features of ShardingSphere Proxy and do not have a Kubernetes environment, you can deploy on demand with AWS CloudFormation or Terraform.
+2. If you want to deploy in a Kubernetes environment, you can try the Operator we provide, or skip the Operator and deploy native ShardingSphere Proxy directly through Helm Charts.
diff --git a/docs/content/quick-start/_index.cn.md b/docs/content/quick-start/_index.cn.md
deleted file mode 100644
index f096f24..0000000
--- a/docs/content/quick-start/_index.cn.md
+++ /dev/null
@@ -1,10 +0,0 @@
-+++
-pre = "<b>2. </b>"
-title = "快速入门"
-weight = 2
-chapter = true
-+++
-
-本章节以尽量短的时间,为使用者提供最简单的 Apache ShardingSphere 的快速入门。
-
-**示例代码:https://github.com/apache/shardingsphere/tree/master/examples**
diff --git a/docs/static/img/operation-guide/1.PNG b/docs/static/img/operation-guide/1.PNG
new file mode 100644
index 0000000..04fdfb1
Binary files /dev/null and b/docs/static/img/operation-guide/1.PNG differ
diff --git a/docs/static/img/operation-guide/10.PNG b/docs/static/img/operation-guide/10.PNG
new file mode 100644
index 0000000..8968178
Binary files /dev/null and b/docs/static/img/operation-guide/10.PNG differ
diff --git a/docs/static/img/operation-guide/11.PNG b/docs/static/img/operation-guide/11.PNG
new file mode 100644
index 0000000..964084d
Binary files /dev/null and b/docs/static/img/operation-guide/11.PNG differ
diff --git a/docs/static/img/operation-guide/12.PNG b/docs/static/img/operation-guide/12.PNG
new file mode 100644
index 0000000..37649c6
Binary files /dev/null and b/docs/static/img/operation-guide/12.PNG differ
diff --git a/docs/static/img/operation-guide/13.PNG b/docs/static/img/operation-guide/13.PNG
new file mode 100644
index 0000000..8ead204
Binary files /dev/null and b/docs/static/img/operation-guide/13.PNG differ
diff --git a/docs/static/img/operation-guide/2.PNG b/docs/static/img/operation-guide/2.PNG
new file mode 100644
index 0000000..6465367
Binary files /dev/null and b/docs/static/img/operation-guide/2.PNG differ
diff --git a/docs/static/img/operation-guide/3.PNG b/docs/static/img/operation-guide/3.PNG
new file mode 100644
index 0000000..7d45a5e
Binary files /dev/null and b/docs/static/img/operation-guide/3.PNG differ
diff --git a/docs/static/img/operation-guide/4-1.PNG b/docs/static/img/operation-guide/4-1.PNG
new file mode 100644
index 0000000..1ba2efa
Binary files /dev/null and b/docs/static/img/operation-guide/4-1.PNG differ
diff --git a/docs/static/img/operation-guide/4-10.PNG b/docs/static/img/operation-guide/4-10.PNG
new file mode 100644
index 0000000..9b63db1
Binary files /dev/null and b/docs/static/img/operation-guide/4-10.PNG differ
diff --git a/docs/static/img/operation-guide/4-11.PNG b/docs/static/img/operation-guide/4-11.PNG
new file mode 100644
index 0000000..3afe85c
Binary files /dev/null and b/docs/static/img/operation-guide/4-11.PNG differ
diff --git a/docs/static/img/operation-guide/4-12.PNG b/docs/static/img/operation-guide/4-12.PNG
new file mode 100644
index 0000000..9d20127
Binary files /dev/null and b/docs/static/img/operation-guide/4-12.PNG differ
diff --git a/docs/static/img/operation-guide/4-13.PNG b/docs/static/img/operation-guide/4-13.PNG
new file mode 100644
index 0000000..b61329b
Binary files /dev/null and b/docs/static/img/operation-guide/4-13.PNG differ
diff --git a/docs/static/img/operation-guide/4-2.PNG b/docs/static/img/operation-guide/4-2.PNG
new file mode 100644
index 0000000..fee4f1a
Binary files /dev/null and b/docs/static/img/operation-guide/4-2.PNG differ
diff --git a/docs/static/img/operation-guide/4-3.PNG b/docs/static/img/operation-guide/4-3.PNG
new file mode 100644
index 0000000..61b428a
Binary files /dev/null and b/docs/static/img/operation-guide/4-3.PNG differ
diff --git a/docs/static/img/operation-guide/4-4.PNG b/docs/static/img/operation-guide/4-4.PNG
new file mode 100644
index 0000000..7009863
Binary files /dev/null and b/docs/static/img/operation-guide/4-4.PNG differ
diff --git a/docs/static/img/operation-guide/4-5.PNG b/docs/static/img/operation-guide/4-5.PNG
new file mode 100644
index 0000000..da5a32e
Binary files /dev/null and b/docs/static/img/operation-guide/4-5.PNG differ
diff --git a/docs/static/img/operation-guide/4-6.PNG b/docs/static/img/operation-guide/4-6.PNG
new file mode 100644
index 0000000..6eab65b
Binary files /dev/null and b/docs/static/img/operation-guide/4-6.PNG differ
diff --git a/docs/static/img/operation-guide/4-7.PNG b/docs/static/img/operation-guide/4-7.PNG
new file mode 100644
index 0000000..683e059
Binary files /dev/null and b/docs/static/img/operation-guide/4-7.PNG differ
diff --git a/docs/static/img/operation-guide/4-8.PNG b/docs/static/img/operation-guide/4-8.PNG
new file mode 100644
index 0000000..c600bf2
Binary files /dev/null and b/docs/static/img/operation-guide/4-8.PNG differ
diff --git a/docs/static/img/operation-guide/4-9.PNG b/docs/static/img/operation-guide/4-9.PNG
new file mode 100644
index 0000000..a394009
Binary files /dev/null and b/docs/static/img/operation-guide/4-9.PNG differ
diff --git a/docs/static/img/operation-guide/4.PNG b/docs/static/img/operation-guide/4.PNG
new file mode 100644
index 0000000..bd0570c
Binary files /dev/null and b/docs/static/img/operation-guide/4.PNG differ
diff --git a/docs/static/img/operation-guide/5.PNG b/docs/static/img/operation-guide/5.PNG
new file mode 100644
index 0000000..cfc4490
Binary files /dev/null and b/docs/static/img/operation-guide/5.PNG differ
diff --git a/docs/static/img/operation-guide/6.PNG b/docs/static/img/operation-guide/6.PNG
new file mode 100644
index 0000000..deecb4f
Binary files /dev/null and b/docs/static/img/operation-guide/6.PNG differ
diff --git a/docs/static/img/operation-guide/7.PNG b/docs/static/img/operation-guide/7.PNG
new file mode 100644
index 0000000..1a648ea
Binary files /dev/null and b/docs/static/img/operation-guide/7.PNG differ
diff --git a/docs/static/img/operation-guide/8.PNG b/docs/static/img/operation-guide/8.PNG
new file mode 100644
index 0000000..7b9ae78
Binary files /dev/null and b/docs/static/img/operation-guide/8.PNG differ
diff --git a/docs/static/img/operation-guide/9.PNG b/docs/static/img/operation-guide/9.PNG
new file mode 100644
index 0000000..47c64be
Binary files /dev/null and b/docs/static/img/operation-guide/9.PNG differ
diff --git a/docs/static/img/overview/operator.png b/docs/static/img/overview/operator.png
new file mode 100644
index 0000000..9504285
Binary files /dev/null and b/docs/static/img/overview/operator.png differ
diff --git a/docs/static/img/overview/terraform.png b/docs/static/img/overview/terraform.png
new file mode 100644
index 0000000..1891c92
Binary files /dev/null and b/docs/static/img/overview/terraform.png differ