Posted to commits@pulsar.apache.org by ur...@apache.org on 2022/02/17 07:46:55 UTC

[pulsar-site] branch main updated: feat: update 2.6.x

This is an automated email from the ASF dual-hosted git repository.

urfree pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/pulsar-site.git


The following commit(s) were added to refs/heads/main by this push:
     new 7db103d  feat: update 2.6.x
7db103d is described below

commit 7db103da9f09d281dfdb7a95ddeb9790326d123a
Author: LiLi <ur...@apache.org>
AuthorDate: Thu Feb 17 15:45:09 2022 +0800

    feat: update 2.6.x
    
    Signed-off-by: LiLi <ur...@apache.org>
---
 site2/website-next/migrate/tool/del-duplicate.js   |   1 -
 .../version-2.6.0/administration-pulsar-manager.md |   2 +-
 .../version-2.6.0/client-libraries-dotnet.md       |   2 +-
 .../version-2.6.0/client-libraries-go.md           |   5 +-
 .../develop-binary-protocol.md}                    |   0
 .../develop-cpp.md}                                |   0
 .../develop-load-manager.md}                       |   0
 .../develop-tools.md}                              |   0
 ...{getting-started-helm.md => kubernetes-helm.md} |   0
 .../pulsar-2.0.md}                                 |   0
 .../{reference-pulsar-admin.md => pulsar-admin.md} |   0
 .../version-2.6.0/reference-cli-tools.md           |   2 +-
 .../standalone-docker.md}                          |   0
 .../version-2.6.1/administration-dashboard.md      |  76 +++
 .../version-2.6.1/administration-pulsar-manager.md |   2 +-
 .../version-2.6.1/client-libraries-cgo.md          | 579 +++++++++++++++++++++
 .../version-2.6.1/client-libraries-dotnet.md       |   2 +-
 .../version-2.6.1/client-libraries-go.md           |   5 +-
 .../version-2.6.1/concepts-messaging.md            |   2 +-
 .../develop-binary-protocol.md}                    |   0
 .../develop-cpp.md}                                |   0
 .../develop-load-manager.md}                       |   0
 .../develop-tools.md}                              |   0
 .../version-2.6.1/io-aerospike-sink.md             |  26 +
 .../version-2.6.1/io-canal-source.md               | 235 +++++++++
 .../version-2.6.1/io-cassandra-sink.md             |  57 ++
 .../version-2.6.1/io-cdc-debezium.md               | 543 +++++++++++++++++++
 .../version-2.6.1/io-debezium-source.md            | 564 ++++++++++++++++++++
 .../version-2.6.1/io-dynamodb-source.md            |  80 +++
 .../version-2.6.1/io-elasticsearch-sink.md         | 173 ++++++
 .../versioned_docs/version-2.6.1/io-file-source.md | 160 ++++++
 .../versioned_docs/version-2.6.1/io-flume-sink.md  |  56 ++
 .../version-2.6.1/io-flume-source.md               |  56 ++
 .../versioned_docs/version-2.6.1/io-hbase-sink.md  |  67 +++
 .../versioned_docs/version-2.6.1/io-hdfs2-sink.md  |  61 +++
 .../versioned_docs/version-2.6.1/io-hdfs3-sink.md  |  59 +++
 .../version-2.6.1/io-influxdb-sink.md              | 119 +++++
 .../versioned_docs/version-2.6.1/io-jdbc-sink.md   | 157 ++++++
 .../versioned_docs/version-2.6.1/io-kafka-sink.md  |  72 +++
 .../version-2.6.1/io-kafka-source.md               | 197 +++++++
 .../version-2.6.1/io-kinesis-sink.md               |  80 +++
 .../version-2.6.1/io-kinesis-source.md             |  81 +++
 .../versioned_docs/version-2.6.1/io-mongo-sink.md  |  57 ++
 .../version-2.6.1/io-netty-source.md               | 241 +++++++++
 .../version-2.6.1/io-rabbitmq-sink.md              |  85 +++
 .../version-2.6.1/io-rabbitmq-source.md            |  82 +++
 .../versioned_docs/version-2.6.1/io-redis-sink.md  |  74 +++
 .../versioned_docs/version-2.6.1/io-solr-sink.md   |  65 +++
 .../version-2.6.1/io-twitter-source.md             |  28 +
 .../kubernetes-helm.md}                            |   0
 .../pulsar-2.0.md}                                 |   0
 .../{reference-pulsar-admin.md => pulsar-admin.md} |   0
 .../version-2.6.1/reference-cli-tools.md           |  14 +-
 .../version-2.6.1/reference-connector-admin.md     |  11 +
 .../version-2.6.1/security-token-admin.md          | 183 +++++++
 .../standalone-docker.md}                          |   0
 .../version-2.6.2/administration-dashboard.md      |  76 +++
 .../version-2.6.2/administration-pulsar-manager.md |   2 +-
 .../version-2.6.2/client-libraries-cgo.md          | 579 +++++++++++++++++++++
 .../version-2.6.2/client-libraries-go.md           |   5 +-
 .../version-2.6.2/concepts-messaging.md            |   2 +-
 ...nary-protocol.md => develop-binary-protocol.md} |   0
 .../{developing-cpp.md => develop-cpp.md}          |   0
 ...ing-load-manager.md => develop-load-manager.md} |   0
 .../{developing-tools.md => develop-tools.md}      |   0
 .../version-2.6.2/io-aerospike-sink.md             |  26 +
 .../version-2.6.2/io-canal-source.md               | 235 +++++++++
 .../version-2.6.2/io-cassandra-sink.md             |  57 ++
 .../version-2.6.2/io-cdc-debezium.md               | 543 +++++++++++++++++++
 .../version-2.6.2/io-debezium-source.md            | 564 ++++++++++++++++++++
 .../version-2.6.2/io-dynamodb-source.md            |  80 +++
 .../version-2.6.2/io-elasticsearch-sink.md         | 173 ++++++
 .../versioned_docs/version-2.6.2/io-file-source.md | 160 ++++++
 .../versioned_docs/version-2.6.2/io-flume-sink.md  |  56 ++
 .../version-2.6.2/io-flume-source.md               |  56 ++
 .../versioned_docs/version-2.6.2/io-hbase-sink.md  |  67 +++
 .../versioned_docs/version-2.6.2/io-hdfs2-sink.md  |  61 +++
 .../versioned_docs/version-2.6.2/io-hdfs3-sink.md  |  59 +++
 .../version-2.6.2/io-influxdb-sink.md              | 119 +++++
 .../versioned_docs/version-2.6.2/io-jdbc-sink.md   | 157 ++++++
 .../versioned_docs/version-2.6.2/io-kafka-sink.md  |  72 +++
 .../version-2.6.2/io-kafka-source.md               | 197 +++++++
 .../version-2.6.2/io-kinesis-sink.md               |  80 +++
 .../version-2.6.2/io-kinesis-source.md             |  81 +++
 .../versioned_docs/version-2.6.2/io-mongo-sink.md  |  57 ++
 .../version-2.6.2/io-netty-source.md               | 241 +++++++++
 .../version-2.6.2/io-rabbitmq-sink.md              |  85 +++
 .../version-2.6.2/io-rabbitmq-source.md            |  82 +++
 .../versioned_docs/version-2.6.2/io-redis-sink.md  |  74 +++
 .../versioned_docs/version-2.6.2/io-solr-sink.md   |  65 +++
 .../version-2.6.2/io-twitter-source.md             |  28 +
 ...{getting-started-helm.md => kubernetes-helm.md} |   0
 .../{getting-started-pulsar.md => pulsar-2.0.md}   |   0
 .../pulsar-admin.md}                               |   0
 .../version-2.6.2/reference-connector-admin.md     |  11 +
 .../version-2.6.2/security-token-admin.md          | 183 +++++++
 ...ting-started-docker.md => standalone-docker.md} |   0
 .../version-2.6.3/administration-pulsar-manager.md |   2 +-
 .../version-2.6.3/client-libraries-go.md           |   5 +-
 .../version-2.6.3/concepts-messaging.md            |   2 +-
 .../develop-binary-protocol.md}                    |   0
 .../develop-cpp.md}                                |   0
 .../develop-load-manager.md}                       |   0
 .../develop-tools.md}                              |   0
 .../kubernetes-helm.md}                            |   0
 .../pulsar-2.0.md}                                 |   0
 .../pulsar-admin.md}                               |   0
 .../version-2.6.3/reference-cli-tools.md           |   2 +-
 .../standalone-docker.md}                          |   0
 .../version-2.6.4/administration-pulsar-manager.md |   2 +-
 .../version-2.6.4/client-libraries-dotnet.md       |   2 +-
 .../version-2.6.4/client-libraries-go.md           |   5 +-
 .../version-2.6.4/concepts-messaging.md            |   2 +-
 .../versioned_docs/version-2.6.4/deploy-aws.md     |   2 +-
 .../develop-binary-protocol.md}                    |   0
 .../develop-cpp.md}                                |   0
 .../develop-load-manager.md}                       |   0
 .../develop-tools.md}                              |   0
 ...{getting-started-helm.md => kubernetes-helm.md} |   0
 .../pulsar-2.0.md}                                 |   0
 .../version-2.6.4/reference-cli-tools.md           |   2 +-
 .../standalone-docker.md}                          |   0
 site2/website-next/versions.json                   |   2 +-
 123 files changed, 8687 insertions(+), 35 deletions(-)

diff --git a/site2/website-next/migrate/tool/del-duplicate.js b/site2/website-next/migrate/tool/del-duplicate.js
index 7d9711e..a63e4d3 100644
--- a/site2/website-next/migrate/tool/del-duplicate.js
+++ b/site2/website-next/migrate/tool/del-duplicate.js
@@ -17,7 +17,6 @@ module.exports = (dest, version) => {
     duplicateMap[id] = duplicateMap[id] || [];
     duplicateMap[id].push(pathname);
   }
-  console.log(duplicateMap);
   for (let [key, duplicateFiles] of Object.entries(duplicateMap)) {
     if (duplicateFiles.length > 1) {
       for (let file of duplicateFiles) {
diff --git a/site2/website-next/versioned_docs/version-2.6.0/administration-pulsar-manager.md b/site2/website-next/versioned_docs/version-2.6.0/administration-pulsar-manager.md
index 12ec681..1e70069 100644
--- a/site2/website-next/versioned_docs/version-2.6.0/administration-pulsar-manager.md
+++ b/site2/website-next/versioned_docs/version-2.6.0/administration-pulsar-manager.md
@@ -103,7 +103,7 @@ If you want to enable JWT authentication, use one of the following methods.
 
 ```
 
-wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/apache-pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
+wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
 tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
 cd pulsar-manager
 tar -zxvf pulsar-manager.tar
diff --git a/site2/website-next/versioned_docs/version-2.6.0/client-libraries-dotnet.md b/site2/website-next/versioned_docs/version-2.6.0/client-libraries-dotnet.md
index ade664c..4e0afe3 100644
--- a/site2/website-next/versioned_docs/version-2.6.0/client-libraries-dotnet.md
+++ b/site2/website-next/versioned_docs/version-2.6.0/client-libraries-dotnet.md
@@ -9,7 +9,7 @@ You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and cons
 
 ## Installation
 
-You can install the Pulsar C# client library either through the dotnet CLI or through the Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through the Visual Studio , see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
+You can install the Pulsar C# client library either through the dotnet CLI or through the Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through the Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
 
 ### Prerequisites
 
diff --git a/site2/website-next/versioned_docs/version-2.6.0/client-libraries-go.md b/site2/website-next/versioned_docs/version-2.6.0/client-libraries-go.md
index 46f6342..e976144 100644
--- a/site2/website-next/versioned_docs/version-2.6.0/client-libraries-go.md
+++ b/site2/website-next/versioned_docs/version-2.6.0/client-libraries-go.md
@@ -192,8 +192,9 @@ if err != nil {
 defer client.Close()
 
 topicName := newTopicName()
-producer, err := client.CreateProducer(ProducerOptions{
-	Topic: topicName,
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic:           topicName,
+    DisableBatching: true,
 })
 if err != nil {
 	log.Fatal(err)
diff --git a/site2/website-next/versioned_docs/version-2.6.4/developing-binary-protocol.md b/site2/website-next/versioned_docs/version-2.6.0/develop-binary-protocol.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.4/developing-binary-protocol.md
rename to site2/website-next/versioned_docs/version-2.6.0/develop-binary-protocol.md
diff --git a/site2/website-next/versioned_docs/version-2.6.4/developing-cpp.md b/site2/website-next/versioned_docs/version-2.6.0/develop-cpp.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.4/developing-cpp.md
rename to site2/website-next/versioned_docs/version-2.6.0/develop-cpp.md
diff --git a/site2/website-next/versioned_docs/version-2.6.4/developing-load-manager.md b/site2/website-next/versioned_docs/version-2.6.0/develop-load-manager.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.4/developing-load-manager.md
rename to site2/website-next/versioned_docs/version-2.6.0/develop-load-manager.md
diff --git a/site2/website-next/versioned_docs/version-2.6.4/developing-tools.md b/site2/website-next/versioned_docs/version-2.6.0/develop-tools.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.4/developing-tools.md
rename to site2/website-next/versioned_docs/version-2.6.0/develop-tools.md
diff --git a/site2/website-next/versioned_docs/version-2.6.0/getting-started-helm.md b/site2/website-next/versioned_docs/version-2.6.0/kubernetes-helm.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.0/getting-started-helm.md
rename to site2/website-next/versioned_docs/version-2.6.0/kubernetes-helm.md
diff --git a/site2/website-next/versioned_docs/version-2.6.4/getting-started-pulsar.md b/site2/website-next/versioned_docs/version-2.6.0/pulsar-2.0.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.4/getting-started-pulsar.md
rename to site2/website-next/versioned_docs/version-2.6.0/pulsar-2.0.md
diff --git a/site2/website-next/versioned_docs/version-2.6.0/reference-pulsar-admin.md b/site2/website-next/versioned_docs/version-2.6.0/pulsar-admin.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.0/reference-pulsar-admin.md
rename to site2/website-next/versioned_docs/version-2.6.0/pulsar-admin.md
diff --git a/site2/website-next/versioned_docs/version-2.6.0/reference-cli-tools.md b/site2/website-next/versioned_docs/version-2.6.0/reference-cli-tools.md
index ba2ae45..7541f34 100644
--- a/site2/website-next/versioned_docs/version-2.6.0/reference-cli-tools.md
+++ b/site2/website-next/versioned_docs/version-2.6.0/reference-cli-tools.md
@@ -796,7 +796,7 @@ The table below lists the environment variables that you can use to configure th
 |BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
 
 
-### `autorecovery`
+### `auto-recovery`
 Runs an auto-recovery service
 
 Usage
diff --git a/site2/website-next/versioned_docs/version-2.6.4/getting-started-docker.md b/site2/website-next/versioned_docs/version-2.6.0/standalone-docker.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.4/getting-started-docker.md
rename to site2/website-next/versioned_docs/version-2.6.0/standalone-docker.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/administration-dashboard.md b/site2/website-next/versioned_docs/version-2.6.1/administration-dashboard.md
new file mode 100644
index 0000000..514b076
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/administration-dashboard.md
@@ -0,0 +1,76 @@
+---
+id: administration-dashboard
+title: Pulsar dashboard
+sidebar_label: "Dashboard"
+original_id: administration-dashboard
+---
+
+:::note
+
+Pulsar dashboard is deprecated. If you want to manage and monitor the stats of your topics, use [Pulsar Manager](administration-pulsar-manager). 
+
+:::
+
+Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form.
+
+The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database.
+
+You can use the [Django](https://www.djangoproject.com) web app to render the collected data.
+
+## Install
+
+The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container.
+
+```shell
+
+$ SERVICE_URL=http://broker.example.com:8080/
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  apachepulsar/pulsar-dashboard:@pulsar:version@
+
+```
+
+You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well:
+
+```shell
+
+$ docker build -t apachepulsar/pulsar-dashboard dashboard
+
+```
+
+If token authentication is enabled:
+> The provided token should have super-user access.
+
+```shell
+
+$ SERVICE_URL=http://broker.example.com:8080/
+$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  -e JWT_TOKEN=$JWT_TOKEN \
+  apachepulsar/pulsar-dashboard
+
+```
+
+You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from which it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://<broker-ip>:8080` by default, where `<broker-ip>` is the IP address or hostname of the machine running Pulsar standalone. That IP address or hostname must be accessible from the Docker instance running the dashboard.
+
+Once the Docker container runs, the web dashboard is accessible via `localhost` or whichever host Docker uses.
+
+> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container.
+
+If the Pulsar service runs in standalone mode on `localhost`, the `SERVICE_URL` has to
+be the IP address of the machine.
+
+Similarly, since Pulsar standalone advertises itself as `localhost` by default, you need to
+explicitly set the advertised address to the host IP address. For example:
+
+```shell
+
+$ bin/pulsar standalone --advertised-address 1.2.3.4
+
+```
+
+### Known issues
+
+Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported.
diff --git a/site2/website-next/versioned_docs/version-2.6.1/administration-pulsar-manager.md b/site2/website-next/versioned_docs/version-2.6.1/administration-pulsar-manager.md
index 3e129ae..eb125c5 100644
--- a/site2/website-next/versioned_docs/version-2.6.1/administration-pulsar-manager.md
+++ b/site2/website-next/versioned_docs/version-2.6.1/administration-pulsar-manager.md
@@ -103,7 +103,7 @@ If you want to enable JWT authentication, use one of the following methods.
 
 ```
 
-wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/apache-pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
+wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
 tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
 cd pulsar-manager
 tar -zxvf pulsar-manager.tar
diff --git a/site2/website-next/versioned_docs/version-2.6.1/client-libraries-cgo.md b/site2/website-next/versioned_docs/version-2.6.1/client-libraries-cgo.md
new file mode 100644
index 0000000..c79f7bb
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/client-libraries-cgo.md
@@ -0,0 +1,579 @@
+---
+id: client-libraries-cgo
+title: Pulsar CGo client
+sidebar_label: "CGo(deprecated)"
+original_id: client-libraries-cgo
+---
+
+You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
+
+Currently, the following Go clients are maintained in two repositories.
+
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library |
+| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+
+> **API docs available as well**  
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
+
+## Installation
+
+### Requirements
+
+The Pulsar Go client library is based on the C++ client library. Follow
+the instructions for the [C++ library](client-libraries-cpp) to install the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb), or [Homebrew packages](client-libraries-cpp.md#macos).
+
+### Install go package
+
+> **Compatibility Warning**  
+> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
+
+You can install the `pulsar` library locally using `go get`.  Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client.  You'll need a C++ client library that matches master.
+
+```bash
+
+$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
+
+```
+
+Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
+
+```bash
+
+$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@
+
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+
+import "github.com/apache/pulsar/pulsar-client-go/pulsar"
+
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+
+pulsar://localhost:6650
+
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+
+pulsar://pulsar.us-west.example.com:6650
+
+```
+
+If you're using [TLS](security-tls-authentication) authentication, the URL will look something like this:
+
+```http
+
+pulsar+ssl://pulsar.us-west.example.com:6651
+
+```
+
+## Create a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+```go
+
+import (
+    "log"
+    "runtime"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+        OperationTimeoutSeconds: 5,
+        MessageListenerThreads: runtime.NumCPU(),
+    })
+
+    if err != nil {
+        log.Fatalf("Could not instantiate Pulsar client: %v", err)
+    }
+}
+
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`URL` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info |
+`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
+`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
+`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
+`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
+`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
+`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
+`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
+`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
+`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
+
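+The following sketch shows several of these options set together; the values simply restate the defaults from the table above and are illustrative rather than recommendations:
+
+```go
+
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+    URL:                      "pulsar://localhost:6650",
+    IOThreads:                1,
+    OperationTimeoutSeconds:  30,
+    ConcurrentLookupRequests: 5000,
+    StatsIntervalInSeconds:   60,
+})
+if err != nil {
+    log.Fatalf("Could not instantiate Pulsar client: %v", err)
+}
+defer client.Close()
+
+```
+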
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic: "my-topic",
+})
+
+if err != nil {
+    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
+}
+
+defer producer.Close()
+
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Hello, Pulsar"),
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Producer could not send message: %v", err)
+}
+
+```
+
+> **Blocking operation**  
+> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
+
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
+`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call blocks until the message is successfully acknowledged by the Pulsar broker. | `(MessageID, error)`
+`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
+`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the published message and any error encountered while publishing. |
+`LastSequenceID()` | Gets the last sequence ID published by this producer. This represents either the automatically assigned or custom sequence ID (set on the `ProducerMessage`) that was published and acknowledged by the broker. | `int64`
+`Flush()`| Flushes all the messages buffered in the client and waits until all messages have been successfully persisted. | `error`
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
+`Schema()` | Returns the schema associated with the producer | `Schema`
+
+Here's a more involved example usage of a producer:
+
+```go
+
+import (
+    "context"
+    "fmt"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client to instantiate a producer
+    producer, err := client.CreateProducer(pulsar.ProducerOptions{
+        Topic: "my-topic",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    ctx := context.Background()
+
+    // Send 10 messages synchronously and 10 messages asynchronously
+    for i := 0; i < 10; i++ {
+        // Create a message
+        msg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("message-%d", i)),
+        }
+
+        // Attempt to send the message
+        if err := producer.Send(ctx, msg); err != nil {
+            log.Fatal(err)
+        }
+
+        // Create a different message to send asynchronously
+        asyncMsg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
+        }
+
+        // Attempt to send the message asynchronously and handle the response
+        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
+            if err != nil { log.Fatal(err) }
+
+            fmt.Printf("the %s successfully published", string(msg.Payload))
+        })
+    }
+}
+
+```
+
+### Producer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
+`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
+`Properties`| Attaches a set of application-defined properties to the producer. These properties will be visible in the topic stats. |
+`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication) feature. | 30 seconds
+`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
+`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
+`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
+`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
+`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `puls [...]
+`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
+`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the index of the partition to route the message to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
+`Batching` | Controls whether automatic batching of messages is enabled for the producer. | false
+`BatchingMaxPublishDelay` | Sets the time period within which sent messages are batched, if batching is enabled. If set to a non-zero value, messages are queued until this time interval elapses or the batch reaches the `BatchingMaxMessages` limit. | 1ms
+`BatchingMaxMessages` | Sets the maximum number of messages permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached or the batch interval has elapsed. | 1000
+
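+As a sketch of how these options combine, the producer below enables batching and compression. The field names come from the table above; the duration-typed values and the `pulsar.LZ4` constant are assumptions consistent with the client examples in this page, and the concrete values are illustrative:
+
+```go
+
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic:                   "my-topic",
+    SendTimeout:             30 * time.Second,
+    CompressionType:         pulsar.LZ4,
+    Batching:                true,
+    BatchingMaxPublishDelay: 10 * time.Millisecond,
+    BatchingMaxMessages:     1000,
+})
+if err != nil {
+    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
+}
+defer producer.Close()
+
+```
+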
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on them. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+
+msgChannel := make(chan pulsar.ConsumerMessage)
+
+consumerOpts := pulsar.ConsumerOptions{
+    Topic:            "my-topic",
+    SubscriptionName: "my-subscription-1",
+    Type:             pulsar.Exclusive,
+    MessageChannel:   msgChannel,
+}
+
+consumer, err := client.Subscribe(consumerOpts)
+
+if err != nil {
+    log.Fatalf("Could not establish subscription: %v", err)
+}
+
+defer consumer.Close()
+
+for cm := range msgChannel {
+    msg := cm.Message
+
+    fmt.Printf("Message ID: %s", msg.ID())
+    fmt.Printf("Message value: %s", string(msg.Payload()))
+
+    consumer.Ack(msg)
+}
+
+```
+
+> **Blocking operation**  
+> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
+
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
+`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
+`AckCumulativeID(MessageID)` | Acknowledges the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledgment has been sent to the broker. After that, the messages will not be redelivered to this consumer. | `error`
+`Nack(Message)` | Acknowledges the failure to process a single message. | `error`
+`NackID(MessageID)` | Acknowledges the failure to process a single message, identified by its message ID. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
+`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
+`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either be a specific message or represent the first or last messages in the topic. | `error`
+
+#### Receive example
+
+Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
+
+```go
+
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client object to instantiate a consumer
+    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+        Topic:            "my-golang-topic",
+        SubscriptionName: "sub-1",
+        Type: pulsar.Exclusive,
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    defer consumer.Close()
+
+    ctx := context.Background()
+
+    // Listen indefinitely on the topic
+    for {
+        msg, err := consumer.Receive(ctx)
+        if err != nil { log.Fatal(err) }
+
+        // Do something with the message
+        err = processMessage(msg)
+
+        if err == nil {
+            // Message processed successfully
+            consumer.Ack(msg)
+        } else {
+            // Failed to process messages
+            consumer.Nack(msg)
+        }
+    }
+}
+
+```
+
+### Consumer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
+`Topics` | Specify a list of topics this consumer subscribes to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
+`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
+`SubscriptionName` | The subscription name for this consumer |
+`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats|
+`Name` | The name of the consumer |
+`AckTimeout` | Set the timeout for unacked messages | 0
+`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
+`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
+`SubscriptionInitPos` | The initial position at which the cursor is set when subscribing | Latest
+`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
+`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
+`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
+
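+As a minimal sketch of these options together, the consumer below combines a shared subscription with a larger receiver queue. The field names follow the table above, and the values are illustrative:
+
+```go
+
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+    Topic:             "my-topic",
+    SubscriptionName:  "my-subscription-1",
+    Type:              pulsar.Shared,
+    ReceiverQueueSize: 2000,
+})
+if err != nil {
+    log.Fatalf("Could not establish subscription: %v", err)
+}
+defer consumer.Close()
+
+```
+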
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic: "my-golang-topic",
+    StartMessageID: pulsar.LatestMessage,
+})
+
+```
+
+> **Blocking operation**  
+> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
+
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`HasNext()` | Check if there is any message available to read from the current position | `(bool, error)`
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+
+#### "Next" example
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatalf("Could not create client: %v", err) }
+
+    // Use the client to instantiate a reader
+    reader, err := client.CreateReader(pulsar.ReaderOptions{
+        Topic:          "my-golang-topic",
+        StartMessageID: pulsar.EarliestMessage,
+    })
+
+    if err != nil { log.Fatalf("Could not create reader: %v", err) }
+
+    defer reader.Close()
+
+    ctx := context.Background()
+
+    // Listen on the topic for incoming messages
+    for {
+        msg, err := reader.Next(ctx)
+        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
+
+        // Process the message
+    }
+}
+
+```
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+
+var lastSavedId []byte // Read the last saved message ID from an external store
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:          "my-golang-topic",
+    StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
+})
+
+```
+
+### Reader configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
+`Name` | The name of the reader |
+`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
+`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
+`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
+`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
+
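+As an illustrative sketch of these options, the reader below starts from the earliest message and reads the compacted view of the topic. Field names follow the table above, assuming `ReadCompacted` takes a boolean as the table implies; the queue size is an example value:
+
+```go
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:             "my-golang-topic",
+    StartMessageID:    pulsar.EarliestMessage,
+    ReceiverQueueSize: 2000,
+    ReadCompacted:     true,
+})
+if err != nil {
+    log.Fatalf("Could not create reader: %v", err)
+}
+defer reader.Close()
+
+```
+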
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` type that you can use to construct messages to produce on Pulsar topics. Here's an example message:
+
+```go
+
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Here is some message data"),
+    Key: "message-key",
+    Properties: map[string]string{
+        "foo": "bar",
+    },
+    EventTime: time.Now(),
+    ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Could not publish message due to: %v", err)
+}
+
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Value` | `Value` and `Payload` are mutually exclusive; use `Value interface{}` for schema-based messages.
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+`SequenceID` | Set the sequence id to assign to the current message
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+
+opts := pulsar.ClientOptions{
+    URL: "pulsar+ssl://my-cluster.com:6651",
+    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
+    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+
+```
+
+## Schema
+
+This example shows how to create a producer and consumer with schema.
+
+```go
+
+var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
+    "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
+jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
+// create producer
+producer, err := client.CreateProducerWithSchema(ProducerOptions{
+	Topic: "jsonTopic",
+}, jsonSchema)
+err = producer.Send(context.Background(), ProducerMessage{
+	Value: &testJson{
+		ID:   100,
+		Name: "pulsar",
+	},
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+// create consumer
+var s testJson
+consumerJS := NewJsonSchema(exampleSchemaDef, nil)
+consumer, err := client.SubscribeWithSchema(ConsumerOptions{
+	Topic:            "jsonTopic",
+	SubscriptionName: "sub-2",
+}, consumerJS)
+if err != nil {
+	log.Fatal(err)
+}
+msg, err := consumer.Receive(context.Background())
+if err != nil {
+	log.Fatal(err)
+}
+err = msg.GetValue(&s)
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(s.ID) // output: 100
+fmt.Println(s.Name) // output: pulsar
+defer consumer.Close()
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.1/client-libraries-dotnet.md b/site2/website-next/versioned_docs/version-2.6.1/client-libraries-dotnet.md
index 4e0afe3..ade664c 100644
--- a/site2/website-next/versioned_docs/version-2.6.1/client-libraries-dotnet.md
+++ b/site2/website-next/versioned_docs/version-2.6.1/client-libraries-dotnet.md
@@ -9,7 +9,7 @@ You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and cons
 
 ## Installation
 
-You can install the Pulsar C# client library either through the dotnet CLI or through the Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through the Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
+You can install the Pulsar C# client library either through the dotnet CLI or through the Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through the Visual Studio , see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
 
 ### Prerequisites
 
diff --git a/site2/website-next/versioned_docs/version-2.6.1/client-libraries-go.md b/site2/website-next/versioned_docs/version-2.6.1/client-libraries-go.md
index c8b5047..df40107 100644
--- a/site2/website-next/versioned_docs/version-2.6.1/client-libraries-go.md
+++ b/site2/website-next/versioned_docs/version-2.6.1/client-libraries-go.md
@@ -192,8 +192,9 @@ if err != nil {
 defer client.Close()
 
 topicName := newTopicName()
-producer, err := client.CreateProducer(ProducerOptions{
-	Topic: topicName,
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic:           topicName,
+    DisableBatching: true,
 })
 if err != nil {
 	log.Fatal(err)
diff --git a/site2/website-next/versioned_docs/version-2.6.1/concepts-messaging.md b/site2/website-next/versioned_docs/version-2.6.1/concepts-messaging.md
index 995d632..29cebdf 100644
--- a/site2/website-next/versioned_docs/version-2.6.1/concepts-messaging.md
+++ b/site2/website-next/versioned_docs/version-2.6.1/concepts-messaging.md
@@ -66,7 +66,7 @@ When you enable chunking, read the following instructions.
 - Chunking is only supported for persisted topics.
 - Chunking is only supported for the exclusive and failover subscription types.
 
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
+When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
 
 The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. And then the consumer stitches chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated to that large message. You can set the `maxPendingChuckedMessage` param [...]
 
diff --git a/site2/website-next/versioned_docs/version-2.6.3/developing-binary-protocol.md b/site2/website-next/versioned_docs/version-2.6.1/develop-binary-protocol.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.3/developing-binary-protocol.md
rename to site2/website-next/versioned_docs/version-2.6.1/develop-binary-protocol.md
diff --git a/site2/website-next/versioned_docs/version-2.6.3/developing-cpp.md b/site2/website-next/versioned_docs/version-2.6.1/develop-cpp.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.3/developing-cpp.md
rename to site2/website-next/versioned_docs/version-2.6.1/develop-cpp.md
diff --git a/site2/website-next/versioned_docs/version-2.6.3/developing-load-manager.md b/site2/website-next/versioned_docs/version-2.6.1/develop-load-manager.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.3/developing-load-manager.md
rename to site2/website-next/versioned_docs/version-2.6.1/develop-load-manager.md
diff --git a/site2/website-next/versioned_docs/version-2.6.3/developing-tools.md b/site2/website-next/versioned_docs/version-2.6.1/develop-tools.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.3/developing-tools.md
rename to site2/website-next/versioned_docs/version-2.6.1/develop-tools.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-aerospike-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-aerospike-sink.md
new file mode 100644
index 0000000..63d7338
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-aerospike-sink.md
@@ -0,0 +1,26 @@
+---
+id: io-aerospike-sink
+title: Aerospike sink connector
+sidebar_label: "Aerospike sink connector"
+original_id: io-aerospike-sink
+---
+
+The Aerospike sink connector pulls messages from Pulsar topics to Aerospike clusters.
+
+## Configuration
+
+The configuration of the Aerospike sink connector has the following properties.
+
+### Property
+
+| Name | Type | Required | Default | Description |
+|------|----------|----------|---------|-------------|
+| `seedHosts` |String| true | No default value| The comma-separated list of one or more Aerospike cluster hosts.<br /><br />Each host can be specified as a valid IP address or hostname followed by an optional port number. | 
+| `keyspace` | String| true |No default value |The Aerospike namespace. |
+| `columnName` | String | true| No default value|The Aerospike column name. |
+|`userName`|String|false|NULL|The Aerospike username.|
+|`password`|String|false|NULL|The Aerospike password.|
+| `keySet` | String|false |NULL | The Aerospike set name. |
+| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. |
+| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions.  |
+| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. |
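+
+### Example
+
+Before using the Aerospike sink connector, you can create a configuration file in the same format used by the other connector examples in these docs. The following YAML is a sketch; all values are illustrative, and the property names come from the table above:
+
+```yaml
+
+configs:
+    seedHosts: "localhost:3000"
+    keyspace: "pulsar"
+    columnName: "pulsar"
+    maxConcurrentRequests: 100
+    timeoutMs: 100
+    retries: 1
+
+```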
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-canal-source.md b/site2/website-next/versioned_docs/version-2.6.1/io-canal-source.md
new file mode 100644
index 0000000..d1fd43b
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-canal-source.md
@@ -0,0 +1,235 @@
+---
+id: io-canal-source
+title: Canal source connector
+sidebar_label: "Canal source connector"
+original_id: io-canal-source
+---
+
+The Canal source connector pulls messages from MySQL to Pulsar topics.
+
+## Configuration
+
+The configuration of Canal source connector has the following properties.
+
+### Property
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `username` | true | None | Canal server account (not MySQL).|
+| `password` | true | None | Canal server password (not MySQL). |
+|`destination`|true|None|Source destination that the Canal source connector connects to.|
+| `singleHostname` | false | None | Canal server address.|
+| `singlePort` | false | None | Canal server port.|
+| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.<br /><br /><li>true: **cluster** mode.<br />If set to true, it talks to `zkServers` to figure out the actual database host.<br /><br /></li><li>false: **standalone** mode.<br />If set to false, it connects to the database specified by `singleHostname` and `singlePort`. </li>|
+| `zkServers` | true | None | The address and port of the ZooKeeper cluster that the Canal source connector talks to in order to figure out the actual database host.|
+| `batchSize` | false | 1000 | The batch size to fetch from Canal. |
+
+### Example
+
+Before using the Canal connector, you can create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "zkServers": "127.0.0.1:2181",
+      "batchSize": "5120",
+      "destination": "example",
+      "username": "",
+      "password": "",
+      "cluster": false,
+      "singleHostname": "127.0.0.1",
+      "singlePort": "11111",
+  }
+  
+  ```
+
+* YAML
+
+  You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file.
+
+  ```yaml
+  
+  configs:
+      zkServers: "127.0.0.1:2181"
+      batchSize: 5120
+      destination: "example"
+      username: ""
+      password: ""
+      cluster: false
+      singleHostname: "127.0.0.1"
+      singlePort: 11111
+  
+  ```
+
+## Usage
+
+Here is an example of storing MySQL data using the configuration file above.
+
+1. Start a MySQL server.
+
+   ```bash
+   
+   $ docker pull mysql:5.7
+   $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7
+   
+   ```
+
+2. Create a configuration file `mysqld.cnf`.
+
+   ```bash
+   
+   [mysqld]
+   pid-file    = /var/run/mysqld/mysqld.pid
+   socket      = /var/run/mysqld/mysqld.sock
+   datadir     = /var/lib/mysql
+   #log-error  = /var/log/mysql/error.log
+   # By default we only accept connections from localhost
+   #bind-address   = 127.0.0.1
+   # Disabling symbolic-links is recommended to prevent assorted security risks
+   symbolic-links=0
+   log-bin=mysql-bin
+   binlog-format=ROW
+   server_id=1
+   
+   ```
+
+3. Copy the configuration file `mysqld.cnf` to MySQL server.
+
+   ```bash
+   
+   $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/
+   
+   ```
+
+4. Restart the MySQL server.
+
+   ```bash
+   
+   $ docker restart pulsar-mysql
+   
+   ```
+
+5. Create a test database in the MySQL server.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mysql /bin/bash
+   $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;'
+   
+   ```
+
+6. Start a Canal server and connect to MySQL server.
+
+   ```bash
+   
+   $ docker pull canal/canal-server:v1.1.2
+   $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2
+   
+   ```
+
+7. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:2.3.0
+   $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone
+   
+   ```
+
+8. Modify the configuration file `canal-mysql-source-config.yaml`.
+
+   ```yaml
+   
+   configs:
+       zkServers: ""
+       batchSize: "5120"
+       destination: "test"
+       username: ""
+       password: ""
+       cluster: false
+       singleHostname: "pulsar-canal-server"
+       singlePort: "11111"
+   
+   ```
+
+9. Create a consumer file `pulsar-client.py`.
+
+   ```python
+   
+   import pulsar
+
+   client = pulsar.Client('pulsar://localhost:6650')
+   consumer = client.subscribe('my-topic',
+                               subscription_name='my-sub')
+
+   while True:
+       msg = consumer.receive()
+       print("Received message: '%s'" % msg.data())
+       consumer.acknowledge(msg)
+
+   client.close()
+   
+   ```
+
+10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to the Pulsar server.
+
+   ```bash
+   
+   $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/
+   $ docker cp pulsar-client.py pulsar-standalone:/pulsar/
+   
+   ```
+
+11. Download a Canal connector and start it.
+
+   ```bash
+   
+   $ docker exec -it pulsar-standalone /bin/bash
+   $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors
+   $ ./bin/pulsar-admin source localrun \
+   --archive ./connectors/pulsar-io-canal-2.3.0.nar \
+   --classname org.apache.pulsar.io.canal.CanalStringSource \
+   --tenant public \
+   --namespace default \
+   --name canal \
+   --destination-topic-name my-topic \
+   --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \
+   --parallelism 1
+   
+   ```
+
+12. Consume data from MySQL. 
+
+   ```bash
+   
+   $ docker exec -it pulsar-standalone /bin/bash
+   $ python pulsar-client.py
+   
+   ```
+
+13. Open another window to log in to the MySQL server.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mysql /bin/bash
+   $ mysql -h 127.0.0.1 -uroot -pcanal
+   
+   ```
+
+14. Create a table, and insert, delete, and update data in the MySQL server.
+
+   ```bash
+   
+   mysql> use test;
+   mysql> show tables;
+   mysql> CREATE TABLE IF NOT EXISTS `test_table` (
+       `test_id` INT UNSIGNED AUTO_INCREMENT,
+       `test_title` VARCHAR(100) NOT NULL,
+       `test_author` VARCHAR(40) NOT NULL,
+       `test_date` DATE,
+       PRIMARY KEY (`test_id`)
+       ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+   mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW());
+   mysql> UPDATE test_table SET test_title='c' WHERE test_title='a';
+   mysql> DELETE FROM test_table WHERE test_title='c';
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-cassandra-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-cassandra-sink.md
new file mode 100644
index 0000000..b27a754
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-cassandra-sink.md
@@ -0,0 +1,57 @@
+---
+id: io-cassandra-sink
+title: Cassandra sink connector
+sidebar_label: "Cassandra sink connector"
+original_id: io-cassandra-sink
+---
+
+The Cassandra sink connector pulls messages from Pulsar topics and persists them to Cassandra clusters.
+
+## Configuration
+
+The configuration of the Cassandra sink connector has the following properties.
+
+### Property
+
+| Name | Type | Required | Default | Description |
+|------|----------|----------|---------|-------------|
+| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.|
+| `keyspace` | String|true| " " (empty string)| The keyspace used for writing Pulsar messages. <br /><br />**Note: `keyspace` should be created prior to a Cassandra sink.**|
+| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family. <br /><br />The column is used for storing Pulsar message keys. <br /><br />If a Pulsar message doesn't have any key associated, the message value is used as the key. |
+| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.<br /><br />**Note: `columnFamily` should be created prior to a Cassandra sink.**|
+| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.<br /><br /> The column is used for storing Pulsar message values. |
+
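+As noted in the property table, the keyspace and column family must exist before the sink starts. The following `cqlsh` statements are a minimal sketch that matches the example configuration below; the replication settings are illustrative only.
+
+```
+
+CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
+CREATE TABLE pulsar_test_keyspace.pulsar_test_table (key text PRIMARY KEY, col text);
+
+```
+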
+### Example
+
+Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "roots": "localhost:9042",
+      "keyspace": "pulsar_test_keyspace",
+      "columnFamily": "pulsar_test_table",
+      "keyname": "key",
+      "columnName": "col"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      roots: "localhost:9042"
+      keyspace: "pulsar_test_keyspace"
+      columnFamily: "pulsar_test_table"
+      keyname: "key"
+      columnName: "col"
+  
+  ```
+
+## Usage
+
+For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra).
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-cdc-debezium.md b/site2/website-next/versioned_docs/version-2.6.1/io-cdc-debezium.md
new file mode 100644
index 0000000..fa2efe9
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-cdc-debezium.md
@@ -0,0 +1,543 @@
+---
+id: io-cdc-debezium
+title: Debezium source connector
+sidebar_label: "Debezium source connector"
+original_id: io-cdc-debezium
+---
+
+The Debezium source connector pulls messages from MySQL, PostgreSQL, or MongoDB
+and persists the messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of the Debezium source connector has the following properties.
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `task.class` | true | null | A source task class that is implemented in Debezium. |
+| `database.hostname` | true | null | The address of a database server. |
+| `database.port` | true | null | The port number of a database server.|
+| `database.user` | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | null | The password for a database user that has the required privileges. |
+| `database.server.id` | true | null | The connector's identifier. It must be unique within a database cluster and is similar to the database's server-id configuration property. |
+| `database.server.name` | true | null | The logical name of a database server/cluster. It forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used. |
+| `database.whitelist` | false | null | A list of all databases hosted by this server that are monitored by the connector.<br /><br /> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | true | null | The converter provided by Kafka Connect to convert the record key. |
+| `value.converter` | true | null | The converter provided by Kafka Connect to convert the record value. |
+| `database.history` | true | null | The name of the database history class. |
+| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements. <br /><br />**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster. |
+| `offset.storage.topic` | true | null | The topic that records the last committed offsets that the connector successfully completes. |
+| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list can contain a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
+| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
+| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
+
+
+
+## Example of MySQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration 
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "3306",
+      "database.user": "debezium",
+      "database.password": "dbz",
+      "database.server.id": "184054",
+      "database.server.name": "dbserver1",
+      "database.whitelist": "inventory",
+      "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
+      "database.history.pulsar.topic": "history-topic",
+      "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "offset.storage.topic": "offset-topic"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mysql-source"
+  topicName: "debezium-mysql-topic"
+  archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mysql, docker image: debezium/example-mysql:0.8
+      database.hostname: "localhost"
+      database.port: "3306"
+      database.user: "debezium"
+      database.password: "dbz"
+      database.server.id: "184054"
+      database.server.name: "dbserver1"
+      database.whitelist: "inventory"
+      database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
+      database.history.pulsar.topic: "history-topic"
+      database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
+      key.converter: "org.apache.kafka.connect.json.JsonConverter"
+      value.converter: "org.apache.kafka.connect.json.JsonConverter"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## OFFSET_STORAGE_TOPIC_CONFIG
+      offset.storage.topic: "offset-topic"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.
+
+1. Start a MySQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysql \
+   -p 3306:3306 \
+   -e MYSQL_ROOT_PASSWORD=debezium \
+   -e MYSQL_USER=mysqluser \
+   -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+    * Use the **JSON** configuration file as shown previously. 
+   
+       Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
+       --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","va [...]
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --source-config-file debezium-mysql-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
+
+   ```bash
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MySQL client in docker.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysqlterm \
+   --link mysql \
+   mysql:5.7 sh \
+   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+   
+   ```
+
+6. After the MySQL client starts, use the following commands to change the data of the table _products_.
+
+   ```
+   
+   mysql> use inventory;
+   mysql> show tables;
+   mysql> SELECT * FROM  products;
+   mysql> UPDATE products SET name='1111111111' WHERE id=101;
+   mysql> UPDATE products SET name='1111111111' WHERE id=107;
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you can see that the data changes have been published to the _sub-products_ topic.
+
+## Example of PostgreSQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "5432",
+      "database.user": "postgres",
+      "database.password": "postgres",
+      "database.dbname": "postgres",
+      "database.server.name": "dbserver1",
+      "schema.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-postgres-source"
+  topicName: "debezium-postgres-topic"
+  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for pg, docker image: debezium/example-postgres:0.8
+      database.hostname: "localhost"
+      database.port: "5432"
+      database.user: "postgres"
+      database.password: "postgres"
+      database.dbname: "postgres"
+      database.server.name: "dbserver1"
+      schema.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
+
+
+1. Start a PostgreSQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-postgres:0.8
+   $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432  debezium/example-postgres:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
+       --name debezium-postgres-source \
+       --destination-topic-name debezium-postgres-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-postgres-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```bash
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a PostgreSQL client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-postgresql /bin/bash
+   
+   ```
+
+6. After the PostgreSQL client starts, use the following commands to change the data of the table _products_.
+
+   ```
+   
+   psql -U postgres postgres
+   postgres=# \c postgres;
+   You are now connected to database "postgres" as user "postgres".
+   postgres=# SET search_path TO inventory;
+   SET
+   postgres=# select * from products;
+    id  |        name        |                       description                       | weight
+   -----+--------------------+---------------------------------------------------------+--------
+    102 | car battery        | 12V car battery                                         |    8.1
+    103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 |    0.8
+    104 | hammer             | 12oz carpenter's hammer                                 |   0.75
+    105 | hammer             | 14oz carpenter's hammer                                 |  0.875
+    106 | hammer             | 16oz carpenter's hammer                                 |      1
+    107 | rocks              | box of assorted rocks                                   |    5.3
+    108 | jacket             | water resistent black wind breaker                      |    0.1
+    109 | spare tire         | 24 inch spare tire                                      |   22.2
+    101 | 1111111111         | Small 2-wheel scooter                                   |   3.14
+   (9 rows)
+   
+   postgres=# UPDATE products SET name='1111111111' WHERE id=107;
+   UPDATE 1
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you can receive messages like the following.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products. [...]
+   
+   ```
+
+## Example of MongoDB
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+* JSON 
+
+  ```json
+  
+  {
+      "mongodb.hosts": "rs0/mongodb:27017",
+      "mongodb.name": "dbserver1",
+      "mongodb.user": "debezium",
+      "mongodb.password": "dbz",
+      "mongodb.task.id": "1",
+      "database.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mongodb-source"
+  topicName: "debezium-mongodb-topic"
+  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mongodb, docker image: debezium/example-mongodb:0.10
+      mongodb.hosts: "rs0/mongodb:27017"
+      mongodb.name: "dbserver1"
+      mongodb.user: "debezium"
+      mongodb.password: "dbz"
+      mongodb.task.id: "1"
+      database.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.
+
+
+1. Start a MongoDB server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-mongodb:0.10
+   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017  debezium/example-mongodb:0.10
+   
+   ```
+
+    Use the following commands to initialize the data.
+
+    ``` bash
+    
+    ./usr/local/bin/init-inventory.sh
+    
+    ```
+
+    If the local host cannot access the container network, you can update the ```/etc/hosts``` file and add a rule such as ```127.0.0.1 6f114527a95f```, where ```6f114527a95f``` is the container ID. You can get the container ID by running ```docker ps -a```.
+
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
+       --name debezium-mongodb-source \
+       --destination-topic-name debezium-mongodb-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-mongodb-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```bash
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MongoDB client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mongodb /bin/bash
+   
+   ```
+
+6. Start a MongoDB shell and change the data of the collection _products_.
+
+   ```bash
+   
+   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
+   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you can receive messages like the following.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type" [...]
+   
+   ```
+
+## FAQ
+ 
+### The Debezium PostgreSQL connector hangs when creating a snapshot
+
+```
+
+#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
+    java.lang.Thread.State: WAITING (parking)
+     at sun.misc.Unsafe.park(Native Method)
+     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
+     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
+     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
+     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
+     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
+     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
+     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
+     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
+     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
+     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
+     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
+     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
+     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
+     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
+     at java.lang.Thread.run(Thread.java:748)
+
+```
+
+If you encounter the above problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
+
+```
+
+max.queue.size=
+
+```
+
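+For example, in the YAML configuration file the option goes under `configs`. This is a sketch only; the appropriate queue size depends on your workload and is deliberately left as a placeholder here.
+
+```yaml
+
+configs:
+    ## ... other connector configs ...
+    max.queue.size: "<queue-size>"
+
+```
+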
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-debezium-source.md b/site2/website-next/versioned_docs/version-2.6.1/io-debezium-source.md
new file mode 100644
index 0000000..808051b
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-debezium-source.md
@@ -0,0 +1,564 @@
+---
+id: io-debezium-source
+title: Debezium source connector
+sidebar_label: "Debezium source connector"
+original_id: io-debezium-source
+---
+
+The Debezium source connector pulls messages from MySQL, PostgreSQL, or MongoDB
+and persists the messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of the Debezium source connector has the following properties.
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `task.class` | true | null | A source task class that is implemented in Debezium. |
+| `database.hostname` | true | null | The address of a database server. |
+| `database.port` | true | null | The port number of a database server.|
+| `database.user` | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | null | The password for a database user that has the required privileges. |
+| `database.server.id` | true | null | The connector's identifier. It must be unique within a database cluster and is similar to the database's server-id configuration property. |
+| `database.server.name` | true | null | The logical name of a database server/cluster. It forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used. |
+| `database.whitelist` | false | null | A list of all databases hosted by this server that are monitored by the connector.<br /><br /> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | true | null | The converter provided by Kafka Connect to convert the record key. |
+| `value.converter` | true | null | The converter provided by Kafka Connect to convert the record value. |
+| `database.history` | true | null | The name of the database history class. |
+| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements. <br /><br />**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | true | null | Pulsar cluster service URL. |
+| `offset.storage.topic` | true | null | The topic that records the last committed offsets that the connector successfully completes. |
+| `json-with-envelope` | false | false | Whether the consumed message consists of both schema and payload, or the payload only. See the converter options below. |
+
+### Converter Options
+
+1. org.apache.kafka.connect.json.JsonConverter
+
+   The `json-with-envelope` config is valid only for the JsonConverter. Its default value is false; in that case, the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
+
+   If `json-with-envelope` is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both schema and payload.
+
+2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
+
+   If you select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
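+
+As an illustration, the following Java snippet is a minimal sketch of a consumer that reads these key/value records with the `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)` schema described above. The topic and subscription names are taken from the examples later on this page; adjust them to your setup.
+
+```java
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.Schema;
+import org.apache.pulsar.client.api.schema.GenericRecord;
+import org.apache.pulsar.common.schema.KeyValue;
+import org.apache.pulsar.common.schema.KeyValueEncodingType;
+
+public class DebeziumKeyValueConsumer {
+    public static void main(String[] args) throws Exception {
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650")
+                .build();
+
+        // Key and value schemas are auto-detected from the topic.
+        Consumer<KeyValue<GenericRecord, GenericRecord>> consumer = client
+                .newConsumer(Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(),
+                        KeyValueEncodingType.SEPARATED))
+                .topic("public/default/dbserver1.inventory.products")
+                .subscriptionName("sub-products")
+                .subscribe();
+
+        while (true) {
+            Message<KeyValue<GenericRecord, GenericRecord>> msg = consumer.receive();
+            KeyValue<GenericRecord, GenericRecord> record = msg.getValue();
+            System.out.println("key: " + record.getKey() + ", value: " + record.getValue());
+            consumer.acknowledge(msg);
+        }
+    }
+}
+
+```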
+
+### MongoDB Configuration
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list can contain a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
+| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
+| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
+
+
+
+## Example of MySQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration 
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "3306",
+      "database.user": "debezium",
+      "database.password": "dbz",
+      "database.server.id": "184054",
+      "database.server.name": "dbserver1",
+      "database.whitelist": "inventory",
+      "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
+      "database.history.pulsar.topic": "history-topic",
+      "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "offset.storage.topic": "offset-topic"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mysql-source"
+  topicName: "debezium-mysql-topic"
+  archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mysql, docker image: debezium/example-mysql:0.8
+      database.hostname: "localhost"
+      database.port: "3306"
+      database.user: "debezium"
+      database.password: "dbz"
+      database.server.id: "184054"
+      database.server.name: "dbserver1"
+      database.whitelist: "inventory"
+      database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
+      database.history.pulsar.topic: "history-topic"
+      database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
+      key.converter: "org.apache.kafka.connect.json.JsonConverter"
+      value.converter: "org.apache.kafka.connect.json.JsonConverter"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## OFFSET_STORAGE_TOPIC_CONFIG
+      offset.storage.topic: "offset-topic"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.
+
+1. Start a MySQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysql \
+   -p 3306:3306 \
+   -e MYSQL_ROOT_PASSWORD=debezium \
+   -e MYSQL_USER=mysqluser \
+   -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+    * Use the **JSON** configuration file as shown previously. 
+   
+       Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
+       --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","va [...]
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --source-config-file debezium-mysql-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
+
+   ```bash
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MySQL client in docker.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysqlterm \
+   --link mysql \
+   mysql:5.7 sh \
+   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+   
+   ```
+
+6. After the MySQL client starts, use the following commands to change the data of the table _products_.
+
+   ```
+   
+   mysql> use inventory;
+   mysql> show tables;
+   mysql> SELECT * FROM  products;
+   mysql> UPDATE products SET name='1111111111' WHERE id=101;
+   mysql> UPDATE products SET name='1111111111' WHERE id=107;
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you can see that the data changes have been published to the _sub-products_ topic.
+
+## Example of PostgreSQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "5432",
+      "database.user": "postgres",
+      "database.password": "postgres",
+      "database.dbname": "postgres",
+      "database.server.name": "dbserver1",
+      "schema.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-postgres-source"
+  topicName: "debezium-postgres-topic"
+  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for pg, docker image: debezium/example-postgres:0.8
+      database.hostname: "localhost"
+      database.port: "5432"
+      database.user: "postgres"
+      database.password: "postgres"
+      database.dbname: "postgres"
+      database.server.name: "dbserver1"
+      schema.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
+
+
+1. Start a PostgreSQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-postgres:0.8
+   $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432  debezium/example-postgres:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
+       --name debezium-postgres-source \
+       --destination-topic-name debezium-postgres-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-postgres-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```bash
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a PostgreSQL client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-postgresql /bin/bash
+   
+   ```
+
+6. After the PostgreSQL client starts, use the following commands to change the data of the table _products_.
+
+   ```
+   
+   psql -U postgres postgres
+   postgres=# \c postgres;
+   You are now connected to database "postgres" as user "postgres".
+   postgres=# SET search_path TO inventory;
+   SET
+   postgres=# select * from products;
+    id  |        name        |                       description                       | weight
+   -----+--------------------+---------------------------------------------------------+--------
+    102 | car battery        | 12V car battery                                         |    8.1
+    103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 |    0.8
+    104 | hammer             | 12oz carpenter's hammer                                 |   0.75
+    105 | hammer             | 14oz carpenter's hammer                                 |  0.875
+    106 | hammer             | 16oz carpenter's hammer                                 |      1
+    107 | rocks              | box of assorted rocks                                   |    5.3
+    108 | jacket             | water resistent black wind breaker                      |    0.1
+    109 | spare tire         | 24 inch spare tire                                      |   22.2
+    101 | 1111111111         | Small 2-wheel scooter                                   |   3.14
+   (9 rows)
+   
+   postgres=# UPDATE products SET name='1111111111' WHERE id=107;
+   UPDATE 1
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you can receive messages like the following.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products. [...]
+   
+   ```
+
+## Example of MongoDB
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+* JSON 
+
+  ```json
+  
+  {
+      "mongodb.hosts": "rs0/mongodb:27017",
+      "mongodb.name": "dbserver1",
+      "mongodb.user": "debezium",
+      "mongodb.password": "dbz",
+      "mongodb.task.id": "1",
+      "database.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mongodb-source"
+  topicName: "debezium-mongodb-topic"
+  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mongodb, docker image: debezium/example-mongodb:0.10
+      mongodb.hosts: "rs0/mongodb:27017"
+      mongodb.name: "dbserver1"
+      mongodb.user: "debezium"
+      mongodb.password: "dbz"
+      mongodb.task.id: "1"
+      database.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.
+
+
+1. Start a MongoDB server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-mongodb:0.10
+   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017  debezium/example-mongodb:0.10
+   
+   ```
+
+    Use the following commands to initialize the data.
+
+    ``` bash
+    
+    ./usr/local/bin/init-inventory.sh
+    
+    ```
+
+    If the local host cannot access the container network, you can update the ```/etc/hosts``` file and add a rule such as ```127.0.0.1 6f114527a95f```, where ```6f114527a95f``` is the container ID. You can get the container ID by running ```docker ps -a```.
+
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
+       --name debezium-mongodb-source \
+       --destination-topic-name debezium-mongodb-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-mongodb-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```bash
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MongoDB client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mongodb /bin/bash
+   
+   ```
+
+6. Start a MongoDB shell and change the data of the collection _products_.
+
+   ```bash
+   
+   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
+   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you can receive messages like the following.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type" [...]
+   
+   ```
+
+## FAQ
+ 
+### The Debezium PostgreSQL connector hangs when creating a snapshot
+
+```
+
+#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
+    java.lang.Thread.State: WAITING (parking)
+     at sun.misc.Unsafe.park(Native Method)
+     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
+     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
+     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
+     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
+     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
+     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
+     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
+     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
+     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
+     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
+     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
+     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
+     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
+     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
+     at java.lang.Thread.run(Thread.java:748)
+
+```
+
+If you encounter the above problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
+
+```
+
+max.queue.size=
+
+```
+
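+For example, in the YAML configuration file the option goes under `configs`. This is a sketch only; the appropriate queue size depends on your workload and is deliberately left as a placeholder here.
+
+```yaml
+
+configs:
+    ## ... other connector configs ...
+    max.queue.size: "<queue-size>"
+
+```
+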
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-dynamodb-source.md b/site2/website-next/versioned_docs/version-2.6.1/io-dynamodb-source.md
new file mode 100644
index 0000000..ce58578
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-dynamodb-source.md
@@ -0,0 +1,80 @@
+---
+id: io-dynamodb-source
+title: AWS DynamoDB source connector
+sidebar_label: "AWS DynamoDB source connector"
+original_id: io-dynamodb-source
+---
+
+The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
+
+This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
+which uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
+consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.
+
+
+## Configuration
+
+The configuration of the DynamoDB source connector has the following properties.
+
+### Property
+
+| Name | Type | Required | Default | Description |
+|------|----------|----------|---------|-------------|
+`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br /><br />Below are the available options:<br /><br /><li>`AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br /><br /></li><li>`LATEST`: start after the most recent data record.<br /><br /></li><li>`TRIM_HORIZON`: start from the oldest available data record.</li>
+`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
+`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. Must be unique, as it is used to define the table name for the DynamoDB table used for state tracking. <br /><br />By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
+`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds.
+`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds.
+`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
+`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. <br /><br />Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
+`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br /><br />**Example**<br /> us-west-1, us-west-2
+`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream ARN.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br /><br />`awsCredentialProviderPlugin` has the following built-in plugins:<br /><br /><li>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br /> this plugin uses the default AWS provider chain.<br />For more information, see [using the [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Example
+
+Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "awsEndpoint": "https://some.endpoint.aws",
+      "awsRegion": "us-east-1",
+      "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
+      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+      "applicationName": "My test application",
+      "checkpointInterval": "30000",
+      "backoffTime": "4000",
+      "numRetries": "3",
+      "receiveQueueSize": 2000,
+      "initialPositionInStream": "TRIM_HORIZON",
+      "startAtTime": "2019-03-05T19:28:58.000Z"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      awsEndpoint: "https://some.endpoint.aws"
+      awsRegion: "us-east-1"
+      awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
+      awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+      applicationName: "My test application"
+      checkpointInterval: 30000
+      backoffTime: 4000
+      numRetries: 3
+      receiveQueueSize: 2000
+      initialPositionInStream: "TRIM_HORIZON"
+      startAtTime: "2019-03-05T19:28:58.000Z"
+  
+  ```
+
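+For a quick local test, the connector can be started with `pulsar-admin` in local run mode. This is a minimal sketch: the NAR path, the config file name `dynamodb-source.yaml`, and the topic name are assumptions.
+
+```bash
+
+# Run the DynamoDB source locally; paths and names below are illustrative.
+$ bin/pulsar-admin sources localrun \
+    --archive connectors/pulsar-io-dynamodb-@pulsar:version@.nar \
+    --tenant public \
+    --namespace default \
+    --name dynamodb-test-source \
+    --destination-topic-name dynamodb-test \
+    --source-config-file dynamodb-source.yaml
+
+```
+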
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-elasticsearch-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-elasticsearch-sink.md
new file mode 100644
index 0000000..4acedd3
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-elasticsearch-sink.md
@@ -0,0 +1,173 @@
+---
+id: io-elasticsearch-sink
+title: ElasticSearch sink connector
+sidebar_label: "ElasticSearch sink connector"
+original_id: io-elasticsearch-sink
+---
+
+The ElasticSearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.
+
+## Configuration
+
+The configuration of the ElasticSearch sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
+| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
+| `typeName` | String | false | "_doc" | The type name to which the connector writes messages. <br /><br /> The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left at the default otherwise. |
+| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
+| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
+| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster. <br /><br />If `username` is set, then `password` should also be provided. |
+| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster. <br /><br />If `username` is set, then `password` should also be provided.  |
+
+## Example
+
+Before using the ElasticSearch sink connector, you need to create a configuration file through one of the following methods.
+
+### Configuration
+
+#### For Elasticsearch After 6.2
+
+* JSON 
+
+  ```json
+  
+  {
+      "elasticSearchUrl": "http://localhost:9200",
+      "indexName": "my_index",
+      "username": "scooby",
+      "password": "doobie"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      elasticSearchUrl: "http://localhost:9200"
+      indexName: "my_index"
+      username: "scooby"
+      password: "doobie"
+  
+  ```
+
+#### For Elasticsearch Before 6.2
+
+* JSON 
+
+  ```json
+  
+  {
+      "elasticSearchUrl": "http://localhost:9200",
+      "indexName": "my_index",
+      "typeName": "doc",
+      "username": "scooby",
+      "password": "doobie"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      elasticSearchUrl: "http://localhost:9200"
+      indexName: "my_index"
+      typeName: "doc"
+      username: "scooby"
+      password: "doobie"
+  
+  ```
+
+### Usage
+
+1. Start a single-node Elasticsearch cluster.
+
+   ```bash
+   
+   $ docker run -p 9200:9200 -p 9300:9300 \
+       -e "discovery.type=single-node" \
+       docker.elastic.co/elasticsearch/elasticsearch:7.5.1
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+   Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`.
+
+3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods.
+   * Use the **JSON** configuration as shown previously. 
+
+       ```bash
+       
+       $ bin/pulsar-admin sinks localrun \
+           --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
+           --tenant public \
+           --namespace default \
+           --name elasticsearch-test-sink \
+           --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \
+           --inputs elasticsearch_test
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin sinks localrun \
+           --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
+           --tenant public \
+           --namespace default \
+           --name elasticsearch-test-sink \
+           --sink-config-file elasticsearch-sink.yml \
+           --inputs elasticsearch_test
+       
+       ```
+
+4. Publish records to the topic.
+
+   ```bash
+   
+   $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}"
+   
+   ```
+
+5. Check documents in Elasticsearch.
+   
+   * Refresh the index.
+
+       ```bash
+       
+       $ curl -s http://localhost:9200/my_index/_refresh
+       
+       ```
+
+   * Search documents.
+
+       ```bash
+       
+       $ curl -s http://localhost:9200/my_index/_search
+       
+       ```
+
+       You can see that the record published earlier has been successfully written into Elasticsearch.
+
+       ```json
+       
+       {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}}
+       
+       ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-file-source.md b/site2/website-next/versioned_docs/version-2.6.1/io-file-source.md
new file mode 100644
index 0000000..e9d710c
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-file-source.md
@@ -0,0 +1,160 @@
+---
+id: io-file-source
+title: File source connector
+sidebar_label: "File source connector"
+original_id: io-file-source
+---
+
+The File source connector pulls messages from files in directories and persists the messages to Pulsar topics.
+
+## Configuration
+
+The configuration of the File source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `inputDirectory` | String|true  | No default value|The input directory from which files are pulled. |
+| `recurse` | Boolean|false | true | Whether to pull files from subdirectories or not.|
+| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. |
+| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. |
+| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. |
+| `minimumFileAge` | Integer|false | 0 | The minimum age that a file can be processed. <br /><br />Any file younger than `minimumFileAge` (according to the last modification date) is ignored. |
+| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age that a file can be processed. <br /><br />Any file older than `maximumFileAge` (according to last modification date) is ignored. |
+| `minimumSize` |Integer| false |1 | The minimum size (in bytes) that a file can be processed. |
+| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) that a file can be processed. |
+| `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. |
+| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
+| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.<br /><br /> This allows you to process a larger number of files concurrently. <br /><br />However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. |
+
+### Example
+
+Before using the File source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "inputDirectory": "/Users/david",
+      "recurse": true,
+      "keepFile": true,
+      "fileFilter": "[^\\.].*",
+      "pathFilter": "*",
+      "minimumFileAge": 0,
+      "maximumFileAge": 9999999999,
+      "minimumSize": 1,
+      "maximumSize": 5000000,
+      "ignoreHiddenFiles": true,
+      "pollingInterval": 5000,
+      "numWorkers": 1
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      inputDirectory: "/Users/david"
+      recurse: true
+      keepFile: true
+      fileFilter: "[^\\.].*"
+      pathFilter: "*"
+      minimumFileAge: 0
+      maximumFileAge: 9999999999
+      minimumSize: 1
+      maximumSize: 5000000
+      ignoreHiddenFiles: true
+      pollingInterval: 5000
+      numWorkers: 1
+  
+  ```
+
+## Usage
+
+Here is an example of using the File source connector.
+
+1. Pull a Pulsar image.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:{version}
+   
+   ```
+
+2. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
+   
+   ```
+
+3. Create a configuration file _file-connector.yaml_.
+
+   ```yaml
+   
+   configs:
+       inputDirectory: "/opt"
+   
+   ```
+
+4. Copy the configuration file _file-connector.yaml_ to the container.
+
+   ```bash
+   
+   $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/
+   
+   ```
+
+5. Download the File source connector.
+
+   ```bash
+   
+   $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar
+   
+   ```
+
+6. Start the File source connector.
+
+   ```bash
+   
+   $ docker exec -it pulsar-standalone /bin/bash
+
+   $ ./bin/pulsar-admin sources localrun \
+   --archive /pulsar/pulsar-io-file-{version}.nar \
+   --name file-test \
+   --destination-topic-name  pulsar-file-test \
+   --source-config-file /pulsar/file-connector.yaml
+   
+   ```
+
+7. Start a consumer.
+
+   ```bash
+   
+   ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test
+   
+   ```
+
+8. Write the message to the file _test.txt_.
+
+   ```bash
+   
+   echo "hello world!" > /opt/test.txt
+   
+   ```
+
+   The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   ----- got message -----
+   hello world!
+   
+   ```
+
+   
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-flume-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-flume-sink.md
new file mode 100644
index 0000000..b2ace53
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-flume-sink.md
@@ -0,0 +1,56 @@
+---
+id: io-flume-sink
+title: Flume sink connector
+sidebar_label: "Flume sink connector"
+original_id: io-flume-sink
+---
+
+The Flume sink connector pulls messages from Pulsar topics to logs.
+
+## Configuration
+
+The configuration of the Flume sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`name`|String|true|"" (empty string)|The name of the agent.
+`confFile`|String|true|"" (empty string)|The configuration file.
+`noReloadConf`|Boolean|false|false|Whether to reload the configuration file when it changes.
+`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
+`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration.
+
+### Example
+
+Before using the Flume sink connector, you need to create a configuration file through one of the following methods.
+
+> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf).
+
+* JSON 
+
+  ```json
+  
+  {
+      "name": "a1",
+      "confFile": "sink.conf",
+      "noReloadConf": "false",
+      "zkConnString": "",
+      "zkBasePath": ""
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      name: a1
+      confFile: sink.conf
+      noReloadConf: false
+      zkConnString: ""
+      zkBasePath: ""
+  
+  ```
+
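+Once the configuration file is ready, the sink can be tried out in local run mode. This is a minimal sketch: the NAR path, the config file name `flume-sink.yaml`, and the input topic are assumptions.
+
+```bash
+
+# Local run of the Flume sink; paths and names below are illustrative.
+$ bin/pulsar-admin sinks localrun \
+    --archive connectors/pulsar-io-flume-@pulsar:version@.nar \
+    --tenant public \
+    --namespace default \
+    --name flume-test-sink \
+    --sink-config-file flume-sink.yaml \
+    --inputs flume-test-topic
+
+```
+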
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-flume-source.md b/site2/website-next/versioned_docs/version-2.6.1/io-flume-source.md
new file mode 100644
index 0000000..b7fd7ed
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-flume-source.md
@@ -0,0 +1,56 @@
+---
+id: io-flume-source
+title: Flume source connector
+sidebar_label: "Flume source connector"
+original_id: io-flume-source
+---
+
+The Flume source connector pulls messages from logs to Pulsar topics.
+
+## Configuration
+
+The configuration of the Flume source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`name`|String|true|"" (empty string)|The name of the agent.
+`confFile`|String|true|"" (empty string)|The configuration file.
+`noReloadConf`|Boolean|false|false|Whether to reload the configuration file when it changes.
+`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
+`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration.
+
+### Example
+
+Before using the Flume source connector, you need to create a configuration file through one of the following methods.
+
+> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf).
+
+* JSON 
+
+  ```json
+  
+  {
+      "name": "a1",
+      "confFile": "source.conf",
+      "noReloadConf": "false",
+      "zkConnString": "",
+      "zkBasePath": ""
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      name: a1
+      confFile: source.conf
+      noReloadConf: false
+      zkConnString: ""
+      zkBasePath: ""
+  
+  ```
+
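+The source side can be exercised the same way. A minimal sketch, where the NAR path, the config file name `flume-source.yaml`, and the topic name are all assumptions:
+
+```bash
+
+# Local run of the Flume source; paths and names below are illustrative.
+$ bin/pulsar-admin sources localrun \
+    --archive connectors/pulsar-io-flume-@pulsar:version@.nar \
+    --tenant public \
+    --namespace default \
+    --name flume-test-source \
+    --destination-topic-name flume-test-topic \
+    --source-config-file flume-source.yaml
+
+```
+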
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-hbase-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-hbase-sink.md
new file mode 100644
index 0000000..1737b00
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-hbase-sink.md
@@ -0,0 +1,67 @@
+---
+id: io-hbase-sink
+title: HBase sink connector
+sidebar_label: "HBase sink connector"
+original_id: io-hbase-sink
+---
+
+The HBase sink connector pulls the messages from Pulsar topics 
+and persists the messages to HBase tables.
+
+## Configuration
+
+The configuration of the HBase sink connector has the following properties.
+
+### Property
+
+| Name | Type|Default | Required | Description |
+|------|---------|----------|-------------|---
+| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. |
+| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. |
+| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. |
+| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. |
+| `tableName` | String | None | true | The HBase table name, in the form `namespace:tableName`. |
+| `rowKeyName` | String|None | true | HBase table rowkey name. |
+| `familyName` | String|None | true | HBase table column family name. |
+| `qualifierNames` |String| None | true | HBase table column qualifier names. |
+| `batchTimeMs` | Long|1000L| false | HBase table operation timeout in milliseconds. |
+| `batchSize` | int|200| false | Batch size of updates made to the HBase table. |
+
+### Example
+
+Before using the HBase sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "hbaseConfigResources": "hbase-site.xml",
+      "zookeeperQuorum": "localhost",
+      "zookeeperClientPort": "2181",
+      "zookeeperZnodeParent": "/hbase",
+      "tableName": "pulsar_hbase",
+      "rowKeyName": "rowKey",
+      "familyName": "info",
+      "qualifierNames": [ 'name', 'address', 'age']
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      hbaseConfigResources: "hbase-site.xml"
+      zookeeperQuorum: "localhost"
+      zookeeperClientPort: "2181"
+      zookeeperZnodeParent: "/hbase"
+      tableName: "pulsar_hbase"
+      rowKeyName: "rowKey"
+      familyName: "info"
+      qualifierNames: [ 'name', 'address', 'age']
+  
+  ```
+
+  
\ No newline at end of file
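+
+To run the sink against an HBase cluster, a minimal local run sketch follows; the NAR path, the config file name `hbase-sink.yaml`, and the input topic are assumptions.
+
+```bash
+
+# Local run of the HBase sink; paths and names below are illustrative.
+$ bin/pulsar-admin sinks localrun \
+    --archive connectors/pulsar-io-hbase-@pulsar:version@.nar \
+    --tenant public \
+    --namespace default \
+    --name hbase-test-sink \
+    --sink-config-file hbase-sink.yaml \
+    --inputs hbase-test-topic
+
+```
+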
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-hdfs2-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-hdfs2-sink.md
new file mode 100644
index 0000000..411b972
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-hdfs2-sink.md
@@ -0,0 +1,61 @@
+---
+id: io-hdfs2-sink
+title: HDFS2 sink connector
+sidebar_label: "HDFS2 sink connector"
+original_id: io-hdfs2-sink
+---
+
+The HDFS2 sink connector pulls the messages from Pulsar topics 
+and persists the messages to HDFS files.
+
+## Configuration
+
+The configuration of the HDFS2 sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.<br /><br />**Example**<br />'core-site.xml'<br />'hdfs-site.xml' |
+| `directory` | String | true | None|The HDFS directory from which files are read or to which files are written. |
+| `encoding` | String |false |None |The character encoding for the files.<br /><br />**Example**<br />UTF-8<br />ASCII |
+| `compression` | Compression |false |None |The compression codec used to compress or decompress the files on HDFS. <br /><br />Below are the available options:<br /><li>BZIP2<br /></li><li>DEFLATE<br /></li><li>GZIP<br /></li><li>LZ4<br /></li><li>SNAPPY</li>|
+| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
+| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
+| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.<br /><br />**Example**<br /> A value of topicA results in files named topicA-. |
+| `fileExtension` | String| true | None | The extension added to the files written to HDFS.<br /><br />**Example**<br />'.txt'<br /> '.seq' |
+| `separator` | char|false |None |The character used to separate records in a text file. <br /><br />If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
+| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
+| `maxPendingRecords` |int| false|Integer.MAX_VALUE |  The maximum number of records held in memory before acking. <br /><br />Setting this property to 1 ensures that each record is sent to disk before it is acked.<br /><br />Setting this property to a higher value allows buffering records before flushing them to disk. 
+
+### Example
+
+Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "hdfsConfigResources": "core-site.xml",
+      "directory": "/foo/bar",
+      "filenamePrefix": "prefix",
+      "fileExtension": ".log",
+      "compression": "SNAPPY"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      hdfsConfigResources: "core-site.xml"
+      directory: "/foo/bar"
+      filenamePrefix: "prefix"
+      fileExtension: ".log"
+      compression: "SNAPPY"
+  
+  ```
+
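+With the configuration saved, the sink can be started in local run mode. This is a minimal sketch: the NAR path, the config file name `hdfs2-sink.yaml`, and the input topic are assumptions.
+
+```bash
+
+# Local run of the HDFS2 sink; paths and names below are illustrative.
+$ bin/pulsar-admin sinks localrun \
+    --archive connectors/pulsar-io-hdfs2-@pulsar:version@.nar \
+    --tenant public \
+    --namespace default \
+    --name hdfs2-test-sink \
+    --sink-config-file hdfs2-sink.yaml \
+    --inputs hdfs2-test-topic
+
+```
+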
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-hdfs3-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-hdfs3-sink.md
new file mode 100644
index 0000000..aec065a
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-hdfs3-sink.md
@@ -0,0 +1,59 @@
+---
+id: io-hdfs3-sink
+title: HDFS3 sink connector
+sidebar_label: "HDFS3 sink connector"
+original_id: io-hdfs3-sink
+---
+
+The HDFS3 sink connector pulls the messages from Pulsar topics 
+and persists the messages to HDFS files.
+
+## Configuration
+
+The configuration of the HDFS3 sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.<br /><br />**Example**<br />'core-site.xml'<br />'hdfs-site.xml' |
+| `directory` | String | true | None|The HDFS directory from which files are read or to which files are written. |
+| `encoding` | String |false |None |The character encoding for the files.<br /><br />**Example**<br />UTF-8<br />ASCII |
+| `compression` | Compression |false |None |The compression codec used to compress or decompress the files on HDFS. <br /><br />Below are the available options:<br /><li>BZIP2<br /></li><li>DEFLATE<br /></li><li>GZIP<br /></li><li>LZ4<br /></li><li>SNAPPY</li>|
+| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
+| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
+| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.<br /><br />**Example**<br /> A value of topicA results in files named topicA-. |
+| `fileExtension` | String| false | None| The extension added to the files written to HDFS.<br /><br />**Example**<br />'.txt'<br /> '.seq' |
+| `separator` | char|false |None |The character used to separate records in a text file. <br /><br />If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
+| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
+| `maxPendingRecords` |int| false|Integer.MAX_VALUE |  The maximum number of records held in memory before acking. <br /><br />Setting this property to 1 ensures that each record is sent to disk before it is acked.<br /><br />Setting this property to a higher value allows buffering records before flushing them to disk. 
+
+### Example
+
+Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "hdfsConfigResources": "core-site.xml",
+      "directory": "/foo/bar",
+      "filenamePrefix": "prefix",
+      "compression": "SNAPPY"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      hdfsConfigResources: "core-site.xml"
+      directory: "/foo/bar"
+      filenamePrefix: "prefix"
+      compression: "SNAPPY"
+  
+  ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-influxdb-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-influxdb-sink.md
new file mode 100644
index 0000000..9382f8c
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-influxdb-sink.md
@@ -0,0 +1,119 @@
+---
+id: io-influxdb-sink
+title: InfluxDB sink connector
+sidebar_label: "InfluxDB sink connector"
+original_id: io-influxdb-sink
+---
+
+The InfluxDB sink connector pulls messages from Pulsar topics 
+and persists the messages to InfluxDB.
+
+The InfluxDB sink provides separate configurations for InfluxDB v1 and v2.
+
+## Configuration
+
+The configuration of the InfluxDB sink connector has the following properties.
+
+### Property
+#### InfluxDBv2
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
+| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. |
+| `organization` | String| true|" " (empty string)  | The InfluxDB organization to write to. |
+| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. |
+| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB. <br /><br />Below are the available options:<li>ns<br /></li><li>us<br /></li><li>ms<br /></li><li>s</li>|
+| `logLevel` | String|false| NONE|The log level for InfluxDB request and response. <br /><br />Below are the available options:<li>NONE<br /></li><li>BASIC<br /></li><li>HEADERS<br /></li><li>FULL</li>|
+| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
+| `batchTimeMs` |long|false| 1000L | The InfluxDB batch operation interval in milliseconds. |
+| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
+
+#### InfluxDBv1
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
+| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
+| `password` | String| false|" " (empty string)  | The password used to authenticate to InfluxDB. |
+| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. |
+| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB. <br /><br />Below are the available options:<li>ALL<br /></li><li> ANY<br /></li><li>ONE<br /></li><li>QUORUM </li>|
+| `logLevel` | String|false| NONE|The log level for InfluxDB request and response. <br /><br />Below are the available options:<li>NONE<br /></li><li>BASIC<br /></li><li>HEADERS<br /></li><li>FULL</li>|
+| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. |
+| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
+| `batchTimeMs` |long|false| 1000L | The InfluxDB batch operation interval in milliseconds. |
+| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
+
+### Example
+Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods.
+#### InfluxDBv2
+* JSON
+
+  ```json
+  
+  {
+      "influxdbUrl": "http://localhost:9999",
+      "organization": "example-org",
+      "bucket": "example-bucket",
+      "token": "xxxx",
+      "precision": "ns",
+      "logLevel": "NONE",
+      "gzipEnable": false,
+      "batchTimeMs": 1000,
+      "batchSize": 100
+  }
+  
+  ```
+
+  
+* YAML
+
+  ```yaml
+  
+  configs:
+      influxdbUrl: "http://localhost:9999"
+      organization: "example-org"
+      bucket: "example-bucket"
+      token: "xxxx"
+      precision: "ns"
+      logLevel: "NONE"
+      gzipEnable: false
+      batchTimeMs: 1000
+      batchSize: 100
+  
+  ```
+
+  
+#### InfluxDBv1
+
+* JSON 
+
+  ```json
+  
+  {
+      "influxdbUrl": "http://localhost:8086",
+      "database": "test_db",
+      "consistencyLevel": "ONE",
+      "logLevel": "NONE",
+      "retentionPolicy": "autogen",
+      "gzipEnable": false,
+      "batchTimeMs": 1000,
+      "batchSize": 100
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      influxdbUrl: "http://localhost:8086"
+      database: "test_db"
+      consistencyLevel: "ONE"
+      logLevel: "NONE"
+      retentionPolicy: "autogen"
+      gzipEnable: false
+      batchTimeMs: 1000
+      batchSize: 100
+  
+  ```
+
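+Either configuration can then be handed to a local run for testing. A minimal sketch, where the NAR path, the config file name `influxdb-sink.yaml`, and the input topic are assumptions:
+
+```bash
+
+# Local run of the InfluxDB sink; paths and names below are illustrative.
+$ bin/pulsar-admin sinks localrun \
+    --archive connectors/pulsar-io-influxdb-@pulsar:version@.nar \
+    --tenant public \
+    --namespace default \
+    --name influxdb-test-sink \
+    --sink-config-file influxdb-sink.yaml \
+    --inputs influxdb-test-topic
+
+```
+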
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-jdbc-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-jdbc-sink.md
new file mode 100644
index 0000000..77dbb61
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-jdbc-sink.md
@@ -0,0 +1,157 @@
+---
+id: io-jdbc-sink
+title: JDBC sink connector
+sidebar_label: "JDBC sink connector"
+original_id: io-jdbc-sink
+---
+
+The JDBC sink connectors allow pulling messages from Pulsar topics 
+and persisting the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite.
+
+> Currently, INSERT, DELETE and UPDATE operations are supported.
+
+## Configuration 
+
+The configuration of all JDBC sink connectors has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.<br /><br />**Note: `userName` is case-sensitive.**|
+| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`. <br /><br />**Note: `password` is case-sensitive.**|
+| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. |
+| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. |
+| `nonKey` | String|false | " " (empty string) | A comma-separated list of the fields used in updating events.  |
+| `key` | String|false | " " (empty string) | A comma-separated list of the fields used in the `where` condition of updating and deleting events. |
+| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. |
+| `batchSize` | int|false | 200 | The batch size of updates made to the database. |
+
+### Example for ClickHouse
+
+* JSON 
+
+  ```json
+  
+  {
+      "userName": "clickhouse",
+      "password": "password",
+      "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink",
+      "tableName": "pulsar_clickhouse_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-clickhouse-sink"
+  topicName: "persistent://public/default/jdbc-clickhouse-topic"
+  sinkType: "jdbc-clickhouse"    
+  configs:
+      userName: "clickhouse"
+      password: "password"
+      jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink"
+      tableName: "pulsar_clickhouse_jdbc_sink"
+  
+  ```
+
+### Example for MariaDB
+
+* JSON 
+
+  ```json
+  
+  {
+      "userName": "mariadb",
+      "password": "password",
+      "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink",
+      "tableName": "pulsar_mariadb_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-mariadb-sink"
+  topicName: "persistent://public/default/jdbc-mariadb-topic"
+  sinkType: "jdbc-mariadb"    
+  configs:
+      userName: "mariadb"
+      password: "password"
+      jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink"
+      tableName: "pulsar_mariadb_jdbc_sink"
+  
+  ```
+
+### Example for PostgreSQL
+
+Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "userName": "postgres",
+      "password": "password",
+      "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
+      "tableName": "pulsar_postgres_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-postgres-sink"
+  topicName: "persistent://public/default/jdbc-postgres-topic"
+  sinkType: "jdbc-postgres"    
+  configs:
+      userName: "postgres"
+      password: "password"
+      jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink"
+      tableName: "pulsar_postgres_jdbc_sink"
+  
+  ```
+
+For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql).
+
+### Example for SQLite
+
+* JSON 
+
+  ```json
+  
+  {
+      "jdbcUrl": "jdbc:sqlite:db.sqlite",
+      "tableName": "pulsar_sqlite_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-sqlite-sink"
+  topicName: "persistent://public/default/jdbc-sqlite-topic"
+  sinkType: "jdbc-sqlite"    
+  configs:
+      jdbcUrl: "jdbc:sqlite:db.sqlite"
+      tableName: "pulsar_sqlite_jdbc_sink"
+  
+  ```
+
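+Note that the JDBC sink writes to an existing table whose columns correspond to the fields of the topic schema, so the table must be created first. A minimal sketch for the SQLite example above; the column definitions are hypothetical and must match your actual schema:
+
+```bash
+
+# Hypothetical columns for illustration; align them with your topic schema.
+$ sqlite3 db.sqlite 'CREATE TABLE pulsar_sqlite_jdbc_sink (id INTEGER PRIMARY KEY, name TEXT);'
+
+```
+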
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-kafka-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-kafka-sink.md
new file mode 100644
index 0000000..09dad4c
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-kafka-sink.md
@@ -0,0 +1,72 @@
+---
+id: io-kafka-sink
+title: Kafka sink connector
+sidebar_label: "Kafka sink connector"
+original_id: io-kafka-sink
+---
+
+The Kafka sink connector pulls messages from Pulsar topics and persists the messages
+to Kafka topics.
+
+This guide explains how to configure and use the Kafka sink connector.
+
+## Configuration
+
+The configuration of the Kafka sink connector has the following parameters.
+
+### Property
+
+| Name | Type| Required | Default | Description 
+|------|----------|---------|-------------|-------------|
+|  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes. <br />This controls the durability of the sent records.
+|`batchSize`|long|false|16384L|The size of the batches in which a Kafka producer groups records before sending them to brokers.
+|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
+|`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
+| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
+| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.<br /><br />The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java).
+|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers. <br /><br />**Note:  other properties specified in the connector configuration file take precedence over this configuration**.
+
+
+### Example
+
+Before using the Kafka sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "bootstrapServers": "localhost:6667",
+      "topic": "test",
+      "acks": "1",
+      "batchSize": "16384",
+      "maxRequestSize": "1048576",
+      "producerConfigProperties":
+       {
+          "client.id": "test-pulsar-producer",
+          "security.protocol": "SASL_PLAINTEXT",
+          "sasl.mechanism": "GSSAPI",
+          "sasl.kerberos.service.name": "kafka",
+          "acks": "all" 
+       }
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      bootstrapServers: "localhost:6667"
+      topic: "test"
+      acks: "1"
+      batchSize: "16384"
+      maxRequestSize: "1048576"
+      producerConfigProperties:
+          client.id: "test-pulsar-producer"
+          security.protocol: "SASL_PLAINTEXT"
+          sasl.mechanism: "GSSAPI"
+          sasl.kerberos.service.name: "kafka"
+          acks: "all"
+  
+  ```
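+
+After saving either configuration, the sink can be verified in local run mode. This is a minimal sketch: the NAR path, the config file name `kafka-sink.yaml`, and the input topic are assumptions.
+
+```bash
+
+# Local run of the Kafka sink; paths and names below are illustrative.
+$ bin/pulsar-admin sinks localrun \
+    --archive connectors/pulsar-io-kafka-@pulsar:version@.nar \
+    --tenant public \
+    --namespace default \
+    --name kafka-test-sink \
+    --sink-config-file kafka-sink.yaml \
+    --inputs kafka-sink-test
+
+```
+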
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-kafka-source.md b/site2/website-next/versioned_docs/version-2.6.1/io-kafka-source.md
new file mode 100644
index 0000000..8d68e29
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-kafka-source.md
@@ -0,0 +1,197 @@
+---
+id: io-kafka-source
+title: Kafka source connector
+sidebar_label: "Kafka source connector"
+original_id: io-kafka-source
+---
+
+The Kafka source connector pulls messages from Kafka topics and persists the messages
+to Pulsar topics.
+
+This guide explains how to configure and use the Kafka source connector.
+
+## Configuration
+
+The configuration of the Kafka source connector has the following properties.
+
+### Property
+
+| Name | Type| Required | Default | Description 
+|------|----------|---------|-------------|-------------|
+|  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
+| `fetchMinBytes` | long|false | 1 | The minimum number of bytes expected for each fetch response. |
+| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.<br /><br /> If the process fails, the committed offset is used as the position from which a new consumer begins. |
+| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. |
+| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer when using Kafka's group management facilities. <br /><br />**Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.|
+| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. |
+| `topic` | String|true | " " (empty string)| The Kafka topic which sends messages to Pulsar. |
+|  `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers. <br /><br />**Note: other properties specified in the connector configuration file take precedence over this configuration**. |
+| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.<br /> The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java).
+| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values.
+
+
+### Example
+
+Before using the Kafka source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "bootstrapServers": "pulsar-kafka:9092",
+      "groupId": "test-pulsar-io",
+      "topic": "my-topic",
+      "sessionTimeoutMs": "10000",
+      "autoCommitEnabled": false
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      bootstrapServers: "pulsar-kafka:9092"
+      groupId: "test-pulsar-io"
+      topic: "my-topic"
+      sessionTimeoutMs: "10000"
+      autoCommitEnabled: false
+  
+  ```
+
+## Usage
+
+Here is an example of using the Kafka source connector with the configuration file as shown previously.
+
+1. Download a Kafka client and a Kafka connector.
+
+   ```bash
+   
+   $ wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.2.1/kafka-clients-0.10.2.1.jar
+
+   $ wget https://archive.apache.org/dist/pulsar/pulsar-2.4.0/connectors/pulsar-io-kafka-2.4.0.nar
+   
+   ```
+
+2. Create a network.
+
+   ```bash
+   
+   $ docker network create kafka-pulsar
+   
+   ```
+
+3. Pull a ZooKeeper image and start ZooKeeper.
+
+   ```bash
+   
+   $ docker pull wurstmeister/zookeeper
+
+   $ docker run -d -it -p 2181:2181 --name pulsar-kafka-zookeeper --network kafka-pulsar wurstmeister/zookeeper
+   
+   ```
+
+4. Pull a Kafka image and start Kafka.
+
+   ```bash
+   
+   $ docker pull wurstmeister/kafka:2.11-1.0.2
+   
+   $ docker run -d -it --network kafka-pulsar -p 6667:6667 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=pulsar-kafka -e KAFKA_ZOOKEEPER_CONNECT=pulsar-kafka-zookeeper:2181 --name pulsar-kafka wurstmeister/kafka:2.11-1.0.2
+   
+   ```
+
+5. Pull a Pulsar image and start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:2.4.0
+   
+   $ docker run -d -it --network kafka-pulsar -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
+   
+   ```
+
+6. Create a producer file _kafka-producer.py_.
+
+   ```python
+   
+   from kafka import KafkaProducer
+   producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092')
+   future = producer.send('my-topic', b'hello world')
+   future.get()
+   
+   ```
+
+7. Create a consumer file _pulsar-client.py_.
+
+   ```python
+   
+   import pulsar
+
+   client = pulsar.Client('pulsar://localhost:6650')
+   consumer = client.subscribe('my-topic', subscription_name='my-aa')
+
+   while True:
+       msg = consumer.receive()
+       print(msg)
+       print(dir(msg))
+       print("Received message: '%s'" % msg.data())
+       consumer.acknowledge(msg)
+
+   client.close()
+   
+   ```
+
+8. Copy the following files to Pulsar.
+
+   ```bash
+   
+   $ docker cp pulsar-io-kafka-2.4.0.nar pulsar-kafka-standalone:/pulsar
+   $ docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf
+   $ docker cp kafka-clients-0.10.2.1.jar pulsar-kafka-standalone:/pulsar/lib
+   $ docker cp pulsar-client.py pulsar-kafka-standalone:/pulsar/
+   $ docker cp kafka-producer.py pulsar-kafka-standalone:/pulsar/
+   
+   ```
+
+9. Open a new terminal window and start the Kafka source connector in local run mode. 
+
+   ```bash
+   
+   $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+   $ ./bin/pulsar-admin source localrun \
+   --archive ./pulsar-io-kafka-2.4.0.nar \
+   --classname org.apache.pulsar.io.kafka.KafkaBytesSource \
+   --tenant public \
+   --namespace default \
+   --name kafka \
+   --destination-topic-name my-topic \
+   --source-config-file ./conf/kafkaSourceConfig.yaml \
+   --parallelism 1
+   
+   ```
+
+10. Open a new terminal window and run the Pulsar consumer.
+
+   ```bash
+   
+   $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+   $ python3 pulsar-client.py
+   
+   ```
+
+11. Open another terminal window and run the Kafka producer to send a message.
+
+   ```bash
+   
+   $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+   $ pip install kafka-python
+
+   $ python3 kafka-producer.py
+   
+   ```
+
+   The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   Received message: 'hello world'
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-kinesis-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-kinesis-sink.md
new file mode 100644
index 0000000..153587d
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-kinesis-sink.md
@@ -0,0 +1,80 @@
+---
+id: io-kinesis-sink
+title: Kinesis sink connector
+sidebar_label: "Kinesis sink connector"
+original_id: io-kinesis-sink
+---
+
+The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis.
+
+## Configuration
+
+The configuration of the Kinesis sink connector has the following property.
+
+### Property
+
+| Name | Type|Required | Default | Description
+|------|----------|----------|---------|-------------|
+`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.<br /><br />Below are the available options:<br /><br /><li>`ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream. <br /><br /></li><li>`FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties and encryptionCtx, and publishes JSON pa [...]
+`retainOrdering`|boolean|false|false|Whether the Pulsar connector retains ordering when moving messages from Pulsar to Kinesis.
+`awsEndpoint`|String|false|" " (empty string)|The Kinesis endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br /><br />**Example**<br /> us-west-1, us-west-2
+`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}. <br /><br />It is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink. <br /><br />If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPlu [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Built-in plugins
+
+The following are built-in `AwsCredentialProviderPlugin` plugins:
+
+* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin`
+  
+  This plugin takes no configuration; it uses the default AWS provider chain. 
+  
+  For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).
+
+* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin`
+  
+  This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL.
+
+  This configuration takes the form of a small JSON document like:
+
+  ```json
+  
+  {"roleArn": "arn...", "roleSessionName": "name"}
+  
+  ```
+
+### Example
+
+Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "awsEndpoint": "some.endpoint.aws",
+      "awsRegion": "us-east-1",
+      "awsKinesisStreamName": "my-stream",
+      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+      "messageFormat": "ONLY_RAW_PAYLOAD",
+      "retainOrdering": "true"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      awsEndpoint: "some.endpoint.aws"
+      awsRegion: "us-east-1"
+      awsKinesisStreamName: "my-stream"
+      awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+      messageFormat: "ONLY_RAW_PAYLOAD"
+      retainOrdering: "true"
+  
+  ```
+
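+A minimal local run sketch for testing the sink; the NAR path, the config file name `kinesis-sink.yaml`, and the input topic are assumptions:
+
+```bash
+
+# Local run of the Kinesis sink; paths and names below are illustrative.
+$ bin/pulsar-admin sinks localrun \
+    --archive connectors/pulsar-io-kinesis-@pulsar:version@.nar \
+    --tenant public \
+    --namespace default \
+    --name kinesis-test-sink \
+    --sink-config-file kinesis-sink.yaml \
+    --inputs kinesis-test-topic
+
+```
+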
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-kinesis-source.md b/site2/website-next/versioned_docs/version-2.6.1/io-kinesis-source.md
new file mode 100644
index 0000000..0d07eef
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-kinesis-source.md
@@ -0,0 +1,81 @@
+---
+id: io-kinesis-source
+title: Kinesis source connector
+sidebar_label: "Kinesis source connector"
+original_id: io-kinesis-source
+---
+
+The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar.
+
+This connector uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers.
+
+> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent downstream. This connector will support decrypting messages in a future release.
+
+
+## Configuration
+
+The configuration of the Kinesis source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br /><br />Below are the available options:<br /><br /><li>`AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br /><br /></li><li>`LATEST`: start after the most recent data record.<br /><br /></li><li>`TRIM_HORIZON`: start from the oldest available data record.</li>
+`startAtTime`|Date|false|" " (empty string)|If set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
+`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application. <br /><br />By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, to distinguish requests made by separate connector instances.
+`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds.
+`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds.
+`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
+`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. <br /><br />Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
+`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.<br /><br />If set to false, it uses polling.
+`awsEndpoint`|String|false|" " (empty string)|The Kinesis endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br /><br />**Example**<br /> us-west-1, us-west-2
+`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br /><br />`awsCredentialProviderPlugin` has the following built-in plugins:<br /><br /><li>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br /> this plugin uses the default AWS provider chain.<br />For more information, see [using the [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Example
+
+Before using the Kinesis source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "awsEndpoint": "https://some.endpoint.aws",
+      "awsRegion": "us-east-1",
+      "awsKinesisStreamName": "my-stream",
+      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+      "applicationName": "My test application",
+      "checkpointInterval": "30000",
+      "backoffTime": "4000",
+      "numRetries": "3",
+      "receiveQueueSize": 2000,
+      "initialPositionInStream": "TRIM_HORIZON",
+      "startAtTime": "2019-03-05T19:28:58.000Z"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      awsEndpoint: "https://some.endpoint.aws"
+      awsRegion: "us-east-1"
+      awsKinesisStreamName: "my-stream"
+      awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+      applicationName: "My test application"
+      checkpointInterval: 30000
+      backoffTime: 4000
+      numRetries: 3
+      receiveQueueSize: 2000
+      initialPositionInStream: "TRIM_HORIZON"
+      startAtTime: "2019-03-05T19:28:58.000Z"
+  
+  ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-mongo-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-mongo-sink.md
new file mode 100644
index 0000000..3e6b3e6
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-mongo-sink.md
@@ -0,0 +1,57 @@
+---
+id: io-mongo-sink
+title: MongoDB sink connector
+sidebar_label: "MongoDB sink connector"
+original_id: io-mongo-sink
+---
+
+The MongoDB sink connector pulls messages from Pulsar topics 
+and persists the messages to MongoDB collections.
+
+## Configuration
+
+The configuration of the MongoDB sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects. <br /><br />For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). |
+| `database` | String| true| " " (empty string)| The database name to which the collection belongs. |
+| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. |
+| `batchSize` | int|false|100 | The batch size of writing messages to collections. |
+| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. |
+
+
+### Example
+
+Before using the Mongo sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "mongoUri": "mongodb://localhost:27017",
+      "database": "pulsar",
+      "collection": "messages",
+      "batchSize": "2",
+      "batchTimeMs": "500"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      mongoUri: "mongodb://localhost:27017"
+      database: "pulsar"
+      collection: "messages"
+      batchSize: 2
+      batchTimeMs: 500
+  
+  ```
+
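+A minimal local run sketch for testing the sink; the NAR path, the config file name `mongo-sink.yaml`, and the input topic are assumptions:
+
+```bash
+
+# Local run of the MongoDB sink; paths and names below are illustrative.
+$ bin/pulsar-admin sinks localrun \
+    --archive connectors/pulsar-io-mongo-@pulsar:version@.nar \
+    --tenant public \
+    --namespace default \
+    --name mongo-test-sink \
+    --sink-config-file mongo-sink.yaml \
+    --inputs mongo-test-topic
+
+```
+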
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-netty-source.md b/site2/website-next/versioned_docs/version-2.6.1/io-netty-source.md
new file mode 100644
index 0000000..e1ec8d8
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-netty-source.md
@@ -0,0 +1,241 @@
+---
+id: io-netty-source
+title: Netty source connector
+sidebar_label: "Netty source connector"
+original_id: io-netty-source
+---
+
+The Netty source connector opens a port that accepts incoming data via the configured network protocol 
+and publishes it to user-defined Pulsar topics.
+
+This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector is running in process or thread mode, the instance may be conflicting on listening to ports.
+
+## Configuration
+
+The configuration of the Netty source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `type` |String| true |tcp | The network protocol over which data is transmitted to Netty. <br /><br />Below are the available options:<br /><li>tcp</li><li>http</li><li>udp </li>|
+| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listens. |
+| `port` | int|true | 10999 | The port on which the source instance listens. |
+| `numberOfThreads` |int| true |1 | The number of threads used by the Netty server to accept incoming connections and handle the traffic of accepted connections. |
+
+
+### Example
+
+Before using the Netty source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "type": "tcp",
+      "host": "127.0.0.1",
+      "port": "10911",
+      "numberOfThreads": "1"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      type: "tcp"
+      host: "127.0.0.1"
+      port: 10999
+      numberOfThreads: 1
+  
+  ```
+
+## Usage 
+
+The following examples show how to use the Netty source connector with TCP and HTTP.
+
+### TCP 
+
+1. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:{version}
+
+   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
+   
+   ```
+
+2. Create a configuration file _netty-source-config.yaml_.
+
+   ```yaml
+   
+   configs:
+       type: "tcp"
+       host: "127.0.0.1"
+       port: 10999
+       numberOfThreads: 1
+   
+   ```
+
+3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server.
+
+   ```bash
+   
+   $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/
+   
+   ```
+
+4. Download the Netty source connector.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar
+   
+   ```
+
+5. Start the Netty source connector.
+
+   ```bash
+   
+   $ ./bin/pulsar-admin sources localrun \
+   --archive pulsar-io-netty-@pulsar:version@.nar \
+   --tenant public \
+   --namespace default \
+   --name netty \
+   --destination-topic-name netty-topic \
+   --source-config-file netty-source-config.yaml \
+   --parallelism 1
+   
+   ```
+
+6. Consume data.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0
+   
+   ```
+
+7. Open another terminal window to send data to the Netty source.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ apt-get update
+   
+   $ apt-get -y install telnet
+
+   root@1d19327b2c67:/pulsar# telnet 127.0.0.1 10999
+   Trying 127.0.0.1...
+   Connected to 127.0.0.1.
+   Escape character is '^]'.
+   hello
+   world
+   
+   ```
+
+8. The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   ----- got message -----
+   hello
+
+   ----- got message -----
+   world
+   
+   ```
+
+### HTTP 
+
+1. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:{version}
+
+   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
+   
+   ```
+
+2. Create a configuration file _netty-source-config.yaml_.
+
+   ```yaml
+   
+   configs:
+       type: "http"
+       host: "127.0.0.1"
+       port: 10999
+       numberOfThreads: 1
+   
+   ```
+
+3. Copy the configuration file _netty-source-config.yaml_ to Pulsar server.
+
+   ```bash
+   
+   $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/
+   
+   ```
+
+4. Download the Netty source connector.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar
+   
+   ```
+
+5. Start the Netty source connector.
+
+   ```bash
+   
+   $ ./bin/pulsar-admin sources localrun \
+   --archive pulsar-io-netty-@pulsar:version@.nar \
+   --tenant public \
+   --namespace default \
+   --name netty \
+   --destination-topic-name netty-topic \
+   --source-config-file netty-source-config.yaml \
+   --parallelism 1
+   
+   ```
+
+6. Consume data.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0
+   
+   ```
+
+7. Open another terminal window to send data to the Netty source.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/
+   
+   ```
+
+8. The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   ----- got message -----
+   hello, world!
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-rabbitmq-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-rabbitmq-sink.md
new file mode 100644
index 0000000..d7fda99
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-rabbitmq-sink.md
@@ -0,0 +1,85 @@
+---
+id: io-rabbitmq-sink
+title: RabbitMQ sink connector
+sidebar_label: "RabbitMQ sink connector"
+original_id: io-rabbitmq-sink
+---
+
+The RabbitMQ sink connector pulls messages from Pulsar topics
+and persists the messages to RabbitMQ queues.
+
+
+## Configuration 
+
+The configuration of the RabbitMQ sink connector has the following properties.
+
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `connectionName` |String| true | " " (empty string) | The connection name. |
+| `host` | String| true | " " (empty string) | The RabbitMQ host. |
+| `port` | int |true | 5672 | The RabbitMQ port. |
+| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
+| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
+| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
+| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
+| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number. <br /><br />0 means unlimited. |
+| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets. <br /><br />0 means unlimited. |
+| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds. <br /><br />0 means infinite. |
+| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
+| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
+| `exchangeName` | String|true | " " (empty string) | The exchange to which messages are published. |
+| `routingKey` |String|true | " " (empty string) |The routing key used to publish messages. |
+
+
+### Example
+
+Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "host": "localhost",
+      "port": "5672",
+      "virtualHost": "/",
+      "username": "guest",
+      "password": "guest",
+      "queueName": "test-queue",
+      "connectionName": "test-connection",
+      "requestedChannelMax": "0",
+      "requestedFrameMax": "0",
+      "connectionTimeout": "60000",
+      "handshakeTimeout": "10000",
+      "requestedHeartbeat": "60",
+      "exchangeName": "test-exchange",
+      "routingKey": "test-key"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      host: "localhost"
+      port: 5672
+      virtualHost: "/"
+      username: "guest"
+      password: "guest"
+      queueName: "test-queue"
+      connectionName: "test-connection"
+      requestedChannelMax: 0
+      requestedFrameMax: 0
+      connectionTimeout: 60000
+      handshakeTimeout: 10000
+      requestedHeartbeat: 60
+      exchangeName: "test-exchange"
+      routingKey: "test-key"
+  
+  ```
+
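+As a rough usage sketch (not part of the original example), you can run the sink locally with `pulsar-admin sinks localrun`. The NAR file is assumed to be downloaded already; the file name `rabbitmq-sink-config.yaml` and the input topic are placeholders.
+
+```bash
+
+# Sketch: run the RabbitMQ sink locally (names and paths are placeholders)
+$ bin/pulsar-admin sinks localrun \
+--archive pulsar-io-rabbitmq-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name rabbitmq-sink \
+--inputs rabbitmq-topic \
+--sink-config-file rabbitmq-sink-config.yaml \
+--parallelism 1
+
+```
+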
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-rabbitmq-source.md b/site2/website-next/versioned_docs/version-2.6.1/io-rabbitmq-source.md
new file mode 100644
index 0000000..491df4d
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-rabbitmq-source.md
@@ -0,0 +1,82 @@
+---
+id: io-rabbitmq-source
+title: RabbitMQ source connector
+sidebar_label: "RabbitMQ source connector"
+original_id: io-rabbitmq-source
+---
+
+The RabbitMQ source connector receives messages from RabbitMQ clusters 
+and writes messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of the RabbitMQ source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `connectionName` |String| true | " " (empty string) | The connection name. |
+| `host` | String| true | " " (empty string) | The RabbitMQ host. |
+| `port` | int |true | 5672 | The RabbitMQ port. |
+| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
+| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
+| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
+| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
+| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number. <br /><br />0 means unlimited. |
+| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets. <br /><br />0 means unlimited. |
+| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds. <br /><br />0 means infinite. |
+| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
+| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
+| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.<br /><br /> 0 means unlimited. |
+| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. |
+
+### Example
+
+Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "host": "localhost",
+      "port": "5672",
+      "virtualHost": "/",
+      "username": "guest",
+      "password": "guest",
+      "queueName": "test-queue",
+      "connectionName": "test-connection",
+      "requestedChannelMax": "0",
+      "requestedFrameMax": "0",
+      "connectionTimeout": "60000",
+      "handshakeTimeout": "10000",
+      "requestedHeartbeat": "60",
+      "prefetchCount": "0",
+      "prefetchGlobal": "false"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      host: "localhost"
+      port: 5672
+      virtualHost: "/"
+      username: "guest"
+      password: "guest"
+      queueName: "test-queue"
+      connectionName: "test-connection"
+      requestedChannelMax: 0
+      requestedFrameMax: 0
+      connectionTimeout: 60000
+      handshakeTimeout: 10000
+      requestedHeartbeat: 60
+      prefetchCount: 0
+      prefetchGlobal: false
+  
+  ```
+
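+As a rough usage sketch (not part of the original example), you can run the source locally with `pulsar-admin sources localrun`. The NAR file is assumed to be downloaded already; the file name `rabbitmq-source-config.yaml` and the destination topic are placeholders.
+
+```bash
+
+# Sketch: run the RabbitMQ source locally (names and paths are placeholders)
+$ bin/pulsar-admin sources localrun \
+--archive pulsar-io-rabbitmq-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name rabbitmq-source \
+--destination-topic-name rabbitmq-topic \
+--source-config-file rabbitmq-source-config.yaml \
+--parallelism 1
+
+```
+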
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-redis-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-redis-sink.md
new file mode 100644
index 0000000..793d74a
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-redis-sink.md
@@ -0,0 +1,74 @@
+---
+id: io-redis-sink
+title: Redis sink connector
+sidebar_label: "Redis sink connector"
+original_id: io-redis-sink
+---
+
+The Redis sink connector pulls messages from Pulsar topics
+and persists the messages to a Redis database.
+
+
+
+## Configuration
+
+The configuration of the Redis sink connector has the following properties.
+
+
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. |
+| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. |
+| `redisDatabase` | int|true|0  | The Redis database to connect to. |
+| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster. <br /><br />Below are the available options: <br /><li>Standalone<br /></li><li>Cluster </li>|
+| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnects or not. |
+| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. |
+| `tcpNoDelay` |boolean| false| false | Whether to enable TCP no-delay or not. |
+| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. |
+| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting, in milliseconds. |
+| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out, in milliseconds. |
+| `batchTimeMs` | int|false|1000 | The batch operation interval in milliseconds. |
+| `batchSize` | int|false|200 | The batch size of writing to the Redis database. |
+
+
+### Example
+
+Before using the Redis sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "redisHosts": "localhost:6379",
+      "redisPassword": "fake@123",
+      "redisDatabase": "1",
+      "clientMode": "Standalone",
+      "operationTimeout": "2000",
+      "batchSize": "100",
+      "batchTimeMs": "1000",
+      "connectTimeout": "3000"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      redisHosts: "localhost:6379"
+      redisPassword: "fake@123"
+      redisDatabase: 1
+      clientMode: "Standalone"
+      operationTimeout: 2000
+      batchSize: 100
+      batchTimeMs: 1000
+      connectTimeout: 3000
+  
+  ```
+
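+As a rough usage sketch (not part of the original example), you can run the sink locally with `pulsar-admin sinks localrun`. The NAR file is assumed to be downloaded already; the file name `redis-sink-config.yaml` and the input topic are placeholders.
+
+```bash
+
+# Sketch: run the Redis sink locally (names and paths are placeholders)
+$ bin/pulsar-admin sinks localrun \
+--archive pulsar-io-redis-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name redis-sink \
+--inputs redis-topic \
+--sink-config-file redis-sink-config.yaml \
+--parallelism 1
+
+```
+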
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-solr-sink.md b/site2/website-next/versioned_docs/version-2.6.1/io-solr-sink.md
new file mode 100644
index 0000000..df2c361
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-solr-sink.md
@@ -0,0 +1,65 @@
+---
+id: io-solr-sink
+title: Solr sink connector
+sidebar_label: "Solr sink connector"
+original_id: io-solr-sink
+---
+
+The Solr sink connector pulls messages from Pulsar topics 
+and persists the messages to Solr collections.
+
+
+
+## Configuration
+
+The configuration of the Solr sink connector has the following properties.
+
+
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `solrUrl` | String|true|" " (empty string) | <li>Comma-separated zookeeper hosts with chroot used in the SolrCloud mode. <br />**Example**<br />`localhost:2181,localhost:2182/chroot` <br /><br /></li><li>URL to connect to Solr used in standalone mode. <br />**Example**<br />`localhost:8983/solr` </li>|
+| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster. <br /><br />Below are the available options:<br /><li>Standalone<br /></li><li> SolrCloud</li>|
+| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. |
+| `solrCommitWithinMs` |int| false|10 | The time in milliseconds within which Solr commits updates.|
+| `username` |String|false|  " " (empty string) | The username for basic authentication.<br /><br />**Note: `username` is case-sensitive.** |
+| `password` | String|false|  " " (empty string) | The password for basic authentication. <br /><br />**Note: `password` is case-sensitive.** |
+
+
+
+### Example
+
+Before using the Solr sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "solrUrl": "localhost:2181,localhost:2182/chroot",
+      "solrMode": "SolrCloud",
+      "solrCollection": "techproducts",
+      "solrCommitWithinMs": 100,
+      "username": "fakeuser",
+      "password": "fake@123"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      solrUrl: "localhost:2181,localhost:2182/chroot"
+      solrMode: "SolrCloud"
+      solrCollection: "techproducts"
+      solrCommitWithinMs: 100
+      username: "fakeuser"
+      password: "fake@123"
+  
+  ```
+
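+As a rough usage sketch (not part of the original example), you can run the sink locally with `pulsar-admin sinks localrun`. The NAR file is assumed to be downloaded already; the file name `solr-sink-config.yaml` and the input topic are placeholders.
+
+```bash
+
+# Sketch: run the Solr sink locally (names and paths are placeholders)
+$ bin/pulsar-admin sinks localrun \
+--archive pulsar-io-solr-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name solr-sink \
+--inputs solr-topic \
+--sink-config-file solr-sink-config.yaml \
+--parallelism 1
+
+```
+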
diff --git a/site2/website-next/versioned_docs/version-2.6.1/io-twitter-source.md b/site2/website-next/versioned_docs/version-2.6.1/io-twitter-source.md
new file mode 100644
index 0000000..8de3504
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/io-twitter-source.md
@@ -0,0 +1,28 @@
+---
+id: io-twitter-source
+title: Twitter Firehose source connector
+sidebar_label: "Twitter Firehose source connector"
+original_id: io-twitter-source
+---
+
+The Twitter Firehose source connector receives tweets from Twitter Firehose and 
+writes the tweets to Pulsar topics.
+
+## Configuration
+
+The configuration of the Twitter Firehose source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `consumerKey` | String|true | " " (empty string) | The Twitter OAuth consumer key.<br /><br />For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). |
+| `consumerSecret` | String |true | " " (empty string)  | The Twitter OAuth consumer secret. |
+| `token` | String|true | " " (empty string)  | The Twitter OAuth token. |
+| `tokenSecret` | String|true | " " (empty string) | The Twitter OAuth token secret. |
+| `guestimateTweetTime`|Boolean|false|false|Most Firehose events have a null createdAt time.<br /><br />If `guestimateTweetTime` is set to true, the connector estimates the createdTime of each Firehose event to be the current time.
+| `clientName` |  String |false | openconnector-twitter-source| The Twitter Firehose client name. |
+| `clientHosts` |String| false | Constants.STREAM_HOST | The Twitter Firehose hosts to which the client connects. |
+| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from Twitter Firehose. |
+
+> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html).
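+
+As a rough usage sketch (not part of the original page), you can run the source locally with `pulsar-admin sources localrun`, passing the OAuth credentials as inline JSON via `--source-config`. All credential values, the NAR path, and the topic name below are placeholders.
+
+```bash
+
+# Sketch: run the Twitter Firehose source locally (all values are placeholders)
+$ bin/pulsar-admin sources localrun \
+--archive pulsar-io-twitter-@pulsar:version@.nar \
+--name twitter \
+--destination-topic-name tweets \
+--source-config '{"consumerKey":"myKey","consumerSecret":"mySecret","token":"myToken","tokenSecret":"myTokenSecret"}'
+
+```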
diff --git a/site2/website-next/versioned_docs/version-2.6.3/getting-started-helm.md b/site2/website-next/versioned_docs/version-2.6.1/kubernetes-helm.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.3/getting-started-helm.md
rename to site2/website-next/versioned_docs/version-2.6.1/kubernetes-helm.md
diff --git a/site2/website-next/versioned_docs/version-2.6.3/getting-started-pulsar.md b/site2/website-next/versioned_docs/version-2.6.1/pulsar-2.0.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.3/getting-started-pulsar.md
rename to site2/website-next/versioned_docs/version-2.6.1/pulsar-2.0.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/reference-pulsar-admin.md b/site2/website-next/versioned_docs/version-2.6.1/pulsar-admin.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.1/reference-pulsar-admin.md
rename to site2/website-next/versioned_docs/version-2.6.1/pulsar-admin.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/reference-cli-tools.md b/site2/website-next/versioned_docs/version-2.6.1/reference-cli-tools.md
index 17b8348..a46e5cc 100644
--- a/site2/website-next/versioned_docs/version-2.6.1/reference-cli-tools.md
+++ b/site2/website-next/versioned_docs/version-2.6.1/reference-cli-tools.md
@@ -780,7 +780,7 @@ $ bookkeeper command
 ```
 
 Commands
-* `auto-recovery`
+* `autorecovery`
 * `bookie`
 * `localbookie`
 * `upgrade`
@@ -802,14 +802,14 @@ The table below lists the environment variables that you can use to configure th
 |BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
 
 
-### `autorecovery`
-Runs an auto-recovery service daemon
+### `autorecovery`
+Runs an auto-recovery service
 
 Usage
 
 ```bash
 
-$ bookkeeper auto-recovery options
+$ bookkeeper autorecovery options
 
 ```
 
@@ -817,7 +817,7 @@ Options
 
 |Flag|Description|Default|
 |---|---|---|
-|`-c`, `--conf`|Configuration for the auto-recovery daemon||
+|`-c`, `--conf`|Configuration for the auto-recovery service||
 
 
 ### `bookie`
@@ -835,7 +835,7 @@ Options
 
 |Flag|Description|Default|
 |---|---|---|
-|`-c`, `--conf`|Configuration for the auto-recovery daemon||
+|`-c`, `--conf`|Configuration for the auto-recovery service||
 |-readOnly|Force start a read-only bookie server|false|
 |-withAutoRecovery|Start auto-recovery service bookie server|false|
 
@@ -866,7 +866,7 @@ Options
 
 |Flag|Description|Default|
 |---|---|---|
-|`-c`, `--conf`|Configuration for the auto-recovery daemon||
+|`-c`, `--conf`|Configuration for the auto-recovery service||
 |`-u`, `--upgrade`|Upgrade the bookie’s directories||
 
 
diff --git a/site2/website-next/versioned_docs/version-2.6.1/reference-connector-admin.md b/site2/website-next/versioned_docs/version-2.6.1/reference-connector-admin.md
new file mode 100644
index 0000000..7b73ae8
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/reference-connector-admin.md
@@ -0,0 +1,11 @@
+---
+id: reference-connector-admin
+title: Connector Admin CLI
+sidebar_label: "Connector Admin CLI"
+original_id: reference-connector-admin
+---
+
+> **Important**
+>
+> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
+> 
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.6.1/security-token-admin.md b/site2/website-next/versioned_docs/version-2.6.1/security-token-admin.md
new file mode 100644
index 0000000..1679193
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.1/security-token-admin.md
@@ -0,0 +1,183 @@
+---
+id: security-token-admin
+title: Token authentication admin
+sidebar_label: "Token authentication admin"
+original_id: security-token-admin
+---
+
+## Token Authentication Overview
+
+Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).
+
+Tokens are used to identify a Pulsar client and associate it with a "principal" (or "role") that
+is then granted permissions to perform certain actions (for example, publishing to or consuming from a topic).
+
+A user is typically given a token string by an administrator (or some automated service).
+
+The compact representation of a signed JWT is a string that looks like:
+
+```
+
+ eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
+
+```
+
+The application specifies the token when creating the client instance. An alternative is to pass
+a "token supplier", that is, a function that returns the token when the client library
+needs one.
+
+> #### Always use TLS transport encryption
+> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to
+> always use TLS encryption when talking to the Pulsar service. See
+> [Transport Encryption using TLS](security-tls-transport)
+
+## Secret vs Public/Private keys
+
+JWT supports two different kinds of keys in order to generate and validate the tokens:
+
+ * Symmetric:
+    - there is a single ***secret*** key that is used both to generate and validate tokens
+ * Asymmetric: there is a pair of keys.
+    - the ***private*** key is used to generate tokens
+    - the ***public*** key is used to validate tokens
+
+### Secret key
+
+When using a secret key, the administrator creates the key and uses
+it to generate the client tokens. The key is also configured on
+the brokers to allow them to validate the clients.
+
+#### Creating a secret key
+
+> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.
+
+```shell
+
+$ bin/pulsar tokens create-secret-key --output my-secret.key
+
+```
+
+To generate a base64-encoded secret key:
+
+```shell
+
+$ bin/pulsar tokens create-secret-key --output  /opt/my-secret.key --base64
+
+```
+
+### Public/Private keys
+
+With public/private keys, you need to create a key pair. Pulsar supports all algorithms supported by the Java JWT library shown [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).
+
+#### Creating a key pair
+
+> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.
+
+```shell
+
+$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key
+
+```
+
+ * `my-private.key` should be stored in a safe location and is only used by the administrator to generate
+   new tokens.
+ * `my-public.key` is distributed to all Pulsar brokers. This file can be publicly shared without
+   any security concern.
+
+## Generating tokens
+
+A token is the credential associated with a user. The association is done through the "principal",
+or "role". In the case of JWTs, this field is typically referred to as the **subject**, though
+it is exactly the same concept.
+
+The generated token is then required to have a **subject** field set.
+
+```shell
+
+$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
+            --subject test-user
+
+```
+
+This command prints the token string to stdout.
+
+Similarly, one can create a token by passing the "private" key:
+
+```shell
+
+$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
+            --subject test-user
+
+```
+
+Finally, a token can also be created with a pre-defined TTL. After that time,
+the token will be automatically invalidated.
+
+```shell
+
+$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
+            --subject test-user \
+            --expiry-time 1y
+
+```
+
+## Authorization
+
+The token itself does not have any permissions associated with it. Permissions are determined by the
+authorization engine. Once the token is created, you can grant permissions for this token to perform certain
+actions. For example:
+
+```shell
+
+$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
+            --role test-user \
+            --actions produce,consume
+
+```
+
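+As a quick sanity check (a sketch, not part of the original doc), you can exercise the token from a client with the `pulsar-client` tool. The topic below reuses the namespace from the grant above, the token is the example token shown later on this page, and the broker URL is a placeholder.
+
+```shell
+
+# Sketch: produce a message as test-user (URL, topic, and token are example values)
+$ bin/pulsar-client \
+    --url pulsar://broker.example.com:6650/ \
+    --auth-plugin org.apache.pulsar.client.impl.auth.AuthenticationToken \
+    --auth-params token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw \
+    produce persistent://my-tenant/my-namespace/my-topic \
+    --messages "hello-token-auth"
+
+```
+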
+## Enabling Token Authentication ...
+
+### ... on Brokers
+
+To configure brokers to authenticate clients, put the following in `broker.conf`:
+
+```properties
+
+# Configuration to enable authentication and authorization
+authenticationEnabled=true
+authorizationEnabled=true
+authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
+
+# If using secret key
+tokenSecretKey=file:///path/to/secret.key
+# The key can also be passed inline:
+# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=
+
+# If using public/private
+# tokenPublicKey=file:///path/to/public.key
+
+```
+
+### ... on Proxies
+
+The proxy uses its own token when talking to brokers. The role associated with this
+token should be configured in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization) for more details.
+
+To configure proxies to authenticate clients, put the following in `proxy.conf`:
+
+```properties
+
+# For clients connecting to the proxy
+authenticationEnabled=true
+authorizationEnabled=true
+authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
+tokenSecretKey=file:///path/to/secret.key
+
+# For the proxy to connect to brokers
+brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
+brokerClientAuthenticationParameters=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw
+# Or, alternatively, read token from file
+# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.3/getting-started-docker.md b/site2/website-next/versioned_docs/version-2.6.1/standalone-docker.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.3/getting-started-docker.md
rename to site2/website-next/versioned_docs/version-2.6.1/standalone-docker.md
diff --git a/site2/website-next/versioned_docs/version-2.6.2/administration-dashboard.md b/site2/website-next/versioned_docs/version-2.6.2/administration-dashboard.md
new file mode 100644
index 0000000..514b076
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/administration-dashboard.md
@@ -0,0 +1,76 @@
+---
+id: administration-dashboard
+title: Pulsar dashboard
+sidebar_label: "Dashboard"
+original_id: administration-dashboard
+---
+
+:::note
+
+Pulsar dashboard is deprecated. If you want to manage and monitor the stats of your topics, use [Pulsar Manager](administration-pulsar-manager). 
+
+:::
+
+Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form.
+
+The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database.
+
+You can use the [Django](https://www.djangoproject.com) web app to render the collected data.
+
+## Install
+
+The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container.
+
+```shell
+
+$ SERVICE_URL=http://broker.example.com:8080/
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  apachepulsar/pulsar-dashboard:@pulsar:version@
+
+```
+
+You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well:
+
+```shell
+
+$ docker build -t apachepulsar/pulsar-dashboard dashboard
+
+```
+
+If token authentication is enabled:
+> The provided token should have super-user access.
+
+```shell
+
+$ SERVICE_URL=http://broker.example.com:8080/
+$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  -e JWT_TOKEN=$JWT_TOKEN \
+  apachepulsar/pulsar-dashboard
+
+```
+
+ 
+You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from which it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://<broker-ip>:8080` by default. `<broker-ip>` is the IP address or hostname of the machine running Pulsar standalone. The IP address or hostname should be accessible from the Docker instance running the dashboard.
+
+Once the Docker container is running, the web dashboard is accessible via `localhost` or whichever host Docker uses.
+
+> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container.
+
+If the Pulsar service runs in standalone mode on `localhost`, the `SERVICE_URL` has to
+be the IP address of the machine.
+
+Similarly, since Pulsar standalone advertises itself as `localhost` by default, you need to
+explicitly set the advertised address to the host IP address. For example:
+
+```shell
+
+$ bin/pulsar standalone --advertised-address 1.2.3.4
+
+```
+
+### Known issues
+
+Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported.
diff --git a/site2/website-next/versioned_docs/version-2.6.2/administration-pulsar-manager.md b/site2/website-next/versioned_docs/version-2.6.2/administration-pulsar-manager.md
index 3e129ae..eb125c5 100644
--- a/site2/website-next/versioned_docs/version-2.6.2/administration-pulsar-manager.md
+++ b/site2/website-next/versioned_docs/version-2.6.2/administration-pulsar-manager.md
@@ -103,7 +103,7 @@ If you want to enable JWT authentication, use one of the following methods.
 
 ```
 
-wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/apache-pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
+wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
 tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
 cd pulsar-manager
 tar -zxvf pulsar-manager.tar
diff --git a/site2/website-next/versioned_docs/version-2.6.2/client-libraries-cgo.md b/site2/website-next/versioned_docs/version-2.6.2/client-libraries-cgo.md
new file mode 100644
index 0000000..c79f7bb
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/client-libraries-cgo.md
@@ -0,0 +1,579 @@
+---
+id: client-libraries-cgo
+title: Pulsar CGo client
+sidebar_label: "CGo(deprecated)"
+original_id: client-libraries-cgo
+---
+
+You can use the Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
+
+Currently, the following Go clients are maintained in two repositories.
+
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library |
+| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+
+> **API docs available as well**  
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
+
+## Installation
+
+### Requirements
+
+The Pulsar Go client library is based on the C++ client library. Follow
+the instructions for the [C++ library](client-libraries-cpp) to install the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb), or [Homebrew packages](client-libraries-cpp.md#macos).
+
+### Install go package
+
+> **Compatibility Warning**  
+> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
+
+You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag; it always pulls in the master version of the Go client, so you need a C++ client library that matches master.
+
+```bash
+
+$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
+
+```
+
+Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
+
+```bash
+
+$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@
+
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+
+import "github.com/apache/pulsar/pulsar-client-go/pulsar"
+
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+
+pulsar://localhost:6650
+
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+
+pulsar://pulsar.us-west.example.com:6650
+
+```
+
+If you're using [TLS](security-tls-authentication) authentication, the URL will look something like this:
+
+```http
+
+pulsar+ssl://pulsar.us-west.example.com:6651
+
+```
+
+## Create a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+```go
+
+import (
+    "log"
+    "runtime"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+        OperationTimeoutSeconds: 5,
+        MessageListenerThreads: runtime.NumCPU(),
+    })
+
+    if err != nil {
+        log.Fatalf("Could not instantiate Pulsar client: %v", err)
+    }
+}
+
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`URL` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info |
+`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
+`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
+`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
+`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
+`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
+`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
+`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
+`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
+`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic: "my-topic",
+})
+
+if err != nil {
+    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
+}
+
+defer producer.Close()
+
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Hello, Pulsar"),
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Producer could not send message: %v", err)
+}
+
+```
+
+> **Blocking operation**  
+> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
+
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
+`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call blocks until the message is successfully acknowledged by the Pulsar broker. | (MessageID, error)
+`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
+`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the published message and any error that occurred while publishing. | 
+`LastSequenceID()` | Gets the last sequence ID published by this producer. This represents either the automatically assigned or custom sequence ID (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
+`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
+`Schema()` | Returns the schema used by the producer. | Schema
+
+Here's a more involved example usage of a producer:
+
+```go
+
+import (
+    "context"
+    "fmt"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client to instantiate a producer
+    producer, err := client.CreateProducer(pulsar.ProducerOptions{
+        Topic: "my-topic",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    ctx := context.Background()
+
+    // Send 10 messages synchronously and 10 messages asynchronously
+    for i := 0; i < 10; i++ {
+        // Create a message
+        msg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("message-%d", i)),
+        }
+
+        // Attempt to send the message
+        if err := producer.Send(ctx, msg); err != nil {
+            log.Fatal(err)
+        }
+
+        // Create a different message to send asynchronously
+        asyncMsg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
+        }
+
+        // Attempt to send the message asynchronously and handle the response
+        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
+            if err != nil { log.Fatal(err) }
+
+            fmt.Printf("the %s successfully published", string(msg.Payload))
+        })
+    }
+}
+
+```
+
+### Producer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
+`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
+`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats | 
+`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication) feature. | 30 seconds
+`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
+`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
+`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
+`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
+`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
+`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
+`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the partition index), i.e. a function signature of `func(Message, TopicMetadata) int`. |
+`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
+`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched, if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or `BatchingMaxMessages` is reached | 1ms
+`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or batch interval has elapsed | 1000
+
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+
+msgChannel := make(chan pulsar.ConsumerMessage)
+
+consumerOpts := pulsar.ConsumerOptions{
+    Topic:            "my-topic",
+    SubscriptionName: "my-subscription-1",
+    Type:             pulsar.Exclusive,
+    MessageChannel:   msgChannel,
+}
+
+consumer, err := client.Subscribe(consumerOpts)
+
+if err != nil {
+    log.Fatalf("Could not establish subscription: %v", err)
+}
+
+defer consumer.Close()
+
+for cm := range msgChannel {
+    msg := cm.Message
+
+    fmt.Printf("Message ID: %s", msg.ID())
+    fmt.Printf("Message value: %s", string(msg.Payload()))
+
+    consumer.Ack(msg)
+}
+
+```
+
+> **Blocking operation**  
+> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
+
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
+`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
+`AckCumulativeID(MessageID)` |Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledge has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
+`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
+`NackID(MessageID)` | Acknowledge the failure to process a single message. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
+`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
+`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error
+
+#### Receive example
+
+Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
+
+```go
+
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client object to instantiate a consumer
+    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+        Topic:            "my-golang-topic",
+        SubscriptionName: "sub-1",
+        Type: pulsar.Exclusive,
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    defer consumer.Close()
+
+    ctx := context.Background()
+
+    // Listen indefinitely on the topic
+    for {
+        msg, err := consumer.Receive(ctx)
+        if err != nil { log.Fatal(err) }
+
+        // Do something with the message
+        err = processMessage(msg)
+
+        if err == nil {
+            // Message processed successfully
+            consumer.Ack(msg)
+        } else {
+            // Failed to process messages
+            consumer.Nack(msg)
+        }
+    }
+}
+
+```
+
+### Consumer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
+`Topics` | Specify a list of topics this consumer subscribes to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
+`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
+`SubscriptionName` | The subscription name for this consumer |
+`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats|
+`Name` | The name of the consumer |
+`AckTimeout` | Set the timeout for unacked messages | 0
+`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
+`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
+`SubscriptionInitPos` | The initial position at which the cursor is set when subscribing | Latest
+`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
+`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
+`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
+
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic: "my-golang-topic",
+    StartMessageID: pulsar.LatestMessage,
+})
+
+```
+
+> **Blocking operation**  
+> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
+
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`HasNext()` | Check if there is any message available to read from the current position| (bool, error)
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+
+#### "Next" example
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatalf("Could not create client: %v", err) }
+
+    // Use the client to instantiate a reader
+    reader, err := client.CreateReader(pulsar.ReaderOptions{
+        Topic:          "my-golang-topic",
+        StartMessageID: pulsar.EarliestMessage,
+    })
+
+    if err != nil { log.Fatalf("Could not create reader: %v", err) }
+
+    defer reader.Close()
+
+    ctx := context.Background()
+
+    // Listen on the topic for incoming messages
+    for {
+        msg, err := reader.Next(ctx)
+        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
+
+        // Process the message
+    }
+}
+
+```
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+
+// Read the last saved message ID from an external store as a byte slice.
+// loadLastMessageID() is a placeholder for your own persistence logic.
+lastSavedID := loadLastMessageID()
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:          "my-golang-topic",
+    StartMessageID: pulsar.DeserializeMessageID(lastSavedID),
+})
+
+```
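+
+#### "HasNext" example
+
+When you only want to drain the messages that are currently available and then stop, rather than block indefinitely in `Next()`, you can guard the read loop with `HasNext()`. The following is a minimal sketch based on the method table above:
+
+```go
+
+for {
+    hasNext, err := reader.HasNext()
+    if err != nil { log.Fatalf("Could not check for messages: %v", err) }
+
+    // Stop once the reader has caught up with the end of the topic
+    if !hasNext {
+        break
+    }
+
+    msg, err := reader.Next(context.Background())
+    if err != nil { log.Fatalf("Error reading from topic: %v", err) }
+
+    // Process the message
+    log.Printf("Received message: %s", string(msg.Payload()))
+}
+
+```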
+
+### Reader configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
+`Name` | The name of the reader |
+`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
+`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
+`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
+`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
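+
+As an example, a reader that tails the compacted view of a topic with a larger receiver queue could be configured as in the following sketch (the topic name is a placeholder):
+
+```go
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:          "my-golang-topic",
+    StartMessageID: pulsar.EarliestMessage,
+    // Read the latest value per key instead of the full backlog
+    ReadCompacted: true,
+    // Buffer more messages ahead of Next() calls
+    ReceiverQueueSize: 2000,
+})
+if err != nil {
+    log.Fatalf("Could not create reader: %v", err)
+}
+
+```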
+
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` type that you can use to construct messages to produce on Pulsar topics. Here's an example message:
+
+```go
+
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Here is some message data"),
+    Key: "message-key",
+    Properties: map[string]string{
+        "foo": "bar",
+    },
+    EventTime: time.Now(),
+    ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Could not publish message due to: %v", err)
+}
+
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Value` | The value of the message; mutually exclusive with `Payload`. Use `Value interface{}` for schema-based messages.
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+`SequenceID` | The sequence ID to assign to the current message
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+
+opts := pulsar.ClientOptions{
+    URL: "pulsar+ssl://my-cluster.com:6651",
+    TLSTrustCertsFilePath: "/path/to/certs/ca.cert.pem",
+    Authentication: pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+
+```
+
+## Schema
+
+This example shows how to create a producer and consumer with schema.
+
+```go
+
+// testJson matches the schema definition below.
+type testJson struct {
+	ID   int
+	Name string
+}
+
+var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
+	"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
+jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
+// create producer
+producer, err := client.CreateProducerWithSchema(ProducerOptions{
+	Topic: "jsonTopic",
+}, jsonSchema)
+err = producer.Send(context.Background(), ProducerMessage{
+	Value: &testJson{
+		ID:   100,
+		Name: "pulsar",
+	},
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+// create consumer
+var s testJson
+consumerJS := NewJsonSchema(exampleSchemaDef, nil)
+consumer, err := client.SubscribeWithSchema(ConsumerOptions{
+	Topic:            "jsonTopic",
+	SubscriptionName: "sub-2",
+}, consumerJS)
+if err != nil {
+	log.Fatal(err)
+}
+msg, err := consumer.Receive(context.Background())
+if err != nil {
+	log.Fatal(err)
+}
+err = msg.GetValue(&s)
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(s.ID) // output: 100
+fmt.Println(s.Name) // output: pulsar
+defer consumer.Close()
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/client-libraries-go.md b/site2/website-next/versioned_docs/version-2.6.2/client-libraries-go.md
index c8b5047..df40107 100644
--- a/site2/website-next/versioned_docs/version-2.6.2/client-libraries-go.md
+++ b/site2/website-next/versioned_docs/version-2.6.2/client-libraries-go.md
@@ -192,8 +192,9 @@ if err != nil {
 defer client.Close()
 
 topicName := newTopicName()
-producer, err := client.CreateProducer(ProducerOptions{
-	Topic: topicName,
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic:           topicName,
+    DisableBatching: true,
 })
 if err != nil {
 	log.Fatal(err)
diff --git a/site2/website-next/versioned_docs/version-2.6.2/concepts-messaging.md b/site2/website-next/versioned_docs/version-2.6.2/concepts-messaging.md
index 995d632..29cebdf 100644
--- a/site2/website-next/versioned_docs/version-2.6.2/concepts-messaging.md
+++ b/site2/website-next/versioned_docs/version-2.6.2/concepts-messaging.md
@@ -66,7 +66,7 @@ When you enable chunking, read the following instructions.
 - Chunking is only supported for persisted topics.
 - Chunking is only supported for the exclusive and failover subscription types.
 
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
+When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
 
 The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. And then the consumer stitches chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated to that large message. You can set the `maxPendingChuckedMessage` param [...]
 
diff --git a/site2/website-next/versioned_docs/version-2.6.2/developing-binary-protocol.md b/site2/website-next/versioned_docs/version-2.6.2/develop-binary-protocol.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.2/developing-binary-protocol.md
rename to site2/website-next/versioned_docs/version-2.6.2/develop-binary-protocol.md
diff --git a/site2/website-next/versioned_docs/version-2.6.2/developing-cpp.md b/site2/website-next/versioned_docs/version-2.6.2/develop-cpp.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.2/developing-cpp.md
rename to site2/website-next/versioned_docs/version-2.6.2/develop-cpp.md
diff --git a/site2/website-next/versioned_docs/version-2.6.2/developing-load-manager.md b/site2/website-next/versioned_docs/version-2.6.2/develop-load-manager.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.2/developing-load-manager.md
rename to site2/website-next/versioned_docs/version-2.6.2/develop-load-manager.md
diff --git a/site2/website-next/versioned_docs/version-2.6.2/developing-tools.md b/site2/website-next/versioned_docs/version-2.6.2/develop-tools.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.2/developing-tools.md
rename to site2/website-next/versioned_docs/version-2.6.2/develop-tools.md
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-aerospike-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-aerospike-sink.md
new file mode 100644
index 0000000..63d7338
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-aerospike-sink.md
@@ -0,0 +1,26 @@
+---
+id: io-aerospike-sink
+title: Aerospike sink connector
+sidebar_label: "Aerospike sink connector"
+original_id: io-aerospike-sink
+---
+
+The Aerospike sink connector pulls messages from Pulsar topics to Aerospike clusters.
+
+## Configuration
+
+The configuration of the Aerospike sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `seedHosts` |String| true | No default value| The comma-separated list of one or more Aerospike cluster hosts.<br /><br />Each host can be specified as a valid IP address or hostname followed by an optional port number. | 
+| `keyspace` | String| true |No default value |The Aerospike namespace. |
+| `columnName` | String | true| No default value|The Aerospike column name. |
+|`userName`|String|false|NULL|The Aerospike username.|
+|`password`|String|false|NULL|The Aerospike password.|
+| `keySet` | String|false |NULL | The Aerospike set name. |
+| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. |
+| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions.  |
+| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. |
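+
+### Example
+
+Before using the Aerospike sink connector, you can create a configuration file in the same way as for the other connectors. The following JSON snippet is an illustrative sketch only; the host, namespace, and column name are placeholders that you should replace with values from your own Aerospike deployment.
+
+* JSON
+
+  ```json
+  
+  {
+      "seedHosts": "localhost:3000",
+      "keyspace": "test",
+      "columnName": "col"
+  }
+  
+  ```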
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-canal-source.md b/site2/website-next/versioned_docs/version-2.6.2/io-canal-source.md
new file mode 100644
index 0000000..d1fd43b
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-canal-source.md
@@ -0,0 +1,235 @@
+---
+id: io-canal-source
+title: Canal source connector
+sidebar_label: "Canal source connector"
+original_id: io-canal-source
+---
+
+The Canal source connector pulls messages from MySQL to Pulsar topics.
+
+## Configuration
+
+The configuration of Canal source connector has the following properties.
+
+### Property
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `username` | true | None | Canal server account (not MySQL).|
+| `password` | true | None | Canal server password (not MySQL). |
+|`destination`|true|None|Source destination that Canal source connector connects to.
+| `singleHostname` | false | None | Canal server address.|
+| `singlePort` | false | None | Canal server port.|
+| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.<br /><br /><li>true: **cluster** mode.<br />If set to true, it talks to `zkServers` to figure out the actual database host.<br /><br /></li><li>false: **standalone** mode.<br />If set to false, it connects to the database specified by `singleHostname` and `singlePort`. </li>|
+| `zkServers` | true | None | Address and port of the ZooKeeper that the Canal source connector talks to in order to figure out the actual database host.|
+| `batchSize` | false | 1000 | Batch size to fetch from Canal. |
+
+### Example
+
+Before using the Canal connector, you can create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "zkServers": "127.0.0.1:2181",
+      "batchSize": "5120",
+      "destination": "example",
+      "username": "",
+      "password": "",
+      "cluster": false,
+      "singleHostname": "127.0.0.1",
+      "singlePort": "11111"
+  }
+  
+  ```
+
+* YAML
+
+  You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file.
+
+  ```yaml
+  
+  configs:
+      zkServers: "127.0.0.1:2181"
+      batchSize: 5120
+      destination: "example"
+      username: ""
+      password: ""
+      cluster: false
+      singleHostname: "127.0.0.1"
+      singlePort: 11111
+  
+  ```
+
+## Usage
+
+Here is an example of storing MySQL data using the configuration file above.
+
+1. Start a MySQL server.
+
+   ```bash
+   
+   $ docker pull mysql:5.7
+   $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7
+   
+   ```
+
+2. Create a configuration file `mysqld.cnf`.
+
+   ```bash
+   
+   [mysqld]
+   pid-file    = /var/run/mysqld/mysqld.pid
+   socket      = /var/run/mysqld/mysqld.sock
+   datadir     = /var/lib/mysql
+   #log-error  = /var/log/mysql/error.log
+   # By default we only accept connections from localhost
+   #bind-address   = 127.0.0.1
+   # Disabling symbolic-links is recommended to prevent assorted security risks
+   symbolic-links=0
+   log-bin=mysql-bin
+   binlog-format=ROW
+   server_id=1
+   
+   ```
+
+3. Copy the configuration file `mysqld.cnf` to the MySQL server.
+
+   ```bash
+   
+   $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/
+   
+   ```
+
+4. Restart the MySQL server.
+
+   ```bash
+   
+   $ docker restart pulsar-mysql
+   
+   ```
+
+5. Create a test database in the MySQL server.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mysql /bin/bash
+   $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;'
+   
+   ```
+
+6. Start a Canal server and connect to MySQL server.
+
+   ```
+   
+   $ docker pull canal/canal-server:v1.1.2
+   $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2
+   
+   ```
+
+7. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:2.3.0
+   $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone
+   
+   ```
+
+8. Modify the configuration file `canal-mysql-source-config.yaml`.
+
+   ```yaml
+   
+   configs:
+       zkServers: ""
+       batchSize: "5120"
+       destination: "test"
+       username: ""
+       password: ""
+       cluster: false
+       singleHostname: "pulsar-canal-server"
+       singlePort: "11111"
+   
+   ```
+
+9. Create a consumer file `pulsar-client.py`.
+
+   ```python
+   
+   import pulsar
+
+   client = pulsar.Client('pulsar://localhost:6650')
+   consumer = client.subscribe('my-topic',
+                               subscription_name='my-sub')
+
+   while True:
+       msg = consumer.receive()
+       print("Received message: '%s'" % msg.data())
+       consumer.acknowledge(msg)
+
+   client.close()
+   
+   ```
+
+10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to the Pulsar server.
+
+   ```bash
+   
+   $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/
+   $ docker cp pulsar-client.py pulsar-standalone:/pulsar/
+   
+   ```
+
+11. Download a Canal connector and start it.
+
+   ```bash
+   
+   $ docker exec -it pulsar-standalone /bin/bash
+   $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors
+   $ ./bin/pulsar-admin source localrun \
+   --archive ./connectors/pulsar-io-canal-2.3.0.nar \
+   --classname org.apache.pulsar.io.canal.CanalStringSource \
+   --tenant public \
+   --namespace default \
+   --name canal \
+   --destination-topic-name my-topic \
+   --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \
+   --parallelism 1
+   
+   ```
+
+12. Consume data from MySQL. 
+
+   ```bash
+   
+   $ docker exec -it pulsar-standalone /bin/bash
+   $ python pulsar-client.py
+   
+   ```
+
+13. Open another window to log in to the MySQL server.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mysql /bin/bash
+   $ mysql -h 127.0.0.1 -uroot -pcanal
+   
+   ```
+
+14. Create a table, and insert, delete, and update data in the MySQL server.
+
+   ```bash
+   
+   mysql> use test;
+   mysql> show tables;
+   mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL,
+   `test_author` VARCHAR(40) NOT NULL,
+   `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8;
+   mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW());
+   mysql> UPDATE test_table SET test_title='c' WHERE test_title='a';
+   mysql> DELETE FROM test_table WHERE test_title='c';
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-cassandra-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-cassandra-sink.md
new file mode 100644
index 0000000..b27a754
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-cassandra-sink.md
@@ -0,0 +1,57 @@
+---
+id: io-cassandra-sink
+title: Cassandra sink connector
+sidebar_label: "Cassandra sink connector"
+original_id: io-cassandra-sink
+---
+
+The Cassandra sink connector pulls messages from Pulsar topics to Cassandra clusters.
+
+## Configuration
+
+The configuration of the Cassandra sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.|
+| `keyspace` | String|true| " " (empty string)| The keyspace used for writing Pulsar messages. <br /><br />**Note: `keyspace` should be created prior to a Cassandra sink.**|
+| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family. <br /><br />The column is used for storing Pulsar message keys. <br /><br />If a Pulsar message doesn't have any key associated, the message value is used as the key. |
+| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.<br /><br />**Note: `columnFamily` should be created prior to a Cassandra sink.**|
+| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.<br /><br /> The column is used for storing Pulsar message values. |
+
+### Example
+
+Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "roots": "localhost:9042",
+      "keyspace": "pulsar_test_keyspace",
+      "columnFamily": "pulsar_test_table",
+      "keyname": "key",
+      "columnName": "col"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      roots: "localhost:9042"
+      keyspace: "pulsar_test_keyspace"
+      columnFamily: "pulsar_test_table"
+      keyname: "key"
+      columnName: "col"
+  
+  ```
+
+## Usage
+
+For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra).
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-cdc-debezium.md b/site2/website-next/versioned_docs/version-2.6.2/io-cdc-debezium.md
new file mode 100644
index 0000000..ac5039d
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-cdc-debezium.md
@@ -0,0 +1,543 @@
+---
+id: io-cdc-debezium
+title: Debezium source connector
+sidebar_label: "Debezium source connector"
+original_id: io-cdc-debezium
+---
+
+The Debezium source connector pulls messages from MySQL or PostgreSQL 
+and persists the messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of Debezium source connector has the following properties.
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `task.class` | true | null | A source task class that is implemented in Debezium. |
+| `database.hostname` | true | null | The address of a database server. |
+| `database.port` | true | null | The port number of a database server.|
+| `database.user` | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | null | The password for a database user that has the required privileges. |
+| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
+| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
+| `database.whitelist` | false | null | A list of all databases hosted by this server that the connector monitors.<br /><br /> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value.  |
+| `database.history` | true | null | The name of the database history class. |
+| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements. <br /><br />**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | true | null | Pulsar cluster service URL. |
+| `offset.storage.topic` | true | null | The topic where the connector records the last committed offsets that it successfully completed. |
+| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list can contain a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
+| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
+| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
+
+
+
+## Example of MySQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration 
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "3306",
+      "database.user": "debezium",
+      "database.password": "dbz",
+      "database.server.id": "184054",
+      "database.server.name": "dbserver1",
+      "database.whitelist": "inventory",
+      "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
+      "database.history.pulsar.topic": "history-topic",
+      "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "offset.storage.topic": "offset-topic"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mysql-source"
+  topicName: "debezium-mysql-topic"
+  archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mysql, docker image: debezium/example-mysql:0.8
+      database.hostname: "localhost"
+      database.port: "3306"
+      database.user: "debezium"
+      database.password: "dbz"
+      database.server.id: "184054"
+      database.server.name: "dbserver1"
+      database.whitelist: "inventory"
+      database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
+      database.history.pulsar.topic: "history-topic"
+      database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
+      key.converter: "org.apache.kafka.connect.json.JsonConverter"
+      value.converter: "org.apache.kafka.connect.json.JsonConverter"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## OFFSET_STORAGE_TOPIC_CONFIG
+      offset.storage.topic: "offset-topic"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.
+
+1. Start a MySQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysql \
+   -p 3306:3306 \
+   -e MYSQL_ROOT_PASSWORD=debezium \
+   -e MYSQL_USER=mysqluser \
+   -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+    * Use the **JSON** configuration file as shown previously. 
+   
+       Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
+       --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","va [...]
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --source-config-file debezium-mysql-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
+
+   ```bash
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MySQL client in docker.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysqlterm \
+   --link mysql \
+   --rm mysql:5.7 sh \
+   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+   
+   ```
+
+6. A MySQL client prompt appears.
+   
+   Use the following commands to change the data of the table _products_.
+
+   ```
+   
+   mysql> use inventory;
+   mysql> show tables;
+   mysql> SELECT * FROM  products;
+   mysql> UPDATE products SET name='1111111111' WHERE id=101;
+   mysql> UPDATE products SET name='1111111111' WHERE id=107;
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you can see that the data changes have been captured in the _sub-products_ topic.
+
+## Example of PostgreSQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "5432",
+      "database.user": "postgres",
+      "database.password": "postgres",
+      "database.dbname": "postgres",
+      "database.server.name": "dbserver1",
+      "schema.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-postgres-source"
+  topicName: "debezium-postgres-topic"
+  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for pg, docker image: debezium/example-postgress:0.8
+      database.hostname: "localhost"
+      database.port: "5432"
+      database.user: "postgres"
+      database.password: "postgres"
+      database.dbname: "postgres"
+      database.server.name: "dbserver1"
+      schema.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
+
+
+1. Start a PostgreSQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-postgres:0.8
+   $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432  debezium/example-postgres:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
+       --name debezium-postgres-source \
+       --destination-topic-name debezium-postgres-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-postgres-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a PostgreSQL client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-postgresql /bin/bash
+   
+   ```
+
+6. A PostgreSQL client prompt appears.
+   
+   Use the following commands to change the data of the table _products_.
+
+   ```
+   
+   psql -U postgres postgres
+   postgres=# \c postgres;
+   You are now connected to database "postgres" as user "postgres".
+   postgres=# SET search_path TO inventory;
+   SET
+   postgres=# select * from products;
+    id  |        name        |                       description                       | weight
+   -----+--------------------+---------------------------------------------------------+--------
+    102 | car battery        | 12V car battery                                         |    8.1
+    103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 |    0.8
+    104 | hammer             | 12oz carpenter's hammer                                 |   0.75
+    105 | hammer             | 14oz carpenter's hammer                                 |  0.875
+    106 | hammer             | 16oz carpenter's hammer                                 |      1
+    107 | rocks              | box of assorted rocks                                   |    5.3
+    108 | jacket             | water resistent black wind breaker                      |    0.1
+    109 | spare tire         | 24 inch spare tire                                      |   22.2
+    101 | 1111111111         | Small 2-wheel scooter                                   |   3.14
+   (9 rows)
+   
+   postgres=# UPDATE products SET name='1111111111' WHERE id=107;
+   UPDATE 1
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you receive the following messages.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products. [...]
+   
+   ```
+
+## Example of MongoDB
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+* JSON 
+
+  ```json
+  
+  {
+      "mongodb.hosts": "rs0/mongodb:27017",
+      "mongodb.name": "dbserver1",
+      "mongodb.user": "debezium",
+      "mongodb.password": "dbz",
+      "mongodb.task.id": "1",
+      "database.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mongodb-source"
+  topicName: "debezium-mongodb-topic"
+  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mongodb, docker image: debezium/example-mongodb:0.10
+      mongodb.hosts: "rs0/mongodb:27017"
+      mongodb.name: "dbserver1"
+      mongodb.user: "debezium"
+      mongodb.password: "dbz"
+      mongodb.task.id: "1"
+      database.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.
+
+
+1. Start a MongoDB server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-mongodb:0.10
+   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017  debezium/example-mongodb:0.10
+   
+   ```
+
+    Use the following commands to initialize the data.
+
+    ``` bash
+    
+    /usr/local/bin/init-inventory.sh
+    
+    ```
+
+    If the local host cannot access the container network, you can update the file `/etc/hosts` and add a rule such as `127.0.0.1 6f114527a95f`, where `6f114527a95f` is the container ID that you can get with `docker ps -a`.
+
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
+       --name debezium-mongodb-source \
+       --destination-topic-name debezium-mongodb-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-mongodb-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MongoDB client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mongodb /bin/bash
+   
+   ```
+
+6. A MongoDB shell prompt appears.
+
+   ```bash
+   
+   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
+   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you receive the following messages.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type" [...]
+   
+   ```
+
+## FAQ
+ 
+### Debezium PostgreSQL connector hangs when creating a snapshot
+
+```
+
+#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
+    java.lang.Thread.State: WAITING (parking)
+     at sun.misc.Unsafe.park(Native Method)
+     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
+     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
+     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
+     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
+     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
+     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
+     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
+     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
+     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
+     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
+     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
+     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
+     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
+     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
+     at java.lang.Thread.run(Thread.java:748)
+
+```
+
+If you encounter the above problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration, with a value suited to your workload, to the configuration file:
+
+```
+
+max.queue.size=
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-debezium-source.md b/site2/website-next/versioned_docs/version-2.6.2/io-debezium-source.md
new file mode 100644
index 0000000..808051b
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-debezium-source.md
@@ -0,0 +1,564 @@
+---
+id: io-debezium-source
+title: Debezium source connector
+sidebar_label: "Debezium source connector"
+original_id: io-debezium-source
+---
+
+The Debezium source connector pulls messages from MySQL or PostgreSQL 
+and persists the messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of Debezium source connector has the following properties.
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `task.class` | true | null | A source task class that is implemented in Debezium. |
+| `database.hostname` | true | null | The address of a database server. |
+| `database.port` | true | null | The port number of a database server.|
+| `database.user` | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | null | The password for a database user that has the required privileges. |
+| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
+| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
+| `database.whitelist` | false | null | A list of all databases hosted by this server that the connector monitors.<br /><br /> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value.  |
+| `database.history` | true | null | The name of the database history class. |
+| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements. <br /><br />**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | true | null | Pulsar cluster service URL. |
+| `offset.storage.topic` | true | null | The topic where the connector records the last committed offsets that it successfully completed. |
+| `json-with-envelope` | false | false | Whether the message contains the schema envelope. When set to `false`, the message consists of the payload only. |
+
+### Converter Options
+
+1. org.apache.kafka.connect.json.JsonConverter
+
+The `json-with-envelope` config is valid only for the JsonConverter. Its default value is false; in that case the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
+
+If `json-with-envelope` is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both schema and payload.
+
+2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
+
+If you select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
+
+### MongoDB Configuration
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list can contain a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
+| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
+| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
+
+
+
+## Example of MySQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration 
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "3306",
+      "database.user": "debezium",
+      "database.password": "dbz",
+      "database.server.id": "184054",
+      "database.server.name": "dbserver1",
+      "database.whitelist": "inventory",
+      "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
+      "database.history.pulsar.topic": "history-topic",
+      "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "offset.storage.topic": "offset-topic"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mysql-source"
+  topicName: "debezium-mysql-topic"
+  archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mysql, docker image: debezium/example-mysql:0.8
+      database.hostname: "localhost"
+      database.port: "3306"
+      database.user: "debezium"
+      database.password: "dbz"
+      database.server.id: "184054"
+      database.server.name: "dbserver1"
+      database.whitelist: "inventory"
+      database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
+      database.history.pulsar.topic: "history-topic"
+      database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
+      key.converter: "org.apache.kafka.connect.json.JsonConverter"
+      value.converter: "org.apache.kafka.connect.json.JsonConverter"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## OFFSET_STORAGE_TOPIC_CONFIG
+      offset.storage.topic: "offset-topic"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.
+
+1. Start a MySQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysql \
+   -p 3306:3306 \
+   -e MYSQL_ROOT_PASSWORD=debezium \
+   -e MYSQL_USER=mysqluser \
+   -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+    * Use the **JSON** configuration file as shown previously. 
+   
+       Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
+       --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","va [...]
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --source-config-file debezium-mysql-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
+
+   ```bash
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MySQL client in docker.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysqlterm \
+   --link mysql \
+   --rm mysql:5.7 sh \
+   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+   
+   ```
+
+6. A MySQL client prompt appears.
+   
+   Use the following commands to change the data of the table _products_.
+
+   ```
+   
+   mysql> use inventory;
+   mysql> show tables;
+   mysql> SELECT * FROM  products;
+   mysql> UPDATE products SET name='1111111111' WHERE id=101;
+   mysql> UPDATE products SET name='1111111111' WHERE id=107;
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you can see that the data changes have been captured in the _sub-products_ topic.
+
+## Example of PostgreSQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "5432",
+      "database.user": "postgres",
+      "database.password": "postgres",
+      "database.dbname": "postgres",
+      "database.server.name": "dbserver1",
+      "schema.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-postgres-source"
+  topicName: "debezium-postgres-topic"
+  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for pg, docker image: debezium/example-postgress:0.8
+      database.hostname: "localhost"
+      database.port: "5432"
+      database.user: "postgres"
+      database.password: "postgres"
+      database.dbname: "postgres"
+      database.server.name: "dbserver1"
+      schema.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
+
+
+1. Start a PostgreSQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-postgres:0.8
+   $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432  debezium/example-postgres:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
+       --name debezium-postgres-source \
+       --destination-topic-name debezium-postgres-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-postgres-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a PostgreSQL client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-postgresql /bin/bash
+   
+   ```
+
+6. A PostgreSQL client session opens. 
+   
+   Use the following commands to change the data of the table _products_.
+
+   ```
+   
+   psql -U postgres postgres
+   postgres=# \c postgres;
+   You are now connected to database "postgres" as user "postgres".
+   postgres=# SET search_path TO inventory;
+   SET
+   postgres=# select * from products;
+    id  |        name        |                       description                       | weight
+   -----+--------------------+---------------------------------------------------------+--------
+    102 | car battery        | 12V car battery                                         |    8.1
+    103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 |    0.8
+    104 | hammer             | 12oz carpenter's hammer                                 |   0.75
+    105 | hammer             | 14oz carpenter's hammer                                 |  0.875
+    106 | hammer             | 16oz carpenter's hammer                                 |      1
+    107 | rocks              | box of assorted rocks                                   |    5.3
+    108 | jacket             | water resistent black wind breaker                      |    0.1
+    109 | spare tire         | 24 inch spare tire                                      |   22.2
+    101 | 1111111111         | Small 2-wheel scooter                                   |   3.14
+   (9 rows)
+   
+   postgres=# UPDATE products SET name='1111111111' WHERE id=107;
+   UPDATE 1
+   
+   ```
+
+   In the terminal window subscribing to the topic, you receive messages similar to the following.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products. [...]
+   
+   ```
+
+## Example of MongoDB
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "mongodb.hosts": "rs0/mongodb:27017",
+      "mongodb.name": "dbserver1",
+      "mongodb.user": "debezium",
+      "mongodb.password": "dbz",
+      "mongodb.task.id": "1",
+      "database.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mongodb-source"
+  topicName: "debezium-mongodb-topic"
+  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mongodb, docker image: debezium/example-mongodb:0.10
+      mongodb.hosts: "rs0/mongodb:27017"
+      mongodb.name: "dbserver1"
+      mongodb.user: "debezium"
+      mongodb.password: "dbz"
+      mongodb.task.id: "1"
+      database.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MongoDB collection using the Pulsar Debezium connector.
+
+
+1. Start a MongoDB server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-mongodb:0.10
+   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017  debezium/example-mongodb:0.10
+   
+   ```
+
+    Use the following commands to initialize the data.
+
+    ``` bash
+    
+    $ docker exec -it pulsar-mongodb /usr/local/bin/init-inventory.sh
+    
+    ```
+
+    If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule `127.0.0.1 6f114527a95f`, where `6f114527a95f` is the container ID, which you can find by running `docker ps -a`.
+
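+    A minimal sketch of these commands, assuming the container is named `pulsar-mongodb` as above (replace `<container-ID>` with the real ID):
+
+    ```bash
+    
+    $ docker ps -a --filter "name=pulsar-mongodb" --format "{{.ID}}"
+    $ echo "127.0.0.1 <container-ID>" | sudo tee -a /etc/hosts
+    
+    ```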
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the NAR file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
+       --name debezium-mongodb-source \
+       --destination-topic-name debezium-mongodb-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-mongodb-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MongoDB client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mongodb /bin/bash
+   
+   ```
+
+6. A MongoDB client session opens. Use the following commands to change the data of the collection _products_.
+
+   ```bash
+   
+   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
+   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
+   
+   ```
+
+   In the terminal window subscribing to the topic, you receive messages similar to the following.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type" [...]
+   
+   ```
+
+## FAQ
+ 
+### The Debezium PostgreSQL connector hangs when creating a snapshot
+
+```
+
+#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
+    java.lang.Thread.State: WAITING (parking)
+     at sun.misc.Unsafe.park(Native Method)
+     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
+     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
+     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
+     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
+     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
+     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
+     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
+     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
+     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
+     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
+     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
+     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
+     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
+     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
+     at java.lang.Thread.run(Thread.java:748)
+
+```
+
+If you encounter the above problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
+
+```
+
+max.queue.size=
+
+```
+
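+For example, with the PostgreSQL connector shown earlier, you can pass the property through `--source-config`; the queue size below is illustrative and should be tuned to your snapshot volume:
+
+```bash
+
+$ bin/pulsar-admin source localrun \
+--archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
+--name debezium-postgres-source \
+--destination-topic-name debezium-postgres-topic \
+--tenant public \
+--namespace default \
+--source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650","max.queue.size": "20480"}'
+
+```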
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-dynamodb-source.md b/site2/website-next/versioned_docs/version-2.6.2/io-dynamodb-source.md
new file mode 100644
index 0000000..ce58578
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-dynamodb-source.md
@@ -0,0 +1,80 @@
+---
+id: io-dynamodb-source
+title: AWS DynamoDB source connector
+sidebar_label: "AWS DynamoDB source connector"
+original_id: io-dynamodb-source
+---
+
+The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
+
+This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
+which uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
+consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.
+
+
+## Configuration
+
+The configuration of the DynamoDB source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br /><br />Below are the available options:<br /><br /><li>`AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br /><br /></li><li>`LATEST`: start after the most recent data record.<br /><br /></li><li>`TRIM_HORIZON`: start from the oldest available data record.</li>
+`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, it specifies the point in time from which to start consumption.
+`applicationName`|String|false|Pulsar IO connector|The name of the KCL application.  Must be unique, as it is used to define the table name for the DynamoDB table used for state tracking. <br /><br />By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
+`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds.
+`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds.
+`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
+`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. <br /><br />Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
+`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br /><br />**Example**<br /> us-west-1, us-west-2
+`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream ARN.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br /><br />`awsCredentialProviderPlugin` has the following built-in plugs:<br /><br /><li>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br /> this plugin uses the default AWS provider chain.<br />For more information, see [using the [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Example
+
+Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "awsEndpoint": "https://some.endpoint.aws",
+      "awsRegion": "us-east-1",
+      "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
+      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+      "applicationName": "My test application",
+      "checkpointInterval": "30000",
+      "backoffTime": "4000",
+      "numRetries": "3",
+      "receiveQueueSize": 2000,
+      "initialPositionInStream": "TRIM_HORIZON",
+      "startAtTime": "2019-03-05T19:28:58.000Z"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      awsEndpoint: "https://some.endpoint.aws"
+      awsRegion: "us-east-1"
+      awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
+      awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+      applicationName: "My test application"
+      checkpointInterval: 30000
+      backoffTime: 4000
+      numRetries: 3
+      receiveQueueSize: 2000
+      initialPositionInStream: "TRIM_HORIZON"
+      startAtTime: "2019-03-05T19:28:58.000Z"
+  
+  ```
+
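+The following is a minimal sketch of running the connector in local run mode, assuming the YAML configuration above is saved as `dynamodb-source-config.yaml` and the standard connector NAR naming (the file name, connector name, and destination topic are illustrative):
+
+```bash
+
+$ bin/pulsar-admin sources localrun \
+--archive connectors/pulsar-io-dynamodb-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name dynamodb-source \
+--destination-topic-name dynamodb-topic \
+--source-config-file dynamodb-source-config.yaml
+
+```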
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-elasticsearch-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-elasticsearch-sink.md
new file mode 100644
index 0000000..4acedd3
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-elasticsearch-sink.md
@@ -0,0 +1,173 @@
+---
+id: io-elasticsearch-sink
+title: ElasticSearch sink connector
+sidebar_label: "ElasticSearch sink connector"
+original_id: io-elasticsearch-sink
+---
+
+The ElasticSearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.
+
+## Configuration
+
+The configuration of the ElasticSearch sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
+| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
+| `typeName` | String | false | "_doc" | The type name to which the connector writes messages. <br /><br /> The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left as the default otherwise. |
+| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
+| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
+| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster. <br /><br />If `username` is set, then `password` should also be provided. |
+| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster. <br /><br />If `username` is set, then `password` should also be provided.  |
+
+## Example
+
+Before using the ElasticSearch sink connector, you need to create a configuration file through one of the following methods.
+
+### Configuration
+
+#### For Elasticsearch After 6.2
+
+* JSON 
+
+  ```json
+  
+  {
+      "elasticSearchUrl": "http://localhost:9200",
+      "indexName": "my_index",
+      "username": "scooby",
+      "password": "doobie"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      elasticSearchUrl: "http://localhost:9200"
+      indexName: "my_index"
+      username: "scooby"
+      password: "doobie"
+  
+  ```
+
+#### For Elasticsearch Before 6.2
+
+* JSON 
+
+  ```json
+  
+  {
+      "elasticSearchUrl": "http://localhost:9200",
+      "indexName": "my_index",
+      "typeName": "doc",
+      "username": "scooby",
+      "password": "doobie"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      elasticSearchUrl: "http://localhost:9200"
+      indexName: "my_index"
+      typeName: "doc"
+      username: "scooby"
+      password: "doobie"
+  
+  ```
+
+### Usage
+
+1. Start a single node Elasticsearch cluster.
+
+   ```bash
+   
+   $ docker run -p 9200:9200 -p 9300:9300 \
+       -e "discovery.type=single-node" \
+       docker.elastic.co/elasticsearch/elasticsearch:7.5.1
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+   Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`.
+
+3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods.
+   * Use the **JSON** configuration as shown previously. 
+
+       ```bash
+       
+       $ bin/pulsar-admin sinks localrun \
+           --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
+           --tenant public \
+           --namespace default \
+           --name elasticsearch-test-sink \
+           --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \
+           --inputs elasticsearch_test
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin sinks localrun \
+           --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
+           --tenant public \
+           --namespace default \
+           --name elasticsearch-test-sink \
+           --sink-config-file elasticsearch-sink.yml \
+           --inputs elasticsearch_test
+       
+       ```
+
+4. Publish records to the topic.
+
+   ```bash
+   
+   $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}"
+   
+   ```
+
+5. Check documents in Elasticsearch.
+   
+   * Refresh the index.
+
+       ```bash
+       
+           $ curl -s http://localhost:9200/my_index/_refresh
+       
+       ```
+
+
+   * Search documents.
+
+       ```bash
+       
+           $ curl -s http://localhost:9200/my_index/_search
+       
+       ```
+
+       You can see that the record published earlier has been successfully written into Elasticsearch.
+
+       ```json
+       
+       {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}}
+       
+       ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-file-source.md b/site2/website-next/versioned_docs/version-2.6.2/io-file-source.md
new file mode 100644
index 0000000..e9d710c
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-file-source.md
@@ -0,0 +1,160 @@
+---
+id: io-file-source
+title: File source connector
+sidebar_label: "File source connector"
+original_id: io-file-source
+---
+
+The File source connector pulls messages from files in directories and persists the messages to Pulsar topics.
+
+## Configuration
+
+The configuration of the File source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `inputDirectory` | String|true  | No default value|The input directory from which to pull files. |
+| `recurse` | Boolean|false | true | Whether to pull files from subdirectories or not.|
+| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. |
+| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. |
+| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. |
+| `minimumFileAge` | Integer|false | 0 | The minimum age that a file must be to be processed. <br /><br />Any file younger than `minimumFileAge` (according to the last modification date) is ignored. |
+| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age that a file can be to be processed. <br /><br />Any file older than `maximumFileAge` (according to the last modification date) is ignored. |
+| `minimumSize` |Integer| false |1 | The minimum size (in bytes) that a file must be to be processed. |
+| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) that a file can be to be processed. |
+| `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. |
+| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
+| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.<br /><br /> This allows you to process a larger number of files concurrently. <br /><br />However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. |
+
+### Example
+
+Before using the File source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "inputDirectory": "/Users/david",
+      "recurse": true,
+      "keepFile": true,
+      "fileFilter": "[^\\.].*",
+      "pathFilter": "*",
+      "minimumFileAge": 0,
+      "maximumFileAge": 9999999999,
+      "minimumSize": 1,
+      "maximumSize": 5000000,
+      "ignoreHiddenFiles": true,
+      "pollingInterval": 5000,
+      "numWorkers": 1
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      inputDirectory: "/Users/david"
+      recurse: true
+      keepFile: true
+      fileFilter: "[^\\.].*"
+      pathFilter: "*"
+      minimumFileAge: 0
+      maximumFileAge: 9999999999
+      minimumSize: 1
+      maximumSize: 5000000
+      ignoreHiddenFiles: true
+      pollingInterval: 5000
+      numWorkers: 1
+  
+  ```
+
+## Usage
+
+Here is an example of using the File source connector.
+
+1. Pull a Pulsar image.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:{version}
+   
+   ```
+
+2. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
+   
+   ```
+
+3. Create a configuration file _file-connector.yaml_.
+
+   ```yaml
+   
+   configs:
+       inputDirectory: "/opt"
+   
+   ```
+
+4. Copy the configuration file _file-connector.yaml_ to the container.
+
+   ```bash
+   
+   $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/
+   
+   ```
+
+5. Download the File source connector.
+
+   ```bash
+   
+   $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar
+   
+   ```
+
+6. Start the File source connector.
+
+   ```bash
+   
+   $ docker exec -it pulsar-standalone /bin/bash
+
+   $ ./bin/pulsar-admin sources localrun \
+   --archive /pulsar/pulsar-io-file-{version}.nar \
+   --name file-test \
+   --destination-topic-name  pulsar-file-test \
+   --source-config-file /pulsar/file-connector.yaml
+   
+   ```
+
+7. Start a consumer.
+
+   ```bash
+   
+   ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test
+   
+   ```
+
+8. Write the message to the file _test.txt_.
+
+   ```bash
+   
+   echo "hello world!" > /opt/test.txt
+   
+   ```
+
+   The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   ----- got message -----
+   hello world!
+   
+   ```
+
+   
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-flume-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-flume-sink.md
new file mode 100644
index 0000000..b2ace53
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-flume-sink.md
@@ -0,0 +1,56 @@
+---
+id: io-flume-sink
+title: Flume sink connector
+sidebar_label: "Flume sink connector"
+original_id: io-flume-sink
+---
+
+The Flume sink connector pulls messages from Pulsar topics to logs.
+
+## Configuration
+
+The configuration of the Flume sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`name`|String|true|"" (empty string)|The name of the agent.
+`confFile`|String|true|"" (empty string)|The configuration file.
+`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed.
+`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
+`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration.
+
+### Example
+
+Before using the Flume sink connector, you need to create a configuration file through one of the following methods.
+
+> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf).
+
+* JSON 
+
+  ```json
+  
+  {
+      "name": "a1",
+      "confFile": "sink.conf",
+      "noReloadConf": "false",
+      "zkConnString": "",
+      "zkBasePath": ""
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      name: a1
+      confFile: sink.conf
+      noReloadConf: false
+      zkConnString: ""
+      zkBasePath: ""
+  
+  ```
+
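+The following is a minimal local-run sketch, assuming the YAML configuration above is saved as `flume-sink-config.yaml` and the input topic is `flume-test-topic` (both names are illustrative):
+
+```bash
+
+$ bin/pulsar-admin sinks localrun \
+--archive connectors/pulsar-io-flume-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name flume-sink-test \
+--sink-config-file flume-sink-config.yaml \
+--inputs flume-test-topic
+
+```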
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-flume-source.md b/site2/website-next/versioned_docs/version-2.6.2/io-flume-source.md
new file mode 100644
index 0000000..b7fd7ed
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-flume-source.md
@@ -0,0 +1,56 @@
+---
+id: io-flume-source
+title: Flume source connector
+sidebar_label: "Flume source connector"
+original_id: io-flume-source
+---
+
+The Flume source connector pulls messages from logs to Pulsar topics.
+
+## Configuration
+
+The configuration of the Flume source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`name`|String|true|"" (empty string)|The name of the agent.
+`confFile`|String|true|"" (empty string)|The configuration file.
+`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed.
+`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
+`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration.
+
+### Example
+
+Before using the Flume source connector, you need to create a configuration file through one of the following methods.
+
+> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf).
+
+* JSON 
+
+  ```json
+  
+  {
+      "name": "a1",
+      "confFile": "source.conf",
+      "noReloadConf": "false",
+      "zkConnString": "",
+      "zkBasePath": ""
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      name: a1
+      confFile: source.conf
+      noReloadConf: false
+      zkConnString: ""
+      zkBasePath: ""
+  
+  ```
+
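+The following is a minimal local-run sketch, assuming the YAML configuration above is saved as `flume-source-config.yaml` and the destination topic is `flume-test-topic` (both names are illustrative):
+
+```bash
+
+$ bin/pulsar-admin sources localrun \
+--archive connectors/pulsar-io-flume-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name flume-source-test \
+--destination-topic-name flume-test-topic \
+--source-config-file flume-source-config.yaml
+
+```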
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-hbase-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-hbase-sink.md
new file mode 100644
index 0000000..1737b00
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-hbase-sink.md
@@ -0,0 +1,67 @@
+---
+id: io-hbase-sink
+title: HBase sink connector
+sidebar_label: "HBase sink connector"
+original_id: io-hbase-sink
+---
+
+The HBase sink connector pulls the messages from Pulsar topics 
+and persists the messages to HBase tables.
+
+## Configuration
+
+The configuration of the HBase sink connector has the following properties.
+
+### Property
+
+| Name | Type|Default | Required | Description |
+|------|---------|----------|-------------|---
+| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. |
+| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. |
+| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. |
+| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. |
+| `tableName` | String|None | true | HBase table, the value is `namespace:tableName`. |
+| `rowKeyName` | String|None | true | HBase table rowkey name. |
+| `familyName` | String|None | true | HBase table column family name. |
+| `qualifierNames` |String| None | true | HBase table column qualifier names. |
+| `batchTimeMs` | Long|1000l| false | HBase table operation timeout in milliseconds. |
+| `batchSize` | int|200| false | Batch size of updates made to the HBase table. |
+
+### Example
+
+Before using the HBase sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "hbaseConfigResources": "hbase-site.xml",
+      "zookeeperQuorum": "localhost",
+      "zookeeperClientPort": "2181",
+      "zookeeperZnodeParent": "/hbase",
+      "tableName": "pulsar_hbase",
+      "rowKeyName": "rowKey",
+      "familyName": "info",
+      "qualifierNames": [ 'name', 'address', 'age']
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      hbaseConfigResources: "hbase-site.xml"
+      zookeeperQuorum: "localhost"
+      zookeeperClientPort: "2181"
+      zookeeperZnodeParent: "/hbase"
+      tableName: "pulsar_hbase"
+      rowKeyName: "rowKey"
+      familyName: "info"
+      qualifierNames: [ 'name', 'address', 'age']
+  
+  ```
+
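+The target table and column family must exist before the sink starts. A minimal sketch of creating them in the HBase shell to match the configuration above:
+
+```bash
+
+$ echo "create 'pulsar_hbase', 'info'" | hbase shell
+
+```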
+  
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-hdfs2-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-hdfs2-sink.md
new file mode 100644
index 0000000..411b972
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-hdfs2-sink.md
@@ -0,0 +1,61 @@
+---
+id: io-hdfs2-sink
+title: HDFS2 sink connector
+sidebar_label: "HDFS2 sink connector"
+original_id: io-hdfs2-sink
+---
+
+The HDFS2 sink connector pulls the messages from Pulsar topics 
+and persists the messages to HDFS files.
+
+## Configuration
+
+The configuration of the HDFS2 sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.<br /><br />**Example**<br />'core-site.xml'<br />'hdfs-site.xml' |
+| `directory` | String | true | None|The HDFS directory where files are read from or written to. |
+| `encoding` | String |false |None |The character encoding for the files.<br /><br />**Example**<br />UTF-8<br />ASCII |
+| `compression` | Compression |false |None |The compression codec used to compress or de-compress the files on HDFS. <br /><br />Below are the available options:<br /><li>BZIP2<br /></li><li>DEFLATE<br /></li><li>GZIP<br /></li><li>LZ4<br /></li><li>SNAPPY</li>|
+| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
+| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
+| `filenamePrefix` |String| true, if `compression` is set to `None`. | None |The prefix of the files created inside the HDFS directory.<br /><br />**Example**<br /> The value of topicA results in files named topicA-. |
+| `fileExtension` | String| true | None | The extension added to the files written to HDFS.<br /><br />**Example**<br />'.txt'<br /> '.seq' |
+| `separator` | char|false |None |The character used to separate records in a text file. <br /><br />If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
+| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
+| `maxPendingRecords` |int| false|Integer.MAX_VALUE |  The maximum number of records held in memory before acking. <br /><br />Setting this property to 1 makes every record flush to disk before the record is acked.<br /><br />Setting this property to a higher value allows buffering records before flushing them to disk. 
+
+### Example
+
+Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "hdfsConfigResources": "core-site.xml",
+      "directory": "/foo/bar",
+      "filenamePrefix": "prefix",
+      "fileExtension": ".log",
+      "compression": "SNAPPY"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      hdfsConfigResources: "core-site.xml"
+      directory: "/foo/bar"
+      filenamePrefix: "prefix"
+      fileExtension: ".log"
+      compression: "SNAPPY"
+  
+  ```
+
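+Once the sink is running, you can verify that files are being created with the standard HDFS CLI, assuming the directory above:
+
+```bash
+
+$ hdfs dfs -ls /foo/bar
+
+```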
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-hdfs3-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-hdfs3-sink.md
new file mode 100644
index 0000000..aec065a
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-hdfs3-sink.md
@@ -0,0 +1,59 @@
+---
+id: io-hdfs3-sink
+title: HDFS3 sink connector
+sidebar_label: "HDFS3 sink connector"
+original_id: io-hdfs3-sink
+---
+
+The HDFS3 sink connector pulls the messages from Pulsar topics 
+and persists the messages to HDFS files.
+
+## Configuration
+
+The configuration of the HDFS3 sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.<br /><br />**Example**<br />'core-site.xml'<br />'hdfs-site.xml' |
+| `directory` | String | true | None|The HDFS directory where files are read from or written to. |
+| `encoding` | String |false |None |The character encoding for the files.<br /><br />**Example**<br />UTF-8<br />ASCII |
+| `compression` | Compression |false |None |The compression codec used to compress or de-compress the files on HDFS. <br /><br />Below are the available options:<br /><li>BZIP2<br /></li><li>DEFLATE<br /></li><li>GZIP<br /></li><li>LZ4<br /></li><li>SNAPPY</li>|
+| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
+| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
+| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.<br /><br />**Example**<br /> The value of topicA results in files named topicA-. |
+| `fileExtension` | String| false | None| The extension added to the files written to HDFS.<br /><br />**Example**<br />'.txt'<br /> '.seq' |
+| `separator` | char|false |None |The character used to separate records in a text file. <br /><br />If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
+| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
+| `maxPendingRecords` |int| false|Integer.MAX_VALUE |  The maximum number of records held in memory before acking. <br /><br />Setting this property to 1 makes every record flush to disk before the record is acked.<br /><br />Setting this property to a higher value allows buffering records before flushing them to disk. 
+
+### Example
+
+Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "hdfsConfigResources": "core-site.xml",
+      "directory": "/foo/bar",
+      "filenamePrefix": "prefix",
+      "compression": "SNAPPY"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      hdfsConfigResources: "core-site.xml"
+      directory: "/foo/bar"
+      filenamePrefix: "prefix"
+      compression: "SNAPPY"
+  
+  ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-influxdb-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-influxdb-sink.md
new file mode 100644
index 0000000..9382f8c
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-influxdb-sink.md
@@ -0,0 +1,119 @@
+---
+id: io-influxdb-sink
+title: InfluxDB sink connector
+sidebar_label: "InfluxDB sink connector"
+original_id: io-influxdb-sink
+---
+
+The InfluxDB sink connector pulls messages from Pulsar topics 
+and persists the messages to InfluxDB.
+
+The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively.
+
+## Configuration
+
+The configuration of the InfluxDB sink connector has the following properties.
+
+### Property
+#### InfluxDBv2
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
+| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. |
+| `organization` | String| true|" " (empty string)  | The InfluxDB organization to write to. |
+| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. |
+| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB. <br /><br />Below are the available options:<li>ns<br /></li><li>us<br /></li><li>ms<br /></li><li>s</li>|
+| `logLevel` | String|false| NONE|The log level for InfluxDB request and response. <br /><br />Below are the available options:<li>NONE<br /></li><li>BASIC<br /></li><li>HEADERS<br /></li><li>FULL</li>|
+| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
+| `batchTimeMs` |long|false| 1000L |   The InfluxDB operation time in milliseconds. |
+| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
+
+#### InfluxDBv1
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
+| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
+| `password` | String| false|" " (empty string)  | The password used to authenticate to InfluxDB. |
+| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. |
+| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB. <br /><br />Below are the available options:<li>ALL<br /></li><li> ANY<br /></li><li>ONE<br /></li><li>QUORUM </li>|
+| `logLevel` | String|false| NONE|The log level for InfluxDB request and response. <br /><br />Below are the available options:<li>NONE<br /></li><li>BASIC<br /></li><li>HEADERS<br /></li><li>FULL</li>|
+| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. |
+| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
+| `batchTimeMs` |long|false| 1000L |   The InfluxDB operation time in milliseconds. |
+| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
+
+### Example
+Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods.
+#### InfluxDBv2
+* JSON
+
+  ```json
+  
+  {
+      "influxdbUrl": "http://localhost:9999",
+      "organization": "example-org",
+      "bucket": "example-bucket",
+      "token": "xxxx",
+      "precision": "ns",
+      "logLevel": "NONE",
+      "gzipEnable": false,
+      "batchTimeMs": 1000,
+      "batchSize": 100
+  }
+  
+  ```
+
+  
+* YAML
+
+  ```yaml
+  
+  configs:
+      influxdbUrl: "http://localhost:9999"
+      organization: "example-org"
+      bucket: "example-bucket"
+      token: "xxxx"
+      precision: "ns"
+      logLevel: "NONE"
+      gzipEnable: false
+      batchTimeMs: 1000
+      batchSize: 100
+  
+  ```
+
+  
+#### InfluxDBv1
+
+* JSON 
+
+  ```json
+  
+  {
+      "influxdbUrl": "http://localhost:8086",
+      "database": "test_db",
+      "consistencyLevel": "ONE",
+      "logLevel": "NONE",
+      "retentionPolicy": "autogen",
+      "gzipEnable": false,
+      "batchTimeMs": 1000,
+      "batchSize": 100
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      influxdbUrl: "http://localhost:8086"
+      database: "test_db"
+      consistencyLevel: "ONE"
+      logLevel: "NONE"
+      retentionPolicy: "autogen"
+      gzipEnable: false
+      batchTimeMs: 1000
+      batchSize: 100
+  
+  ```
+
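+To verify writes against an InfluxDBv1 instance, you can query the HTTP API, assuming the `test_db` database above:
+
+```bash
+
+$ curl -G "http://localhost:8086/query" \
+    --data-urlencode "db=test_db" \
+    --data-urlencode "q=SHOW MEASUREMENTS"
+
+```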
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-jdbc-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-jdbc-sink.md
new file mode 100644
index 0000000..77dbb61
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-jdbc-sink.md
@@ -0,0 +1,157 @@
+---
+id: io-jdbc-sink
+title: JDBC sink connector
+sidebar_label: "JDBC sink connector"
+original_id: io-jdbc-sink
+---
+
+The JDBC sink connectors pull messages from Pulsar topics 
+and persist the messages to ClickHouse, MariaDB, PostgreSQL, or SQLite.
+
+> Currently, INSERT, DELETE and UPDATE operations are supported.
+
+## Configuration 
+
+The configuration of all JDBC sink connectors has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.<br /><br />**Note: `userName` is case-sensitive.**|
+| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`. <br /><br />**Note: `password` is case-sensitive.**|
+| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. |
+| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. |
+| `nonKey` | String|false | " " (empty string) | A comma-separated list of the fields used in updating events.  |
+| `key` | String|false | " " (empty string) | A comma-separated list of the fields used in the `where` condition of updating and deleting events. |
+| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. |
+| `batchSize` | int|false | 200 | The batch size of updates made to the database. |
+
+### Example for ClickHouse
+
+* JSON 
+
+  ```json
+  
+  {
+      "userName": "clickhouse",
+      "password": "password",
+      "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink",
+      "tableName": "pulsar_clickhouse_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-clickhouse-sink"
+  topicName: "persistent://public/default/jdbc-clickhouse-topic"
+  sinkType: "jdbc-clickhouse"    
+  configs:
+      userName: "clickhouse"
+      password: "password"
+      jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink"
+      tableName: "pulsar_clickhouse_jdbc_sink"
+  
+  ```
+
+### Example for MariaDB
+
+* JSON 
+
+  ```json
+  
+  {
+      "userName": "mariadb",
+      "password": "password",
+      "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink",
+      "tableName": "pulsar_mariadb_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-mariadb-sink"
+  topicName: "persistent://public/default/jdbc-mariadb-topic"
+  sinkType: "jdbc-mariadb"    
+  configs:
+      userName: "mariadb"
+      password: "password"
+      jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink"
+      tableName: "pulsar_mariadb_jdbc_sink"
+  
+  ```
+
+### Example for PostgreSQL
+
+Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "userName": "postgres",
+      "password": "password",
+      "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
+      "tableName": "pulsar_postgres_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-postgres-sink"
+  topicName: "persistent://public/default/jdbc-postgres-topic"
+  sinkType: "jdbc-postgres"    
+  configs:
+      userName: "postgres"
+      password: "password"
+      jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink"
+      tableName: "pulsar_postgres_jdbc_sink"
+  
+  ```
+
+For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql).
+
+### Example for SQLite
+
+* JSON 
+
+  ```json
+  
+  {
+      "jdbcUrl": "jdbc:sqlite:db.sqlite",
+      "tableName": "pulsar_sqlite_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-sqlite-sink"
+  topicName: "persistent://public/default/jdbc-sqlite-topic"
+  sinkType: "jdbc-sqlite"    
+  configs:
+      jdbcUrl: "jdbc:sqlite:db.sqlite"
+      tableName: "pulsar_sqlite_jdbc_sink"
+  
+  ```
+
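+The JDBC sink writes to an existing table, so create the table before starting the connector. A sketch for the SQLite example above; the columns are illustrative and must match your message schema:
+
+```bash
+
+$ sqlite3 db.sqlite "CREATE TABLE pulsar_sqlite_jdbc_sink (id INTEGER PRIMARY KEY, name TEXT);"
+
+```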
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-kafka-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-kafka-sink.md
new file mode 100644
index 0000000..09dad4c
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-kafka-sink.md
@@ -0,0 +1,72 @@
+---
+id: io-kafka-sink
+title: Kafka sink connector
+sidebar_label: "Kafka sink connector"
+original_id: io-kafka-sink
+---
+
+The Kafka sink connector pulls messages from Pulsar topics and persists the messages
+to Kafka topics.
+
+This guide explains how to configure and use the Kafka sink connector.
+
+## Configuration
+
+The configuration of the Kafka sink connector has the following parameters.
+
+### Property
+
+| Name | Type| Required | Default | Description 
+|------|----------|---------|-------------|-------------|
+|  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes. <br />This controls the durability of the sent records.
+|`batchSize`|long|false|16384L|The batch size that the Kafka producer uses when batching records before sending them to brokers.
+|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
+|`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
+| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
+| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.<br /><br />The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java).
+|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers. <br /><br />**Note:  other properties specified in the connector configuration file take precedence over this configuration**.
+
+
+### Example
+
+Before using the Kafka sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "bootstrapServers": "localhost:6667",
+      "topic": "test",
+      "acks": "1",
+      "batchSize": "16384",
+      "maxRequestSize": "1048576",
+      "producerConfigProperties":
+       {
+          "client.id": "test-pulsar-producer",
+          "security.protocol": "SASL_PLAINTEXT",
+          "sasl.mechanism": "GSSAPI",
+          "sasl.kerberos.service.name": "kafka",
+          "acks": "all" 
+       }
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      bootstrapServers: "localhost:6667"
+      topic: "test"
+      acks: "1"
+      batchSize: "16384"
+      maxRequestSize: "1048576"
+      producerConfigProperties:
+          client.id: "test-pulsar-producer"
+          security.protocol: "SASL_PLAINTEXT"
+          sasl.mechanism: "GSSAPI"
+          sasl.kerberos.service.name: "kafka"
+          acks: "all"   
+  ```
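+
+The following is a minimal local-run sketch, assuming the YAML configuration above is saved as `kafka-sink-config.yaml` and the input topic is `kafka-sink-input` (both names are illustrative):
+
+```bash
+
+$ bin/pulsar-admin sinks localrun \
+--archive connectors/pulsar-io-kafka-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name kafka-sink-test \
+--sink-config-file kafka-sink-config.yaml \
+--inputs kafka-sink-input
+
+```
+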
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-kafka-source.md b/site2/website-next/versioned_docs/version-2.6.2/io-kafka-source.md
new file mode 100644
index 0000000..8d68e29
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-kafka-source.md
@@ -0,0 +1,197 @@
+---
+id: io-kafka-source
+title: Kafka source connector
+sidebar_label: "Kafka source connector"
+original_id: io-kafka-source
+---
+
+The Kafka source connector pulls messages from Kafka topics and persists the messages
+to Pulsar topics.
+
+This guide explains how to configure and use the Kafka source connector.
+
+## Configuration
+
+The configuration of the Kafka source connector has the following properties.
+
+### Property
+
+| Name | Type| Required | Default | Description 
+|------|----------|---------|-------------|-------------|
+|  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
+| `fetchMinBytes` | long|false | 1 | The minimum byte expected for each fetch response. |
+| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.<br /><br /> This committed offset is used when the process fails as the position from which a new consumer begins. |
+| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. |
+| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer when using Kafka's group management facilities. <br /><br />**Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.|
+| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. |
+| `topic` | String|true | " " (empty string)| The Kafka topic which sends messages to Pulsar. |
+|  `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers. <br /><br />**Note: other properties specified in the connector configuration file take precedence over this configuration**. |
+| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.<br /> The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java).
+| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values.
+
+
+### Example
+
+Before using the Kafka source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "bootstrapServers": "pulsar-kafka:9092",
+      "groupId": "test-pulsar-io",
+      "topic": "my-topic",
+      "sessionTimeoutMs": "10000",
+      "autoCommitEnabled": false
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      bootstrapServers: "pulsar-kafka:9092"
+      groupId: "test-pulsar-io"
+      topic: "my-topic"
+      sessionTimeoutMs: "10000"
+      autoCommitEnabled: false
+  
+  ```
+
+## Usage
+
+Here is an example of using the Kafka source connector with the configuration file as shown previously.
+
+1. Download a Kafka client and a Kafka connector.
+
+   ```bash
+   
+   $ wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.2.1/kafka-clients-0.10.2.1.jar
+
+   $ wget https://archive.apache.org/dist/pulsar/pulsar-2.4.0/connectors/pulsar-io-kafka-2.4.0.nar
+   
+   ```
+
+2. Create a network.
+
+   ```bash
+   
+   $ docker network create kafka-pulsar
+   
+   ```
+
+3. Pull a ZooKeeper image and start ZooKeeper.
+
+   ```bash
+   
+   $ docker pull wurstmeister/zookeeper
+
+   $ docker run -d -it -p 2181:2181 --name pulsar-kafka-zookeeper --network kafka-pulsar wurstmeister/zookeeper
+   
+   ```
+
+4. Pull a Kafka image and start Kafka.
+
+   ```bash
+   
+   $ docker pull wurstmeister/kafka:2.11-1.0.2
+   
+   $ docker run -d -it --network kafka-pulsar -p 6667:6667 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=pulsar-kafka -e KAFKA_ZOOKEEPER_CONNECT=pulsar-kafka-zookeeper:2181 --name pulsar-kafka wurstmeister/kafka:2.11-1.0.2
+   
+   ```
+
+5. Pull a Pulsar image and start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:2.4.0
+   
+   $ docker run -d -it --network kafka-pulsar -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
+   
+   ```
+
+6. Create a producer file _kafka-producer.py_.
+
+   ```python
+   
+   from kafka import KafkaProducer
+   producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092')
+   future = producer.send('my-topic', b'hello world')
+   future.get()
+   
+   ```
+
+7. Create a consumer file _pulsar-client.py_.
+
+   ```python
+   
+   import pulsar
+
+   client = pulsar.Client('pulsar://localhost:6650')
+   consumer = client.subscribe('my-topic', subscription_name='my-aa')
+
+   while True:
+       msg = consumer.receive()
+       print(msg)
+       print(dir(msg))
+       print("Received message: '%s'" % msg.data())
+       consumer.acknowledge(msg)
+
+   client.close()
+   
+   ```
+
+8. Copy the following files to Pulsar.
+
+   ```bash
+   
+   $ docker cp pulsar-io-kafka-2.4.0.nar pulsar-kafka-standalone:/pulsar
+   $ docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf
+   $ docker cp kafka-clients-0.10.2.1.jar pulsar-kafka-standalone:/pulsar/lib
+   $ docker cp pulsar-client.py pulsar-kafka-standalone:/pulsar/
+   $ docker cp kafka-producer.py pulsar-kafka-standalone:/pulsar/
+   
+   ```
+
+9. Open a new terminal window and start the Kafka source connector in local run mode. 
+
+   ```bash
+   
+   $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+   $ ./bin/pulsar-admin source localrun \
+   --archive ./pulsar-io-kafka-2.4.0.nar \
+   --classname org.apache.pulsar.io.kafka.KafkaBytesSource \
+   --tenant public \
+   --namespace default \
+   --name kafka \
+   --destination-topic-name my-topic \
+   --source-config-file ./conf/kafkaSourceConfig.yaml \
+   --parallelism 1
+   
+   ```
+
+10. Open a new terminal window and run the Pulsar consumer.
+
+   ```bash
+   
+   $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+   $ python3 pulsar-client.py
+   
+   ```
+
+11. Open another terminal window, install the Kafka Python client, and run the Kafka producer.
+
+   ```bash
+   
+   $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+   $ pip install kafka-python
+
+   $ python3 kafka-producer.py
+   
+   ```
+
+   The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   Received message: 'hello world'
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-kinesis-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-kinesis-sink.md
new file mode 100644
index 0000000..153587d
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-kinesis-sink.md
@@ -0,0 +1,80 @@
+---
+id: io-kinesis-sink
+title: Kinesis sink connector
+sidebar_label: "Kinesis sink connector"
+original_id: io-kinesis-sink
+---
+
+The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis.
+
+## Configuration
+
+The configuration of the Kinesis sink connector has the following property.
+
+### Property
+
+| Name | Type|Required | Default | Description
+|------|----------|----------|---------|-------------|
+`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.<br /><br />Below are the available options:<br /><br /><li>`ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream. <br /><br /></li><li>`FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties and encryptionCtx, and publishes JSON pa [...]
+`retainOrdering`|boolean|false|false|Whether the Pulsar connector retains ordering when moving messages from Pulsar to Kinesis.
+`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br /><br />**Example**<br /> us-west-1, us-west-2
+`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}. <br /><br />It is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink. <br /><br />If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPlu [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Built-in plugins
+
+The following are built-in `AwsCredentialProviderPlugin` plugins:
+
+* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin`
+  
+  This plugin takes no configuration; it uses the default AWS provider chain.
+  
+  For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).
+
+* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin`
+  
+  This plugin takes a configuration (via `awsCredentialPluginParam`) that describes a role to assume when running the KCL.
+
+  This configuration takes the form of a small JSON document, for example:
+
+  ```json
+  
+  {"roleArn": "arn...", "roleSessionName": "name"}
+  
+  ```
+
+### Example
+
+Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "awsEndpoint": "some.endpoint.aws",
+      "awsRegion": "us-east-1",
+      "awsKinesisStreamName": "my-stream",
+      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+      "messageFormat": "ONLY_RAW_PAYLOAD",
+      "retainOrdering": "true"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      awsEndpoint: "some.endpoint.aws"
+      awsRegion: "us-east-1"
+      awsKinesisStreamName: "my-stream"
+      awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+      messageFormat: "ONLY_RAW_PAYLOAD"
+      retainOrdering: "true"
+  
+  ```
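+
+Once the sink is running, each message consumed from its input topic is published to the configured Kinesis stream. Below is a minimal Python sketch that feeds the sink a test message with the Pulsar Python client; the service URL and the input topic name `kinesis-sink-input` are assumptions for illustration.
+
+```python
+
+import pulsar
+
+# Assumes a local Pulsar at localhost:6650 and a Kinesis sink whose
+# input topic is `kinesis-sink-input` (hypothetical name).
+client = pulsar.Client('pulsar://localhost:6650')
+producer = client.create_producer('kinesis-sink-input')
+
+# With messageFormat=ONLY_RAW_PAYLOAD, this payload is written to the
+# Kinesis stream as-is.
+producer.send('hello kinesis'.encode('utf-8'))
+
+client.close()
+
+```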
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-kinesis-source.md b/site2/website-next/versioned_docs/version-2.6.2/io-kinesis-source.md
new file mode 100644
index 0000000..0d07eef
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-kinesis-source.md
@@ -0,0 +1,81 @@
+---
+id: io-kinesis-source
+title: Kinesis source connector
+sidebar_label: "Kinesis source connector"
+original_id: io-kinesis-source
+---
+
+The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar.
+
+This connector uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers.
+
+> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent downstream. This connector will support decrypting messages in a future release.
+
+
+## Configuration
+
+The configuration of the Kinesis source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br /><br />Below are the available options:<br /><br /><li>`AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br /><br /></li><li>`LATEST`: start after the most recent data record.<br /><br /></li><li>`TRIM_HORIZON`: start from the oldest available data record.</li>
+`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
+`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application. <br /><br />By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
+`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint, in milliseconds.
+`backoffTime`|long|false|3000|The amount of time to delay, in milliseconds, between requests when the connector encounters a throttling exception from AWS Kinesis.
+`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
+`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. <br /><br />Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
+`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`useEnhancedFanOut`|boolean|false|true|If set to true, the connector uses Kinesis enhanced fan-out.<br /><br />If set to false, it uses polling.
+`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br /><br />**Example**<br /> us-west-1, us-west-2
+`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br /><br />`awsCredentialProviderPlugin` has the following built-in plugins:<br /><br /><li>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br /> this plugin uses the default AWS provider chain.<br />For more information, see [using the [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Example
+
+Before using the Kinesis source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "awsEndpoint": "https://some.endpoint.aws",
+      "awsRegion": "us-east-1",
+      "awsKinesisStreamName": "my-stream",
+      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+      "applicationName": "My test application",
+      "checkpointInterval": "30000",
+      "backoffTime": "4000",
+      "numRetries": "3",
+      "receiveQueueSize": 2000,
+      "initialPositionInStream": "TRIM_HORIZON",
+      "startAtTime": "2019-03-05T19:28:58.000Z"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      awsEndpoint: "https://some.endpoint.aws"
+      awsRegion: "us-east-1"
+      awsKinesisStreamName: "my-stream"
+      awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+      applicationName: "My test application"
+      checkpointInterval: 30000
+      backoffTime: 4000
+      numRetries: 3
+      receiveQueueSize: 2000
+      initialPositionInStream: "TRIM_HORIZON"
+      startAtTime: "2019-03-05T19:28:58.000Z"
+  
+  ```
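+
+Once the source is running, each Kinesis record is published as a Pulsar message to the destination topic supplied when the connector is created. A minimal Python consumer sketch; the service URL and the topic name `kinesis-topic` are assumptions for illustration.
+
+```python
+
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+consumer = client.subscribe('kinesis-topic', subscription_name='kinesis-sub')
+
+while True:
+    msg = consumer.receive()
+    # The payload is the raw Kinesis record data; KMS-encrypted records
+    # arrive still encrypted, as noted above.
+    print("Received record: '%s'" % msg.data())
+    consumer.acknowledge(msg)
+
+```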
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-mongo-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-mongo-sink.md
new file mode 100644
index 0000000..3e6b3e6
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-mongo-sink.md
@@ -0,0 +1,57 @@
+---
+id: io-mongo-sink
+title: MongoDB sink connector
+sidebar_label: "MongoDB sink connector"
+original_id: io-mongo-sink
+---
+
+The MongoDB sink connector pulls messages from Pulsar topics 
+and persists the messages to MongoDB collections.
+
+## Configuration
+
+The configuration of the MongoDB sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects. <br /><br />For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). |
+| `database` | String| true| " " (empty string)| The database name to which the collection belongs. |
+| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. |
+| `batchSize` | int|false|100 | The batch size of writing messages to collections. |
+| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. |
+
+
+### Example
+
+Before using the MongoDB sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "mongoUri": "mongodb://localhost:27017",
+      "database": "pulsar",
+      "collection": "messages",
+      "batchSize": "2",
+      "batchTimeMs": "500"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      mongoUri: "mongodb://localhost:27017"
+      database: "pulsar"
+      collection: "messages"
+      batchSize: 2
+      batchTimeMs: 500
+  
+  ```
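+
+With this configuration, the sink flushes buffered messages to the `messages` collection every 2 messages or every 500 milliseconds, whichever comes first. Below is a minimal Python sketch that feeds the sink a test document; the service URL and the input topic name `mongo-sink-input` are assumptions for illustration, as is the premise that each payload parses as a JSON document.
+
+```python
+
+import json
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+producer = client.create_producer('mongo-sink-input')  # hypothetical input topic
+
+# Publish one JSON document; batchSize/batchTimeMs control when the
+# sink flushes buffered documents to the collection.
+producer.send(json.dumps({"id": 1, "body": "hello mongo"}).encode('utf-8'))
+
+client.close()
+
+```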
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-netty-source.md b/site2/website-next/versioned_docs/version-2.6.2/io-netty-source.md
new file mode 100644
index 0000000..e1ec8d8
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-netty-source.md
@@ -0,0 +1,241 @@
+---
+id: io-netty-source
+title: Netty source connector
+sidebar_label: "Netty source connector"
+original_id: io-netty-source
+---
+
+The Netty source connector opens a port that accepts incoming data via the configured network protocol 
+and publishes it to user-defined Pulsar topics.
+
+This connector can be used in a containerized (for example, Kubernetes) deployment. Otherwise, if the connector runs in process or thread mode, the instances may conflict when listening on ports.
+
+## Configuration
+
+The configuration of the Netty source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `type` |String| true |tcp | The network protocol over which data is transmitted to the Netty source. <br /><br />Below are the available options:<br /><li>tcp</li><li>http</li><li>udp</li>|
+| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listens. |
+| `port` | int|true | 10999 | The port on which the source instance listens. |
+| `numberOfThreads` |int| true |1 | The number of threads used by the Netty server to accept incoming connections and handle the traffic of accepted connections. |
+
+
+### Example
+
+Before using the Netty source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "type": "tcp",
+      "host": "127.0.0.1",
+      "port": "10911",
+      "numberOfThreads": "1"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      type: "tcp"
+      host: "127.0.0.1"
+      port: 10999
+      numberOfThreads: 1
+  
+  ```
+
+## Usage 
+
+The following examples show how to use the Netty source connector with TCP and HTTP.
+
+### TCP 
+
+1. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:{version}
+
+   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
+   
+   ```
+
+2. Create a configuration file _netty-source-config.yaml_.
+
+   ```yaml
+   
+   configs:
+       type: "tcp"
+       host: "127.0.0.1"
+       port: 10999
+       numberOfThreads: 1
+   
+   ```
+
+3. Copy the configuration file _netty-source-config.yaml_ to the Pulsar server.
+
+   ```bash
+   
+   $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/
+   
+   ```
+
+4. Download the Netty source connector.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar
+   
+   ```
+
+5. Start the Netty source connector.
+
+   ```bash
+   
+   $ ./bin/pulsar-admin sources localrun \
+   --archive pulsar-io-netty-@pulsar:version@.nar \
+   --tenant public \
+   --namespace default \
+   --name netty \
+   --destination-topic-name netty-topic \
+   --source-config-file netty-source-config.yaml \
+   --parallelism 1
+   
+   ```
+
+6. Consume data.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0
+   
+   ```
+
+7. Open another terminal window to send data to the Netty source.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ apt-get update
+   
+   $ apt-get -y install telnet
+
+   $ telnet 127.0.0.1 10999
+   Trying 127.0.0.1...
+   Connected to 127.0.0.1.
+   Escape character is '^]'.
+   hello
+   world
+   
+   ```
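+
+   If telnet is not available in the container, an equivalent Python sketch using only the standard library sends the same two payloads; the host and port come from _netty-source-config.yaml_.
+
+   ```python
+   
+   import socket
+
+   # Connect to the Netty source listening on 127.0.0.1:10999 and send
+   # two newline-delimited payloads, mirroring the telnet session above.
+   with socket.create_connection(("127.0.0.1", 10999)) as sock:
+       sock.sendall(b"hello\n")
+       sock.sendall(b"world\n")
+   
+   ```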
+
+8. The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   ----- got message -----
+   hello
+
+   ----- got message -----
+   world
+   
+   ```
+
+### HTTP 
+
+1. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:{version}
+
+   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
+   
+   ```
+
+2. Create a configuration file _netty-source-config.yaml_.
+
+   ```yaml
+   
+   configs:
+       type: "http"
+       host: "127.0.0.1"
+       port: 10999
+       numberOfThreads: 1
+   
+   ```
+
+3. Copy the configuration file _netty-source-config.yaml_ to the Pulsar server.
+
+   ```bash
+   
+   $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/
+   
+   ```
+
+4. Download the Netty source connector.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar
+   
+   ```
+
+5. Start the Netty source connector.
+
+   ```bash
+   
+   $ ./bin/pulsar-admin sources localrun \
+   --archive pulsar-io-netty-@pulsar:version@.nar \
+   --tenant public \
+   --namespace default \
+   --name netty \
+   --destination-topic-name netty-topic \
+   --source-config-file netty-source-config.yaml \
+   --parallelism 1
+   
+   ```
+
+6. Consume data.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0
+   
+   ```
+
+7. Open another terminal window to send data to the Netty source.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/
+   
+   ```
+
+8. The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   ----- got message -----
+   hello, world!
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-rabbitmq-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-rabbitmq-sink.md
new file mode 100644
index 0000000..d7fda99
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-rabbitmq-sink.md
@@ -0,0 +1,85 @@
+---
+id: io-rabbitmq-sink
+title: RabbitMQ sink connector
+sidebar_label: "RabbitMQ sink connector"
+original_id: io-rabbitmq-sink
+---
+
+The RabbitMQ sink connector pulls messages from Pulsar topics 
+and persists the messages to RabbitMQ queues.
+
+
+## Configuration 
+
+The configuration of the RabbitMQ sink connector has the following properties.
+
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `connectionName` |String| true | " " (empty string) | The connection name. |
+| `host` | String| true | " " (empty string) | The RabbitMQ host. |
+| `port` | int |true | 5672 | The RabbitMQ port. |
+| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
+| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
+| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
+| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
+| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number. <br /><br />0 means unlimited. |
+| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets. <br /><br />0 means unlimited. |
+| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds. <br /><br />0 means infinite. |
+| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
+| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
+| `exchangeName` | String|true | " " (empty string) | The exchange to publish messages to. |
+| `routingKey` | String|true | " " (empty string) | The routing key used to publish messages. |
+
+
+### Example
+
+Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "host": "localhost",
+      "port": "5672",
+      "virtualHost": "/",
+      "username": "guest",
+      "password": "guest",
+      "queueName": "test-queue",
+      "connectionName": "test-connection",
+      "requestedChannelMax": "0",
+      "requestedFrameMax": "0",
+      "connectionTimeout": "60000",
+      "handshakeTimeout": "10000",
+      "requestedHeartbeat": "60",
+      "exchangeName": "test-exchange",
+      "routingKey": "test-key"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      host: "localhost"
+      port: 5672
+      virtualHost: "/"
+      username: "guest"
+      password: "guest"
+      queueName: "test-queue"
+      connectionName: "test-connection"
+      requestedChannelMax: 0
+      requestedFrameMax: 0
+      connectionTimeout: 60000
+      handshakeTimeout: 10000
+      requestedHeartbeat: 60
+      exchangeName: "test-exchange"
+      routingKey: "test-key"
+  
+  ```
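+
+With this configuration, messages consumed from the sink's input topic are published to the `test-exchange` exchange with the routing key `test-key`. Below is a minimal Python sketch that feeds the sink a test message; the service URL and the input topic name `rabbitmq-sink-input` are assumptions for illustration.
+
+```python
+
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+producer = client.create_producer('rabbitmq-sink-input')  # hypothetical input topic
+
+# The sink forwards this payload to `test-exchange` using `test-key`.
+producer.send('hello rabbitmq'.encode('utf-8'))
+
+client.close()
+
+```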
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-rabbitmq-source.md b/site2/website-next/versioned_docs/version-2.6.2/io-rabbitmq-source.md
new file mode 100644
index 0000000..491df4d
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-rabbitmq-source.md
@@ -0,0 +1,82 @@
+---
+id: io-rabbitmq-source
+title: RabbitMQ source connector
+sidebar_label: "RabbitMQ source connector"
+original_id: io-rabbitmq-source
+---
+
+The RabbitMQ source connector receives messages from RabbitMQ clusters 
+and writes messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of the RabbitMQ source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `connectionName` |String| true | " " (empty string) | The connection name. |
+| `host` | String| true | " " (empty string) | The RabbitMQ host. |
+| `port` | int |true | 5672 | The RabbitMQ port. |
+| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
+| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
+| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
+| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
+| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number. <br /><br />0 means unlimited. |
+| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets. <br /><br />0 means unlimited. |
+| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds. <br /><br />0 means infinite. |
+| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
+| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
+| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.<br /><br /> 0 means unlimited. |
+| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. |
+
+### Example
+
+Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "host": "localhost",
+      "port": "5672",
+      "virtualHost": "/",
+      "username": "guest",
+      "password": "guest",
+      "queueName": "test-queue",
+      "connectionName": "test-connection",
+      "requestedChannelMax": "0",
+      "requestedFrameMax": "0",
+      "connectionTimeout": "60000",
+      "handshakeTimeout": "10000",
+      "requestedHeartbeat": "60",
+      "prefetchCount": "0",
+      "prefetchGlobal": "false"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      host: "localhost"
+      port: 5672
+      virtualHost: "/"
+      username: "guest"
+      password: "guest"
+      queueName: "test-queue"
+      connectionName: "test-connection"
+      requestedChannelMax: 0
+      requestedFrameMax: 0
+      connectionTimeout: 60000
+      handshakeTimeout: 10000
+      requestedHeartbeat: 60
+      prefetchCount: 0
+      prefetchGlobal: false
+  
+  ```
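+
+With this configuration, messages received from the `test-queue` queue are written to the destination topic supplied when the connector is created. A minimal Python consumer sketch; the service URL and the topic name `rabbitmq-topic` are assumptions for illustration.
+
+```python
+
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+consumer = client.subscribe('rabbitmq-topic', subscription_name='rabbitmq-sub')
+
+while True:
+    msg = consumer.receive()
+    # Each RabbitMQ message body arrives as the Pulsar message payload.
+    print("Received message: '%s'" % msg.data())
+    consumer.acknowledge(msg)
+
+```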
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-redis-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-redis-sink.md
new file mode 100644
index 0000000..793d74a
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-redis-sink.md
@@ -0,0 +1,74 @@
+---
+id: io-redis-sink
+title: Redis sink connector
+sidebar_label: "Redis sink connector"
+original_id: io-redis-sink
+---
+
+The Redis sink connector pulls messages from Pulsar topics 
+and persists the messages to a Redis database.
+
+
+
+## Configuration
+
+The configuration of the Redis sink connector has the following properties.
+
+
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. |
+| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. |
+| `redisDatabase` | int|true|0  | The Redis database to connect to. |
+| `clientMode` |String| false|Standalone | The client mode when interacting with the Redis cluster. <br /><br />Below are the available options: <br /><li>Standalone<br /></li><li>Cluster </li>|
+| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnects or not. |
+| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. |
+| `tcpNoDelay` |boolean| false| false | Whether to enable TCP no-delay or not. |
+| `keepAlive` | boolean|false | false |Whether to enable TCP keepalive to Redis or not. |
+| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting, in milliseconds. |
+| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out, in milliseconds. |
+| `batchTimeMs` | int|false|1000 | The batch operation interval in milliseconds. |
+| `batchSize` | int|false|200 | The batch size for writes to the Redis database. |
+
+
+### Example
+
+Before using the Redis sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "redisHosts": "localhost:6379",
+      "redisPassword": "fake@123",
+      "redisDatabase": "1",
+      "clientMode": "Standalone",
+      "operationTimeout": "2000",
+      "batchSize": "100",
+      "batchTimeMs": "1000",
+      "connectTimeout": "3000"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      redisHosts: "localhost:6379"
+      redisPassword: "fake@123"
+      redisDatabase: 1
+      clientMode: "Standalone"
+      operationTimeout: 2000
+      batchSize: 100
+      batchTimeMs: 1000
+      connectTimeout: 3000
+  
+  ```
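+
+Assuming the sink stores each record as a Redis key/value pair (using the message key as the Redis key), give each message an explicit key. Below is a minimal Python sketch; the service URL and the input topic name `redis-sink-input` are assumptions for illustration.
+
+```python
+
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+producer = client.create_producer('redis-sink-input')  # hypothetical input topic
+
+# partition_key sets the message key, which the sink is assumed to use
+# as the Redis key for this value.
+producer.send('my-value'.encode('utf-8'), partition_key='my-key')
+
+client.close()
+
+```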
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-solr-sink.md b/site2/website-next/versioned_docs/version-2.6.2/io-solr-sink.md
new file mode 100644
index 0000000..df2c361
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-solr-sink.md
@@ -0,0 +1,65 @@
+---
+id: io-solr-sink
+title: Solr sink connector
+sidebar_label: "Solr sink connector"
+original_id: io-solr-sink
+---
+
+The Solr sink connector pulls messages from Pulsar topics 
+and persists the messages to Solr collections.
+
+
+
+## Configuration
+
+The configuration of the Solr sink connector has the following properties.
+
+
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `solrUrl` | String|true|" " (empty string) | <li>Comma-separated ZooKeeper hosts with chroot, used in SolrCloud mode. <br />**Example**<br />`localhost:2181,localhost:2182/chroot` <br /><br /></li><li>The URL to connect to Solr, used in Standalone mode. <br />**Example**<br />`localhost:8983/solr` </li>|
+| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster. <br /><br />Below are the available options:<br /><li>Standalone<br /></li><li> SolrCloud</li>|
+| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. |
+| `solrCommitWithinMs` |int| false|10 | The maximum time, in milliseconds, within which Solr commits an update.|
+| `username` |String|false|  " " (empty string) | The username for basic authentication.<br /><br />**Note: `username` is case-sensitive.** |
+| `password` | String|false|  " " (empty string) | The password for basic authentication. <br /><br />**Note: `password` is case-sensitive.** |
+
+
+
+### Example
+
+Before using the Solr sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "solrUrl": "localhost:2181,localhost:2182/chroot",
+      "solrMode": "SolrCloud",
+      "solrCollection": "techproducts",
+      "solrCommitWithinMs": 100,
+      "username": "fakeuser",
+      "password": "fake@123"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      solrUrl: "localhost:2181,localhost:2182/chroot"
+      solrMode: "SolrCloud"
+      solrCollection: "techproducts"
+      solrCommitWithinMs: 100
+      username: "fakeuser"
+      password: "fake@123"
+  
+  ```
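+
+Below is a minimal Python sketch that publishes a test message to the sink's input topic; the service URL and the topic name `solr-sink-input` are assumptions for illustration, and how the payload maps onto fields of the `techproducts` collection depends on the message schema.
+
+```python
+
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+producer = client.create_producer('solr-sink-input')  # hypothetical input topic
+
+# Publish a test payload for the sink to index into Solr.
+producer.send('hello solr'.encode('utf-8'))
+
+client.close()
+
+```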
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/io-twitter-source.md b/site2/website-next/versioned_docs/version-2.6.2/io-twitter-source.md
new file mode 100644
index 0000000..8de3504
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/io-twitter-source.md
@@ -0,0 +1,28 @@
+---
+id: io-twitter-source
+title: Twitter Firehose source connector
+sidebar_label: "Twitter Firehose source connector"
+original_id: io-twitter-source
+---
+
+The Twitter Firehose source connector receives tweets from Twitter Firehose and 
+writes the tweets to Pulsar topics.
+
+## Configuration
+
+The configuration of the Twitter Firehose source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `consumerKey` | String|true | " " (empty string) | The Twitter OAuth consumer key.<br /><br />For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). |
+| `consumerSecret` | String |true | " " (empty string)  | The Twitter OAuth consumer secret. |
+| `token` | String|true | " " (empty string)  | The Twitter OAuth token. |
+| `tokenSecret` | String|true | " " (empty string) | The Twitter OAuth token secret. |
+| `guestimateTweetTime`|Boolean|false|false|Most firehose events have a null createdAt time.<br /><br />If `guestimateTweetTime` is set to true, the connector estimates the createdAt time of each firehose event to be the current time.
+| `clientName` |  String |false | openconnector-twitter-source| The Twitter firehose client name. |
+| `clientHosts` |String| false | Constants.STREAM_HOST | The Twitter firehose hosts to which the client connects. |
+| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from the Twitter firehose. |
+
+> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html).
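+
+### Example
+
+Before using the Twitter Firehose source connector, you need to create a configuration file. Below is a minimal YAML sketch with placeholder values; replace each value with your own OAuth credentials from the developer portal.
+
+```yaml
+
+configs:
+    consumerKey: "your-consumer-key"
+    consumerSecret: "your-consumer-secret"
+    token: "your-token"
+    tokenSecret: "your-token-secret"
+
+```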
diff --git a/site2/website-next/versioned_docs/version-2.6.2/getting-started-helm.md b/site2/website-next/versioned_docs/version-2.6.2/kubernetes-helm.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.2/getting-started-helm.md
rename to site2/website-next/versioned_docs/version-2.6.2/kubernetes-helm.md
diff --git a/site2/website-next/versioned_docs/version-2.6.2/getting-started-pulsar.md b/site2/website-next/versioned_docs/version-2.6.2/pulsar-2.0.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.2/getting-started-pulsar.md
rename to site2/website-next/versioned_docs/version-2.6.2/pulsar-2.0.md
diff --git a/site2/website-next/versioned_docs/version-2.6.3/reference-pulsar-admin.md b/site2/website-next/versioned_docs/version-2.6.2/pulsar-admin.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.3/reference-pulsar-admin.md
rename to site2/website-next/versioned_docs/version-2.6.2/pulsar-admin.md
diff --git a/site2/website-next/versioned_docs/version-2.6.2/reference-connector-admin.md b/site2/website-next/versioned_docs/version-2.6.2/reference-connector-admin.md
new file mode 100644
index 0000000..7b73ae8
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/reference-connector-admin.md
@@ -0,0 +1,11 @@
+---
+id: reference-connector-admin
+title: Connector Admin CLI
+sidebar_label: "Connector Admin CLI"
+original_id: reference-connector-admin
+---
+
+> **Important**
+>
+> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
+> 
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.6.2/security-token-admin.md b/site2/website-next/versioned_docs/version-2.6.2/security-token-admin.md
new file mode 100644
index 0000000..1679193
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.2/security-token-admin.md
@@ -0,0 +1,183 @@
+---
+id: security-token-admin
+title: Token authentication admin
+sidebar_label: "Token authentication admin"
+original_id: security-token-admin
+---
+
+## Token Authentication Overview
+
+Pulsar supports authenticating clients using security tokens that are based on [JSON Web Tokens](https://jwt.io/introduction/) ([RFC-7519](https://tools.ietf.org/html/rfc7519)).
+
+Tokens are used to identify a Pulsar client and associate it with some "principal" (or "role"), which
+is then granted permissions to perform certain actions (for example, publishing to or consuming from a topic).
+
+A user is typically given a token string by an administrator (or some automated service).
+
+The compact representation of a signed JWT is a string that looks like this:
+
+```
+
+ eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
+
+```
+
+The application specifies the token when creating the client instance. An alternative is to pass
+a "token supplier", that is, a function that returns the token whenever the client library
+needs one.
+
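+For example, with the Python client (used elsewhere in these docs), the token is passed when the client is created. A minimal sketch; the service URL is a placeholder and the token string is the sample shown above:
+
+```python
+
+import pulsar
+
+# The token is the sample compact JWT shown above; in practice it comes
+# from your administrator or an automated service.
+token = 'eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY'
+
+client = pulsar.Client('pulsar://broker.example.com:6650',
+                       authentication=pulsar.AuthenticationToken(token))
+
+# Depending on the client version, AuthenticationToken may also accept a
+# zero-argument callable acting as a token supplier.
+
+client.close()
+
+```
+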
+> #### Always use TLS transport encryption
+> Sending a token is equivalent to sending a password over the wire. It is strongly recommended to
+> always use TLS encryption when talking to the Pulsar service. See
+> [Transport Encryption using TLS](security-tls-transport)
+
+## Secret vs Public/Private keys
+
+JWT supports two different kinds of keys for generating and validating tokens:
+
+ * Symmetric:
+    - a single ***Secret*** key is used both to generate and to validate tokens
+ * Asymmetric: there is a pair of keys.
+    - the ***Private*** key is used to generate tokens
+    - the ***Public*** key is used to validate tokens
+
+### Secret key
+
+When using a secret key, the administrator creates the key and uses
+it to generate the client tokens. This key is also configured on
+the brokers to allow them to validate the clients.
+
+#### Creating a secret key
+
+> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.
+
+```shell
+
+$ bin/pulsar tokens create-secret-key --output my-secret.key
+
+```
+
+To generate a base64-encoded secret key:
+
+```shell
+
+$ bin/pulsar tokens create-secret-key --output  /opt/my-secret.key --base64
+
+```
+
+### Public/Private keys
+
+With public/private keys, you need to create a key pair. Pulsar supports all the algorithms supported by the Java JWT library, listed [here](https://github.com/jwtk/jjwt#signature-algorithms-keys).
+
+#### Creating a key pair
+
+> The output file is generated in the root of your Pulsar installation directory. You can also provide an absolute path for the output file.
+
+```shell
+
+$ bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key
+
+```
+
+ * `my-private.key` is stored in a safe location and only used by the administrator to generate
+   new tokens.
+ * `my-public.key` is distributed to all Pulsar brokers. This file can be publicly shared without
+   any security concern.
+
+## Generating tokens
+
+A token is the credential associated with a user. The association is done through the "principal",
+or "role". In the case of JWTs, this field is typically referred to as the **subject**, though
+it is exactly the same concept.
+
+The generated token must therefore have a **subject** field set.
+
+```shell
+
+$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
+            --subject test-user
+
+```
+
+This command prints the token string to stdout.
+
+Similarly, one can create a token by passing the "private" key:
+
+```shell
+
+$ bin/pulsar tokens create --private-key file:///path/to/my-private.key \
+            --subject test-user
+
+```
+
+Finally, a token can also be created with a pre-defined TTL. After that time,
+the token is automatically invalidated.
+
+```shell
+
+$ bin/pulsar tokens create --secret-key file:///path/to/my-secret.key \
+            --subject test-user \
+            --expiry-time 1y
+
+```
+
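+To inspect a generated token (for example, to check its `exp` expiry claim), you can decode it without verifying the signature. A minimal sketch assuming the third-party PyJWT library (`pip install pyjwt`); `token` holds the string printed by `pulsar tokens create`:
+
+```python
+
+import jwt  # third-party PyJWT library: pip install pyjwt
+
+token = 'eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY'
+
+# Decode without signature verification, just to read the claims.
+claims = jwt.decode(token, options={"verify_signature": False})
+print(claims.get("sub"), claims.get("exp"))
+
+```
+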
+## Authorization
+
+The token itself does not have any permissions associated with it. Those are determined by the
+authorization engine. Once the token is created, you can grant permissions for this token to perform
+certain actions. For example:
+
+```shell
+
+$ bin/pulsar-admin namespaces grant-permission my-tenant/my-namespace \
+            --role test-user \
+            --actions produce,consume
+
+```
+
+## Enabling Token Authentication ...
+
+### ... on Brokers
+
+To configure brokers to authenticate clients, put the following in `broker.conf`:
+
+```properties
+
+# Configuration to enable authentication and authorization
+authenticationEnabled=true
+authorizationEnabled=true
+authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
+
+# If using secret key
+tokenSecretKey=file:///path/to/secret.key
+# The key can also be passed inline:
+# tokenSecretKey=data:;base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/lAT5ucEKU=
+
+# If using public/private
+# tokenPublicKey=file:///path/to/public.key
+
+```
+
+### ... on Proxies
+
+The proxy has its own token used when talking to brokers. The role for this
+token should be configured in the ``proxyRoles`` of the brokers. See the [authorization guide](security-authorization) for more details.
+
+To configure proxies to authenticate clients, put the following in `proxy.conf`:
+
+```properties
+
+# For clients connecting to the proxy
+authenticationEnabled=true
+authorizationEnabled=true
+authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
+tokenSecretKey=file:///path/to/secret.key
+
+# For the proxy to connect to brokers
+brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
+brokerClientAuthenticationParameters=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.9OHgE9ZUDeBTZs7nSMEFIuGNEX18FLR3qvy8mqxSxXw
+# Or, alternatively, read token from file
+# brokerClientAuthenticationParameters=file:///path/to/proxy-token.txt
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.2/getting-started-docker.md b/site2/website-next/versioned_docs/version-2.6.2/standalone-docker.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.2/getting-started-docker.md
rename to site2/website-next/versioned_docs/version-2.6.2/standalone-docker.md
diff --git a/site2/website-next/versioned_docs/version-2.6.3/administration-pulsar-manager.md b/site2/website-next/versioned_docs/version-2.6.3/administration-pulsar-manager.md
index 51ba663..ae6446b 100644
--- a/site2/website-next/versioned_docs/version-2.6.3/administration-pulsar-manager.md
+++ b/site2/website-next/versioned_docs/version-2.6.3/administration-pulsar-manager.md
@@ -103,7 +103,7 @@ If you want to enable JWT authentication, use one of the following methods.
 
 ```
 
-wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/apache-pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
+wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
 tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
 cd pulsar-manager
 tar -zxvf pulsar-manager.tar
diff --git a/site2/website-next/versioned_docs/version-2.6.3/client-libraries-go.md b/site2/website-next/versioned_docs/version-2.6.3/client-libraries-go.md
index c8b5047..df40107 100644
--- a/site2/website-next/versioned_docs/version-2.6.3/client-libraries-go.md
+++ b/site2/website-next/versioned_docs/version-2.6.3/client-libraries-go.md
@@ -192,8 +192,9 @@ if err != nil {
 defer client.Close()
 
 topicName := newTopicName()
-producer, err := client.CreateProducer(ProducerOptions{
-	Topic: topicName,
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic:           topicName,
+    DisableBatching: true,
 })
 if err != nil {
 	log.Fatal(err)
diff --git a/site2/website-next/versioned_docs/version-2.6.3/concepts-messaging.md b/site2/website-next/versioned_docs/version-2.6.3/concepts-messaging.md
index 995d632..29cebdf 100644
--- a/site2/website-next/versioned_docs/version-2.6.3/concepts-messaging.md
+++ b/site2/website-next/versioned_docs/version-2.6.3/concepts-messaging.md
@@ -66,7 +66,7 @@ When you enable chunking, read the following instructions.
 - Chunking is only supported for persisted topics.
 - Chunking is only supported for the exclusive and failover subscription types.
 
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
+When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
 
 The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. And then the consumer stitches chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated to that large message. You can set the `maxPendingChuckedMessage` param [...]
 
diff --git a/site2/website-next/versioned_docs/version-2.6.1/developing-binary-protocol.md b/site2/website-next/versioned_docs/version-2.6.3/develop-binary-protocol.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.1/developing-binary-protocol.md
rename to site2/website-next/versioned_docs/version-2.6.3/develop-binary-protocol.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/developing-cpp.md b/site2/website-next/versioned_docs/version-2.6.3/develop-cpp.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.1/developing-cpp.md
rename to site2/website-next/versioned_docs/version-2.6.3/develop-cpp.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/developing-load-manager.md b/site2/website-next/versioned_docs/version-2.6.3/develop-load-manager.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.1/developing-load-manager.md
rename to site2/website-next/versioned_docs/version-2.6.3/develop-load-manager.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/developing-tools.md b/site2/website-next/versioned_docs/version-2.6.3/develop-tools.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.1/developing-tools.md
rename to site2/website-next/versioned_docs/version-2.6.3/develop-tools.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/getting-started-helm.md b/site2/website-next/versioned_docs/version-2.6.3/kubernetes-helm.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.1/getting-started-helm.md
rename to site2/website-next/versioned_docs/version-2.6.3/kubernetes-helm.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/getting-started-pulsar.md b/site2/website-next/versioned_docs/version-2.6.3/pulsar-2.0.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.1/getting-started-pulsar.md
rename to site2/website-next/versioned_docs/version-2.6.3/pulsar-2.0.md
diff --git a/site2/website-next/versioned_docs/version-2.6.2/reference-pulsar-admin.md b/site2/website-next/versioned_docs/version-2.6.3/pulsar-admin.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.2/reference-pulsar-admin.md
rename to site2/website-next/versioned_docs/version-2.6.3/pulsar-admin.md
diff --git a/site2/website-next/versioned_docs/version-2.6.3/reference-cli-tools.md b/site2/website-next/versioned_docs/version-2.6.3/reference-cli-tools.md
index 9c6ff20..a628c63 100644
--- a/site2/website-next/versioned_docs/version-2.6.3/reference-cli-tools.md
+++ b/site2/website-next/versioned_docs/version-2.6.3/reference-cli-tools.md
@@ -806,7 +806,7 @@ The table below lists the environment variables that you can use to configure th
 |BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
 
 
-### `autorecovery`
+### `auto-recovery`
 Runs an auto-recovery service
 
 Usage
diff --git a/site2/website-next/versioned_docs/version-2.6.1/getting-started-docker.md b/site2/website-next/versioned_docs/version-2.6.3/standalone-docker.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.1/getting-started-docker.md
rename to site2/website-next/versioned_docs/version-2.6.3/standalone-docker.md
diff --git a/site2/website-next/versioned_docs/version-2.6.4/administration-pulsar-manager.md b/site2/website-next/versioned_docs/version-2.6.4/administration-pulsar-manager.md
index 51ba663..ae6446b 100644
--- a/site2/website-next/versioned_docs/version-2.6.4/administration-pulsar-manager.md
+++ b/site2/website-next/versioned_docs/version-2.6.4/administration-pulsar-manager.md
@@ -103,7 +103,7 @@ If you want to enable JWT authentication, use one of the following methods.
 
 ```
 
-wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/apache-pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
+wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
 tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
 cd pulsar-manager
 tar -zxvf pulsar-manager.tar
diff --git a/site2/website-next/versioned_docs/version-2.6.4/client-libraries-dotnet.md b/site2/website-next/versioned_docs/version-2.6.4/client-libraries-dotnet.md
index e83d5a1..c35abb4 100644
--- a/site2/website-next/versioned_docs/version-2.6.4/client-libraries-dotnet.md
+++ b/site2/website-next/versioned_docs/version-2.6.4/client-libraries-dotnet.md
@@ -9,7 +9,7 @@ You can use the Pulsar C# client to create Pulsar producers and consumers in C#.
 
 ## Installation
 
-You can install the Pulsar C# client library either through the dotnet CLI or through the Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through the Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
+You can install the Pulsar C# client library either through the dotnet CLI or through the Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through the Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
 
 ### Prerequisites
 
diff --git a/site2/website-next/versioned_docs/version-2.6.4/client-libraries-go.md b/site2/website-next/versioned_docs/version-2.6.4/client-libraries-go.md
index 2987aff..c23aaec 100644
--- a/site2/website-next/versioned_docs/version-2.6.4/client-libraries-go.md
+++ b/site2/website-next/versioned_docs/version-2.6.4/client-libraries-go.md
@@ -192,8 +192,9 @@ if err != nil {
 defer client.Close()
 
 topicName := newTopicName()
-producer, err := client.CreateProducer(ProducerOptions{
-	Topic: topicName,
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic:           topicName,
+    DisableBatching: true,
 })
 if err != nil {
 	log.Fatal(err)
diff --git a/site2/website-next/versioned_docs/version-2.6.4/concepts-messaging.md b/site2/website-next/versioned_docs/version-2.6.4/concepts-messaging.md
index 5b1a6e3..5ad265b 100644
--- a/site2/website-next/versioned_docs/version-2.6.4/concepts-messaging.md
+++ b/site2/website-next/versioned_docs/version-2.6.4/concepts-messaging.md
@@ -66,7 +66,7 @@ When you enable chunking, read the following instructions.
 - Chunking is only supported for persisted topics.
 - Chunking is only supported for the exclusive and failover subscription types.
 
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
+When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
 
 The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. And then the consumer stitches chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated to that large message. You can set the `maxPendingChuckedMessage` param [...]
 
diff --git a/site2/website-next/versioned_docs/version-2.6.4/deploy-aws.md b/site2/website-next/versioned_docs/version-2.6.4/deploy-aws.md
index 6323051..7ae3bb0 100644
--- a/site2/website-next/versioned_docs/version-2.6.4/deploy-aws.md
+++ b/site2/website-next/versioned_docs/version-2.6.4/deploy-aws.md
@@ -210,7 +210,7 @@ Remember to enter this command just only once. If you attempt to enter this comm
 
 Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible. 
 
-(Optional) If you want to use any [built-in IO connectors](io-connectors), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use. 
+(Optional) If you want to use any [built-in IO connectors](io-connectors), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use. 
 
 To run the playbook, enter this command:
 
diff --git a/site2/website-next/versioned_docs/version-2.6.0/developing-binary-protocol.md b/site2/website-next/versioned_docs/version-2.6.4/develop-binary-protocol.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.0/developing-binary-protocol.md
rename to site2/website-next/versioned_docs/version-2.6.4/develop-binary-protocol.md
diff --git a/site2/website-next/versioned_docs/version-2.6.0/developing-cpp.md b/site2/website-next/versioned_docs/version-2.6.4/develop-cpp.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.0/developing-cpp.md
rename to site2/website-next/versioned_docs/version-2.6.4/develop-cpp.md
diff --git a/site2/website-next/versioned_docs/version-2.6.0/developing-load-manager.md b/site2/website-next/versioned_docs/version-2.6.4/develop-load-manager.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.0/developing-load-manager.md
rename to site2/website-next/versioned_docs/version-2.6.4/develop-load-manager.md
diff --git a/site2/website-next/versioned_docs/version-2.6.0/developing-tools.md b/site2/website-next/versioned_docs/version-2.6.4/develop-tools.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.0/developing-tools.md
rename to site2/website-next/versioned_docs/version-2.6.4/develop-tools.md
diff --git a/site2/website-next/versioned_docs/version-2.6.4/getting-started-helm.md b/site2/website-next/versioned_docs/version-2.6.4/kubernetes-helm.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.4/getting-started-helm.md
rename to site2/website-next/versioned_docs/version-2.6.4/kubernetes-helm.md
diff --git a/site2/website-next/versioned_docs/version-2.6.0/getting-started-pulsar.md b/site2/website-next/versioned_docs/version-2.6.4/pulsar-2.0.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.0/getting-started-pulsar.md
rename to site2/website-next/versioned_docs/version-2.6.4/pulsar-2.0.md
diff --git a/site2/website-next/versioned_docs/version-2.6.4/reference-cli-tools.md b/site2/website-next/versioned_docs/version-2.6.4/reference-cli-tools.md
index 9c6ff20..a628c63 100644
--- a/site2/website-next/versioned_docs/version-2.6.4/reference-cli-tools.md
+++ b/site2/website-next/versioned_docs/version-2.6.4/reference-cli-tools.md
@@ -806,7 +806,7 @@ The table below lists the environment variables that you can use to configure th
 |BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
 
 
-### `autorecovery`
+### `auto-recovery`
 Runs an auto-recovery service
 
 Usage
diff --git a/site2/website-next/versioned_docs/version-2.6.0/getting-started-docker.md b/site2/website-next/versioned_docs/version-2.6.4/standalone-docker.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.0/getting-started-docker.md
rename to site2/website-next/versioned_docs/version-2.6.4/standalone-docker.md
diff --git a/site2/website-next/versions.json b/site2/website-next/versions.json
index 3c22abe..a6046e3 100644
--- a/site2/website-next/versions.json
+++ b/site2/website-next/versions.json
@@ -1 +1 @@
-["2.9.1", "2.9.0"]
+["2.6.4", "2.6.3", "2.6.2", "2.6.1"]