Posted to commits@pulsar.apache.org by zh...@apache.org on 2020/06/17 13:19:03 UTC

[pulsar] branch master updated: Update the site for 2.6.0 (#7289)

This is an automated email from the ASF dual-hosted git repository.

zhaijia pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new df3668b  Update the site for 2.6.0 (#7289)
df3668b is described below

commit df3668bac1fdf1a2d0d364d18a1db4f84e6fb137
Author: lipenghui <pe...@apache.org>
AuthorDate: Wed Jun 17 21:18:50 2020 +0800

    Update the site for 2.6.0 (#7289)
    
    * Update the site for 2.6.0
    
    * Update the site for 2.6.0
---
 site2/website/releases.json                        |    1 +
 .../version-2.6.0/admin-api-brokers.md             |  149 ++
 .../version-2.6.0/admin-api-persistent-topics.md   |  697 ++++++
 .../version-2.6.0/administration-proxy.md          |  106 +
 .../version-2.6.0/administration-zk-bk.md          |  356 +++
 .../version-2.6.0/client-libraries-dotnet.md       |  430 ++++
 .../version-2.6.0/client-libraries-java.md         |  858 +++++++
 .../concepts-architecture-overview.md              |  153 ++
 .../version-2.6.0/concepts-clients.md              |   88 +
 .../version-2.6.0/concepts-messaging.md            |  494 ++++
 .../version-2.6.0/cookbooks-tiered-storage.md      |  301 +++
 .../version-2.6.0/deploy-kubernetes.md             |   11 +
 .../version-2.6.0/developing-load-manager.md       |  215 ++
 .../versioned_docs/version-2.6.0/functions-cli.md  |  198 ++
 .../version-2.6.0/functions-develop.md             |  984 ++++++++
 .../version-2.6.0/getting-started-clients.md       |   33 +
 .../version-2.6.0/getting-started-helm.md          |  333 +++
 .../version-2.6.0/getting-started-pulsar.md        |   67 +
 .../versioned_docs/version-2.6.0/helm-deploy.md    |  376 +++
 .../versioned_docs/version-2.6.0/helm-install.md   |   41 +
 .../versioned_docs/version-2.6.0/helm-overview.md  |  101 +
 .../versioned_docs/version-2.6.0/helm-prepare.md   |   85 +
 .../versioned_docs/version-2.6.0/helm-tools.md     |   43 +
 .../versioned_docs/version-2.6.0/helm-upgrade.md   |   34 +
 .../versioned_docs/version-2.6.0/io-connectors.md  |  226 ++
 .../version-2.6.0/io-debezium-source.md            |  496 ++++
 .../version-2.6.0/io-dynamodb-source.md            |   76 +
 .../versioned_docs/version-2.6.0/io-jdbc-sink.md   |  140 ++
 .../version-2.6.0/io-kinesis-sink.md               |   73 +
 .../version-2.6.0/io-kinesis-source.md             |   77 +
 .../versioned_docs/version-2.6.0/io-quickstart.md  |  813 +++++++
 .../website/versioned_docs/version-2.6.0/io-use.md | 1505 ++++++++++++
 .../version-2.6.0/performance-pulsar-perf.md       |  182 ++
 .../version-2.6.0/reference-configuration.md       |  551 +++++
 .../version-2.6.0/reference-metrics.md             |  419 ++++
 .../version-2.6.0/reference-pulsar-admin.md        | 2435 ++++++++++++++++++++
 .../version-2.6.0/security-encryption.md           |  180 ++
 .../versioned_docs/version-2.6.0/security-jwt.md   |  248 ++
 .../version-2.6.0/security-tls-authentication.md   |  186 ++
 .../version-2.6.0/security-tls-keystore.md         |  287 +++
 .../version-2.6.0/security-tls-transport.md        |  257 +++
 .../version-2.6.0/security-token-admin.md          |  159 ++
 .../version-2.6.0/sql-deployment-configurations.md |  159 ++
 .../versioned_sidebars/version-2.6.0-sidebars.json |  152 ++
 site2/website/versions.json                        |    1 +
 45 files changed, 14776 insertions(+)

diff --git a/site2/website/releases.json b/site2/website/releases.json
index 49c49d3..a2a6aab 100644
--- a/site2/website/releases.json
+++ b/site2/website/releases.json
@@ -1,5 +1,6 @@
 [
   "2.5.2",
+  "2.6.0",
   "2.5.1",
   "2.5.0",
   "2.4.2",
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-brokers.md b/site2/website/versioned_docs/version-2.6.0/admin-api-brokers.md
new file mode 100644
index 0000000..22fdabf
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-brokers.md
@@ -0,0 +1,149 @@
+---
+id: version-2.6.0-admin-api-brokers
+title: Managing Brokers
+sidebar_label: Brokers
+original_id: admin-api-brokers
+---
+
+Pulsar brokers consist of two components:
+
+1. An HTTP server exposing a {@inject: rest:REST:/} interface for administration and [topic](reference-terminology.md#topic) lookup.
+2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers.
+
+[Brokers](reference-terminology.md#broker) can be managed via:
+
+* The [`brokers`](reference-pulsar-admin.md#brokers) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool
+* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API
+* The `brokers` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin.html} object in the [Java API](client-libraries-java.md)
+
+In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration).
+
+> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters.
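+
+The Java snippets on this page assume an existing `PulsarAdmin` instance named `admin`. A minimal sketch for creating one, assuming the broker (or proxy) web service is reachable at `http://localhost:8080`, looks like this:
+
+```java
+import org.apache.pulsar.client.admin.PulsarAdmin;
+
+// Build an admin client against the cluster's HTTP service.
+// The URL is an assumption; point it at your own brokers or proxy.
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+```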
+
+## Brokers resources
+
+### List active brokers
+
+Fetch all available active brokers that are serving traffic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin brokers list use
+```
+
+```
+broker1.use.org.com:8080
+```
+
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers}
+
+#### Java
+
+```java
+admin.brokers().getActiveBrokers(clusterName);
+```
+
+### List namespaces owned by a given broker
+
+Fetch all namespaces that are owned and served by a given broker.
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin brokers namespaces use \
+  --url broker1.use.org.com:8080
+```
+
+```json
+{
+  "my-property/use/my-ns/0x00000000_0xffffffff": {
+    "broker_assignment": "shared",
+    "is_controlled": false,
+    "is_active": true
+  }
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes}
+
+#### Java
+
+```java
+admin.brokers().getOwnedNamespaces(cluster,brokerUrl);
+```
+
+### Dynamic broker configuration
+
+One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker).
+
+But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values.
+
+* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, enabling you to [update config values](#update-dynamic-configuration) and more.
+* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint.
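+
+As a sketch of the full round trip through the Java admin API (assuming an existing `PulsarAdmin` instance named `admin`, as elsewhere on this page):
+
+```java
+// List the parameter names that can be updated dynamically.
+java.util.List<String> updatable = admin.brokers().getDynamicConfigurationNames();
+
+// Update one of them; ZooKeeper notifies every broker of the change.
+admin.brokers().updateDynamicConfiguration("brokerShutdownTimeoutMs", "100");
+
+// Read back all values that have been dynamically overridden.
+java.util.Map<String, String> overridden = admin.brokers().getAllDynamicConfigurations();
+```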
+
+### Update dynamic configuration
+
+#### pulsar-admin
+
+The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand updates an existing configuration parameter. It takes two arguments: the parameter name, passed with the `--config` flag, and the new value, passed with the `--value` flag. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter:
+
+```shell
+$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration}
+
+#### Java
+
+```java
+admin.brokers().updateDynamicConfiguration(configName, configValue);
+```
+
+### List updatable values
+
+Fetch a list of all potentially updatable configuration parameters.
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin brokers list-dynamic-config
+brokerShutdownTimeoutMs
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName}
+
+#### Java
+
+```java
+admin.brokers().getDynamicConfigurationNames();
+```
+
+### List all
+
+Fetch a list of all parameters that have been dynamically updated.
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin brokers get-all-dynamic-config
+brokerShutdownTimeoutMs:100
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations}
+
+#### Java
+
+```java
+admin.brokers().getAllDynamicConfigurations();
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.6.0/admin-api-persistent-topics.md
new file mode 100644
index 0000000..8aeb406
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-persistent-topics.md
@@ -0,0 +1,697 @@
+---
+id: version-2.6.0-admin-api-persistent-topics
+title: Managing persistent topics
+sidebar_label: Persistent topics
+original_id: admin-api-persistent-topics
+---
+
+A persistent topic is a logical endpoint for publishing and consuming messages. Producers publish messages to the topic, and consumers subscribe to the topic to consume the messages published to it.
+
+In all of the instructions and commands below, the topic name structure is:
+
+
+```shell
+persistent://tenant/namespace/topic
+```
+
+## Persistent topics resources
+
+### List of topics
+
+It provides a list of the persistent topics that exist under a given namespace.
+
+#### pulsar-admin
+
+The list of topics can be fetched using the [`list`](../../reference/CliTools#list) command.
+
+```shell
+$ pulsar-admin persistent list \
+  my-tenant/my-namespace
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace|operation/getList}
+
+#### Java
+
+```java
+String namespace = "my-tenant/my-namespace";
+admin.persistentTopics().getList(namespace);
+```
+
+### Grant permission
+
+It grants a client role permissions to perform specific actions on a given topic.
+
+#### pulsar-admin
+
+Permissions can be granted using the [`grant-permission`](../../reference/CliTools#grant-permission) command.
+
+```shell
+$ pulsar-admin persistent grant-permission \
+  --actions produce,consume --role application1 \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String role = "test-role";
+Set<AuthAction> actions  = Sets.newHashSet(AuthAction.produce, AuthAction.consume);
+admin.persistentTopics().grantPermission(topic, role, actions);
+```
+
+### Get permission
+
+It fetches the permissions granted on a given topic.
+
+#### pulsar-admin
+
+Permissions can be fetched using the [`permissions`](../../reference/CliTools#permissions) command.
+
+```shell
+$ pulsar-admin persistent permissions \
+  persistent://test-tenant/ns1/tp1
+
+{
+    "application1": [
+        "consume",
+        "produce"
+    ]
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getPermissions(topic);
+```
+
+### Revoke permission
+
+It revokes a permission that was granted to a client role.
+
+#### pulsar-admin
+
+Permissions can be revoked using the [`revoke-permission`](../../reference/CliTools#revoke-permission) command.
+
+```shell
+$ pulsar-admin persistent revoke-permission \
+  --role application1 \
+  persistent://test-tenant/ns1/tp1
+
+{
+  "application1": [
+    "consume",
+    "produce"
+  ]
+}
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/persistent/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String role = "test-role";
+admin.persistentTopics().revokePermissions(topic, role);
+```
+
+### Delete topic
+
+It deletes a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to it.
+
+#### pulsar-admin
+
+A topic can be deleted using the [`delete`](../../reference/CliTools#delete) command.
+
+```shell
+$ pulsar-admin persistent delete \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/persistent/:tenant/:namespace/:topic|operation/deleteTopic}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().delete(topic);
+```
+
+### Unload topic
+
+It unloads a topic.
+
+#### pulsar-admin
+
+A topic can be unloaded using the [`unload`](../../reference/CliTools#unload) command.
+
+```shell
+$ pulsar-admin persistent unload \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/unload|operation/unloadTopic}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().unload(topic);
+```
+
+### Get stats
+
+It shows current statistics of a given non-partitioned topic.
+
+  -   **msgRateIn**: The sum of all local and replication publishers' publish rates in messages per second
+
+  -   **msgThroughputIn**: Same as above, but in bytes per second instead of messages per second
+
+  -   **msgRateOut**: The sum of all local and replication consumers' dispatch rates in messages per second
+
+  -   **msgThroughputOut**: Same as above, but in bytes per second instead of messages per second
+
+  -   **averageMsgSize**: The average size in bytes of messages published within the last interval
+
+  -   **storageSize**: The sum of the ledgers' storage size for this topic. Space used to store the messages for the topic
+
+  -   **publishers**: The list of all local publishers into the topic. There can be zero or thousands
+
+      -   **msgRateIn**: Total rate of messages published by this publisher in messages per second 
+
+      -   **msgThroughputIn**: Total throughput of the messages published by this publisher in bytes per second
+
+      -   **averageMsgSize**: Average message size in bytes from this publisher within the last interval
+
+      -   **producerId**: Internal identifier for this producer on this topic
+
+      -   **producerName**: Internal identifier for this producer, generated by the client library
+
+      -   **address**: IP address and source port for the connection of this producer
+
+      -   **connectedSince**: Timestamp this producer was created or last reconnected
+
+  -   **subscriptions**: The list of all local subscriptions to the topic
+
+      -   **my-subscription**: The name of this subscription (client defined)
+
+          -   **msgRateOut**: Total rate of messages delivered on this subscription (msg/s)
+
+          -   **msgThroughputOut**: Total throughput delivered on this subscription (bytes/s)
+
+          -   **msgBacklog**: Number of messages in the subscription backlog
+
+          -   **type**: This subscription type
+
+          -   **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL
+          
+          -   **lastExpireTimestamp**: The timestamp of the last message expiry execution
+          
+          -   **lastConsumedFlowTimestamp**: The timestamp of the last flow command received
+          
+          -   **lastConsumedTimestamp**: The most recent consume timestamp among all the consumers
+          
+          -   **lastAckedTimestamp**: The most recent acknowledgment timestamp among all the consumers
+
+          -   **consumers**: The list of connected consumers for this subscription
+
+                -   **msgRateOut**: Total rate of messages delivered to the consumer (msg/s)
+
+                -   **msgThroughputOut**: Total throughput delivered to the consumer (bytes/s)
+
+                -   **consumerName**: Internal identifier for this consumer, generated by the client library
+
+                -   **availablePermits**: The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and receive() isn't being called. A nonzero value means this consumer is ready to be dispatched messages.
+
+                -   **unackedMessages**: Number of unacknowledged messages for the consumer
+
+                -   **blockedConsumerOnUnackedMsgs**: Flag indicating whether the consumer is blocked because it reached the threshold of unacknowledged messages
+                
+                -   **lastConsumedTimestamp**: The timestamp at which the consumer last consumed a message
+          
+                -   **lastAckedTimestamp**: The timestamp at which the consumer last acknowledged a message
+
+  -   **replication**: This section gives the stats for cross-colo replication of this topic
+
+      -   **msgRateIn**: Total rate of messages received from the remote cluster (msg/s)
+
+      -   **msgThroughputIn**: Total throughput received from the remote cluster (bytes/s)
+
+      -   **msgRateOut**: Total rate of messages delivered to the replication-subscriber (msg/s)
+
+      -   **msgThroughputOut**: Total throughput delivered to the replication-subscriber (bytes/s)
+
+      -   **msgRateExpired**: Total rate of messages expired (msg/s)
+
+      -   **replicationBacklog**: Number of messages pending to be replicated to remote cluster
+
+      -   **connected**: Whether the outbound replicator is connected
+
+      -   **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is true
+
+      -   **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker
+
+      -   **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.
+
+      -   **outboundConnection**: Address of outbound replication connection
+
+      -   **outboundConnectedSince**: Timestamp of establishing outbound connection
+
+```json
+{
+  "msgRateIn": 4641.528542257553,
+  "msgThroughputIn": 44663039.74947473,
+  "msgRateOut": 0,
+  "msgThroughputOut": 0,
+  "averageMsgSize": 1232439.816728665,
+  "storageSize": 135532389160,
+  "publishers": [
+    {
+      "msgRateIn": 57.855383881403576,
+      "msgThroughputIn": 558994.7078932219,
+      "averageMsgSize": 613135,
+      "producerId": 0,
+      "producerName": null,
+      "address": null,
+      "connectedSince": null
+    }
+  ],
+  "subscriptions": {
+    "my-topic_subscription": {
+      "msgRateOut": 0,
+      "msgThroughputOut": 0,
+      "msgBacklog": 116632,
+      "type": null,
+      "msgRateExpired": 36.98245516804671,
+      "consumers": []
+    }
+  },
+  "replication": {}
+}
+```
+
+#### pulsar-admin
+
+Topic stats can be fetched using the [`stats`](../../reference/CliTools#stats) command.
+
+```shell
+$ pulsar-admin persistent stats \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/stats|operation/getStats}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getStats(topic);
+```
+
+### Get internal stats
+
+It shows detailed statistics of a topic.
+
+  -   **entriesAddedCounter**: Messages published since this broker loaded this topic
+
+  -   **numberOfEntries**: Total number of messages being tracked
+
+  -   **totalSize**: Total storage size in bytes of all messages
+
+  -   **currentLedgerEntries**: Count of messages written to the ledger currently open for writing
+
+  -   **currentLedgerSize**: Size in bytes of messages written to ledger currently open for writing
+
+  -   **lastLedgerCreatedTimestamp**: The time when the last ledger was created
+
+  -   **lastLedgerCreationFailureTimestamp**: The time when the last ledger creation failed
+
+  -   **waitingCursorsCount**: How many cursors are "caught up" and waiting for a new message to be published
+
+  -   **pendingAddEntriesCount**: How many messages have (asynchronous) write requests awaiting completion
+
+  -   **lastConfirmedEntry**: The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger has been opened or is currently being opened but has no entries written yet.
+
+  -   **state**: The state of this ledger for writing. LedgerOpened means we have a ledger open for saving published messages.
+
+  -   **ledgers**: The ordered list of all ledgers for this topic holding its messages
+
+      -   **ledgerId**: Id of this ledger
+
+      -   **entries**: Total number of entries that belong to this ledger
+
+      -   **size**: Size of messages written to this ledger (in bytes)
+
+      -   **offloaded**: Whether this ledger is offloaded
+
+  -   **cursors**: The list of all cursors on this topic. There will be one for every subscription you saw in the topic stats.
+
+      -   **markDeletePosition**: All messages before the markDeletePosition have been acknowledged by the subscriber.
+
+      -   **readPosition**: The latest position of the subscriber for reading messages
+
+      -   **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting on new messages to be published.
+
+      -   **pendingReadOps**: The number of outstanding read requests to BookKeeper currently in progress
+
+      -   **messagesConsumedCounter**: Number of messages this cursor has acked since this broker loaded this topic
+
+      -   **cursorLedger**: The ledger being used to persistently store the current markDeletePosition
+
+      -   **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition
+
+      -   **individuallyDeletedMessages**: If acknowledgments arrive out of order, this shows the ranges of messages acknowledged between the markDeletePosition and the read position
+
+      -   **lastLedgerSwitchTimestamp**: The last time the cursor ledger was rolled over
+
+      -   **state**: The state of the cursor ledger: Open means we have a cursor ledger for saving updates of the markDeletePosition.
+
+```json
+{
+    "entriesAddedCounter": 20449518,
+    "numberOfEntries": 3233,
+    "totalSize": 331482,
+    "currentLedgerEntries": 3233,
+    "currentLedgerSize": 331482,
+    "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825",
+    "lastLedgerCreationFailureTimestamp": null,
+    "waitingCursorsCount": 1,
+    "pendingAddEntriesCount": 0,
+    "lastConfirmedEntry": "324711539:3232",
+    "state": "LedgerOpened",
+    "ledgers": [
+        {
+            "ledgerId": 324711539,
+            "entries": 0,
+            "size": 0
+        }
+    ],
+    "cursors": {
+        "my-subscription": {
+            "markDeletePosition": "324711539:3133",
+            "readPosition": "324711539:3233",
+            "waitingReadOp": true,
+            "pendingReadOps": 0,
+            "messagesConsumedCounter": 20449501,
+            "cursorLedger": 324702104,
+            "cursorLedgerLastEntry": 21,
+            "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]",
+            "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313",
+            "state": "Open"
+        }
+    }
+}
+```
+
+
+#### pulsar-admin
+
+Topic internal stats can be fetched using the [`stats-internal`](../../reference/CliTools#stats-internal) command.
+
+```shell
+$ pulsar-admin persistent stats-internal \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/internalStats|operation/getInternalStats}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getInternalStats(topic);
+```
+
+### Peek messages
+
+It peeks N messages for a specific subscription of a given topic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent peek-messages \
+  --count 10 --subscription my-subscription \
+  persistent://test-tenant/ns1/tp1
+
+Message ID: 315674752:0
+Properties:  {  "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451"  }
+msg-payload
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+int numMessages = 1;
+admin.persistentTopics().peekMessages(topic, subName, numMessages);
+```
+
+### Get message by ID
+
+It fetches the message with the given ledger ID and entry ID.
+
+#### pulsar-admin
+
+```shell
+$ ./bin/pulsar-admin topics get-message-by-id \
+  persistent://public/default/my-topic \
+  -l 10 -e 0
+```
+
+#### REST API
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+long ledgerId = 10;
+long entryId = 10;
+admin.persistentTopics().getMessageById(topic, ledgerId, entryId);
+```
+
+### Skip messages
+
+It skips N messages for a specific subscription of a given topic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent skip \
+  --count 10 --subscription my-subscription \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+int numMessages = 1;
+admin.persistentTopics().skipMessages(topic, subName, numMessages);
+```
+
+### Skip all messages
+
+It skips all old messages for a specific subscription of a given topic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent skip-all \
+  --subscription my-subscription \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages}
+
+[More info](../../reference/RestApi#/admin/persistent/:tenant/:namespace/:topic/subscription/:subName/skip_all)
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+admin.persistentTopics().skipAllMessages(topic, subName);
+```
+
+### Reset cursor
+
+It resets a subscription's cursor to the position it was at X minutes earlier. It essentially calculates the time and the cursor position X minutes earlier and resets the cursor to that position.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent reset-cursor \
+  --subscription my-subscription --time 10 \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+long timestamp = 2342343L;
+admin.persistentTopics().resetCursor(topic, subName, timestamp);
+```
+
+### Lookup of topic
+
+It locates the broker URL that serves the given topic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent lookup \
+  persistent://test-tenant/ns1/tp1
+
+ "pulsar://broker1.org.com:4480"
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/lookup/v2/topic/persistent/:tenant/:namespace/:topic|/}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.lookups().lookupTopic(topic);
+```
+
+### Get bundle
+
+It gives the range of the bundle that contains the given topic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent bundle-range \
+  persistent://test-tenant/ns1/tp1
+
+ "0x00000000_0xffffffff"
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.lookups().getBundleRange(topic);
+```
+
+
+### Get subscriptions
+
+It shows all subscription names for a given topic.
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin persistent subscriptions \
+  persistent://test-tenant/ns1/tp1
+
+ my-subscription
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getSubscriptions(topic);
+```
+
+### Unsubscribe
+
+It unsubscribes a subscription that is no longer processing messages.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent unsubscribe \
+  --subscription my-subscription \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subscriptionName = "my-subscription";
+admin.persistentTopics().deleteSubscription(topic, subscriptionName);
+```
+
+### Last Message Id
+
+It gives the last committed message ID for a persistent topic. This feature is available in 2.3.0 and later releases.
+
+```shell
+pulsar-admin topics last-message-id topic-name
+```
+
+#### REST API
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().getLastMessageId(topic);
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/administration-proxy.md b/site2/website/versioned_docs/version-2.6.0/administration-proxy.md
new file mode 100644
index 0000000..4dce901
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/administration-proxy.md
@@ -0,0 +1,106 @@
+---
+id: version-2.6.0-administration-proxy
+title: The Pulsar proxy
+sidebar_label: Pulsar proxy
+original_id: administration-proxy
+---
+
+The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) is an optional gateway that you can run in front of the brokers in a Pulsar cluster. You can run a Pulsar proxy in cases when direct connections between clients and Pulsar brokers are either infeasible, undesirable, or both, for example when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform.
+
+## Configure the proxy
+
+The proxy must have some way to find the addresses of the brokers of the cluster. You can do this by either configuring the proxy to connect directly to service discovery or by specifying a broker URL in the configuration. 
+
+### Option 1: Use service discovery
+
+Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
+```properties
+zookeeperServers=zk-0,zk-1,zk-2
+configurationStoreServers=zk-0:2184,zk-remote:2184
+```
+
+> If you use service discovery, the network ACL must allow the proxy to talk to the ZooKeeper nodes on the ZooKeeper client port, which is usually 2181, and on the configuration store client port, which is 2184 by default. Opening the network ACLs means that if someone compromises a proxy, they have full access to ZooKeeper. For this reason, using broker URLs to configure the proxy is more secure.
+
+### Option 2: Use broker URLs
+
+The more secure method of configuring the proxy is to specify a URL to connect to the brokers.
+
+> [Authorization](security-authorization.md#enable-authorization-and-assign-superusers) at the proxy requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you should disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.
+
+You can configure the broker URLs in `conf/proxy.conf` as follows.
+
+```properties
+brokerServiceURL=pulsar://brokers.example.com:6650
+brokerWebServiceURL=http://brokers.example.com:8080
+functionWorkerWebServiceURL=http://function-workers.example.com:8080
+```
+
+Or if you use TLS:
+```properties
+brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
+brokerWebServiceURLTLS=https://brokers.example.com:8443
+functionWorkerWebServiceURL=https://function-workers.example.com:8443
+```
+
+The hostname in the URLs provided should be a DNS entry that points to multiple brokers, or a virtual IP address backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.
+
+The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.
+
+Note that if you do not use functions, then you do not need to configure `functionWorkerWebServiceURL`.
+
+## Start the proxy
+
+To start the proxy:
+
+```bash
+$ cd /path/to/pulsar/directory
+$ bin/pulsar proxy
+```
+
+> You can run as many instances of the Pulsar proxy in a cluster as you want.
+
+
+## Stop the proxy
+
+The Pulsar proxy runs by default in the foreground. To stop the proxy, simply stop the process in which the proxy is running.
+
+## Proxy frontends
+
+You can run the Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
+
+## Use Pulsar clients with the proxy
+
+Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, then the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.
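+
+For instance, a minimal Java client connection through the proxy, assuming the example address above, looks like this sketch:
+
+```java
+import org.apache.pulsar.client.api.PulsarClient;
+
+// Connect to the proxy (or its load-balancing frontend), not to an individual broker.
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://pulsar.cluster.default:6650")
+        .build();
+```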
+
+## Proxy configuration
+
+You can configure the Pulsar proxy using the [`proxy.conf`](reference-configuration.md#proxy) configuration file. The following parameters are available in that file:
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  The ZooKeeper quorum connection string (as a comma-separated list)  ||
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
+|servicePort| The port to use to serve binary Protobuf requests |6650|
+|servicePortTls|  The port to use to serve binary Protobuf TLS requests  |6651|
+|statusFilePath | Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks ||
+|advertisedAddress|Hostname or IP address the service advertises to the outside world.|`InetAddress.getLocalHost().getHostname()`|
+|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy  |false|
+|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true|
+|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
+|authorizationEnabled|  Whether authorization is enforced by the Pulsar proxy |false|
+|authorizationProvider| Authorization provider as a fully qualified class name  |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
+|brokerClientAuthenticationPlugin|  The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientAuthenticationParameters|  The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientTrustCertsFilePath|  The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
+|superUserRoles|  Role names that are treated as “super-users,” meaning that they are able to perform all admin operations ||
+|forwardAuthorizationCredentials| Whether client authorization credentials are forwarded to the broker for re-authorization. Authentication must be enabled via `authenticationEnabled=true` for this to take effect.  |false|
+|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy rejects requests beyond that. |10000|
+|maxConcurrentLookupRequests| Max concurrent lookup requests. The proxy rejects requests beyond that. |50000|
+|tlsEnabledInProxy| Whether TLS is enabled for the proxy  |false|
+|tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar brokers |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file ||
+|tlsHostnameVerificationEnabled|  Whether the hostname is validated when the proxy creates a TLS connection with brokers  |false|
+|tlsRequireTrustedClientCertOnConnect|  Whether client certificates are required for TLS. Connections are rejected if the client certificate is not trusted. |false|
diff --git a/site2/website/versioned_docs/version-2.6.0/administration-zk-bk.md b/site2/website/versioned_docs/version-2.6.0/administration-zk-bk.md
new file mode 100644
index 0000000..ca7a0d7
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/administration-zk-bk.md
@@ -0,0 +1,356 @@
+---
+id: version-2.6.0-administration-zk-bk
+title: ZooKeeper and BookKeeper administration
+sidebar_label: ZooKeeper and BookKeeper
+original_id: administration-zk-bk
+---
+
+Pulsar relies on two external systems for essential tasks:
+
+* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
+* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.
+
+ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.
+
+> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar.
+
+
+## ZooKeeper
+
+Each Pulsar instance relies on two separate ZooKeeper quorums.
+
+* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
+* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. 
+
+### Deploy local ZooKeeper
+
+ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.
+
+To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*.
+
+To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+On each host, you need to specify the node ID in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this:
+
+```shell
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+```
+
+On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on.
+
+Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+$ bin/pulsar-daemon start zookeeper
+```
+
+### Deploy configuration store
+
+The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
+
+If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks.
+
+#### Single-cluster Pulsar instance
+
+If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.
+
+To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+```
+
+As before, create the `myid` files for each server on `data/global-zookeeper/myid`.
+
+#### Multi-cluster Pulsar instance
+
+When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
+
+The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.
+
+Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
+
+For example, assume a Pulsar instance with the clusters `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:
+
+```
+zk[1-3].${CLUSTER}.example.com
+```
+
+In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
+
+This guarantees that writes to the configuration store remain possible even if one of these regions is unreachable.
+
+The ZK configuration on all the servers looks like this:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+server.4=zk1.us-central.example.com:2185:2186
+server.5=zk2.us-central.example.com:2185:2186
+server.6=zk3.us-central.example.com:2185:2186:observer
+server.7=zk1.us-east.example.com:2185:2186
+server.8=zk2.us-east.example.com:2185:2186
+server.9=zk3.us-east.example.com:2185:2186:observer
+server.10=zk1.eu-central.example.com:2185:2186:observer
+server.11=zk2.eu-central.example.com:2185:2186:observer
+server.12=zk3.eu-central.example.com:2185:2186:observer
+server.13=zk1.ap-south.example.com:2185:2186:observer
+server.14=zk2.ap-south.example.com:2185:2186:observer
+server.15=zk3.ap-south.example.com:2185:2186:observer
+```
+
+Additionally, ZK observers need to have:
+
+```properties
+peerType=observer
+```
+
+##### Start the service
+
+Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon)
+
+```shell
+$ bin/pulsar-daemon start configuration-store
+```
+
+
+
+### ZooKeeper configuration
+
+In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
+
+#### Local ZooKeeper
+
+The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
+
+|Name|Description|Default|
+|---|---|---|
+|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
+|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
+|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
+|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
+|clientPort|  The port on which the ZooKeeper server listens for connections. |2181|
+|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
+|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables. Read this guide before enabling auto purge. |1|
+|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
+
+
+#### Configuration Store
+
+The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for configuration store. The table below shows the available parameters:
+
+
+## BookKeeper
+
+BookKeeper is responsible for all durable message storage in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs, called ledgers. Individual BookKeeper servers are also called *bookies*.
+
+> For a guide to managing message persistence, retention, and expiry in Pulsar, see [this cookbook](cookbooks-retention-expiry.md).
+
+### Hardware considerations
+
+Bookie hosts are responsible for storing message data on disk. In order for bookies to provide optimal performance, ensuring that the bookies have a suitable hardware configuration is essential. There are two key dimensions to bookie hardware capacity:
+
+* Disk I/O capacity (read/write)
+* Storage capacity
+
+Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:
+
+* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
+* A **ledger storage device** is where data is stored until all consumers have acknowledged the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read only when consumers drain it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
+
+### Configure BookKeeper
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for local ZooKeeper of the Pulsar cluster.
+
+Minimum configuration changes required in `conf/bookkeeper.conf` are:
+
+```properties
+# Change to point to journal disk mount point
+journalDirectory=data/bookkeeper/journal
+
+# Point to ledger storage disk mount point
+ledgerDirectories=data/bookkeeper/ledgers
+
+# Point to local ZK quorum
+zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+
+# Change the ledger manager type
+ledgerManagerType=hierarchical
+```
+
+To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
+
+> Consult the official [BookKeeper docs](http://bookkeeper.apache.org) for more information about BookKeeper.
+
+### Deploy BookKeeper
+
+BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
+
+Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
+
+### Start bookies manually
+
+You can start up a bookie in two ways: in the foreground or as a background daemon.
+
+To start up a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:
+
+```bash
+$ bin/bookkeeper bookie
+```
+
+To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start bookie
+```
+
+You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
+
+```shell
+$ bin/bookkeeper shell bookiesanity
+```
+
+This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger.
+
+### Decommissioning bookies cleanly
+
+
+If you want to decommission a bookie, follow this process to verify that the decommissioning is done safely.
+
+#### Before we decommission
+1. Ensure that the state of your cluster can support decommissioning the target bookie. Check that `EnsembleSize >= Write Quorum >= Ack Quorum` still holds with one fewer bookie.
+
+2. Ensure that the target bookie shows up in the `listbookies` command.
+
+3. Ensure that there is no other process ongoing (such as an upgrade).
+
+#### Process of Decommissioning
+1. Log on to the bookie node and check whether there are under-replicated ledgers. If there are, the decommission command forces them to be replicated: `$ bin/bookkeeper shell listunderreplicated`
+
+2. Stop the bookie by killing the bookie process. If you are deployed in a Kubernetes environment, make sure that no liveness or readiness probes are set up that would spin the bookies back up.
+
+3. Run the decommission command. If you are logged on to the node you wish to decommission, you do not need to provide `-bookieid`: `$ bin/bookkeeper shell decommissionbookie`. If you run the command from another bookie node, pass the target bookie ID with `-bookieid`: `$ bin/bookkeeper shell decommissionbookie -bookieid <target bookieid>`
+
+4. Validate that there are no ledgers on the decommissioned bookie: `$ bin/bookkeeper shell listledgers -bookieid <target bookieid>`
+
+As a final verification, run the following commands to check that the decommissioned bookie no longer shows up in the list of bookies:
+
+```bash
+./bookkeeper shell listbookies -rw -h
+./bookkeeper shell listbookies -ro -h
+```
+
+## BookKeeper persistence policies
+
+In Pulsar, you can set *persistence policies*, at the namespace level, that determine how BookKeeper handles persistent storage of messages. Policies determine four things:
+
+* The number of acks (guaranteed copies) to wait for each ledger entry.
+* The number of bookies to use for a topic.
+* The number of writes to make for each ledger entry.
+* The throttling rate for mark-delete operations.
+
+### Set persistence policies
+
+You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.
+
+#### Pulsar-admin
+
+Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:
+
+Flag | Description | Default
+:----|:------------|:-------
+`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
+`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
+`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
+`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
+
+The following is an example:
+
+```shell
+$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
+  --bookkeeper-ack-quorum 2 \
+  --bookkeeper-ensemble 3
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence}
+
+#### Java
+
+```java
+int bkEnsemble = 3;
+int bkQuorum = 2;
+int bkAckQuorum = 2;
+double markDeleteRate = 0.7;
+PersistencePolicies policies =
+  new PersistencePolicies(bkEnsemble, bkQuorum, bkAckQuorum, markDeleteRate);
+admin.namespaces().setPersistence(namespace, policies);
+```
+
+### List persistence policies
+
+You can see which persistence policy currently applies to a namespace.
+
+#### Pulsar-admin
+
+Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.
+
+The following is an example:
+
+```shell
+$ pulsar-admin namespaces get-persistence my-tenant/my-ns
+{
+  "bookkeeperEnsemble": 1,
+  "bookkeeperWriteQuorum": 1,
+  "bookkeeperAckQuorum": 1,
+  "managedLedgerMaxMarkDeleteRate": 0
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence}
+
+#### Java
+
+```java
+PersistencePolicies policies = admin.namespaces().getPersistence(namespace);
+```
+
+## How Pulsar uses ZooKeeper and BookKeeper
+
+This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:
+
+![ZooKeeper and BookKeeper](assets/pulsar-system-architecture.png)
+
+Each Pulsar cluster consists of one or more message brokers. Each broker relies on an ensemble of bookies.
diff --git a/site2/website/versioned_docs/version-2.6.0/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.6.0/client-libraries-dotnet.md
new file mode 100644
index 0000000..f2e397e
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/client-libraries-dotnet.md
@@ -0,0 +1,430 @@
+---
+id: version-2.6.0-client-libraries-dotnet
+title: Pulsar C# client
+sidebar_label: C#
+original_id: client-libraries-dotnet
+---
+
+You can use the Pulsar C# client to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe.
+
+## Installation
+
+You can install the Pulsar C# client library either through the dotnet CLI or through Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
+
+### Prerequisites
+
+Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads.
+
+### Procedures
+
+To install the Pulsar C# client library, follow these steps:
+
+1. Create a project.
+
+   1. Create a folder for the project.
+
+   2. Open a terminal window and switch to the new folder.
+
+   3. Create the project using the following command.
+
+        ```
+        dotnet new console
+        ```
+
+   4. Use `dotnet run` to test that the app has been created properly.
+
+2. Add the DotPulsar NuGet package.
+
+   1. Use the following command to install the `DotPulsar` package:
+
+        ```
+        dotnet add package DotPulsar
+        ```
+
+   2. After the command completes, open the `.csproj` file to see the added reference. The version number resolves to the latest DotPulsar release at the time you run the command; `0.8.0` below is illustrative.
+
+        ```xml
+        <ItemGroup>
+        <PackageReference Include="DotPulsar" Version="0.8.0" />
+        </ItemGroup>
+        ```
+
+3. Use the DotPulsar API in the app.
+
+   1. Open the `Program.cs` file and add the following line at the top of the file:
+
+        ```c#
+        using DotPulsar;
+        ```
+
+   2. Replace the `Main` function with the following:
+
+        ```c#
+        static void Main(string[] args)
+        {
+            // Connects to pulsar://localhost:6650 by default
+            var client = PulsarClient.Builder().Build();
+            Console.WriteLine("Pulsar client created");
+        }
+        ```
+
+   3. Build and run the app by using the `dotnet run` command. The output should be:
+
+        ```output
+        Pulsar client created
+        ```
+
+## Client
+
+This section describes some configuration examples for the Pulsar C# client.
+
+### Create client
+
+This example shows how to create a Pulsar C# client connected to the local host.
+
+```c#
+var client = PulsarClient.Builder().Build();
+```
+
+To create a Pulsar C# client by using the builder, you can configure the following options. An example of overriding the defaults follows the table.
+
+| Option | Description | Default |
+| ---- | ---- | ---- |
+| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 |
+| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s |
+
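+The following is a minimal sketch of overriding these defaults; it assumes your DotPulsar version exposes the `ServiceUrl` and `RetryInterval` builder methods, and the broker address is illustrative.
+
+```c#
+var client = PulsarClient.Builder()
+                         .ServiceUrl(new Uri("pulsar://my-broker:6650")) // illustrative address
+                         .RetryInterval(TimeSpan.FromSeconds(5))
+                         .Build();
+```
+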
+### Create producer
+
+This section describes how to create a producer.
+
+- Create a producer by using the builder.
+
+    ```c#
+    var producer = client.NewProducer()
+                        .Topic("persistent://public/default/mytopic")
+                        .Create();
+    ```
+
+- Create a producer without using the builder.
+
+    ```c#
+    var options = new ProducerOptions("persistent://public/default/mytopic");
+    var producer = client.CreateProducer(options);
+    ```
+
+### Create consumer
+
+This section describes how to create a consumer.
+
+- Create a consumer by using the builder.
+
+    ```c#
+    var consumer = client.NewConsumer()
+                        .SubscriptionName("MySubscription")
+                        .Topic("persistent://public/default/mytopic")
+                        .Create();
+    ```
+
+- Create a consumer without using the builder.
+
+    ```c#
+    var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic");
+    var consumer = client.CreateConsumer(options);
+    ```
+
+### Create reader
+
+This section describes how to create a reader.
+
+- Create a reader by using the builder.
+
+    ```c#
+    var reader = client.NewReader()
+                    .StartMessageId(MessageId.Earliest)
+                    .Topic("persistent://public/default/mytopic")
+                    .Create();
+    ```
+
+- Create a reader without using the builder.
+
+    ```c#
+    var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic");
+    var reader = client.CreateReader(options);
+    ```
+
+### Configure encryption policies
+
+The Pulsar C# client supports four kinds of encryption policies:
+
+- `EnforceUnencrypted`: always use unencrypted connections.
+- `EnforceEncrypted`: always use encrypted connections.
+- `PreferUnencrypted`: use unencrypted connections, if possible.
+- `PreferEncrypted`: use encrypted connections, if possible.
+
+This example shows how to set the `EnforceEncrypted` encryption policy.
+
+```c#
+var client = PulsarClient.Builder()
+                         .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
+                         .Build();
+```
+
+### Configure authentication
+
+Currently, the Pulsar C# client supports TLS (Transport Layer Security) and JWT (JSON Web Token) authentication; a JWT example follows the TLS steps below.
+
+If you have followed [Authentication using TLS](security-tls-authentication.md), you get a certificate and a key. To use them from the Pulsar C# client, follow these steps:
+
+1. Create an unencrypted and password-less pfx file.
+
+    ```
+    openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:
+    ```
+
+2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client.
+
+    ```c#
+    var clientCertificate = new X509Certificate2("admin.pfx");
+    var client = PulsarClient.Builder()
+                            .AuthenticateUsingClientCertificate(clientCertificate)
+                            .Build();
+    ```
+
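+For JWT authentication, pass a token to the client. The following is a minimal sketch; it assumes your DotPulsar version exposes an `AuthenticateUsingToken` builder method, and the token value is a placeholder.
+
+```c#
+// The token string is a placeholder for a real JWT issued for the client role
+var client = PulsarClient.Builder()
+                         .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9...")
+                         .Build();
+```
+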
+## Producer
+
+A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples for the producer.
+
+### Send data
+
+This example shows how to send data.
+
+```c#
+var data = Encoding.UTF8.GetBytes("Hello World");
+await producer.Send(data);
+```
+
+### Send messages with customized metadata
+
+- Send messages with customized metadata by using the builder.
+
+    ```c#
+    var data = Encoding.UTF8.GetBytes("Hello World");
+    var messageId = await producer.NewMessage()
+                                .Property("SomeKey", "SomeValue")
+                                .Send(data);
+    ```
+
+- Send messages with customized metadata without using the builder.
+
+    ```c#
+    var data = Encoding.UTF8.GetBytes("Hello World");
+    var metadata = new MessageMetadata();
+    metadata["SomeKey"] = "SomeValue";
+    var messageId = await producer.Send(metadata, data);
+    ```
+
+## Consumer
+
+A consumer is a process that attaches to a topic through a subscription and then receives messages. This section describes some configuration examples for the consumer.
+
+### Receive messages
+
+This example shows how a consumer receives messages from a topic.
+
+```c#
+await foreach (var message in consumer.Messages())
+{
+    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
+}
+```
+
+### Acknowledge messages
+
+Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).
+
+- Acknowledge messages individually.
+
+    ```c#
+    await foreach (var message in consumer.Messages())
+    {
+        Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
+        await consumer.Acknowledge(message);
+    }
+    ```
+
+- Acknowledge messages cumulatively.
+
+    ```c#
+    await consumer.AcknowledgeCumulative(message);
+    ```
+
+### Unsubscribe from topics
+
+This example shows how a consumer unsubscribes from a topic.
+
+```c#
+await consumer.Unsubscribe();
+```
+
+#### Note
+
+> After a consumer unsubscribes from a topic, the consumer is disposed and can no longer be used.
+
+## Reader
+
+A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.
+
+This example shows how a reader receives messages.
+
+```c#
+await foreach (var message in reader.Messages())
+{
+    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
+}
+```
+
+## Monitoring
+
+This section describes how to monitor the producer, consumer, and reader state.
+
+### Monitor producer state
+
+The following table lists states available for the producer.
+
+| State | Description |
+| ---- | ----|
+| Closed | The producer or the Pulsar client has been disposed. |
+| Connected | All is well. |
+| Disconnected | The connection is lost and attempts are being made to reconnect. |
+| Faulted | An unrecoverable error has occurred. |
+
+This example shows how to monitor the producer state.
+
+```c#
+private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
+{
+    var state = ProducerState.Disconnected;
+
+    while (!cancellationToken.IsCancellationRequested)
+    {
+        state = await producer.StateChangedFrom(state, cancellationToken);
+
+        var stateMessage = state switch
+        {
+            ProducerState.Connected => $"The producer is connected",
+            ProducerState.Disconnected => $"The producer is disconnected",
+            ProducerState.Closed => $"The producer has closed",
+            ProducerState.Faulted => $"The producer has faulted",
+            _ => $"The producer has an unknown state '{state}'"
+        };
+
+        Console.WriteLine(stateMessage);
+
+        if (producer.IsFinalState(state))
+            return;
+    }
+}
+```
+
+### Monitor consumer state
+
+The following table lists states available for the consumer.
+
+| State | Description |
+| ---- | ----|
+| Active | All is well. |
+| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
+| Closed | The consumer or the Pulsar client has been disposed. |
+| Disconnected | The connection is lost and attempts are being made to reconnect. |
+| Faulted | An unrecoverable error has occurred. |
+| ReachedEndOfTopic | No more messages are delivered. |
+
+This example shows how to monitor the consumer state.
+
+```c#
+private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken)
+{
+    var state = ConsumerState.Disconnected;
+
+    while (!cancellationToken.IsCancellationRequested)
+    {
+        state = await consumer.StateChangedFrom(state, cancellationToken);
+
+        var stateMessage = state switch
+        {
+            ConsumerState.Active => "The consumer is active",
+            ConsumerState.Inactive => "The consumer is inactive",
+            ConsumerState.Disconnected => "The consumer is disconnected",
+            ConsumerState.Closed => "The consumer has closed",
+            ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic",
+            ConsumerState.Faulted => "The consumer has faulted",
+            _ => $"The consumer has an unknown state '{state}'"
+        };
+
+        Console.WriteLine(stateMessage);
+
+        if (consumer.IsFinalState(state))
+            return;
+    }
+}
+```
+
+### Monitor reader state
+
+The following table lists states available for the reader.
+
+| State | Description |
+| ---- | ----|
+| Closed | The reader or the Pulsar client has been disposed. |
+| Connected | All is well. |
+| Disconnected | The connection is lost and attempts are being made to reconnect. |
+| Faulted | An unrecoverable error has occurred. |
+| ReachedEndOfTopic | No more messages are delivered. |
+
+This example shows how to monitor the reader state.
+
+```c#
+private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken)
+{
+    var state = ReaderState.Disconnected;
+
+    while (!cancellationToken.IsCancellationRequested)
+    {
+        state = await reader.StateChangedFrom(state, cancellationToken);
+
+        var stateMessage = state switch
+        {
+            ReaderState.Connected => "The reader is connected",
+            ReaderState.Disconnected => "The reader is disconnected",
+            ReaderState.Closed => "The reader has closed",
+            ReaderState.ReachedEndOfTopic => "The reader has reached end of topic",
+            ReaderState.Faulted => "The reader has faulted",
+            _ => $"The reader has an unknown state '{state}'"
+        };
+
+        Console.WriteLine(stateMessage);
+
+        if (reader.IsFinalState(state))
+            return;
+    }
+}
+```
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.6.0/client-libraries-java.md b/site2/website/versioned_docs/version-2.6.0/client-libraries-java.md
new file mode 100644
index 0000000..11c4519
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/client-libraries-java.md
@@ -0,0 +1,858 @@
+---
+id: version-2.6.0-client-libraries-java
+title: Pulsar Java client
+sidebar_label: Java
+original_id: client-libraries-java
+---
+
+You can use the Pulsar Java client to create Java [producers](#producer), [consumers](#consumer), and [readers](#reader) of messages and to perform [administrative tasks](admin-api-overview.md). The current version of the Java client is **{{pulsar:version}}**.
+
+All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.
+
+Javadoc for the Pulsar client is divided into two domains by package as follows.
+
+Package | Description | Maven Artifact
+:-------|:------------|:--------------
+[`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:{{pulsar:version}}](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C{{pulsar:version}}%7Cjar)
+[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:{{pulsar:version}}](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C{{pulsar:version}}%7Cjar)
+
+This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).
+
+## Installation
+
+The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C{{pulsar:version}}%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.
+
+### Maven
+
+If you use Maven, add the following information to the `pom.xml` file.
+
+```xml
+<!-- in your <properties> block -->
+<pulsar.version>{{pulsar:version}}</pulsar.version>
+
+<!-- in your <dependencies> block -->
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+```
+
+### Gradle
+
+If you use Gradle, add the following information to the `build.gradle` file.
+
+```groovy
+def pulsarVersion = '{{pulsar:version}}'
+
+dependencies {
+    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
+}
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example for `localhost`.
+
+```http
+pulsar://localhost:6650
+```
+
+If you have multiple brokers, the URL is as follows.
+
+```http
+pulsar://localhost:6650,localhost:6651,localhost:6652
+```
+
+A URL for a production Pulsar cluster is as follows.
+
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows. 
+
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Client 
+
+You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:
+
+```java
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+```
+
+If you have multiple brokers, you can instantiate a PulsarClient like this:
+```java
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
+        .build();
+```
+
+> ### Default broker URLs for standalone clusters
+> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.
+
+When you create a client, you can use the `loadConf` configuration; see the example after the table. The following parameters are available in `loadConf`.
+
+| Type | Name | <div style="width:260px">Description</div> | Default
+|---|---|---|---
+String | `serviceUrl` |Service URL provider for Pulsar service | None
+String | `authPluginClassName` | Name of the authentication plugin | None
+String | `authParams` | String represents parameters for the authentication plugin <br/><br/>**Example**<br/> key1:val1,key2:val2|None
+long|`operationTimeoutMs`|Operation timeout |30000
+long|`statsIntervalSeconds`|Interval between each stats report<br/><br/>Stats are activated with a positive `statsIntervalSeconds` value<br/><br/>Set `statsIntervalSeconds` to at least 1 second|60
+int|`numIoThreads`| The number of threads used for handling connections to brokers | 1 
+int|`numListenerThreads`|The number of threads used for handling message listeners | 1 
+boolean|`useTcpNoDelay`|Whether to use TCP no-delay flag on the connection to disable Nagle algorithm |true
+boolean |`useTls` |Whether to use TLS encryption on the connection| false
+string | `tlsTrustCertsFilePath` |Path to the trusted TLS certificate file|None
+boolean|`tlsAllowInsecureConnection`|Whether the Pulsar client accepts untrusted TLS certificate from broker | false
+boolean | `tlsHostnameVerificationEnable` | Whether to enable TLS hostname verification|false
+int|`concurrentLookupRequest`|The number of concurrent lookup requests allowed to send on each broker connection to prevent overload on broker|5000
+int|`maxLookupRequest`|The maximum number of lookup requests allowed on each broker connection to prevent overload on broker | 50000
+int|`maxNumberOfRejectedRequestPerConnection`|The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after the current connection is closed and the client creates a new connection to connect to a different broker|50
+int|`keepAliveIntervalSeconds`|Seconds of keeping alive interval for each client broker connection|30
+int|`connectionTimeoutMs`|Duration of waiting for a connection to a broker to be established <br/><br/>If the duration passes without a response from a broker, the connection attempt is dropped|10000
+int|`requestTimeoutMs`|Maximum duration for completing a request |60000
+int|`defaultBackoffIntervalNanos`| Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100)
+long|`maxBackoffIntervalNanos`|Maximum duration for a backoff interval|TimeUnit.SECONDS.toNanos(30)
+
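+For example, you can collect these parameters in a map and pass the map to the client builder through `loadConf`. The following is a minimal sketch; the parameter values are illustrative.
+
+```java
+Map<String, Object> config = new HashMap<>();
+config.put("operationTimeoutMs", 15000); // 15-second operation timeout
+config.put("numIoThreads", 4);           // four I/O threads
+
+PulsarClient client = PulsarClient.builder()
+        .loadConf(config)
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+```
+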
+Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters.
+
+> In addition to client-level configuration, you can also apply [producer](#configuring-producers) and [consumer](#configuring-consumers) specific configuration as described in sections below.
+
+## Producer
+
+In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic")
+        .create();
+
+// You can then send messages to the broker and topic you specified:
+producer.send("My message".getBytes());
+```
+
+By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schema).
+
+```java
+Producer<String> stringProducer = client.newProducer(Schema.STRING)
+        .topic("my-topic")
+        .create();
+stringProducer.send("My message");
+```
+
+> Make sure that you close your producers, consumers, and clients when you do not need them.
+> ```java
+> producer.close();
+> consumer.close();
+> client.close();
+> ```
+>
+> Close operations can also be asynchronous:
+> ```java
+> producer.closeAsync()
+>    .thenRun(() -> System.out.println("Producer closed"))
+>    .exceptionally((ex) -> {
+>        System.err.println("Failed to close producer: " + ex);
+>        return null;
+>    });
+> ```
+
+### Configure producer
+
+If you instantiate a `Producer` object by specifying only a topic name, as in the example above, the producer uses the default configuration.
+
+When you create a producer, you can use the `loadConf` configuration; see the example after the table. The following parameters are available in `loadConf`.
+
+Type | Name| <div style="width:300px">Description</div>|  Default
+|---|---|---|---
+String|	`topicName`|	Topic name| null|
+String|`producerName`|Producer name| null
+long|`sendTimeoutMs`|Message send timeout in ms.<br/><br/>If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs.|30000
+boolean|`blockIfQueueFull`|If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer block, rather than failing and throwing errors. <br/><br>If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer fail and `ProducerQueueIsFullError` exceptions occur.<br/><br/>The `MaxPendingMessages` parameter determines the size of the outgoing message queue.|false
+int|`maxPendingMessages`|The maximum size of a queue holding pending messages.<br/><br/>For example, a message waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker). <br/><br/>By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`.|1000
+int|`maxPendingMessagesAcrossPartitions`|The maximum number of pending messages across partitions. <br/><br/>Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value.|50000
+MessageRoutingMode|`messageRoutingMode`|Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).<br/><br/> Apply the logic only when setting no key on messages. <br/><br/>Available options are as follows: <br/><br/><li>`pulsar.RoundRobinDistribution`: round robin<br/><br/> <li>`pulsar.UseSinglePartition`: publish all messages to a single partition<br/><br/><li>`pulsar.CustomPartition`: a custom partitioning scheme|`pulsar.RoundRobinDistribution`
+HashingScheme|`hashingScheme`|Hashing function determining the partition where you publish a particular message (**partitioned topics only**).<br/><br/>Available options are as follows:<br/><br/><li> `pulsar.JavaStringHash`: the equivalent of `String.hashCode()` in Java<br/><br/><li> `pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function<br/><br/><li>`pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/) library|`HashingScheme.JavaStringHash`
+ProducerCryptoFailureAction|`cryptoFailureAction`|Producer should take action when encryption fails.<br/><br/><li>**FAIL**: if encryption fails, unencrypted messages fail to send.</li><br/><li> **SEND**: if encryption fails, unencrypted messages are sent. |`ProducerCryptoFailureAction.FAIL`
+long|`batchingMaxPublishDelayMicros`|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
+int|`batchingMaxMessages`|The maximum number of messages permitted in a batch.|1000
+boolean|`batchingEnabled`|Enable batching of messages. |true
+CompressionType|`compressionType`|Message data compression type used by a producer. <br/><br/>Available options:<li>[`LZ4`](https://github.com/lz4/lz4)<br/><li>[`ZLIB`](https://zlib.net/)<br/><li>[`ZSTD`](https://facebook.github.io/zstd/)<br/><li>[`SNAPPY`](https://google.github.io/snappy/)| No compression
+
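+As with the client, you can collect producer parameters in a map and pass it through `loadConf` on the producer builder. The following is a minimal sketch; the producer name and timeout values are illustrative.
+
+```java
+Map<String, Object> producerConf = new HashMap<>();
+producerConf.put("producerName", "my-producer"); // illustrative producer name
+producerConf.put("sendTimeoutMs", 10000);        // fail sends not acknowledged within 10 seconds
+
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic")
+        .loadConf(producerConf)
+        .create();
+```
+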
+You can configure parameters if you do not want to use the default configuration.
+
+For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.
+
+```java
+Producer<byte[]> producer = client.newProducer()
+    .topic("my-topic")
+    .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
+    .sendTimeout(10, TimeUnit.SECONDS)
+    .blockIfQueueFull(true)
+    .create();
+```
+
+### Message routing
+
+When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics](cookbooks-partitioned.md) cookbook.
+
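+For example, to publish all messages from one producer to a single partition, you can set the routing mode on the producer builder. The following is a minimal sketch; the topic name is illustrative.
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-partitioned-topic")
+        .messageRoutingMode(MessageRoutingMode.SinglePartition)
+        .create();
+```
+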
+### Async send
+
+You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. Then the client library sends the message to the broker in the background. If the queue is full (the maximum size is configurable), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer.
+
+The following is an example.
+
+```java
+producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
+    System.out.printf("Message with ID %s successfully sent", msgId);
+});
+```
+
+As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
+
+### Configure messages
+
+In addition to a value, you can set additional items on a given message:
+
+```java
+producer.newMessage()
+    .key("my-message-key")
+    .value("my-async-message".getBytes())
+    .property("my-key", "my-value")
+    .property("my-other-key", "my-other-value")
+    .send();
+```
+
+You can terminate the builder chain with `sendAsync()`, which returns a future, as shown below.
+
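+The following sketch combines the message builder above with the async-send pattern: the same chain ends in `sendAsync()` and prints the resulting message ID when the future completes.
+
+```java
+producer.newMessage()
+    .key("my-message-key")
+    .value("my-async-message".getBytes())
+    .property("my-key", "my-value")
+    .sendAsync()
+    .thenAccept(msgId -> System.out.println("Published message with ID " + msgId));
+```
+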
+## Consumer
+
+In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
+
+Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).
+
+```java
+Consumer consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscribe();
+```
+
+The `subscribe` method automatically subscribes the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](concepts-messaging.md#negative-acknowledgement) to redeliver the message later.
+
+```java
+while (true) {
+  // Wait for a message
+  Message msg = consumer.receive();
+
+  try {
+      // Do something with the message
+      System.out.printf("Message received: %s", new String(msg.getData()));
+
+      // Acknowledge the message so that it can be deleted by the message broker
+      consumer.acknowledge(msg);
+  } catch (Exception e) {
+      // Message failed to process, redeliver later
+      consumer.negativeAcknowledge(msg);
+  }
+}
+```
+
+### Configure consumer
+
+If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration. 
+
+When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
+
+Type | Name| <div style="width:300px">Description</div>|  Default
+|---|---|---|---
+Set&lt;String&gt;|	`topicNames`|	Topic name|	Sets.newTreeSet()
+Pattern|   `topicsPattern`|	Topic pattern	|None
+String|	`subscriptionName`|	Subscription name|	None
+SubscriptionType| `subscriptionType`|	Subscription type <br/><br/>Three subscription types are available:<li>Exclusive</li><li>Failover</li><li>Shared</li>|SubscriptionType.Exclusive
+int | `receiverQueueSize` | Size of a consumer's receiver queue. <br/><br/>For example, the number of messages accumulated by a consumer before an application calls `Receive`. <br/><br/>A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.| 1000
+long|`acknowledgementsGroupTimeMicros`|Group a consumer acknowledgment for a specified time.<br/><br/>By default, a consumer uses 100ms grouping time to send out acknowledgments to a broker.<br/><br/>Setting a group time of 0 sends out acknowledgments immediately. <br/><br/>A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure.|TimeUnit.MILLISECONDS.toMicros(100)
+long|`negativeAckRedeliveryDelayMicros`|Delay to wait before redelivering messages that failed to be processed.<br/><br/> When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. |TimeUnit.MINUTES.toMicros(1)
+int |`maxTotalReceiverQueueSizeAcrossPartitions`|The max total receiver queue size across partitions.<br/><br/>This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value.|50000
+String|`consumerName`|Consumer name|null
+long|`ackTimeoutMillis`|Timeout of unacked messages|0
+long|`tickDurationMillis`|Granularity of the ack-timeout redelivery.<br/><br/>Using a higher `tickDurationMillis` reduces the memory overhead to track messages when setting ack-timeout to a bigger value (for example, 1 hour).|1000
+int|`priorityLevel`|Priority level for a consumer to which a broker gives more priority while dispatching messages in the shared subscription mode. <br/><br/>The broker follows descending priorities. For example, 0=max-priority, 1, 2,...<br/><br/>In shared subscription mode, the broker **first dispatches messages to the max priority level consumers if they have permits**. Otherwise, the broker considers next priority level consumers.<br/><br/> **Example 1**<br/><br/>If a subscription has [...]
+ConsumerCryptoFailureAction|`cryptoFailureAction`|Consumer should take action when it receives a message that can not be decrypted.<br/><br/><li>**FAIL**: this is the default option to fail messages until crypto succeeds.</li><br/><li> **DISCARD**:silently acknowledge and not deliver message to an application.</li><br/><li>**CONSUME**: deliver encrypted messages to applications. It is the application's responsibility to decrypt the message.<br/><br/>The decompression of message fails. <b [...]
+SortedMap<String, String>|`properties`|A name or value property of this consumer.<br/><br/>`properties` is application defined metadata attached to a consumer. <br/><br/>When getting a topic stats, associate this metadata with the consumer stats for easier identification.|new TreeMap<>()
+boolean|`readCompacted`|If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.<br/><br/> A consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message when compacting backlog. Beyond that point, send messages as normal.<br/><br/>Only enabling `readCompacted` on subscriptions to persistent topics, which have a single active consumer (like failure or e [...]
+SubscriptionInitialPosition|`subscriptionInitialPosition`|Initial position at which to set cursor when subscribing to a topic at first time.|SubscriptionInitialPosition.Latest
+int|`patternAutoDiscoveryPeriod`|Topic auto discovery period when using a pattern for topic's consumer.<br/><br/>The default and minimum value is 1 minute.|1
+RegexSubscriptionMode|`regexSubscriptionMode`|When subscribing to a topic using a regular expression, you can pick a certain type of topics.<br/><br/><li>**PersistentOnly**: only subscribe to persistent topics.</li><br/><li>**NonPersistentOnly**: only subscribe to non-persistent topics.</li><br/><li>**AllTopics**: subscribe to both persistent and non-persistent topics.</li>|RegexSubscriptionMode.PersistentOnly
+DeadLetterPolicy|`deadLetterPolicy`|Dead letter policy for consumers.<br/><br/>By default, some messages are probably redelivered many times, even to the extent that it never stops.<br/><br/>By using the dead letter mechanism, messages have the max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.<br/><br/>You can enable the dead letter mechanism by setting `deadLetterPolicy`.<br/><br/>**Exa [...]
+boolean|`autoUpdatePartitions`|If `autoUpdatePartitions` is enabled, a consumer subscribes to new partitions automatically as the number of partitions increases.<br/><br/>**Note**: this is only for partitioned consumers.|true
+boolean|`replicateSubscriptionState`|If `replicateSubscriptionState` is enabled, a subscription state is replicated to geo-replicated clusters.|false
+
+You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class. 
+
+The following is an example.
+
+```java
+Consumer consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .ackTimeout(10, TimeUnit.SECONDS)
+        .subscriptionType(SubscriptionType.Exclusive)
+        .subscribe();
+```
+
+### Async receive
+
+The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available.
+
+The following is an example.
+
+```java
+CompletableFuture<Message> asyncMessage = consumer.receiveAsync();
+```
+
+Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
+
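+The following sketch subscribes a `Consumer<byte[]>` and chains processing and acknowledgment onto the returned future, reusing the acknowledge-on-success pattern from the loop above.
+
+```java
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscribe();
+
+consumer.receiveAsync()
+        .thenCompose(msg -> {
+            System.out.printf("Message received: %s%n", new String(msg.getData()));
+            // Acknowledge asynchronously once the message has been processed
+            return consumer.acknowledgeAsync(msg);
+        });
+```
+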
+### Batch receive
+
+Use `batchReceive` to receive multiple messages for each call. 
+
+The following is an example.
+
+```java
+Messages<byte[]> messages = consumer.batchReceive();
+for (Message<byte[]> message : messages) {
+  // do something with the message
+}
+consumer.acknowledge(messages);
+```
+
+> Note:
+>
+> Batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
+>
+> The batch receive is completed when any of the following conditions is met: the number of messages reaches `maxNumMessages`, the total size of messages reaches `maxNumBytes`, or the wait timeout expires.
+>
+> ```java
+> Consumer consumer = client.newConsumer()
+>         .topic("my-topic")
+>         .subscriptionName("my-subscription")
+>         .batchReceivePolicy(BatchReceivePolicy.builder()
+>              .maxNumMessages(100)
+>              .maxNumBytes(1024 * 1024)
+>              .timeout(200, TimeUnit.MILLISECONDS)
+>              .build())
+>         .subscribe();
+> ```
+> The default batch receive policy is:
+> ```java
+> BatchReceivePolicy.builder()
+>     .maxNumMessages(-1)
+>     .maxNumBytes(10 * 1024 * 1024)
+>     .timeout(100, TimeUnit.MILLISECONDS)
+>     .build();
+> ```
+
+### Multi-topic subscriptions
+
+In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions, you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.
+
+The following are some examples.
+
+```java
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.regex.Pattern;
+
+ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
+        .subscriptionName(subscription);
+
+// Subscribe to all topics in a namespace
+Pattern allTopicsInNamespace = Pattern.compile("public/default/.*");
+Consumer allTopicsConsumer = consumerBuilder
+        .topicsPattern(allTopicsInNamespace)
+        .subscribe();
+
+// Subscribe to a subset of topics in a namespace, based on regex
+Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*");
+Consumer someTopicsConsumer = consumerBuilder
+        .topicsPattern(someTopicsInNamespace)
+        .subscribe();
+```
+
+In the above example, the consumer subscribes to the `persistent` topics that match the topic name pattern. If you want the consumer to subscribe to all `persistent` and `non-persistent` topics that match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`.
+
+```java
+Pattern pattern = Pattern.compile("public/default/.*");
+pulsarClient.newConsumer()
+        .subscriptionName("my-sub")
+        .topicsPattern(pattern)
+        .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics)
+        .subscribe();
+```
+
+> #### Note
+> 
+> By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`.
+
+You can also subscribe to an explicit list of topics (across namespaces if you wish):
+
+```java
+List<String> topics = Arrays.asList(
+        "topic-1",
+        "topic-2",
+        "topic-3"
+);
+
+Consumer multiTopicConsumer = consumerBuilder
+        .topics(topics)
+        .subscribe();
+
+// Alternatively:
+Consumer multiTopicConsumer = consumerBuilder
+        .topics(
+            "topic-1",
+            "topic-2",
+            "topic-3"
+        )
+        .subscribe();
+```
+
+You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.
+
+```java
+Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default.*");
+consumerBuilder
+        .topics(topics)
+        .subscribeAsync()
+        .thenAccept(this::receiveMessageFromConsumer);
+
+private void receiveMessageFromConsumer(Consumer consumer) {
+    consumer.receiveAsync().thenAccept(message -> {
+                // Do something with the received message
+                receiveMessageFromConsumer(consumer);
+            });
+}
+```
+
+### Subscription modes
+
+Pulsar has various [subscription modes](concepts-messaging.md#subscription-modes) to match different scenarios. A topic can have multiple subscriptions with different subscription modes. However, a subscription can only have one subscription mode at a time.
+
+A subscription is identified by its subscription name and can have only one subscription mode at a time. You cannot change the subscription mode unless all existing consumers of this subscription are offline.
+
+Different subscription modes have different message distribution modes. This section describes the differences of subscription modes and how to use them.
+
+To better describe their differences, assume you have a topic named "my-topic" and a producer that has published 10 messages.
+
+```java
+Producer<String> producer = client.newProducer(Schema.STRING)
+        .topic("my-topic")
+        .enableBatching(false)
+        .create();
+// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
+producer.newMessage().key("key-1").value("message-1-1").send();
+producer.newMessage().key("key-1").value("message-1-2").send();
+producer.newMessage().key("key-1").value("message-1-3").send();
+producer.newMessage().key("key-2").value("message-2-1").send();
+producer.newMessage().key("key-2").value("message-2-2").send();
+producer.newMessage().key("key-2").value("message-2-3").send();
+producer.newMessage().key("key-3").value("message-3-1").send();
+producer.newMessage().key("key-3").value("message-3-2").send();
+producer.newMessage().key("key-4").value("message-4-1").send();
+producer.newMessage().key("key-4").value("message-4-2").send();
+```
+
+#### Exclusive
+
+Create a new consumer and subscribe with the `Exclusive` subscription mode.
+
+```java
+Consumer consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Exclusive)
+        .subscribe();
+```
+
+Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.
+
+> Note:
+>
+> If the topic is a partitioned topic, the first consumer subscribes to all of its partitions; other consumers are not assigned partitions and receive an error.
+
+#### Failover
+
+Create new consumers and subscribe with the `Failover` subscription mode.
+
+```java
+Consumer consumer1 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Failover)
+        .subscribe();
+Consumer consumer2 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Failover)
+        .subscribe();
+// consumer1 is the active consumer, consumer2 is the standby consumer.
+// consumer1 receives 5 messages and then crashes, consumer2 takes over as the active consumer.
+```
+
+Multiple consumers can attach to the same subscription, yet only the first consumer is active while the others are standby. When the active consumer disconnects, messages are dispatched to one of the standby consumers, which then becomes the active consumer.
+
+If the first active consumer is disconnected after receiving 5 messages, the standby consumer takes over. consumer1 receives:
+
+```
+("key-1", "message-1-1")
+("key-1", "message-1-2")
+("key-1", "message-1-3")
+("key-2", "message-2-1")
+("key-2", "message-2-2")
+```
+
+consumer2 receives:
+
+```
+("key-2", "message-2-3")
+("key-3", "message-3-1")
+("key-3", "message-3-2")
+("key-4", "message-4-1")
+("key-4", "message-4-2")
+```
+
+> Note:
+>
+> If a topic is a partitioned topic, each partition has only one active consumer, messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers. 
+
+#### Shared
+
+Create new consumers and subscribe with `Shared` subscription mode:
+
+```java
+Consumer consumer1 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Shared)
+        .subscribe();
+
+Consumer consumer2 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Shared)
+        .subscribe();
+// Both consumer1 and consumer2 are active consumers.
+```
+
+In shared subscription mode, multiple consumers can attach to the same subscription and messages are delivered in a round robin distribution across consumers.
+
+If a broker dispatches only one message at a time, consumer1 receives the following information.
+
+```
+("key-1", "message-1-1")
+("key-1", "message-1-3")
+("key-2", "message-2-2")
+("key-3", "message-3-1")
+("key-4", "message-4-1")
+```
+
+consumer2 receives the following information.
+
+```
+("key-1", "message-1-2")
+("key-2", "message-2-1")
+("key-2", "message-2-3")
+("key-3", "message-3-2")
+("key-4", "message-4-2")
+```
+
+`Shared` subscription is different from the `Exclusive` and `Failover` subscription modes. `Shared` subscription provides greater flexibility, but does not guarantee message ordering.
+
+#### Key_shared
+
+`Key_Shared` is a new subscription mode introduced in the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription mode.
+
+```java
+Consumer consumer1 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Key_Shared)
+        .subscribe();
+
+Consumer consumer2 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Key_Shared)
+        .subscribe();
+// Both consumer1 and consumer2 are active consumers.
+```
+
+`Key_Shared` subscription is like `Shared` subscription in that all consumers can attach to the same subscription. But it is different from `Shared` subscription in that messages with the same key are delivered to only one consumer, in order. By default, you do not know in advance which keys are assigned to which consumer, but a given key is only assigned to one consumer at a time.
+
+consumer1 receives the following information.
+
+```
+("key-1", "message-1-1")
+("key-1", "message-1-2")
+("key-1", "message-1-3")
+("key-3", "message-3-1")
+("key-3", "message-3-2")
+```
+
+consumer2 receives the following information.
+
+```
+("key-2", "message-2-1")
+("key-2", "message-2-2")
+("key-2", "message-2-3")
+("key-4", "message-4-1")
+("key-4", "message-4-2")
+```
+
+If batching is enabled on the producer side, messages with different keys are added to the same batch by default. The broker dispatches the batch as a whole to a consumer, so the default batching mechanism may break the per-key message distribution semantics that `Key_Shared` subscription guarantees. To preserve them, the producer needs to use the `KeyBasedBatcher`.
+
+```java
+Producer producer = client.newProducer()
+        .topic("my-topic")
+        .batcherBuilder(BatcherBuilder.KEY_BASED)
+        .create();
+```
+Or the producer can disable batching.
+
+```java
+Producer producer = client.newProducer()
+        .topic("my-topic")
+        .enableBatching(false)
+        .create();
+```
+> Note:
+>
+> If the message key is not specified, messages without a key are dispatched to one consumer in order by default.
+
+## Reader 
+
+With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and read all messages from a specified message onward. The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic, a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}, and {@inject: javadoc:ReaderConfiguration:/client/org/apache/pulsar/client/api/ReaderConfiguration}.
+
+The following is an example.
+
+```java
+ReaderConfiguration conf = new ReaderConfiguration();
+byte[] msgIdBytes = // Some message ID byte array
+MessageId id = MessageId.fromByteArray(msgIdBytes);
+Reader reader = pulsarClient.newReader()
+        .topic(topic)
+        .startMessageId(id)
+        .create();
+
+while (true) {
+    Message message = reader.readNext();
+    // Process message
+}
+```
+
+In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message is identified by `msgIdBytes` (how that value is obtained depends on the application).
+
+The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.
+
+When you create a reader, you can use the `loadConf` configuration; see the example after the table. The following parameters are available in `loadConf`.
+
+| Type | Name | <div style="width:300px">Description</div> | Default
+|---|---|---|---
+String|`topicName`|Topic name. |None
+int|`receiverQueueSize`|Size of a consumer's receiver queue.<br/><br/>For example, the number of messages that can be accumulated by a consumer before an application calls `Receive`.<br/><br/>A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.|1000
+ReaderListener&lt;T&gt;|`readerListener`|A listener that is called for each message received.|None
+String|`readerName`|Reader name.|null
+String|`subscriptionRolePrefix`|Prefix of subscription role. |null
+CryptoKeyReader|`cryptoKeyReader`|Interface that abstracts the access to a key store.|null
+ConsumerCryptoFailureAction|`cryptoFailureAction`|Consumer should take action when it receives a message that can not be decrypted.<br/><br/><li>**FAIL**: this is the default option to fail messages until crypto succeeds.</li><br/><li> **DISCARD**: silently acknowledge and not deliver message to an application.</li><br/><li>**CONSUME**: deliver encrypted messages to applications. It is the application's responsibility to decrypt the message.<br/><br/>The message decompression fails. <br/ [...]
+boolean|`readCompacted`|If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than a full message backlog of a topic.<br/><br/> A consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message when compacting backlog. Beyond that point, send messages as normal.<br/><br/>`readCompacted` can only be enabled on subscriptions to persistent topics, which have a single active consumer (for example, failur [...]
+boolean|`resetIncludeHead`|If set to true, the first message to be returned is the one specified by `messageId`.<br/><br/>If set to false, the first message to be returned is the one next to the message specified by `messageId`.|false
+
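+Like the other builders, `ReaderBuilder` accepts these parameters as a map through `loadConf`. The following is a minimal sketch; the topic name and queue size are illustrative.
+
+```java
+Map<String, Object> readerConf = new HashMap<>();
+readerConf.put("topicName", "my-topic");   // illustrative topic
+readerConf.put("receiverQueueSize", 2000); // larger queue for higher throughput
+
+Reader<byte[]> reader = pulsarClient.newReader()
+        .loadConf(readerConf)
+        .startMessageId(MessageId.earliest)
+        .create();
+```
+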
+### Sticky key range reader
+
+In the sticky key range reader, the broker only dispatches messages whose message-key hash falls within the specified key hash range. You can specify multiple key hash ranges on a single reader.
+
+The following is an example to create a sticky key range reader.
+
+```java
+pulsarClient.newReader()
+        .topic(topic)
+        .startMessageId(MessageId.earliest)
+        .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
+        .create();
+```
+
+Total hash range size is 65536, so the max end of the range should be less than or equal to 65535.
+
+## Schema
+
+In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic(topic)
+        .create();
+```
+
+The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).
+
+### Schema example
+
+Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:
+
+```java
+public class SensorReading {
+    public float temperature;
+
+    public SensorReading(float temperature) {
+        this.temperature = temperature;
+    }
+
+    // A no-arg constructor is required
+    public SensorReading() {
+    }
+
+    public float getTemperature() {
+        return temperature;
+    }
+
+    public void setTemperature(float temperature) {
+        this.temperature = temperature;
+    }
+}
+```
+
+You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like this:
+
+```java
+Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
+        .topic("sensor-readings")
+        .create();
+```
+
+The following schema formats are currently available for Java:
+
+* No schema or the byte array schema (which can be applied using `Schema.BYTES`):
+
+  ```java
+  Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
+        .topic("some-raw-bytes-topic")
+        .create();
+  ```
+
+  Or, equivalently:
+
+  ```java
+  Producer<byte[]> bytesProducer = client.newProducer()
+        .topic("some-raw-bytes-topic")
+        .create();
+  ```
+
+* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:
+
+  ```java
+  Producer<String> stringProducer = client.newProducer(Schema.STRING)
+        .topic("some-string-topic")
+        .create();
+  ```
+
+* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.
+
+  ```java
+  Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
+        .topic("some-pojo-topic")
+        .create();
+  ```
+
+* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:
+
+  ```java
+  Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
+        .topic("some-protobuf-topic")
+        .create();
+  ```
+
+* Define Avro schemas with `Schema.AVRO`. The following code snippet demonstrates how to create and use Avro schema.
+  
+  ```java
+  Producer<MyAvro> avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
+        .topic("some-avro-topic")
+        .create();
+  ```
+
+## Authentication
+
+Pulsar currently supports two authentication schemes: [TLS](security-tls-authentication.md) and [Athenz](security-athenz.md). You can use the Pulsar Java client with both.
+
+### TLS Authentication
+
+To use [TLS](security-tls-authentication.md), you need to enable TLS on the client (for example, with `enableTls(true)` as shown below), point your Pulsar client to a TLS cert path, and provide paths to cert and key files.
+
+The following is an example.
+
+```java
+Map<String, String> authParams = new HashMap<>();
+authParams.put("tlsCertFile", "/path/to/client-cert.pem");
+authParams.put("tlsKeyFile", "/path/to/client-key.pem");
+
+Authentication tlsAuth = AuthenticationFactory
+        .create(AuthenticationTls.class.getName(), authParams);
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar+ssl://my-broker.com:6651")
+        .enableTls(true)
+        .tlsTrustCertsFilePath("/path/to/cacert.pem")
+        .authentication(tlsAuth)
+        .build();
+```
+
+### Athenz
+
+To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:
+
+* `tenantDomain`
+* `tenantService`
+* `providerDomain`
+* `privateKey`
+
+You can also set an optional `keyId`. The following is an example.
+
+```java
+Map<String, String> authParams = new HashMap<>();
+authParams.put("tenantDomain", "shopping"); // Tenant domain name
+authParams.put("tenantService", "some_app"); // Tenant service name
+authParams.put("providerDomain", "pulsar"); // Provider domain name
+authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
+authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")
+
+Authentication athenzAuth = AuthenticationFactory
+        .create(AuthenticationAthenz.class.getName(), authParams);
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar+ssl://my-broker.com:6651")
+        .enableTls(true)
+        .tlsTrustCertsFilePath("/path/to/cacert.pem")
+        .authentication(athenzAuth)
+        .build();
+```
+
+> #### Supported pattern formats
+> The `privateKey` parameter supports the following three pattern formats:
+> * `file:///path/to/file`
+> * `file:/path/to/file`
+> * `data:application/x-pem-file;base64,<base64-encoded value>`
diff --git a/site2/website/versioned_docs/version-2.6.0/concepts-architecture-overview.md b/site2/website/versioned_docs/version-2.6.0/concepts-architecture-overview.md
new file mode 100644
index 0000000..e0a6b6c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/concepts-architecture-overview.md
@@ -0,0 +1,153 @@
+---
+id: version-2.6.0-concepts-architecture-overview
+title: Architecture Overview
+sidebar_label: Architecture
+original_id: concepts-architecture-overview
+---
+
+At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves.
+
+In a Pulsar cluster:
+
+* One or more brokers handle and load balance incoming messages from producers, dispatch messages to consumers, communicate with the Pulsar configuration store to handle various coordination tasks, store messages in BookKeeper instances (aka bookies), rely on a cluster-specific ZooKeeper cluster for certain tasks, and more.
+* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages.
+* A ZooKeeper cluster specific to that cluster handles coordination tasks within the cluster.
+
+The diagram below provides an illustration of a Pulsar cluster:
+
+![Pulsar architecture diagram](assets/pulsar-system-architecture.png)
+
+At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md).
+
+## Brokers
+
+The Pulsar message broker is a stateless component that's primarily responsible for running two other components:
+
+* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers
+* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers
+
+Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker will start reading entries from BookKeeper.
+
+Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md).
+
+> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide.
+
+## Clusters
+
+A Pulsar instance consists of one or more Pulsar *clusters*. Clusters, in turn, consist of:
+
+* One or more Pulsar [brokers](#brokers)
+* A ZooKeeper quorum used for cluster-level configuration and coordination
+* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages
+
+Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md).
+
+> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide.
+
+## Metadata store
+
+Pulsar uses [Apache Zookeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. In a Pulsar instance:
+
+* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
+* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as ownership metadata, broker load reports, BookKeeper ledger metadata, and more.
+
+## Persistent storage
+
+Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target.
+
+This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server.
+
+### Apache BookKeeper
+
+Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar:
+
+* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time.
+* It offers very efficient storage for sequential data that handles entry replication.
+* It guarantees read consistency of ledgers in the presence of various system failures.
+* It offers even distribution of I/O across bookies.
+* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster.
+* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage---bookies are able to isolate the effects of read operations from the latency of ongoing write operations.
+
+In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion.
+
+At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example:
+
+```http
+persistent://my-tenant/my-namespace/my-topic
+```
+
+> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage.
+
+
+You can see an illustration of how brokers and bookies interact in the diagram below:
+
+![Brokers and bookies](assets/broker-bookie.png)
+
+
+### Ledgers
+
+A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics:
+
+* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger.
+* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode.
+* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies).
+
+#### Ledger read consistency
+
+The main strength of BookKeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without need to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content.
+
+#### Managed ledgers
+
+Given that BookKeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position.
+
+Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers:
+
+1. After a failure, a ledger is no longer writable and a new one needs to be created.
+2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers.
+
+### Journal storage
+
+In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter).
+
+## Pulsar proxy
+
+One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible.
+
+The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers.
+
+> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like.
+
+Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example:
+
+```bash
+$ bin/pulsar proxy \
+  --zookeeper-servers zk-0,zk-1,zk-2 \
+  --configuration-store-servers zk-0,zk-1,zk-2
+```
+
+> #### Pulsar proxy docs
+> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md).
+
+
+Some important things to know about the Pulsar proxy:
+
+* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy).
+* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) are supported by the Pulsar proxy
+
+## Service discovery
+
+[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions in the [Deploying a Pulsar instance](deploy-bare-metal.md#service-discovery-setup) guide.
+
+You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
+
+The diagram below illustrates Pulsar service discovery:
+
+![alt-text](assets/pulsar-service-discovery.png)
+
+In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this:
+
+```python
+from pulsar import Client
+
+client = Client('pulsar://pulsar-cluster.acme.com:6650')
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/concepts-clients.md b/site2/website/versioned_docs/version-2.6.0/concepts-clients.md
new file mode 100644
index 0000000..c97b70b
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/concepts-clients.md
@@ -0,0 +1,88 @@
+---
+id: version-2.6.0-concepts-clients
+title: Pulsar Clients
+sidebar_label: Clients
+original_id: concepts-clients
+---
+
+Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md),  [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications.
+
+Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff.
+
+> #### Custom client libraries
+> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md)
+
+
+## Client setup phase
+
+When an application wants to create a producer/consumer, the Pulsar client library will initiate a setup phase that is composed of two steps:
+
+1. The client will attempt to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) ZooKeeper metadata, will know who is serving the topic or, in case nobody is serving it, will try to assign it to the least loaded broker.
+1. Once the client library has the broker address, it will create a TCP connection (or reuse an existing connection from the pool) and authenticate it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client will send a command to create producer/consumer to the broker, which will comply after having validated the authorization policy.
+
+Whenever the TCP connection breaks, the client will immediately re-initiate this setup phase and will keep trying with exponential backoff to re-establish the producer or consumer until the operation succeeds.
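+
+For example, with the Java client, this setup phase runs when a producer or consumer is created. Here is a minimal sketch (the service URL and topic name are placeholders):
+
+```java
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650") // placeholder service URL
+        .build();
+
+// Creating the producer triggers the lookup and connection setup described above
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic") // placeholder topic
+        .create();
+```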
+
+## Reader interface
+
+In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they've been processed. Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription will begin reading with the first message created afterwards. Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest message un-acked within that subscription. In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to message acknowledgements.
+
+The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:
+
+* The **earliest** available message in the topic
+* The **latest** available message in the topic
+* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.
+
+The reader interface is helpful for use cases like using Pulsar to provide [effectively-once](https://streaml.io/blog/exactly-once/) processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.
+
+Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name.
+
+> #### Important
+>
+> Unlike subscriptions/consumers, readers are non-durable in nature and will not prevent data in a topic from being deleted. It is therefore ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured. If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted. This causes readers to essentially skip messages. Configuring the data retention for a topic guarantees the reader a certain duration in which to read a message.
+>
+> Please also note that a reader can have a "backlog", but the metric only exists to allow users to know how far behind the reader is; it is not considered for any backlog quota calculations.
+
+![The Pulsar consumer and reader interfaces](assets/pulsar-reader-consumer-interfaces.png)
+
+> ### Non-partitioned topics only
+> The reader interface for Pulsar cannot currently be used with [partitioned topics](concepts-messaging.md#partitioned-topics).
+
+Here's a Java example that begins reading from the earliest available message on a topic:
+
+```java
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.MessageId;
+import org.apache.pulsar.client.api.Reader;
+
+// Create a reader on a topic and for a specific message (and onward)
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic("reader-api-test")
+    .startMessageId(MessageId.earliest)
+    .create();
+
+while (true) {
+    Message message = reader.readNext();
+
+    // Process the message
+}
+```
+
+To create a reader that will read from the latest available message:
+
+```java
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic(topic)
+    .startMessageId(MessageId.latest)
+    .create();
+```
+
+To create a reader that will read from some message between earliest and latest:
+
+```java
+byte[] msgIdBytes = // Some byte array
+MessageId id = MessageId.fromByteArray(msgIdBytes);
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic(topic)
+    .startMessageId(id)
+    .create();
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/concepts-messaging.md b/site2/website/versioned_docs/version-2.6.0/concepts-messaging.md
new file mode 100644
index 0000000..7b78e63
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/concepts-messaging.md
@@ -0,0 +1,494 @@
+---
+id: version-2.6.0-concepts-messaging
+title: Messaging Concepts
+sidebar_label: Messaging
+original_id: concepts-messaging
+---
+
+Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern (short for pub-sub). In this pattern, [producers](#producers) publish messages to [topics](#topics). [Consumers](#consumers) [subscribe](#subscription-modes) to those topics, process incoming messages, and send an acknowledgement when processing is complete.
+
+Once a subscription has been created, all messages are [retained](concepts-architecture-overview.md#persistent-storage) by Pulsar, even if the consumer gets disconnected. Retained messages are discarded only when a consumer acknowledges that those messages are processed successfully.
+
+## Messages
+
+Messages are the basic "unit" of Pulsar. Messages are what producers publish to topics and what consumers then consume from topics (and acknowledge when the message has been processed). Messages are the analogue of letters in a postal service system.
+
+Component | Purpose
+:---------|:-------
+Value / data payload | The data carried by the message. All Pulsar messages carry raw bytes, although message data can also conform to data [schemas](schema-get-started.md).
+Key | Messages can optionally be tagged with keys, which can be useful for things like [topic compaction](concepts-topic-compaction.md).
+Properties | An optional key/value map of user-defined properties.
+Producer name | The name of the producer that produced the message (producers are automatically given default names, but you can apply your own explicitly as well).
+Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. A message's sequence ID is its ordering in that sequence.
+Publish time | The timestamp of when the message was published (automatically applied by the producer).
+Event time | An optional timestamp that applications can attach to the message representing when something happened, for example, when the message was processed. The event time of a message is 0 if none is explicitly set.
+TypedMessageBuilder | `TypedMessageBuilder` is used to construct a message. You can set message properties such as the message key and message value with `TypedMessageBuilder`. <br/> When you set `TypedMessageBuilder`, the best practice is to set the key as a string. If you set the key as other types, for example, an AVRO object, the key is sent as bytes, and it is difficult to get the AVRO object back on the consumer.
+
+> For a more in-depth breakdown of Pulsar message contents, see Pulsar [binary protocol](developing-binary-protocol.md).
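+
+As a brief sketch, several of these components can be set with the Java client's `TypedMessageBuilder` (`producer` is assumed to already exist):
+
+```java
+producer.newMessage()
+        .key("my-message-key")                 // optional key
+        .value("my-payload".getBytes())        // the data payload
+        .property("my-key", "my-value")        // a user-defined property
+        .eventTime(System.currentTimeMillis()) // optional event time
+        .send();
+```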
+
+## Producers
+
+A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker) for processing.
+
+### Send modes
+
+Producers can send messages to brokers either synchronously (sync) or asynchronously (async).
+
+| Mode       | Description                                                                                                                                                                                                                                                                                                                                                              |
+|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Sync send  | The producer will wait for acknowledgement from the broker after sending each message. If acknowledgment isn't received then the producer will consider the send operation a failure.                                                                                                                                                                                    |
+| Async send | The producer will put the message in a blocking queue and return immediately. The client library will then send the message to the broker in the background. If the queue is full (max size [configurable](reference-configuration.md#broker)), the producer could be blocked or fail immediately when calling the API, depending on arguments passed to the producer. |
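+
+As a minimal Java sketch of the two modes (`producer` is assumed to already exist):
+
+```java
+// Sync send: blocks until the broker acknowledges the message
+MessageId msgId = producer.send("Hello Pulsar!".getBytes());
+
+// Async send: returns a CompletableFuture that completes on acknowledgement
+producer.sendAsync("Hello Pulsar!".getBytes())
+        .thenAccept(id -> System.out.println("Published message with ID " + id));
+```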
+
+### Compression
+
+Messages published by producers can be compressed during transportation in order to save bandwidth. Pulsar currently supports the following types of compression:
+
+* [LZ4](https://github.com/lz4/lz4)
+* [ZLIB](https://zlib.net/)
+* [ZSTD](https://facebook.github.io/zstd/)
+* [SNAPPY](https://google.github.io/snappy/)
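+
+As an illustrative sketch with the Java client, the compression type is selected when building the producer (the topic name is a placeholder):
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic") // placeholder topic
+        .compressionType(CompressionType.LZ4)
+        .create();
+```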
+
+### Batching
+
+When batching is enabled, the producer accumulates and sends a batch of messages in a single request. The batch size is defined by the maximum number of messages and the maximum publish latency. Therefore, the backlog size represents the total number of batches instead of the total number of messages.
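+
+As a sketch with the Java client, both limits can be set on the producer builder (the values shown are illustrative):
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic") // placeholder topic
+        .enableBatching(true)
+        .batchingMaxMessages(1000)                          // maximum number of messages per batch
+        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS) // maximum publish latency
+        .create();
+```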
+
+In Pulsar, batches are tracked and stored as single units rather than as individual messages. Under the hood, the consumer unbundles a batch into individual messages. However, scheduled messages (configured through the `deliverAt` or the `deliverAfter` parameter) are always sent as individual messages even if batching is enabled.
+
+In general, a batch is acknowledged when all its messages are acknowledged by the consumer. This means unexpected failures, negative acknowledgements, or acknowledgement timeouts can result in redelivery of all messages in a batch, even if some of the messages have already been acknowledged.
+
+To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar introduces batch index acknowledgement since Pulsar 2.6.0. When batch index acknowledgement is enabled, the consumer filters out the batch index that has been acknowledged and sends the batch index acknowledgement request to the broker. The broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. When all indexes of the batch message are acknowledged, the batch message is deleted.
+
+By default, batch index acknowledgement is disabled (`batchIndexAcknowledgeEnable=false`). You can enable batch index acknowledgement by setting the `batchIndexAcknowledgeEnable` parameter to `true` at the broker side. Enabling batch index acknowledgement may bring more memory overheads. So, perform this operation with caution.
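+
+Assuming the broker-side flag above is set, a consumer can opt in through the Java client's `enableBatchIndexAcknowledgment` builder option; a minimal sketch (topic and subscription names are placeholders):
+
+```java
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("my-topic")                    // placeholder topic
+        .subscriptionName("my-subscription")  // placeholder subscription
+        .enableBatchIndexAcknowledgment(true) // opt in to batch index acknowledgement
+        .subscribe();
+```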
+
+## Consumers
+
+A consumer is a process that attaches to a topic via a subscription and then receives messages.
+
+A consumer sends a [flow permit request](developing-binary-protocol.md#flow-control) to a broker to get messages. There is a queue at the consumer side to receive messages pushed from the broker. The queue size is configurable by [`receiverQueueSize`](client-libraries-java.md#configure-consumer) (default: 1000). Each time `consumer.receive()` is called, a message is dequeued from the buffer.  
+
+### Receive modes
+
+Messages can be received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).
+
+| Mode          | Description                                                                                                                                                                                                   |
+|:--------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Sync receive  | A sync receive will be blocked until a message is available.                                                                                                                                                  |
+| Async receive | An async receive will return immediately with a future value---a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java, for example---that completes once a new message is available. |
+
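+A minimal Java sketch of both modes (`consumer` is assumed to already exist):
+
+```java
+// Sync receive: blocks until a message is available
+Message<byte[]> msg = consumer.receive();
+consumer.acknowledge(msg);
+
+// Async receive: the future completes once a message is available
+consumer.receiveAsync()
+        .thenAccept(m -> consumer.acknowledgeAsync(m));
+```
+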
+### Listeners
+
+Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.
+
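+A sketch of a listener-based consumer in Java (topic and subscription names are placeholders):
+
+```java
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("my-topic")                   // placeholder topic
+        .subscriptionName("my-subscription") // placeholder subscription
+        .messageListener((c, msg) -> {
+            // Process the message, then acknowledge it
+            c.acknowledgeAsync(msg);
+        })
+        .subscribe();
+```
+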
+### Acknowledgement
+
+When a consumer has consumed a message successfully, the consumer sends an acknowledgement request to the broker. The message is permanently [stored](concepts-architecture-overview.md#persistent-storage), and it is deleted only after all the subscriptions have acknowledged it. If you want to store messages that have been acknowledged by a consumer, you need to configure the [message retention policy](concepts-messaging.md#message-retention-and-expiry).
+
+For a batch message, if batch index acknowledgement is enabled, the broker maintains the batch index acknowledgement status and tracks the acknowledgement status of each batch index to avoid dispatching acknowledged messages to the consumer. When all indexes of the batch message are acknowledged, the batch message is deleted. For details about the batch index acknowledgement, see [batching](#batching).
+
+Messages can be acknowledged in one of the following two ways:
+
+- Messages are acknowledged individually. With individual acknowledgement, the consumer acknowledges each message and sends an acknowledgement request to the broker.
+- Messages are acknowledged cumulatively. With cumulative acknowledgement, the consumer only needs to acknowledge the last message it received. All messages in the stream up to (and including) the provided message are not redelivered to that consumer.
+
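+A minimal Java sketch of the two styles (`consumer` and `msg` are assumed to already exist):
+
+```java
+// Individual acknowledgement
+consumer.acknowledge(msg);
+
+// Cumulative acknowledgement: acknowledges msg and all earlier messages in the stream
+consumer.acknowledgeCumulative(msg);
+```
+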
+> #### Note
+> 
+> Cumulative acknowledgement cannot be used in the [shared subscription mode](#subscription-modes), because the shared subscription mode involves multiple consumers having access to the same subscription. In the shared subscription mode, messages can be acknowledged individually.
+
+### Negative acknowledgement
+
+When a consumer does not consume a message successfully and wants to consume it again, the consumer can send a negative acknowledgement to the broker. The broker then redelivers the message.
+
+Messages can be negatively acknowledged one by one or cumulatively, depending on the subscription mode.
+
+In the exclusive and failover subscription modes, consumers negatively acknowledge only the last message they have received.
+
+In the shared and Key_Shared subscription modes, you can negatively acknowledge messages individually.
+
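+A sketch in Java (the redelivery delay shown is illustrative):
+
+```java
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("my-topic")                               // placeholder topic
+        .subscriptionName("my-subscription")             // placeholder subscription
+        .negativeAckRedeliveryDelay(1, TimeUnit.MINUTES) // wait before redelivering
+        .subscribe();
+
+Message<byte[]> msg = consumer.receive();
+// Processing failed, so ask the broker to redeliver the message later
+consumer.negativeAcknowledge(msg);
+```
+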
+> Note
+> If batching is enabled, other messages in the same batch may be redelivered to the consumer as well as the negatively acknowledged messages.
+
+### Acknowledgement timeout
+
+When a message is not consumed successfully and you want the broker to redeliver it automatically, you can adopt the unacknowledged message automatic re-delivery mechanism. If an acknowledgement timeout is specified, the client tracks unacknowledged messages within the entire `ackTimeout` window and automatically sends a `redeliver unacknowledged messages` request to the broker when the timeout expires.
+
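+For example, with the Java client (the timeout value is illustrative):
+
+```java
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("my-topic")                   // placeholder topic
+        .subscriptionName("my-subscription") // placeholder subscription
+        .ackTimeout(10, TimeUnit.SECONDS)    // unacked messages are redelivered after 10 seconds
+        .subscribe();
+```
+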
+> Note
+> If batching is enabled, other messages in the same batch may be redelivered to the consumer as well as the unacknowledged messages.
+
+> Note    
+> Prefer negative acknowledgements over acknowledgement timeout. Negative acknowledgement controls the re-delivery of individual messages with more precision, and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout.
+
+### Dead letter topic
+
+Dead letter topic enables you to consume new messages when some messages cannot be consumed successfully by a consumer. In this mechanism, messages that fail to be consumed are stored in a separate topic, which is called the dead letter topic. You can decide how to handle messages in the dead letter topic.
+
+The following example shows how to enable dead letter topic in a Java client using the default dead letter topic:
+
+```java
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+              .topic(topic)
+              .subscriptionName("my-subscription")
+              .subscriptionType(SubscriptionType.Shared)
+              .deadLetterPolicy(DeadLetterPolicy.builder()
+                    .maxRedeliverCount(maxRedeliveryCount)
+                    .build())
+              .subscribe();
+                
+```
+The default dead letter topic uses this format: 
+```
+<topicname>-<subscriptionname>-DLQ
+```
+  
+If you want to specify the name of the dead letter topic, use this Java client example:
+
+```java
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+              .topic(topic)
+              .subscriptionName("my-subscription")
+              .subscriptionType(SubscriptionType.Shared)
+              .deadLetterPolicy(DeadLetterPolicy.builder()
+                    .maxRedeliverCount(maxRedeliveryCount)
+                    .deadLetterTopic("your-topic-name")
+                    .build())
+              .subscribe();
+                
+```
+  
+Dead letter topic depends on message re-delivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout. 
+
+> Note    
+> Currently, dead letter topic is enabled only in the shared subscription mode.
+
+### Retry letter topic
+
+For many online business systems, a message needs to be re-consumed when an exception occurs in the business logic processing. Generally, users hope that they can flexibly configure the delay time for re-consuming the failed messages. In this case, you can configure the producer to send messages to both the business topic and the retry letter topic, and enable automatic retry on the consumer. When automatic retry is enabled on the consumer, a message is stored in the retry letter topic if it is not consumed successfully, and the consumer automatically consumes the failed message from the retry letter topic after a specified delay time.
+
+By default, automatic retry is disabled. You can set `enableRetry` to `true` to enable automatic retry on the consumer.
+
+This example shows how to consume messages from a retry letter topic.
+
+```java
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+                .topic(topic)
+                .subscriptionName("my-subscription")
+                .subscriptionType(SubscriptionType.Shared)
+                .enableRetry(true)
+                .receiverQueueSize(100)
+                .deadLetterPolicy(DeadLetterPolicy.builder()
+                        .maxRedeliverCount(maxRedeliveryCount)
+                        .retryLetterTopic("persistent://my-property/my-ns/my-subscription-custom-Retry")
+                        .build())
+                .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
+                .subscribe();
+```
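+
+When processing fails, the consumer can hand the message to the retry mechanism instead of negatively acknowledging it. A sketch, assuming the consumer above (`reconsumeLater` is available since 2.6.0):
+
+```java
+Message<byte[]> msg = consumer.receive();
+try {
+    // Process the message
+    consumer.acknowledge(msg);
+} catch (Exception e) {
+    // Redeliver the message through the retry letter topic after 10 seconds
+    consumer.reconsumeLater(msg, 10, TimeUnit.SECONDS);
+}
+```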
+
+## Topics
+
+As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from [producers](reference-terminology.md#producer) to [consumers](reference-terminology.md#consumer). Topic names are URLs that have a well-defined structure:
+
+```http
+{persistent|non-persistent}://tenant/namespace/topic
+```
+
+Topic name component | Description
+:--------------------|:-----------
+`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics) (persistent is the default, so if you don't specify a type the topic will be persistent). With persistent topics, all messages are durably [persisted](concepts-architecture-overview.md#persistent-storage) on disk (that means on multiple disks unless the broker is standalone), whereas data for [non-persistent](#non-persistent-topics) topics is not persisted to storage disks.
+`tenant`             | The topic's tenant within the instance. Tenants are essential to multi-tenancy in Pulsar and can be spread across clusters.
+`namespace`          | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant can have multiple namespaces.
+`topic`              | The final part of the name. Topic names are freeform and have no special meaning in a Pulsar instance.
+
+
+> #### No need to explicitly create new topics
+> You don't need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar will automatically create that topic under the [namespace](#namespaces) provided in the [topic name](#topics).
+
+## Namespaces
+
+A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. The name `my-tenant/app1` identifies the namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace.
+
+## Subscriptions
+
+A subscription is a named configuration rule that determines how messages are delivered to consumers. There are four available subscription modes in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These modes are illustrated in the figure below.
+
+![Subscription modes](assets/pulsar-subscription-modes.png)
+
+> #### Pub-Sub, Queuing, or Both
+> There is a lot of flexibility in how to combine subscriptions:
+> * If you want to achieve traditional "fan-out pub-sub messaging" among consumers, you can make each consumer have a unique subscription name (exclusive)
+> * If you want to achieve "message queuing" among consumers, you can make multiple consumers have the same subscription name (shared, failover, key_shared)
+> * If you want to do both simultaneously, you can have some consumers use exclusive subscriptions while others use shared ones
+
+### Exclusive
+
+In *exclusive* mode, only a single consumer is allowed to attach to the subscription. If more than one consumer attempts to subscribe to a topic using the same subscription, the consumer receives an error.
+
+In the diagram below, only **Consumer A-0** is allowed to consume messages.
+
+> Exclusive mode is the default subscription mode.
+
+![Exclusive subscriptions](assets/pulsar-exclusive-subscriptions.png)
+
+### Failover
+
+In *failover* mode, multiple consumers can attach to the same subscription. The broker selects the master consumer based on the priority level and the lexicographical sorting of consumer names: if two consumers have an identical priority level, the broker selects the master consumer based on the lexicographical sorting; if the two consumers have different priority levels, the broker selects the consumer with the higher priority level as the master consumer. The master consumer initially receives all messages. When the master consumer disconnects, all (non-acknowledged and subsequent) messages are delivered to the next consumer in line.
+
+For partitioned topics, the broker assigns partitions to the consumer with the highest priority level. If multiple consumers have the highest priority level, the broker evenly assigns partitions among those consumers.
+
+In the diagram below, **Consumer-B-0** is the master consumer while **Consumer-B-1** would be the next consumer in line to receive messages if **Consumer-B-0** is disconnected.
+
+![Failover subscriptions](assets/pulsar-failover-subscriptions.png)
+
+### Shared
+
+In *shared* or *round robin* mode, multiple consumers can attach to the same subscription. Messages are delivered in a round robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged will be rescheduled for sending to the remaining consumers.
+
+In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could be as well.
+
+> #### Limitations of shared mode
+> When using shared mode, be aware that:
+> * Message ordering is not guaranteed.
+> * You cannot use cumulative acknowledgment with shared mode.
+
+![Shared subscriptions](assets/pulsar-shared-subscriptions.png)
+
+### Key_Shared
+
+In *Key_Shared* mode, multiple consumers can attach to the same subscription. Messages are delivered across consumers such that messages with the same key or the same ordering key are delivered to only one consumer. No matter how many times a message is redelivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving messages for some keys changes.
+
+> #### Limitations of Key_Shared mode
+> When using Key_Shared mode, be aware that:
+> * You need to specify a key or orderingKey for messages
+> * You cannot use cumulative acknowledgment with Key_Shared mode.
+
+![Key_Shared subscriptions](assets/pulsar-key-shared-subscriptions.png)
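+
+A sketch of a Key_Shared consumer in Java (topic and subscription names are placeholders):
+
+```java
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("my-topic")                   // placeholder topic
+        .subscriptionName("my-subscription") // placeholder subscription
+        .subscriptionType(SubscriptionType.Key_Shared)
+        .subscribe();
+```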
+
+**You can disable Key_Shared subscription in the `broker.conf` file.**
+
+## Multi-topic subscriptions
+
+When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:
+
+* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
+* By explicitly defining a list of topics
+
+> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces)
+
+When subscribing to multiple topics, the Pulsar client will automatically make a call to the Pulsar API to discover the topics that match the regex pattern/list and then subscribe to all of them. If any of the topics don't currently exist, the consumer will auto-subscribe to them once the topics are created.
+
+> #### No ordering guarantees across multiple topics
+> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends message to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.
+
+Here are some multi-topic subscription examples for Java:
+
+```java
+import java.util.regex.Pattern;
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+PulsarClient pulsarClient = // Instantiate Pulsar client object
+
+// Subscribe to all topics in a namespace
+Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
+Consumer<byte[]> allTopicsConsumer = pulsarClient.newConsumer()
+                .topicsPattern(allTopicsInNamespace)
+                .subscriptionName("subscription-1")
+                .subscribe();
+
+// Subscribe to a subset of topics in a namespace, based on regex
+Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
+Consumer<byte[]> someTopicsConsumer = pulsarClient.newConsumer()
+                .topicsPattern(someTopicsInNamespace)
+                .subscriptionName("subscription-1")
+                .subscribe();
+```
+
+For code examples, see:
+
+* [Java](client-libraries-java.md#multi-topic-subscriptions)
+
+## Partitioned topics
+
+Normal topics can be served only by a single broker, which limits the topic's maximum throughput. *Partitioned topics* are a special type of topic that can be handled by multiple brokers, which allows for much higher throughput.
+
+Behind the scenes, a partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.
+
+The diagram below illustrates this:
+
+![](assets/partitioning.png)
+
+Here, the topic **Topic1** has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).
+
+Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message should be published to, while the [subscription mode](#subscription-modes) determines which messages go to which consumers.
+
+Decisions about routing and subscription modes can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions while subscription decisions should be guided by application semantics.
+
+There is no difference between partitioned topics and normal topics in terms of how subscription modes work, as partitioning only determines what happens between when a message is published by a producer and processed and acknowledged by a consumer.
+
+Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.
+
+### Routing modes
+
+When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.
+
+There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} available:
+
+Mode     | Description 
+:--------|:------------
+`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; instead, it is applied at the same boundary as the batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
+`SinglePartition`     | If no key is provided, the producer randomly picks one single partition and publishes all messages into that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
+`CustomPartition`     | Use a custom message router implementation that is called to determine the partition for a particular message. Users can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
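+
+As a sketch, selecting a routing mode with the Java client looks like this (the topic name is a placeholder):
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-partitioned-topic") // placeholder topic
+        .messageRoutingMode(MessageRoutingMode.SinglePartition)
+        .create();
+```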
+
+### Ordering guarantee
+
+The ordering of messages is related to the `MessageRoutingMode` and the message key. Usually, users want a per-key-partition ordering guarantee.
+
+If there is a key attached to message, the messages will be routed to corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either `SinglePartition` or `RoundRobinPartition` mode.
+
+Ordering guarantee | Description | Routing Mode and Key
+:------------------|:------------|:------------
+Per-key-partition  | All the messages with the same key will be in order and be placed in the same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and Key is provided by each message.
+Per-producer       | All the messages from the same producer will be in order. | Use `SinglePartition` mode, and no Key is provided for each message.
+
+### Hashing scheme
+
+{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message.
+
+There are two types of standard hashing functions available: `JavaStringHash` and `Murmur3_32Hash`.
+The default hashing function for the producer is `JavaStringHash`.
+Note that `JavaStringHash` is not useful when producers are implemented in multiple language clients; in this case, it is recommended to use `Murmur3_32Hash`.
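+
+A sketch of selecting the hashing scheme on the Java producer builder (the topic name is a placeholder):
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-partitioned-topic") // placeholder topic
+        .hashingScheme(HashingScheme.Murmur3_32Hash)
+        .create();
+```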
+
+
+
+## Non-persistent topics
+
+
+By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
+
+Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.
+
+Non-persistent topics have names of this form (note the `non-persistent` in the name):
+
+```http
+non-persistent://tenant/namespace/topic
+```
+
+> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).
+
+In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.
+
+> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.
+
+By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the [`pulsar-admin topics`](reference-pulsar-admin.md#topics) interface.
+
+### Performance
+
+Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages, and they immediately send acks back to the producer as soon as a message is delivered to connected consumers. Producers thus see comparatively low publish latency with non-persistent topics.
+
+### Client API
+
+Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription modes---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.
+
+Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:
+
+```java
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+String npTopic = "non-persistent://public/default/my-topic";
+String subscriptionName = "my-subscription-name";
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic(npTopic)
+        .subscriptionName(subscriptionName)
+        .subscribe();
+```
+
+Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:
+
+```java
+Producer<byte[]> producer = client.newProducer()
+                .topic(npTopic)
+                .create();
+```
+
+## Message retention and expiry
+
+By default, Pulsar message brokers:
+
+* immediately delete *all* messages that have been acknowledged by a consumer, and
+* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.
+
+Pulsar has two features, however, that enable you to override this default behavior:
+
+* Message **retention** enables you to store messages that have been acknowledged by a consumer
+* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged
+
+> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.
+
+The diagram below illustrates both concepts:
+
+![Message retention and expiry](assets/retention-expiry.png)
+
+With message retention, shown at the top, a <span style="color: #89b557;">retention policy</span> applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are <span style="color: #bb3b3e;">deleted</span>. Without a retention policy, *all* of the <span style="color: #19967d;">acknowledged messages</span> would be deleted.
+
+With message expiry, shown at the bottom, some messages are <span style="color: #bb3b3e;">deleted</span>, even though they <span style="color: #337db6;">haven't been acknowledged</span>, because they've expired according to the <span style="color: #e39441;">TTL applied to the namespace</span> (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).
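+
+As an illustrative sketch, both policies can be set at the namespace level with the Java admin client (the service URL, namespace, and values are placeholders):
+
+```java
+import org.apache.pulsar.client.admin.PulsarAdmin;
+import org.apache.pulsar.common.policies.data.RetentionPolicies;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
+        .build();
+
+// Retain acknowledged messages for 60 minutes, up to 1024 MB
+admin.namespaces().setRetention("my-tenant/my-ns", new RetentionPolicies(60, 1024));
+
+// Expire unacknowledged messages after 300 seconds (TTL)
+admin.namespaces().setNamespaceMessageTTL("my-tenant/my-ns", 300);
+```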
+
+## Message deduplication
+
+Message **duplication** occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message ***de*duplication** is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, *even if the message is received more than once*.
+
+The following diagram illustrates what happens when message deduplication is disabled vs. enabled:
+
+![Pulsar message deduplication](assets/message-deduplication.png)
+
+
+Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.
+
+In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.
+
+> Message deduplication is handled at the namespace level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).
+
+
+### Producer idempotency
+
+The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, this is handled at the [broker](reference-terminology.md#broker) level, which means that you don't need to modify your Pulsar client code. Instead, you only need to make administrative changes (see the [Managing message deduplication](cookbooks-deduplication.md) cookbook for a guide).
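+
+On the client side, the [message deduplication cookbook](cookbooks-deduplication.md) calls for giving producers a stable name and an infinite send timeout. A minimal sketch (topic and producer names are placeholders):
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic")                // placeholder topic
+        .producerName("my-producer")      // a stable producer name is required for deduplication
+        .sendTimeout(0, TimeUnit.SECONDS) // infinite send timeout
+        .create();
+```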
+
+### Deduplication and effectively-once semantics
+
+Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide [effectively-once](https://streaml.io/blog/exactly-once) processing semantics. Messaging systems that don't offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication.
+
+> More in-depth information can be found in [this post](https://streaml.io/blog/pulsar-effectively-once/) on the [Streamlio blog](https://streaml.io/blog)
+
+## Delayed message delivery
+Delayed message delivery enables you to consume a message later rather than immediately. In this mechanism, a message is stored in BookKeeper. After the message is published to a broker, the `DelayedDeliveryTracker` maintains the time index (time -> messageId) in memory, and the message is delivered to a consumer once the specified delay has passed.
+
+Delayed message delivery only works well in the Shared subscription mode. In the Exclusive and Failover subscription modes, delayed messages are dispatched immediately.
+
+The diagram below illustrates the concept of delayed message delivery:
+
+![Delayed Message Delivery](assets/message_delay.png)
+
+The broker saves a message without any check. When a consumer consumes a message, if the message is set to be delayed, the message is added to `DelayedDeliveryTracker`. A subscription checks `DelayedDeliveryTracker` and gets the messages whose delay has expired.
+
+### Broker 
+Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:
+
+```
+# Whether to enable the delayed delivery for messages.
+# If disabled, messages are immediately delivered and there is no tracking overhead.
+delayedDeliveryEnabled=true
+
+# Control the ticking time for the retry of delayed message delivery,
+# affecting the accuracy of the delivery time compared to the scheduled time.
+# Default is 1 second.
+delayedDeliveryTickTimeMillis=1000
+```
+
+### Producer 
+The following is an example of delayed message delivery for a producer in Java:
+```java
+// message to be delivered at the configured delay interval
+producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();
+```
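+
+If you need an absolute delivery time rather than a relative delay, the Java client also provides `deliverAt`; the timestamp below is a placeholder:
+
+```java
+// Deliver the message at a fixed point in time, expressed as epoch milliseconds
+long timestamp = System.currentTimeMillis() + TimeUnit.MINUTES.toMillis(3);
+producer.newMessage().deliverAt(timestamp).value("Hello Pulsar!").send();
+```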
diff --git a/site2/website/versioned_docs/version-2.6.0/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.6.0/cookbooks-tiered-storage.md
new file mode 100644
index 0000000..a631f18
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/cookbooks-tiered-storage.md
@@ -0,0 +1,301 @@
+---
+id: version-2.6.0-cookbooks-tiered-storage
+title: Tiered Storage
+sidebar_label: Tiered Storage
+original_id: cookbooks-tiered-storage
+---
+
+Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.
+
+* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support
+[Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short)
+for long term storage. With jclouds, it is easy to add support for more
+[cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.
+
+* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystems for long term storage.
+With Hadoop, it is easy to add support for more filesystems in the future.
+
+## When should I use Tiered Storage?
+
+Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.
+
+## The offloading mechanism
+
+A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed, and the data within them is immutable. This is known as a segment-oriented architecture.
+
+![Tiered storage](assets/pulsar-tiered-storage.png "Tiered Storage")
+
+The Tiered Storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.
+
+On the broker, the administrator must configure the bucket and credentials for the cloud storage service.
+The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.
+
+Pulsar uses multi-part objects to upload the segment data. It is possible that a broker could crash while uploading the data.
+We recommend you add a lifecycle rule to your bucket to expire incomplete multi-part uploads after a day or two to avoid
+getting charged for incomplete uploads.
+
+When ledgers are offloaded to long term storage, you can still query data in the offloaded ledgers with Pulsar SQL.
+
+## Configuring the offload driver
+
+Offloading is configured in `broker.conf`.
+
+At a minimum, the administrator must configure the driver, the bucket and the authenticating credentials.
+There are also some other knobs to configure, like the bucket region, the maximum block size in the backing store, and so on.
+
+Currently, the following driver types are supported:
+
+- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/)
+- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/)
+- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/)
+
+> Driver names are case-insensitive. There is a third driver type, `s3`, which is identical to `aws-s3`,
+> except that it requires you to specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful
+> when using an S3-compatible data store other than AWS.
+
+```conf
+managedLedgerOffloadDriver=aws-s3
+```
+
+### "aws-s3" Driver configuration
+
+#### Bucket and Region
+
+Buckets are the basic containers that hold your data.
+Everything that you store in Cloud Storage must be contained in a bucket.
+You can use buckets to organize your data and control access to your data,
+but unlike directories and folders, you cannot nest buckets.
+
+```conf
+s3ManagedLedgerOffloadBucket=pulsar-topic-offload
+```
+
+The bucket region is the region where the bucket is located. It is not required, but it is a recommended
+configuration. If it is not configured, the default region is used.
+
+With AWS S3, the default region is `US East (N. Virginia)`. Page
+[AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information.
+
+```conf
+s3ManagedLedgerOffloadRegion=eu-west-3
+```
+
+#### Authentication with AWS
+
+To be able to access AWS S3, you need to authenticate with AWS S3.
+Pulsar does not provide any direct means of configuring authentication for AWS S3,
+but relies on the mechanisms supported by the
+[DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).
+
+Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.
+
+1. Using EC2 instance metadata credentials
+
+If you are on an AWS instance with an instance profile that provides credentials, Pulsar uses these credentials
+if no other mechanism is provided.
+
+2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in `conf/pulsar_env.sh`.
+
+```bash
+export AWS_ACCESS_KEY_ID=ABC123456789
+export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
+```
+
+> \"export\" is important so that the variables are made available in the environment of spawned processes.
+
+
+3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.
+
+```bash
+PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024"
+```
+
+4. Set the access credentials in `~/.aws/credentials`.
+
+```conf
+[default]
+aws_access_key_id=ABC123456789
+aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
+```
+
+5. Assuming an IAM role
+
+If you want to assume an IAM role, you can do so by specifying the following:
+
+```conf
+s3ManagedLedgerOffloadRole=<aws role arn>
+s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload
+```
+
+This will use the `DefaultAWSCredentialsProviderChain` for assuming this role.
+
+> The broker must be restarted for credentials specified in `pulsar_env.sh` to take effect.
+
+#### Configuring the size of block read/write
+
+Pulsar also provides some knobs to configure the size of requests sent to AWS S3.
+
+- `s3ManagedLedgerOffloadMaxBlockSizeInBytes` configures the maximum size of
+  a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
+- `s3ManagedLedgerOffloadReadBufferSizeInBytes` configures the block size for
+  each individual read when reading back data from AWS S3. Default is 1MB.
+
+In both cases, these should not be touched unless you know what you are doing.
+
+### "google-cloud-storage" Driver configuration
+
+Buckets are the basic containers that hold your data. Everything that you store in
+Cloud Storage must be contained in a bucket. You can use buckets to organize your data and
+control access to your data, but unlike directories and folders, you cannot nest buckets.
+
+```conf
+gcsManagedLedgerOffloadBucket=pulsar-topic-offload
+```
+
+The bucket region is the region where the bucket is located. It is not required, but it is a recommended
+configuration. If it is not configured, the default region is used.
+
+With GCS, buckets are created in the `us` multi-regional location by default. The
+[Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) page contains more information.
+
+```conf
+gcsManagedLedgerOffloadRegion=europe-west3
+```
+
+#### Authentication with GCS
+
+The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf`
+for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is
+a JSON file containing the GCS credentials of a service account.
+The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains
+more information on how to create this key file for authentication. More information about Google Cloud IAM
+is available [here](https://cloud.google.com/storage/docs/access-control/iam).
+
+To generate service account credentials, or to view the public credentials that you've already generated, perform the following steps:
+
+1. Open the [Service accounts page](https://console.developers.google.com/iam-admin/serviceaccounts).
+2. Select a project or create a new one.
+3. Click **Create service account**.
+4. In the **Create service account** window, type a name for the service account, and select **Furnish a new private key**. If you want to [grant G Suite domain-wide authority](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority) to the service account, also select **Enable G Suite Domain-wide Delegation**.
+5. Click **Create**.
+
+> Note: Make sure that the service account you create has permission to operate GCS. You need to assign the **Storage Admin** permission to your service account, as described [here](https://cloud.google.com/storage/docs/access-control/iam).
+
+```conf
+gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"
+```
+
+#### Configuring the size of block read/write
+
+Pulsar also provides some knobs to configure the size of requests sent to GCS.
+
+- `gcsManagedLedgerOffloadMaxBlockSizeInBytes` configures the maximum size of a "part" sent
+  during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
+- `gcsManagedLedgerOffloadReadBufferSizeInBytes` configures the block size for each individual
+  read when reading back data from GCS. Default is 1MB.
+
+In both cases, these should not be touched unless you know what you are doing.
+
+### "filesystem" Driver configuration
+
+
+#### Configure connection address
+
+You can configure the connection address in the `broker.conf` file.
+
+```conf
+fileSystemURI="hdfs://127.0.0.1:9000"
+```
+
+#### Configure Hadoop profile path
+
+The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.
+
+```conf
+fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"
+```
+
+The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configuration settings supported by `org.apache.hadoop.io.MapFile` in Hadoop.
+
+**Example**
+
+```conf
+<configuration>
+    <property>
+        <name>fs.defaultFS</name>
+        <value></value>
+    </property>
+
+    <property>
+        <name>hadoop.tmp.dir</name>
+        <value>pulsar</value>
+    </property>
+
+    <property>
+        <name>io.file.buffer.size</name>
+        <value>4096</value>
+    </property>
+
+    <property>
+        <name>io.seqfile.compress.blocksize</name>
+        <value>1000000</value>
+    </property>
+
+    <property>
+        <name>io.seqfile.compression.type</name>
+        <value>BLOCK</value>
+    </property>
+
+    <property>
+        <name>io.map.index.interval</name>
+        <value>128</value>
+    </property>
+</configuration>
+
+For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).
+
+## Configuring offload to run automatically
+
+Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered. Setting the threshold to a negative value disables automatic offloading. Setting the threshold to 0 causes the broker to offload data as soon as it possibly can.
+
+```bash
+$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
+```
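+
+If you manage namespaces programmatically, the same policy can be set with the Java admin client. The following is a minimal sketch; the admin endpoint and namespace name are placeholders:
+
+```java
+import org.apache.pulsar.client.admin.PulsarAdmin;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
+        .build();
+
+// 10 MiB threshold; a negative value disables automatic offload, 0 offloads as soon as possible
+admin.namespaces().setOffloadThreshold("my-tenant/my-namespace", 10L * 1024 * 1024);
+```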
+
+> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload is not triggered until the current segment is full.
+
+
+## Triggering offload manually
+
+Offloading can be triggered manually through a REST endpoint on the Pulsar broker. We provide a CLI command that calls this REST endpoint for you.
+
+When triggering offload, you must specify the maximum size, in bytes, of backlog to be retained locally in BookKeeper. The offload mechanism offloads segments from the start of the topic backlog until this condition is met.
+
+```bash
+$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
+Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
+```
+
+The command that triggers an offload does not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.
+
+```bash
+$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
+Offload is currently running
+```
+
+To wait for the offload to complete, add the `-w` flag.
+
+```bash
+$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
+Offload was a success
+```
+
+If there is an error during offloading, the error is propagated to the `offload-status` command.
+
+```bash
+$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
+Error in offload
+null
+
+Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads.  Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhr [...]
+```
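+
+The same trigger-and-poll flow is available through the Java admin client. The following is a minimal sketch; the admin endpoint, topic name, and the use of `MessageId.latest` (to request offload of as much of the backlog as possible) are illustrative assumptions:
+
+```java
+import org.apache.pulsar.client.admin.PulsarAdmin;
+import org.apache.pulsar.client.api.MessageId;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
+        .build();
+
+String topic = "persistent://my-tenant/my-namespace/topic1";
+
+// Ask the broker to offload sealed segments up to the given message ID
+admin.topics().triggerOffload(topic, MessageId.latest);
+
+// Poll the status of the long-running offload operation
+System.out.println(admin.topics().offloadStatus(topic).status);
+```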
+
diff --git a/site2/website/versioned_docs/version-2.6.0/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.6.0/deploy-kubernetes.md
new file mode 100644
index 0000000..325ebef
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/deploy-kubernetes.md
@@ -0,0 +1,11 @@
+---
+id: version-2.6.0-deploy-kubernetes
+title: Deploy Pulsar on Kubernetes
+sidebar_label: Kubernetes
+original_id: deploy-kubernetes
+---
+
+To get up and running with these charts as fast as possible, in a **non-production** use case, we provide
+a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
+
+To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.6.0/developing-load-manager.md b/site2/website/versioned_docs/version-2.6.0/developing-load-manager.md
new file mode 100644
index 0000000..652dcf8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/developing-load-manager.md
@@ -0,0 +1,215 @@
+---
+id: version-2.6.0-develop-load-manager
+title: Modular load manager
+sidebar_label: Modular load manager
+original_id: develop-load-manager
+---
+
+The *modular load manager*, implemented in  [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load  [...]
+
+## Usage
+
+There are two ways that you can enable the modular load manager:
+
+1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
+2. Using the `pulsar-admin` tool. Here's an example:
+
+   ```shell
+   $ pulsar-admin brokers update-dynamic-config \
+     --config loadManagerClassName \
+     --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
+   ```
+
+   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
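+
+The same dynamic configuration can also be updated through the Java admin client. A minimal sketch, assuming a local admin endpoint:
+
+```java
+import org.apache.pulsar.client.admin.PulsarAdmin;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
+        .build();
+
+// Switch the active load manager implementation at runtime
+admin.brokers().updateDynamicConfiguration(
+    "loadManagerClassName",
+    "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl");
+```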
+
+## Verification
+
+There are a few different ways to determine which load manager is being used:
+
+1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
+
+    ```shell
+   $ bin/pulsar-admin brokers get-all-dynamic-config
+   {
+     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
+   }
+   ```
+
+   If there is no `loadManagerClassName` element, then the default load manager is used.
+
+2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` has many differences; for example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
+
+    ```json
+    {
+      "bandwidthIn": {
+        "limit": 10240000.0,
+        "usage": 4.256510416666667
+      },
+      "bandwidthOut": {
+        "limit": 10240000.0,
+        "usage": 5.287239583333333
+      },
+      "bundles": [],
+      "cpu": {
+        "limit": 2400.0,
+        "usage": 5.7353247655435915
+      },
+      "directMemory": {
+        "limit": 16384.0,
+        "usage": 1.0
+      }
+    }
+    ```
+
+    With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
+
+    ```json
+    {
+      "systemResourceUsage": {
+        "bandwidthIn": {
+          "limit": 10240000.0,
+          "usage": 0.0
+        },
+        "bandwidthOut": {
+          "limit": 10240000.0,
+          "usage": 0.0
+        },
+        "cpu": {
+          "limit": 2400.0,
+          "usage": 0.0
+        },
+        "directMemory": {
+          "limit": 16384.0,
+          "usage": 1.0
+        },
+        "memory": {
+          "limit": 8192.0,
+          "usage": 3903.0
+        }
+      }
+    }
+    ```
+
+3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
+
+    Here is an example from the modular load manager:
+
+    ```
+    ===================================================================================================================
+    ||SYSTEM         |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+    ||               |0.00           |48.33          |0.01           |0.00           |0.00           |48.33          ||
+    ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
+    ||               |4              |4              |0              |2              |4              |0              ||
+    ||LATEST         |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+    ||SHORT          |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+    ||LONG           |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+    ===================================================================================================================
+    ```
+
+    Here is an example from the simple load manager:
+
+    ```
+    ===================================================================================================================
+    ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
+    ||               |4              |4              |0              |2              |0              |0              ||
+    ||RAW SYSTEM     |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+    ||               |0.25           |47.94          |0.01           |0.00           |0.00           |47.94          ||
+    ||ALLOC SYSTEM   |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+    ||               |0.20           |1.89           |               |1.27           |3.21           |3.21           ||
+    ||RAW MSG        |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.01           |0.01           |0.01           ||
+    ||ALLOC MSG      |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |54.84          |134.48         |189.31         |126.54         |320.96         |447.50         ||
+    ===================================================================================================================
+    ```
+
+It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle (whether it's been seen before or whether this is the first time) are handled only by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
+
+## Implementation
+
+### Data
+
+The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
+Here, the available data is subdivided into the bundle data and the broker data.
+
+#### Broker
+
+The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
+one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
+data which is written to ZooKeeper by the leader broker.
+
+##### Local Broker Data
+The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
+
+* CPU usage
+* JVM heap memory usage
+* Direct memory usage
+* Bandwidth in/out usage
+* Most recent total message rate in/out across all bundles
+* Total number of topics, bundles, producers, and consumers
+* Names of all bundles assigned to this broker
+* Most recent changes in bundle assignments for this broker
+
+The local broker data is updated periodically according to the service configuration
+`loadBalancerReportUpdateMaxIntervalMinutes`. After any broker updates its local broker data, the leader broker
+receives the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
+`/loadbalance/brokers/<broker host/port>`.
+
+##### Historical Broker Data
+
+The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
+
+In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
+
+* Message rate in/out for the entire broker
+* Message throughput in/out for the entire broker
+
+Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as bundles are added or removed. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
+
+The historical broker data is updated in memory for each broker by the leader broker whenever any broker writes its local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
+
+##### Bundle Data
+
+The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java) class. Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. Each time frame maintains the following information:
+
+* Message rate in/out for this bundle
+* Message Throughput In/Out for this bundle
+* Current number of samples for this bundle
+
+The time frames are implemented by maintaining the average of these values over a fixed, limited number of samples, where
+the samples are obtained from the message rate and throughput values in the local data. Thus, if the update interval
+for the local data is 2 minutes, the number of short samples is 10, and the number of long samples is 1000, the
+short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
+data similarly covers a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame,
+the average is taken only over the existing samples. When no samples are available, default values are assumed until
+they are overwritten by the first sample. Currently, the default values are
+
+* Message rate in/out: 50 messages per second both ways
+* Message throughput in/out: 50KB per second both ways
+
+The bundle data is updated in memory on the leader broker whenever any broker writes their local data to ZooKeeper.
+Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
+broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
+
+### Traffic Distribution
+
+The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](h [...]
+
+#### Least Long Term Message Rate Strategy
+
+As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
+the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
+on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
+resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
+assignment process. This is done by weighting the final message rate according to
+`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
+`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
+that is being utilized by the candidate broker. This multiplier ensures that machines that are being more heavily taxed
+by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded,
+then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload
+threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
+assigned.
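+
+The following is a minimal sketch of the weighting described above, assuming usages and the threshold are expressed as percentages; it is illustrative rather than the actual Pulsar implementation:
+
+```java
+// Weight the long-term message rate by 1 / (overload_threshold - max_usage).
+// Brokers at or above the overload threshold are excluded from consideration.
+static double weightedMessageRate(double longTermMsgRate,
+                                  double maxUsagePercent,
+                                  double overloadThresholdPercent) {
+    if (maxUsagePercent >= overloadThresholdPercent) {
+        return Double.POSITIVE_INFINITY; // overloaded: not a candidate
+    }
+    return longTermMsgRate / (overloadThresholdPercent - maxUsagePercent);
+}
+```
+
+The strategy then prefers the candidate with the smallest weighted rate.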
+
diff --git a/site2/website/versioned_docs/version-2.6.0/functions-cli.md b/site2/website/versioned_docs/version-2.6.0/functions-cli.md
new file mode 100644
index 0000000..e3b54b6
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/functions-cli.md
@@ -0,0 +1,198 @@
+---
+id: version-2.6.0-functions-cli
+title: Pulsar Functions command line tool
+sidebar_label: Reference: CLI
+original_id: functions-cli
+---
+
+The following tables list the Pulsar Functions command-line tools, including their modes, commands, and parameters.
+
+## localrun
+
+Run a Pulsar Function locally, rather than deploying it to the Pulsar cluster.
+
+Name | Description | Default
+---|---|---
+auto-ack | Whether or not the framework acknowledges messages automatically. | true |
+broker-service-url | The URL for the Pulsar broker. | |
+classname | The class name of a Pulsar Function.| |
+client-auth-params | Client authentication parameter. | |
+client-auth-plugin | Client authentication plugin using which function-process can connect to broker. |  |
+CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| |
+custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
+custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
+dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions.  | |
+disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
+fqfn | The Fully Qualified Function Name (FQFN) for the function. |  |
+function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. |  |
+go | Path to the main Go executable binary for the function (if the function is written in Go). |  |
+hostname-verification-enabled | Enable hostname verification. | false
+inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
+jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. |  |
+instance-id-offset | Start the instanceIds from this offset. | 0
+log-topic | The topic to which the logs of a Pulsar Function are produced. |  |
+max-message-retries | How many times should we try to process a message before giving up. |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+output | The output topic of a Pulsar Function (If none is specified, no output is written). |  |
+output-serde-classname | The SerDe class to be used for messages output by the function. |  |
+parallelism | The parallelism factor of  a Pulsar Function (i.e. the number of function instances to run). |  |
+processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
+py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). |  |
+ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). |  |
+retain-ordering | Function consumes and processes messages in order. | |
+schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | <empty string>
+sliding-interval-count | The number of messages after which the window slides. |  |
+sliding-interval-duration-ms | The time duration after which the window slides. |  |
+subs-name | Pulsar source subscription name, if the user wants a specific subscription name for the input-topic consumer. |  |
+tenant | The tenant of a Pulsar Function. |  |
+timeout-ms | The message timeout in milliseconds. |  |
+tls-allow-insecure | Allow insecure tls connection. | false
+tls-trust-cert-path | tls trust cert file path. |  |
+topics-pattern | The topic pattern to consume from a list of topics under a namespace that matches the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add a SerDe class name for a pattern in --custom-serde-inputs (supported only in Java functions). |  |
+use-tls | Use tls connection. | false
+user-config | User-defined config key/values. |  |
+window-length-count | The number of messages per window. |  |
+window-length-duration-ms | The time duration of the window in milliseconds. | |
+
+
+## create
+
+Create and deploy a Pulsar Function in cluster mode.
+
+Name | Description | Default
+---|---|---
+auto-ack | Whether or not the framework acknowledges messages automatically. | true |
+classname | The class name of a Pulsar Function. |  |
+CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| |
+custom-runtime-options | A string that encodes options to customize the runtime; see the docs of the configured runtime for details. | |
+custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
+custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
+dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | |
+disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
+fqfn | The Fully Qualified Function Name (FQFN) for the function. |  |
+function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. |  |
+go | Path to the main Go executable binary for the function (if the function is written in Go). |  |
+inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
+jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. |  |
+log-topic | The topic to which the logs of a Pulsar Function are produced. |  |
+max-message-retries | How many times should we try to process a message before giving up. |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+output | The output topic of a Pulsar Function (If none is specified, no output is written). |  |
+output-serde-classname | The SerDe class to be used for messages output by the function. |  |
+parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). |  |
+processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
+py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). |  |
+ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). |  |
+retain-ordering | Function consumes and processes messages in order. |  |
+schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | <empty string>
+sliding-interval-count | The number of messages after which the window slides. |  |
+sliding-interval-duration-ms | The time duration after which the window slides. |  |
+subs-name | Pulsar source subscription name, if the user wants a specific subscription name for the input-topic consumer. |  |
+tenant | The tenant of a Pulsar Function. |  |
+timeout-ms | The message timeout in milliseconds. |  |
+topics-pattern | The topic pattern to consume from a list of topics under a namespace that matches the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add a SerDe class name for a pattern in --custom-serde-inputs (supported only in Java functions). |  |
+user-config | User-defined config key/values. |  |
+window-length-count | The number of messages per window. |  |
+window-length-duration-ms | The time duration of the window in milliseconds. | |
+
+## delete
+
+Delete a Pulsar Function that is running on a Pulsar cluster.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## update
+
+Update a Pulsar Function that has been deployed to a Pulsar cluster.
+
+Name | Description | Default
+---|---|---
+auto-ack | Whether or not the framework acknowledges messages automatically. | true |
+classname | The class name of a Pulsar Function. | |
+CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | |
+custom-runtime-options | A string that encodes options to customize the runtime; see the docs of the configured runtime for details. | |
+custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
+custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
+dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | |
+disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
+fqfn | The Fully Qualified Function Name (FQFN) for the function. |  |
+function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. |  |
+go | Path to the main Go executable binary for the function (if the function is written in Go). |  |
+inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
+jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. |  |
+log-topic | The topic to which the logs of a Pulsar Function are produced. |  |
+max-message-retries | How many times should we try to process a message before giving up. |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+output | The output topic of a Pulsar Function (If none is specified, no output is written). |  |
+output-serde-classname | The SerDe class to be used for messages output by the function. |  |
+parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). |  |
+processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
+py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). |  |
+ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). |  |
+retain-ordering | Function consumes and processes messages in order. |  |
+schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | <empty string>
+sliding-interval-count | The number of messages after which the window slides. |  |
+sliding-interval-duration-ms | The time duration after which the window slides. |  |
+subs-name | Pulsar source subscription name, if the user wants a specific subscription name for the input-topic consumer. |  |
+tenant | The tenant of a Pulsar Function. |  |
+timeout-ms | The message timeout in milliseconds. |  |
+topics-pattern | The topic pattern to consume from a list of topics under a namespace that matches the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add a SerDe class name for a pattern in --custom-serde-inputs (supported only in Java functions). |  |
+update-auth-data | Whether or not to update the auth data. | false
+user-config | User-defined config key/values. |  |
+window-length-count | The number of messages per window. |  |
+window-length-duration-ms | The time duration of the window in milliseconds. | |
+
+## get
+
+Fetch information about a Pulsar Function.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## restart
+
+Restart a function instance.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+instance-id | The function instanceId (restart all instances if instance-id is not provided). |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## stop
+
+Stop a function instance.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+instance-id | The function instanceId (stop all instances if instance-id is not provided). |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## start
+
+Start a stopped function instance.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+instance-id | The function instanceId (start all instances if instance-id is not provided). |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
diff --git a/site2/website/versioned_docs/version-2.6.0/functions-develop.md b/site2/website/versioned_docs/version-2.6.0/functions-develop.md
new file mode 100644
index 0000000..2b179ec
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/functions-develop.md
@@ -0,0 +1,984 @@
+---
+id: version-2.6.0-functions-develop
+title: Develop Pulsar Functions
+sidebar_label: How-to: Develop
+original_id: functions-develop
+---
+
+This tutorial walks you through how to develop Pulsar Functions.
+
+## Available APIs
+In Java and Python, you have two options for writing Pulsar Functions. In Go, you can use the Pulsar Functions SDK for Go.
+
+Interface | Description | Use cases
+:---------|:------------|:---------
+Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context).
+Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context).
+
+A language-native function has no external dependencies. The following example is a language-native function that adds an exclamation point to all incoming strings and publishes the resulting string to a topic.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+```Java
+import java.util.function.Function;
+
+public class JavaNativeExclamationFunction implements Function<String, String> {
+    @Override
+    public String apply(String input) {
+        return String.format("%s!", input);
+    }
+}
+```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java).
+
+<!--Python-->
+```python
+def process(input):
+    return "{}!".format(input)
+```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py).
+
+> Note
+> You can write Pulsar Functions in Python 2 or Python 3. However, Pulsar only looks for `python` as the interpreter.
+> 
+> If you're running Pulsar Functions on an Ubuntu system that only supports Python 3, the functions might fail to
+> start. In this case, you can create a symlink as shown below. Be aware that your system may break if
+> you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518).
+> 
+> ```bash
+> sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
+> ```
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+The following example uses the Pulsar Functions SDK.
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+```Java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+    @Override
+    public String process(String input, Context context) {
+        return String.format("%s!", input);
+    }
+}
+```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java).
+
+<!--Python-->
+```python
+from pulsar import Function
+
+class ExclamationFunction(Function):
+  def __init__(self):
+    pass
+
+  def process(self, input, context):
+    return input + '!'
+```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py).
+
+<!--Go-->
+```Go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/apache/pulsar/pulsar-function-go/pf"
+)
+
+func HandleRequest(ctx context.Context, in []byte) error{
+	fmt.Println(string(in) + "!")
+	return nil
+}
+
+func main() {
+	pf.Start(HandleRequest)
+}
+```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-function-go/examples/inputFunc.go#L20-L36).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## Schema registry
+Pulsar has a built-in schema registry and comes bundled with a variety of popular schema types (Avro, JSON, and Protobuf). Pulsar Functions can leverage the existing schema information from input topics to derive the input type. The schema registry applies to the output topic as well.
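+
+As an illustration, a typed Java function can rely on the registry instead of explicit SerDe configuration. The following sketch is hypothetical; because the input and output types are declared in the signature, Pulsar can consume the input topic with an `Integer` schema and write the result with a `String` schema:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class DoublingFunction implements Function<Integer, String> {
+    @Override
+    public String process(Integer input, Context context) {
+        // The Integer input was decoded using the input topic's schema
+        return String.valueOf(input * 2);
+    }
+}
+```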
+
+## SerDe
+SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions uses SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default:
+
+* `String`
+* `Double`
+* `Integer`
+* `Float`
+* `Long`
+* `Short`
+* `Byte`
+
+To customize Java types, you need to implement the following interface.
+
+```java
+public interface SerDe<T> {
+    T deserialize(byte[] input);
+    byte[] serialize(T input);
+}
+```
+
+<!--Python-->
+In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.
+
+You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions. 
+
+```bash
+$ bin/pulsar-admin functions create \
+  --tenant public \
+  --namespace default \
+  --name my_function \
+  --py my_function.py \
+  --classname my_function.MyFunction \
+  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
+  --output-serde-classname Serde3 \
+  --output output-topic-1
+```
+
+This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Functions logic, including the processing function and SerDe classes, must be contained within a single Python file.
+
+When using Pulsar Functions for Python, you have three SerDe options:
+
+1. You can use the [`IdentitySerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying a SerDe means that this option is used.
+2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
+3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for  [...]
+
+The table below shows when you should use each SerDe.
+
+SerDe option | When to use
+:------------|:-----------
+`IdentitySerDe` | When you work with simple types like strings, Booleans, and integers.
+`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`.
+Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes.
+
+<!--Go-->
+Currently, the feature is not available in Go.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### Example
+Imagine that you're writing Pulsar Functions to process tweet objects. You can refer to the following example of a `Tweet` class.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+
+```java
+public class Tweet {
+    private String username;
+    private String tweetContent;
+
+    public Tweet(String username, String tweetContent) {
+        this.username = username;
+        this.tweetContent = tweetContent;
+    }
+
+    // Standard setters and getters
+}
+```
+
+To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`.
+
+```java
+package com.example.serde;
+
+import org.apache.pulsar.functions.api.SerDe;
+
+import java.util.regex.Pattern;
+
+public class TweetSerde implements SerDe<Tweet> {
+    public Tweet deserialize(byte[] input) {
+        String s = new String(input);
+        String[] fields = s.split(Pattern.quote("|"));
+        return new Tweet(fields[0], fields[1]);
+    }
+
+    public byte[] serialize(Tweet input) {
+        return "%s|%s".format(input.getUsername(), input.getTweetContent()).getBytes();
+    }
+}
+```
+
+To apply this customized SerDe to a particular Pulsar Function, you need to:
+
+* Package the `Tweet` and `TweetSerde` classes into a JAR.
+* Specify a path to the JAR and SerDe class name when deploying the function.
+
+The following is an example of [`create`](reference-pulsar-admin.md#create-1) operation.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar /path/to/your.jar \
+  --output-serde-classname com.example.serde.TweetSerde \
+  # Other function attributes
+```
+
+> #### Custom SerDe classes must be packaged with your function JARs
+> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error.
+
+<!--Python-->
+
+```python
+class Tweet(object):
+    def __init__(self, username, tweet_content):
+        self.username = username
+        self.tweet_content = tweet_content
+```
+
+In order to use this class in Pulsar Functions, you have two options:
+
+1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe.
+2. You can create your own SerDe class. The following is an example.
+
+  ```python
+from pulsar import SerDe
+
+class TweetSerDe(SerDe):
+
+    def serialize(self, input):
+        return "{0}|{1}".format(input.username, input.tweet_content).encode('utf-8')
+
+    def deserialize(self, input_bytes):
+        tweet_components = input_bytes.decode('utf-8').split('|')
+        return Tweet(tweet_components[0], tweet_components[1])
+  ```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+In both languages, however, you can write custom SerDe logic for more complex, application-specific types.
+
+## Context
+The Java, Python, and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function, including:
+
+* The name and ID of a Pulsar Function.
+* The message ID of each message. Each Pulsar message is automatically assigned with an ID.
+* The key, event time, properties and partition key of each message.
+* The name of the topic to which the message is sent.
+* The names of all input topics as well as the output topic associated with the function.
+* The name of the class used for [SerDe](#serde).
+* The [tenant](reference-terminology.md#tenant) and namespace associated with the function.
+* The ID of the Pulsar Functions instance running the function.
+* The version of the function.
+* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages.
+* Access to arbitrary [user configuration](#user-config) values supplied via the CLI.
+* An interface for recording [metrics](#metrics).
+* An interface for storing and retrieving state in [state storage](#state-storage).
+* A function to publish new messages onto arbitrary topics.
+* A function to ack the message being processed (if auto-ack is disabled).
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows.
+
+```java
+public interface Context {
+    Record<?> getCurrentRecord();
+    Collection<String> getInputTopics();
+    String getOutputTopic();
+    String getOutputSchemaType();
+    String getTenant();
+    String getNamespace();
+    String getFunctionName();
+    String getFunctionId();
+    String getInstanceId();
+    String getFunctionVersion();
+    Logger getLogger();
+    void incrCounter(String key, long amount);
+    long getCounter(String key);
+    void putState(String key, ByteBuffer value);
+    void deleteState(String key);
+    ByteBuffer getState(String key);
+    Map<String, Object> getUserConfigMap();
+    Optional<Object> getUserConfigValue(String key);
+    Object getUserConfigValueOrDefault(String key, Object defaultValue);
+    void recordMetric(String metricName, double value);
+    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
+    <O> CompletableFuture<Void> publish(String topicName, O object);
+}
+```
+
+The following example uses several methods available via the `Context` object.
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+import java.util.stream.Collectors;
+
+public class ContextFunction implements Function<String, Void> {
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
+        String functionName = context.getFunctionName();
+
+        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
+                input,
+                inputTopics);
+
+        LOG.info(logMessage);
+
+        String metricName = String.format("function-%s-messages-received", functionName);
+        context.recordMetric(metricName, 1);
+
+        return null;
+    }
+}
+```
+
+<!--Python-->
+```python
+class ContextImpl(pulsar.Context):
+  def get_message_id(self):
+    ...
+  def get_message_key(self):
+    ...
+  def get_message_eventtime(self):
+    ...
+  def get_message_properties(self):
+    ...
+  def get_current_message_topic_name(self):
+    ...
+  def get_partition_key(self):
+    ...
+  def get_function_name(self):
+    ...
+  def get_function_tenant(self):
+    ...
+  def get_function_namespace(self):
+    ...
+  def get_function_id(self):
+    ...
+  def get_instance_id(self):
+    ...
+  def get_function_version(self):
+    ...
+  def get_logger(self):
+    ...
+  def get_user_config_value(self, key):
+    ...
+  def get_user_config_map(self):
+    ...
+  def record_metric(self, metric_name, metric_value):
+    ...
+  def get_input_topics(self):
+    ...
+  def get_output_topic(self):
+    ...
+  def get_output_serde_class_name(self):
+    ...
+  def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe",
+              properties=None, compression_type=None, callback=None, message_conf=None):
+    ...
+  def ack(self, msgid, topic):
+    ...
+  def get_and_reset_metrics(self):
+    ...
+  def reset_metrics(self):
+    ...
+  def get_metrics(self):
+    ...
+  def incr_counter(self, key, amount):
+    ...
+  def get_counter(self, key):
+    ...
+  def del_counter(self, key):
+    ...
+  def put_state(self, key, value):
+    ...
+  def get_state(self, key):
+    ...
+```
+
+<!--Go-->
+```go
+func (c *FunctionContext) GetInstanceID() int {
+	return c.instanceConf.instanceID
+}
+
+func (c *FunctionContext) GetInputTopics() []string {
+	return c.inputTopics
+}
+
+func (c *FunctionContext) GetOutputTopic() string {
+	return c.instanceConf.funcDetails.GetSink().Topic
+}
+
+func (c *FunctionContext) GetFuncTenant() string {
+	return c.instanceConf.funcDetails.Tenant
+}
+
+func (c *FunctionContext) GetFuncName() string {
+	return c.instanceConf.funcDetails.Name
+}
+
+func (c *FunctionContext) GetFuncNamespace() string {
+	return c.instanceConf.funcDetails.Namespace
+}
+
+func (c *FunctionContext) GetFuncID() string {
+	return c.instanceConf.funcID
+}
+
+func (c *FunctionContext) GetFuncVersion() string {
+	return c.instanceConf.funcVersion
+}
+
+func (c *FunctionContext) GetUserConfValue(key string) interface{} {
+	return c.userConfigs[key]
+}
+
+func (c *FunctionContext) GetUserConfMap() map[string]interface{} {
+	return c.userConfigs
+}
+```
+
+The following example uses several methods available via the `Context` object.
+
+```go
+import (
+    "context"
+    "fmt"
+
+    "github.com/apache/pulsar/pulsar-function-go/pf"
+)
+
+func contextFunc(ctx context.Context) {
+    if fc, ok := pf.FromContext(ctx); ok {
+        fmt.Printf("function ID is:%s, ", fc.GetFuncID())
+        fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
+    }
+}
+```
+
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-function-go/examples/contextFunc.go#L29-L34).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### User config
+When you run or update Pulsar Functions created using the SDK, you can pass arbitrary key/value pairs to them via the command line with the `--user-config` flag. Key/value pairs must be specified as JSON. The following function creation command passes a user-configured key/value pair to a function.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --name word-filter \
+  # Other function configs
+  --user-config '{"forbidden-word":"rosebud"}'
+```
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java--> 
+The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
+
+```bash
+$ bin/pulsar-admin functions create \
+  # Other function configs
+  --user-config '{"word-of-the-day":"verdure"}'
+```
+
+To access that value in a Java function:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+import java.util.Optional;
+
+public class UserConfigFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        Optional<String> wotd = context.getUserConfigValue("word-of-the-day").map(Object::toString);
+        if (wotd.isPresent()) {
+            LOG.info("The word of the day is {}", wotd.get());
+        } else {
+            LOG.warn("No word of the day provided");
+        }
+        return null;
+    }
+}
+```
+
+The `UserConfigFunction` function logs the string `"The word of the day is verdure"` every time the function is invoked (that is, every time a message arrives). The `word-of-the-day` user config changes only when the function is updated with a new config value via the command line.
+
+You can also access the entire user config map or set a default value in case no value is present:
+
+```java
+// Get the whole config map
+Map<String, Object> allConfigs = context.getUserConfigMap();
+
+// Get value or resort to default
+String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
+```
+
+> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To use a value of a different type, you need to deserialize it from the `String` value.
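+
+For example, a numeric setting still arrives as a `String` and must be converted explicitly. A minimal sketch (the `max-length` key is hypothetical):
+
+```java
+// "max-length" is a hypothetical user-config key; values always arrive as strings
+String raw = (String) context.getUserConfigValueOrDefault("max-length", "280");
+int maxLength = Integer.parseInt(raw);
+```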
+
+<!--Python-->
+In a Python function, you can access the configuration value as follows.
+
+```python
+from pulsar import Function
+
+class WordFilter(Function):
+    def process(self, context, input):
+        forbidden_word = context.user_config()["forbidden-word"]
+
+        # Don't publish the message if it contains the user-supplied
+        # forbidden word
+        if forbidden_word in input:
+            pass
+        # Otherwise publish the message
+        else:
+            return input
+```
+
+The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
+
+```bash
+$ bin/pulsar-admin functions create \
+  # Other function configs \
+  --user-config '{"word-of-the-day":"verdure"}'
+```
+
+To access that value in a Python function:
+
+```python
+from pulsar import Function
+
+class UserConfigFunction(Function):
+    def process(self, input, context):
+        logger = context.get_logger()
+        wotd = context.get_user_config_value('word-of-the-day')
+        if wotd is None:
+            logger.warn('No word of the day provided')
+        else:
+            logger.info("The word of the day is {0}".format(wotd))
+```
+<!--Go--> 
+Currently, the feature is not available in Go.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### Logger
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+Pulsar Functions that use the Java SDK have access to an [SLF4J](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/slf4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class LoggingFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        // The Context interface does not expose a message ID directly; use the current record
+        String messageId = context.getCurrentRecord().toString();
+
+        if (input.contains("danger")) {
+            LOG.warn("A warning was received in message {}", messageId);
+        } else {
+            LOG.info("Message {} received\nContent: {}", messageId, input);
+        }
+
+        return null;
+    }
+}
+```
+
+If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar my-functions.jar \
+  --classname my.package.LoggingFunction \
+  --log-topic persistent://public/default/logging-function-logs \
+  # Other function configs
+```
+
+All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic.
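+
+Since the log topic is an ordinary Pulsar topic, you can read it with any Pulsar client. The following is a minimal sketch using the Java client; the service URL and subscription name are placeholders.
+
+```java
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClient;
+
+public class LogTopicReader {
+    public static void main(String[] args) throws Exception {
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650") // placeholder broker address
+                .build();
+
+        Consumer<byte[]> consumer = client.newConsumer()
+                .topic("persistent://public/default/logging-function-logs")
+                .subscriptionName("log-reader") // placeholder subscription name
+                .subscribe();
+
+        while (true) {
+            Message<byte[]> msg = consumer.receive();
+            System.out.println(new String(msg.getData()));
+            consumer.acknowledge(msg);
+        }
+    }
+}
+```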
+
+<!--Python-->
+Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.
+
+```python
+from pulsar import Function
+
+class LoggingFunction(Function):
+    def process(self, input, context):
+        logger = context.get_logger()
+        msg_id = context.get_message_id()
+        if 'danger' in input:
+            logger.warn("A warning was received in message {0}".format(msg_id))
+        else:
+            logger.info("Message {0} received\nContent: {1}".format(msg_id, input))
+```
+
+If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --py logging_function.py \
+  --classname logging_function.LoggingFunction \
+  --log-topic logging-function-logs \
+  # Other function configs
+```
+
+All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic.
+
+<!--Go-->
+The following Go Function example shows different log levels based on the function input.
+
+```go
+import (
+    "context"
+
+    "github.com/apache/pulsar/pulsar-function-go/pf"
+
+    log "github.com/apache/pulsar/pulsar-function-go/logutil"
+)
+
+func loggerFunc(ctx context.Context, input []byte) {
+	if len(input) <= 100 {
+		log.Infof("This input has a length of: %d", len(input))
+	} else {
+		log.Warnf("This input is getting too long! It has {%d} characters", len(input))
+	}
+}
+
+func main() {
+	pf.Start(loggerFunc)
+}
+```
+
+To use the log-topic functionality in a Go function, import `github.com/apache/pulsar/pulsar-function-go/logutil`; you do not need to call `getLogger()` on the context object.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## Metrics
+Pulsar Functions can publish arbitrary metrics to the metrics interface, where they can be queried.
+
+> If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can record a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message.
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class MetricRecorderFunction implements Function<Integer, Void> {
+    @Override
+    public Void process(Integer input, Context context) {
+        // Records the metric 1 every time a message arrives
+        context.recordMetric("hit-count", 1);
+
+        // Records the metric only if the arriving number equals 11
+        if (input == 11) {
+            context.recordMetric("elevens-count", 1);
+        }
+
+        return null;
+    }
+}
+```
+
+> For instructions on reading and using metrics, see the [Monitoring](deploy-monitoring.md) guide.
+
+<!--Python-->
+You can record metrics using the [`Context`](#context) object on a per-key basis. For example, you can record a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message. The following is an example.
+
+```python
+from pulsar import Function
+
+class MetricRecorderFunction(Function):
+    def process(self, input, context):
+        context.record_metric('hit-count', 1)
+
+        if input == 11:
+            context.record_metric('elevens-count', 1)
+```
+<!--Go-->
+Currently, the feature is not available in Go.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### Access metrics
+To access metrics created by Pulsar Functions, refer to [Monitoring](deploy-monitoring.md) in Pulsar. 
+
+## Security
+
+If you want to enable security on Pulsar Functions, first you should enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings).
+
+Pulsar Functions can support the following providers:
+
+- ClearTextSecretsProvider
+- EnvironmentBasedSecretsProvider
+
+> Pulsar Functions support `ClearTextSecretsProvider` by default.
+
+At the same time, Pulsar Functions provide two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, that allow users to customize the secret provider.
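+
+As an illustration only, a custom provider that resolves secrets from environment variables might look roughly like the following sketch; it mirrors what the built-in `EnvironmentBasedSecretsProvider` does, and the exact method signatures should be checked against the `SecretsProvider` interface in your Pulsar version.
+
+```java
+import java.util.Map;
+
+import org.apache.pulsar.functions.secretsprovider.SecretsProvider;
+
+public class EnvSecretsProvider implements SecretsProvider {
+    @Override
+    public void init(Map<String, String> config) {
+        // No initialization needed for environment-variable lookups
+    }
+
+    @Override
+    public String provideSecret(String secretName, Object pathToSecret) {
+        // Resolve the secret from the function instance's environment
+        return System.getenv(secretName);
+    }
+}
+```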
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+You can get a secret using the [`Context`](#context) object. The following is an example:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class GetSecretProviderFunction implements Function<String, Void> {
+
+    @Override
+    public Void process(String input, Context context) throws Exception {
+        Logger LOG = context.getLogger();
+        String secretProvider = context.getSecret(input);
+
+        if (secretProvider != null && !secretProvider.isEmpty()) {
+            LOG.info("The secret provider is {}", secretProvider);
+        } else {
+            LOG.warn("No secret provider");
+        }
+
+        return null;
+    }
+}
+```
+
+<!--Python-->
+You can get a secret using the [`Context`](#context) object. The following is an example:
+
+```python
+from pulsar import Function
+
+class GetSecretProviderFunction(Function):
+    def process(self, input, context):
+        logger = context.get_logger()
+        secret_provider = context.get_secret(input)
+        if secret_provider is None:
+            logger.warn('No secret provider')
+        else:
+            logger.info("The secret provider is {0}".format(secret_provider))
+```
+
+
+<!--Go-->
+Currently, the feature is not available in Go.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## State storage
+Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. Every Pulsar installation, including the local standalone installation, includes a deployment of BookKeeper bookies.
+
+Since the Pulsar 2.1.0 release, Pulsar integrates with the Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` for functions. For example, a `WordCount` function can store its `counters` state in the BookKeeper table service via the Pulsar Functions State API.
+
+States are key-value pairs, where the key is a string and the value is arbitrary binary data; counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.
+
+You can access states within Pulsar Functions using the `putState`, `getState`, `incrCounter`, `getCounter` and `deleteState` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`.
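+
+For example, because counters are stored as 64-bit big-endian values, which matches the default byte order of a Java `ByteBuffer`, a counter written with `incrCounter` can in principle be read back through the raw state API. A sketch inside a Java function's `process()` method (assuming `java.nio.ByteBuffer` is imported; the key name is illustrative):
+
+```java
+// "hit-count" is an illustrative key shared by both the counter and raw state APIs
+context.incrCounter("hit-count", 1);
+
+// Counters are 64-bit big-endian values; ByteBuffer reads big-endian by default
+ByteBuffer raw = context.getState("hit-count");
+long value = (raw != null) ? raw.getLong() : 0L;
+```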
+
+> Note  
+> State storage is not available in Go.
+
+### API
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+Currently, Pulsar Functions expose the following APIs for mutating and accessing state. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions.
+
+#### incrCounter
+
+```java
+    /**
+     * Increment the built-in distributed counter referred to by key.
+     * @param key The name of the key
+     * @param amount The amount to be incremented
+     */
+    void incrCounter(String key, long amount);
+```
+
+Applications can use `incrCounter` to change the counter of a given `key` by the given `amount`.
+
+#### getCounter
+
+```java
+    /**
+     * Retrieve the counter value for the key.
+     *
+     * @param key name of the key
+     * @return the amount of the counter value for this key
+     */
+    long getCounter(String key);
+```
+
+Applications can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`.
+
+In addition to the `counter` API, Pulsar also exposes a general key/value API for functions to store general key/value state.
+
+#### putState
+
+```java
+    /**
+     * Update the state value for the key.
+     *
+     * @param key name of the key
+     * @param value state value of the key
+     */
+    void putState(String key, ByteBuffer value);
+```
+
+#### getState
+
+```java
+    /**
+     * Retrieve the state value for the key.
+     *
+     * @param key name of the key
+     * @return the state value for the key.
+     */
+    ByteBuffer getState(String key);
+```
+
+#### deleteState
+
+```java
+    /**
+     * Delete the state value for the key.
+     *
+     * @param key   name of the key
+     */
+    void deleteState(String key);
+```
+
+Counters and binary values share the same keyspace, so this deletes either type.
+
+<!--Python-->
+Currently, Pulsar Functions expose the following APIs for mutating and accessing state. These APIs are available in the [Context](#context) object when you are using Python SDK functions.
+
+#### incr_counter
+
+```python
+  def incr_counter(self, key, amount):
+    """incr the counter of a given key in the managed state"""
+```
+
+Applications can use `incr_counter` to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.
+
+#### get_counter
+
+```python
+  def get_counter(self, key):
+    """get the counter of a given key in the managed state"""
+```
+
+Applications can use `get_counter` to retrieve the counter of a given `key` mutated by `incr_counter`.
+
+In addition to the `counter` API, Pulsar also exposes a general key/value API for functions to store general key/value state.
+
+#### put_state
+
+```python
+  def put_state(self, key, value):
+    """update the value of a given key in the managed state"""
+```
+
+The key is a string, and the value is arbitrary binary data.
+
+#### get_state
+
+```python
+  def get_state(self, key):
+    """get the value of a given key in the managed state"""
+```
+
+#### del_counter
+
+```python
+  def del_counter(self, key):
+    """delete the counter of a given key in the managed state"""
+```
+
+Counters and binary values share the same keyspace, so this deletes either type.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### Query State
+
+A Pulsar Function can use the [State API](#api) to store state in Pulsar's state storage and retrieve it later. Additionally, Pulsar provides CLI commands for querying that state.
+
+```shell
+$ bin/pulsar-admin functions querystate \
+    --tenant <tenant> \
+    --namespace <namespace> \
+    --name <function-name> \
+    --state-storage-url <bookkeeper-service-url> \
+    --key <state-key> \
+    [--watch]
+```
+
+If `--watch` is specified, the CLI will watch the value of the provided `state-key`.
+
+### Example
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+
+{@inject: github:`WordCountFunction`:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a good example that demonstrates how applications can easily store `state` in Pulsar Functions.
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+import java.util.Arrays;
+
+public class WordCountFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) throws Exception {
+        Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1));
+        return null;
+    }
+}
+```
+
+The logic of this `WordCount` function is straightforward:
+
+1. The function first splits the received `String` into multiple words using the regex `\\.`.
+2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`).
+
+<!--Python-->
+
+```python
+from pulsar import Function
+
+class WordCount(Function):
+    def process(self, item, context):
+        for word in item.split():
+            context.incr_counter(word, 1)
+```
+
+The logic of this `WordCount` function is straightforward:
+
+1. The function first splits the received string into multiple words on spaces.
+2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
diff --git a/site2/website/versioned_docs/version-2.6.0/getting-started-clients.md b/site2/website/versioned_docs/version-2.6.0/getting-started-clients.md
new file mode 100644
index 0000000..c0cc7ac
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/getting-started-clients.md
@@ -0,0 +1,33 @@
+---
+id: version-2.6.0-client-libraries
+title: Pulsar client libraries
+sidebar_label: Use Pulsar with client libraries
+original_id: client-libraries
+---
+
+Pulsar supports the following client libraries:
+
+- [Java client](client-libraries-java.md)
+- [Go client](client-libraries-go.md)
+- [Python client](client-libraries-python.md)
+- [C++ client](client-libraries-cpp.md)
+- [Node.js client](client-libraries-node.md)
+- [WebSocket client](client-libraries-websocket.md)
+- [C# client](client-libraries-dotnet.md)
+
+## Feature matrix
+Pulsar client feature matrix for different languages is listed on [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
+
+## Third-party clients
+
+Besides the officially released clients, multiple projects for developing Pulsar clients are available in different languages.
+
+> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
+
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | |
+| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
+| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
+| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.6.0/getting-started-helm.md b/site2/website/versioned_docs/version-2.6.0/getting-started-helm.md
new file mode 100644
index 0000000..89a10af
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/getting-started-helm.md
@@ -0,0 +1,333 @@
+---
+id: version-2.6.0-kubernetes-helm
+title: Get started in Kubernetes
+sidebar_label: Run Pulsar in Kubernetes
+original_id: kubernetes-helm
+---
+
+This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes, including the following:
+
+- Install the Apache Pulsar on Kubernetes using Helm
+- Start and stop Apache Pulsar
+- Create topics using `pulsar-admin`
+- Produce and consume messages using Pulsar clients
+- Monitor Apache Pulsar status with Prometheus and Grafana
+
+For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy.md).
+
+## Prerequisites
+
+- Kubernetes server 1.14.0+
+- kubectl 1.14.0+
+- Helm 3.0+
+
+> #### Tip
+> For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**.
+
+## Step 0: Prepare a Kubernetes cluster
+
+Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare.md) to prepare a Kubernetes cluster.
+
+We use [Minikube](https://kubernetes.io/docs/getting-started-guides/minikube/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps:
+
+1. Create a Kubernetes cluster on Minikube.
+
+    ```bash
+    minikube start --memory=8192 --cpus=4 --kubernetes-version=<k8s-version>
+    ```
+
+    The `<k8s-version>` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`.
+
+2. Set `kubectl` to use Minikube.
+
+    ```bash
+    kubectl config use-context minikube
+    ```
+
+3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below:
+
+    ```bash
+    minikube dashboard
+    ```
+    The command automatically opens a webpage in your browser.
+
+## Step 1: Install Pulsar Helm chart
+
+1. Clone the Pulsar Helm chart repository.
+
+    ```bash
+    git clone https://github.com/apache/pulsar
+    cd pulsar/deployment/kubernetes/helm/
+    ```
+
+2. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager.
+
+    ```bash
+    ./scripts/pulsar/prepare_helm_release.sh \
+        -n pulsar \
+        -k pulsar-mini \
+        --control-center-admin pulsar \
+        --control-center-password pulsar \
+        -c
+    ```
+
+3. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes.
+
+    ```bash
+    helm install \
+        --values examples/values-minikube.yaml \
+        pulsar-mini pulsar
+    ```
+
+4. Check the status of all pods.
+
+    ```bash
+    kubectl get pods -n pulsar
+    ```
+
+    If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`.
+
+    **Output**
+
+    ```bash
+    NAME                                         READY   STATUS      RESTARTS   AGE
+    pulsar-mini-bookie-0                         1/1     Running     0          9m27s
+    pulsar-mini-bookie-init-5gphs                0/1     Completed   0          9m27s
+    pulsar-mini-broker-0                         1/1     Running     0          9m27s
+    pulsar-mini-grafana-6b7bcc64c7-4tkxd         1/1     Running     0          9m27s
+    pulsar-mini-prometheus-5fcf5dd84c-w8mgz      1/1     Running     0          9m27s
+    pulsar-mini-proxy-0                          1/1     Running     0          9m27s
+    pulsar-mini-pulsar-init-t7cqt                0/1     Completed   0          9m27s
+    pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs   1/1     Running     0          9m27s
+    pulsar-mini-toolset-0                        1/1     Running     0          9m27s
+    pulsar-mini-zookeeper-0                      1/1     Running     0          9m27s
+    ```
+
+5. Check the status of all services in the namespace `pulsar`.
+
+    ```bash
+    kubectl get services -n pulsar
+    ```
+
+    **Output**
+    
+    ```bash
+    NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
+    pulsar-mini-bookie           ClusterIP      None             <none>        3181/TCP,8000/TCP             11m
+    pulsar-mini-broker           ClusterIP      None             <none>        8080/TCP,6650/TCP             11m
+    pulsar-mini-grafana          LoadBalancer   10.106.141.246   <pending>     3000:31905/TCP                11m
+    pulsar-mini-prometheus       ClusterIP      None             <none>        9090/TCP                      11m
+    pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   11m
+    pulsar-mini-pulsar-manager   LoadBalancer   10.103.192.175   <pending>     9527:30190/TCP                11m
+    pulsar-mini-toolset          ClusterIP      None             <none>        <none>                        11m
+    pulsar-mini-zookeeper        ClusterIP      None             <none>        2888/TCP,3888/TCP,2181/TCP    11m
+    ```
+
+## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics
+
+`pulsar-admin` is the CLI (Command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics.
+
+1. Enter the `toolset` container.
+
+    ```bash
+    kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash
+    ```
+
+2. In the `toolset` container, create a tenant named `apache`.
+
+    ```bash
+    bin/pulsar-admin tenants create apache
+    ```
+
+    Then you can list the tenants to see if the tenant is created successfully.
+
+    ```bash
+    bin/pulsar-admin tenants list
+    ```
+
+    You should see output similar to the following, which shows that the tenant `apache` has been created successfully.
+
+    ```bash
+    "apache"
+    "public"
+    "pulsar"
+    ```
+
+3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`.
+
+    ```bash
+    bin/pulsar-admin namespaces create apache/pulsar
+    ```
+
+    Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully.
+
+    ```bash
+    bin/pulsar-admin namespaces list apache
+    ```
+
+    You should see output similar to the following, which shows that the namespace `apache/pulsar` has been created successfully.
+
+    ```bash
+    "apache/pulsar"
+    ```
+
+4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`.
+
+    ```bash
+    bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4
+    ```
+
+5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`.
+
+    ```bash
+    bin/pulsar-admin topics list-partitioned-topics apache/pulsar
+    ```
+
+    Then you can see all the partitioned topics in the namespace `apache/pulsar`.
+
+    ```bash
+    "persistent://apache/pulsar/test-topic"
+    ```
+
+## Step 3: Use Pulsar client to produce and consume messages
+
+You can use the Pulsar client to create producers and consumers to produce and consume messages.
+
+By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to get the IP address of the proxy service.
+
+```bash
+kubectl get services -n pulsar | grep pulsar-mini-proxy
+```
+
+You will see output similar to the following.
+
+```bash
+pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   28m
+```
+
+This output shows the node ports that the Pulsar cluster's binary port and HTTP port are exposed on. The port after `80:` is the HTTP port, while the port after `6650:` is the binary port.
+
+Then you can find the IP address of your Minikube server by running the following command.
+
+```bash
+minikube ip
+```
+
+At this point, you can get the service URLs to connect to your Pulsar client.
+
+```
+webServiceUrl=http://$(minikube ip):<exposed-http-port>/
+brokerServiceUrl=pulsar://$(minikube ip):<exposed-binary-port>/
+```
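+
+If you prefer a programmatic client to the CLI steps below, the following is a minimal Java sketch; the service URL is a placeholder built from `minikube ip` and the exposed binary port.
+
+```java
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+public class MinikubeProducer {
+    public static void main(String[] args) throws Exception {
+        // Placeholder: replace with $(minikube ip) and the exposed binary port
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://192.168.64.2:31816")
+                .build();
+
+        Producer<byte[]> producer = client.newProducer()
+                .topic("apache/pulsar/test-topic")
+                .create();
+
+        producer.send("hello apache pulsar".getBytes());
+
+        producer.close();
+        client.close();
+    }
+}
+```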
+
+Then you can proceed with the following steps:
+
+1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/en/download/).
+
+2. Decompress the tarball based on your download file.
+
+    ```bash
+    tar -xf <file-name>.tar.gz
+    ```
+
+3. Expose `PULSAR_HOME`.
+
+    (1) Enter the directory of the decompressed download file.
+
+    (2) Expose `PULSAR_HOME` as an environment variable.
+
+        ```bash
+        export PULSAR_HOME=$(pwd)
+        ```
+
+4. Configure the Pulsar client.
+
+    In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps.
+
+5. Create a subscription to consume messages from `apache/pulsar/test-topic`.
+
+    ```bash
+    bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0
+    ```
+
+6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic.
+
+    ```bash
+    bin/pulsar-client produce apache/pulsar/test-topic -m "---------hello apache pulsar-------" -n 10
+    ```
+
+7. Verify the results.
+
+    - From the producer side
+
+        **Output**
+        
+        The messages have been produced successfully.
+
+        ```bash
+        18:15:15.489 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced
+        ```
+
+    - From the consumer side
+
+        **Output**
+
+        At the same time, you can receive the messages as below.
+
+        ```bash
+        ----- got message -----
+        ---------hello apache pulsar-------
+        ----- got message -----
+        ---------hello apache pulsar-------
+        ----- got message -----
+        ---------hello apache pulsar-------
+        ----- got message -----
+        ---------hello apache pulsar-------
+        ----- got message -----
+        ---------hello apache pulsar-------
+        ----- got message -----
+        ---------hello apache pulsar-------
+        ----- got message -----
+        ---------hello apache pulsar-------
+        ----- got message -----
+        ---------hello apache pulsar-------
+        ----- got message -----
+        ---------hello apache pulsar-------
+        ----- got message -----
+        ---------hello apache pulsar-------
+        ```
+
+## Step 4: Use Pulsar Manager to manage the cluster
+
+[Pulsar Manager](administration-pulsar-manager.md) is a web-based GUI management tool for managing and monitoring Pulsar.
+
+1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command:
+
+    ```bash
+    minikube service -n pulsar pulsar-mini-pulsar-manager 
+    ```
+
+2. The Pulsar Manager UI opens in your browser. You can use the username `pulsar` and password `pulsar` to log in to Pulsar Manager.
+
+3. In Pulsar Manager UI, you can create an environment. 
+
+    - Click the `New Environment` button in the top-left corner.
+    - Type `pulsar-mini` in the `Environment Name` field in the popup window.
+    - Type `http://pulsar-mini-broker:8080` in the `Service URL` field in the popup window.
+    - Click the `Confirm` button in the popup window.
+
+4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces`, and `topics` using Pulsar Manager.
+
+## Step 5: Use Prometheus and Grafana to monitor cluster
+
+Grafana is an open-source visualization tool that can be used to visualize time series data in dashboards.
+
+1. By default, Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command:
+
+    ```bash
+    minikube service pulsar-mini-grafana -n pulsar
+    ```
+
+2. The Grafana UI opens in your browser. You can use the username `pulsar` and password `pulsar` to log in to the Grafana dashboard.
+
+3. You can view dashboards for different components of a Pulsar cluster.
diff --git a/site2/website/versioned_docs/version-2.6.0/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.6.0/getting-started-pulsar.md
new file mode 100644
index 0000000..549d020
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/getting-started-pulsar.md
@@ -0,0 +1,67 @@
+---
+id: version-2.6.0-pulsar-2.0
+title: Pulsar 2.0
+sidebar_label: Pulsar 2.0
+original_id: pulsar-2.0
+---
+
+Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more.
+
+## New features in Pulsar 2.0
+
+Feature | Description
+:-------|:-----------
+[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar
+
+## Major changes
+
+There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage.
+
+### Properties versus tenants
+
+Previously, Pulsar had a concept of properties. A property is essentially the same thing as a tenant, so the "property" terminology has been removed in version 2.0. The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used, but it is now considered deprecated and will be removed in a future release.
+
+### Topic names
+
+Prior to version 2.0, *all* Pulsar topics had the following form:
+
+```http
+{persistent|non-persistent}://property/cluster/namespace/topic
+```
+Several important changes have been made in Pulsar 2.0:
+
+* There is no longer a [cluster component](#no-cluster)
+* Properties have been [renamed to tenants](#tenants)
+* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
+* `/` is not allowed in topic names
+
+#### No cluster component
+
+The cluster component has been removed from topic names. Thus, all topic names now have the following form:
+
+```http
+{persistent|non-persistent}://tenant/namespace/topic
+```
+
+> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.
+
+
+#### Flexible topic naming
+
+All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component) but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:
+
+Topic aspect | Default
+:------------|:-------
+topic type | `persistent`
+tenant | `public`
+namespace | `default`
+
+The table below shows some example topic name translations that use implicit defaults:
+
+Input topic name | Translated topic name
+:----------------|:---------------------
+`my-topic` | `persistent://public/default/my-topic`
+`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`
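+
+For instance, the Java client accepts these short names and expands them using the defaults above; a minimal sketch (assuming a broker at `pulsar://localhost:6650`):
+
+```java
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+public class ShortTopicName {
+    public static void main(String[] args) throws Exception {
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650")
+                .build();
+
+        // "my-topic" resolves to persistent://public/default/my-topic
+        Producer<byte[]> producer = client.newProducer()
+                .topic("my-topic")
+                .create();
+
+        producer.send("hello".getBytes());
+        producer.close();
+        client.close();
+    }
+}
+```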
+
+> For [non-persistent topics](concepts-messaging.md#non-persistent-topics), you need to continue to specify the entire topic name, as the default-based rules for persistent topic names do not apply. Thus you cannot use a shorthand name like `non-persistent://my-topic`; you need to use `non-persistent://public/default/my-topic` instead.
+
diff --git a/site2/website/versioned_docs/version-2.6.0/helm-deploy.md b/site2/website/versioned_docs/version-2.6.0/helm-deploy.md
new file mode 100644
index 0000000..043b0ba
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/helm-deploy.md
@@ -0,0 +1,376 @@
+---
+id: version-2.6.0-helm-deploy
+title: Deploy Pulsar cluster using Helm
+sidebar_label: Deployment
+original_id: helm-deploy
+---
+
+Before running `helm install`, you need to decide how to run Pulsar.
+Options can be specified using Helm's `--set option.name=value` command line option.
+
+## Select configuration options
+
+In each section, collect the options that are combined to use with the `helm install` command.
+
+### Kubernetes namespace
+
+By default, the Pulsar Helm chart is installed to a namespace called `pulsar`.
+
+```yaml
+namespace: pulsar
+```
+
+To install the Pulsar Helm chart into a different Kubernetes namespace, you can include this option in the `helm install` command.
+
+```bash
+--set namespace=<different-k8s-namespace>
+```
+
+By default, the Pulsar Helm chart doesn't create the namespace.
+
+```yaml
+namespaceCreate: false
+```
+
+To use the Pulsar Helm chart to create the Kubernetes namespace automatically, you can include this option in the `helm install` command.
+
+```bash
+--set namespaceCreate=true
+```
+
+### Persistence
+
+By default, the Pulsar Helm chart creates Volume Claims with the expectation that a dynamic provisioner creates the underlying Persistent Volumes.
+
+```yaml
+volumes:
+  persistence: true
+  # configure the components to use local persistent volume
+  # the local provisioner should be installed prior to enable local persistent volume
+  local_storage: false
+```
+
+To use local persistent volumes as the persistent storage for Helm release, you can install the [local storage provisioner](#install-local-storage-provisioner) and include the following option in the `helm install` command. 
+
+```bash
+--set volumes.local_storage=true
+```
+
+> #### Note
+> 
+> Before installing a production instance of Pulsar, plan the storage settings carefully to avoid extra storage migration work, because after the initial installation you must edit Kubernetes objects manually if you want to change storage settings.
+
+The Pulsar Helm chart is designed for production use. To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable persistence by including this option in your `helm install` command.
+
+```bash
+--set volumes.persistence=false
+```
+
+### Affinity 
+
+By default, `anti-affinity` is enabled to ensure pods of the same component can run on different nodes.
+
+```yaml
+affinity:
+  anti_affinity: true
+```
+
+To use the Pulsar Helm chart in a development environment (such as Minikube), you can disable `anti-affinity` by including this option in your `helm install` command.
+
+```bash
+--set affinity.anti_affinity=false
+```
+
+### Components
+
+The Pulsar Helm chart is designed for production usage. It deploys a production-ready Pulsar cluster, including Pulsar core components and monitoring components.
+
+You can customize the components to be deployed by turning on/off individual components.
+
+```yaml
+## Components
+##
+## Control what components of Apache Pulsar to deploy for the cluster
+components:
+  # zookeeper
+  zookeeper: true
+  # bookkeeper
+  bookkeeper: true
+  # bookkeeper - autorecovery
+  autorecovery: true
+  # broker
+  broker: true
+  # functions
+  functions: true
+  # proxy
+  proxy: true
+  # toolset
+  toolset: true
+  # pulsar manager
+  pulsar_manager: true
+
+## Monitoring Components
+##
+## Control what components of the monitoring stack to deploy for the cluster
+monitoring:
+  # monitoring - prometheus
+  prometheus: true
+  # monitoring - grafana
+  grafana: true
+```
+
+### Docker images
+
+The Pulsar Helm chart is designed to enable controlled upgrades, so it supports independent image versions for components. You can customize the image for each individual component.
+
+```yaml
+## Images
+##
+## Control what images to use for each component
+images:
+  zookeeper:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+    pullPolicy: IfNotPresent
+  bookie:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+    pullPolicy: IfNotPresent
+  autorecovery:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+    pullPolicy: IfNotPresent
+  broker:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+    pullPolicy: IfNotPresent
+  proxy:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+    pullPolicy: IfNotPresent
+  functions:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+  prometheus:
+    repository: prom/prometheus
+    tag: v1.6.3
+    pullPolicy: IfNotPresent
+  grafana:
+    repository: streamnative/apache-pulsar-grafana-dashboard-k8s
+    tag: 0.0.4
+    pullPolicy: IfNotPresent
+  pulsar_manager:
+    repository: apachepulsar/pulsar-manager
+    tag: v0.1.0
+    pullPolicy: IfNotPresent
+    hasCommand: false
+```
+
+### TLS
+
+The Pulsar Helm chart can be configured to enable TLS (Transport Layer Security) to protect all the traffic between components. Before enabling TLS, you have to provision TLS certificates for the required components.
+
+#### Provision TLS certificates using cert-manager
+
+To use `cert-manager` to provision the TLS certificates, you have to install [cert-manager](#install-cert-manager) before installing the Pulsar Helm chart. After successfully installing cert-manager, you can set `certs.internal_issuer.enabled` to `true`. The Pulsar Helm chart then uses `cert-manager` to generate `selfsigning` TLS certificates for the configured components.
+
+```yaml
+certs:
+  internal_issuer:
+    enabled: false
+    component: internal-cert-issuer
+    type: selfsigning
+```
+You can also customize the generated TLS certificates by configuring the fields as follows.
+
+```yaml
+tls:
+  # common settings for generating certs
+  common:
+    # 90d
+    duration: 2160h
+    # 15d
+    renewBefore: 360h
+    organization:
+      - pulsar
+    keySize: 4096
+    keyAlgorithm: rsa
+    keyEncoding: pkcs8
+```
+
+#### Enable TLS
+
+After installing the `cert-manager`, you can set `tls.enabled` to `true` to enable TLS encryption for the entire cluster.
+
+```yaml
+tls:
+  enabled: false
+```
+
+You can also configure whether to enable TLS encryption for individual components.
+
+```yaml
+tls:
+  # settings for generating certs for proxy
+  proxy:
+    enabled: false
+    cert_name: tls-proxy
+  # settings for generating certs for broker
+  broker:
+    enabled: false
+    cert_name: tls-broker
+  # settings for generating certs for bookies
+  bookie:
+    enabled: false
+    cert_name: tls-bookie
+  # settings for generating certs for zookeeper
+  zookeeper:
+    enabled: false
+    cert_name: tls-zookeeper
+  # settings for generating certs for recovery
+  autorecovery:
+    cert_name: tls-recovery
+  # settings for generating certs for toolset
+  toolset:
+    cert_name: tls-toolset
+```
+
+### Authentication
+
+By default, authentication is disabled. You can set `auth.authentication.enabled` to `true` to enable authentication.
+Currently, the Pulsar Helm chart only supports the JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider.
+
+```yaml
+# Enable or disable broker authentication and authorization.
+auth:
+  authentication:
+    enabled: false
+    provider: "jwt"
+    jwt:
+      # Enable JWT authentication
+      # If the token is generated by a secret key, set the usingSecretKey as true.
+      # If the token is generated by a private key, set the usingSecretKey as false.
+      usingSecretKey: false
+  superUsers:
+    # broker to broker communication
+    broker: "broker-admin"
+    # proxy to broker communication
+    proxy: "proxy-admin"
+    # pulsar-admin client to broker/proxy communication
+    client: "admin"
+```
+
+To enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `<pulsar-release-name>-token-`. You can use the following command to find those secrets.
+
+```bash
+kubectl get secrets -n <k8s-namespace>
+```
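+
+Once authentication is enabled, clients must present a token for one of the configured roles. The following is a minimal Java client sketch; the service URL and token value are placeholders.
+
+```java
+import org.apache.pulsar.client.api.AuthenticationFactory;
+import org.apache.pulsar.client.api.PulsarClient;
+
+public class AuthenticatedClient {
+    public static void main(String[] args) throws Exception {
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://<proxy-address>:6650") // placeholder address
+                // Placeholder token, e.g. the generated "admin" role token
+                .authentication(AuthenticationFactory.token("<jwt-token>"))
+                .build();
+        // Create producers and consumers as usual ...
+        client.close();
+    }
+}
+```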
+
+### Authorization
+
+By default, authorization is disabled. Authorization can be enabled only when authentication is enabled.
+
+```yaml
+auth:
+  authorization:
+    enabled: false
+```
+
+To enable authorization, you can include this option in the `helm install` command.
+
+```bash
+--set auth.authorization.enabled=true
+```
+
+### CPU and RAM resource requirements
+
+By default, the resource requests and the number of replicas for the Pulsar components in the Pulsar Helm chart are adequate for a small production deployment. If you deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster.
+
+Once you have all of your configuration options collected, you can install dependent charts before installing the Pulsar Helm chart.
+
+## Install dependent charts
+
+### Install local storage provisioner
+
+To use local persistent volumes as the persistent storage, you need to install a storage provisioner for [local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/).
+
+One of the easiest ways to get started is to use the local storage provisioner provided along with the Pulsar Helm chart.
+
+```bash
+helm repo add streamnative https://charts.streamnative.io
+helm repo update
+helm install pulsar-storage-provisioner streamnative/local-storage-provisioner
+```
+
+### Install cert-manager
+
+The Pulsar Helm chart uses the [cert-manager](https://github.com/jetstack/cert-manager) to provision and manage TLS certificates automatically. To enable TLS encryption for brokers or proxies, you need to install the cert-manager in advance.
+
+For details about how to install the cert-manager, follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm).
+
+Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar/blob/master/deployment/kubernetes/helm/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to the namespace `cert-manager`.
+
+```bash
+git clone https://github.com/apache/pulsar
+cd pulsar/deployment/kubernetes/helm
+./scripts/cert-manager/install-cert-manager.sh
+```
+
+## Prepare Helm release
+
+Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar/blob/master/deployment/kubernetes/helm/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release.
+
+```bash
+git clone https://github.com/apache/pulsar
+cd pulsar/deployment/kubernetes/helm
+./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <helm-release-name>
+```
+
+The `prepare_helm_release.sh` script creates the following resources:
+
+- A Kubernetes namespace for installing the Pulsar release
+- A secret for storing the username and password of the control center administrator. The username and password can be passed to `prepare_helm_release.sh` through the `--control-center-admin` and `--control-center-password` flags. They are used for logging into the Grafana dashboard and Pulsar Manager.
+- JWT secret keys and tokens for three super users: `broker-admin`, `proxy-admin`, and `admin`. By default, it generates an asymmetric public/private key pair. You can choose to generate a symmetric secret key by specifying `--symmetric`.
+    - `proxy-admin` role is used for proxies to communicate to brokers.
+    - `broker-admin` role is used for inter-broker communications.
+    - `admin` role is used by the admin tools.
+
+## Deploy Pulsar cluster using Helm
+
+Once you have finished the following three things, you can install a Helm release.
+
+- Collect all of your configuration options.
+- Install dependent charts.
+- Prepare the Helm release.
+
+In this example, we name our Helm release `pulsar`.
+
+```bash
+git clone https://github.com/apache/pulsar
+cd pulsar/deployment/kubernetes/helm
+helm upgrade --install pulsar pulsar \
+    --timeout 10m \
+    --set [your configuration options]
+```
+
+You can also use the `--version <installation version>` option if you want to install a specific version of Pulsar Helm chart.
+
+## Monitor deployment
+
+A list of installed resources is output once the Pulsar cluster is deployed. This may take 5-10 minutes.
+
+You can check the status of the deployment by running the `helm status pulsar` command. You can also run this command in another terminal while the deployment is taking place.
+
+## Access Pulsar cluster
+
+The default values will create a `ClusterIP` for the following resources, which you can use to interact with the cluster.
+
+- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster.
+- Pulsar Manager: You can access the Pulsar Manager UI at `http://<pulsar-manager-ip>:9527`.
+- Grafana Dashboard: You can access the Grafana dashboard at `http://<grafana-dashboard-ip>:3000`.
+
+To find the IP addresses of those components, run the following command:
+
+```bash
+kubectl get service -n <k8s-namespace>
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/helm-install.md b/site2/website/versioned_docs/version-2.6.0/helm-install.md
new file mode 100644
index 0000000..61bd7b1
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/helm-install.md
@@ -0,0 +1,41 @@
+---
+id: version-2.6.0-helm-install
+title: Install Apache Pulsar using Helm
+sidebar_label: Install
+original_id: helm-install
+---
+
+Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.
+
+## Requirements
+
+To deploy Apache Pulsar on Kubernetes, the following are required.
+
+- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
+- Helm v3 (3.0.2 or higher)
+- A Kubernetes cluster, version 1.14 or higher
+
+## Environment setup
+
+Before deploying Pulsar, you need to prepare your environment.
+
+### Tools
+
+Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer.
+
+## Cloud cluster preparation
+
+> #### Note 
+> Kubernetes 1.14 or higher is required.
+
+To create and connect to the Kubernetes cluster, follow the instructions:
+
+- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)
+
+## Pulsar deployment
+
+Once the environment is set up and configuration is generated, you can now proceed to the [deployment of Pulsar](helm-deploy.md).
+
+## Pulsar upgrade
+
+To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md).
diff --git a/site2/website/versioned_docs/version-2.6.0/helm-overview.md b/site2/website/versioned_docs/version-2.6.0/helm-overview.md
new file mode 100644
index 0000000..9166041
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/helm-overview.md
@@ -0,0 +1,101 @@
+---
+id: version-2.6.0-helm-overview
+title: Apache Pulsar Helm Chart
+sidebar_label: Overview
+original_id: helm-overview
+---
+
+This is the officially supported Helm chart to install Apache Pulsar on a cloud-native environment. It is enhanced based on StreamNative's [Helm Chart](https://github.com/streamnative/charts).
+
+## Introduction
+
+The Apache Pulsar Helm chart is one of the most convenient ways to operate Pulsar on Kubernetes. This Pulsar Helm chart contains all the required components to get started and can scale to large deployments.
+
+This chart includes all the components for a complete experience, but each part can be configured to be installed separately.
+
+- Pulsar core components:
+    - ZooKeeper
+    - Bookies
+    - Brokers
+    - Function workers
+    - Proxies
+- Control Center:
+    - Pulsar Manager
+    - Prometheus
+    - Grafana
+    - Alert Manager
+
+It includes support for:
+
+- Security
+    - Automatically provisioned TLS certificates, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/)
+        - self-signed
+        - [Let's Encrypt](https://letsencrypt.org/)
+    - TLS Encryption
+        - Proxy
+        - Broker
+        - Toolset
+        - Bookie
+        - ZooKeeper
+    - Authentication
+        - JWT
+    - Authorization
+- Storage
+    - Non-persistent storage
+    - Persistent volumes
+    - Local persistent volumes
+- Functions
+    - Kubernetes Runtime
+    - Process Runtime
+    - Thread Runtime
+- Operations
+    - Independent image versions for all components, enabling controlled upgrades
+
+## Pulsar Helm chart quick start
+
+To get up and running with these charts as fast as possible, in a **non-production** use case, we provide a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
+
+This guide walks the user through deploying these charts with default values and features, but *does not* meet production-readiness requirements. To deploy these charts into production under sustained load, follow the complete [Installation Guide](helm-install.md).
+
+## Troubleshooting
+
+We have done our best to make these charts as seamless as possible. Occasionally, issues arise that are outside of our control. We have collected tips and tricks for troubleshooting common issues. Please check them first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add to them by raising a [Pull Request](https://github.com/apache/pulsar/compare).
+
+## Installation
+
+The Apache Pulsar Helm chart contains all required dependencies.
+
+If you deploy a PoC for testing, we strongly suggest you follow our [Quick Start Guide](getting-started-helm.md) for your first iteration.
+
+1. [Preparation](helm-prepare.md)
+2. [Deployment](helm-deploy.md)
+
+## Upgrading
+
+Once the Pulsar Helm chart is installed, use the `helm upgrade` command to apply configuration changes and chart updates.
+
+```bash
+git clone https://github.com/apache/pulsar
+cd pulsar/deployment/kubernetes/helm
+helm get values <pulsar-release-name> > pulsar.yaml
+helm upgrade <pulsar-release-name> pulsar -f pulsar.yaml
+```
+
+For more detailed information, see [Upgrading](helm-upgrade.md).
+
+## Uninstallation
+
+To uninstall the Pulsar Helm chart, run the following command:
+
+```bash
+helm delete <pulsar-release-name>
+```
+
+For the purposes of continuity, these charts have some Kubernetes objects that are not removed when performing `helm delete`.
+It is recommended to remove these items *consciously*, as they affect re-deployment.
+
+* PVCs for stateful data: remove these items only after considering the consequences.
+    - ZooKeeper: This is your metadata.
+    - BookKeeper: This is your data.
+    - Prometheus: This is your metrics data, which can be safely removed.
+* Secrets: if the secrets are generated by the [prepare release script](https://github.com/apache/pulsar/blob/master/deployment/kubernetes/helm/scripts/pulsar/prepare_helm_release.sh), they contain secret keys and tokens. You can use the [cleanup release script](https://github.com/apache/pulsar/blob/master/deployment/kubernetes/helm/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed.
\ No newline at end of file
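+
+To remove the leftover PVCs deliberately, you can list them and delete them with `kubectl` (a sketch; the namespace and PVC names depend on your release):
+
+```bash
+# inspect what is left behind, then delete each PVC you no longer need
+kubectl get pvc -n <k8s-namespace>
+kubectl delete pvc -n <k8s-namespace> <pvc-name>
+```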
diff --git a/site2/website/versioned_docs/version-2.6.0/helm-prepare.md b/site2/website/versioned_docs/version-2.6.0/helm-prepare.md
new file mode 100644
index 0000000..178f202
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/helm-prepare.md
@@ -0,0 +1,85 @@
+---
+id: version-2.6.0-helm-prepare
+title: Prepare Kubernetes resources
+sidebar_label: Prepare
+original_id: helm-prepare
+---
+
+For a fully functional Pulsar cluster, you need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart.
+
+- [Google Kubernetes Engine](#google-kubernetes-engine)
+  - [Manual cluster creation](#manual-cluster-creation)
+  - [Scripted cluster creation](#scripted-cluster-creation)
+    - [Create cluster with local SSDs](#create-cluster-with-local-ssds)
+- [Next Steps](#next-steps)
+
+## Google Kubernetes Engine
+
+To get started more easily, a script is provided to create the cluster automatically. Alternatively, a cluster can be created manually as well.
+
+### Manual cluster creation
+
+To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster).
+
+Alternatively, you can use the [instructions](#scripted-cluster-creation) below to provision a GKE cluster as needed.
+
+### Scripted cluster creation
+
+A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE.
+
+The script can:
+
+1. Create a new GKE cluster.
+2. Allow the cluster to modify DNS (Domain Name System) records.
+3. Set up `kubectl` and connect it to the cluster.
+
+Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work.
+
+The script reads various parameters from environment variables and takes an argument, `up` or `down`, for bootstrap and clean-up respectively.
+
+The following table describes all variables.
+
+| **Variable** | **Description** | **Default value** |
+| ------------ | --------------- | ----------------- |
+| PROJECT      | ID of your GCP project | No default value; it must be set. |
+| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` |
+| CONFDIR | Configuration directory to store Kubernetes configuration | `${HOME}/.config/streamnative` |
+| INT_NETWORK | IP space to use within this cluster | `default` |
+| LOCAL_SSD_COUNT | Number of local SSD counts | 4 |
+| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` |
+| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 |
+| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false |
+| REGION | Compute region for the cluster | `us-east1` |
+| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false |
+| ZONE | Compute zone for the cluster | `us-east1-b` |
+| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` |
+| EXTRA_CREATE_ARGS | Extra arguments passed to create command | |
+
+Run the script by passing in your desired parameters. It can work with the default parameters except for `PROJECT`, which is required:
+
+```bash
+PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh up
+```
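+
+You can combine any of the variables from the table above in a single invocation, for example (illustrative values):
+
+```bash
+PROJECT=<gcloud project id> CLUSTER_NAME=pulsar-dev REGION=us-east1 NUM_NODES=3 MACHINE_TYPE=n1-standard-8 scripts/pulsar/gke_bootstrap_script.sh up
+```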
+
+The script can also be used to clean up the created GKE resources.
+
+```bash
+PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh down
+```
+
+#### Create cluster with local SSDs
+
+To install a Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. To do so, set `USE_LOCAL_SSD` to `true` in the following command:
+
+```bash
+PROJECT=<gcloud project id> USE_LOCAL_SSD=true LOCAL_SSD_COUNT=<local-ssd-count> scripts/pulsar/gke_bootstrap_script.sh up
+```
+
+## Next Steps
+
+Continue with the [installation of the chart](helm-deploy.md) once you have the cluster up and running.
diff --git a/site2/website/versioned_docs/version-2.6.0/helm-tools.md b/site2/website/versioned_docs/version-2.6.0/helm-tools.md
new file mode 100644
index 0000000..5fd5bc0
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/helm-tools.md
@@ -0,0 +1,43 @@
+---
+id: version-2.6.0-helm-tools
+title: Required tools for deploying Pulsar Helm Chart
+sidebar_label: Required Tools
+original_id: helm-tools
+---
+
+Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally.
+
+## kubectl
+
+kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)).
+
+To install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).
+
+The server version of kubectl cannot be obtained until you connect to a cluster.
+
+## Helm
+
+Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3.
+
+### Get Helm
+
+You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/).
+
+### Next steps
+
+Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md).
+
+## Additional information
+
+### Templates
+
+Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig).
+
+For more information about how all the inner workings behave, check these documents:
+
+- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/)
+- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/)
+
+### Tips and tricks
+
+For additional information on developing with Helm, check the [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm documentation.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.6.0/helm-upgrade.md b/site2/website/versioned_docs/version-2.6.0/helm-upgrade.md
new file mode 100644
index 0000000..89f4483
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/helm-upgrade.md
@@ -0,0 +1,34 @@
+---
+id: version-2.6.0-helm-upgrade
+title: Upgrade Pulsar Helm release
+sidebar_label: Upgrade
+original_id: helm-upgrade
+---
+
+Before upgrading your Pulsar installation, check the change log for the specific release you want to upgrade to, and look for any release notes that might pertain to the new Pulsar Helm chart version.
+
+We also recommend that you provide all values using the `helm upgrade --set key=value` syntax or the `-f values.yml` flag instead of using `--reuse-values`, because some of the current values might be deprecated.
+
+> #### Note
+>
+> You can retrieve your previous `--set` arguments cleanly with `helm get values <release-name>`. If you direct this into a file (`helm get values <release-name> > pulsar.yaml`), you can safely pass this file through `-f`, as in `helm upgrade <release-name> pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`.
+
+## Steps
+
+To upgrade Apache Pulsar to a newer version, follow these steps:
+
+1. Check the change log for the specific version you would like to upgrade to.
+2. Go through [deployment documentation](helm-deploy.md) step by step.
+3. Extract your previous `--set` arguments with the following command.
+    ```bash
+    helm get values <release-name> > pulsar.yaml
+    ```
+4. Decide all the values you need to set.
+5. Perform the upgrade, with all `--set` arguments extracted in step 3.
+    ```bash
+    helm upgrade <release-name> pulsar \
+        --version <new version> \
+        -f pulsar.yaml \
+        --set ...
+    ```
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.6.0/io-connectors.md b/site2/website/versioned_docs/version-2.6.0/io-connectors.md
new file mode 100644
index 0000000..a1ef1b7
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/io-connectors.md
@@ -0,0 +1,226 @@
+---
+id: version-2.6.0-io-connectors
+title: Built-in connector
+sidebar_label: Built-in connector
+original_id: io-connectors
+---
+
+The Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar. These connectors import and export data from some of the most commonly used data systems.
+
+Using any of these connectors is as easy as writing a simple connector configuration and running the connector locally or submitting the connector to a Pulsar Functions cluster.
+
+## Source connector
+
+Pulsar has various source connectors, which are listed alphabetically below.
+
+### Canal
+
+* [Configuration](io-canal-source.md#configuration)
+
+* [Example](io-canal-source.md#usage)
+
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java)
+
+
+### Debezium MySQL
+
+* [Configuration](io-debezium-source.md#configuration)
+
+* [Example](io-debezium-source.md#example-of-mysql)
+ 
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)
+
+### Debezium PostgreSQL
+
+* [Configuration](io-debezium-source.md#configuration)
+
+* [Example](io-debezium-source.md#example-of-postgresql)
+
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/java/org/apache/pulsar/io/debezium/postgres/DebeziumPostgresSource.java)
+
+### Debezium MongoDB
+
+* [Configuration](io-debezium-source.md#configuration)
+
+* [Example](io-debezium-source.md#example-of-mongodb)
+
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/java/org/apache/pulsar/io/debezium/mongodb/DebeziumMongoDbSource.java)
+
+### DynamoDB
+
+* [Configuration](io-dynamodb-source.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/dynamodb/src/main/java/org/apache/pulsar/io/dynamodb/DynamoDBSource.java)
+
+### File
+
+* [Configuration](io-file-source.md#configuration)
+
+* [Example](io-file-source.md#usage)
+
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/file/src/main/java/org/apache/pulsar/io/file/FileSource.java)
+
+### Flume
+
+* [Configuration](io-flume-source.md#configuration)
+
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/FlumeConnector.java)
+
+### Twitter firehose
+
+* [Configuration](io-twitter-source.md#configuration)
+
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java)
+
+### Kafka
+
+* [Configuration](io-kafka-source.md#configuration)
+
+* [Example](io-kafka-source.md#usage)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java)
+
+### Kinesis
+
+* [Configuration](io-kinesis-source.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSource.java)
+  
+### Netty
+
+* [Configuration](io-netty-source.md#configuration)
+
+* [Example of TCP](io-netty-source.md#tcp)
+
+* [Example of HTTP](io-netty-source.md#http)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/netty/src/main/java/org/apache/pulsar/io/netty/NettySource.java)
+
+### RabbitMQ
+
+* [Configuration](io-rabbitmq-source.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java)
+  
+## Sink connector
+
+Pulsar has various sink connectors, which are listed alphabetically below.
+
+### Aerospike
+
+* [Configuration](io-aerospike-sink.md#configuration)
+
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java)
+  
+### Cassandra
+
+* [Configuration](io-cassandra-sink.md#configuration)
+  
+* [Example](io-cassandra-sink.md#usage)  
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java)
+
+### ElasticSearch
+
+* [Configuration](io-elasticsearch-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/elastic-search/src/main/java/org/apache/pulsar/io/elasticsearch/ElasticSearchSink.java)
+
+### Flume
+
+* [Configuration](io-flume-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/java/org/apache/pulsar/io/flume/sink/StringSink.java)
+
+### HBase
+
+* [Configuration](io-hbase-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hbase/src/main/java/org/apache/pulsar/io/hbase/HbaseAbstractConfig.java)
+  
+### HDFS2
+
+* [Configuration](io-hdfs2-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs2/src/main/java/org/apache/pulsar/io/hdfs2/AbstractHdfsConnector.java)
+
+### HDFS3
+
+* [Configuration](io-hdfs3-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/hdfs3/src/main/java/org/apache/pulsar/io/hdfs3/AbstractHdfsConnector.java)
+
+### InfluxDB
+
+* [Configuration](io-influxdb-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/influxdb/src/main/java/org/apache/pulsar/io/influxdb/InfluxDBGenericRecordSink.java)
+
+### JDBC ClickHouse
+
+* [Configuration](io-jdbc-sink.md#configuration)
+  
+* [Example](io-jdbc-sink.md#example-for-clickhouse)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/clickhouse/src/main/java/org/apache/pulsar/io/jdbc/ClickHouseJdbcAutoSchemaSink.java)
+
+### JDBC MariaDB
+
+* [Configuration](io-jdbc-sink.md#configuration)
+  
+* [Example](io-jdbc-sink.md#example-for-mariadb)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/mariadb/src/main/java/org/apache/pulsar/io/jdbc/MariadbJdbcAutoSchemaSink.java)
+
+### JDBC PostgreSQL
+
+* [Configuration](io-jdbc-sink.md#configuration)
+  
+* [Example](io-jdbc-sink.md#example-for-postgresql)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/postgres/src/main/java/org/apache/pulsar/io/jdbc/PostgresJdbcAutoSchemaSink.java)
+  
+### JDBC SQLite
+
+* [Configuration](io-jdbc-sink.md#configuration)
+  
+* [Example](io-jdbc-sink.md#example-for-sqlite)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/jdbc/sqlite/src/main/java/org/apache/pulsar/io/jdbc/SqliteJdbcAutoSchemaSink.java)
+  
+### Kafka
+
+* [Configuration](io-kafka-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java)
+  
+### Kinesis
+
+* [Configuration](io-kinesis-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java)
+  
+### MongoDB
+
+* [Configuration](io-mongo-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/mongo/src/main/java/org/apache/pulsar/io/mongodb/MongoSink.java)
+  
+### RabbitMQ
+
+* [Configuration](io-rabbitmq-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSink.java)
+  
+### Redis
+
+* [Configuration](io-redis-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/redis/src/main/java/org/apache/pulsar/io/redis/RedisAbstractConfig.java)
+  
+### Solr
+
+* [Configuration](io-solr-sink.md#configuration)
+  
+* [Java class](https://github.com/apache/pulsar/blob/master/pulsar-io/solr/src/main/java/org/apache/pulsar/io/solr/SolrSinkConfig.java)
+  
diff --git a/site2/website/versioned_docs/version-2.6.0/io-debezium-source.md b/site2/website/versioned_docs/version-2.6.0/io-debezium-source.md
new file mode 100644
index 0000000..4d552e8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/io-debezium-source.md
@@ -0,0 +1,496 @@
+---
+id: version-2.6.0-io-debezium-source
+title: Debezium source connector
+sidebar_label: Debezium source connector
+original_id: io-debezium-source
+---
+
+The Debezium source connector captures change events from MySQL, PostgreSQL, or MongoDB 
+and persists them to Pulsar topics.
+
+## Configuration 
+
+The configuration of the Debezium source connector has the following properties.
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `task.class` | true | null | The source task class implemented in Debezium. |
+| `database.hostname` | true | null | The address of a database server. |
+| `database.port` | true | null | The port number of a database server.|
+| `database.user` | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | null | The password for a database user that has the required privileges. |
+| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
+| `database.server.name` | true | null | The logical name of a database server/cluster. It forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used. |
+| `database.whitelist` | false | null | A list of all databases hosted by this server that are monitored by the connector.<br/><br/> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value.  |
+| `database.history` | true | null | The name of the database history class. |
+| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements. <br/><br/>**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | true | null | Pulsar cluster service URL. |
+| `offset.storage.topic` | true | null | The topic that records the last committed offsets that the connector successfully processes. |
+| `json-with-envelope` | false | false | Whether the consumed message contains the schema envelope. If `false` (default), the message consists of the payload only; if `true`, it consists of both schema and payload. This option is valid only for the `JsonConverter`. |
+
+### Converter Options
+
+1. `org.apache.kafka.connect.json.JsonConverter`
+
+    The `json-with-envelope` config is valid only for the `JsonConverter`. Its default value is `false`, in which case the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)` and the message consists of the payload only.
+
+    If `json-with-envelope` is set to `true`, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both schema and payload.
+
+2. `org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter`
+
+    If you select the `AvroConverter`, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
+
+### MongoDB Configuration
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list can contain a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
+| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
+| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
+
+
+
+## Example of MySQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration 
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+    ```json
+    {
+        "database.hostname": "localhost",
+        "database.port": "3306",
+        "database.user": "debezium",
+        "database.password": "dbz",
+        "database.server.id": "184054",
+        "database.server.name": "dbserver1",
+        "database.whitelist": "inventory",
+        "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
+        "database.history.pulsar.topic": "history-topic",
+        "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
+        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+        "pulsar.service.url": "pulsar://127.0.0.1:6650",
+        "offset.storage.topic": "offset-topic"
+    }
+    ```
+
+* YAML 
+
+    You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file.
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "debezium-mysql-source"
+    topicName: "debezium-mysql-topic"
+    archive: "connectors/pulsar-io-debezium-mysql-{{pulsar:version}}.nar"
+    parallelism: 1
+
+    configs:
+
+        ## config for mysql, docker image: debezium/example-mysql:0.8
+        database.hostname: "localhost"
+        database.port: "3306"
+        database.user: "debezium"
+        database.password: "dbz"
+        database.server.id: "184054"
+        database.server.name: "dbserver1"
+        database.whitelist: "inventory"
+        database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
+        database.history.pulsar.topic: "history-topic"
+        database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+        ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
+        key.converter: "org.apache.kafka.connect.json.JsonConverter"
+        value.converter: "org.apache.kafka.connect.json.JsonConverter"
+
+        ## PULSAR_SERVICE_URL_CONFIG
+        pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+        ## OFFSET_STORAGE_TOPIC_CONFIG
+        offset.storage.topic: "offset-topic"
+    ```
+
+### Usage
+
+This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.
+
+1. Start a MySQL server with a database from which Debezium can capture changes.
+
+    ```bash
+    $ docker run -it --rm \
+    --name mysql \
+    -p 3306:3306 \
+    -e MYSQL_ROOT_PASSWORD=debezium \
+    -e MYSQL_USER=mysqluser \
+    -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8
+    ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+    ```bash
+    $ bin/pulsar standalone
+    ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+     * Use the **JSON** configuration file as shown previously. 
+   
+        Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-{{pulsar:version}}.nar`.
+
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --archive connectors/pulsar-io-debezium-mysql-{{pulsar:version}}.nar \
+        --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
+        --tenant public \
+        --namespace default \
+        --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","v [...]
+        ```
+
+    * Use the **YAML** configuration file as shown previously.
+  
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --source-config-file debezium-mysql-source-config.yaml
+        ```
+
+4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
+
+    ```bash
+    $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+    ```
+
+5. Start a MySQL client in docker.
+
+    ```bash
+    $ docker run -it --rm \
+    --name mysqlterm \
+    --link mysql \
+    --rm mysql:5.7 sh \
+    -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+    ```
+
+6. A MySQL client prompt appears. 
+   
+   Use the following commands to change the data of the table _products_.
+
+    ```
+    mysql> use inventory;
+    mysql> show tables;
+    mysql> SELECT * FROM  products;
+    mysql> UPDATE products SET name='1111111111' WHERE id=101;
+    mysql> UPDATE products SET name='1111111111' WHERE id=107;
+    ```
+
+    In the terminal window subscribed to the topic, you can see that the data changes have been recorded in the _sub-products_ topic.
+
+## Example of PostgreSQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+    ```json
+    {
+        "database.hostname": "localhost",
+        "database.port": "5432",
+        "database.user": "postgres",
+        "database.password": "postgres",
+        "database.dbname": "postgres",
+        "database.server.name": "dbserver1",
+        "schema.whitelist": "inventory",
+        "pulsar.service.url": "pulsar://127.0.0.1:6650"
+    }
+    ```
+
+* YAML 
+
+    You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "debezium-postgres-source"
+    topicName: "debezium-postgres-topic"
+    archive: "connectors/pulsar-io-debezium-postgres-{{pulsar:version}}.nar"
+    parallelism: 1
+
+    configs:
+
+        ## config for pg, docker image: debezium/example-postgress:0.8
+        database.hostname: "localhost"
+        database.port: "5432"
+        database.user: "postgres"
+        database.password: "postgres"
+        database.dbname: "postgres"
+        database.server.name: "dbserver1"
+        schema.whitelist: "inventory"
+
+        ## PULSAR_SERVICE_URL_CONFIG
+        pulsar.service.url: "pulsar://127.0.0.1:6650"
+    ```
+
+### Usage
+
+This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
+
+
+1. Start a PostgreSQL server with a database from which Debezium can capture changes.
+
+    ```bash
+    $ docker pull debezium/example-postgres:0.8
+    $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432  debezium/example-postgres:0.8
+    ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+    ```bash
+    $ bin/pulsar standalone
+    ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+     
+     Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-{{pulsar:version}}.nar`.
+
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --archive connectors/pulsar-io-debezium-postgres-{{pulsar:version}}.nar \
+        --name debezium-postgres-source \
+        --destination-topic-name debezium-postgres-topic \
+        --tenant public \
+        --namespace default \
+        --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+        ```
+   
+   * Use the **YAML** configuration file as shown previously.
+      
+        ```bash
+        $ bin/pulsar-admin source localrun  \
+        --source-config-file debezium-postgres-source-config.yaml
+        ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+    ```bash
+    $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+    ```
+
+5. Start a PostgreSQL client in docker.
+   
+    ```bash
+    $ docker exec -it pulsar-postgresql /bin/bash
+    ```
+
+6. A PostgreSQL client prompt appears. 
+   
+   Use the following commands to change the data of the table _products_.
+
+    ```
+    psql -U postgres postgres
+    postgres=# \c postgres;
+    You are now connected to database "postgres" as user "postgres".
+    postgres=# SET search_path TO inventory;
+    SET
+    postgres=# select * from products;
+     id  |        name        |                       description                       | weight
+    -----+--------------------+---------------------------------------------------------+--------
+     102 | car battery        | 12V car battery                                         |    8.1
+     103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 |    0.8
+     104 | hammer             | 12oz carpenter's hammer                                 |   0.75
+     105 | hammer             | 14oz carpenter's hammer                                 |  0.875
+     106 | hammer             | 16oz carpenter's hammer                                 |      1
+     107 | rocks              | box of assorted rocks                                   |    5.3
+     108 | jacket             | water resistent black wind breaker                      |    0.1
+     109 | spare tire         | 24 inch spare tire                                      |   22.2
+     101 | 1111111111         | Small 2-wheel scooter                                   |   3.14
+    (9 rows)
+    
+    postgres=# UPDATE products SET name='1111111111' WHERE id=107;
+    UPDATE 1
+    ```
+
+    In the terminal window subscribed to the topic, you can receive the following messages.
+        
+    ```bash
+    ----- got message -----
+    {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products [...]
+    ```
+
+## Example of MongoDB
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+    ```json
+    {
+        "mongodb.hosts": "rs0/mongodb:27017",
+        "mongodb.name": "dbserver1",
+        "mongodb.user": "debezium",
+        "mongodb.password": "dbz",
+        "mongodb.task.id": "1",
+        "database.whitelist": "inventory",
+        "pulsar.service.url": "pulsar://127.0.0.1:6650"
+    }
+    ```
+
+* YAML 
+
+    You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "debezium-mongodb-source"
+    topicName: "debezium-mongodb-topic"
+    archive: "connectors/pulsar-io-debezium-mongodb-{{pulsar:version}}.nar"
+    parallelism: 1
+
+    configs:
+
+        ## config for mongodb, docker image: debezium/example-mongodb:0.10
+        mongodb.hosts: "rs0/mongodb:27017"
+        mongodb.name: "dbserver1"
+        mongodb.user: "debezium"
+        mongodb.password: "dbz"
+        mongodb.task.id: "1"
+        database.whitelist: "inventory"
+
+        ## PULSAR_SERVICE_URL_CONFIG
+        pulsar.service.url: "pulsar://127.0.0.1:6650"
+    ```
+
+### Usage
+
+This example shows how to change the data of a MongoDB collection using the Pulsar Debezium connector.
+
+
+1. Start a MongoDB server with a database from which Debezium can capture changes.
+
+    ```bash
+    $ docker pull debezium/example-mongodb:0.10
+    $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017  debezium/example-mongodb:0.10
+    ```
+     Use the following command to initialize the data.
+    
+     ``` bash
+     ./usr/local/bin/init-inventory.sh
+     ```
+     If the local host cannot access the container network, you can update the file `/etc/hosts` and add a rule `127.0.0.1 6f114527a95f`, where `6f114527a95f` is the container ID. You can get it with `docker ps -a`.
+
+
+2. Start a Pulsar service locally in standalone mode.
+
+    ```bash
+    $ bin/pulsar standalone
+    ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+     
+     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-{{pulsar:version}}.nar`.
+
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --archive connectors/pulsar-io-debezium-mongodb-{{pulsar:version}}.nar \
+        --name debezium-mongodb-source \
+        --destination-topic-name debezium-mongodb-topic \
+        --tenant public \
+        --namespace default \
+        --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+        ```
+   
+   * Use the **YAML** configuration file as shown previously.
+      
+        ```bash
+        $ bin/pulsar-admin source localrun  \
+        --source-config-file debezium-mongodb-source-config.yaml
+        ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ collection.
+
+    ```bash
+    $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+    ```
+
+5. Start a MongoDB client in docker.
+   
+    ```bash
+    $ docker exec -it pulsar-mongodb /bin/bash
+    ```
+
+6. A MongoDB client prompt appears. 
+   
+    ```bash
+    mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
+    db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
+    ```
+
+    In the terminal window subscribed to the topic, you can receive the following messages.
+        
+    ```bash
+    ----- got message -----
+    {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type [...]
+    ```
+   
+## FAQ
+ 
+### The Debezium PostgreSQL connector hangs when creating a snapshot
+
+```
+#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
+    java.lang.Thread.State: WAITING (parking)
+     at sun.misc.Unsafe.park(Native Method)
+     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
+     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
+     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
+     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
+     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
+     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
+     at io.debezium.connector.postgresql.PostgresConnectorTask$$Lambda$203/385424085.accept(Unknown Source)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$$Lambda$240/1347039967.accept(Unknown Source)
+     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$$Lambda$206/589332928.run(Unknown Source)
+     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
+     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
+     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
+     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
+     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
+     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
+     at java.lang.Thread.run(Thread.java:748)
+``` 
+
+If you encounter the above problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration to the configuration file:
+
+```
+max.queue.size=
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.6.0/io-dynamodb-source.md
new file mode 100644
index 0000000..7906ef1
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/io-dynamodb-source.md
@@ -0,0 +1,76 @@
+---
+id: version-2.6.0-io-dynamodb-source
+title: AWS DynamoDB source connector
+sidebar_label: AWS DynamoDB source connector
+original_id: io-dynamodb-source
+---
+
+The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
+
+This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
+which uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
+consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.
+
+
+## Configuration
+
+The configuration of the DynamoDB source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br/><br/>Below are the available options:<br/><br/><li>`AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br/><br/><li>`LATEST`: start after the most recent data record.<br/><br/><li>`TRIM_HORIZON`: start from the oldest available data record.
+`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time to start consumption.
+`applicationName`|String|false|Pulsar IO connector|The name of the KCL application. It must be unique, as it is used to define the name of the DynamoDB table used for state tracking. <br/><br/>By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, to distinguish requests made by separate connector instances.
+`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds.
+`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds.
+`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
+`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. <br/><br/>Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
+`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br/><br/>**Example**<br/> us-west-1, us-west-2
+`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream ARN.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:`AwsCredentialProviderPlugin`:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br><br>`awsCredentialProviderPlugin` has the following built-in plugins:<br><br><li>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br> this plugin uses the default AWS provider chain.<br>For more information, see [using the default c [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Example
+
+Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+    ```json
+    {
+        "awsEndpoint": "https://some.endpoint.aws",
+        "awsRegion": "us-east-1",
+        "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
+        "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+        "applicationName": "My test application",
+        "checkpointInterval": "30000",
+        "backoffTime": "4000",
+        "numRetries": "3",
+        "receiveQueueSize": 2000,
+        "initialPositionInStream": "TRIM_HORIZON",
+        "startAtTime": "2019-03-05T19:28:58.000Z"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        awsEndpoint: "https://some.endpoint.aws"
+        awsRegion: "us-east-1"
+        awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
+        awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+        applicationName: "My test application"
+        checkpointInterval: 30000
+        backoffTime: 4000
+        numRetries: 3
+        receiveQueueSize: 2000
+        initialPositionInStream: "TRIM_HORIZON"
+        startAtTime: "2019-03-05T19:28:58.000Z"
+    ```
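+
+With a configuration file in place, you can test the connector locally (a sketch: it assumes the YAML above is saved as `dynamodb-source-config.yaml`, the corresponding nar file is available under `connectors/`, and the source name and destination topic are illustrative):
+
+```bash
+$ bin/pulsar-admin source localrun \
+--archive connectors/pulsar-io-dynamodb-{{pulsar:version}}.nar \
+--tenant public \
+--namespace default \
+--name dynamodb-source \
+--destination-topic-name dynamodb-topic \
+--source-config-file dynamodb-source-config.yaml
+```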
+
diff --git a/site2/website/versioned_docs/version-2.6.0/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.6.0/io-jdbc-sink.md
new file mode 100644
index 0000000..4221b42
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/io-jdbc-sink.md
@@ -0,0 +1,140 @@
+---
+id: version-2.6.0-io-jdbc-sink
+title: JDBC sink connector
+sidebar_label: JDBC sink connector
+original_id: io-jdbc-sink
+---
+
+The JDBC sink connectors pull messages from Pulsar topics 
+and persist the messages to ClickHouse, MariaDB, PostgreSQL, or SQLite.
+
+> Currently, INSERT, DELETE and UPDATE operations are supported.
+
+## Configuration 
+
+The configuration of all JDBC sink connectors has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.<br><br>**Note: `userName` is case-sensitive.**|
+| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`. <br><br>**Note: `password` is case-sensitive.**|
+| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. |
+| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. |
+| `nonKey` | String|false | " " (empty string) | A comma-separated list of the fields used in updating events.  |
+| `key` | String|false | " " (empty string) | A comma-separated list of the fields used in the `where` condition of updating and deleting events. |
+| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. |
+| `batchSize` | int|false | 200 | The batch size of updates made to the database. |
+
+### Example for ClickHouse
+
+* JSON 
+
+    ```json
+    {
+        "userName": "clickhouse",
+        "password": "password",
+        "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink",
+        "tableName": "pulsar_clickhouse_jdbc_sink"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "jdbc-clickhouse-sink"
+    topicName: "persistent://public/default/jdbc-clickhouse-topic"
+    sinkType: "jdbc-clickhouse"    
+    configs:
+        userName: "clickhouse"
+        password: "password"
+        jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink"
+        tableName: "pulsar_clickhouse_jdbc_sink"
+    ```
+
+### Example for MariaDB
+
+* JSON 
+
+    ```json
+    {
+        "userName": "mariadb",
+        "password": "password",
+        "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink",
+        "tableName": "pulsar_mariadb_jdbc_sink"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "jdbc-mariadb-sink"
+    topicName: "persistent://public/default/jdbc-mariadb-topic"
+    sinkType: "jdbc-mariadb"    
+    configs:
+        userName: "mariadb"
+        password: "password"
+        jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink"
+        tableName: "pulsar_mariadb_jdbc_sink"
+    ```
+
+### Example for PostgreSQL
+
+Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+    ```json
+    {
+        "userName": "postgres",
+        "password": "password",
+        "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
+        "tableName": "pulsar_postgres_jdbc_sink"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "jdbc-postgres-sink"
+    topicName: "persistent://public/default/jdbc-postgres-topic"
+    sinkType: "jdbc-postgres"    
+    configs:
+        userName: "postgres"
+        password: "password"
+        jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink"
+        tableName: "pulsar_postgres_jdbc_sink"
+    ```
+
+For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql).
+
+### Example for SQLite
+
+* JSON 
+
+    ```json
+    {
+        "jdbcUrl": "jdbc:sqlite:db.sqlite",
+        "tableName": "pulsar_sqlite_jdbc_sink"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "jdbc-sqlite-sink"
+    topicName: "persistent://public/default/jdbc-sqlite-topic"
+    sinkType: "jdbc-sqlite"    
+    configs:
+        jdbcUrl: "jdbc:sqlite:db.sqlite"
+        tableName: "pulsar_sqlite_jdbc_sink"
+    ```
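+
+With a configuration file in place, you can create the sink with `pulsar-admin` (a sketch: it assumes the SQLite YAML above is saved as `jdbc-sqlite-sink-config.yaml` and the corresponding nar file is available under `connectors/`; the tenant, namespace, name, and topic are read from the file):
+
+```bash
+$ bin/pulsar-admin sinks create \
+--archive connectors/pulsar-io-jdbc-sqlite-{{pulsar:version}}.nar \
+--sink-config-file jdbc-sqlite-sink-config.yaml
+```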
diff --git a/site2/website/versioned_docs/version-2.6.0/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.6.0/io-kinesis-sink.md
new file mode 100644
index 0000000..46f85e4
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/io-kinesis-sink.md
@@ -0,0 +1,73 @@
+---
+id: version-2.6.0-io-kinesis-sink
+title: Kinesis sink connector
+sidebar_label: Kinesis sink connector
+original_id: io-kinesis-sink
+---
+
+The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis.
+
+## Configuration
+
+The configuration of the Kinesis sink connector has the following property.
+
+### Property
+
+| Name | Type|Required | Default | Description
+|------|----------|----------|---------|-------------|
+`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.<br/><br/>Below are the available options:<br/><br/><li>`ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream. <br/><br/><li>`FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties and encryptionCtx, and publishes JSON payload into  [...]
+`retainOrdering`|boolean|false|false|Whether the Pulsar connector retains ordering when moving messages from Pulsar to Kinesis.
+`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br/><br/>**Example**<br/> us-west-1, us-west-2
+`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:`AwsCredentialProviderPlugin`:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}. <br/><br/>It is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink. <br/><br/>If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPlugi [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Built-in plugins
+
+The following are built-in `AwsCredentialProviderPlugin` plugins:
+
+* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin`
+  
+    This plugin takes no configuration; it uses the default AWS provider chain. 
+    
+    For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).
+
+* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin`
+  
+    This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL.
+
+    This configuration takes the form of a small JSON document, for example:
+
+    ```json
+    {"roleArn": "arn...", "roleSessionName": "name"}
+    ```
+
+### Example
+
+Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+    ```json
+    {
+        "awsEndpoint": "https://some.endpoint.aws",
+        "awsRegion": "us-east-1",
+        "awsKinesisStreamName": "my-stream",
+        "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+        "messageFormat": "ONLY_RAW_PAYLOAD",
+        "retainOrdering": "true"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        awsEndpoint: "https://some.endpoint.aws"
+        awsRegion: "us-east-1"
+        awsKinesisStreamName: "my-stream"
+        awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+        messageFormat: "ONLY_RAW_PAYLOAD"
+        retainOrdering: "true"
+    ```
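+
+Optionally, you can create the same sink programmatically. Below is a minimal, illustrative sketch using the Java admin API; the admin service URL, sink name, and input topic are assumptions, and it presumes the builtin Kinesis sink is installed.
+
+```java
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.pulsar.client.admin.PulsarAdmin;
+import org.apache.pulsar.common.io.SinkConfig;
+
+public class CreateKinesisSink {
+    public static void main(String[] args) throws Exception {
+        try (PulsarAdmin admin = PulsarAdmin.builder()
+                .serviceHttpUrl("http://localhost:8080") // illustrative admin URL
+                .build()) {
+            Map<String, Object> configs = new HashMap<>();
+            configs.put("awsEndpoint", "https://some.endpoint.aws");
+            configs.put("awsRegion", "us-east-1");
+            configs.put("awsKinesisStreamName", "my-stream");
+            configs.put("awsCredentialPluginParam",
+                    "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}");
+            configs.put("messageFormat", "ONLY_RAW_PAYLOAD");
+            configs.put("retainOrdering", true);
+
+            SinkConfig sinkConfig = SinkConfig.builder()
+                    .tenant("public")
+                    .namespace("default")
+                    .name("kinesis-test-sink")                                // illustrative name
+                    .archive("builtin://kinesis")                             // assumes the builtin sink is available
+                    .inputs(Collections.singletonList("kinesis-input-topic")) // illustrative topic
+                    .configs(configs)
+                    .build();
+
+            // Passing null as the file name assumes the archive is resolved from
+            // the builtin connectors directory rather than uploaded.
+            admin.sinks().createSink(sinkConfig, null);
+        }
+    }
+}
+```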
diff --git a/site2/website/versioned_docs/version-2.6.0/io-kinesis-source.md b/site2/website/versioned_docs/version-2.6.0/io-kinesis-source.md
new file mode 100644
index 0000000..8aa24dc
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/io-kinesis-source.md
@@ -0,0 +1,77 @@
+---
+id: version-2.6.0-io-kinesis-source
+title: Kinesis source connector
+sidebar_label: Kinesis source connector
+original_id: io-kinesis-source
+---
+
+The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar.
+
+This connector uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to consume messages. The KCL uses DynamoDB to track state for consumers.
+
+> Note: currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent downstream as-is. This connector will support decrypting messages in a future release.
+
+
+## Configuration
+
+The configuration of the Kinesis source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br/><br/>Below are the available options:<br/><br/><li>`AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br/><br/><li>`LATEST`: start after the most recent data record.<br/><br/><li>`TRIM_HORIZON`: start from the oldest available data record.
+`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time to start consumption.
+`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application. <br/><br/>By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
+`checkpointInterval`|long|false|60000|The frequency of Kinesis stream checkpointing, in milliseconds.
+`backoffTime`|long|false|3000|The amount of time to delay, in milliseconds, between requests when the connector encounters a throttling exception from AWS Kinesis.
+`numRetries`|int|false|3|The number of retries when the connector encounters an exception while trying to set a checkpoint.
+`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. <br/><br/>Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
+`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB endpoint URL, which you can find [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch endpoint URL, which you can find [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.<br><br>If set to false, it uses polling.
+`awsEndpoint`|String|false|" " (empty string)|The Kinesis endpoint URL, which you can find [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br/><br/>**Example**<br/> us-west-1, us-west-2
+`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:`AwsCredentialProviderPlugin`:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br><br>`awsCredentialProviderPlugin` has the following built-in plugins:<br><br><li>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br> this plugin uses the default AWS provider chain.<br>For more information, see [using the default c [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Example
+
+Before using the Kinesis source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+    ```json
+    {
+        "awsEndpoint": "https://some.endpoint.aws",
+        "awsRegion": "us-east-1",
+        "awsKinesisStreamName": "my-stream",
+        "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+        "applicationName": "My test application",
+        "checkpointInterval": "30000",
+        "backoffTime": "4000",
+        "numRetries": "3",
+        "receiveQueueSize": 2000,
+        "initialPositionInStream": "TRIM_HORIZON",
+        "startAtTime": "2019-03-05T19:28:58.000Z"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        awsEndpoint: "https://some.endpoint.aws"
+        awsRegion: "us-east-1"
+        awsKinesisStreamName: "my-stream"
+        awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+        applicationName: "My test application"
+        checkpointInterval: 30000
+        backoffTime: 4000
+        numRetries: 3
+        receiveQueueSize: 2000
+        initialPositionInStream: "TRIM_HORIZON"
+        startAtTime: "2019-03-05T19:28:58.000Z"
+    ```
+
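+Once the source connector is running, its records are published to the configured Pulsar topic. The following is a minimal, illustrative Java consumer for reading them; the broker URL, topic, and subscription names are assumptions.
+
+```java
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClient;
+
+public class ReadKinesisRecords {
+    public static void main(String[] args) throws Exception {
+        try (PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650") // illustrative broker URL
+                .build()) {
+            Consumer<byte[]> consumer = client.newConsumer()
+                    .topic("kinesis-topic")            // illustrative output topic
+                    .subscriptionName("kinesis-reader")
+                    .subscribe();
+            Message<byte[]> msg = consumer.receive();
+            // The source currently delivers raw (possibly KMS-encrypted) payloads.
+            System.out.println(new String(msg.getData()));
+            consumer.acknowledge(msg);
+        }
+    }
+}
+```
+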
diff --git a/site2/website/versioned_docs/version-2.6.0/io-quickstart.md b/site2/website/versioned_docs/version-2.6.0/io-quickstart.md
new file mode 100644
index 0000000..59c84ed
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/io-quickstart.md
@@ -0,0 +1,813 @@
+---
+id: version-2.6.0-io-quickstart
+title: How to connect Pulsar to database
+sidebar_label: Get started
+original_id: io-quickstart
+---
+
+This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code.  
+
+It is helpful to review the [concepts](io-overview.md) for Pulsar I/O while running the steps in this guide to gain a deeper understanding.   
+
+At the end of this tutorial, you will be able to:
+
+- [Connect Pulsar to Cassandra](#Connect-Pulsar-to-Cassandra)
+  
+- [Connect Pulsar to PostgreSQL](#Connect-Pulsar-to-PostgreSQL)
+
+> #### Tip
+>
+> * These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all
+> the commands used in this tutorial can be used in a multi-node Pulsar cluster without any changes.
+>
+> * All the commands in this tutorial are assumed to be run from the root directory of a Pulsar binary distribution.
+
+## Install Pulsar and built-in connector
+
+Before connecting Pulsar to a database, you need to install Pulsar and the desired built-in connector.
+
+For more information about **how to install a standalone Pulsar and built-in connectors**, see [here](getting-started-standalone.md/#installing-pulsar).
+
+## Start Pulsar standalone 
+
+1. Start Pulsar locally.
+
+    ```bash
+    bin/pulsar standalone
+    ```
+
+    All the components of a Pulsar service start in order. 
+    
+    You can curl the Pulsar service endpoints to make sure the Pulsar service is up and running correctly.
+
+2. Check Pulsar binary protocol port.
+
+    ```bash
+    telnet localhost 6650
+    ```
+
+3. Check Pulsar Function cluster.
+
+    ```bash
+    curl -s http://localhost:8080/admin/v2/worker/cluster
+    ```
+
+    **Example output**
+    ```json
+    [{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}]
+    ```
+
+4. Make sure a public tenant and a default namespace exist.
+
+    ```bash
+    curl -s http://localhost:8080/admin/v2/namespaces/public
+    ```
+
+    **Example output**
+    ```json
+    ["public/default","public/functions"]
+    ```
+
+5. Make sure all built-in connectors are listed as available.
+
+    ```bash
+    curl -s http://localhost:8080/admin/v2/functions/connectors
+    ```
+
+    **Example output**
+
+    ```json
+    [{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink conn [...]
+    ```
+
+    If an error occurs when starting the Pulsar service, you may see an exception in the terminal running `pulsar standalone`,
+    or you can navigate to the `logs` directory under the Pulsar directory to view the logs.
+
+## Connect Pulsar to Cassandra
+
+This section demonstrates how to connect Pulsar to Cassandra.
+
+> #### Tip
+> 
+> * Make sure you have Docker installed. If you do not, see [install Docker](https://docs.docker.com/docker-for-mac/install/).
+> 
+> * The Cassandra sink connector reads messages from Pulsar topics and writes the messages into Cassandra tables. For more information, see [Cassandra sink connector](io-cassandra-sink.md).
+
+### Set up a Cassandra cluster
+
+This example uses the `cassandra` Docker image to start a single-node Cassandra cluster in Docker.
+
+1. Start a Cassandra cluster.
+
+    ```bash
+    docker run -d --rm --name=cassandra -p 9042:9042 cassandra
+    ```
+
+    > **Note**
+    > 
+    > Before moving to the next steps, make sure the Cassandra cluster is running.
+
+2. Make sure the Docker process is running.
+
+    ```bash
+    docker ps
+    ```
+
+3. Check the Cassandra logs to make sure the Cassandra process is running as expected.
+
+    ```bash
+    docker logs cassandra
+    ```
+
+4. Check the status of the Cassandra cluster.
+
+    ```bash
+    docker exec cassandra nodetool status
+    ```
+
+    **Example output**
+
+    ```
+    Datacenter: datacenter1
+    =======================
+    Status=Up/Down
+    |/ State=Normal/Leaving/Joining/Moving
+    --  Address     Load       Tokens       Owns (effective)  Host ID                               Rack
+    UN  172.17.0.2  103.67 KiB  256          100.0%            af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26  rack1
+    ```
+
+5. Use `cqlsh` to connect to the Cassandra cluster. 
+
+    ```bash
+    $ docker exec -ti cassandra cqlsh localhost
+    Connected to Test Cluster at localhost:9042.
+    [cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
+    Use HELP for help.
+    cqlsh>
+    ```
+
+6. Create a keyspace `pulsar_test_keyspace`.
+
+    ```bash
+    cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1};
+    ```
+
+7. Create a table `pulsar_test_table`.
+
+    ```bash
+    cqlsh> USE pulsar_test_keyspace;
+    cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text);
+    ```
+
+### Configure a Cassandra sink
+
+Now that a Cassandra cluster is running locally, you can configure a Cassandra sink connector.
+
+To run a Cassandra sink connector, you need to prepare a configuration file with the information that the Pulsar connector runtime needs, for example, how the connector can find the Cassandra cluster, and which keyspace and table the connector uses for writing Pulsar messages.
+
+You can create a configuration file through one of the following methods.
+
+* JSON
+
+    ```json
+    {
+        "roots": "localhost:9042",
+        "keyspace": "pulsar_test_keyspace",
+        "columnFamily": "pulsar_test_table",
+        "keyname": "key",
+        "columnName": "col"
+    }
+    ```
+
+* YAML
+  
+    ```yaml
+    configs:
+        roots: "localhost:9042"
+        keyspace: "pulsar_test_keyspace"
+        columnFamily: "pulsar_test_table"
+        keyname: "key"
+        columnName: "col"
+    ```
+  
+For more information, see [Cassandra sink connector](io-cassandra-sink.md).
+
+### Create a Cassandra sink
+
+You can use the [Connector Admin CLI](io-cli.md) 
+to create a sink connector and perform other operations on it.
+
+Save the YAML configuration above as _examples/cassandra-sink.yml_, then run the following command to create a Cassandra sink connector with sink type _cassandra_ and that configuration file.
+
+```bash
+bin/pulsar-admin sinks create \
+    --tenant public \
+    --namespace default \
+    --name cassandra-test-sink \
+    --sink-type cassandra \
+    --sink-config-file examples/cassandra-sink.yml \
+    --inputs test_cassandra
+```
+
+Once the command is executed, Pulsar creates the sink connector _cassandra-test-sink_. 
+
+This sink connector runs as a Pulsar Function and writes the messages produced in the topic _test_cassandra_ to the Cassandra table _pulsar_test_table_.
+
+### Inspect a Cassandra sink
+
+You can use the [Connector Admin CLI](io-cli.md) 
+to monitor a connector and perform other operations on it.
+
+* Get the information of a Cassandra sink. 
+
+  ```bash
+  bin/pulsar-admin sinks get \
+      --tenant public \
+      --namespace default \
+      --name cassandra-test-sink
+  ```
+
+  **Example output**
+
+  ```json
+  {
+    "tenant": "public",
+    "namespace": "default",
+    "name": "cassandra-test-sink",
+    "className": "org.apache.pulsar.io.cassandra.CassandraStringSink",
+    "inputSpecs": {
+      "test_cassandra": {
+        "isRegexPattern": false
+      }
+    },
+    "configs": {
+      "roots": "localhost:9042",
+      "keyspace": "pulsar_test_keyspace",
+      "columnFamily": "pulsar_test_table",
+      "keyname": "key",
+      "columnName": "col"
+    },
+    "parallelism": 1,
+    "processingGuarantees": "ATLEAST_ONCE",
+    "retainOrdering": false,
+    "autoAck": true,
+    "archive": "builtin://cassandra"
+  }
+  ```
+
+* Check the status of a Cassandra sink. 
+
+  ```bash
+  bin/pulsar-admin sinks status \
+      --tenant public \
+      --namespace default \
+      --name cassandra-test-sink
+  ```
+
+  **Example output**
+
+  ```json
+  {
+    "numInstances" : 1,
+    "numRunning" : 1,
+    "instances" : [ {
+      "instanceId" : 0,
+      "status" : {
+        "running" : true,
+        "error" : "",
+        "numRestarts" : 0,
+        "numReadFromPulsar" : 0,
+        "numSystemExceptions" : 0,
+        "latestSystemExceptions" : [ ],
+        "numSinkExceptions" : 0,
+        "latestSinkExceptions" : [ ],
+        "numWrittenToSink" : 0,
+        "lastReceivedTime" : 0,
+        "workerId" : "c-standalone-fw-localhost-8080"
+      }
+    } ]
+  }
+  ```
+
+### Verify a Cassandra sink
+
+1. Produce some messages to the input topic of the Cassandra sink, _test_cassandra_ (a programmatic Java equivalent is sketched after this list).
+
+    ```bash
+    for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done
+    ```
+
+2. Inspect the status of the Cassandra sink _cassandra-test-sink_.
+
+    ```bash
+    bin/pulsar-admin sinks status \
+        --tenant public \
+        --namespace default \
+        --name cassandra-test-sink
+    ```
+
+    You can see that 10 messages have been processed by the Cassandra sink _cassandra-test-sink_.
+
+    **Example output**
+
+    ```json
+    {
+      "numInstances" : 1,
+      "numRunning" : 1,
+      "instances" : [ {
+        "instanceId" : 0,
+        "status" : {
+          "running" : true,
+          "error" : "",
+          "numRestarts" : 0,
+          "numReadFromPulsar" : 10,
+          "numSystemExceptions" : 0,
+          "latestSystemExceptions" : [ ],
+          "numSinkExceptions" : 0,
+          "latestSinkExceptions" : [ ],
+          "numWrittenToSink" : 10,
+          "lastReceivedTime" : 1551685489136,
+          "workerId" : "c-standalone-fw-localhost-8080"
+        }
+      } ]
+    }
+    ```
+
+3. Use `cqlsh` to connect to the Cassandra cluster.
+
+   ```bash
+   docker exec -ti cassandra cqlsh localhost
+   ```
+
+4. Check the data of the Cassandra table _pulsar_test_table_.
+
+   ```bash
+   cqlsh> use pulsar_test_keyspace;
+   cqlsh:pulsar_test_keyspace> select * from pulsar_test_table;
+
+   key    | col
+   --------+--------
+     key-5 |  key-5
+     key-0 |  key-0
+     key-9 |  key-9
+     key-2 |  key-2
+     key-1 |  key-1
+     key-3 |  key-3
+     key-6 |  key-6
+     key-7 |  key-7
+     key-4 |  key-4
+     key-8 |  key-8
+   ```
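+
+As referenced in step 1, the following is a minimal Java sketch equivalent to the `pulsar-client` loop; the broker URL is an assumption.
+
+```java
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.Schema;
+
+public class ProduceCassandraKeys {
+    public static void main(String[] args) throws Exception {
+        try (PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650") // illustrative broker URL
+                .build()) {
+            Producer<String> producer = client.newProducer(Schema.STRING)
+                    .topic("test_cassandra")
+                    .create();
+            // Same payloads as the CLI loop: key-0 through key-9.
+            for (int i = 0; i < 10; i++) {
+                producer.send("key-" + i);
+            }
+            producer.close();
+        }
+    }
+}
+```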
+
+### Delete a Cassandra sink
+
+You can use the [Connector Admin CLI](io-cli.md) 
+to delete a connector and perform other operations on it.
+
+```bash
+bin/pulsar-admin sinks delete \
+    --tenant public \
+    --namespace default \
+    --name cassandra-test-sink
+```
+
+## Connect Pulsar to PostgreSQL
+
+This section demonstrates how to connect Pulsar to PostgreSQL.
+
+> #### Tip
+> 
+> * Make sure you have Docker installed. If you do not, see [install Docker](https://docs.docker.com/docker-for-mac/install/).
+> 
+> * The JDBC sink connector pulls messages from Pulsar topics and persists the messages to ClickHouse, MariaDB, PostgreSQL, or SQLite.
+> For more information, see [JDBC sink connector](io-jdbc-sink.md).
+
+
+### Set up a PostgreSQL cluster
+
+This example uses the PostgreSQL 12 Docker image to start a single-node PostgreSQL cluster in Docker.
+
+1. Pull the PostgreSQL 12 image from Docker.
+
+    ```bash
+    $ docker pull postgres:12
+    ```
+
+2. Start PostgreSQL.
+
+    ```bash
+    $ docker run -d -it --rm \
+    --name pulsar-postgres \
+    -p 5432:5432 \
+    -e POSTGRES_PASSWORD=password \
+    -e POSTGRES_USER=postgres \
+    postgres:12
+    ```
+
+    #### Tip
+    
+     Flag | Description | This example
+     ---|---|---|
+     `-d` | To start a container in detached mode. | /
+     `-it` | Keep STDIN open even if not attached and allocate a terminal. | /
+     `--rm` | Remove the container automatically when it exits. | /
+     `--name` | Assign a name to the container. | This example specifies _pulsar-postgres_ for the container.
+     `-p` | Publish the port of the container to the host. | This example publishes the port _5432_ of the container to the host.
+     `-e` | Set environment variables. | This example sets the following variables:<br>- The password for the user is _password_.<br>- The name for the user is _postgres_.
+
+     > #### Tip
+     >
+     > For more information about Docker commands, see [Docker CLI](https://docs.docker.com/engine/reference/commandline/run/).
+
+3. Check if PostgreSQL has been started successfully.
+
+    ```bash
+    $ docker logs -f pulsar-postgres
+    ```
+
+    PostgreSQL has been started successfully if the following message appears.
+
+    ```text
+    2020-05-11 20:09:24.492 UTC [1] LOG:  starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
+    2020-05-11 20:09:24.492 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
+    2020-05-11 20:09:24.492 UTC [1] LOG:  listening on IPv6 address "::", port 5432
+    2020-05-11 20:09:24.499 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
+    2020-05-11 20:09:24.523 UTC [55] LOG:  database system was shut down at 2020-05-11 20:09:24 UTC
+    2020-05-11 20:09:24.533 UTC [1] LOG:  database system is ready to accept connections
+    ```
+
+4. Access PostgreSQL.
+
+    ```bash
+    $ docker exec -it pulsar-postgres /bin/bash     
+    ```
+
+5. Create a PostgreSQL table _pulsar_postgres_jdbc_sink_.
+
+    ```bash
+    $ psql -U postgres postgres
+    
+    postgres=# create table if not exists pulsar_postgres_jdbc_sink
+    (
+    id serial PRIMARY KEY,
+    name VARCHAR(255) NOT NULL    
+    );
+    ```
+
+### Configure a JDBC sink
+
+Now that PostgreSQL is running locally, you can configure a JDBC sink connector.
+
+1. Add a configuration file.
+
+    To run a JDBC sink connector, you need to prepare a YAML configuration file with the information that the Pulsar connector runtime needs, for example, how the connector can find the PostgreSQL cluster, the JDBC URL, and the table that the connector uses for writing messages.
+
+    Create a _pulsar-postgres-jdbc-sink.yaml_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder.
+
+    ```yaml
+    configs:
+      userName: "postgres"
+      password: "password"
+      jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink"
+      tableName: "pulsar_postgres_jdbc_sink"
+    ```
+
+2. Create a schema.
+
+    Create an _avro-schema_ file, copy the following contents to this file, and place the file in the `pulsar/connectors` folder.
+
+    ```json
+    {
+      "type": "AVRO",
+      "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}",
+      "properties": {}
+    }
+    ```
+
+    > #### Tip
+    >
+    > For more information about AVRO, see [Apache Avro](https://avro.apache.org/docs/1.9.1/).
+
+
+3. Upload a schema to a topic.  
+
+    This example uploads the _avro-schema_ schema to the _pulsar-postgres-jdbc-sink-topic_ topic.
+
+    ```bash
+    $ bin/pulsar-admin schemas upload pulsar-postgres-jdbc-sink-topic -f ./connectors/avro-schema
+    ```
+
+4. Check if the schema has been uploaded successfully.
+
+    ```bash
+    $ bin/pulsar-admin schemas get pulsar-postgres-jdbc-sink-topic
+    ```
+
+    The schema has been uploaded successfully if the following message appears.
+
+    ```json
+    {"name":"pulsar-postgres-jdbc-sink-topic","schema":"{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}","type":"AVRO","properties":{}}
+    ```
+
+### Create a JDBC sink
+
+You can use the [Connector Admin CLI](io-cli.md) 
+to create a sink connector and perform other operations on it.
+
+This example creates a sink connector with the required configuration.
+
+```bash
+$ bin/pulsar-admin sinks create \
+--archive ./connectors/pulsar-io-jdbc-postgres-{{pulsar:version}}.nar \
+--inputs pulsar-postgres-jdbc-sink-topic \
+--name pulsar-postgres-jdbc-sink \
+--sink-config-file ./connectors/pulsar-postgres-jdbc-sink.yaml \
+--parallelism 1
+```
+
+Once the command is executed, Pulsar creates a sink connector _pulsar-postgres-jdbc-sink_.
+
+This sink connector runs as a Pulsar Function and writes the messages produced in the topic _pulsar-postgres-jdbc-sink-topic_ to the PostgreSQL table _pulsar_postgres_jdbc_sink_.
+
+ #### Tip
+
+ Flag | Description | This example 
+ ---|---|---|
+ `--archive` | The path to the archive file for the sink. | _pulsar-io-jdbc-postgres-{{pulsar:version}}.nar_ |
+ `--inputs` | The input topic(s) of the sink. <br><br> Multiple topics can be specified as a comma-separated list.| _pulsar-postgres-jdbc-sink-topic_ |
+ `--name` | The name of the sink. | _pulsar-postgres-jdbc-sink_ |
+ `--sink-config-file` | The path to a YAML config file specifying the configuration of the sink. | _pulsar-postgres-jdbc-sink.yaml_ |
+ `--parallelism` | The parallelism factor of the sink. <br><br> That is, the number of sink instances to run. |  _1_ |
+
+ > #### Tip
+ >
+ > For more information about `pulsar-admin sinks create options`, see [here](io-cli.md#sinks).
+
+The sink has been created successfully if the following message appears.
+
+```bash
+"Created successfully"
+```
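+
+To exercise the sink, you can produce Avro records to the input topic. The following is an illustrative Java sketch; the broker URL and the POJO are assumptions, with the POJO mirroring the uploaded `Test` schema.
+
+```java
+import org.apache.avro.reflect.Nullable;
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.Schema;
+
+public class ProduceJdbcRecords {
+    // Mirrors the uploaded schema: record "Test" with nullable id and name fields.
+    public static class Test {
+        @Nullable public Integer id;
+        @Nullable public String name;
+        public Test() {}
+        public Test(Integer id, String name) { this.id = id; this.name = name; }
+    }
+
+    public static void main(String[] args) throws Exception {
+        try (PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650") // illustrative broker URL
+                .build()) {
+            Producer<Test> producer = client.newProducer(Schema.AVRO(Test.class))
+                    .topic("pulsar-postgres-jdbc-sink-topic")
+                    .create();
+            producer.send(new Test(1, "hello"));
+            producer.close();
+        }
+    }
+}
+```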
+
+### Inspect a JDBC sink
+
+You can use the [Connector Admin CLI](io-cli.md) 
+to monitor a connector and perform other operations on it.
+
+* List all running JDBC sink(s).
+
+  ```bash
+  $ bin/pulsar-admin sinks list \
+  --tenant public \
+  --namespace default
+  ```
+
+  > #### Tip
+  > 
+  > For more information about `pulsar-admin sinks list options`, see [here](io-cli.md/#list-1).
+
+  The result shows that only the _pulsar-postgres-jdbc-sink_ sink is running.
+
+  ```json
+  [
+  "pulsar-postgres-jdbc-sink"
+  ]
+  ```
+
+* Get the information of a JDBC sink.
+
+  ```bash
+  $ bin/pulsar-admin sinks get \
+  --tenant public \
+  --namespace default \
+  --name pulsar-postgres-jdbc-sink
+  ```
+
+  > #### Tip
+  > 
+  > For more information about `pulsar-admin sinks get options`, see [here](io-cli.md/#get-1).
+
+  The result shows the information of the sink connector, including tenant, namespace, topic and so on.
+
+  ```json
+  {
+    "tenant": "public",
+    "namespace": "default",
+    "name": "pulsar-postgres-jdbc-sink",
+    "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink",
+    "inputSpecs": {
+      "pulsar-postgres-jdbc-sink-topic": {
+        "isRegexPattern": false
+      }
+    },
+    "configs": {
+      "password": "password",
+      "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
+      "userName": "postgres",
+      "tableName": "pulsar_postgres_jdbc_sink"
+    },
+    "parallelism": 1,
+    "processingGuarantees": "ATLEAST_ONCE",
+    "retainOrdering": false,
+    "autoAck": true
+  }
+  ```
+
+* Get the status of a JDBC sink.
+
+  ```bash
+  $ bin/pulsar-admin sinks status \
+  --tenant public \
+  --namespace default \
+  --name pulsar-postgres-jdbc-sink
+  ```
+
+  > #### Tip
+  > 
+  > For more information about `pulsar-admin sinks status options`, see [here](io-cli.md/#status-1).
+
+  The result shows the current status of the sink connector, including the number of instances, running status, worker ID, and so on.
+
+  ```json
+  {
+    "numInstances" : 1,
+    "numRunning" : 1,
+    "instances" : [ {
+      "instanceId" : 0,
+      "status" : {
+        "running" : true,
+        "error" : "",
+        "numRestarts" : 0,
+        "numReadFromPulsar" : 0,
+        "numSystemExceptions" : 0,
+        "latestSystemExceptions" : [ ],
+        "numSinkExceptions" : 0,
+        "latestSinkExceptions" : [ ],
+        "numWrittenToSink" : 0,
+        "lastReceivedTime" : 0,
+        "workerId" : "c-standalone-fw-192.168.2.52-8080"
+      }
+    } ]
+  }
+  ```
+
+### Stop a JDBC sink
+
+You can use the [Connector Admin CLI](io-cli.md) 
+to stop a connector and perform other operations on it.
+
+```bash
+$ bin/pulsar-admin sinks stop \
+--tenant public \
+--namespace default \
+--name pulsar-postgres-jdbc-sink
+```
+
+> #### Tip
+> 
+> For more information about `pulsar-admin sinks stop options`, see [here](io-cli.md/#stop-1).
+
+The sink instance has been stopped successfully if the following message appears.
+
+```bash
+"Stopped successfully"
+```
+
+### Restart a JDBC sink
+
+You can use the [Connector Admin CLI](io-cli.md) 
+to restart a connector and perform other operations on it.
+
+```bash
+$ bin/pulsar-admin sinks restart \
+--tenant public \
+--namespace default \
+--name pulsar-postgres-jdbc-sink 
+```
+
+> #### Tip
+> 
+> For more information about `pulsar-admin sinks restart options`, see [here](io-cli.md/#restart-1).
+
+The sink instance has been started successfully if the following message appears.
+
+```bash
+"Started successfully"
+```
+
+> #### Tip
+>
+> * Optionally, you can run a standalone sink connector using `pulsar-admin sinks localrun options`. 
+> 
+>   Note that `pulsar-admin sinks localrun options` **runs a sink connector locally**, while `pulsar-admin sinks start options` **starts a sink connector in a cluster**.
+>
+> * For more information about `pulsar-admin sinks localrun options`, see [here](io-cli.md#localrun-1).
+
+### Update a JDBC sink
+
+You can use the [Connector Admin CLI](io-cli.md) 
+to update a connector and perform other operations on it.
+
+This example updates the parallelism of the _pulsar-postgres-jdbc-sink_ sink connector to 2.
+
+```bash
+$ bin/pulsar-admin sinks update \
+--name pulsar-postgres-jdbc-sink \
+--parallelism 2
+```
+
+> #### Tip
+> 
+> For more information about `pulsar-admin sinks update options`, see [here](io-cli.md/#update-1).
+
+The sink connector has been updated successfully if the following message appears.
+
+```bash
+"Updated successfully"
+```
+
+This example double-checks the information.
+
+```bash
+$ bin/pulsar-admin sinks get \
+--tenant public \
+--namespace default \
+--name pulsar-postgres-jdbc-sink
+```
+
+The result shows that the parallelism is 2.
+
+```json
+{
+  "tenant": "public",
+  "namespace": "default",
+  "name": "pulsar-postgres-jdbc-sink",
+  "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink",
+  "inputSpecs": {
+    "pulsar-postgres-jdbc-sink-topic": {
+      "isRegexPattern": false
+    }
+  },
+  "configs": {
+    "password": "password",
+    "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
+    "userName": "postgres",
+    "tableName": "pulsar_postgres_jdbc_sink"
+  },
+  "parallelism": 2,
+  "processingGuarantees": "ATLEAST_ONCE",
+  "retainOrdering": false,
+  "autoAck": true
+}
+```
+
+### Delete a JDBC sink
+
+You can use the [Connector Admin CLI](io-cli.md) 
+to delete a connector and perform other operations on it.
+
+This example deletes the _pulsar-postgres-jdbc-sink_ sink connector.
+
+```bash
+$ bin/pulsar-admin sinks delete \
+--tenant public \
+--namespace default \
+--name pulsar-postgres-jdbc-sink
+```
+
+> #### Tip
+> 
+> For more information about `pulsar-admin sinks delete options`, see [here](io-cli.md/#delete-1).
+
+The sink connector has been deleted successfully if the following message appears.
+
+```text
+"Deleted successfully"
+```
+
+This example double-checks the status of the sink connector.
+
+```bash
+$ bin/pulsar-admin sinks get \
+--tenant public \
+--namespace default \
+--name pulsar-postgres-jdbc-sink
+```
+
+The result shows that the sink connector does not exist.
+
+```text
+HTTP 404 Not Found
+
+Reason: Sink pulsar-postgres-jdbc-sink doesn't exist
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/io-use.md b/site2/website/versioned_docs/version-2.6.0/io-use.md
new file mode 100644
index 0000000..1c2a660
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/io-use.md
@@ -0,0 +1,1505 @@
+---
+id: version-2.6.0-io-use
+title: How to use Pulsar connectors
+sidebar_label: Use
+original_id: io-use
+---
+
+This guide describes how to use Pulsar connectors.
+
+## Install a connector
+
+Pulsar bundles several [builtin connectors](io-connectors.md) used to move data in and out of commonly used systems (such as databases and messaging systems). Optionally, you can create and use your desired non-builtin connectors.
+
+> #### Note
+> 
+> When using a non-builtin connector, you need to specify the path of an archive file for the connector.
+
+To set up a builtin connector, follow
+the instructions [here](getting-started-standalone.md#installing-builtin-connectors).
+
+After the setup, the builtin connector is automatically discovered by Pulsar brokers (or function-workers), so no additional installation steps are required.
+
+## Configure a connector
+
+You can configure the following information:
+
+* [Configure a default storage location for a connector](#configure-a-default-storage-location-for-a-connector)
+
+* [Configure a connector with a YAML file](#configure-a-connector-with-a-yaml-file)
+
+### Configure a default storage location for a connector
+
+To configure a default folder for builtin connectors, set the `connectorsDirectory` parameter in the `./conf/functions_worker.yml` configuration file.
+
+**Example**
+
+Set the `./connectors` folder as the default storage location for builtin connectors.
+
+```
+########################
+# Connectors
+########################
+
+connectorsDirectory: ./connectors
+```
+
+### Configure a connector with a YAML file
+
+To configure a connector, you need to provide a YAML configuration file when creating a connector.
+
+The YAML configuration file tells Pulsar where to locate connectors and how to connect connectors with Pulsar topics.
+
+**Example 1**
+
+Below is a YAML configuration file of a Cassandra sink, which tells Pulsar:
+
+* Which Cassandra cluster to connect to
+  
+* Which `keyspace` and `columnFamily` to use in Cassandra for collecting data
+  
+* How to map Pulsar messages into the Cassandra table key and columns
+
+```yaml
+tenant: public
+namespace: default
+name: cassandra-test-sink
+...
+# cassandra specific config
+configs:
+    roots: "localhost:9042"
+    keyspace: "pulsar_test_keyspace"
+    columnFamily: "pulsar_test_table"
+    keyname: "key"
+    columnName: "col"
+```
+
+**Example 2**
+
+Below is a YAML configuration file of a Kafka source.
+
+```yaml
+configs:
+   bootstrapServers: "pulsar-kafka:9092"
+   groupId: "test-pulsar-io"
+   topic: "my-topic"
+   sessionTimeoutMs: "10000"
+   autoCommitEnabled: "false"
+```
+
+**Example 3**
+
+Below is a YAML configuration file of a PostgreSQL JDBC sink.
+
+```yaml
+configs:
+   userName: "postgres"
+   password: "password"
+   jdbcUrl: "jdbc:postgresql://localhost:5432/test_jdbc"
+   tableName: "test_jdbc"
+```
+
+## Get available connectors
+
+Before using connectors, you can perform the following operations:
+
+* [Reload connectors](#reload)
+  
+* [Get a list of available connectors](#available)
+
+### `reload`
+
+If you add or delete a NAR file in a connector folder, reload the available builtin connectors before using them.
+
+#### Source
+
+Use the `reload` subcommand.
+
+```shell
+$ pulsar-admin sources reload
+```
+
+For more information, see [`here`](io-cli.md#reload).
+
+#### Sink
+
+Use the `reload` subcommand.
+
+```shell
+$ pulsar-admin sinks reload
+```
+
+For more information, see [`here`](io-cli.md#reload-1).
+
+### `available`
+
+After reloading connectors (optional), you can get a list of available connectors.
+
+#### Source
+
+Use the `available-sources` subcommand.
+
+```shell
+$ pulsar-admin sources available-sources
+```
+
+#### Sink
+
+Use the `available-sinks` subcommand.
+
+```shell
+$ pulsar-admin sinks available-sinks
+```
+
+## Run a connector
+
+To run a connector, you can perform the following operations:
+
+* [Create a connector](#create)
+  
+* [Start a connector](#start)
+
+* [Run a connector locally](#localrun)
+
+### `create`
+
+You can create a connector using **Admin CLI**, **REST API**, or **Java admin API**.
+
+#### Source
+
+Create a source connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `create` subcommand.
+
+```
+$ pulsar-admin sources create options
+```
+
+For more information, see [here](io-cli.md#create).
+
+<!--REST API-->
+
+Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/registerSource}
+
+<!--Java Admin API-->
+
+* Create a source connector with a **local file**.
+
+    ```java
+    void createSource(SourceConfig sourceConfig,
+                      String fileName)
+               throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    |Name|Description
+    |---|---
+    `sourceConfig` | The source configuration object
+    `fileName` | The path of the local connector package file
+
+   **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`createSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSource-SourceConfig-java.lang.String-).
+
+* Create a source connector using a **remote file** with a URL from which the connector package can be downloaded. 
+
+    ```java
+    void createSourceWithUrl(SourceConfig sourceConfig,
+                             String pkgUrl)
+                      throws PulsarAdminException
+    ```
+
+    Supported URLs are `http` and `file`.
+
+    **Example**
+
+    * HTTP: http://www.repo.com/fileName.jar
+
+    * File: file:///dir/fileName.jar
+
+    **Parameter**
+
+    Parameter| Description
+    |---|---
+    `sourceConfig` | The source configuration object
+    `pkgUrl` | URL from which pkg can be downloaded
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+  
+    For more information, see [`createSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#createSourceWithUrl-SourceConfig-java.lang.String-).
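+
+As a usage sketch for the methods above (the admin URL, names, and archive path are illustrative assumptions):
+
+```java
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+SourceConfig sourceConfig = SourceConfig.builder()
+        .tenant("public")
+        .namespace("default")
+        .name("my-source")                   // illustrative source name
+        .topicName("my-source-output-topic") // illustrative output topic
+        .parallelism(1)
+        .build();
+// The second argument is the path of the local connector package to upload.
+admin.sources().createSource(sourceConfig, "connectors/my-source.nar");
+```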
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Sink
+
+Create a sink connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `create` subcommand.
+
+```
+$ pulsar-admin sinks create options
+```
+
+For more information, see [here](io-cli.md#create-1).
+
+<!--REST API-->
+
+Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/registerSink}
+
+<!--Java Admin API-->
+
+* Create a sink connector with a **local file**.
+  
+    ```java
+    void createSink(SinkConfig sinkConfig,
+                    String fileName)
+             throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    |Name|Description
+    |---|---
+    `sinkConfig` | The sink configuration object
+    `fileName` | The path of the local connector package file
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`createSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSink-SinkConfig-java.lang.String-).
+
+* Create a sink connector using a **remote file** with a URL from which the connector package can be downloaded. 
+
+    ```java
+    void createSinkWithUrl(SinkConfig sinkConfig,
+                        String pkgUrl)
+                    throws PulsarAdminException
+    ```
+
+    Supported URLs are `http` and `file`.
+
+    **Example**
+
+    * HTTP: http://www.repo.com/fileName.jar
+
+    * File: file:///dir/fileName.jar
+
+    **Parameter**
+
+    Parameter| Description
+    |---|---
+    `sinkConfig` | The sink configuration object
+    `pkgUrl` | URL from which pkg can be downloaded
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+  
+    For more information, see [`createSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#createSinkWithUrl-SinkConfig-java.lang.String-).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### `start`
+
+You can start a connector using **Admin CLI** or **REST API**.
+
+#### Source
+
+Start a source connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `start` subcommand.
+
+```
+$ pulsar-admin sources start options
+```
+
+For more information, see [here](io-cli.md#start).
+
+<!--REST API-->
+
+* Start **all** instances of a source connector.
+
+    Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/start|operation/startSource}
+
+* Start a **specified** instance of a source connector.
+
+    Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/start|operation/startSource}
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Sink
+
+Start a sink connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `start` subcommand.
+
+```
+$ pulsar-admin sinks start options
+```
+
+For more information, see [here](io-cli.md#start-1).
+
+<!--REST API-->
+
+* Start **all** instances of a sink connector.
+
+    Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/start|operation/startSink}
+
+* Start a **specified** instance of a sink connector.
+
+    Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/start|operation/startSink}
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### `localrun`
+
+Using **Admin CLI**, you can run a connector locally rather than deploying it on a Pulsar cluster.
+
+#### Source
+
+Run a source connector locally.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `localrun` subcommand.
+
+```
+$ pulsar-admin sources localrun options
+```
+
+For more information, see [here](io-cli.md#localrun).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Sink
+
+Run a sink connector locally.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `localrun` subcommand.
+
+```
+$ pulsar-admin sinks localrun options
+```
+
+For more information, see [here](io-cli.md#localrun-1).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## Monitor a connector
+
+To monitor a connector, you can perform the following operations:
+
+* [Get the information of a connector](#get)
+
+* [Get the list of all running connectors](#list)
+
+* [Get the current status of a connector](#status)
+
+### `get`
+
+You can get the information of a connector using **Admin CLI**, **REST API**, or **Java admin API**.
+
+#### Source
+
+Get the information of a source connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `get` subcommand.
+
+```
+$ pulsar-admin sources get options
+```
+
+For more information, see [here](io-cli.md#get).
+
+<!--REST API-->
+
+Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/getSourceInfo}
+
+<!--Java Admin API-->
+
+```java
+SourceConfig getSource(String tenant,
+                       String namespace,
+                       String source)
+                throws PulsarAdminException
+```
+
+**Example**
+
+This is a sourceConfig.
+
+```json
+{
+ "tenant": "tenantName",
+ "namespace": "namespaceName",
+ "name": "sourceName",
+ "className": "className",
+ "topicName": "topicName",
+ "configs": {},
+ "parallelism": 1,
+ "processingGuarantees": "ATLEAST_ONCE",
+ "resources": {
+   "cpu": 1.0,
+   "ram": 1073741824,
+   "disk": 10737418240
+ }
+}
+```
+
+This is a sourceConfig example.
+
+```json
+{
+ "tenant": "public",
+ "namespace": "default",
+ "name": "debezium-mysql-source",
+ "className": "org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource",
+ "topicName": "debezium-mysql-topic",
+ "configs": {
+   "database.user": "debezium",
+   "database.server.id": "184054",
+   "database.server.name": "dbserver1",
+   "database.port": "3306",
+   "database.hostname": "localhost",
+   "database.password": "dbz",
+   "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
+   "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+   "database.whitelist": "inventory",
+   "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+   "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
+   "pulsar.service.url": "pulsar://127.0.0.1:6650",
+   "database.history.pulsar.topic": "history-topic2"
+ },
+ "parallelism": 1,
+ "processingGuarantees": "ATLEAST_ONCE",
+ "resources": {
+   "cpu": 1.0,
+   "ram": 1073741824,
+   "disk": 10737418240
+ }
+}
+```
+
+**Exception**
+
+Exception name | Description
+|---|---
+`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission 
+`PulsarAdminException.NotFoundException` | Cluster doesn't exist
+`PulsarAdminException` | Unexpected error
+
+For more information, see [`getSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSource-java.lang.String-java.lang.String-java.lang.String-).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Sink
+
+Get the information of a sink connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `get` subcommand.
+
+```
+$ pulsar-admin sinks get options
+```
+
+For more information, see [here](io-cli.md#get-1).
+
+<!--REST API-->
+
+Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/getSinkInfo}
+
+<!--Java Admin API-->
+
+```java
+SinkConfig getSink(String tenant,
+                   String namespace,
+                   String sink)
+            throws PulsarAdminException
+```
+
+**Example**
+
+This is a sinkConfig.
+
+```json
+{
+"tenant": "tenantName",
+"namespace": "namespaceName",
+"name": "sinkName",
+"className": "className",
+"inputSpecs": {
+"topicName": {
+    "isRegexPattern": false
+}
+},
+"configs": {},
+"parallelism": 1,
+"processingGuarantees": "ATLEAST_ONCE",
+"retainOrdering": false,
+"autoAck": true
+}
+```
+
+This is a sinkConfig example.
+
+```json
+{
+  "tenant": "public",
+  "namespace": "default",
+  "name": "pulsar-postgres-jdbc-sink",
+  "className": "org.apache.pulsar.io.jdbc.PostgresJdbcAutoSchemaSink",
+  "inputSpecs": {
+  "pulsar-postgres-jdbc-sink-topic": {
+     "isRegexPattern": false
+    }
+  },
+  "configs": {
+    "password": "password",
+    "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
+    "userName": "postgres",
+    "tableName": "pulsar_postgres_jdbc_sink"
+  },
+  "parallelism": 1,
+  "processingGuarantees": "ATLEAST_ONCE",
+  "retainOrdering": false,
+  "autoAck": true
+}
+```
+
+**Parameter description**
+
+Name| Description
+|---|---
+`tenant` | Tenant name
+`namespace` | Namespace name
+`sink` | Sink name
+
+For more information, see [`getSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSink-java.lang.String-java.lang.String-java.lang.String-).
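+
+A usage sketch, assuming an initialized `PulsarAdmin` client named `admin` and the sink created earlier in this guide:
+
+```java
+SinkConfig conf = admin.sinks().getSink("public", "default", "pulsar-postgres-jdbc-sink");
+System.out.println(conf.getParallelism());
+```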
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### `list`
+
+You can get the list of all running connectors using **Admin CLI**, **REST API**, or **Java admin API**.
+
+#### Source
+
+Get the list of all running source connectors.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `list` subcommand.
+
+```
+$ pulsar-admin sources list options
+```
+
+For more information, see [here](io-cli.md#list).
+
+<!--REST API-->
+
+Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/|operation/listSources}
+
+<!--Java Admin API-->
+
+```java
+List<String> listSources(String tenant,
+                         String namespace)
+                  throws PulsarAdminException
+```
+
+**Response example**
+
+```java
+["f1", "f2", "f3"]
+```
+
+**Exception**
+
+Exception name | Description
+|---|---
+`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission
+`PulsarAdminException` | Unexpected error
+
+For more information, see [`listSources`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#listSources-java.lang.String-java.lang.String-).
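+
+A usage sketch, assuming an initialized `PulsarAdmin` client named `admin`:
+
+```java
+// Print the names of all running sources under public/default.
+for (String name : admin.sources().listSources("public", "default")) {
+    System.out.println(name);
+}
+```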
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Sink
+
+Get the list of all running sink connectors.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `list` subcommand.
+
+```
+$ pulsar-admin sinks list options
+```
+
+For more information, see [here](io-cli.md#list-1).
+
+<!--REST API-->
+
+Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/|operation/listSinks}
+
+<!--Java Admin API-->
+
+```java
+List<String> listSinks(String tenant,
+                       String namespace)
+                throws PulsarAdminException
+```
+
+**Response example**
+
+```java
+["f1", "f2", "f3"]
+```
+
+**Exception**
+
+Exception name | Description
+|---|---
+`PulsarAdminException.NotAuthorizedException` | You don't have the admin permission 
+`PulsarAdminException` | Unexpected error
+
+For more information, see [`listSinks`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#listSinks-java.lang.String-java.lang.String-).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### `status`
+
+You can get the current status of a connector using **Admin CLI**, **REST API**, or **Java admin API**.
+
+#### Source
+
+Get the current status of a source connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `status` subcommand.
+
+```
+$ pulsar-admin sources status options
+```
+
+For more information, see [here](io-cli.md#status).
+
+<!--REST API-->
+
+* Get the current status of **all** instances of a source connector.
+  
+  Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/status|operation/getSourceStatus}
+
+* Get the current status of a **specified** instance of a source connector.
+
+  Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/status|operation/getSourceStatus}
+  
+<!--Java Admin API-->
+
+* Get the current status of **all** instances of a source connector.
+
+    ```java
+    SourceStatus getSourceStatus(String tenant,
+                                String namespace,
+                                String source)
+                        throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    Parameter| Description
+    |---|---
+    `tenant` | Tenant name
+    `namespace` | Namespace name
+    `source` | Source name
+
+    **Exception**
+
+    Name | Description
+    |---|---
+    `PulsarAdminException` | Unexpected error
+
+    For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-).
+
+* Get the current status of a **specified** instance of a source connector.
+
+    ```java
+    SourceStatus.SourceInstanceStatus.SourceInstanceStatusData getSourceStatus(String tenant,
+                                                                               String namespace,
+                                                                               String source,
+                                                                               int id)
+                                                                        throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    Parameter| Description
+    |---|---
+    `tenant` | Tenant name
+    `namespace` | Namespace name
+    `source` | Source name
+    `id` | Source instanceID
+
+    **Exception**
+
+    Exception name | Description
+    |---|---
+    `PulsarAdminException` | Unexpected error
+
+    For more information, see [`getSourceStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#getSourceStatus-java.lang.String-java.lang.String-java.lang.String-int-).
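+
+A usage sketch, assuming an initialized `PulsarAdmin` client named `admin` and a source named `my-source` (illustrative):
+
+```java
+// SourceStatus lives in org.apache.pulsar.common.policies.data.
+SourceStatus status = admin.sources().getSourceStatus("public", "default", "my-source");
+System.out.println(status.getNumRunning() + "/" + status.getNumInstances() + " instances running");
+```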
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Sink
+
+Get the current status of a Pulsar sink connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `status` subcommand.
+
+```
+$ pulsar-admin sinks status options
+```
+
+For more information, see [here](io-cli.md#status-1).
+
+<!--REST API-->
+
+* Get the current status of **all** instances of a sink connector.
+  
+  Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/status|operation/getSinkStatus}
+
+* Get the current status of a **specified** instance of a sink connector.
+
+  Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/status|operation/getSinkInstanceStatus}
+
+<!--Java Admin API-->
+
+* Get the current status of **all** instances of a sink connector.
+
+    ```java
+    SinkStatus getSinkStatus(String tenant,
+                             String namespace,
+                             String sink)
+                      throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    Parameter| Description
+    |---|---
+    `tenant` | Tenant name
+    `namespace` | Namespace name
+    `sink` | Sink name
+
+    **Exception**
+
+    Exception name | Description
+    |---|---
+    `PulsarAdminException` | Unexpected error
+
+    For more information, see [`getSinkStatus`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-).
+
+* Get the current status of a **specified** instance of a sink connector.
+
+    ```java
+    SinkStatus.SinkInstanceStatus.SinkInstanceStatusData getSinkStatus(String tenant,
+                                                                       String namespace,
+                                                                       String sink,
+                                                                       int id)
+                                                                throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    Parameter| Description
+    |---|---
+    `tenant` | Tenant name
+    `namespace` | Namespace name
+    `sink` | Sink name
+    `id` | Sink instanceID
+
+    **Exception**
+
+    Exception name | Description
+    |---|---
+    `PulsarAdminException` | Unexpected error
+
+    For more information, see [`getSinkStatusWithInstanceID`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#getSinkStatus-java.lang.String-java.lang.String-java.lang.String-int-).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## Update a connector
+
+### `update`
+
+You can update a running connector using **Admin CLI**, **REST API**, or **Java admin API**.
+
+#### Source
+
+Update a running Pulsar source connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `update` subcommand.
+
+```
+$ pulsar-admin sources update options
+```
+
+For more information, see [here](io-cli.md#update).
+
+<!--REST API-->
+
+Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/updateSource}
+  
+<!--Java Admin API-->
+
+* Update a running source connector with a **local file**.
+
+    ```java
+    void updateSource(SourceConfig sourceConfig,
+                    String fileName)
+            throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    |`sourceConfig` | The source configuration object
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission 
+    | `PulsarAdminException.NotFoundException` | Cluster doesn't exist
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`updateSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSource-SourceConfig-java.lang.String-).
+
+* Update a source connector using a **remote file** with a URL from which the connector package can be downloaded. 
+
+    ```java
+    void updateSourceWithUrl(SourceConfig sourceConfig,
+                         String pkgUrl)
+                  throws PulsarAdminException
+    ```
+
+    Supported URLs are `http` and `file`.
+
+    **Example**
+
+    * HTTP: http://www.repo.com/fileName.jar
+
+    * File: file:///dir/fileName.jar
+  
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    | `sourceConfig` | The source configuration object
+    | `pkgUrl` | URL from which pkg can be downloaded
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission 
+    | `PulsarAdminException.NotFoundException` | Cluster doesn't exist
+    | `PulsarAdminException` | Unexpected error
+
+For more information, see [`updateSourceWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#updateSourceWithUrl-SourceConfig-java.lang.String-).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Sink
+
+Update a running Pulsar sink connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `update` subcommand.
+
+```
+$ pulsar-admin sinks update options
+```
+
+For more information, see [here](io-cli.md#update-1).
+
+<!--REST API-->
+
+Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/updateSink}
+  
+<!--Java Admin API-->
+
+* Update a running sink connector with a **local file**.
+
+    ```java
+    void updateSink(SinkConfig sinkConfig,
+                    String fileName)
+         throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    |`sinkConfig` | The sink configuration object
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
+    | `PulsarAdminException.NotFoundException` | Cluster doesn't exist
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`updateSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSink-SinkConfig-java.lang.String-).
+
+* Update a sink connector using a **remote file** with a URL from which the connector package can be downloaded. 
+
+    ```java
+    void updateSinkWithUrl(SinkConfig sinkConfig,
+                           String pkgUrl)
+                    throws PulsarAdminException
+    ```
+
+    Supported URLs are `http` and `file`.
+
+    **Example**
+
+    * HTTP: http://www.repo.com/fileName.jar
+
+    * File: file:///dir/fileName.jar
+  
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    | `sinkConfig` | The sink configuration object
+    | `pkgUrl` | URL from which pkg can be downloaded
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    |`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
+    |`PulsarAdminException.NotFoundException` | Sink doesn't exist
+    |`PulsarAdminException` | Unexpected error
+
+For more information, see [`updateSinkWithUrl`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#updateSinkWithUrl-SinkConfig-java.lang.String-).
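+
+As a minimal sketch, the same flow for a sink, assuming an initialized `admin` client as in the source example above and an imported `org.apache.pulsar.common.io.SinkConfig`; the sink name and package URL are hypothetical.
+
+```java
+SinkConfig sinkConfig = SinkConfig.builder()
+        .tenant("public")
+        .namespace("default")
+        .name("my-sink")   // hypothetical sink name
+        .build();
+
+// Update from a remote package URL (http or file).
+admin.sinks().updateSinkWithUrl(sinkConfig, "http://www.repo.com/fileName.jar");
+```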
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## Stop a connector
+
+### `stop`
+
+You can stop a connector using **Admin CLI**, **REST API**, or **Java Admin API**.
+
+#### Source
+
+Stop a source connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `stop` subcommand.
+
+```
+$ pulsar-admin sources stop options
+```
+
+For more information, see [here](io-cli.md#stop).
+
+<!--REST API-->
+
+* Stop **all** instances of a source connector.
+  
+  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/stop|operation/stopSource}
+
+* Stop a **specified** instance of a source connector.
+  
+  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/stop|operation/stopSource}
+
+<!--Java Admin API-->
+
+* Stop **all** instances of a source connector.
+
+    ```java
+    void stopSource(String tenant,
+                    String namespace,
+                    String source)
+            throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    | `tenant` | Tenant name
+    | `namespace` | Namespace name
+    | `source` | Source name
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-).
+
+* Stop a **specified** instance of a source connector.
+
+    ```java
+    void stopSource(String tenant,
+                    String namespace,
+                    String source,
+                    int instanceId)
+             throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    | `tenant` | Tenant name
+    | `namespace` | Namespace name
+    | `source` | Source name
+    | `instanceId` | Source instance ID
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`stopSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#stopSource-java.lang.String-java.lang.String-java.lang.String-int-).
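+
+For example, with an initialized `admin` client (see the update sketch earlier) and a hypothetical source named `my-source` in `public/default`:
+
+```java
+// Stop all instances of the source.
+admin.sources().stopSource("public", "default", "my-source");
+
+// Stop only instance 0.
+admin.sources().stopSource("public", "default", "my-source", 0);
+```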
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Sink
+
+Stop a sink connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `stop` subcommand.
+
+```
+$ pulsar-admin sinks stop options
+```
+
+For more information, see [here](io-cli.md#stop-1).
+
+<!--REST API-->
+
+* Stop **all** instances of a sink connector.
+  
+  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/stop|operation/stopSink}
+
+* Stop a **specified** instance of a sink connector.
+  
+  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/stop|operation/stopSink}
+
+<!--Java Admin API-->
+
+* Stop **all** instances of a sink connector.
+
+    ```java
+    void stopSink(String tenant,
+                String namespace,
+                String sink)
+        throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    | `tenant` | Tenant name
+    | `namespace` | Namespace name
+    | `sink` | Sink name
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-).
+
+* Stop a **specified** instance of a sink connector.
+
+    ```java
+    void stopSink(String tenant,
+                  String namespace,
+                  String sink,
+                  int instanceId)
+           throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    | `tenant` | Tenant name
+    | `namespace` | Namespace name
+    | `sink` | Sink name
+    | `instanceId` | Sink instance ID
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`stopSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#stopSink-java.lang.String-java.lang.String-java.lang.String-int-).
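+
+For example, with an initialized `admin` client and a hypothetical sink named `my-sink`:
+
+```java
+// Stop all instances of the sink, or a single instance by ID.
+admin.sinks().stopSink("public", "default", "my-sink");
+admin.sinks().stopSink("public", "default", "my-sink", 0);
+```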
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## Restart a connector
+
+### `restart`
+
+You can restart a connector using **Admin CLI**, **REST API**, or **Java Admin API**.
+
+#### Source
+
+Restart a source connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `restart` subcommand.
+
+```
+$ pulsar-admin sources restart options
+```
+
+For more information, see [here](io-cli.md#restart).
+
+<!--REST API-->
+
+* Restart **all** instances of a source connector.
+  
+  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/restart|operation/restartSource}
+
+* Restart a **specified** instance of a source connector.
+  
+  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sources/:tenant/:namespace/:sourceName/:instanceId/restart|operation/restartSource}
+
+<!--Java Admin API-->
+
+* Restart **all** instances of a source connector.
+
+    ```java
+    void restartSource(String tenant,
+                       String namespace,
+                       String source)
+                throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    | `tenant` | Tenant name
+    | `namespace` | Namespace name
+    | `source` | Source name
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-).
+
+* Restart a **specified** instance of a source connector.
+
+    ```java
+    void restartSource(String tenant,
+                       String namespace,
+                       String source,
+                       int instanceId)
+                throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    | `tenant` | Tenant name
+    | `namespace` | Namespace name
+    | `source` | Source name
+    | `instanceId` | Source instance ID
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`restartSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#restartSource-java.lang.String-java.lang.String-java.lang.String-int-).
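+
+For example, with an initialized `admin` client and the same hypothetical source:
+
+```java
+// Restart all instances of the source, or a single instance by ID.
+admin.sources().restartSource("public", "default", "my-source");
+admin.sources().restartSource("public", "default", "my-source", 0);
+```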
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Sink
+
+Restart a sink connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `restart` subcommand.
+
+```
+$ pulsar-admin sinks restart options
+```
+
+For more information, see [here](io-cli.md#restart-1).
+
+<!--REST API-->
+
+* Restart **all** instances of a sink connector.
+  
+  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/restart|operation/restartSink}
+
+* Restart a **specified** instance of a sink connector.
+  
+  Send a `POST` request to this endpoint: {@inject: endpoint|POST|/admin/v3/sinks/:tenant/:namespace/:sinkName/:instanceId/restart|operation/restartSink}
+
+<!--Java Admin API-->
+
+* Restart **all** instances of a sink connector.
+
+    ```java
+    void restartSink(String tenant,
+                     String namespace,
+                     String sink)
+              throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    | `tenant` | Tenant name
+    | `namespace` | Namespace name
+    | `sink` | Sink name
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-).
+
+* Restart a **specified** instance of a sink connector.
+
+    ```java
+    void restartSink(String tenant,
+                     String namespace,
+                     String sink,
+                     int instanceId)
+              throws PulsarAdminException
+    ```
+
+    **Parameter**
+
+    | Name | Description
+    |---|---
+    | `tenant` | Tenant name
+    | `namespace` | Namespace name
+    | `sink` | Sink name
+    | `instanceId` | Sink instance ID
+
+    **Exception**
+
+    |Name|Description|
+    |---|---
+    | `PulsarAdminException` | Unexpected error
+
+    For more information, see [`restartSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#restartSink-java.lang.String-java.lang.String-java.lang.String-int-).
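+
+For example, with an initialized `admin` client and the same hypothetical sink:
+
+```java
+// Restart all instances of the sink, or a single instance by ID.
+admin.sinks().restartSink("public", "default", "my-sink");
+admin.sinks().restartSink("public", "default", "my-sink", 0);
+```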
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## Delete a connector
+
+### `delete`
+
+You can delete a connector using **Admin CLI**, **REST API**, or **Java Admin API**.
+
+#### Source
+
+Delete a source connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `delete` subcommand.
+
+```
+$ pulsar-admin sources delete options
+```
+
+For more information, see [here](io-cli.md#delete).
+
+<!--REST API-->
+
+Delete a source connector.
+  
+Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sources/:tenant/:namespace/:sourceName|operation/deregisterSource}
+
+<!--Java Admin API-->
+
+Delete a source connector.
+
+```java
+void deleteSource(String tenant,
+                  String namespace,
+                  String source)
+           throws PulsarAdminException
+```
+
+**Parameter**
+
+| Name | Description
+|---|---
+| `tenant` | Tenant name
+| `namespace` | Namespace name
+| `source` | Source name
+
+**Exception**
+
+|Name|Description|
+|---|---
+|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
+|`PulsarAdminException.NotFoundException` | Source doesn't exist
+|`PulsarAdminException.PreconditionFailedException` | Precondition for the operation isn't met
+|`PulsarAdminException` | Unexpected error
+
+For more information, see [`deleteSource`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Source.html#deleteSource-java.lang.String-java.lang.String-java.lang.String-).
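+
+For example, with an initialized `admin` client (see the update sketch earlier) and a hypothetical source named `my-source`:
+
+```java
+// Delete the source and all of its instances.
+admin.sources().deleteSource("public", "default", "my-source");
+```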
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Sink
+
+Delete a sink connector.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the `delete` subcommand.
+
+```
+$ pulsar-admin sinks delete options
+```
+
+For more information, see [here](io-cli.md#delete-1).
+
+<!--REST API-->
+
+Delete a sink connector.
+  
+Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v3/sinks/:tenant/:namespace/:sinkName|operation/deregisterSink}
+
+<!--Java Admin API-->
+
+Delete a Pulsar sink connector.
+
+```java
+void deleteSink(String tenant,
+                String namespace,
+                String sink)
+         throws PulsarAdminException
+```
+
+**Parameter**
+
+| Name | Description
+|---|---
+| `tenant` | Tenant name
+| `namespace` | Namespace name
+| `sink` | Sink name
+
+**Exception**
+
+|Name|Description|
+|---|---
+|`PulsarAdminException.NotAuthorizedException`| You don't have the admin permission
+|`PulsarAdminException.NotFoundException` | Sink doesn't exist
+|`PulsarAdminException.PreconditionFailedException` | Precondition for the operation isn't met
+|`PulsarAdminException` | Unexpected error
+
+For more information, see [`deleteSink`](https://pulsar.apache.org/api/admin/org/apache/pulsar/client/admin/Sink.html#deleteSink-java.lang.String-java.lang.String-java.lang.String-).
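+
+For example, with an initialized `admin` client and a hypothetical sink named `my-sink`:
+
+```java
+// Delete the sink and all of its instances.
+admin.sinks().deleteSink("public", "default", "my-sink");
+```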
+
+<!--END_DOCUSAURUS_CODE_TABS-->
diff --git a/site2/website/versioned_docs/version-2.6.0/performance-pulsar-perf.md b/site2/website/versioned_docs/version-2.6.0/performance-pulsar-perf.md
new file mode 100644
index 0000000..4ebf2c5
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/performance-pulsar-perf.md
@@ -0,0 +1,182 @@
+---
+id: version-2.6.0-performance-pulsar-perf
+title: Pulsar Perf
+sidebar_label: Pulsar Perf
+original_id: performance-pulsar-perf
+---
+
+This document describes how to use Pulsar Perf for performance testing. For detailed information about performance tuning, see [here](https://streamnative.io/whitepaper/taking-a-deep-dive-into-apache-pulsar-architecture-for-performance-tuning/).
+
+## Pulsar Perf
+
+Pulsar Perf is a built-in performance test tool for Apache Pulsar. You can use it to test message writing or reading performance.
+
+### Produce messages
+
+This example shows how Pulsar Perf produces messages with default options. For all configuration options available for the `pulsar-perf produce` command, see [configuration options](#configuration-options-for-pulsar-perf-produce).
+
+```
+bin/pulsar-perf produce my-topic
+```
+
+After the command is executed, test data is continuously output to the console.
+
+**Output**
+
+```
+19:53:31.459 [pulsar-perf-producer-exec-1-1] INFO  org.apache.pulsar.testclient.PerformanceProducer - Created 1 producers
+19:53:31.482 [pulsar-timer-5-1] WARN  com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider
+19:53:40.861 [main] INFO  org.apache.pulsar.testclient.PerformanceProducer - Throughput produced:     93.7  msg/s ---      0.7 Mbit/s --- failure      0.0 msg/s --- Latency: mean:   3.575 ms - med:   3.460 - 95pct:   4.790 - 99pct:   5.308 - 99.9pct:   5.834 - 99.99pct:   6.609 - Max:   6.609
+19:53:50.909 [main] INFO  org.apache.pulsar.testclient.PerformanceProducer - Throughput produced:    100.0  msg/s ---      0.8 Mbit/s --- failure      0.0 msg/s --- Latency: mean:   3.437 ms - med:   3.328 - 95pct:   4.656 - 99pct:   5.071 - 99.9pct:   5.519 - 99.99pct:   5.588 - Max:   5.588
+19:54:00.926 [main] INFO  org.apache.pulsar.testclient.PerformanceProducer - Throughput produced:    100.0  msg/s ---      0.8 Mbit/s --- failure      0.0 msg/s --- Latency: mean:   3.376 ms - med:   3.276 - 95pct:   4.520 - 99pct:   4.939 - 99.9pct:   5.440 - 99.99pct:   5.490 - Max:   5.490
+19:54:10.940 [main] INFO  org.apache.pulsar.testclient.PerformanceProducer - Throughput produced:    100.0  msg/s ---      0.8 Mbit/s --- failure      0.0 msg/s --- Latency: mean:   3.298 ms - med:   3.220 - 95pct:   4.474 - 99pct:   4.926 - 99.9pct:   5.645 - 99.99pct:   5.654 - Max:   5.654
+19:54:20.956 [main] INFO  org.apache.pulsar.testclient.PerformanceProducer - Throughput produced:    100.1  msg/s ---      0.8 Mbit/s --- failure      0.0 msg/s --- Latency: mean:   3.308 ms - med:   3.199 - 95pct:   4.532 - 99pct:   4.871 - 99.9pct:   5.291 - 99.99pct:   5.323 - Max:   5.323
+19:54:30.972 [main] INFO  org.apache.pulsar.testclient.PerformanceProducer - Throughput produced:    100.0  msg/s ---      0.8 Mbit/s --- failure      0.0 msg/s --- Latency: mean:   3.249 ms - med:   3.144 - 95pct:   4.437 - 99pct:   4.970 - 99.9pct:   5.329 - 99.99pct:   5.414 - Max:   5.414
+19:54:40.987 [main] INFO  org.apache.pulsar.testclient.PerformanceProducer - Throughput produced:    100.0  msg/s ---      0.8 Mbit/s --- failure      0.0 msg/s --- Latency: mean:   3.435 ms - med:   3.361 - 95pct:   4.772 - 99pct:   5.150 - 99.9pct:   5.373 - 99.99pct:   5.837 - Max:   5.837
+^C19:54:44.325 [Thread-1] INFO  org.apache.pulsar.testclient.PerformanceProducer - Aggregated throughput stats --- 7286 records sent --- 99.140 msg/s --- 0.775 Mbit/s
+19:54:44.336 [Thread-1] INFO  org.apache.pulsar.testclient.PerformanceProducer - Aggregated latency stats --- Latency: mean:   3.383 ms - med:   3.293 - 95pct:   4.610 - 99pct:   5.059 - 99.9pct:   5.588 - 99.99pct:   5.837 - 99.999pct:   6.609 - Max:   6.609
+```
+
+From the above test data, you can get the throughput statistics and the write latency statistics. The aggregated statistics are printed when Pulsar Perf is stopped; press **Ctrl**+**C** to stop it. After Pulsar Perf is stopped, a test result file in [HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) format appears in your current directory, with a name like `perf-producer-1589370810837.hgrm`. You can also check the test result through the [HdrHistogram Plotter](#hdrhistogram-plotter).
+
+#### Configuration options for `pulsar-perf produce`
+
+You can list all options by executing the `bin/pulsar-perf produce -h` command, and modify these options as required.
+
+The following table lists configuration options available for the `pulsar-perf produce` command.
+
+| Option | Description | Default value|
+|----|----|----|
+| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`. | N/A |
+| auth_plugin | Set the authentication plugin class name. | N/A |
+| batch-max-bytes | Set the maximum number of bytes for each batch. | 4194304 |
+| batch-max-messages | Set the maximum number of messages for each batch. | 1000 |
+| batch-time-window | Set a window for a batch of messages. | 1 ms |
+| compression | Compress the message payload. | N/A |
+| conf-file | Set the configuration file. | N/A |
+| delay | Mark messages with a given delay. | 0s |
+| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A |
+| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A |
+| exit-on-failure | Configure whether to exit from the process on publish failure. | false |
+| help | Show the help message. | false |
+| max-connections | Set the maximum number of TCP connections to a single broker. | 100 |
+| max-outstanding | Set the maximum number of outstanding messages. | 1000 |
+| max-outstanding-across-partitions | Set the maximum number of outstanding messages across partitions. | 50000 |
+| num-messages | Set the number of messages to be published in total. If it is set to 0, it keeps publishing messages. | 0 |
+| num-producers | Set the number of producers for each topic. | 1 |
+| num-test-threads |  Set the number of test threads. | 1 |
+| num-topic | Set the number of topics. | 1 |
+| payload-delimiter | Set the delimiter used to split lines when using payload from a file. | \n |
+| payload-file | Use payloads from a UTF-8 encoded text file; a payload is randomly selected for each message published. | N/A |
+| rate | Set the publish rate of messages across topics. | 100 |
+| service-url | Set the Pulsar service URL. | |
+| size | Set the message size. | 1024 bytes |
+| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics are disabled. | 0 |
+| test-duration | Set the test duration. If it is set to 0, the test keeps publishing messages. | 0s |
+| trust-cert-file | Set the path for the trusted TLS certificate file. | <empty string> |
+| warmup-time | Set the warm-up time. | 1s |
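+
+For example, the following command publishes 512-byte messages at a target rate of 1000 msg/s for 120 seconds. The option names are the long forms of the entries in the table above; the values are illustrative only.
+
+```
+bin/pulsar-perf produce my-topic --rate 1000 --size 512 --test-duration 120
+```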
+
+### Consume messages
+
+This example shows how Pulsar Perf consumes messages with default options. For all configuration options available for the `pulsar-perf consume` command, see [configuration options](#configuration-options-for-pulsar-perf-consume).
+
+```
+bin/pulsar-perf consume my-topic
+```
+
+After the command is executed, test data is continuously output to the console.
+
+**Output**
+
+```
+20:35:37.071 [main] INFO  org.apache.pulsar.testclient.PerformanceConsumer - Start receiving from 1 consumers on 1 topics
+20:35:41.150 [pulsar-client-io-1-9] WARN  com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider
+20:35:47.092 [main] INFO  org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 59.572  msg/s -- 0.465 Mbit/s --- Latency: mean: 11.298 ms - med: 10 - 95pct: 15 - 99pct: 98 - 99.9pct: 137 - 99.99pct: 152 - Max: 152
+20:35:57.104 [main] INFO  org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.958  msg/s -- 0.781 Mbit/s --- Latency: mean: 9.176 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 18 - Max: 18
+20:36:07.115 [main] INFO  org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.006  msg/s -- 0.781 Mbit/s --- Latency: mean: 9.316 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
+20:36:17.125 [main] INFO  org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 100.085  msg/s -- 0.782 Mbit/s --- Latency: mean: 9.327 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
+20:36:27.136 [main] INFO  org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.900  msg/s -- 0.780 Mbit/s --- Latency: mean: 9.404 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
+20:36:37.147 [main] INFO  org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 99.985  msg/s -- 0.781 Mbit/s --- Latency: mean: 8.998 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 17 - 99.99pct: 17 - Max: 17
+^C20:36:42.755 [Thread-1] INFO  org.apache.pulsar.testclient.PerformanceConsumer - Aggregated throughput stats --- 6051 records received --- 92.125 msg/s --- 0.720 Mbit/s
+20:36:42.759 [Thread-1] INFO  org.apache.pulsar.testclient.PerformanceConsumer - Aggregated latency stats --- Latency: mean: 9.422 ms - med: 9 - 95pct: 15 - 99pct: 16 - 99.9pct: 98 - 99.99pct: 137 - 99.999pct: 152 - Max: 152
+```
+
+From the output test data, you can get the throughput statistics and the end-to-end latency statistics. The aggregated statistics are printed after Pulsar Perf is stopped; press **Ctrl**+**C** to stop it.
+
+#### Configuration options for `pulsar-perf consume`
+
+You can list all options by executing the `bin/pulsar-perf consume -h` command, and modify these options as required.
+
+The following table lists configuration options available for the `pulsar-perf consume` command.
+
+| Option | Description | Default value |
+|----|----|----|
+| acks-delay-millis | Set the acknowledgment grouping delay in milliseconds. | 100 ms |
+| auth-params | Set the authentication parameters, whose format is determined by the implementation of the `configure` method in the authentication plugin class, such as `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`. | N/A |
+| auth_plugin | Set the authentication plugin class name. | N/A |
+| conf-file | Set the configuration file. | N/A |
+| encryption-key-name | Set the name of the public key used to encrypt the payload. | N/A |
+| encryption-key-value-file | Set the file which contains the public key used to encrypt the payload. | N/A |
+| help | Show the help message. | false |
+| max-connections | Set the maximum number of TCP connections to a single broker. | 100 |
+| num-consumers | Set the number of consumers for each topic. | 1 |
+| num-topic | Set the number of topics. | 1 |
+| rate | Simulate a slow message consumer (rate in msg/s). | 0.0 |
+| receiver-queue-size | Set the size of the receiver queue. | 1000 |
+| replicated | Configure whether the subscription status should be replicated. | false |
+| service-url | Set the Pulsar service URL. | |
+| stats-interval-seconds | Set the statistics interval. If it is set to 0, statistics are disabled. | 0 |
+| subscriber-name | Set the subscriber name prefix. | sub |
+| subscription-type | Set the subscription type. <li> Exclusive <li> Shared <li> Failover <li> Key_Shared | Exclusive |
+| trust-cert-file | Set the path for the trusted TLS certificate file. | <empty string> |
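+
+For example, the following command attaches two `Shared`-subscription consumers to the topic and caps consumption at 500 msg/s. Option names are the long forms of the entries in the table above; the values are illustrative only.
+
+```
+bin/pulsar-perf consume my-topic --subscription-type Shared --num-consumers 2 --rate 500
+```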
+
+### Configurations
+
+By default, Pulsar Perf uses `conf/client.conf` as its configuration file and `conf/log4j2.yaml` as its Log4j configuration. If you want to connect to another Pulsar cluster, update the `brokerServiceUrl` in the client configuration.
+
+You can use the following commands to change the configuration file and the Log4j configuration file.
+
+```
+export PULSAR_CLIENT_CONF=<your-config-file>
+export PULSAR_LOG_CONF=<your-log-config-file>
+```
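+
+For example, a minimal client configuration that points the tool at a different cluster might look like the following; the host name is a placeholder.
+
+```
+# <your-config-file>
+brokerServiceUrl=pulsar://other-cluster:6650
+webServiceUrl=http://other-cluster:8080
+```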
+
+In addition, you can configure JVM options through the following environment variable:
+
+```
+export PULSAR_EXTRA_OPTS='-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g'
+```
+
+## HdrHistogram Plotter
+
+The [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) is a visualization tool for checking Pulsar Perf test results, which makes it easier to observe the test results.
+
+To check test results through the HdrHistogram Plotter, follow these steps:
+
+1. Clone the HdrHistogram repository from GitHub to the local.
+
+    ```
+    git clone https://github.com/HdrHistogram/HdrHistogram.git
+    ```
+
+2. Switch to the HdrHistogram folder.
+
+    ```
+    cd HdrHistogram
+    ```
+
+3. Install the HdrHistogram Plotter.
+
+    ```
+    mvn clean install -DskipTests
+    ```
... 5121 lines suppressed ...