Posted to commits@pulsar.apache.org by li...@apache.org on 2021/11/25 02:12:55 UTC

[pulsar] branch master updated: [website][upgrade]feat: website upgrade / docs migration - 2.6.0 / development/reference (#12935)

This is an automated email from the ASF dual-hosted git repository.

liuyu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new f5f9552  [website][upgrade]feat: website upgrade / docs migration - 2.6.0 / development/reference (#12935)
f5f9552 is described below

commit f5f9552d200cf528bcc8502e8329e3a4a97ddcc6
Author: Li Li <ur...@gmail.com>
AuthorDate: Thu Nov 25 10:11:54 2021 +0800

    [website][upgrade]feat: website upgrade / docs migration - 2.6.0 / development/reference (#12935)
    
    * [website][upgrade]feat: website upgrade / docs migration - 2.6.0 / get started/concepts/schema
    
    Signed-off-by: LiLi <ur...@gmail.com>
    
    * [website][upgrade]feat: website upgrade / docs migration - 2.6.0 / functions/io/sql
    
    Signed-off-by: LiLi <ur...@gmail.com>
    
    * [website][upgrade]feat: website upgrade / docs migration - 2.6.0 / helm/deploy/administration
    
    Signed-off-by: LiLi <ur...@gmail.com>
    
    * [website][upgrade]feat: website upgrade / docs migration - 2.6.0 / client/performance/secrurity
    
    Signed-off-by: LiLi <ur...@gmail.com>
    
    * [website][upgrade]feat: website upgrade / docs migration - 2.6.0 / adaptors/admin/cookbooks
    
    Signed-off-by: LiLi <ur...@gmail.com>
    
    * [website][upgrade]feat: website upgrade / docs migration - 2.6.0 / reference
    
    Signed-off-by: LiLi <ur...@gmail.com>
    
    Co-authored-by: Anonymitaet <50...@users.noreply.github.com>
---
 .../version-2.6.0/admin-api-schemas.md             |    7 +
 .../version-2.6.0/administration-dashboard.md      |   76 +
 .../version-2.6.0/client-libraries-cgo.md          |  579 ++++
 .../versioned_docs/version-2.6.0/develop-schema.md |   62 +
 .../version-2.6.0/developing-binary-protocol.md    |  581 ++++
 .../versioned_docs/version-2.6.0/developing-cpp.md |  114 +
 .../version-2.6.0/developing-load-manager.md       |  227 ++
 .../version-2.6.0/developing-tools.md              |  111 +
 .../version-2.6.0/functions-metrics.md             |    7 +
 .../getting-started-concepts-and-architecture.md   |   16 +
 .../version-2.6.0/io-aerospike-sink.md             |   26 +
 .../version-2.6.0/io-canal-source.md               |  235 ++
 .../version-2.6.0/io-cassandra-sink.md             |   57 +
 .../version-2.6.0/io-cdc-debezium.md               |  543 ++++
 .../version-2.6.0/io-debezium-source.md            |  564 ++++
 .../version-2.6.0/io-dynamodb-source.md            |   80 +
 .../version-2.6.0/io-elasticsearch-sink.md         |  173 ++
 .../versioned_docs/version-2.6.0/io-file-source.md |  160 +
 .../versioned_docs/version-2.6.0/io-flume-sink.md  |   56 +
 .../version-2.6.0/io-flume-source.md               |   56 +
 .../versioned_docs/version-2.6.0/io-hbase-sink.md  |   67 +
 .../versioned_docs/version-2.6.0/io-hdfs2-sink.md  |   59 +
 .../versioned_docs/version-2.6.0/io-hdfs3-sink.md  |   59 +
 .../version-2.6.0/io-influxdb-sink.md              |  119 +
 .../versioned_docs/version-2.6.0/io-jdbc-sink.md   |  157 +
 .../versioned_docs/version-2.6.0/io-kafka-sink.md  |   72 +
 .../version-2.6.0/io-kafka-source.md               |  197 ++
 .../version-2.6.0/io-kinesis-sink.md               |   80 +
 .../version-2.6.0/io-kinesis-source.md             |   81 +
 .../versioned_docs/version-2.6.0/io-mongo-sink.md  |   57 +
 .../version-2.6.0/io-netty-source.md               |  241 ++
 .../version-2.6.0/io-rabbitmq-sink.md              |   85 +
 .../version-2.6.0/io-rabbitmq-source.md            |   82 +
 .../versioned_docs/version-2.6.0/io-redis-sink.md  |   74 +
 .../versioned_docs/version-2.6.0/io-solr-sink.md   |   65 +
 .../version-2.6.0/io-twitter-source.md             |   28 +
 .../versioned_docs/version-2.6.0/io-twitter.md     |    7 +
 .../version-2.6.0/reference-cli-tools.md           |  923 ++++++
 .../version-2.6.0/reference-configuration.md       |  550 ++++
 .../version-2.6.0/reference-connector-admin.md     |   11 +
 .../version-2.6.0/reference-metrics.md             |  444 +++
 .../version-2.6.0/reference-pulsar-admin.md        | 3084 ++++++++++++++++++++
 .../version-2.6.0/reference-terminology.md         |  168 ++
 .../version-2.6.0/security-token-admin.md          |  183 ++
 .../versioned_sidebars/version-2.6.0-sidebars.json |   52 +
 .../versioned_docs/version-2.6.0/develop-schema.md |   58 +
 46 files changed, 10733 insertions(+)

diff --git a/site2/website-next/versioned_docs/version-2.6.0/admin-api-schemas.md b/site2/website-next/versioned_docs/version-2.6.0/admin-api-schemas.md
new file mode 100644
index 0000000..9ffe21f
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/admin-api-schemas.md
@@ -0,0 +1,7 @@
+---
+id: admin-api-schemas
+title: Managing Schemas
+sidebar_label: "Schemas"
+original_id: admin-api-schemas
+---
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/administration-dashboard.md b/site2/website-next/versioned_docs/version-2.6.0/administration-dashboard.md
new file mode 100644
index 0000000..514b076
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/administration-dashboard.md
@@ -0,0 +1,76 @@
+---
+id: administration-dashboard
+title: Pulsar dashboard
+sidebar_label: "Dashboard"
+original_id: administration-dashboard
+---
+
+:::note
+
+Pulsar dashboard is deprecated. If you want to manage and monitor the stats of your topics, use [Pulsar Manager](administration-pulsar-manager). 
+
+:::
+
+Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form.
+
+The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database.
+
+You can use the [Django](https://www.djangoproject.com) web app to render the collected data.
+
+## Install
+
+The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container.
+
+```shell
+
+$ SERVICE_URL=http://broker.example.com:8080/
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  apachepulsar/pulsar-dashboard:@pulsar:version@
+
+```
+
+You can find the {@inject: github:Dockerfile:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well:
+
+```shell
+
+$ docker build -t apachepulsar/pulsar-dashboard dashboard
+
+```
+
+If token authentication is enabled:
+> The provided token should have super-user access. 
+
+```shell
+
+$ SERVICE_URL=http://broker.example.com:8080/
+$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  -e JWT_TOKEN=$JWT_TOKEN \
+  apachepulsar/pulsar-dashboard
+
+```
+
+ 
+You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from which it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://<broker-ip>:8080` by default, where `<broker-ip>` is the IP address or hostname of the machine running Pulsar standalone. The IP address or hostname should be accessible from the Docker container running the dashboard.
+
+Once the Docker container runs, the web dashboard is accessible via `localhost` or whichever host Docker uses.
+
+> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container.
+
+If the Pulsar service runs in standalone mode on `localhost`, the `SERVICE_URL` has to
+be the IP address of the machine.
+
+Similarly, since Pulsar standalone advertises itself as `localhost` by default, you need to
+explicitly set the advertised address to the host IP address. For example:
+
+```shell
+
+$ bin/pulsar standalone --advertised-address 1.2.3.4
+
+```
+
+### Known issues
+
+Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported.
diff --git a/site2/website-next/versioned_docs/version-2.6.0/client-libraries-cgo.md b/site2/website-next/versioned_docs/version-2.6.0/client-libraries-cgo.md
new file mode 100644
index 0000000..c79f7bb
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/client-libraries-cgo.md
@@ -0,0 +1,579 @@
+---
+id: client-libraries-cgo
+title: Pulsar CGo client
+sidebar_label: "CGo(deprecated)"
+original_id: client-libraries-cgo
+---
+
+You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
+
+Currently, the following Go clients are maintained in two repositories.
+
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library |
+| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+
+> **API docs available as well**  
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
+
+## Installation
+
+### Requirements
+
+The Pulsar Go client library is based on the C++ client library. Follow
+the instructions for the [C++ library](client-libraries-cpp) to install the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb), or [Homebrew packages](client-libraries-cpp.md#macos).
+
+### Install go package
+
+> **Compatibility Warning**  
+> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
+
+You can install the `pulsar` library locally using `go get`.  Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client.  You'll need a C++ client library that matches master.
+
+```bash
+
+$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
+
+```
+
+Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
+
+```bash
+
+$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@
+
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+
+import "github.com/apache/pulsar/pulsar-client-go/pulsar"
+
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+
+pulsar://localhost:6650
+
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+
+pulsar://pulsar.us-west.example.com:6650
+
+```
+
+If you're using [TLS](security-tls-authentication) authentication, the URL will look something like this:
+
+```http
+
+pulsar+ssl://pulsar.us-west.example.com:6651
+
+```
+
+## Create a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+```go
+
+import (
+    "log"
+    "runtime"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+        OperationTimeoutSeconds: 5,
+        MessageListenerThreads: runtime.NumCPU(),
+    })
+
+    if err != nil {
+        log.Fatalf("Could not instantiate Pulsar client: %v", err)
+    }
+}
+
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`URL` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info |
+`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
+`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
+`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
+`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
+`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
+`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
+`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
+`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
+`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic: "my-topic",
+})
+
+if err != nil {
+    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
+}
+
+defer producer.Close()
+
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Hello, Pulsar"),
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Producer could not send message: %v", err)
+}
+
+```
+
+> **Blocking operation**  
+> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
+
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
+`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call blocks until the message is successfully acknowledged by the Pulsar broker. | `(MessageID, error)`
+`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
+`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message asynchronously. The callback reports back the message ID of the published message and any error that occurred while publishing. | 
+`LastSequenceID()` | Gets the last sequence ID published by this producer. This represents either the automatically assigned or custom sequence ID (set on the `ProducerMessage`) that was published and acknowledged by the broker. | `int64`
+`Flush()`| Flushes all the messages buffered in the client and waits until all messages have been successfully persisted. | `error`
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
+`Schema()` | Fetches the schema used by the producer | `Schema`
+
+Here's a more involved example usage of a producer:
+
+```go
+
+import (
+    "context"
+    "fmt"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client to instantiate a producer
+    producer, err := client.CreateProducer(pulsar.ProducerOptions{
+        Topic: "my-topic",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    ctx := context.Background()
+
+    // Send 10 messages synchronously and 10 messages asynchronously
+    for i := 0; i < 10; i++ {
+        // Create a message
+        msg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("message-%d", i)),
+        }
+
+        // Attempt to send the message
+        if err := producer.Send(ctx, msg); err != nil {
+            log.Fatal(err)
+        }
+
+        // Create a different message to send asynchronously
+        asyncMsg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
+        }
+
+        // Attempt to send the message asynchronously and handle the response
+        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
+            if err != nil { log.Fatal(err) }
+
+            fmt.Printf("the %s successfully published", string(msg.Payload))
+        })
+    }
+}
+
+```
+
+### Producer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
+`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
+`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats | 
+`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication) feature. | 30 seconds
+`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
+`MaxPendingMessagesAcrossPartitions` | Set the maximum number of pending messages across all the partitions. This setting is used to lower the per-partition limit (`MaxPendingMessages`) if the total exceeds the configured value.|
+`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
+`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
+`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `puls [...]
+`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
+`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as arguments and returns the index of the partition to publish to, i.e. a function with the signature `func(Message, TopicMetadata) int`. |
+`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
+`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched, if batching is enabled. If set to a non-zero value, messages are queued until either this time interval elapses or the `BatchingMaxMessages` threshold is reached. | 1ms
+`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached or the batch interval has elapsed. | 1000
+
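+Here is a minimal sketch of a producer that combines several of the options above. It is illustrative only: it assumes the `client` created earlier and an imported `time` package, treats duration-valued options as `time.Duration`, and uses placeholder names and values.
+
+```go
+
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic:                   "my-topic",
+    Name:                    "my-producer",
+    SendTimeout:             30 * time.Second,      // fail sends not acked within 30s
+    BlockIfQueueFull:        true,                  // block rather than fail when the queue is full
+    Batching:                true,                  // enable automatic batching
+    BatchingMaxPublishDelay: 10 * time.Millisecond, // flush a batch at least every 10ms
+    BatchingMaxMessages:     100,                   // or once 100 messages are queued
+})
+
+if err != nil {
+    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
+}
+
+defer producer.Close()
+
+```
+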
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+
+msgChannel := make(chan pulsar.ConsumerMessage)
+
+consumerOpts := pulsar.ConsumerOptions{
+    Topic:            "my-topic",
+    SubscriptionName: "my-subscription-1",
+    Type:             pulsar.Exclusive,
+    MessageChannel:   msgChannel,
+}
+
+consumer, err := client.Subscribe(consumerOpts)
+
+if err != nil {
+    log.Fatalf("Could not establish subscription: %v", err)
+}
+
+defer consumer.Close()
+
+for cm := range msgChannel {
+    msg := cm.Message
+
+    fmt.Printf("Message ID: %s", msg.ID())
+    fmt.Printf("Message value: %s", string(msg.Payload()))
+
+    consumer.Ack(msg)
+}
+
+```
+
+> **Blocking operation**  
+> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
+
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
+`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
+`AckCumulativeID(MessageID)` | Acknowledges the reception of all the messages in the stream up to (and including) the provided message, identified by its message ID. This method will block until the acknowledgment has been sent to the broker. After that, the messages will not be redelivered to this consumer. | `error`
+`Nack(Message)` | Acknowledges the failure to process a single message. The message will be redelivered later. | `error`
+`NackID(MessageID)` | Acknowledges the failure to process a single message, identified by its message ID. The message will be redelivered later. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
+`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
+`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error
+
+#### Receive example
+
+Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
+
+```go
+
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client object to instantiate a consumer
+    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+        Topic:            "my-golang-topic",
+        SubscriptionName: "sub-1",
+        Type: pulsar.Exclusive,
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    defer consumer.Close()
+
+    ctx := context.Background()
+
+    // Listen indefinitely on the topic
+    for {
+        msg, err := consumer.Receive(ctx)
+        if err != nil { log.Fatal(err) }
+
+        // Do something with the message
+        err = processMessage(msg)
+
+        if err == nil {
+            // Message processed successfully
+            consumer.Ack(msg)
+        } else {
+            // Failed to process messages
+            consumer.Nack(msg)
+        }
+    }
+}
+
+```
+
+### Consumer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
+`Topics` | Specify a list of topics this consumer subscribes to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
+`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
+`SubscriptionName` | The subscription name for this consumer |
+`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats|
+`Name` | The name of the consumer |
+`AckTimeout` | Set the timeout for unacknowledged messages. A value of 0 disables the ack timeout. | 0
+`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
+`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
+`SubscriptionInitPos` | The initial position at which the cursor is set when subscribing | Latest
+`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
+`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
+`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
+
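+Here is a minimal sketch of a consumer that combines several of the options above. It is illustrative only: it assumes the `client` created earlier and an imported `time` package, treats `AckTimeout` as a `time.Duration`, and uses placeholder names and values.
+
+```go
+
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+    Topic:             "my-topic",
+    SubscriptionName:  "my-shared-subscription",
+    Type:              pulsar.Shared, // multiple consumers share the subscription
+    AckTimeout:        time.Minute,   // redeliver messages not acked within 1 minute
+    ReceiverQueueSize: 1000,          // messages prefetched before Receive is called
+})
+
+if err != nil {
+    log.Fatalf("Could not establish subscription: %v", err)
+}
+
+defer consumer.Close()
+
+```
+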
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic: "my-golang-topic",
+    StartMessageID: pulsar.LatestMessage,
+})
+
+```
+
+> **Blocking operation**  
+> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
+
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`HasNext()` | Check if there is any message available to read from the current position | `(bool, error)`
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+
+#### "Next" example
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatalf("Could not create client: %v", err) }
+
+    // Use the client to instantiate a reader
+    reader, err := client.CreateReader(pulsar.ReaderOptions{
+        Topic:          "my-golang-topic",
+        StartMessageID: pulsar.EarliestMessage,
+    })
+
+    if err != nil { log.Fatalf("Could not create reader: %v", err) }
+
+    defer reader.Close()
+
+    ctx := context.Background()
+
+    // Listen on the topic for incoming messages
+    for {
+        msg, err := reader.Next(ctx)
+        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
+
+        // Process the message
+    }
+}
+
+```
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+
+// Read the last saved message ID from an external store as a byte slice.
+// readLastSavedMessageID is a placeholder for your own persistence logic.
+lastSavedId := readLastSavedMessageID()
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:          "my-golang-topic",
+    StartMessageID: DeserializeMessageID(lastSavedId),
+})
+
+```
+
+### Reader configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages 
+`Name` | The name of the reader 
+`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
+`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
+`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
+`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
+
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to publish on Pulsar topics. Here's an example message:
+
+```go
+
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Here is some message data"),
+    Key: "message-key",
+    Properties: map[string]string{
+        "foo": "bar",
+    },
+    EventTime: time.Now(),
+    ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Could not publish message due to: %v", err)
+}
+
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Value` | The `Value` and `Payload` fields are mutually exclusive; use `Value` (of type `interface{}`) for messages with schema.
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+`SequenceID` | Set the sequence id to assign to the current message
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+
+opts := pulsar.ClientOptions{
+    URL: "pulsar+ssl://my-cluster.com:6651",
+    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
+    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+
+```
+
+## Schema
+
+This example shows how to create a producer and consumer with schema.
+
+```go
+
+var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
+    		"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
+jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
+// create producer
+producer, err := client.CreateProducerWithSchema(ProducerOptions{
+	Topic: "jsonTopic",
+}, jsonSchema)
+err = producer.Send(context.Background(), ProducerMessage{
+	Value: &testJson{
+		ID:   100,
+		Name: "pulsar",
+	},
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+//create consumer
+var s testJson
+consumerJS := NewJsonSchema(exampleSchemaDef, nil)
+consumer, err := client.SubscribeWithSchema(ConsumerOptions{
+	Topic:            "jsonTopic",
+	SubscriptionName: "sub-2",
+}, consumerJS)
+if err != nil {
+	log.Fatal(err)
+}
+msg, err := consumer.Receive(context.Background())
+if err != nil {
+	log.Fatal(err)
+}
+err = msg.GetValue(&s)
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(s.ID) // output: 100
+fmt.Println(s.Name) // output: pulsar
+defer consumer.Close()
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/develop-schema.md b/site2/website-next/versioned_docs/version-2.6.0/develop-schema.md
new file mode 100644
index 0000000..e71c04e
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/develop-schema.md
@@ -0,0 +1,62 @@
+---
+id: develop-schema
+title: Custom schema storage
+sidebar_label: "Custom schema storage"
+original_id: develop-schema
+---
+
+By default, Pulsar stores data type [schemas](concepts-schema-registry) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation.
+
+In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface).
+
+## SchemaStorage interface
+
+The `SchemaStorage` interface has the following methods:
+
+```java
+
+public interface SchemaStorage {
+    // How schemas are updated
+    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);
+
+    // How schemas are fetched from storage
+    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);
+
+    // How schemas are deleted
+    CompletableFuture<SchemaVersion> delete(String key);
+
+    // Utility method for converting a schema version byte array to a SchemaVersion object
+    SchemaVersion versionFromBytes(byte[] version);
+
+    // Startup behavior for the schema storage client
+    void start() throws Exception;
+
+    // Shutdown behavior for the schema storage client
+    void close() throws Exception;
+}
+
+```
+
+> For a full-fledged example schema storage implementation, see the [`BookKeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
+
+## SchemaStorageFactory interface 
+
+```java
+
+public interface SchemaStorageFactory {
+    @NotNull
+    SchemaStorage create(PulsarService pulsar) throws Exception;
+}
+
+```
+
+> For a full-fledged example schema storage factory implementation, see the [`BookKeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.
+
+## Deployment
+
+In order to use your custom schema storage implementation, you'll need to:
+
+1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
+1. Add that jar to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar).
+1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation).
+1. Start up Pulsar.
diff --git a/site2/website-next/versioned_docs/version-2.6.0/developing-binary-protocol.md b/site2/website-next/versioned_docs/version-2.6.0/developing-binary-protocol.md
new file mode 100644
index 0000000..31b5b32
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/developing-binary-protocol.md
@@ -0,0 +1,581 @@
+---
+id: develop-binary-protocol
+title: Pulsar binary protocol specification
+sidebar_label: "Binary protocol"
+original_id: develop-binary-protocol
+---
+
+Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency.
+
+Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below.
+
+> ### Connection sharing
+> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction.
+
+All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand.
+
+## Framing
+
+Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB.
+
+The Pulsar protocol allows for two types of commands:
+
+1. **Simple commands** that do not carry a message payload.
+2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers.
+
+> Message payloads are passed in raw format rather than protobuf format for efficiency reasons.
+
+### Simple commands
+
+Simple (payload-free) commands have this basic structure:
+
+| Component   | Description                                                                             | Size (in bytes) |
+|:------------|:----------------------------------------------------------------------------------------|:----------------|
+| totalSize   | The size of the frame, counting everything that comes after it (in bytes)               | 4               |
+| commandSize | The size of the protobuf-serialized command                                             | 4               |
+| message     | The serialized command, as a binary protobuf message                                    |                 |
+
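+To make the framing concrete, here is a short Go sketch (illustrative only, not part of any Pulsar client) that reads one simple command frame from a connection, following the 4-byte big-endian size fields described above:
+
+```go
+
+import (
+    "encoding/binary"
+    "io"
+)
+
+// readSimpleFrame reads one simple (payload-free) command frame: a 4-byte
+// big-endian totalSize, a 4-byte commandSize, then the serialized command.
+func readSimpleFrame(conn io.Reader) ([]byte, error) {
+    var sizeBuf [4]byte
+    if _, err := io.ReadFull(conn, sizeBuf[:]); err != nil {
+        return nil, err
+    }
+    totalSize := binary.BigEndian.Uint32(sizeBuf[:]) // size of everything after this field
+
+    frame := make([]byte, totalSize)
+    if _, err := io.ReadFull(conn, frame); err != nil {
+        return nil, err
+    }
+
+    commandSize := binary.BigEndian.Uint32(frame[:4])
+    command := frame[4 : 4+commandSize] // protobuf-encoded BaseCommand bytes
+    return command, nil
+}
+
+```
+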
+### Payload commands
+
+Payload commands have this basic structure:
+
+| Component    | Description                                                                                 | Size (in bytes) |
+|:-------------|:--------------------------------------------------------------------------------------------|:----------------|
+| totalSize    | The size of the frame, counting everything that comes after it (in bytes)                   | 4               |
+| commandSize  | The size of the protobuf-serialized command                                                 | 4               |
+| message      | The serialized command, as a binary protobuf message                                        |                 |
+| magicNumber  | A 2-byte byte array (`0x0e01`) identifying the current format                               | 2               |
+| checksum     | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4               |
+| metadataSize | The size of the message [metadata](#message-metadata)                                       | 4               |
+| metadata     | The message [metadata](#message-metadata) stored as a binary protobuf message               |                 |
+| payload      | Anything left in the frame is considered the payload and can include any sequence of bytes  |                 |
+
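+The checksum uses the CRC32-C (Castagnoli) polynomial and covers the bytes that follow it: metadataSize, metadata, and payload. As a sketch, a client written in Go could compute it with the standard library:
+
+```go
+
+import "hash/crc32"
+
+// crc32cTable is the Castagnoli (CRC32-C) table required by the framing format.
+var crc32cTable = crc32.MakeTable(crc32.Castagnoli)
+
+// frameChecksum computes the checksum over everything that comes after the
+// checksum field in the frame: metadataSize + metadata + payload.
+func frameChecksum(rest []byte) uint32 {
+    return crc32.Checksum(rest, crc32cTable)
+}
+
+```
+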
+## Message metadata
+
+Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer.
+
+| Field                                | Description                                                                                                                                                                                                                                               |
+|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `producer_name`                      | The name of the producer that published the message                                                                                                                                                                                         |
+| `sequence_id`                        | The sequence ID of the message, assigned by producer                                                                                                                                                                                        |
+| `publish_time`                       | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC)                                                                                                                                                    |
+| `properties`                         | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
+| `replicated_from` *(optional)*       | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published                                                                                                             |
+| `partition_key` *(optional)*         | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose                                                                                                                          |
+| `compression` *(optional)*           | Signals that payload has been compressed and with which compression library                                                                                                                                                                               |
+| `uncompressed_size` *(optional)*     | If compression is used, the producer must fill the uncompressed size field with the original payload size                                                                                                                                                 |
+| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch                                                                                                                   |
+
+### Batch messages
+
+When using batch messages, the payload contains a list of entries,
+each with its own metadata, defined by the `SingleMessageMetadata`
+object.
+
+
+For a single batch, the payload format will look like this:
+
+
+| Field         | Description                                                 |
+|:--------------|:------------------------------------------------------------|
+| metadataSizeN | The size of the single message metadata serialized Protobuf |
+| metadataN     | Single message metadata                                     |
+| payloadN      | Message payload passed by application                       |
+
+Each metadata field looks like this:
+
+| Field                      | Description                                             |
+|:---------------------------|:--------------------------------------------------------|
+| properties                 | Application-defined properties                          |
+| partition key *(optional)* | Key to indicate the hashing to a particular partition   |
+| payload_size               | Size of the payload for the single message in the batch |
+
+When compression is enabled, the whole batch will be compressed at once.
+
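+As an illustrative sketch of walking this batch format (`decodePayloadSize` is a hypothetical placeholder for decoding the `SingleMessageMetadata` protobuf and reading its `payload_size` field):
+
+```go
+
+import "encoding/binary"
+
+// decodePayloadSize is a placeholder: a real client would decode the
+// SingleMessageMetadata protobuf here and return its payload_size field.
+func decodePayloadSize(singleMessageMetadata []byte) int {
+    panic("placeholder: decode SingleMessageMetadata and return payload_size")
+}
+
+// splitBatch walks an uncompressed batch payload. Each entry consists of a
+// 4-byte big-endian metadata size, the SingleMessageMetadata bytes, and the
+// payload of that single message.
+func splitBatch(payload []byte, numMessagesInBatch int) [][]byte {
+    entries := make([][]byte, 0, numMessagesInBatch)
+    offset := 0
+    for i := 0; i < numMessagesInBatch; i++ {
+        metadataSize := int(binary.BigEndian.Uint32(payload[offset : offset+4]))
+        offset += 4
+        payloadSize := decodePayloadSize(payload[offset : offset+metadataSize])
+        offset += metadataSize
+        entries = append(entries, payload[offset:offset+payloadSize])
+        offset += payloadSize
+    }
+    return entries
+}
+
+```
+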
+## Interactions
+
+### Connection establishment
+
+After opening a TCP connection to a broker, typically on port 6650, the client
+is responsible for initiating the session.
+
+![Connect interaction](/assets/binary-protocol-connect.png)
+
+After receiving a `Connected` response from the broker, the client can
+consider the connection ready to use. Alternatively, if the broker cannot
+validate the client's authentication, it will reply with an `Error` command and
+close the TCP connection.
+
+Example:
+
+```protobuf
+
+message CommandConnect {
+  "client_version" : "Pulsar-Client-Java-v1.15.2",
+  "auth_method_name" : "my-authentication-plugin",
+  "auth_data" : "my-auth-data",
+  "protocol_version" : 6
+}
+
+```
+
+Fields:
+ * `client_version` → String based identifier. Format is not enforced
+ * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
+   enabled
+ * `auth_data` → *(optional)* Plugin specific authentication data
+ * `protocol_version` → Indicates the protocol version supported by the
+   client. Broker will not send commands introduced in newer revisions of the
+   protocol. Broker might be enforcing a minimum version
+
+```protobuf
+
+message CommandConnected {
+  "server_version" : "Pulsar-Broker-v1.15.2",
+  "protocol_version" : 6
+}
+
+```
+
+Fields:
+ * `server_version` → String identifier of broker version
+ * `protocol_version` → Protocol version supported by the broker. Client
+   must not attempt to send commands introduced in newer revisions of the
+   protocol
+
+### Keep Alive
+
+To identify prolonged network partitions between clients and brokers, or cases
+in which a machine crashes without interrupting the TCP connection on the remote
+end (e.g. power outage, kernel panic, hard reboot), we have introduced a
+mechanism to probe for the availability status of the remote peer.
+
+Both clients and brokers send `Ping` commands periodically, and they will
+close the socket if a `Pong` response is not received within a timeout (the default
+used by the broker is 60 seconds).
+
+A valid implementation of a Pulsar client is not required to send the `Ping`
+probe, though it is required to promptly reply after receiving one from the
+broker in order to prevent the remote side from forcibly closing the TCP connection.
+
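+A rough Go sketch of this probing loop (illustrative only; `sendPing`, `closeSocket`, and `pongReceived` are placeholders, and the probe interval is implementation-defined):
+
+```go
+
+import "time"
+
+// keepAlive sends a Ping periodically and closes the socket if no Pong
+// arrives within the timeout.
+func keepAlive(sendPing func(), closeSocket func(), pongReceived <-chan struct{}) {
+    const probeInterval = 30 * time.Second // how often to send a Ping (implementation-defined)
+    const pongTimeout = 60 * time.Second   // default timeout used by the broker
+
+    ticker := time.NewTicker(probeInterval)
+    defer ticker.Stop()
+
+    for range ticker.C {
+        sendPing()
+        select {
+        case <-pongReceived:
+            // The remote peer is alive; keep the connection open.
+        case <-time.After(pongTimeout):
+            closeSocket()
+            return
+        }
+    }
+}
+
+```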
+
+### Producer
+
+In order to send messages, a client needs to establish a producer. When creating
+a producer, the broker will first verify that this particular client is
+authorized to publish on the topic.
+
+Once the client gets confirmation of the producer creation, it can publish
+messages to the broker, referring to the producer id negotiated before.
+
+![Producer interaction](/assets/binary-protocol-producer.png)
+
+##### Command Producer
+
+```protobuf
+
+message CommandProducer {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "producer_id" : 1,
+  "request_id" : 1
+}
+
+```
+
+Parameters:
+ * `topic` → Complete topic name on which you want to create the producer
+ * `producer_id` → Client generated producer identifier. Needs to be unique
+    within the same connection
+ * `request_id` → Identifier for this request. Used to match the response with
+    the originating request. Needs to be unique within the same connection
+ * `producer_name` → *(optional)* If a producer name is specified, the name will
+    be used, otherwise the broker will generate a unique name. Generated
+    producer name is guaranteed to be globally unique. Implementations are
+    expected to let the broker generate a new producer name when the producer
+    is initially created, then reuse it when recreating the producer after
+    reconnections.
+
+The broker will reply with either `ProducerSuccess` or `Error` commands.
+
+##### Command ProducerSuccess
+
+```protobuf
+
+message CommandProducerSuccess {
+  "request_id" :  1,
+  "producer_name" : "generated-unique-producer-name"
+}
+
+```
+
+Parameters:
+ * `request_id` → Original id of the `CreateProducer` request
+ * `producer_name` → Generated globally unique producer name or the name
+    specified by the client, if any.
+
+##### Command Send
+
+Command `Send` is used to publish a new message within the context of an
+already existing producer. This command is used in a frame that includes command
+as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.
+
+```protobuf
+
+message CommandSend {
+  "producer_id" : 1,
+  "sequence_id" : 0,
+  "num_messages" : 1
+}
+
+```
+
+Parameters:
+ * `producer_id` → id of an existing producer
+ * `sequence_id` → each message has an associated sequence id which is expected
+   to be implemented with a counter starting at 0. The `SendReceipt` that
+   acknowledges the effective publishing of a message will refer to it by
+   its sequence id.
+ * `num_messages` → *(optional)* Used when publishing a batch of messages at
+   once.
+
+##### Command SendReceipt
+
+After a message has been persisted on the configured number of replicas, the
+broker will send the acknowledgment receipt to the producer.
+
+```protobuf
+
+message CommandSendReceipt {
+  "producer_id" : 1,
+  "sequence_id" : 0,
+  "message_id" : {
+    "ledgerId" : 123,
+    "entryId" : 456
+  }
+}
+
+```
+
+Parameters:
+ * `producer_id` → id of producer originating the send request
+ * `sequence_id` → sequence id of the published message
+ * `message_id` → message id assigned by the system to the published message.
+   It is unique within a single cluster and is composed of 2 longs, `ledgerId`
+   and `entryId`, reflecting that this unique id is assigned when the message
+   is appended to a BookKeeper ledger
+
+
+##### Command CloseProducer
+
+**Note**: *This command can be sent by either producer or broker*.
+
+When receiving a `CloseProducer` command, the broker will stop accepting any
+more messages for the producer, wait until all pending messages are persisted
+and then reply `Success` to the client.
+
+The broker can send a `CloseProducer` command to the client when it's performing
+a graceful failover (e.g. the broker is being restarted, or the topic is being unloaded
+by the load balancer to be transferred to a different broker).
+
+When receiving the `CloseProducer`, the client is expected to go through the
+service discovery lookup again and recreate the producer. The TCP
+connection is not affected.
+
+### Consumer
+
+A consumer is used to attach to a subscription and consume messages from it.
+After every reconnection, a client needs to subscribe to the topic. If a
+subscription is not already there, a new one will be created.
+
+![Consumer](/assets/binary-protocol-consumer.png)
+
+#### Flow control
+
+After the consumer is ready, the client needs to *give permission* to the
+broker to push messages. This is done with the `Flow` command.
+
+A `Flow` command gives additional *permits* to send messages to the consumer.
+A typical consumer implementation will use a queue to accumulate these messages
+before the application is ready to consume them.
+
+After the application has dequeued half of the messages in the queue, the consumer
+sends permits to the broker to ask for more messages (equal to half of the queue size).
+
+For example, if the queue size is 1000 and the consumer has consumed 500 messages from
+the queue, the consumer sends permits to the broker asking for 500 more messages.
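+
+For example, in the Java client the size of this prefetch queue is configured with
+`receiverQueueSize`. The sketch below assumes a broker at `pulsar://localhost:6650` and a
+topic named `my-topic`; the permit accounting and `Flow` commands themselves are handled
+internally by the client library:
+
+```java
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClient;
+
+public class FlowControlExample {
+    public static void main(String[] args) throws Exception {
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650")
+                .build();
+
+        // A receiver queue of 1000 means the client initially grants the broker
+        // 1000 permits, then asks for roughly 500 more each time the application
+        // drains half of the queue.
+        Consumer<byte[]> consumer = client.newConsumer()
+                .topic("my-topic")
+                .subscriptionName("my-subscription")
+                .receiverQueueSize(1000)
+                .subscribe();
+
+        Message<byte[]> msg = consumer.receive();
+        consumer.acknowledge(msg);
+
+        client.close();
+    }
+}
+
+```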
+
+##### Command Subscribe
+
+```protobuf
+
+message CommandSubscribe {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "subscription" : "my-subscription-name",
+  "subType" : "Exclusive",
+  "consumer_id" : 1,
+  "request_id" : 1
+}
+
+```
+
+Parameters:
+ * `topic` → Complete topic name on which you want to create the consumer
+ * `subscription` → Subscription name
+ * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
+ * `consumer_id` → Client generated consumer identifier. Needs to be unique
+    within the same connection
+ * `request_id` → Identifier for this request. Used to match the response with
+    the originating request. Needs to be unique within the same connection
+ * `consumer_name` → *(optional)* Clients can specify a consumer name. This
+    name can be used to track a particular consumer in the stats. Also, in
+    Failover subscription type, the name is used to decide which consumer is
+    elected as *master* (the one receiving messages): consumers are sorted by
+    their consumer name and the first one is elected master.
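+
+In the Java client these fields map directly onto the consumer builder. The following is a
+minimal sketch, assuming a broker at `pulsar://localhost:6650`; the topic, subscription, and
+consumer names are placeholders:
+
+```java
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.SubscriptionType;
+
+public class SubscribeExample {
+    public static void main(String[] args) throws Exception {
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650")
+                .build();
+
+        // In a Failover subscription, consumerName() influences master election:
+        // consumers are sorted by name and the first one receives messages.
+        Consumer<byte[]> consumer = client.newConsumer()
+                .topic("my-topic")
+                .subscriptionName("my-subscription-name")
+                .subscriptionType(SubscriptionType.Failover)
+                .consumerName("consumer-a")
+                .subscribe();
+
+        consumer.close();
+        client.close();
+    }
+}
+
+```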
+
+##### Command Flow
+
+```protobuf
+
+message CommandFlow {
+  "consumer_id" : 1,
+  "messagePermits" : 1000
+}
+
+```
+
+Parameters:
+* `consumer_id` → Id of an already established consumer
+* `messagePermits` → Number of additional permits to grant to the broker for
+  pushing more messages
+
+##### Command Message
+
+Command `Message` is used by the broker to push messages to an existing consumer,
+within the limits of the given permits.
+
+
+This command is used in a frame that includes the message payload as well, for
+which the complete format is specified in the [payload commands](#payload-commands)
+section.
+
+```protobuf
+
+message CommandMessage {
+  "consumer_id" : 1,
+  "message_id" : {
+    "ledgerId" : 123,
+    "entryId" : 456
+  }
+}
+
+```
+
+##### Command Ack
+
+An `Ack` is used to signal to the broker that a given message has been
+successfully processed by the application and can be discarded by the broker.
+
+In addition, the broker will also maintain the consumer position based on the
+acknowledged messages.
+
+```protobuf
+
+message CommandAck {
+  "consumer_id" : 1,
+  "ack_type" : "Individual",
+  "message_id" : {
+    "ledgerId" : 123,
+    "entryId" : 456
+  }
+}
+
+```
+
+Parameters:
+ * `consumer_id` → Id of an already established consumer
+ * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
+ * `message_id` → Id of the message to acknowledge
+ * `validation_error` → *(optional)* Indicates that the consumer has discarded
+   the messages due to: `UncompressedSizeCorruption`,
+   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
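+
+The two acknowledgment types above correspond to `acknowledge` and `acknowledgeCumulative`
+in the Pulsar Java client. The sketch below is illustrative only and assumes an
+already-subscribed consumer:
+
+```java
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+
+public class AckExample {
+    // Individual ack: only this message is marked as processed.
+    static void ackIndividually(Consumer<byte[]> consumer, Message<byte[]> msg) throws Exception {
+        consumer.acknowledge(msg);
+    }
+
+    // Cumulative ack: this message and every earlier message on the
+    // subscription are marked as processed with a single command.
+    static void ackCumulatively(Consumer<byte[]> consumer, Message<byte[]> msg) throws Exception {
+        consumer.acknowledgeCumulative(msg);
+    }
+}
+
+```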
+
+##### Command CloseConsumer
+
+**Note**: *This command can be sent by either consumer or broker*.
+
+This command behaves the same as [`CloseProducer`](#command-closeproducer).
+
+##### Command RedeliverUnacknowledgedMessages
+
+A consumer can ask the broker to redeliver some or all of the pending messages
+that were pushed to that particular consumer and not yet acknowledged.
+
+The protobuf object accepts a list of message ids that the consumer wants to
+be redelivered. If the list is empty, the broker will redeliver all the
+pending messages.
+
+On redelivery, messages can be sent to the same consumer or, in the case of a
+shared subscription, spread across all available consumers.
+
+
+##### Command ReachedEndOfTopic
+
+This is sent by a broker to a particular consumer, whenever the topic
+has been "terminated" and all the messages on the subscription were
+acknowledged.
+
+The client should use this command to notify the application that no more
+messages are coming from the consumer.
+
+##### Command ConsumerStats
+
+This command is sent by the client to retrieve Subscriber and Consumer level
+stats from the broker.
+
+Parameters:
+ * `request_id` → Id of the request, used to correlate the request 
+      and the response.
+ * `consumer_id` → Id of an already established consumer.
+
+##### Command ConsumerStatsResponse
+
+This is the broker's response to ConsumerStats request by the client. 
+It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
+If the `error_code` or the `error_message` field is set it indicates that the request has failed.
+
+##### Command Unsubscribe
+
+This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
+
+Parameters:
+ * `request_id` → Id of the request.
+ * `consumer_id` → Id of an already established consumer which needs to unsubscribe.
+
+
+## Service discovery
+
+### Topic lookup
+
+Topic lookup needs to be performed each time a client needs to create or
+reconnect a producer or a consumer. Lookup is used to discover which particular
+broker is serving the topic we are about to use.
+
+Lookup can be done with a REST call as described in the [admin API](admin-api-persistent-topics.md#lookup-of-topic)
+docs.
+
+Since Pulsar-1.16 it is also possible to perform the lookup within the binary
+protocol.
+
+For the sake of example, let's assume we have a service discovery component
+running at `pulsar://broker.example.com:6650`
+
+Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
+`pulsar://broker-2.example.com:6650`, ...
+
+A client can use a connection to the discovery service host to issue a
+`LookupTopic` command. The response can either be a broker hostname to
+connect to, or a broker hostname to which to retry the lookup.
+
+The `LookupTopic` command has to be used in a connection that has already
+gone through the `Connect` / `Connected` initial handshake.
+
+![Topic lookup](/assets/binary-protocol-topic-lookup.png)
+
+```protobuf
+
+message CommandLookupTopic {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "request_id" : 1,
+  "authoritative" : false
+}
+
+```
+
+Fields:
+ * `topic` → Topic name to lookup
+ * `request_id` → Id of the request that will be passed with its response
+ * `authoritative` → The initial lookup request should use `false`. When following a
+   redirect response, the client should pass the same value contained in the
+   response
+
+##### LookupTopicResponse
+
+Example of response with successful lookup:
+
+```protobuf
+
+message CommandLookupTopicResponse {
+  "request_id" : 1,
+  "response" : "Connect",
+  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
+  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
+  "authoritative" : true
+}
+
+```
+
+Example of lookup response with redirection:
+
+```protobuf
+
+message CommandLookupTopicResponse {
+  "request_id" : 1,
+  "response" : "Redirect",
+  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
+  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
+  "authoritative" : true
+}
+
+```
+
+In this second case, we need to reissue the `LookupTopic` command request
+to `broker-2.example.com` and this broker will be able to give a definitive
+answer to the lookup request.
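+
+A client typically wraps this in a small lookup loop. The sketch below is illustrative
+pseudocode in Java: `LookupResult`, `sendLookup()`, and `connectTo()` are hypothetical
+placeholders (not part of any Pulsar library) used only to show how `Redirect` responses
+and the `authoritative` flag are handled:
+
+```java
+
+// Illustrative sketch only: LookupResult, sendLookup(), and connectTo()
+// are hypothetical helpers, not part of the Pulsar client library.
+static void lookupAndConnect(String discoveryAddress, String topic) {
+    String broker = discoveryAddress;
+    boolean authoritative = false;
+
+    while (true) {
+        LookupResult result = sendLookup(broker, topic, authoritative);
+        if (result.isRedirect()) {
+            // Follow the redirect, reissuing the lookup against the broker in the
+            // response and propagating the authoritative flag it returned.
+            broker = result.brokerServiceUrl();
+            authoritative = result.isAuthoritative();
+        } else {
+            // "Connect" response: this broker serves the topic.
+            connectTo(result.brokerServiceUrl());
+            return;
+        }
+    }
+}
+
+```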
+
+### Partitioned topics discovery
+
+Partitioned topics metadata discovery is used to find out if a topic is a
+"partitioned topic" and how many partitions were set up.
+
+If the topic is marked as "partitioned", the client is expected to create
+multiple producers or consumers, one for each partition, using the `partition-X`
+suffix.
+
+This information only needs to be retrieved the first time a producer or
+consumer is created. There is no need to do this after reconnections.
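+
+For illustration, once the metadata response reports N partitions, a low-level client creates
+one producer (or consumer) per `-partition-X` sub-topic. The Java sketch below only mirrors
+that protocol-level behavior; the standard Java client does this automatically when given the
+base topic name:
+
+```java
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+public class PartitionedProducersExample {
+    // Creates one producer per partition using the "-partition-X" suffix described above.
+    static List<Producer<byte[]>> createPartitionProducers(
+            PulsarClient client, String baseTopic, int partitions) throws Exception {
+        List<Producer<byte[]>> producers = new ArrayList<>();
+        for (int i = 0; i < partitions; i++) {
+            producers.add(client.newProducer()
+                    .topic(baseTopic + "-partition-" + i)
+                    .create());
+        }
+        return producers;
+    }
+}
+
+```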
+
+The discovery of partitioned topics metadata works very similarly to the topic
+lookup. The client sends a request to the service discovery address and the
+response will contain the actual metadata.
+
+##### Command PartitionedTopicMetadata
+
+```protobuf
+
+message CommandPartitionedTopicMetadata {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "request_id" : 1
+}
+
+```
+
+Fields:
+ * `topic` → the topic for which to check the partitions metadata
+ * `request_id` → Id of the request that will be passed with its response
+
+
+##### Command PartitionedTopicMetadataResponse
+
+Example of response with metadata:
+
+```protobuf
+
+message CommandPartitionedTopicMetadataResponse {
+  "request_id" : 1,
+  "response" : "Success",
+  "partitions" : 32
+}
+
+```
+
+## Protobuf interface
+
+All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website-next/versioned_docs/version-2.6.0/developing-cpp.md b/site2/website-next/versioned_docs/version-2.6.0/developing-cpp.md
new file mode 100644
index 0000000..9da7a3a
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/developing-cpp.md
@@ -0,0 +1,114 @@
+---
+id: develop-cpp
+title: Building Pulsar C++ client
+sidebar_label: "Building Pulsar C++ client"
+original_id: develop-cpp
+---
+
+## Supported platforms
+
+The Pulsar C++ client has been successfully tested on **MacOS** and **Linux**.
+
+## System requirements
+
+You need to have the following installed to use the C++ client:
+
+* [CMake](https://cmake.org/)
+* [Boost](http://www.boost.org/)
+* [Protocol Buffers](https://developers.google.com/protocol-buffers/) 2.6
+* [Log4CXX](https://logging.apache.org/log4cxx)
+* [libcurl](https://curl.haxx.se/libcurl/)
+* [Google Test](https://github.com/google/googletest)
+* [JsonCpp](https://github.com/open-source-parsers/jsoncpp)
+
+## Compilation
+
+There are separate compilation instructions for [MacOS](#macos) and [Linux](#linux). For both systems, start by cloning the Pulsar repository:
+
+```shell
+
+$ git clone https://github.com/apache/pulsar
+
+```
+
+### Linux
+
+First, install all of the necessary dependencies:
+
+```shell
+
+$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \
+  libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev
+
+```
+
+Then compile and install [Google Test](https://github.com/google/googletest):
+
+```shell
+
+# libgtest-dev version is 1.8.0 or above
+$ cd /usr/src/googletest
+$ sudo cmake .
+$ sudo make
+$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/
+
+# less than 1.8.0
+$ cd /usr/src/gtest
+$ sudo cmake .
+$ sudo make
+$ sudo cp libgtest.a /usr/lib
+
+$ cd /usr/src/gmock
+$ sudo cmake .
+$ sudo make
+$ sudo cp libgmock.a /usr/lib
+
+```
+
+Finally, compile the Pulsar client library for C++ inside the Pulsar repo:
+
+```shell
+
+$ cd pulsar-client-cpp
+$ cmake .
+$ make
+
+```
+
+The resulting files, `libpulsar.so` and `libpulsar.a`, will be placed in the `lib` folder of the repo, while two tools, `perfProducer` and `perfConsumer`, will be placed in the `perf` directory.
+
+### MacOS
+
+First, install all of the necessary dependencies:
+
+```shell
+
+# OpenSSL installation
+$ brew install openssl
+$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/
+$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/
+
+# Protocol Buffers installation
+$ brew tap homebrew/versions
+$ brew install protobuf260
+$ brew install boost
+$ brew install log4cxx
+
+# Google Test installation
+$ git clone https://github.com/google/googletest.git
+$ cd googletest
+$ cmake .
+$ make install
+
+```
+
+Then compile the Pulsar client library in the repo that you cloned:
+
+```shell
+
+$ cd pulsar-client-cpp
+$ cmake .
+$ make
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/developing-load-manager.md b/site2/website-next/versioned_docs/version-2.6.0/developing-load-manager.md
new file mode 100644
index 0000000..509209b
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/developing-load-manager.md
@@ -0,0 +1,227 @@
+---
+id: develop-load-manager
+title: Modular load manager
+sidebar_label: "Modular load manager"
+original_id: develop-load-manager
+---
+
+The *modular load manager*, implemented in  [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load  [...]
+
+## Usage
+
+There are two ways that you can enable the modular load manager:
+
+1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
+2. Using the `pulsar-admin` tool. Here's an example:
+
+   ```shell
+   
+   $ pulsar-admin update-dynamic-config \
+    --config loadManagerClassName \
+    --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
+   
+   ```
+
+   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
+
+## Verification
+
+There are a few different ways to determine which load manager is being used:
+
+1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
+
+   ```shell
+   
+   $ bin/pulsar-admin brokers get-all-dynamic-config
+   {
+    "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
+   }
+   
+   ```
+
+   If there is no `loadManagerClassName` element, then the default load manager is used.
+
+2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` will have many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
+
+   ```json
+   
+   {
+     "bandwidthIn": {
+       "limit": 10240000.0,
+       "usage": 4.256510416666667
+     },
+     "bandwidthOut": {
+       "limit": 10240000.0,
+       "usage": 5.287239583333333
+     },
+     "bundles": [],
+     "cpu": {
+       "limit": 2400.0,
+       "usage": 5.7353247655435915
+     },
+     "directMemory": {
+       "limit": 16384.0,
+       "usage": 1.0
+     }
+   }
+   
+   ```
+
+   With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
+
+   ```json
+   
+   {
+     "systemResourceUsage": {
+       "bandwidthIn": {
+         "limit": 10240000.0,
+         "usage": 0.0
+       },
+       "bandwidthOut": {
+         "limit": 10240000.0,
+         "usage": 0.0
+       },
+       "cpu": {
+         "limit": 2400.0,
+         "usage": 0.0
+       },
+       "directMemory": {
+         "limit": 16384.0,
+         "usage": 1.0
+       },
+       "memory": {
+         "limit": 8192.0,
+         "usage": 3903.0
+       }
+     }
+   }
+   
+   ```
+
+3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
+
+   Here is an example from the modular load manager:
+
+   ```
+   
+   ===================================================================================================================
+   ||SYSTEM         |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+   ||               |0.00           |48.33          |0.01           |0.00           |0.00           |48.33          ||
+   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
+   ||               |4              |4              |0              |2              |4              |0              ||
+   ||LATEST         |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+   ||SHORT          |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+   ||LONG           |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+   ===================================================================================================================
+   
+   ```
+
+   Here is an example from the simple load manager:
+
+   ```
+   
+   ===================================================================================================================
+   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
+   ||               |4              |4              |0              |2              |0              |0              ||
+   ||RAW SYSTEM     |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+   ||               |0.25           |47.94          |0.01           |0.00           |0.00           |47.94          ||
+   ||ALLOC SYSTEM   |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+   ||               |0.20           |1.89           |               |1.27           |3.21           |3.21           ||
+   ||RAW MSG        |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+   ||               |0.00           |0.00           |0.00           |0.01           |0.01           |0.01           ||
+   ||ALLOC MSG      |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+   ||               |54.84          |134.48         |189.31         |126.54         |320.96         |447.50         ||
+   ===================================================================================================================
+   
+   ```
+
+It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
+
+## Implementation
+
+### Data
+
+The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
+Here, the available data is subdivided into the bundle data and the broker data.
+
+#### Broker
+
+The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
+one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
+data which is written to ZooKeeper by the leader broker.
+
+##### Local Broker Data
+The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
+
+* CPU usage
+* JVM heap memory usage
+* Direct memory usage
+* Bandwidth in/out usage
+* Most recent total message rate in/out across all bundles
+* Total number of topics, bundles, producers, and consumers
+* Names of all bundles assigned to this broker
+* Most recent changes in bundle assignments for this broker
+
+The local broker data is updated periodically according to the service configuration
+`loadBalancerReportUpdateMaxIntervalMinutes`. After any broker updates its local broker data, the leader broker will
+receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
+`/loadbalance/brokers/<broker host/port>`.
+
+##### Historical Broker Data
+
+The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
+
+In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
+
+* Message rate in/out for the entire broker
+* Message throughput in/out for the entire broker
+
+Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as new bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
+
+The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes their local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
+
+##### Bundle Data
+
+The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java) class. Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. The following information is maintained in each time frame:
+
+* Message rate in/out for this bundle
+* Message Throughput In/Out for this bundle
+* Current number of samples for this bundle
+
+The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where
+the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval
+for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the
+short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
+data is similarly over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame,
+the average is taken only over the existing samples. When no samples are available, default values are assumed until
+they are overwritten by the first sample. Currently, the default values are
+
+* Message rate in/out: 50 messages per second both ways
+* Message throughput in/out: 50KB per second both ways
+
+The bundle data is updated in memory on the leader broker whenever any broker writes their local data to ZooKeeper.
+Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
+broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
+
+### Traffic Distribution
+
+The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](h [...]
+
+#### Least Long Term Message Rate Strategy
+
+As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
+the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
+on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
+resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
+assignment process. This is done by weighting the final message rate according to
+`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
+`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
+that is being utilized by the candidate broker. This multiplier ensures that machines that are being more heavily taxed
+by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded,
+then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload
+threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
+assigned.
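+
+The following is a minimal sketch of that weighting, not the actual `LeastLongTermMessageRate`
+implementation. Broker names, rates, and usages are made up, and `overloadThreshold` stands in
+for `loadBalancerBrokerOverloadedThresholdPercentage` expressed as a fraction:
+
+```java
+
+import java.util.Map;
+import java.util.Optional;
+
+public class LeastLongTermMessageRateSketch {
+    // For each candidate broker, value[0] is the long-term message rate (in + out)
+    // and value[1] is the maximum system resource usage (CPU, memory, direct memory,
+    // bandwidth in, bandwidth out), both as observed by the leader broker.
+    static Optional<String> selectBroker(Map<String, double[]> brokers, double overloadThreshold) {
+        String best = null;
+        double bestScore = Double.MAX_VALUE;
+        for (Map.Entry<String, double[]> e : brokers.entrySet()) {
+            double longTermMsgRate = e.getValue()[0];
+            double maxUsage = e.getValue()[1];
+            if (maxUsage >= overloadThreshold) {
+                continue; // overloaded brokers are not considered for assignment
+            }
+            // Weight the message rate by 1 / (overload_threshold - max_usage) so that
+            // heavily utilized brokers look "busier" for the same message rate.
+            double score = longTermMsgRate * (1.0 / (overloadThreshold - maxUsage));
+            if (score < bestScore) {
+                bestScore = score;
+                best = e.getKey();
+            }
+        }
+        // If every broker is overloaded, the real strategy falls back to random assignment.
+        return Optional.ofNullable(best);
+    }
+}
+
+```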
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/developing-tools.md b/site2/website-next/versioned_docs/version-2.6.0/developing-tools.md
new file mode 100644
index 0000000..b545779
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/developing-tools.md
@@ -0,0 +1,111 @@
+---
+id: develop-tools
+title: Simulation tools
+sidebar_label: "Simulation tools"
+original_id: develop-tools
+---
+
+It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
+handle the load. The load simulation controller, the load simulation client, and the broker monitor were created as an
+effort to make creating this load and observing its effects on the managers easier.
+
+## Simulation Client
+The simulation client is a machine which will create and subscribe to topics with configurable message rates and sizes.
+Because simulating a large load sometimes requires multiple client machines, the user does not interact
+with the simulation client directly, but instead delegates requests to the simulation controller, which will then
+send signals to clients to start incurring load. The client implementation is in the class
+`org.apache.pulsar.testclient.LoadSimulationClient`.
+
+### Usage
+To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
+
+```
+
+pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
+
+```
+
+The client will then be ready to receive controller commands.
+
+## Simulation Controller
+
+The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
+topics, change the load incurred by topics, as well as several other tasks. It is implemented in the class
+`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send
+commands with.
+
+### Usage
+To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:
+
+```
+
+pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for clients>
+--clients <comma-separated list of client host names>
+
+```
+
+The clients should already be started before the controller is started. You will then be presented with a simple prompt,
+where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
+names. In all cases, the BASE name of the tenants, namespaces, and topics is used. For example, for the topic
+`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
+`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:
+
+* Create a topic with a producer and a consumer
+  * `trade <tenant> <namespace> <topic> [--rate <message rate per second>]
+  [--rand-rate <lower bound>,<upper bound>]
+  [--size <message size in bytes>]`
+* Create a group of topics with a producer and a consumer
+  * `trade_group <tenant> <group> <num_namespaces> [--rate <message rate per second>]
+  [--rand-rate <lower bound>,<upper bound>]
+  [--separation <separation between creating topics in ms>] [--size <message size in bytes>]
+  [--topics-per-namespace <number of topics to create per namespace>]`
+* Change the configuration of an existing topic
+  * `change <tenant> <namespace> <topic> [--rate <message rate per second>]
+  [--rand-rate <lower bound>,<upper bound>]
+  [--size <message size in bytes>]`
+* Change the configuration of a group of topics
+  * `change_group <tenant> <group> [--rate <message rate per second>] [--rand-rate <lower bound>,<upper bound>]
+  [--size <message size in bytes>] [--topics-per-namespace <number of topics to create per namespace>]`
+* Shutdown a previously created topic
+  * `stop <tenant> <namespace> <topic>`
+* Shutdown a previously created group of topics
+  * `stop_group <tenant> <group>`
+* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that history
+  * `copy <tenant> <source zookeeper> <target zookeeper> [--rate-multiplier value]`
+* Simulate the load of the historical data on the current ZooKeeper (should be same ZooKeeper being simulated on)
+  * `simulate <tenant> <zookeeper> [--rate-multiplier value]`
+* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
+  * `stream <tenant> <zookeeper> [--rate-multiplier value]`
+
+The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
+when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
+with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
+`zookeeper_host:port`.
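+
+For example, a short controller session using the syntax above might look like the following
+(tenant, namespace, and topic names are placeholders):
+
+```
+
+trade my_tenant my_namespace my_topic --rate 100 --size 1024
+change my_tenant my_namespace my_topic --rate 500
+stop my_tenant my_namespace my_topic
+
+```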
+
+### Difference Between Copy, Simulate, and Stream
+The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
+you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
+`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
+simulating on, and then it will get the full benefit of the historical data of the source in both load manager
+implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on. It assumes
+that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent
+historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the
+clients. Finally, `stream` takes in an active ZooKeeper different than the ZooKeeper being simulated on and streams
+load data from it and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the
+user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to
+be sent at only `5%` of the rate of the load that is being simulated.
+
+## Broker Monitor
+To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is
+implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor will print tabular load data to the
+console as it is updated using watchers.
+
+### Usage
+To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:
+
+```
+
+pulsar-perf monitor-brokers --connect-string <zookeeper host:port>
+
+```
+
+The console will then continuously print load data until it is interrupted.
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/functions-metrics.md b/site2/website-next/versioned_docs/version-2.6.0/functions-metrics.md
new file mode 100644
index 0000000..8add669
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/functions-metrics.md
@@ -0,0 +1,7 @@
+---
+id: functions-metrics
+title: Metrics for Pulsar Functions
+sidebar_label: "Metrics"
+original_id: functions-metrics
+---
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/getting-started-concepts-and-architecture.md b/site2/website-next/versioned_docs/version-2.6.0/getting-started-concepts-and-architecture.md
new file mode 100644
index 0000000..fe9c3fb
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/getting-started-concepts-and-architecture.md
@@ -0,0 +1,16 @@
+---
+id: concepts-architecture
+title: Pulsar concepts and architecture
+sidebar_label: "Concepts and architecture"
+original_id: concepts-architecture
+---
+
+
+
+
+
+
+
+
+
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-aerospike-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-aerospike-sink.md
new file mode 100644
index 0000000..63d7338
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-aerospike-sink.md
@@ -0,0 +1,26 @@
+---
+id: io-aerospike-sink
+title: Aerospike sink connector
+sidebar_label: "Aerospike sink connector"
+original_id: io-aerospike-sink
+---
+
+The Aerospike sink connector pulls messages from Pulsar topics to Aerospike clusters.
+
+## Configuration
+
+The configuration of the Aerospike sink connector has the following properties.
+
+### Property
+
+| Name | Type | Required | Default | Description |
+|------|----------|----------|---------|-------------|
+| `seedHosts` |String| true | No default value| The comma-separated list of one or more Aerospike cluster hosts.<br /><br />Each host can be specified as a valid IP address or hostname followed by an optional port number. | 
+| `keyspace` | String| true |No default value |The Aerospike namespace. |
+| `columnName` | String | true| No default value|The Aerospike column name. |
+|`userName`|String|false|NULL|The Aerospike username.|
+|`password`|String|false|NULL|The Aerospike password.|
+| `keySet` | String|false |NULL | The Aerospike set name. |
+| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. |
+| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions.  |
+| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. |
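+
+### Example
+
+Before using the Aerospike sink connector, you can create a configuration file similar to the
+following. The values below are placeholders (for instance, `localhost:3000` assumes a local
+Aerospike node on the default port):
+
+```json
+
+{
+    "seedHosts": "localhost:3000",
+    "keyspace": "pulsar_test_namespace",
+    "columnName": "col",
+    "keySet": "pulsar_test_set",
+    "maxConcurrentRequests": 100,
+    "timeoutMs": 100,
+    "retries": 1
+}
+
+```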
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-canal-source.md b/site2/website-next/versioned_docs/version-2.6.0/io-canal-source.md
new file mode 100644
index 0000000..d1fd43b
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-canal-source.md
@@ -0,0 +1,235 @@
+---
+id: io-canal-source
+title: Canal source connector
+sidebar_label: "Canal source connector"
+original_id: io-canal-source
+---
+
+The Canal source connector pulls messages from MySQL to Pulsar topics.
+
+## Configuration
+
+The configuration of Canal source connector has the following properties.
+
+### Property
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `username` | true | None | Canal server account (not MySQL).|
+| `password` | true | None | Canal server password (not MySQL). |
+|`destination`|true|None|Source destination that Canal source connector connects to.
+| `singleHostname` | false | None | Canal server address.|
+| `singlePort` | false | None | Canal server port.|
+| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.<br /><br /><li>true: **cluster** mode.<br />If set to true, it talks to `zkServers` to figure out the actual database host.<br /><br /></li><li>false: **standalone** mode.<br />If set to false, it connects to the database specified by `singleHostname` and `singlePort`. </li>|
+| `zkServers` | true | None | Address and port of the ZooKeeper that the Canal source connector talks to in order to figure out the actual database host.|
+| `batchSize` | false | 1000 | Batch size to fetch from Canal. |
+
+### Example
+
+Before using the Canal connector, you can create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "zkServers": "127.0.0.1:2181",
+      "batchSize": "5120",
+      "destination": "example",
+      "username": "",
+      "password": "",
+      "cluster": false,
+      "singleHostname": "127.0.0.1",
+      "singlePort": "11111"
+  }
+  
+  ```
+
+* YAML
+
+  You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file.
+
+  ```yaml
+  
+  configs:
+      zkServers: "127.0.0.1:2181"
+      batchSize: 5120
+      destination: "example"
+      username: ""
+      password: ""
+      cluster: false
+      singleHostname: "127.0.0.1"
+      singlePort: 11111
+  
+  ```
+
+## Usage
+
+Here is an example of storing MySQL data using the configuration file as above.
+
+1. Start a MySQL server.
+
+   ```bash
+   
+   $ docker pull mysql:5.7
+   $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7
+   
+   ```
+
+2. Create a configuration file `mysqld.cnf`.
+
+   ```bash
+   
+   [mysqld]
+   pid-file    = /var/run/mysqld/mysqld.pid
+   socket      = /var/run/mysqld/mysqld.sock
+   datadir     = /var/lib/mysql
+   #log-error  = /var/log/mysql/error.log
+   # By default we only accept connections from localhost
+   #bind-address   = 127.0.0.1
+   # Disabling symbolic-links is recommended to prevent assorted security risks
+   symbolic-links=0
+   log-bin=mysql-bin
+   binlog-format=ROW
+   server_id=1
+   
+   ```
+
+3. Copy the configuration file `mysqld.cnf` to MySQL server.
+
+   ```bash
+   
+   $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/
+   
+   ```
+
+4.  Restart the MySQL server.
+
+   ```bash
+   
+   $ docker restart pulsar-mysql
+   
+   ```
+
+5.  Create a test database in MySQL server.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mysql /bin/bash
+   $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;'
+   
+   ```
+
+6. Start a Canal server and connect to MySQL server.
+
+   ```
+   
+   $ docker pull canal/canal-server:v1.1.2
+   $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2
+   
+   ```
+
+7. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:2.3.0
+   $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone
+   
+   ```
+
+8. Modify the configuration file `canal-mysql-source-config.yaml`.
+
+   ```yaml
+   
+   configs:
+       zkServers: ""
+       batchSize: "5120"
+       destination: "test"
+       username: ""
+       password: ""
+       cluster: false
+       singleHostname: "pulsar-canal-server"
+       singlePort: "11111"
+   
+   ```
+
+9. Create a consumer file `pulsar-client.py`.
+
+   ```python
+   
+   import pulsar
+
+   client = pulsar.Client('pulsar://localhost:6650')
+   consumer = client.subscribe('my-topic',
+                               subscription_name='my-sub')
+
+   while True:
+       msg = consumer.receive()
+       print("Received message: '%s'" % msg.data())
+       consumer.acknowledge(msg)
+
+   client.close()
+   
+   ```
+
+10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file `pulsar-client.py` to the Pulsar server.
+
+   ```bash
+   
+   $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/
+   $ docker cp pulsar-client.py pulsar-standalone:/pulsar/
+   
+   ```
+
+11. Download a Canal connector and start it.
+
+   ```bash
+   
+   $ docker exec -it pulsar-standalone /bin/bash
+   $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors
+   $ ./bin/pulsar-admin source localrun \
+   --archive ./connectors/pulsar-io-canal-2.3.0.nar \
+   --classname org.apache.pulsar.io.canal.CanalStringSource \
+   --tenant public \
+   --namespace default \
+   --name canal \
+   --destination-topic-name my-topic \
+   --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \
+   --parallelism 1
+   
+   ```
+
+12. Consume data from MySQL. 
+
+   ```bash
+   
+   $ docker exec -it pulsar-standalone /bin/bash
+   $ python pulsar-client.py
+   
+   ```
+
+13. Open another window to log in to the MySQL server.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mysql /bin/bash
+   $ mysql -h 127.0.0.1 -uroot -pcanal
+   
+   ```
+
+14. Create a table, and insert, delete, and update data in MySQL server.
+
+   ```bash
+   
+   mysql> use test;
+   mysql> show tables;
+   mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL,
+   `test_author` VARCHAR(40) NOT NULL,
+   `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8;
+   mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW());
+   mysql> UPDATE test_table SET test_title='c' WHERE test_title='a';
+   mysql> DELETE FROM test_table WHERE test_title='c';
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-cassandra-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-cassandra-sink.md
new file mode 100644
index 0000000..b27a754
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-cassandra-sink.md
@@ -0,0 +1,57 @@
+---
+id: io-cassandra-sink
+title: Cassandra sink connector
+sidebar_label: "Cassandra sink connector"
+original_id: io-cassandra-sink
+---
+
+The Cassandra sink connector pulls messages from Pulsar topics to Cassandra clusters.
+
+## Configuration
+
+The configuration of the Cassandra sink connector has the following properties.
+
+### Property
+
+| Name | Type | Required | Default | Description |
+|------|----------|----------|---------|-------------|
+| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.|
+| `keyspace` | String|true| " " (empty string)| The key space used for writing Pulsar messages. <br /><br />**Note: `keyspace` should be created prior to a Cassandra sink.**|
+| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family. <br /><br />The column is used for storing Pulsar message keys. <br /><br />If a Pulsar message doesn't have any key associated, the message value is used as the key. |
+| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.<br /><br />**Note: `columnFamily` should be created prior to a Cassandra sink.**|
+| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.<br /><br /> The column is used for storing Pulsar message values. |
+
+### Example
+
+Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "roots": "localhost:9042",
+      "keyspace": "pulsar_test_keyspace",
+      "columnFamily": "pulsar_test_table",
+      "keyname": "key",
+      "columnName": "col"
+  }
+  
+  ```
+
+* YAML
+
+  ```
+  
+  configs:
+      roots: "localhost:9042"
+      keyspace: "pulsar_test_keyspace"
+      columnFamily: "pulsar_test_table"
+      keyname: "key"
+      columnName: "col"
+  
+  ```
+
+## Usage
+
+For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra).
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-cdc-debezium.md b/site2/website-next/versioned_docs/version-2.6.0/io-cdc-debezium.md
new file mode 100644
index 0000000..fa2efe9
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-cdc-debezium.md
@@ -0,0 +1,543 @@
+---
+id: io-cdc-debezium
+title: Debezium source connector
+sidebar_label: "Debezium source connector"
+original_id: io-cdc-debezium
+---
+
+The Debezium source connector pulls messages from MySQL, PostgreSQL, or MongoDB
+and persists the messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of Debezium source connector has the following properties.
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `task.class` | true | null | A source task class that is implemented in Debezium. |
+| `database.hostname` | true | null | The address of a database server. |
+| `database.port` | true | null | The port number of a database server.|
+| `database.user` | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | null | The password for a database user that has the required privileges. |
+| `database.server.id` | true | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
+| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
+| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the  connector.<br /><br /> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value.  |
+| `database.history` | true | null | The name of the database history class. |
+| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements. <br /><br />**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | true | null | Pulsar cluster service URL for the offset topic used in Debezium. You can use the `bin/pulsar-admin --admin-url http://pulsar:8080 sources localrun --source-config-file configs/pg-pulsar-config.yaml` command to point to the target Pulsar cluster. |
+| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. |
+| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
+| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
+| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
+
+
+
+## Example of MySQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration 
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "3306",
+      "database.user": "debezium",
+      "database.password": "dbz",
+      "database.server.id": "184054",
+      "database.server.name": "dbserver1",
+      "database.whitelist": "inventory",
+      "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
+      "database.history.pulsar.topic": "history-topic",
+      "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "offset.storage.topic": "offset-topic"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mysql-source"
+  topicName: "debezium-mysql-topic"
+  archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mysql, docker image: debezium/example-mysql:0.8
+      database.hostname: "localhost"
+      database.port: "3306"
+      database.user: "debezium"
+      database.password: "dbz"
+      database.server.id: "184054"
+      database.server.name: "dbserver1"
+      database.whitelist: "inventory"
+      database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
+      database.history.pulsar.topic: "history-topic"
+      database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
+      key.converter: "org.apache.kafka.connect.json.JsonConverter"
+      value.converter: "org.apache.kafka.connect.json.JsonConverter"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## OFFSET_STORAGE_TOPIC_CONFIG
+      offset.storage.topic: "offset-topic"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.
+
+1. Start a MySQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysql \
+   -p 3306:3306 \
+   -e MYSQL_ROOT_PASSWORD=debezium \
+   -e MYSQL_USER=mysqluser \
+   -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+    * Use the **JSON** configuration file as shown previously. 
+   
+       Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
+       --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","va [...]
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --source-config-file debezium-mysql-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _public/default/dbserver1.inventory.products_ (which receives change events for the table _inventory.products_) using the subscription name _sub-products_.
+
+   ```bash
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MySQL client in docker.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysqlterm \
+   --link mysql \
+   --rm mysql:5.7 sh \
+   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+   
+   ```
+
+6. A MySQL client prompt appears.
+   
+   Use the following commands to change the data of the table _products_.
+
+   ```
+   
+   mysql> use inventory;
+   mysql> show tables;
+   mysql> SELECT * FROM  products;
+   mysql> UPDATE products SET name='1111111111' WHERE id=101;
+   mysql> UPDATE products SET name='1111111111' WHERE id=107;
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you can see that the data changes have been delivered through the _sub-products_ subscription.
+
+## Example of PostgreSQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "5432",
+      "database.user": "postgres",
+      "database.password": "postgres",
+      "database.dbname": "postgres",
+      "database.server.name": "dbserver1",
+      "schema.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-postgres-source"
+  topicName: "debezium-postgres-topic"
+  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for pg, docker image: debezium/example-postgress:0.8
+      database.hostname: "localhost"
+      database.port: "5432"
+      database.user: "postgres"
+      database.password: "postgres"
+      database.dbname: "postgres"
+      database.server.name: "dbserver1"
+      schema.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
+
+
+1. Start a PostgreSQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-postgres:0.8
+   $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432  debezium/example-postgres:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
+       --name debezium-postgres-source \
+       --destination-topic-name debezium-postgres-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-postgres-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a PostgreSQL client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-postgresql /bin/bash
+   
+   ```
+
+6. A PostgreSQL client prompt appears. 
+   
+   Use the following commands to change the data of the table _products_.
+
+   ```
+   
+   psql -U postgres postgres
+   postgres=# \c postgres;
+   You are now connected to database "postgres" as user "postgres".
+   postgres=# SET search_path TO inventory;
+   SET
+   postgres=# select * from products;
+    id  |        name        |                       description                       | weight
+   -----+--------------------+---------------------------------------------------------+--------
+    102 | car battery        | 12V car battery                                         |    8.1
+    103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 |    0.8
+    104 | hammer             | 12oz carpenter's hammer                                 |   0.75
+    105 | hammer             | 14oz carpenter's hammer                                 |  0.875
+    106 | hammer             | 16oz carpenter's hammer                                 |      1
+    107 | rocks              | box of assorted rocks                                   |    5.3
+    108 | jacket             | water resistent black wind breaker                      |    0.1
+    109 | spare tire         | 24 inch spare tire                                      |   22.2
+    101 | 1111111111         | Small 2-wheel scooter                                   |   3.14
+   (9 rows)
+   
+   postgres=# UPDATE products SET name='1111111111' WHERE id=107;
+   UPDATE 1
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you receive messages similar to the following.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products. [...]
+   
+   ```
+
+## Example of MongoDB
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "mongodb.hosts": "rs0/mongodb:27017",
+      "mongodb.name": "dbserver1",
+      "mongodb.user": "debezium",
+      "mongodb.password": "dbz",
+      "mongodb.task.id": "1",
+      "database.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mongodb-source"
+  topicName: "debezium-mongodb-topic"
+  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mongodb, docker image: debezium/example-mongodb:0.10
+      mongodb.hosts: "rs0/mongodb:27017"
+      mongodb.name: "dbserver1"
+      mongodb.user: "debezium"
+      mongodb.password: "dbz"
+      mongodb.task.id: "1"
+      database.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.
+
+
+1. Start a MongoDB server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-mongodb:0.10
+   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017  debezium/example-mongodb:0.10
+   
+   ```
+
+    Use the following commands to initialize the data.
+
+    ``` bash
+    
+    ./usr/local/bin/init-inventory.sh
+    
+    ```
+
+    If the local host cannot access the container network, update the ```/etc/hosts``` file and add a rule such as ```127.0.0.1 6f114527a95f```, where `6f114527a95f` is the container ID, which you can get with ```docker ps -a```.
+
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
+       --name debezium-mongodb-source \
+       --destination-topic-name debezium-mongodb-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-mongodb-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MongoDB client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mongodb /bin/bash
+   
+   ```
+
+6. A MongoDB client prompt appears. Use the following commands to change the data of the collection _products_.
+
+   ```bash
+   
+   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
+   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you receive messages similar to the following.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type" [...]
+   
+   ```
+
+## FAQ
+ 
+### The Debezium PostgreSQL connector hangs when creating a snapshot
+
+```
+
+#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
+    java.lang.Thread.State: WAITING (parking)
+     at sun.misc.Unsafe.park(Native Method)
+     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
+     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
+     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
+     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
+     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
+     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
+     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
+     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
+     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
+     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
+     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
+     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
+     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
+     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
+     at java.lang.Thread.run(Thread.java:748)
+
+```
+
+If you encounter this problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following property to the configuration file:
+
+```
+
+max.queue.size=
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-debezium-source.md b/site2/website-next/versioned_docs/version-2.6.0/io-debezium-source.md
new file mode 100644
index 0000000..808051b
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-debezium-source.md
@@ -0,0 +1,564 @@
+---
+id: io-debezium-source
+title: Debezium source connector
+sidebar_label: "Debezium source connector"
+original_id: io-debezium-source
+---
+
+The Debezium source connector pulls messages from MySQL, PostgreSQL, or MongoDB 
+and persists the messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of Debezium source connector has the following properties.
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `task.class` | true | null | A source task class implemented in Debezium. |
+| `database.hostname` | true | null | The address of a database server. |
+| `database.port` | true | null | The port number of a database server.|
+| `database.user` | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | null | The password for a database user that has the required privileges. |
+| `database.server.id` | true | null | The connector's identifier, which must be unique within a database cluster and is similar to the database's server-id configuration property. |
+| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used. |
+| `database.whitelist` | false | null | A list of all databases hosted by this server that are monitored by the connector.<br /><br /> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value.  |
+| `database.history` | true | null | The name of the database history class. |
+| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements. <br /><br />**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | true | null | Pulsar cluster service URL. |
+| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. |
+| `json-with-envelope` | false | false | Whether the consumed message contains the schema as well as the payload. If set to `false` (default), the message consists of the payload only. See the converter options below for details. |
+
+### Converter Options
+
+1. org.apache.kafka.connect.json.JsonConverter
+
+   The `json-with-envelope` option is valid only for the JsonConverter. Its default value is false. When it is false, the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload only.
+
+   When `json-with-envelope` is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of the schema and the payload.
+
+2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
+
+   If you select the AvroConverter, the Pulsar consumer should use the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message consists of the payload.
+
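+The following is a minimal Java sketch of such a consumer, assuming a local standalone cluster, the default `json-with-envelope=false` behavior, and the topic and subscription names used in the MySQL example below; adjust these to your own setup.
+
+```java
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.Schema;
+import org.apache.pulsar.client.api.schema.GenericRecord;
+import org.apache.pulsar.common.schema.KeyValue;
+import org.apache.pulsar.common.schema.KeyValueEncodingType;
+
+public class DebeziumKeyValueConsumer {
+    public static void main(String[] args) throws Exception {
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://127.0.0.1:6650")
+                .build();
+
+        // KeyValue schema matching the default JsonConverter behavior (payload only)
+        Consumer<KeyValue<GenericRecord, GenericRecord>> consumer = client
+                .newConsumer(Schema.KeyValue(
+                        Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(),
+                        KeyValueEncodingType.SEPARATED))
+                .topic("public/default/dbserver1.inventory.products")
+                .subscriptionName("sub-products")
+                .subscribe();
+
+        Message<KeyValue<GenericRecord, GenericRecord>> msg = consumer.receive();
+        KeyValue<GenericRecord, GenericRecord> kv = msg.getValue();
+        // The key carries the primary key of the changed row; the value carries the change event.
+        System.out.println("key fields: " + kv.getKey().getFields());
+        System.out.println("value fields: " + kv.getValue().getFields());
+        consumer.acknowledge(msg);
+
+        consumer.close();
+        client.close();
+    }
+}
+
+```
+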
+### MongoDB Configuration
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
+| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
+| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
+
+
+
+## Example of MySQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration 
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "3306",
+      "database.user": "debezium",
+      "database.password": "dbz",
+      "database.server.id": "184054",
+      "database.server.name": "dbserver1",
+      "database.whitelist": "inventory",
+      "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
+      "database.history.pulsar.topic": "history-topic",
+      "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650",
+      "offset.storage.topic": "offset-topic"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mysql-source"
+  topicName: "debezium-mysql-topic"
+  archive: "connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mysql, docker image: debezium/example-mysql:0.8
+      database.hostname: "localhost"
+      database.port: "3306"
+      database.user: "debezium"
+      database.password: "dbz"
+      database.server.id: "184054"
+      database.server.name: "dbserver1"
+      database.whitelist: "inventory"
+      database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
+      database.history.pulsar.topic: "history-topic"
+      database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
+      key.converter: "org.apache.kafka.connect.json.JsonConverter"
+      value.converter: "org.apache.kafka.connect.json.JsonConverter"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+      ## OFFSET_STORAGE_TOPIC_CONFIG
+      offset.storage.topic: "offset-topic"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.
+
+1. Start a MySQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysql \
+   -p 3306:3306 \
+   -e MYSQL_ROOT_PASSWORD=debezium \
+   -e MYSQL_USER=mysqluser \
+   -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+    * Use the **JSON** configuration file as shown previously. 
+   
+       Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mysql-@pulsar:version@.nar \
+       --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","va [...]
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --source-config-file debezium-mysql-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
+
+   ```bash
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MySQL client in docker.
+
+   ```bash
+   
+   $ docker run -it --rm \
+   --name mysqlterm \
+   --link mysql \
+   --rm mysql:5.7 sh \
+   -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+   
+   ```
+
+6. A MySQL client prompt appears. 
+   
+   Use the following commands to change the data of the table _products_.
+
+   ```
+   
+   mysql> use inventory;
+   mysql> show tables;
+   mysql> SELECT * FROM  products;
+   mysql> UPDATE products SET name='1111111111' WHERE id=101;
+   mysql> UPDATE products SET name='1111111111' WHERE id=107;
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you can see that the data changes have been captured in the _sub-products_ topic.
+
+## Example of PostgreSQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "database.hostname": "localhost",
+      "database.port": "5432",
+      "database.user": "postgres",
+      "database.password": "postgres",
+      "database.dbname": "postgres",
+      "database.server.name": "dbserver1",
+      "schema.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-postgres-source"
+  topicName: "debezium-postgres-topic"
+  archive: "connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for pg, docker image: debezium/example-postgres:0.8
+      database.hostname: "localhost"
+      database.port: "5432"
+      database.user: "postgres"
+      database.password: "postgres"
+      database.dbname: "postgres"
+      database.server.name: "dbserver1"
+      schema.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
+
+
+1. Start a PostgreSQL server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-postgres:0.8
+   $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432  debezium/example-postgres:0.8
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-postgres-@pulsar:version@.nar \
+       --name debezium-postgres-source \
+       --destination-topic-name debezium-postgres-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-postgres-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a PostgreSQL client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-postgresql /bin/bash
+   
+   ```
+
+6. A PostgreSQL client prompt appears. 
+   
+   Use the following commands to change the data of the table _products_.
+
+   ```
+   
+   psql -U postgres postgres
+   postgres=# \c postgres;
+   You are now connected to database "postgres" as user "postgres".
+   postgres=# SET search_path TO inventory;
+   SET
+   postgres=# select * from products;
+    id  |        name        |                       description                       | weight
+   -----+--------------------+---------------------------------------------------------+--------
+    102 | car battery        | 12V car battery                                         |    8.1
+    103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 |    0.8
+    104 | hammer             | 12oz carpenter's hammer                                 |   0.75
+    105 | hammer             | 14oz carpenter's hammer                                 |  0.875
+    106 | hammer             | 16oz carpenter's hammer                                 |      1
+    107 | rocks              | box of assorted rocks                                   |    5.3
+    108 | jacket             | water resistent black wind breaker                      |    0.1
+    109 | spare tire         | 24 inch spare tire                                      |   22.2
+    101 | 1111111111         | Small 2-wheel scooter                                   |   3.14
+   (9 rows)
+   
+   postgres=# UPDATE products SET name='1111111111' WHERE id=107;
+   UPDATE 1
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you receive messages similar to the following.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products. [...]
+   
+   ```
+
+## Example of MongoDB
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+  ```json
+  
+  {
+      "mongodb.hosts": "rs0/mongodb:27017",
+      "mongodb.name": "dbserver1",
+      "mongodb.user": "debezium",
+      "mongodb.password": "dbz",
+      "mongodb.task.id": "1",
+      "database.whitelist": "inventory",
+      "pulsar.service.url": "pulsar://127.0.0.1:6650"
+  }
+  
+  ```
+
+* YAML 
+
+  You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "debezium-mongodb-source"
+  topicName: "debezium-mongodb-topic"
+  archive: "connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar"
+  parallelism: 1
+
+  configs:
+
+      ## config for mongodb, docker image: debezium/example-mongodb:0.10
+      mongodb.hosts: "rs0/mongodb:27017"
+      mongodb.name: "dbserver1"
+      mongodb.user: "debezium"
+      mongodb.password: "dbz"
+      mongodb.task.id: "1"
+      database.whitelist: "inventory"
+
+      ## PULSAR_SERVICE_URL_CONFIG
+      pulsar.service.url: "pulsar://127.0.0.1:6650"
+  
+  ```
+
+### Usage
+
+This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.
+
+
+1. Start a MongoDB server with a database from which Debezium can capture changes.
+
+   ```bash
+   
+   $ docker pull debezium/example-mongodb:0.10
+   $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017  debezium/example-mongodb:0.10
+   
+   ```
+
+    Use the following commands to initialize the data.
+
+    ``` bash
+    
+    ./usr/local/bin/init-inventory.sh
+    
+    ```
+
+    If the local host cannot access the container network, update the ```/etc/hosts``` file and add a rule such as ```127.0.0.1 6f114527a95f```, where `6f114527a95f` is the container ID, which you can get with ```docker ps -a```.
+
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+    
+    Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar`.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun \
+       --archive connectors/pulsar-io-debezium-mongodb-@pulsar:version@.nar \
+       --name debezium-mongodb-source \
+       --destination-topic-name debezium-mongodb-topic \
+       --tenant public \
+       --namespace default \
+       --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin source localrun  \
+       --source-config-file debezium-mongodb-source-config.yaml
+       
+       ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+   ```
+   
+   $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+   
+   ```
+
+5. Start a MongoDB client in docker.
+
+   ```bash
+   
+   $ docker exec -it pulsar-mongodb /bin/bash
+   
+   ```
+
+6. A MongoDB client prompt appears. Use the following commands to change the data of the collection _products_.
+
+   ```bash
+   
+   mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
+   db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
+   
+   ```
+
+   In the terminal window where you subscribed to the topic, you receive messages similar to the following.
+
+   ```bash
+   
+   ----- got message -----
+   {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type" [...]
+   
+   ```
+
+## FAQ
+ 
+### The Debezium PostgreSQL connector hangs when creating a snapshot
+
+```
+
+#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
+    java.lang.Thread.State: WAITING (parking)
+     at sun.misc.Unsafe.park(Native Method)
+     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
+     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
+     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
+     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
+     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
+     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
+     at io.debezium.connector.postgresql.PostgresConnectorTask$Lambda$203/385424085.accept(Unknown Source)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$240/1347039967.accept(Unknown Source)
+     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$Lambda$206/589332928.run(Unknown Source)
+     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
+     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
+     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
+     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
+     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
+     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
+     at java.lang.Thread.run(Thread.java:748)
+
+```
+
+If you encounter this problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following property to the configuration file:
+
+```
+
+max.queue.size=
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-dynamodb-source.md b/site2/website-next/versioned_docs/version-2.6.0/io-dynamodb-source.md
new file mode 100644
index 0000000..ce58578
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-dynamodb-source.md
@@ -0,0 +1,80 @@
+---
+id: io-dynamodb-source
+title: AWS DynamoDB source connector
+sidebar_label: "AWS DynamoDB source connector"
+original_id: io-dynamodb-source
+---
+
+The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
+
+This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
+which uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
+consuming of messages. The KCL uses DynamoDB to track consumer state and requires CloudWatch access to log metrics.
+
+
+## Configuration
+
+The configuration of the DynamoDB source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br /><br />Below are the available options:<br /><br /><li>`AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br /><br /></li><li>`LATEST`: start after the most recent data record.<br /><br /></li><li>`TRIM_HORIZON`: start from the oldest available data record.</li>
+`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time at which to start consumption.
+`applicationName`|String|false|Pulsar IO connector|The name of the KCL application.  Must be unique, as it is used to define the table name for the dynamo table used for state tracking. <br /><br />By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguish requests made by separate connector instances.
+`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds.
+`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds.
+`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
+`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. <br /><br />Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
+`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br /><br />**Example**<br /> us-west-1, us-west-2
+`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream ARN.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br /><br />`awsCredentialProviderPlugin` has the following built-in plugs:<br /><br /><li>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br /> this plugin uses the default AWS provider chain.<br />For more information, see [using the [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Example
+
+Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "awsEndpoint": "https://some.endpoint.aws",
+      "awsRegion": "us-east-1",
+      "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
+      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+      "applicationName": "My test application",
+      "checkpointInterval": "30000",
+      "backoffTime": "4000",
+      "numRetries": "3",
+      "receiveQueueSize": 2000,
+      "initialPositionInStream": "TRIM_HORIZON",
+      "startAtTime": "2019-03-05T19:28:58.000Z"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      awsEndpoint: "https://some.endpoint.aws"
+      awsRegion: "us-east-1"
+      awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
+      awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+      applicationName: "My test application"
+      checkpointInterval: 30000
+      backoffTime: 4000
+      numRetries: 3
+      receiveQueueSize: 2000
+      initialPositionInStream: "TRIM_HORIZON"
+      startAtTime: "2019-03-05T19:28:58.000Z"
+  
+  ```
+
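+### Usage
+
+With a configuration file in place, you can run the connector locally against a standalone cluster. The following is a minimal sketch, assuming the connector NAR is available at `connectors/pulsar-io-dynamodb-@pulsar:version@.nar` and the YAML configuration above is saved as `dynamodb-source-config.yaml`; the archive path, file name, and topic name are illustrative and may differ in your installation.
+
+```bash
+
+$ bin/pulsar-admin sources localrun \
+--archive connectors/pulsar-io-dynamodb-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name dynamodb-source \
+--destination-topic-name dynamodb-topic \
+--source-config-file dynamodb-source-config.yaml
+
+```
+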
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-elasticsearch-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-elasticsearch-sink.md
new file mode 100644
index 0000000..4acedd3
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-elasticsearch-sink.md
@@ -0,0 +1,173 @@
+---
+id: io-elasticsearch-sink
+title: ElasticSearch sink connector
+sidebar_label: "ElasticSearch sink connector"
+original_id: io-elasticsearch-sink
+---
+
+The ElasticSearch sink connector pulls messages from Pulsar topics and persists the messages to indexes.
+
+## Configuration
+
+The configuration of the ElasticSearch sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
+| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
+| `typeName` | String | false | "_doc" | The type name to which the connector writes messages. <br /><br /> The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left as the default otherwise. |
+| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
+| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
+| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster. <br /><br />If `username` is set, then `password` should also be provided. |
+| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster. <br /><br />If `username` is set, then `password` should also be provided.  |
+
+## Example
+
+Before using the ElasticSearch sink connector, you need to create a configuration file through one of the following methods.
+
+### Configuration
+
+#### For Elasticsearch After 6.2
+
+* JSON 
+
+  ```json
+  
+  {
+      "elasticSearchUrl": "http://localhost:9200",
+      "indexName": "my_index",
+      "username": "scooby",
+      "password": "doobie"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      elasticSearchUrl: "http://localhost:9200"
+      indexName: "my_index"
+      username: "scooby"
+      password: "doobie"
+  
+  ```
+
+#### For Elasticsearch Before 6.2
+
+* JSON 
+
+  ```json
+  
+  {
+      "elasticSearchUrl": "http://localhost:9200",
+      "indexName": "my_index",
+      "typeName": "doc",
+      "username": "scooby",
+      "password": "doobie"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      elasticSearchUrl: "http://localhost:9200"
+      indexName: "my_index"
+      typeName: "doc"
+      username: "scooby"
+      password: "doobie"
+  
+  ```
+
+### Usage
+
+1. Start a single node Elasticsearch cluster.
+
+   ```bash
+   
+   $ docker run -p 9200:9200 -p 9300:9300 \
+       -e "discovery.type=single-node" \
+       docker.elastic.co/elasticsearch/elasticsearch:7.5.1
+   
+   ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+   ```bash
+   
+   $ bin/pulsar standalone
+   
+   ```
+
+   Make sure the NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`.
+
+3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods.
+   * Use the **JSON** configuration as shown previously. 
+
+       ```bash
+       
+       $ bin/pulsar-admin sinks localrun \
+           --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
+           --tenant public \
+           --namespace default \
+           --name elasticsearch-test-sink \
+           --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \
+           --inputs elasticsearch_test
+       
+       ```
+
+   * Use the **YAML** configuration file as shown previously.
+
+       ```bash
+       
+       $ bin/pulsar-admin sinks localrun \
+           --archive connectors/pulsar-io-elastic-search-@pulsar:version@.nar \
+           --tenant public \
+           --namespace default \
+           --name elasticsearch-test-sink \
+           --sink-config-file elasticsearch-sink.yml \
+           --inputs elasticsearch_test
+       
+       ```
+
+4. Publish records to the topic.
+
+   ```bash
+   
+   $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}"
+   
+   ```
+
+5. Check documents in Elasticsearch.
+   
+   * Refresh the index.
+
+       ```bash
+       
+       $ curl -s http://localhost:9200/my_index/_refresh
+       
+       ```
+
+   * Search for documents.
+
+       ```bash
+       
+       $ curl -s http://localhost:9200/my_index/_search
+       
+       ```
+
+       You can see that the record published earlier has been successfully written into Elasticsearch.
+
+       ```json
+       
+       {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}}
+       
+       ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-file-source.md b/site2/website-next/versioned_docs/version-2.6.0/io-file-source.md
new file mode 100644
index 0000000..e9d710c
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-file-source.md
@@ -0,0 +1,160 @@
+---
+id: io-file-source
+title: File source connector
+sidebar_label: "File source connector"
+original_id: io-file-source
+---
+
+The File source connector pulls messages from files in directories and persists the messages to Pulsar topics.
+
+## Configuration
+
+The configuration of the File source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `inputDirectory` | String|true  | No default value|The input directory to pull files. |
+| `recurse` | Boolean|false | true | Whether to pull files from subdirectory or not.|
+| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. |
+| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. |
+| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. |
+| `minimumFileAge` | Integer|false | 0 | The minimum age that a file must reach before it can be processed. <br /><br />Any file younger than `minimumFileAge` (according to the last modification date) is ignored. |
+| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age that a file can have and still be processed. <br /><br />Any file older than `maximumFileAge` (according to last modification date) is ignored. |
+| `minimumSize` |Integer| false |1 | The minimum size (in bytes) that a file must have to be processed. |
+| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) that a file can have and still be processed. |
+| `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. |
+| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
+| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.<br /><br /> This allows you to process a larger number of files concurrently. <br /><br />However, setting this to a value greater than 1 means that data from multiple files can be interleaved in the target topic. |
+
+### Example
+
+Before using the File source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "inputDirectory": "/Users/david",
+      "recurse": true,
+      "keepFile": true,
+      "fileFilter": "[^\\.].*",
+      "pathFilter": "*",
+      "minimumFileAge": 0,
+      "maximumFileAge": 9999999999,
+      "minimumSize": 1,
+      "maximumSize": 5000000,
+      "ignoreHiddenFiles": true,
+      "pollingInterval": 5000,
+      "numWorkers": 1
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      inputDirectory: "/Users/david"
+      recurse: true
+      keepFile: true
+      fileFilter: "[^\\.].*"
+      pathFilter: "*"
+      minimumFileAge: 0
+      maximumFileAge: 9999999999
+      minimumSize: 1
+      maximumSize: 5000000
+      ignoreHiddenFiles: true
+      pollingInterval: 5000
+      numWorkers: 1
+  
+  ```
+
+## Usage
+
+Here is an example of using the File source connector.
+
+1. Pull a Pulsar image.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:{version}
+   
+   ```
+
+2. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
+   
+   ```
+
+3. Create a configuration file _file-connector.yaml_.
+
+   ```yaml
+   
+   configs:
+       inputDirectory: "/opt"
+   
+   ```
+
+4. Copy the configuration file _file-connector.yaml_ to the container.
+
+   ```bash
+   
+   $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/
+   
+   ```
+
+5. Download the File source connector.
+
+   ```bash
+   
+   $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar
+   
+   ```
+
+6. Start the File source connector.
+
+   ```bash
+   
+   $ docker exec -it pulsar-standalone /bin/bash
+
+   $ ./bin/pulsar-admin sources localrun \
+   --archive /pulsar/pulsar-io-file-{version}.nar \
+   --name file-test \
+   --destination-topic-name  pulsar-file-test \
+   --source-config-file /pulsar/file-connector.yaml
+   
+   ```
+
+7. Start a consumer.
+
+   ```bash
+   
+   ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test
+   
+   ```
+
+8. Write a message to the file _test.txt_.
+
+   ```bash
+   
+   echo "hello world!" > /opt/test.txt
+   
+   ```
+
+   The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   ----- got message -----
+   hello world!
+   
+   ```
+
+   
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-flume-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-flume-sink.md
new file mode 100644
index 0000000..b2ace53
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-flume-sink.md
@@ -0,0 +1,56 @@
+---
+id: io-flume-sink
+title: Flume sink connector
+sidebar_label: "Flume sink connector"
+original_id: io-flume-sink
+---
+
+The Flume sink connector pulls messages from Pulsar topics to logs.
+
+## Configuration
+
+The configuration of the Flume sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`name`|String|true|"" (empty string)|The name of the agent.
+`confFile`|String|true|"" (empty string)|The configuration file.
+`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed.
+`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
+`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration.
+
+### Example
+
+Before using the Flume sink connector, you need to create a configuration file through one of the following methods.
+
+> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf).
+
+* JSON 
+
+  ```json
+  
+  {
+      "name": "a1",
+      "confFile": "sink.conf",
+      "noReloadConf": "false",
+      "zkConnString": "",
+      "zkBasePath": ""
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      name: a1
+      confFile: sink.conf
+      noReloadConf: false
+      zkConnString: ""
+      zkBasePath: ""
+  
+  ```
+
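+### Usage
+
+Once the configuration file is ready, you can run the connector locally against a standalone cluster. The following is a minimal sketch, assuming the connector NAR is available at `connectors/pulsar-io-flume-@pulsar:version@.nar` and the YAML configuration above is saved as `flume-sink-config.yaml`; the archive path, file name, and input topic are illustrative.
+
+```bash
+
+$ bin/pulsar-admin sinks localrun \
+--archive connectors/pulsar-io-flume-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name flume-sink \
+--sink-config-file flume-sink-config.yaml \
+--inputs flume-test-topic
+
+```
+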
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-flume-source.md b/site2/website-next/versioned_docs/version-2.6.0/io-flume-source.md
new file mode 100644
index 0000000..b7fd7ed
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-flume-source.md
@@ -0,0 +1,56 @@
+---
+id: io-flume-source
+title: Flume source connector
+sidebar_label: "Flume source connector"
+original_id: io-flume-source
+---
+
+The Flume source connector pulls messages from logs to Pulsar topics.
+
+## Configuration
+
+The configuration of the Flume source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`name`|String|true|"" (empty string)|The name of the agent.
+`confFile`|String|true|"" (empty string)|The configuration file.
+`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed.
+`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
+`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration.
+
+### Example
+
+Before using the Flume source connector, you need to create a configuration file through one of the following methods.
+
+> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf).
+
+* JSON 
+
+  ```json
+  
+  {
+      "name": "a1",
+      "confFile": "source.conf",
+      "noReloadConf": "false",
+      "zkConnString": "",
+      "zkBasePath": ""
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      name: a1
+      confFile: source.conf
+      noReloadConf: false
+      zkConnString: ""
+      zkBasePath: ""
+  
+  ```
+
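+### Usage
+
+Once the configuration file is ready, you can run the connector locally against a standalone cluster. The following is a minimal sketch, assuming the connector NAR is available at `connectors/pulsar-io-flume-@pulsar:version@.nar` and the YAML configuration above is saved as `flume-source-config.yaml`; the archive path, file name, and destination topic are illustrative.
+
+```bash
+
+$ bin/pulsar-admin sources localrun \
+--archive connectors/pulsar-io-flume-@pulsar:version@.nar \
+--tenant public \
+--namespace default \
+--name flume-source \
+--destination-topic-name flume-test-topic \
+--source-config-file flume-source-config.yaml
+
+```
+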
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-hbase-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-hbase-sink.md
new file mode 100644
index 0000000..1737b00
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-hbase-sink.md
@@ -0,0 +1,67 @@
+---
+id: io-hbase-sink
+title: HBase sink connector
+sidebar_label: "HBase sink connector"
+original_id: io-hbase-sink
+---
+
+The HBase sink connector pulls the messages from Pulsar topics 
+and persists the messages to HBase tables.
+
+## Configuration
+
+The configuration of the HBase sink connector has the following properties.
+
+### Property
+
+| Name | Type|Default | Required | Description |
+|------|---------|----------|-------------|---
+| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. |
+| `zookeeperQuorum` | String|None | true | HBase system configuration about `hbase.zookeeper.quorum` value. |
+| `zookeeperClientPort` | String|2181 | false | HBase system configuration about `hbase.zookeeper.property.clientPort` value. |
+| `zookeeperZnodeParent` | String|/hbase | false | HBase system configuration about `zookeeper.znode.parent` value. |
+| `tableName` | String | None | true | HBase table, the value is `namespace:tableName`. |
+| `rowKeyName` | String|None | true | HBase table rowkey name. |
+| `familyName` | String|None | true | HBase table column family name. |
+| `qualifierNames` |String| None | true | HBase table column qualifier names. |
+| `batchTimeMs` | Long|1000l| false | HBase table operation timeout in milliseconds. |
+| `batchSize` | int|200| false | Batch size of updates made to the HBase table. |
+
+### Example
+
+Before using the HBase sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "hbaseConfigResources": "hbase-site.xml",
+      "zookeeperQuorum": "localhost",
+      "zookeeperClientPort": "2181",
+      "zookeeperZnodeParent": "/hbase",
+      "tableName": "pulsar_hbase",
+      "rowKeyName": "rowKey",
+      "familyName": "info",
+      "qualifierNames": [ 'name', 'address', 'age']
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      hbaseConfigResources: "hbase-site.xml"
+      zookeeperQuorum: "localhost"
+      zookeeperClientPort: "2181"
+      zookeeperZnodeParent: "/hbase"
+      tableName: "pulsar_hbase"
+      rowKeyName: "rowKey"
+      familyName: "info"
+      qualifierNames: [ 'name', 'address', 'age']
+  
+  ```
+
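+Before starting the sink, the target table and column family must already exist in HBase. The following is a minimal sketch in the HBase shell, assuming the example configuration above (table `pulsar_hbase` with column family `info`):
+
+```bash
+
+$ hbase shell
+hbase(main):001:0> create 'pulsar_hbase', 'info'
+
+```
+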
+  
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-hdfs2-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-hdfs2-sink.md
new file mode 100644
index 0000000..9d834e7
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-hdfs2-sink.md
@@ -0,0 +1,59 @@
+---
+id: io-hdfs2-sink
+title: HDFS2 sink connector
+sidebar_label: "HDFS2 sink connector"
+original_id: io-hdfs2-sink
+---
+
+The HDFS2 sink connector pulls the messages from Pulsar topics 
+and persists the messages to HDFS files.
+
+## Configuration
+
+The configuration of the HDFS2 sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.<br /><br />**Example**<br />'core-site.xml'<br />'hdfs-site.xml' |
+| `directory` | String | true | None|The HDFS directory from which files are read or to which files are written. |
+| `encoding` | String |false |None |The character encoding for the files.<br /><br />**Example**<br />UTF-8<br />ASCII |
+| `compression` | Compression |false |None |The compression codec used to compress or de-compress the files on HDFS. <br /><br />Below are the available options:<br /><li>BZIP2<br /></li><li>DEFLATE<br /></li><li>GZIP<br /></li><li>LZ4<br /></li><li>SNAPPY</li>|
+| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
+| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
+| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.<br /><br />**Example**<br /> A value of topicA results in files named topicA-. |
+| `fileExtension` | String| false | None| The extension added to the files written to HDFS.<br /><br />**Example**<br />'.txt'<br /> '.seq' |
+| `separator` | char|false |None |The character used to separate records in a text file. <br /><br />If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
+| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
+| `maxPendingRecords` |int| false|Integer.MAX_VALUE |  The maximum number of records held in memory before acking. <br /><br />Setting this property to 1 causes each record to be written to disk before it is acked.<br /><br />Setting this property to a higher value allows buffering records before flushing them to disk. 
+
+### Example
+
+Before using the HDFS2 sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "hdfsConfigResources": "core-site.xml",
+      "directory": "/foo/bar",
+      "filenamePrefix": "prefix",
+      "compression": "SNAPPY"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      hdfsConfigResources: "core-site.xml"
+      directory: "/foo/bar"
+      filenamePrefix: "prefix"
+      compression: "SNAPPY"
+  
+  ```
+
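+As a minimal sketch, you can then submit the connector with `pulsar-admin`, assuming the YAML above is saved as `hdfs2-sink.yaml` and the HDFS2 connector NAR from your Pulsar distribution is used; the sink name and input topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and sink name to your environment.
+$ bin/pulsar-admin sinks create \
+  --archive connectors/pulsar-io-hdfs2-2.6.0.nar \
+  --sink-config-file hdfs2-sink.yaml \
+  --name hdfs2-sink \
+  --inputs hdfs2-messages
+
+```
+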
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-hdfs3-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-hdfs3-sink.md
new file mode 100644
index 0000000..aec065a
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-hdfs3-sink.md
@@ -0,0 +1,59 @@
+---
+id: io-hdfs3-sink
+title: HDFS3 sink connector
+sidebar_label: "HDFS3 sink connector"
+original_id: io-hdfs3-sink
+---
+
+The HDFS3 sink connector pulls the messages from Pulsar topics 
+and persists the messages to HDFS files.
+
+## Configuration
+
+The configuration of the HDFS3 sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.<br /><br />**Example**<br />'core-site.xml'<br />'hdfs-site.xml' |
+| `directory` | String | true | None|The HDFS directory where files are read from or written to. |
+| `encoding` | String |false |None |The character encoding for the files.<br /><br />**Example**<br />UTF-8<br />ASCII |
+| `compression` | Compression |false |None |The compression codec used to compress or de-compress the files on HDFS. <br /><br />Below are the available options:<br /><li>BZIP2<br /></li><li>DEFLATE<br /></li><li>GZIP<br /></li><li>LZ4<br /></li><li>SNAPPY</li>|
+| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
+| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
+| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.<br /><br />**Example**<br /> A value of topicA results in files named topicA-. |
+| `fileExtension` | String| false | None| The extension added to the files written to HDFS.<br /><br />**Example**<br />'.txt'<br /> '.seq' |
+| `separator` | char|false |None |The character used to separate records in a text file. <br /><br />If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
+| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
+| `maxPendingRecords` |int| false|Integer.MAX_VALUE |  The maximum number of records held in memory before acking. <br /><br />Setting this property to 1 causes each record to be written to disk before it is acked.<br /><br />Setting this property to a higher value allows buffering records before flushing them to disk. 
+
+### Example
+
+Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "hdfsConfigResources": "core-site.xml",
+      "directory": "/foo/bar",
+      "filenamePrefix": "prefix",
+      "compression": "SNAPPY"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      hdfsConfigResources: "core-site.xml"
+      directory: "/foo/bar"
+      filenamePrefix: "prefix"
+      compression: "SNAPPY"
+  
+  ```
+
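+Likewise, a minimal sketch of running this connector locally, assuming the YAML above is saved as `hdfs3-sink.yaml`; the NAR version and input topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version and topic to your environment.
+$ bin/pulsar-admin sinks localrun \
+  --archive connectors/pulsar-io-hdfs3-2.6.0.nar \
+  --sink-config-file hdfs3-sink.yaml \
+  --inputs hdfs3-messages
+
+```
+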
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-influxdb-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-influxdb-sink.md
new file mode 100644
index 0000000..9382f8c
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-influxdb-sink.md
@@ -0,0 +1,119 @@
+---
+id: io-influxdb-sink
+title: InfluxDB sink connector
+sidebar_label: "InfluxDB sink connector"
+original_id: io-influxdb-sink
+---
+
+The InfluxDB sink connector pulls messages from Pulsar topics 
+and persists the messages to InfluxDB.
+
+The InfluxDB sink provides different configurations for InfluxDBv1 and v2 respectively.
+
+## Configuration
+
+The configuration of the InfluxDB sink connector has the following properties.
+
+### Property
+#### InfluxDBv2
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
+| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. |
+| `organization` | String| true|" " (empty string)  | The InfluxDB organization to write to. |
+| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. |
+| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB. <br /><br />Below are the available options:<li>ns<br /></li><li>us<br /></li><li>ms<br /></li><li>s</li>|
+| `logLevel` | String|false| NONE|The log level for InfluxDB request and response. <br /><br />Below are the available options:<li>NONE<br /></li><li>BASIC<br /></li><li>HEADERS<br /></li><li>FULL</li>|
+| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
+| `batchTimeMs` |long|false| 1000L |   The InfluxDB operation time in milliseconds. |
+| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
+
+#### InfluxDBv1
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
+| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
+| `password` | String| false|" " (empty string)  | The password used to authenticate to InfluxDB. |
+| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. |
+| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB. <br /><br />Below are the available options:<li>ALL<br /></li><li> ANY<br /></li><li>ONE<br /></li><li>QUORUM </li>|
+| `logLevel` | String|false| NONE|The log level for InfluxDB request and response. <br /><br />Below are the available options:<li>NONE<br /></li><li>BASIC<br /></li><li>HEADERS<br /></li><li>FULL</li>|
+| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. |
+| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
+| `batchTimeMs` |long|false| 1000L |   The InfluxDB operation time in milliseconds. |
+| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
+
+### Example
+Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods.
+#### InfluxDBv2
+* JSON
+
+  ```json
+  
+  {
+      "influxdbUrl": "http://localhost:9999",
+      "organization": "example-org",
+      "bucket": "example-bucket",
+      "token": "xxxx",
+      "precision": "ns",
+      "logLevel": "NONE",
+      "gzipEnable": false,
+      "batchTimeMs": 1000,
+      "batchSize": 100
+  }
+  
+  ```
+
+  
+* YAML
+
+  ```yaml
+  
+  configs:
+      influxdbUrl: "http://localhost:9999"
+      organization: "example-org"
+      bucket: "example-bucket"
+      token: "xxxx"
+      precision: "ns"
+      logLevel: "NONE"
+      gzipEnable: false
+      batchTimeMs: 1000
+      batchSize: 100
+  
+  ```
+
+  
+#### InfluxDBv1
+
+* JSON 
+
+  ```json
+  
+  {
+      "influxdbUrl": "http://localhost:8086",
+      "database": "test_db",
+      "consistencyLevel": "ONE",
+      "logLevel": "NONE",
+      "retentionPolicy": "autogen",
+      "gzipEnable": false,
+      "batchTimeMs": 1000,
+      "batchSize": 100
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      influxdbUrl: "http://localhost:8086"
+      database: "test_db"
+      consistencyLevel: "ONE"
+      logLevel: "NONE"
+      retentionPolicy: "autogen"
+      gzipEnable: false
+      batchTimeMs: 1000
+      batchSize: 100
+  
+  ```
+
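+The following is a minimal sketch of submitting the sink with `pulsar-admin`, assuming one of the YAML files above (for example, the InfluxDBv2 one) is saved as `influxdb-sink.yaml`; the NAR version, sink name, and input topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and sink name to your environment.
+$ bin/pulsar-admin sinks create \
+  --archive connectors/pulsar-io-influxdb-2.6.0.nar \
+  --sink-config-file influxdb-sink.yaml \
+  --name influxdb-sink \
+  --inputs influxdb-metrics
+
+```
+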
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-jdbc-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-jdbc-sink.md
new file mode 100644
index 0000000..77dbb61
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-jdbc-sink.md
@@ -0,0 +1,157 @@
+---
+id: io-jdbc-sink
+title: JDBC sink connector
+sidebar_label: "JDBC sink connector"
+original_id: io-jdbc-sink
+---
+
+The JDBC sink connectors allow pulling messages from Pulsar topics 
+and persisting the messages to ClickHouse, MariaDB, PostgreSQL, and SQLite.
+
+> Currently, INSERT, DELETE and UPDATE operations are supported.
+
+## Configuration 
+
+The configuration of all JDBC sink connectors has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.<br /><br />**Note: `userName` is case-sensitive.**|
+| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`. <br /><br />**Note: `password` is case-sensitive.**|
+| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. |
+| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. |
+| `nonKey` | String|false | " " (empty string) | A comma-separated list of the fields used in updating events.  |
+| `key` | String|false | " " (empty string) | A comma-separated list of the fields used in the `where` condition of updating and deleting events. |
+| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. |
+| `batchSize` | int|false | 200 | The batch size of updates made to the database. |
+
+### Example for ClickHouse
+
+* JSON 
+
+  ```json
+  
+  {
+      "userName": "clickhouse",
+      "password": "password",
+      "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink",
+      "tableName": "pulsar_clickhouse_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-clickhouse-sink"
+  topicName: "persistent://public/default/jdbc-clickhouse-topic"
+  sinkType: "jdbc-clickhouse"    
+  configs:
+      userName: "clickhouse"
+      password: "password"
+      jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink"
+      tableName: "pulsar_clickhouse_jdbc_sink"
+  
+  ```
+
+### Example for MariaDB
+
+* JSON 
+
+  ```json
+  
+  {
+      "userName": "mariadb",
+      "password": "password",
+      "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink",
+      "tableName": "pulsar_mariadb_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-mariadb-sink"
+  topicName: "persistent://public/default/jdbc-mariadb-topic"
+  sinkType: "jdbc-mariadb"    
+  configs:
+      userName: "mariadb"
+      password: "password"
+      jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink"
+      tableName: "pulsar_mariadb_jdbc_sink"
+  
+  ```
+
+### Example for PostgreSQL
+
+Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "userName": "postgres",
+      "password": "password",
+      "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
+      "tableName": "pulsar_postgres_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-postgres-sink"
+  topicName: "persistent://public/default/jdbc-postgres-topic"
+  sinkType: "jdbc-postgres"    
+  configs:
+      userName: "postgres"
+      password: "password"
+      jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink"
+      tableName: "pulsar_postgres_jdbc_sink"
+  
+  ```
+
+For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql).
+
+### Example for SQLite
+
+* JSON 
+
+  ```json
+  
+  {
+      "jdbcUrl": "jdbc:sqlite:db.sqlite",
+      "tableName": "pulsar_sqlite_jdbc_sink"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  tenant: "public"
+  namespace: "default"
+  name: "jdbc-sqlite-sink"
+  topicName: "persistent://public/default/jdbc-sqlite-topic"
+  sinkType: "jdbc-sqlite"    
+  configs:
+      jdbcUrl: "jdbc:sqlite:db.sqlite"
+      tableName: "pulsar_sqlite_jdbc_sink"
+  
+  ```
+
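+As a minimal sketch, each of these sinks can be submitted with `pulsar-admin` by pointing `--sink-config-file` at the corresponding YAML file, for example the SQLite one saved as `jdbc-sqlite-sink.yaml`; the NAR file name varies by release, and the values below are illustrative. The PostgreSQL quickstart linked above shows a complete walkthrough.
+
+```bash
+
+# Illustrative values: the JDBC connector NAR name depends on your Pulsar release.
+$ bin/pulsar-admin sinks create \
+  --archive connectors/pulsar-io-jdbc-sqlite-2.6.0.nar \
+  --sink-config-file jdbc-sqlite-sink.yaml \
+  --inputs persistent://public/default/jdbc-sqlite-topic \
+  --parallelism 1
+
+```
+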
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-kafka-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-kafka-sink.md
new file mode 100644
index 0000000..09dad4c
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-kafka-sink.md
@@ -0,0 +1,72 @@
+---
+id: io-kafka-sink
+title: Kafka sink connector
+sidebar_label: "Kafka sink connector"
+original_id: io-kafka-sink
+---
+
+The Kafka sink connector pulls messages from Pulsar topics and persists the messages
+to Kafka topics.
+
+This guide explains how to configure and use the Kafka sink connector.
+
+## Configuration
+
+The configuration of the Kafka sink connector has the following parameters.
+
+### Property
+
+| Name | Type| Required | Default | Description 
+|------|----------|---------|-------------|-------------|
+|  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes. <br />This controls the durability of the sent records.
+|`batchSize`|long|false|16384L|The batch size that the Kafka producer uses when batching records before sending them to brokers.
+|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
+|`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
+| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
+| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.<br /><br />The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java).
+|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers. <br /><br />**Note:  other properties specified in the connector configuration file take precedence over this configuration**.
+
+
+### Example
+
+Before using the Kafka sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "bootstrapServers": "localhost:6667",
+      "topic": "test",
+      "acks": "1",
+      "batchSize": "16384",
+      "maxRequestSize": "1048576",
+      "producerConfigProperties":
+       {
+          "client.id": "test-pulsar-producer",
+          "security.protocol": "SASL_PLAINTEXT",
+          "sasl.mechanism": "GSSAPI",
+          "sasl.kerberos.service.name": "kafka",
+          "acks": "all" 
+       }
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      bootstrapServers: "localhost:6667"
+      topic: "test"
+      acks: "1"
+      batchSize: "16384"
+      maxRequestSize: "1048576"
+      producerConfigProperties:
+          client.id: "test-pulsar-producer"
+          security.protocol: "SASL_PLAINTEXT"
+          sasl.mechanism: "GSSAPI"
+          sasl.kerberos.service.name: "kafka"
+          acks: "all"   
+  ```
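+
+The following is a minimal sketch of submitting the sink with `pulsar-admin`, assuming the YAML above is saved as `kafka-sink.yaml` and the Kafka connector NAR from your Pulsar distribution is used; the sink name and input topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and sink name to your environment.
+$ bin/pulsar-admin sinks create \
+  --archive connectors/pulsar-io-kafka-2.6.0.nar \
+  --sink-config-file kafka-sink.yaml \
+  --name kafka-sink \
+  --inputs kafka-sink-topic
+
+```
+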
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-kafka-source.md b/site2/website-next/versioned_docs/version-2.6.0/io-kafka-source.md
new file mode 100644
index 0000000..8d68e29
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-kafka-source.md
@@ -0,0 +1,197 @@
+---
+id: io-kafka-source
+title: Kafka source connector
+sidebar_label: "Kafka source connector"
+original_id: io-kafka-source
+---
+
+The Kafka source connector pulls messages from Kafka topics and persists the messages
+to Pulsar topics.
+
+This guide explains how to configure and use the Kafka source connector.
+
+## Configuration
+
+The configuration of the Kafka source connector has the following properties.
+
+### Property
+
+| Name | Type| Required | Default | Description 
+|------|----------|---------|-------------|-------------|
+|  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+| `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
+| `fetchMinBytes` | long|false | 1 | The minimum byte expected for each fetch response. |
+| `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.<br /><br /> This committed offset is used when the process fails as the position from which a new consumer begins. |
+| `autoCommitIntervalMs` | long|false | 5000 | The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if `autoCommitEnabled` is set to true. |
+| `heartbeatIntervalMs` | long| false | 3000 | The interval between heartbeats to the consumer when using Kafka's group management facilities. <br /><br />**Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**.|
+| `sessionTimeoutMs` | long|false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facility. |
+| `topic` | String|true | " " (empty string)| The Kafka topic which sends messages to Pulsar. |
+|  `consumerConfigProperties` | Map| false | " " (empty string) | The consumer configuration properties to be passed to consumers. <br /><br />**Note: other properties specified in the connector configuration file take precedence over this configuration**. |
+| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.<br /> The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java).
+| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values.
+
+
+### Example
+
+Before using the Kafka source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "bootstrapServers": "pulsar-kafka:9092",
+      "groupId": "test-pulsar-io",
+      "topic": "my-topic",
+      "sessionTimeoutMs": "10000",
+      "autoCommitEnabled": false
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      bootstrapServers: "pulsar-kafka:9092"
+      groupId: "test-pulsar-io"
+      topic: "my-topic"
+      sessionTimeoutMs: "10000"
+      autoCommitEnabled: false
+  
+  ```
+
+## Usage
+
+Here is an example of using the Kafka source connector with the configuration file as shown previously.
+
+1. Download a Kafka client and a Kafka connector.
+
+   ```bash
+   
+   $ wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.2.1/kafka-clients-0.10.2.1.jar
+
+   $ wget https://archive.apache.org/dist/pulsar/pulsar-2.4.0/connectors/pulsar-io-kafka-2.4.0.nar
+   
+   ```
+
+2. Create a network.
+
+   ```bash
+   
+   $ docker network create kafka-pulsar
+   
+   ```
+
+3. Pull a ZooKeeper image and start ZooKeeper.
+
+   ```bash
+   
+   $ docker pull wurstmeister/zookeeper
+
+   $ docker run -d -it -p 2181:2181 --name pulsar-kafka-zookeeper --network kafka-pulsar wurstmeister/zookeeper
+   
+   ```
+
+4. Pull a Kafka image and start Kafka.
+
+   ```bash
+   
+   $ docker pull wurstmeister/kafka:2.11-1.0.2
+   
+   $ docker run -d -it --network kafka-pulsar -p 6667:6667 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=pulsar-kafka -e KAFKA_ZOOKEEPER_CONNECT=pulsar-kafka-zookeeper:2181 --name pulsar-kafka wurstmeister/kafka:2.11-1.0.2
+   
+   ```
+
+5. Pull a Pulsar image and start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:2.4.0
+   
+   $ docker run -d -it --network kafka-pulsar -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
+   
+   ```
+
+6. Create a producer file _kafka-producer.py_.
+
+   ```python
+   
+   from kafka import KafkaProducer
+   producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092')
+   future = producer.send('my-topic', b'hello world')
+   future.get()
+   
+   ```
+
+7. Create a consumer file _pulsar-client.py_.
+
+   ```python
+   
+   import pulsar
+
+   client = pulsar.Client('pulsar://localhost:6650')
+   consumer = client.subscribe('my-topic', subscription_name='my-aa')
+
+   while True:
+       msg = consumer.receive()
+       print(msg)
+       print(dir(msg))
+       print("Received message: '%s'" % msg.data())
+       consumer.acknowledge(msg)
+
+   client.close()
+   
+   ```
+
+8. Copy the following files to Pulsar.
+
+   ```bash
+   
+   $ docker cp pulsar-io-kafka-2.4.0.nar pulsar-kafka-standalone:/pulsar
+   $ docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf
+   $ docker cp kafka-clients-0.10.2.1.jar pulsar-kafka-standalone:/pulsar/lib
+   $ docker cp pulsar-client.py pulsar-kafka-standalone:/pulsar/
+   $ docker cp kafka-producer.py pulsar-kafka-standalone:/pulsar/
+   
+   ```
+
+9. Open a new terminal window and start the Kafka source connector in local run mode. 
+
+   ```bash
+   
+   $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+   $ ./bin/pulsar-admin source localrun \
+   --archive ./pulsar-io-kafka-2.4.0.nar \
+   --classname org.apache.pulsar.io.kafka.KafkaBytesSource \
+   --tenant public \
+   --namespace default \
+   --name kafka \
+   --destination-topic-name my-topic \
+   --source-config-file ./conf/kafkaSourceConfig.yaml \
+   --parallelism 1
+   
+   ```
+
+10. Open a new terminal window and run the Pulsar consumer.
+
+   ```bash
+   
+   $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+   $ python pulsar-client.py
+   
+   ```
+
+11. Open another terminal window, install the Kafka Python client, and run the Kafka producer to send a test message.
+
+   ```bash
+   
+   $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+   $ pip install kafka-python
+
+   $ python3 kafka-producer.py
+   
+   ```
+
+   The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   Received message: 'hello world'
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-kinesis-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-kinesis-sink.md
new file mode 100644
index 0000000..153587d
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-kinesis-sink.md
@@ -0,0 +1,80 @@
+---
+id: io-kinesis-sink
+title: Kinesis sink connector
+sidebar_label: "Kinesis sink connector"
+original_id: io-kinesis-sink
+---
+
+The Kinesis sink connector pulls data from Pulsar and persists data into Amazon Kinesis.
+
+## Configuration
+
+The configuration of the Kinesis sink connector has the following property.
+
+### Property
+
+| Name | Type|Required | Default | Description
+|------|----------|----------|---------|-------------|
+`messageFormat`|MessageFormat|true|ONLY_RAW_PAYLOAD|Message format in which Kinesis sink converts Pulsar messages and publishes to Kinesis streams.<br /><br />Below are the available options:<br /><br /><li>`ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream. <br /><br /></li><li>`FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties and encryptionCtx, and publishes JSON pa [...]
+`retainOrdering`|boolean|false|false|Whether to retain ordering when moving messages from Pulsar to Kinesis.
+`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br /><br />**Example**<br /> us-west-1, us-west-2
+`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}. <br /><br />It is a factory class which creates an AWSCredentialsProvider that is used by Kinesis sink. <br /><br />If it is empty, the Kinesis sink creates a default AWSCredentialsProvider which accepts json-map of credentials in `awsCredentialPlu [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Built-in plugins
+
+The following are built-in `AwsCredentialProviderPlugin` plugins:
+
+* `org.apache.pulsar.io.aws.AwsDefaultProviderChainPlugin`
+  
+  This plugin takes no configuration; it uses the default AWS provider chain. 
+  
+  For more information, see [AWS documentation](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).
+
+* `org.apache.pulsar.io.aws.STSAssumeRoleProviderPlugin`
+  
+  This plugin takes a configuration (via the `awsCredentialPluginParam`) that describes a role to assume when running the KCL.
+
+  This configuration takes the form of a small JSON document, for example:
+
+  ```json
+  
+  {"roleArn": "arn...", "roleSessionName": "name"}
+  
+  ```
+
+### Example
+
+Before using the Kinesis sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "awsEndpoint": "some.endpoint.aws",
+      "awsRegion": "us-east-1",
+      "awsKinesisStreamName": "my-stream",
+      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+      "messageFormat": "ONLY_RAW_PAYLOAD",
+      "retainOrdering": "true"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      awsEndpoint: "some.endpoint.aws"
+      awsRegion: "us-east-1"
+      awsKinesisStreamName: "my-stream"
+      awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+      messageFormat: "ONLY_RAW_PAYLOAD"
+      retainOrdering: "true"
+  
+  ```
+
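+A minimal sketch of submitting the sink with `pulsar-admin`, assuming the YAML above is saved as `kinesis-sink.yaml`; the NAR version, sink name, and input topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and sink name to your environment.
+$ bin/pulsar-admin sinks create \
+  --archive connectors/pulsar-io-kinesis-2.6.0.nar \
+  --sink-config-file kinesis-sink.yaml \
+  --name kinesis-sink \
+  --inputs kinesis-sink-topic
+
+```
+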
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-kinesis-source.md b/site2/website-next/versioned_docs/version-2.6.0/io-kinesis-source.md
new file mode 100644
index 0000000..0d07eef
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-kinesis-source.md
@@ -0,0 +1,81 @@
+---
+id: io-kinesis-source
+title: Kinesis source connector
+sidebar_label: "Kinesis source connector"
+original_id: io-kinesis-source
+---
+
+The Kinesis source connector pulls data from Amazon Kinesis and persists data into Pulsar.
+
+This connector uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual consuming of messages. The KCL uses DynamoDB to track state for consumers.
+
+> Note: Currently, the Kinesis source connector only supports raw messages. If you use KMS encrypted messages, the encrypted messages are sent downstream. This connector will support decrypting messages in a future release.
+
+
+## Configuration
+
+The configuration of the Kinesis source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br /><br />Below are the available options:<br /><br /><li>`AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br /><br /></li><li>`LATEST`: start after the most recent data record.<br /><br /></li><li>`TRIM_HORIZON`: start from the oldest available data record.</li>
+`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, it specifies the point in time to start consumption.
+`applicationName`|String|false|Pulsar IO connector|The name of the Amazon Kinesis application. <br /><br />By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguish requests made by separate connector instances.
+`checkpointInterval`|long|false|60000|The frequency of the Kinesis stream checkpoint in milliseconds.
+`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds.
+`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
+`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. <br /><br />Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
+`dynamoEndpoint`|String|false|" " (empty string)|The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`cloudwatchEndpoint`|String|false|" " (empty string)|The Cloudwatch end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`useEnhancedFanOut`|boolean|false|true|If set to true, it uses Kinesis enhanced fan-out.<br /><br />If set to false, it uses polling.
+`awsEndpoint`|String|false|" " (empty string)|The Kinesis end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br /><br />**Example**<br /> us-west-1, us-west-2
+`awsKinesisStreamName`|String|true|" " (empty string)|The Kinesis stream name.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of the implementation of {@inject: github:AwsCredentialProviderPlugin:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br /><br />`awsCredentialProviderPlugin` has the following built-in plugins:<br /><br /><li>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br /> this plugin uses the default AWS provider chain.<br />For more information, see [using the [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Example
+
+Before using the Kinesis source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "awsEndpoint": "https://some.endpoint.aws",
+      "awsRegion": "us-east-1",
+      "awsKinesisStreamName": "my-stream",
+      "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+      "applicationName": "My test application",
+      "checkpointInterval": "30000",
+      "backoffTime": "4000",
+      "numRetries": "3",
+      "receiveQueueSize": 2000,
+      "initialPositionInStream": "TRIM_HORIZON",
+      "startAtTime": "2019-03-05T19:28:58.000Z"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      awsEndpoint: "https://some.endpoint.aws"
+      awsRegion: "us-east-1"
+      awsKinesisStreamName: "my-stream"
+      awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+      applicationName: "My test application"
+      checkpointInterval: 30000
+      backoffTime: 4000
+      numRetries: 3
+      receiveQueueSize: 2000
+      initialPositionInStream: "TRIM_HORIZON"
+      startAtTime: "2019-03-05T19:28:58.000Z"
+  
+  ```
+
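+A minimal sketch of submitting the source with `pulsar-admin`, assuming the YAML above is saved as `kinesis-source.yaml`; the NAR version, source name, and destination topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and source name to your environment.
+$ bin/pulsar-admin sources create \
+  --archive connectors/pulsar-io-kinesis-2.6.0.nar \
+  --source-config-file kinesis-source.yaml \
+  --name kinesis-source \
+  --destination-topic-name kinesis-topic
+
+```
+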
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-mongo-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-mongo-sink.md
new file mode 100644
index 0000000..3e6b3e6
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-mongo-sink.md
@@ -0,0 +1,57 @@
+---
+id: io-mongo-sink
+title: MongoDB sink connector
+sidebar_label: "MongoDB sink connector"
+original_id: io-mongo-sink
+---
+
+The MongoDB sink connector pulls messages from Pulsar topics 
+and persists the messages to collections.
+
+## Configuration
+
+The configuration of the MongoDB sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects. <br /><br />For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). |
+| `database` | String| true| " " (empty string)| The database name to which the collection belongs. |
+| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. |
+| `batchSize` | int|false|100 | The batch size of writing messages to collections. |
+| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. |
+
+
+### Example
+
+Before using the Mongo sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "mongoUri": "mongodb://localhost:27017",
+      "database": "pulsar",
+      "collection": "messages",
+      "batchSize": "2",
+      "batchTimeMs": "500"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      mongoUri: "mongodb://localhost:27017"
+      database: "pulsar"
+      collection: "messages"
+      batchSize: 2
+      batchTimeMs: 500
+  
+  ```
+
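+A minimal sketch of running the sink locally with `pulsar-admin`, assuming the YAML above is saved as `mongodb-sink.yaml`; the NAR version, sink name, and input topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and sink name to your environment.
+$ bin/pulsar-admin sinks localrun \
+  --archive connectors/pulsar-io-mongo-2.6.0.nar \
+  --sink-config-file mongodb-sink.yaml \
+  --name mongo-sink \
+  --inputs mongo-messages
+
+```
+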
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-netty-source.md b/site2/website-next/versioned_docs/version-2.6.0/io-netty-source.md
new file mode 100644
index 0000000..e1ec8d8
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-netty-source.md
@@ -0,0 +1,241 @@
+---
+id: io-netty-source
+title: Netty source connector
+sidebar_label: "Netty source connector"
+original_id: io-netty-source
+---
+
+The Netty source connector opens a port that accepts incoming data via the configured network protocol 
+and publishes it to user-defined Pulsar topics.
+
+This connector can be used in a containerized (for example, k8s) deployment. Otherwise, if the connector runs in process or thread mode, the instances may conflict when listening on ports.
+
+## Configuration
+
+The configuration of the Netty source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `type` |String| true |tcp | The network protocol over which data is transmitted to Netty. <br /><br />Below are the available options:<br /><li>tcp</li><li>http</li><li>udp </li>|
+| `host` | String|true | 127.0.0.1 | The host name or address on which the source instance listens. |
+| `port` | int|true | 10999 | The port on which the source instance listens. |
+| `numberOfThreads` |int| true |1 | The number of threads used by the Netty server to accept incoming connections and handle the traffic of accepted connections. |
+
+
+### Example
+
+Before using the Netty source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "type": "tcp",
+      "host": "127.0.0.1",
+      "port": "10911",
+      "numberOfThreads": "1"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      type: "tcp"
+      host: "127.0.0.1"
+      port: 10999
+      numberOfThreads: 1
+  
+  ```
+
+## Usage 
+
+The following examples show how to use the Netty source connector with TCP and HTTP.
+
+### TCP 
+
+1. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:{version}
+
+   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
+   
+   ```
+
+2. Create a configuration file _netty-source-config.yaml_.
+
+   ```yaml
+   
+   configs:
+       type: "tcp"
+       host: "127.0.0.1"
+       port: 10999
+       numberOfThreads: 1
+   
+   ```
+
+3. Copy the configuration file _netty-source-config.yaml_ to the Pulsar server.
+
+   ```bash
+   
+   $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/
+   
+   ```
+
+4. Download the Netty source connector.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar
+   
+   ```
+
+5. Start the Netty source connector.
+
+   ```bash
+   
+   $ ./bin/pulsar-admin sources localrun \
+   --archive pulsar-io-netty-@pulsar:version@.nar \
+   --tenant public \
+   --namespace default \
+   --name netty \
+   --destination-topic-name netty-topic \
+   --source-config-file netty-source-config.yaml \
+   --parallelism 1
+   
+   ```
+
+6. Consume data.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0
+   
+   ```
+
+7. Open another terminal window to send data to the Netty source.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ apt-get update
+   
+   $ apt-get -y install telnet
+
+   $ telnet 127.0.0.1 10999
+   Trying 127.0.0.1...
+   Connected to 127.0.0.1.
+   Escape character is '^]'.
+   hello
+   world
+   
+   ```
+
+8. The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   ----- got message -----
+   hello
+
+   ----- got message -----
+   world
+   
+   ```
+
+### HTTP 
+
+1. Start Pulsar standalone.
+
+   ```bash
+   
+   $ docker pull apachepulsar/pulsar:{version}
+
+   $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-netty-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
+   
+   ```
+
+2. Create a configuration file _netty-source-config.yaml_.
+
+   ```yaml
+   
+   configs:
+       type: "http"
+       host: "127.0.0.1"
+       port: 10999
+       numberOfThreads: 1
+   
+   ```
+
+3. Copy the configuration file _netty-source-config.yaml_ to the Pulsar server.
+
+   ```bash
+   
+   $ docker cp netty-source-config.yaml pulsar-netty-standalone:/pulsar/conf/
+   
+   ```
+
+4. Download the Netty source connector.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   curl -O http://mirror-hk.koddos.net/apache/pulsar/pulsar-{version}/connectors/pulsar-io-netty-{version}.nar
+   
+   ```
+
+5. Start the Netty source connector.
+
+   ```bash
+   
+   $ ./bin/pulsar-admin sources localrun \
+   --archive pulsar-io-netty-@pulsar:version@.nar \
+   --tenant public \
+   --namespace default \
+   --name netty \
+   --destination-topic-name netty-topic \
+   --source-config-file netty-source-config.yaml \
+   --parallelism 1
+   
+   ```
+
+6. Consume data.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ ./bin/pulsar-client consume -t Exclusive -s netty-sub netty-topic -n 0
+   
+   ```
+
+7. Open another terminal window to send data to the Netty source.
+
+   ```bash
+   
+   $ docker exec -it pulsar-netty-standalone /bin/bash
+   
+   $ curl -X POST --data 'hello, world!' http://127.0.0.1:10999/
+   
+   ```
+
+8. The following information appears on the consumer terminal window.
+
+   ```bash
+   
+   ----- got message -----
+   hello, world!
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-rabbitmq-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-rabbitmq-sink.md
new file mode 100644
index 0000000..d7fda99
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-rabbitmq-sink.md
@@ -0,0 +1,85 @@
+---
+id: io-rabbitmq-sink
+title: RabbitMQ sink connector
+sidebar_label: "RabbitMQ sink connector"
+original_id: io-rabbitmq-sink
+---
+
+The RabbitMQ sink connector pulls messages from Pulsar topics 
+and persists the messages to RabbitMQ queues.
+
+
+## Configuration 
+
+The configuration of the RabbitMQ sink connector has the following properties.
+
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `connectionName` |String| true | " " (empty string) | The connection name. |
+| `host` | String| true | " " (empty string) | The RabbitMQ host. |
+| `port` | int |true | 5672 | The RabbitMQ port. |
+| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
+| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
+| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
+| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
+| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number. <br /><br />0 means unlimited. |
+| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets. <br /><br />0 means unlimited. |
+| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds. <br /><br />0 means infinite. |
+| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
+| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
+| `exchangeName` | String|true | " " (empty string) | The exchange to publish messages to. |
+| `routingKey` |String|true | " " (empty string) |The routing key used to publish messages. |
+
+
+### Example
+
+Before using the RabbitMQ sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "host": "localhost",
+      "port": "5672",
+      "virtualHost": "/",
+      "username": "guest",
+      "password": "guest",
+      "queueName": "test-queue",
+      "connectionName": "test-connection",
+      "requestedChannelMax": "0",
+      "requestedFrameMax": "0",
+      "connectionTimeout": "60000",
+      "handshakeTimeout": "10000",
+      "requestedHeartbeat": "60",
+      "exchangeName": "test-exchange",
+      "routingKey": "test-key"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      host: "localhost"
+      port: 5672
+      virtualHost: "/"
+      username: "guest"
+      password: "guest"
+      queueName: "test-queue"
+      connectionName: "test-connection"
+      requestedChannelMax: 0
+      requestedFrameMax: 0
+      connectionTimeout: 60000
+      handshakeTimeout: 10000
+      requestedHeartbeat: 60
+      exchangeName: "test-exchange"
+      routingKey: "test-key"
+  
+  ```
+
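+A minimal sketch of submitting the sink with `pulsar-admin`, assuming the YAML above is saved as `rabbitmq-sink.yaml`; the NAR version, sink name, and input topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and sink name to your environment.
+$ bin/pulsar-admin sinks create \
+  --archive connectors/pulsar-io-rabbitmq-2.6.0.nar \
+  --sink-config-file rabbitmq-sink.yaml \
+  --name rabbitmq-sink \
+  --inputs rabbitmq-sink-topic
+
+```
+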
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-rabbitmq-source.md b/site2/website-next/versioned_docs/version-2.6.0/io-rabbitmq-source.md
new file mode 100644
index 0000000..491df4d
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-rabbitmq-source.md
@@ -0,0 +1,82 @@
+---
+id: io-rabbitmq-source
+title: RabbitMQ source connector
+sidebar_label: "RabbitMQ source connector"
+original_id: io-rabbitmq-source
+---
+
+The RabbitMQ source connector receives messages from RabbitMQ clusters 
+and writes messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of the RabbitMQ source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `connectionName` |String| true | " " (empty string) | The connection name. |
+| `host` | String| true | " " (empty string) | The RabbitMQ host. |
+| `port` | int |true | 5672 | The RabbitMQ port. |
+| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
+| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
+| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
+| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
+| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number. <br /><br />0 means unlimited. |
+| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets. <br /><br />0 means unlimited. |
+| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds. <br /><br />0 means infinite. |
+| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
+| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
+| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.<br /><br /> 0 means unlimited. |
+| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. |
+
+### Example
+
+Before using the RabbitMQ source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+  ```json
+  
+  {
+      "host": "localhost",
+      "port": "5672",
+      "virtualHost": "/",
+      "username": "guest",
+      "password": "guest",
+      "queueName": "test-queue",
+      "connectionName": "test-connection",
+      "requestedChannelMax": "0",
+      "requestedFrameMax": "0",
+      "connectionTimeout": "60000",
+      "handshakeTimeout": "10000",
+      "requestedHeartbeat": "60",
+      "prefetchCount": "0",
+      "prefetchGlobal": "false"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      host: "localhost"
+      port: 5672
+      virtualHost: "/"
+      username: "guest"
+      password: "guest"
+      queueName: "test-queue"
+      connectionName: "test-connection"
+      requestedChannelMax: 0
+      requestedFrameMax: 0
+      connectionTimeout: 60000
+      handshakeTimeout: 10000
+      requestedHeartbeat: 60
+      prefetchCount: 0
+      prefetchGlobal: "false"
+  
+  ```
+
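+Similarly, a minimal sketch of submitting the source with `pulsar-admin`, assuming the YAML above is saved as `rabbitmq-source.yaml`; the NAR version, source name, and destination topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and source name to your environment.
+$ bin/pulsar-admin sources create \
+  --archive connectors/pulsar-io-rabbitmq-2.6.0.nar \
+  --source-config-file rabbitmq-source.yaml \
+  --name rabbitmq-source \
+  --destination-topic-name rabbitmq-messages
+
+```
+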
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-redis-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-redis-sink.md
new file mode 100644
index 0000000..793d74a
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-redis-sink.md
@@ -0,0 +1,74 @@
+---
+id: io-redis-sink
+title: Redis sink connector
+sidebar_label: "Redis sink connector"
+original_id: io-redis-sink
+---
+
+The Redis sink connector pulls messages from Pulsar topics 
+and persists the messages to a Redis database.
+
+
+
+## Configuration
+
+The configuration of the Redis sink connector has the following properties.
+
+
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. |
+| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. |
+| `redisDatabase` | int|true|0  | The Redis database to connect to. |
+| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster. <br /><br />Below are the available options: <br /><li>Standalone<br /></li><li>Cluster </li>|
+| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnects or not. |
+| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. |
+| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. |
+| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. |
+| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. |
+| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds. |
+| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. |
+| `batchSize` | int|false|200 | The batch size of writing to Redis database. |
+
+
+### Example
+
+Before using the Redis sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "redisHosts": "localhost:6379",
+      "redisPassword": "fake@123",
+      "redisDatabase": "1",
+      "clientMode": "Standalone",
+      "operationTimeout": "2000",
+      "batchSize": "100",
+      "batchTimeMs": "1000",
+      "connectTimeout": "3000"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      redisHosts: "localhost:6379"
+      redisPassword: "fake@123"
+      redisDatabase: 1
+      clientMode: "Standalone"
+      operationTimeout: 2000
+      batchSize: 100
+      batchTimeMs: 1000
+      connectTimeout: 3000
+  
+  ```
+
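+A minimal sketch of running the sink locally with `pulsar-admin`, assuming the YAML above is saved as `redis-sink.yaml`; the NAR version, sink name, and input topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and sink name to your environment.
+$ bin/pulsar-admin sinks localrun \
+  --archive connectors/pulsar-io-redis-2.6.0.nar \
+  --sink-config-file redis-sink.yaml \
+  --name redis-sink \
+  --inputs redis-messages
+
+```
+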
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-solr-sink.md b/site2/website-next/versioned_docs/version-2.6.0/io-solr-sink.md
new file mode 100644
index 0000000..df2c361
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-solr-sink.md
@@ -0,0 +1,65 @@
+---
+id: io-solr-sink
+title: Solr sink connector
+sidebar_label: "Solr sink connector"
+original_id: io-solr-sink
+---
+
+The Solr sink connector pulls messages from Pulsar topics 
+and persists the messages to Solr collections.
+
+
+
+## Configuration
+
+The configuration of the Solr sink connector has the following properties.
+
+
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `solrUrl` | String|true|" " (empty string) | <li>Comma-separated zookeeper hosts with chroot used in the SolrCloud mode. <br />**Example**<br />`localhost:2181,localhost:2182/chroot` <br /><br /></li><li>URL to connect to Solr used in standalone mode. <br />**Example**<br />`localhost:8983/solr` </li>|
+| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster. <br /><br />Below are the available options:<br /><li>Standalone<br /></li><li> SolrCloud</li>|
+| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. |
+| `solrCommitWithinMs` |int| false|10 | The time in milliseconds within which Solr commits updates.|
+| `username` |String|false|  " " (empty string) | The username for basic authentication.<br /><br />**Note: `username` is case-sensitive.** |
+| `password` | String|false|  " " (empty string) | The password for basic authentication. <br /><br />**Note: `password` is case-sensitive.** |
+
+
+
+### Example
+
+Before using the Solr sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+  ```json
+  
+  {
+      "solrUrl": "localhost:2181,localhost:2182/chroot",
+      "solrMode": "SolrCloud",
+      "solrCollection": "techproducts",
+      "solrCommitWithinMs": 100,
+      "username": "fakeuser",
+      "password": "fake@123"
+  }
+  
+  ```
+
+* YAML
+
+  ```yaml
+  
+  configs:
+      solrUrl: "localhost:2181,localhost:2182/chroot"
+      solrMode: "SolrCloud"
+      solrCollection: "techproducts"
+      solrCommitWithinMs: 100
+      username: "fakeuser"
+      password: "fake@123"
+  
+  ```
+
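+A minimal sketch of submitting the sink with `pulsar-admin`, assuming the YAML above is saved as `solr-sink.yaml`; the NAR version, sink name, and input topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and sink name to your environment.
+$ bin/pulsar-admin sinks create \
+  --archive connectors/pulsar-io-solr-2.6.0.nar \
+  --sink-config-file solr-sink.yaml \
+  --name solr-sink \
+  --inputs solr-documents
+
+```
+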
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-twitter-source.md b/site2/website-next/versioned_docs/version-2.6.0/io-twitter-source.md
new file mode 100644
index 0000000..8de3504
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-twitter-source.md
@@ -0,0 +1,28 @@
+---
+id: io-twitter-source
+title: Twitter Firehose source connector
+sidebar_label: "Twitter Firehose source connector"
+original_id: io-twitter-source
+---
+
+The Twitter Firehose source connector receives tweets from Twitter Firehose and 
+writes the tweets to Pulsar topics.
+
+## Configuration
+
+The configuration of the Twitter Firehose source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.<br /><br />For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). |
+| `consumerSecret` | String |true | " " (empty string)  | The twitter OAuth consumer secret. |
+| `token` | String|true | " " (empty string)  | The twitter OAuth token. |
+| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth secret. |
+| `guestimateTweetTime`|Boolean|false|false|Most firehose events have a null createdAt time.<br /><br />If `guestimateTweetTime` is set to true, the connector estimates the createdAt time of each firehose event to be the current time.
+| `clientName` |  String |false | openconnector-twitter-source| The twitter firehose client name. |
+| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. |
+| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. |
+
+> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html).
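+
+A minimal sketch of submitting the source with `pulsar-admin`, assuming a YAML file `twitter-source-config.yaml` whose `configs` section contains the properties above; the NAR version, source name, and destination topic are illustrative.
+
+```bash
+
+# Illustrative values: adjust the NAR version, topic, and source name to your environment.
+$ bin/pulsar-admin sources create \
+  --archive connectors/pulsar-io-twitter-2.6.0.nar \
+  --source-config-file twitter-source-config.yaml \
+  --name twitter-source \
+  --destination-topic-name tweets
+
+```
+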
diff --git a/site2/website-next/versioned_docs/version-2.6.0/io-twitter.md b/site2/website-next/versioned_docs/version-2.6.0/io-twitter.md
new file mode 100644
index 0000000..3b2f632
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/io-twitter.md
@@ -0,0 +1,7 @@
+---
+id: io-twitter
+title: Twitter Firehose Connector
+sidebar_label: "Twitter Firehose Connector"
+original_id: io-twitter
+---
+
diff --git a/site2/website-next/versioned_docs/version-2.6.0/reference-cli-tools.md b/site2/website-next/versioned_docs/version-2.6.0/reference-cli-tools.md
new file mode 100644
index 0000000..5af678f
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/reference-cli-tools.md
@@ -0,0 +1,923 @@
+---
+id: reference-cli-tools
+title: Pulsar command-line tools
+sidebar_label: "Pulsar CLI tools"
+original_id: reference-cli-tools
+---
+
+Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, using command-line producers and consumers, and more.
+
+All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone). The following tools are currently documented:
+
+* [`pulsar`](#pulsar)
+* [`pulsar-client`](#pulsar-client)
+* [`pulsar-daemon`](#pulsar-daemon)
+* [`pulsar-perf`](#pulsar-perf)
+* [`bookkeeper`](#bookkeeper)
+* [`broker-tool`](#broker-tool)
+
+> ### Getting help
+> You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example:
+>
+> ```shell
+> 
+> $ bin/pulsar broker --help
+> 
+> ```
+
+
+## `pulsar`
+
+The `pulsar` tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground.
+
+These processes can also be started in the background, using `nohup`, via the [`pulsar-daemon`](#pulsar-daemon) tool, which has the same command interface as `pulsar`.
+
+Usage:
+
+```bash
+
+$ pulsar command
+
+```
+
+Commands:
+* `bookie`
+* `broker`
+* `compact-topic`
+* `discovery`
+* `configuration-store`
+* `initialize-cluster-metadata`
+* `proxy`
+* `standalone`
+* `websocket`
+* `zookeeper`
+* `zookeeper-shell`
+
+Example:
+
+```bash
+
+$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker
+
+```
+
+The table below lists the environment variables that you can use to configure the `pulsar` tool.
+
+|Variable|Description|Default|
+|---|---|---|
+|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`|
+|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`|
+|`PULSAR_BOOKKEEPER_CONF`|Configuration file for bookie|`conf/bookkeeper.conf`|
+|`PULSAR_ZK_CONF`|Configuration file for zookeeper|`conf/zookeeper.conf`|
+|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`|
+|`PULSAR_DISCOVERY_CONF`|Configuration file for discovery service|`conf/discovery.conf`|
+|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`|
+|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`|
+|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the jvm||
+|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath||
+|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored||
+|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the server instance if attempts to stop it are not successful||
+
+
+
+### `bookie`
+
+Starts up a bookie server
+
+Usage:
+
+```bash
+
+$ pulsar bookie options
+
+```
+
+Options
+
+|Option|Description|Default|
+|---|---|---|
+|`-readOnly`|Force start a read-only bookie server|false|
+|`-withAutoRecovery`|Start the bookie server with the auto-recovery service|false|
+
+
+Example
+
+```bash
+
+$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \
+  -readOnly \
+  -withAutoRecovery
+
+```
+
+### `broker`
+
+Starts up a Pulsar broker
+
+Usage
+
+```bash
+
+$ pulsar broker options
+
+```
+
+Options
+
+|Option|Description|Default|
+|---|---|---|
+|`-bc` , `--bookie-conf`|Configuration file for BookKeeper||
+|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false|
+|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false|
+
+Example
+
+```bash
+
+$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker
+
+```
+
+### `compact-topic`
+
+Run compaction against a Pulsar topic (in a new process)
+
+Usage
+
+```bash
+
+$ pulsar compact-topic options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-t` , `--topic`|The Pulsar topic that you would like to compact||
+
+Example
+
+```bash
+
+$ pulsar compact-topic --topic topic-to-compact
+
+```
+
+### `discovery`
+
+Run a discovery server
+
+Usage
+
+```bash
+
+$ pulsar discovery
+
+```
+
+Example
+
+```bash
+
+$ PULSAR_DISCOVERY_CONF=/path/to/discovery.conf pulsar discovery
+
+```
+
+### `configuration-store`
+
+Starts up the Pulsar configuration store
+
+Usage
+
+```bash
+
+$ pulsar configuration-store
+
+```
+
+Example
+
+```bash
+
+$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store
+
+```
+
+### `initialize-cluster-metadata`
+
+One-time cluster metadata initialization
+
+Usage
+
+```bash
+
+$ pulsar initialize-cluster-metadata options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-ub` , `--broker-service-url`|The broker service URL for the new cluster||
+|`-tb` , `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption||
+|`-c` , `--cluster`|Cluster name||
+|`--configuration-store`|The configuration store quorum connection string||
+|`-uw` , `--web-service-url`|The web service URL for the new cluster||
+|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption||
+|`-zk` , `--zookeeper`|The local ZooKeeper quorum connection string||
+
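+Example
+
+The following is a minimal sketch; the cluster name and the ZooKeeper, web service, and broker service addresses are placeholders for your own environment:
+
+```bash
+
+$ pulsar initialize-cluster-metadata \
+  --cluster my-cluster \
+  --zookeeper zk-0:2181 \
+  --configuration-store zk-0:2181 \
+  --web-service-url http://pulsar.example.com:8080 \
+  --broker-service-url pulsar://pulsar.example.com:6650
+
+```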
+
+### `proxy`
+
+Manages the Pulsar proxy
+
+Usage
+
+```bash
+
+$ pulsar proxy options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--configuration-store`|Configuration store connection string||
+|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string||
+
+Example
+
+```bash
+
+$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \
+  --zookeeper-servers zk-0,zk-1,zk-2 \
+  --configuration-store zk-0,zk-1,zk-2
+
+```
+
+### `standalone`
+
+Run a broker service with local bookies and local ZooKeeper
+
+Usage
+
+```bash
+
+$ pulsar standalone options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-a` , `--advertised-address`|The standalone broker advertised address||
+|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookkeeper|
+|`--bookkeeper-port`|Local bookies’ base port|3181|
+|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false|
+|`--num-bookies`|The number of local bookies|1|
+|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)||
+|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data||
+|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper|
+|`--zookeeper-port` |Local ZooKeeper’s port|2181|
+
+Example
+
+```bash
+
+$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone
+
+```
+
+### `websocket`
+
+Usage
+
+```bash
+
+$ pulsar websocket
+
+```
+
+Example
+
+```bash
+
+$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket
+
+```
+
+### `zookeeper`
+
+Starts up a ZooKeeper cluster
+
+Usage
+
+```bash
+
+$ pulsar zookeeper
+
+```
+
+Example
+
+```bash
+
+$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper
+
+```
+
+### `zookeeper-shell`
+
+Connects to a running ZooKeeper cluster using the ZooKeeper shell
+
+Usage
+
+```bash
+
+$ pulsar zookeeper-shell options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--conf`|Configuration file for ZooKeeper||
+|`-server`|The ZooKeeper server address to connect to, for example `127.0.0.1:2181`||
+
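+Example
+
+A minimal sketch, assuming a local ZooKeeper instance listening on the default port:
+
+```bash
+
+$ pulsar zookeeper-shell -server 127.0.0.1:2181
+
+```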
+
+
+## `pulsar-client`
+
+The `pulsar-client` tool enables you to publish messages to, and consume messages from, Pulsar topics from the command line.
+
+Usage
+
+```bash
+
+$ pulsar-client command
+
+```
+
+Commands
+* `produce`
+* `consume`
+
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"|{"saslJaasClientSectionName":"PulsarClient", "serverType":"broker"}|
+|`--auth-plugin`|Authentication plugin class name|org.apache.pulsar.client.impl.auth.AuthenticationSasl|
+|`--proxy-protocol`|Proxy protocol to select type of routing at proxy||
+|`--proxy-url`|Proxy-server URL to which to connect||
+|`--url`|Broker URL to which to connect|pulsar://localhost:6650/ <br /> ws://localhost:8080 |
+|`-h`, `--help`|Show this help||
+
+
+### `produce`
+Send a message or messages to a specific broker and topic
+
+Usage
+
+```bash
+
+$ pulsar-client produce topic options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]|
+|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]|
+|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1|
+|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0|
+|`-c`, `--chunking`|Split the message and publish in chunks if the message size is larger than the allowed max size|false|
+|`-s`, `--separator`|Character to split messages string with.|","|
+|`-k`, `--key`|Message key to add (a key=value string, like k1=v1,k2=v2).| |
+|`-p`, `--properties`|Properties to add. If you want to add multiple properties, use the comma as the separator, e.g. `k1=v1,k2=v2`.| |
+
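+Example
+
+A minimal sketch; `my-topic` is a placeholder topic name. The command publishes the string `hello` ten times:
+
+```bash
+
+$ pulsar-client produce my-topic \
+  --messages "hello" \
+  --num-produce 10
+
+```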
+
+### `consume`
+Consume messages from a specific broker and topic
+
+Usage
+
+```bash
+
+$ pulsar-client consume topic options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--hex`|Display binary messages in hexadecimal format.|false|
+|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|1|
+|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0|
+|`--regex`|Indicate the topic name is a regex pattern|false|
+|`-s`, `--subscription-name`|Subscription name||
+|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover, Key_Shared.|Exclusive|
+|`-p`, `--subscription-position`|The position of the subscription. Possible values: Latest, Earliest.|Latest|
+|`-q`, `--queue-size`|The size of consumer's receiver queue.|0|
+|`-mc`, `--max_chunked_msg`|Max pending chunk messages.|0|
+|`-ac`, `--auto_ack_chunk_q_full`|Automatically acknowledge the oldest message in the consumer's receiver queue if the queue is full.|false|
+
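+Example
+
+A minimal sketch; the topic and subscription names are placeholders. Setting `--num-messages` to 0 consumes messages until the command is interrupted:
+
+```bash
+
+$ pulsar-client consume my-topic \
+  --subscription-name my-subscription \
+  --num-messages 0
+
+```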
+
+
+## `pulsar-daemon`
+A wrapper around the `pulsar` tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using `nohup`.
+
+`pulsar-daemon` has a similar interface to the `pulsar` command but adds `start` and `stop` commands for various services. For a listing of those services, run `pulsar-daemon` to see the help output or see the documentation for the [`pulsar`](#pulsar) command.
+
+Usage
+
+```bash
+
+$ pulsar-daemon command
+
+```
+
+Commands
+* `start`
+* `stop`
+
+
+### `start`
+Start a service in the background using nohup.
+
+Usage
+
+```bash
+
+$ pulsar-daemon start service
+
+```
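+
+Example
+
+For example, the following starts a broker in the background (a sketch; it assumes the broker configuration in `conf/broker.conf`):
+
+```bash
+
+$ pulsar-daemon start broker
+
+```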
+
+### `stop`
+Stop a service that’s already been started using start.
+
+Usage
+
+```bash
+
+$ pulsar-daemon stop service options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-force`|Stop the service forcefully if not stopped by normal shutdown.|false|
+
+
+
+## `pulsar-perf`
+A tool for performance testing a Pulsar broker.
+
+Usage
+
+```bash
+
+$ pulsar-perf command
+
+```
+
+Commands
+* `consume`
+* `produce`
+* `read`
+* `websocket-producer`
+* `managed-ledger`
+* `monitor-brokers`
+* `simulation-client`
+* `simulation-controller`
+* `help`
+
+Environment variables
+
+The table below lists the environment variables that you can use to configure the pulsar-perf tool.
+
+|Variable|Description|Default|
+|---|---|---|
+|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml|
+|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf|
+|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM||
+|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath||
+
+
+### `consume`
+Run a consumer
+
+Usage
+
+```
+
+$ pulsar-perf consume options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
+|`--auth_plugin`|Authentication plugin class name||
+|`-ac`, `--auto_ack_chunk_q_full`|Automatically acknowledge the oldest message in the consumer's receiver queue if the queue is full|false|
+|`--acks-delay-millis`|Acknowledgments grouping delay in milliseconds|100|
+|`-k`, `--encryption-key-name`|The private key name to decrypt payload||
+|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload||
+|`-h`, `--help`|Help message|false|
+|`--conf-file`|Configuration file||
+|`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0|
+|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
+|`-mc`, `--max_chunked_msg`|Max pending chunk messages|0|
+|`-n`, `--num-consumers`|Number of consumers (per topic)|1|
+|`-t`, `--num-topic`|The number of topics|1|
+|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0|
+|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000|
+|`--replicated`|Whether the subscription status should be replicated|false|
+|`-u`, `--service-url`|Pulsar service URL||
+|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled|0|
+|`-s`, `--subscriber-name`|Subscriber name prefix|sub|
+|`-st`, `--subscription-type`|Subscriber type. Possible values are Exclusive, Shared, Failover, Key_Shared.|Exclusive|
+|`-sp`, `--subscription-position`|Subscriber position. Possible values are Latest, Earliest.|Latest|
+|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages|0|
+|`--trust-cert-file`|Path for the trusted TLS certificate file||
+
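+Example
+
+A minimal sketch; `my-topic` is a placeholder topic name:
+
+```bash
+
+$ pulsar-perf consume my-topic \
+  --num-consumers 1 \
+  --stats-interval-seconds 10
+
+```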
+
+### `produce`
+Run a producer
+
+Usage
+
+```bash
+
+$ pulsar-perf produce options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
+|`--auth_plugin`|Authentication plugin class name||
+|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1|
+|`-bb`, `--batch-max-bytes`|Maximum number of bytes per batch|4194304|
+|`-bm`, `--batch-max-messages`|Maximum number of messages per batch|1000|
+|`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false|
+|`-d`, `--delay`|Mark messages with a given delay in seconds|0s|
+|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.||
+|`--conf-file`|Configuration file||
+|`-k`, `--encryption-key-name`|The public key name to encrypt payload||
+|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload||
+|`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false|
+|`-h`, `--help`|Help message|false|
+|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
+|`-o`, `--max-outstanding`|Max number of outstanding messages|1000|
+|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000|
+|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages.|0|
+|`-n`, `--num-producers`|The number of producers (per topic)|1|
+|`-threads`, `--num-test-threads`|Number of test threads|1|
+|`-t`, `--num-topic`|The number of topics|1|
+|`-f`, `--payload-file`|Use payloads from a UTF-8 encoded text file; a payload is randomly selected from the file when publishing messages||
+|`-e`, `--payload-delimiter`|The delimiter used to split lines when using payload from a file|\n|
+|`-r`, `--rate`|Publish rate msg/s across topics|100|
+|`-u`, `--service-url`|Pulsar service URL||
+|`-s`, `--size`|Message size (in bytes)|1024|
+|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0|
+|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps publishing messages.|0|
+|`--trust-cert-file`|Path for the trusted TLS certificate file||
+|`--warmup-time`|Warm-up time in seconds|1|
+
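+Example
+
+A minimal sketch; `my-topic` is a placeholder topic name. The command publishes 1024-byte messages at 1000 msg/s:
+
+```bash
+
+$ pulsar-perf produce my-topic \
+  --rate 1000 \
+  --size 1024
+
+```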
+
+### `read`
+Run a topic reader
+
+Usage
+
+```bash
+
+$ pulsar-perf read options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
+|`--auth_plugin`|Authentication plugin class name||
+|`--conf-file`|Configuration file||
+|`-h`, `--help`|Help message|false|
+|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
+|`-t`, `--num-topic`|The number of topics|1|
+|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0|
+|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000|
+|`-u`, `--service-url`|Pulsar service URL||
+|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest|
+|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0|
+|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps consuming messages.|0|
+|`--trust-cert-file`|Path for the trusted TLS certificate file||
+|`--use-tls`|Use TLS encryption on the connection|false|
+
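+Example
+
+A minimal sketch; `my-topic` is a placeholder topic name. The reader starts from the earliest available message:
+
+```bash
+
+$ pulsar-perf read my-topic \
+  --start-message-id earliest
+
+```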
+
+### `websocket-producer`
+Run a websocket producer
+
+Usage
+
+```bash
+
+$ pulsar-perf websocket-producer options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
+|`--auth_plugin`|Authentication plugin class name||
+|`--conf-file`|Configuration file||
+|`-h`, `--help`|Help message|false|
+|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages|0|
+|`-t`, `--num-topic`|The number of topics|1|
+|`-f`, `--payload-file`|Use payload from a file instead of empty buffer||
+|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"||
+|`-r`, `--rate`|Publish rate msg/s across topics|100|
+|`-s`, `--size`|Message size in bytes|1024|
+|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps publishing messages|0|
+
+
+### `managed-ledger`
+Write directly on managed-ledgers
+
+Usage
+
+```bash
+
+$ pulsar-perf managed-ledger options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-a`, `--ack-quorum`|Ledger ack quorum|1|
+|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C|
+|`-e`, `--ensemble-size`|Ledger ensemble size|1|
+|`-h`, `--help`|Help message|false|
+|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1|
+|`-o`, `--max-outstanding`|Max number of outstanding requests|1000|
+|`-m`, `--num-messages`|Number of messages to publish in total. If the value is 0 or smaller than 0, it keeps publishing messages|0|
+|`-t`, `--num-topic`|Number of managed ledgers|1|
+|`-r`, `--rate`|Write rate msg/s across managed ledgers|100|
+|`-s`, `--size`|Message size in bytes|1024|
+|`-time`, `--test-duration`|Test duration (in seconds). If the value is 0 or smaller than 0, it keeps publishing messages|0|
+|`--threads`|Number of threads writing|1|
+|`-w`, `--write-quorum`|Ledger write quorum|1|
+|`-zk`, `--zookeeperServers`|ZooKeeper connection string||
+
+
+### `monitor-brokers`
+Continuously receive broker data and/or load reports
+
+Usage
+
+```bash
+
+$ pulsar-perf monitor-brokers options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--connect-string`|A connection string for one or more ZooKeeper servers||
+|`-h`, `--help`|Help message|false|
+
+
+### `simulation-client`
+Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`.
+
+Usage
+
+```bash
+
+$ pulsar-perf simulation-client options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--port`|Port to listen on for controller|0|
+|`--service-url`|Pulsar Service URL||
+|`-h`, `--help`|Help message|false|
+
+### `simulation-controller`
+Run a simulation controller to give commands to servers
+
+Usage
+
+```bash
+
+$ pulsar-perf simulation-controller options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--client-port`|The port that the clients are listening on|0|
+|`--clients`|Comma-separated list of client hostnames||
+|`--cluster`|The cluster to test on||
+|`-h`, `--help`|Help message|false|
+
+
+### `help`
+This help message
+
+Usage
+
+```bash
+
+$ pulsar-perf help
+
+```
+
+## `bookkeeper`
+A tool for managing BookKeeper.
+
+Usage
+
+```bash
+
+$ bookkeeper command
+
+```
+
+Commands
+* `auto-recovery`
+* `bookie`
+* `localbookie`
+* `upgrade`
+* `shell`
+
+
+Environment variables
+
+The table below lists the environment variables that you can use to configure the bookkeeper tool.
+
+|Variable|Description|Default|
+|---|---|---|
+|BOOKIE_LOG_CONF|Log4j configuration file|conf/log4j2.yaml|
+|BOOKIE_CONF|BookKeeper configuration file|conf/bk_server.conf|
+|BOOKIE_EXTRA_OPTS|Extra options to be passed to the JVM||
+|BOOKIE_EXTRA_CLASSPATH|Extra paths for BookKeeper's classpath||
+|ENTRY_FORMATTER_CLASS|The Java class used to format entries||
+|BOOKIE_PID_DIR|Folder where the BookKeeper server PID file should be stored||
+|BOOKIE_STOP_TIMEOUT|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
+
+
+### `auto-recovery`
+Runs an auto-recovery service daemon
+
+Usage
+
+```bash
+
+$ bookkeeper auto-recovery options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--conf`|Configuration for the auto-recovery daemon||
+
+
+### `bookie`
+Starts up a BookKeeper server (aka bookie)
+
+Usage
+
+```bash
+
+$ bookkeeper bookie options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--conf`|Configuration for the auto-recovery daemon||
+|`-readOnly`|Force start a read-only bookie server|false|
+|`-withAutoRecovery`|Start the bookie server with the auto-recovery service|false|
+
+
+### `localbookie`
+Runs a test ensemble of N bookies locally
+
+Usage
+
+```bash
+
+$ bookkeeper localbookie N
+
+```
+
+### `upgrade`
+Upgrade the bookie’s filesystem
+
+Usage
+
+```bash
+
+$ bookkeeper upgrade options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--conf`|Configuration for the auto-recovery daemon||
+|`-u`, `--upgrade`|Upgrade the bookie’s directories||
+
+
+### `shell`
+Run shell for admin commands. To see a full listing of those commands, run bookkeeper shell without an argument.
+
+Usage
+
+```bash
+
+$ bookkeeper shell
+
+```
+
+Example
+
+```bash
+
+$ bookkeeper shell bookiesanity
+
+```
+
+## `broker-tool`
+
+The `broker-tool` is used for operations on a specific broker.
+
+Usage
+
+```bash
+
+$ broker-tool command
+
+```
+
+Commands
+* `load-report`
+* `help`
+
+Example
+
+There are two ways to get more information about a command, as shown below:
+
+```bash
+
+$ broker-tool help command
+$ broker-tool command --help
+
+```
+
+### `load-report`
+
+Collect the load report of a specific broker.
+The command runs on a broker and is used to troubleshoot why that broker cannot collect the correct load report.
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--interval`| Interval to collect load report, in milliseconds ||
+|`-h`, `--help`| Display help information ||
+
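+Example
+
+A minimal sketch, run on a broker host; the interval value is illustrative:
+
+```bash
+
+$ broker-tool load-report --interval 1000
+
+```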
diff --git a/site2/website-next/versioned_docs/version-2.6.0/reference-configuration.md b/site2/website-next/versioned_docs/version-2.6.0/reference-configuration.md
new file mode 100644
index 0000000..8ac18c1
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/reference-configuration.md
@@ -0,0 +1,550 @@
+---
+id: reference-configuration
+title: Pulsar configuration
+sidebar_label: "Pulsar configuration"
+original_id: reference-configuration
+---
+
+
+
+
+Pulsar configuration can be managed via a series of configuration files contained in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone).
+
+- [BookKeeper](#bookkeeper)
+- [Broker](#broker)
+- [Client](#client)
+- [Service discovery](#service-discovery)
+- [Log4j](#log4j)
+- [Log4j shell](#log4j-shell)
+- [Standalone](#standalone)
+- [WebSocket](#websocket)
+- [Pulsar proxy](#pulsar-proxy)
+- [ZooKeeper](#zookeeper)
+
+## BookKeeper
+
+BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages.
+
+
+|Name|Description|Default|
+|---|---|---|
+|bookiePort|The port on which the bookie server listens.|3181|
+|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (i.e. the interface used to establish its identity). By default, loopback interfaces are not allowed as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cl [...]
+|listeningInterface|The network interface on which the bookie listens. If not set, the bookie will listen on all interfaces.|eth0|
+|journalDirectory|The directory where Bookkeeper outputs its write-ahead log (WAL)|data/bookkeeper/journal|
+|ledgerDirectories|The directory where Bookkeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by comma, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers|
+|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical|
+|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers|
+|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage|
+|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true|
+|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log file will be created when the old one reaches the file size limitation.|2147483648|
+|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2|
+|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled.|3600|
+|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the major compaction is disabled.|0.5|
+|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled.|86400|
+|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entrylog and the new offsets are cached in memory. Once the entrylog is flushed the index is updated with the new offsets. This parameter controls the number of entries added to the entrylog before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This pa [...]
+|compactionRate|The rate at which compaction will read entries, in adds per second.|1000|
+|isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
+|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000|
+|compactionRateByBytes|Set the rate at which compaction reads entries. The unit is bytes added per second.|1000000|
+|journalMaxSizeMB|Max file size of journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation.|2048|
+|journalMaxBackups|The max number of old journal files to keep. Keeping a number of old journal files would help data recovery in special cases.|5|
+|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal, in megabytes.|16|
+|journalWriteBufferSizeKB|The size of the write buffers used for the journal, in kilobytes.|64|
+|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true|
+|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
+|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
+|journalAlignmentSize|All the journal writes and commits should be aligned to the given size|4096|
+|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping|524288|
+|journalFlushWhenQueueEmpty|Whether to flush the journal when the journal queue is empty|false|
+|numJournalCallbackThreads|The number of threads that should handle journal callbacks|8|
+|rereplicationEntryBatchSize|The maximum number of entries to keep in a fragment for re-replication|5000|
+| openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 |
+|gcWaitTime|The interval to trigger the next garbage collection, in milliseconds. Since garbage collection runs in the background, too frequent gc hurts performance. It is better to use a higher gc interval if there is enough disk capacity.|900000|
+|gcOverreplicatedLedgerWaitTime|The interval to trigger the next garbage collection of overreplicated ledgers, in milliseconds. This should not run very frequently since it reads the metadata for all the ledgers on the bookie from ZooKeeper.|86400000|
+|flushInterval|How long the interval to flush ledger index pages to disk, in milliseconds. Flushing index files will introduce much random disk I/O. If separating journal dir and ledger dirs each on different devices, flushing would not affect performance. But if putting journal dir and ledger dirs on same device, performance degrade significantly on too frequent flushing. You can consider increment flush interval to get better performance, but you need to pay more time on bookie server  [...]
+|bookieDeathWatchInterval|Interval to watch whether bookie is dead or not, in milliseconds|1000|
+|zkServers|A list of one or more servers on which ZooKeeper is running. The server list can be comma-separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
+|zkTimeout|ZooKeeper client session timeout in milliseconds. The bookie server exits if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for longer than the session timeout. JVM garbage collection or disk I/O can cause SESSION_EXPIRED. Increasing this value can help avoid this issue.|30000|
+|serverTcpNoDelay|This setting is used to enable/disable Nagle’s algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable the Nagle algorithm can provide better performance.|true|
+|openFileLimit|Max number of ledger index files that can be opened in the bookie server. If the number of ledger index files reaches this limit, the bookie server starts to swap some ledgers from memory to disk. Too frequent swapping affects performance. You can tune this number to gain performance according to your requirements.|0|
+|pageSize|Size of an index page in the ledger cache, in bytes. A larger index page can improve performance when writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have a similar number of entries. If you have a large number of ledgers and each ledger has fewer entries, a smaller index page would improve memory usage.|8192|
+|pageLimit|How many index pages provided in ledger cache If number of index pages reaches this limitation, bookie server starts to swap some ledgers from memory to disk. You can increment this value when you found swap became more frequent. But make sure pageLimit*pageSize should not more than JVM max memory limitation, otherwise you would got OutOfMemoryException. In general, incrementing pageLimit, using smaller index page would gain better performance in lager number of ledgers with f [...]
+|readOnlyModeEnabled|If all configured ledger directories are full, then serve only read requests from clients. If “readOnlyModeEnabled=true”, then when all ledger disks are full, the bookie is converted to read-only mode and serves only read requests; otherwise the bookie is shut down.|true|
+|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. Default is 0.95f, i.e. at most 95% of the disk can be used, after which nothing will be written to that partition. If all ledger dir partitions are full, then the bookie turns to read-only mode if ‘readOnlyModeEnabled=true’ is set, else it shuts down. Valid values should be between 0 and 1 (exclusive).|0.95|
+|diskCheckInterval|Disk check interval, in milliseconds: the interval at which to check ledger directory usage.|10000|
+|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800|
+|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie that should contain entries is unavailable, then the ledger containing that entry is marked for recovery. Setting this to 0 disables the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400|
+|numAddWorkerThreads|Number of threads that should handle write requests. If zero, the writes are handled by netty threads directly.|0|
+|numReadWorkerThreads|Number of threads that should handle read requests. If zero, the reads are handled by netty threads directly.|8|
+|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500|
+|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096|
+|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536|
+|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, the bookie uses its IP address for the registration.|false|
+|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider|
+|prometheusStatsHttpPort||8000|
+|dbStorage_writeCacheMaxSizeMb|Size of Write Cache. Memory is allocated from JVM direct memory. Write cache is used to buffer entries before flushing into the entry log For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory|
+|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens|25% of direct memory|
+|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000|
+|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases|10% of direct memory|
+|dbStorage_rocksDB_writeBufferSizeMB||64|
+|dbStorage_rocksDB_sstSizeInMB||64|
+|dbStorage_rocksDB_blockSize||65536|
+|dbStorage_rocksDB_bloomFilterBitsPerKey||10|
+|dbStorage_rocksDB_numLevels||-1|
+|dbStorage_rocksDB_numFilesInLevel0||4|
+|dbStorage_rocksDB_maxSizeInLevel1MB||256|
+| nettyMaxFrameSizeBytes | Set the maximum netty frame size in bytes. If the size of a received message is larger than the configured value, the message is rejected. | 1 GB |
+
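+The following is a minimal sketch of a `conf/bookkeeper.conf` fragment that overrides a few of the settings above; the paths and ZooKeeper addresses are placeholders for your own environment.
+
+```properties
+
+# Port that the bookie server listens on
+bookiePort=3181
+# Journal and ledger storage locations
+journalDirectory=data/bookkeeper/journal
+ledgerDirectories=data/bookkeeper/ledgers
+# ZooKeeper servers used for ledger metadata
+zkServers=zk-0:2181,zk-1:2181,zk-2:2181
+
+```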
+
+## Broker
+
+Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more.
+
+|Name|Description|Default|
+|---|---|---|
+|advertisedListeners|Specify multiple advertised listeners for the broker.<br /><br />The format is `<listener_name>:pulsar://<host>:<port>`.<br /><br />If there are multiple listeners, separate them with commas.<br /><br />**Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`|/|
+|internalListenerName|Specify the internal listener name for the broker.<br /><br />**Note**: the listener name must be contained in `advertisedListeners`.<br /><br /> If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/|
+|enablePersistentTopics|  Whether persistent topics are enabled on the broker |true|
+|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true|
+|functionsWorkerEnabled|  Whether the Pulsar Functions worker service is enabled in the broker  |false|
+|zookeeperServers|  Zookeeper quorum connection string  ||
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|brokerServicePort| Broker data port  |6650|
+|brokerServicePortTls|  Broker data port for TLS  |6651|
+|webServicePort|  Port to use to server HTTP request  |8080|
+|webServicePortTls| Port to use to server HTTPS request |8443|
+|webSocketServiceEnabled| Enable the WebSocket API service in broker  |false|
+|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0.  |0.0.0.0|
+|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.  ||
+|clusterName| Name of the cluster to which this broker belongs ||
+|brokerDeduplicationEnabled|  Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis.  |false|
+|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes.  |10000|
+|brokerDeduplicationEntriesInterval|  The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000|
+|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
+|dispatchThrottlingRatePerReplicatorInMsg| The default messages per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message dispatch-throttling| 0 |
+|dispatchThrottlingRatePerReplicatorInByte| The default bytes per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message-byte dispatch-throttling| 0 | 
+|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000|
+|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed  |60000|
+|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false|
+|backlogQuotaCheckEnabled|  Enable backlog quota check. Enforces action on topic when the quota is reached  |true|
+|backlogQuotaCheckIntervalInSeconds|  How often to check for topics that have reached the quota |60|
+|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit | -1 |
+|allowAutoTopicCreation| Enable topic auto creation if a new producer or consumer connected |true|
+|allowAutoTopicCreationType| The topic type (partitioned or non-partitioned) that is allowed to be automatically created. |Partitioned|
+|allowAutoSubscriptionCreation| Enable subscription auto creation if a new consumer connected |true|
+|defaultNumPartitions| The number of partitioned topics that is allowed to be automatically created if `allowAutoTopicCreationType` is partitioned |1|
+|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for some while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute.  |true|
+|brokerDeleteInactiveTopicsFrequencySeconds|  How often to check for inactive topics  |60|
+| brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics. <li> `delete_when_no_subscriptions`: delete the topic which has no subscriptions or active producers. </li><li> `delete_when_subscriptions_caught_up`: delete the topic whose subscriptions have no backlogs and which has no active producers or consumers. </li>| `delete_when_no_subscriptions` |
+| brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
+|messageExpiryCheckIntervalInMinutes| How frequently to proactively check and purge expired messages  |5|
+|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to see if topics with compaction policies need to be compacted  |60|
+|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed.  |1000|
+|clientLibraryVersionCheckEnabled|  Enable check for minimum allowed client library version |false|
+|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information  |true|
+|statusFilePath|  Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
+|preferLaterVersions| If true, (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles)  |false|
+|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or negative number to disable the check|0|
+|tlsEnabled|  Enable TLS  |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+|tlsTrustCertsFilePath| Path for the trusted TLS certificate file ||
+|tlsAllowInsecureConnection|  Accept untrusted TLS certificate from client  |false|
+|tlsProtocols|Specify the tls protocols the broker will use to negotiate during TLS Handshake. Multiple values can be specified, separated by commas. Example:- ```TLSv1.2```, ```TLSv1.1```, ```TLSv1``` ||
+|tlsCiphers|Specify the tls cipher the broker will use to negotiate during TLS Handshake. Multiple values can be specified, separated by commas. Example:- ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```||
+|tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in broker |false|
+|tlsProvider| TLS Provider for KeyStore type ||
+|tlsKeyStoreType| TLS KeyStore type configuration in broker: JKS, PKCS12 |JKS|
+|tlsKeyStore| TLS KeyStore path in broker ||
+|tlsKeyStorePassword| TLS KeyStore password for broker ||
+|brokerClientTlsEnabledWithKeyStore| Whether internal client use KeyStore type to authenticate with Pulsar brokers |false|
+|brokerClientSslProvider| The TLS Provider used by internal client to authenticate with other Pulsar brokers ||
+|brokerClientTlsTrustStoreType| TLS TrustStore type configuration for internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers |JKS|
+|brokerClientTlsTrustStore| TLS TrustStore path for internal client, used by the internal client to authenticate with Pulsar brokers ||
+|brokerClientTlsTrustStorePassword| TLS TrustStore password for internal client, used by the internal client to authenticate with Pulsar brokers ||
+|brokerClientTlsCiphers| Specify the tls cipher the internal client will use to negotiate during TLS Handshake. (a comma-separated list of ciphers) e.g.  [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]||
+|brokerClientTlsProtocols|Specify the tls protocols the broker will use to negotiate during TLS handshake. (a comma-separated list of protocol names). e.g.  [TLSv1.2, TLSv1.1, TLSv1] ||
+|ttlDurationDefaultInSeconds|  The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
+|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`||
+|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`||
+|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256|
+|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank ||
+|tokenAudienceClaim| The token audience "claim" name, e.g. "aud", that will be used to get the audience from token. If not set, audience will not be verified. ||
+|tokenAudience| The token audience stands for this broker. The `tokenAudienceClaim` field of a valid token needs to contain this value. ||
+|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed for a consumer on a shared subscription. The broker stops sending messages to a consumer once this limit is reached, until the consumer starts acknowledging messages back. Using a value of 0 disables the unacked message limit check and the consumer can receive messages without any restriction  |50000|
+|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. The broker stops dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unacked count drops to limit/2. Using a value of 0 disables the unacked message limit check and the dispatcher can dispatch messages without any restriction  |200000|
+|subscriptionRedeliveryTrackerEnabled| Enable subscription message redelivery tracker |true|
+|subscriptionExpirationTimeMinutes | How long to delete inactive subscriptions from last consuming. <br /><br />Setting this configuration to a value **greater than 0** deletes inactive subscriptions automatically.<br />Setting this configuration to **0** does not delete inactive subscriptions automatically. <br /><br /> Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0. <br />I [...]
+|maxConcurrentLookupRequest|  Max number of concurrent lookup requests that the broker allows, to throttle heavy incoming lookup traffic |50000|
+|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading requests that the broker allows, to control the number of ZooKeeper operations |5000|
+|authenticationEnabled| Enable authentication |false|
+|authenticationProviders| Authentication provider name list, which is comma separated list of class names  ||
+|authorizationEnabled|  Enforce authorization |false|
+|superUserRoles|  Role names that are treated as “super-user”, meaning they will be able to do all admin operations and publish/consume from all topics ||
+|brokerClientAuthenticationPlugin|  Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters  ||
+|brokerClientAuthenticationParameters|||
+|athenzDomainNames| Supported Athenz provider domain names(comma separated) for authentication  ||
+|exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to use the published counter and consumed counter for the calculation, which is more efficient but may be inaccurate. |false|
+|bookkeeperMetadataServiceUri| Metadata service uri that bookkeeper is used for loading corresponding metadata driver and resolving its metadata service location. This value can be fetched using `bookkeeper shell whatisinstanceid` command in BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service uri list can also be semicolon separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers ||
+|bookkeeperClientAuthenticationPlugin|  Authentication plugin to use when connecting to bookies ||
+|bookkeeperClientAuthenticationParametersName|  BookKeeper auth plugin implementation specifics parameters name and values  ||
+|bookkeeperClientAuthenticationParameters|||
+|bookkeeperClientTimeoutInSeconds|  Timeout for BK add / read operations  |30|
+|bookkeeperClientSpeculativeReadTimeoutInMillis|  Speculative reads are initiated if a read request doesn’t complete within a certain time. Using a value of 0 disables speculative reads |0|
+|bookkeeperClientHealthCheckEnabled|  Enable bookies health check. Bookies that have more than the configured number of failures within the interval will be quarantined for some time. During this period, new ledgers won’t be created on these bookies  |true|
+|bookkeeperClientHealthCheckIntervalSeconds||60|
+|bookkeeperClientHealthCheckErrorThresholdPerInterval||5|
+|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800|
+|bookkeeperClientRackawarePolicyEnabled|  Enable rack-aware bookie selection policy. BK will choose bookies from different racks when forming a new bookie ensemble  |true|
+|bookkeeperClientRegionawarePolicyEnabled|  Enable region-aware bookie selection policy. BK will choose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored  |false|
+|bookkeeperClientReorderReadSequenceEnabled|  Enable/disable reordering read sequence on reading entries.  |false|
+|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker  ||
+|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookie available.  ||
+|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum bookies that should be available as part of bookkeeperClientIsolationGroups else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list.  ||
+|bookkeeperClientGetBookieInfoIntervalSeconds| Set the interval to periodically check bookie info |86400|
+|bookkeeperClientGetBookieInfoRetryIntervalSeconds|  Set the interval to retry a failed bookie info lookup |60|
+|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger to be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read  all entries for a ledger. | true |
+|managedLedgerDefaultEnsembleSize|  Number of bookies to use when creating a ledger |2|
+|managedLedgerDefaultWriteQuorum| Number of copies to store for each message  |2|
+|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait before write is complete) |2|
+|managedLedgerCacheSizeMB|  Amount of memory to use for caching data payload in managed ledger. This memory is allocated from JVM direct memory and it’s shared across all the topics running in the same broker. By default, uses 1/5th of available direct memory ||
+|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry payloads when inserting in cache| false|
+|managedLedgerCacheEvictionWatermark| Threshold to which bring down the cache level when eviction is triggered  |0.9|
+|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
+|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time, will be evicted | 1000 |
+|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000|
+|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumer acking the messages  |1.0|
+|managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true: <ul><li>The max rollover time has been reached</li><li>The max entries have been written to the ledger</li><li>The max ledger size has been written to the ledger</li></ul>|50000|
+|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollover for a topic  |10|
+|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240|
+|managedLedgerCursorMaxEntriesPerLedger|  Max number of entries to append to a cursor ledger  |50000|
+|managedLedgerCursorRolloverTimeInSeconds|  Max time before triggering a rollover on a cursor ledger  |14400|
+|managedLedgerMaxUnackedRangesToPersist|  Max number of “acknowledgment holes” that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing in “ranges” of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redel [...]
+|autoSkipNonRecoverableData|  Skip reading non-recoverable/unreadable data ledgers under the managed ledger’s list. It helps when data ledgers get corrupted at BookKeeper and the managed cursor is stuck at that ledger. |false|
+|loadBalancerEnabled| Enable load balancer  |true|
+|loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection|
+|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger load report update  |10|
+|loadBalancerReportUpdateMaxIntervalMinutes|  maximum interval to update load report  |15|
+|loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect  |1|
+|loadBalancerSheddingIntervalMinutes| Load shedding interval. Broker periodically checks whether some traffic should be offload from some over-loaded broker to other under-loaded brokers  |30|
+|loadBalancerSheddingGracePeriodMinutes|  Prevent the same topics to be shed and moved to other broker more than once within this timeframe |30|
+|loadBalancerBrokerMaxTopics| Usage threshold to allocate max number of topics to broker  |50000|
+|loadBalancerBrokerUnderloadedThresholdPercentage|  Usage threshold to determine a broker as under-loaded |1|
+|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded  |85|
+|loadBalancerResourceQuotaUpdateIntervalMinutes|  Interval to update namespace bundle resource quota |15|
+|loadBalancerBrokerComfortLoadLevelPercentage|  Usage threshold to determine a broker is having just right level of load  |65|
+|loadBalancerAutoBundleSplitEnabled|  enable/disable namespace bundle auto split  |false|
+|loadBalancerNamespaceBundleMaxTopics|  maximum topics in a bundle, otherwise bundle split will be triggered  |1000|
+|loadBalancerNamespaceBundleMaxSessions|  maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered  |1000|
+|loadBalancerNamespaceBundleMaxMsgRate| maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered  |1000|
+|loadBalancerNamespaceBundleMaxBandwidthMbytes| maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered  |100|
+|loadBalancerNamespaceMaximumBundles| maximum number of bundles in a namespace  |128|
+|replicationMetricsEnabled| Enable replication metrics  |true|
+|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links.  |16|
+|replicationProducerQueueSize|  Replicator producer queue size  |1000|
+|replicatorPrefix|  Replicator prefix used for replicator producer name and cursor name |pulsar.repl|
+|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false|
+|defaultRetentionTimeInMinutes| Default message retention time  ||
+|defaultRetentionSizeInMB|  Default retention size  |0|
+|keepAliveIntervalSeconds|  How often to check whether the connections are still alive  |30|
+|loadManagerClassName|  Name of load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl|
+|supportedNamespaceBundleSplitAlgorithms| Supported algorithms name for namespace bundle split |[range_equally_divide,topic_count_equally_divide]|
+|defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide|
+|managedLedgerOffloadDriver|  Driver to use to offload old data to long term storage (Possible values: S3)  ||
+|managedLedgerOffloadMaxThreads|  Maximum number of thread pool threads for ledger offloading |2|
+|managedLedgerUnackedRangesOpenCacheSetEnabled|  Use Open Range-Set to cache unacknowledged messages |true|
+|managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long term storage and the ledger being deleted from bookkeeper | 14400000|
+|managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long term storage |-1 (disabled)|
+|s3ManagedLedgerOffloadRegion|  For Amazon S3 ledger offload, AWS region  ||
+|s3ManagedLedgerOffloadBucket|  For Amazon S3 ledger offload, Bucket to place offloaded ledger into ||
+|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, Alternative endpoint to connect to (useful for testing) ||
+|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) |67108864|
+|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default)  |1048576|
+|s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to s3 ||
+|s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload|
+| acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index acknowledgement. | false |
+| maxMessageSize | Set the maximum size of a message. | 5 MB |
+| preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false |
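+
+For illustration, the offload-related parameters above can be combined in `conf/broker.conf` to enable tiered-storage offload. This is only a sketch: the driver value follows the table above, while the region, bucket, and threshold values are placeholders rather than defaults.
+
+```properties
+
+# Sketch of conf/broker.conf entries for S3 offload (values are illustrative)
+managedLedgerOffloadDriver=S3
+s3ManagedLedgerOffloadRegion=us-west-2
+s3ManagedLedgerOffloadBucket=my-pulsar-offload-bucket
+# Offload automatically once a topic's storage exceeds 1 GiB (the default -1 disables this)
+managedLedgerOffloadAutoTriggerSizeThresholdBytes=1073741824
+# Keep offloaded ledgers in BookKeeper for 4 hours before deleting them
+managedLedgerOffloadDeletionLagMs=14400000
+
+```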
+
+
+
+
+## Client
+
+The [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool can be used to publish messages to Pulsar and consume messages from Pulsar topics. This tool can be used in lieu of a client library.
+
+|Name|Description|Default|
+|---|---|---|
+|webServiceUrl| The web URL for the cluster.  |http://localhost:8080/|
+|brokerServiceUrl|  The Pulsar protocol URL for the cluster.  |pulsar://localhost:6650/|
+|authPlugin|  The authentication plugin.  ||
+|authParams|  The authentication parameters for the cluster, as a comma-separated string. ||
+|useTls|  Whether or not TLS authentication will be enforced in the cluster.  |false|
+|tlsAllowInsecureConnection|||
+|tlsTrustCertsFilePath|||
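+
+As a quick illustration, the parameters above map directly onto the `conf/client.conf` file. A minimal sketch using the default endpoints is shown below; the commented-out token authentication lines are optional and the token path is a placeholder.
+
+```properties
+
+# Sketch of conf/client.conf with the default service endpoints
+webServiceUrl=http://localhost:8080/
+brokerServiceUrl=pulsar://localhost:6650/
+# Optional token authentication (the path is illustrative)
+# authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
+# authParams=file:///path/to/token.txt
+
+```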
+
+
+## Service discovery
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  Zookeeper quorum connection string (comma-separated)  ||
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout |30000|
+|servicePort| Port to use to serve binary-proto requests  |6650|
+|servicePortTls|  Port to use to serve binary-proto-tls requests  |6651|
+|webServicePort|  Port that the discovery service listens on |8080|
+|webServicePortTls| Port to use to server HTTPS request |8443|
+|bindOnLocalhost| Control whether to bind directly on localhost rather than on normal hostname  |false|
+|authenticationEnabled| Enable authentication |false|
+|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
+|authorizationEnabled|  Enforce authorization |false|
+|superUserRoles|  Role names that are treated as “super-user”, meaning they will be able to do all admin operations and publish/consume from all topics (comma-separated) ||
+|tlsEnabled|  Enable TLS  |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
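+
+For reference, a sketch of a discovery-service configuration is shown below, assuming the `conf/discovery.conf` file shipped with Pulsar; the ZooKeeper hostnames are placeholders.
+
+```properties
+
+# Sketch of conf/discovery.conf (hostnames are illustrative)
+zookeeperServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+configurationStoreServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+servicePort=6650
+webServicePort=8080
+
+```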
+
+
+
+## Log4j
+
+
+|Name|Default|
+|---|---|
+|pulsar.root.logger|  WARN,CONSOLE|
+|pulsar.log.dir|  logs|
+|pulsar.log.file| pulsar.log|
+|log4j.rootLogger|  ${pulsar.root.logger}|
+|log4j.appender.CONSOLE|  org.apache.log4j.ConsoleAppender|
+|log4j.appender.CONSOLE.Threshold|  DEBUG|
+|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n|
+|log4j.appender.ROLLINGFILE|  org.apache.log4j.DailyRollingFileAppender|
+|log4j.appender.ROLLINGFILE.Threshold|  DEBUG|
+|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}|
+|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n|
+|log4j.appender.TRACEFILE|  org.apache.log4j.FileAppender|
+|log4j.appender.TRACEFILE.Threshold|  TRACE|
+|log4j.appender.TRACEFILE.File| pulsar-trace.log|
+|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n|
+
+
+## Log4j shell
+
+|Name|Default|
+|---|---|
+|bookkeeper.root.logger|  ERROR,CONSOLE|
+|log4j.rootLogger|  ${bookkeeper.root.logger}|
+|log4j.appender.CONSOLE|  org.apache.log4j.ConsoleAppender|
+|log4j.appender.CONSOLE.Threshold|  DEBUG|
+|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n|
+|log4j.logger.org.apache.zookeeper| ERROR|
+|log4j.logger.org.apache.bookkeeper|  ERROR|
+|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO|
+
+
+## Standalone
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  The quorum connection string for local ZooKeeper  ||
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|brokerServicePort| The port on which the standalone broker listens for connections |6650|
+|webServicePort|  The port used by the standalone broker for HTTP requests  |8080|
+|bindAddress| The hostname or IP address on which the standalone service binds  |0.0.0.0|
+|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.  ||
+|clusterName| The name of the cluster that this broker belongs to. |standalone|
+|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000|
+|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000|
+|skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false|
+|backlogQuotaCheckEnabled|  Enable the backlog quota check, which enforces a specified action when the quota is reached.  |true|
+|backlogQuotaCheckIntervalInSeconds|  How often to check for topics that have reached the backlog quota.  |60|
+|backlogQuotaDefaultLimitGB|  The default per-topic backlog quota limit.  |10|
+|ttlDurationDefaultInSeconds|  The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
+|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. If topics are not consumed for a while, these inactive topics might be cleaned up. Deleting inactive topics is enabled by default. The default period is 1 minute. |true|
+|brokerDeleteInactiveTopicsFrequencySeconds|  How often to check for inactive topics, in seconds. |60|
+|messageExpiryCheckIntervalInMinutes| How often to proactively check and purge expired messages. |5|
+|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed.  |1000|
+|clientLibraryVersionCheckEnabled|  Enable checks for minimum allowed client library version. |false|
+|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information  |true|
+|statusFilePath|  The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs|
+|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached or until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000|
+|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer.  |200000|
+|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or negative number to disable the check|0|
+|authenticationEnabled| Enable authentication for the broker. |false|
+|authenticationProviders| A comma-separated list of class names for authentication providers. |false|
+|authorizationEnabled|  Enforce authorization in brokers. |false|
+|superUserRoles|  Role names that are treated as “superusers.” Superusers are authorized to perform all admin tasks. ||
+|brokerClientAuthenticationPlugin|  The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. ||
+|brokerClientAuthenticationParameters|  The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin.  ||
+|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list.  ||
+|exposePreciseBacklogInPrometheus| Enable exposing precise backlog stats. When set to false, the backlog is calculated from the published counter and consumed counter, which is more efficient but may be inaccurate. |false|
+|bookkeeperClientAuthenticationPlugin|  Authentication plugin to be used when connecting to bookies (BookKeeper servers). ||
+|bookkeeperClientAuthenticationParametersName|  BookKeeper authentication plugin implementation parameters and values.  ||
+|bookkeeperClientAuthenticationParameters|  Parameters associated with the bookkeeperClientAuthenticationParametersName ||
+|bookkeeperClientTimeoutInSeconds|  Timeout for BookKeeper add and read operations. |30|
+|bookkeeperClientSpeculativeReadTimeoutInMillis|  Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads.  |0|
+|bookkeeperClientHealthCheckEnabled|  Enable bookie health checks.  |true|
+|bookkeeperClientHealthCheckIntervalSeconds|  The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks.  |60|
+|bookkeeperClientHealthCheckErrorThresholdPerInterval|  Error threshold for health checks.  |5|
+|bookkeeperClientHealthCheckQuarantineTimeInSeconds|  If bookies have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds, they are quarantined for this amount of time (in seconds). |1800|
+|bookkeeperClientRackawarePolicyEnabled|    |true|
+|bookkeeperClientRegionawarePolicyEnabled|    |false|
+|bookkeeperClientReorderReadSequenceEnabled|    |false|
+|bookkeeperClientIsolationGroups|||
+|bookkeeperClientSecondaryIsolationGroups| Enable a bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available.  ||
+|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups; otherwise, the broker includes bookkeeperClientSecondaryIsolationGroups bookies in the isolated list.  ||
+|managedLedgerDefaultEnsembleSize|    |1|
+|managedLedgerDefaultWriteQuorum|   |1|
+|managedLedgerDefaultAckQuorum|   |1|
+|managedLedgerCacheSizeMB|    |1024|
+|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry payloads when inserting in cache| false|
+|managedLedgerCacheEvictionWatermark|   |0.9|
+|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
+|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in cache for more than the configured time, will be evicted | 1000 |
+|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000|
+|managedLedgerUnackedRangesOpenCacheSetEnabled|  Use Open Range-Set to cache unacknowledged messages |true|
+|managedLedgerDefaultMarkDeleteRateLimit|   |0.1|
+|managedLedgerMaxEntriesPerLedger|    |50000|
+|managedLedgerMinLedgerRolloverTimeMinutes|   |10|
+|managedLedgerMaxLedgerRolloverTimeMinutes|   |240|
+|managedLedgerCursorMaxEntriesPerLedger|    |50000|
+|managedLedgerCursorRolloverTimeInSeconds|    |14400|
+|autoSkipNonRecoverableData|    |false|
+|loadBalancerEnabled|   |false|
+|loadBalancerPlacementStrategy|   |weightedRandomSelection|
+|loadBalancerReportUpdateThresholdPercentage|   |10|
+|loadBalancerReportUpdateMaxIntervalMinutes|    |15|
+|loadBalancerHostUsageCheckIntervalMinutes|  |1|
+|loadBalancerSheddingIntervalMinutes|   |30|
+|loadBalancerSheddingGracePeriodMinutes|    |30|
+|loadBalancerBrokerMaxTopics|   |50000|
+|loadBalancerBrokerUnderloadedThresholdPercentage|    |1|
+|loadBalancerBrokerOverloadedThresholdPercentage|   |85|
+|loadBalancerResourceQuotaUpdateIntervalMinutes|    |15|
+|loadBalancerBrokerComfortLoadLevelPercentage|    |65|
+|loadBalancerAutoBundleSplitEnabled|    |false|
+|loadBalancerNamespaceBundleMaxTopics|    |1000|
+|loadBalancerNamespaceBundleMaxSessions|    |1000|
+|loadBalancerNamespaceBundleMaxMsgRate|   |1000|
+|loadBalancerNamespaceBundleMaxBandwidthMbytes|   |100|
+|loadBalancerNamespaceMaximumBundles|   |128|
+|replicationMetricsEnabled|   |true|
+|replicationConnectionsPerBroker|   |16|
+|replicationProducerQueueSize|    |1000|
+|defaultRetentionTimeInMinutes|   |0|
+|defaultRetentionSizeInMB|    |0|
+|keepAliveIntervalSeconds|    |30|
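+
+Putting a few of these together, a minimal `conf/standalone.conf` sketch that keeps the stock ports, bind address, and cluster name from the table above would look like the following.
+
+```properties
+
+# Sketch of conf/standalone.conf using the defaults listed above
+clusterName=standalone
+brokerServicePort=6650
+webServicePort=8080
+bindAddress=0.0.0.0
+# Inactive-topic cleanup at its default cadence
+brokerDeleteInactiveTopicsEnabled=true
+brokerDeleteInactiveTopicsFrequencySeconds=60
+
+```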
+
+
+
+
+
+## WebSocket
+
+|Name|Description|Default|
+|---|---|---|
+|configurationStoreServers    |||
+|zooKeeperSessionTimeoutMillis|   |30000|
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
+|serviceUrl|||
+|serviceUrlTls|||
+|brokerServiceUrl|||
+|brokerServiceUrlTls|||
+|webServicePort||8080|
+|webServicePortTls||8443|
+|bindAddress||0.0.0.0|
+|clusterName |||
+|authenticationEnabled||false|
+|authenticationProviders|||
+|authorizationEnabled||false|
+|superUserRoles |||
+|brokerClientAuthenticationPlugin|||
+|brokerClientAuthenticationParameters|||
+|tlsEnabled||false|
+|tlsAllowInsecureConnection||false|
+|tlsCertificateFilePath|||
+|tlsKeyFilePath |||
+|tlsTrustCertsFilePath|||
+
+
+## Pulsar proxy
+
+The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file.
+
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  The ZooKeeper quorum connection string (as a comma-separated list)  ||
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
+|servicePort| The port to use to serve binary Protobuf requests |6650|
+|servicePortTls|  The port to use to serve binary Protobuf TLS requests  |6651|
+|statusFilePath|  Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks ||
+|advertisedAddress|Hostname or IP address the service advertises to the outside world.|`InetAddress.getLocalHost().getHostname()`|
+|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy  |false|
+|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true|
+|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
+|authorizationEnabled|  Whether authorization is enforced by the Pulsar proxy |false|
+|authorizationProvider| Authorization provider as a fully qualified class name  |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
+|brokerClientAuthenticationPlugin|  The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientAuthenticationParameters|  The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientTrustCertsFilePath|  The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
+|superUserRoles|  Role names that are treated as “super-users,” meaning that they will be able to perform all admin operations ||
+|forwardAuthorizationCredentials| Whether client authorization credentials are forwarded to the broker for re-authorization. Authentication must be enabled via authenticationEnabled=true for this to take effect.  |false|
+|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy will reject requests beyond that. |10000|
+|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy will error out requests beyond that. |50000|
+|tlsEnabledInProxy| Whether TLS is enabled for the proxy  |false|
+|tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar brokers |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file ||
+|tlsHostnameVerificationEnabled|  Whether the hostname is validated when the proxy creates a TLS connection with brokers  |false|
+|tlsRequireTrustedClientCertOnConnect|  Whether client certificates are required for TLS. Connections are rejected if the client certificate isn’t trusted. |false|
+|tlsProtocols|Specify the TLS protocols the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLSv1.2```, ```TLSv1.1```, ```TLSv1``` ||
+|tlsCiphers|Specify the TLS ciphers the proxy uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: ```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```||
+|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`||
+|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`||
+|tokenPublicAlg| Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) |RS256|
+|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank ||
+| proxyLogLevel | Set the Pulsar Proxy log level. <li> If the value is set to 0, no TCP channel information is logged. </li><li> If the value is set to 1, only the TCP channel information and command information (without message body) are parsed and logged. </li><li> If the value is set to 2, all TCP channel information, command information, and message body are parsed and logged. </li>| 0 |
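+
+As an illustration, the sketch below shows a `conf/proxy.conf` that points the proxy at a three-node ZooKeeper ensemble and enables token authentication. The hostnames and the public-key path are placeholders, not defaults.
+
+```properties
+
+# Sketch of conf/proxy.conf (hostnames and key path are illustrative)
+zookeeperServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+configurationStoreServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+servicePort=6650
+servicePortTls=6651
+# Optional token authentication
+authenticationEnabled=true
+authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
+tokenPublicKey=file:///path/to/public.key
+
+```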
+
+## ZooKeeper
+
+ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available:
+
+
+|Name|Description|Default|
+|---|---|---|
+|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
+|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
+|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
+|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
+|clientPort|  The port on which the ZooKeeper server will listen for connections. |2181|
+|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
+|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable. Read this guide before enabling auto purge. |1|
+|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
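+
+Taken together, these defaults correspond to a `conf/zookeeper.conf` along the following lines (a sketch built only from the values in the table above):
+
+```properties
+
+# Sketch of conf/zookeeper.conf using the defaults above
+tickTime=2000
+initLimit=10
+syncLimit=5
+dataDir=data/zookeeper
+clientPort=2181
+autopurge.snapRetainCount=3
+autopurge.purgeInterval=1
+
+```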
+
+
+
+
+In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding
+a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node. Here's an example for a three-node ZooKeeper cluster:
+
+```properties
+
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+
+```
+
+> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration.
diff --git a/site2/website-next/versioned_docs/version-2.6.0/reference-connector-admin.md b/site2/website-next/versioned_docs/version-2.6.0/reference-connector-admin.md
new file mode 100644
index 0000000..4b0402f
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/reference-connector-admin.md
@@ -0,0 +1,11 @@
+---
+id: reference-connector-admin
+title: Connector Admin CLI
+sidebar_label: "Connector Admin CLI"
+original_id: reference-connector-admin
+---
+
+> **Important**
+>
+> For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
+> 
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.6.0/reference-metrics.md b/site2/website-next/versioned_docs/version-2.6.0/reference-metrics.md
new file mode 100644
index 0000000..238a1b5
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/reference-metrics.md
@@ -0,0 +1,444 @@
+---
+id: reference-metrics
+title: Pulsar Metrics
+sidebar_label: "Pulsar Metrics"
+original_id: reference-metrics
+---
+
+
+
+Pulsar exposes metrics in Prometheus format that can be collected and used for monitoring the health of the cluster.
+
+* [ZooKeeper](#zookeeper)
+* [BookKeeper](#bookkeeper)
+* [Broker](#broker)
+* [Pulsar Functions](#pulsar-functions)
+* [Proxy](#proxy)
+* [Pulsar SQL Worker](#pulsar-sql-worker)
+
+## Overview
+
+The metrics exposed by Pulsar are in Prometheus format. The types of metrics are:
+
+- [Counter](https://prometheus.io/docs/concepts/metric_types/#counter): a cumulative metric that represents a single monotonically increasing counter whose value can only increase or be reset to zero on restart.
+- [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge): a *gauge* is a metric that represents a single numerical value that can arbitrarily go up and down.
+- [Histogram](https://prometheus.io/docs/concepts/metric_types/#histogram): a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. The `_bucket` suffix is the number of observations within a histogram bucket, configured with parameter `{le="<upper inclusive bound>"}`. The `_count` suffix is the number of observations, shown as a time series and behaves like a counter. The `_sum` suffix is the sum of observed val [...]
+- [Summary](https://prometheus.io/docs/concepts/metric_types/#summary): similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window.
+
+## ZooKeeper
+
+The ZooKeeper metrics are exposed under "/metrics" at port 8000. You can use a different port
+by configuring the `stats_server_port` system property.
+
+### Server metrics
+
+| Name | Type | Description |
+|---|---|---|
+| zookeeper_server_znode_count | Gauge | The number of z-nodes stored. |
+| zookeeper_server_data_size_bytes | Gauge | The total size of all of z-nodes stored. |
+| zookeeper_server_connections | Gauge | The number of currently opened connections. |
+| zookeeper_server_watches_count | Gauge | The number of watchers registered. |
+| zookeeper_server_ephemerals_count | Gauge | The number of ephemeral z-nodes. |
+
+### Request metrics
+
+| Name | Type | Description |
+|---|---|---|
+| zookeeper_server_requests | Counter | The total number of requests received by a particular server. |
+| zookeeper_server_requests_latency_ms | Summary | The requests latency calculated in milliseconds. <br /> Available labels: *type* (write, read). <br /> <ul><li>*write*: the requests that write data to ZooKeeper.</li><li>*read*: the requests that read data from ZooKeeper.</li></ul>|
+
+## BookKeeper
+
+The BookKeeper metrics are exposed under "/metrics" at port 8000. You can change the port by updating `prometheusStatsHttpPort`
+in the `bookkeeper.conf` configuration file.
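+
+For example, to move the bookie metrics endpoint to another port, you would set the following in `conf/bookkeeper.conf`; the port value here is only illustrative.
+
+```properties
+
+# Sketch: expose bookie Prometheus stats on an alternative port
+prometheusStatsHttpPort=8001
+
+```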
+
+### Server metrics
+
+| Name | Type | Description |
+|---|---|---|
+| bookie_SERVER_STATUS | Gauge | The server status for bookie server. <br /><ul><li>1: the bookie is running in writable mode.</li><li>0: the bookie is running in readonly mode.</li></ul> |
+| bookkeeper_server_ADD_ENTRY_count | Counter | The total number of ADD_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. |
+| bookkeeper_server_READ_ENTRY_count | Counter | The total number of READ_ENTRY requests received at the bookie. The `success` label is used to distinguish successes and failures. |
+| bookie_WRITE_BYTES | Counter | The total number of bytes written to the bookie. |
+| bookie_READ_BYTES | Counter | The total number of bytes read from the bookie. |
+| bookkeeper_server_ADD_ENTRY_REQUEST | Summary | The summary of request latency of ADD_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. |
+| bookkeeper_server_READ_ENTRY_REQUEST | Summary | The summary of request latency of READ_ENTRY requests at the bookie. The `success` label is used to distinguish successes and failures. |
+
+### Journal metrics
+
+| Name | Type | Description |
+|---|---|---|
+| bookie_journal_JOURNAL_SYNC_count | Counter | The total number of journal fsync operations happening at the bookie. The `success` label is used to distinguish successes and failures. |
+| bookie_journal_JOURNAL_QUEUE_SIZE | Gauge | The total number of requests pending in the journal queue. |
+| bookie_journal_JOURNAL_FORCE_WRITE_QUEUE_SIZE | Gauge | The total number of force write (fsync) requests pending in the force-write queue. |
+| bookie_journal_JOURNAL_CB_QUEUE_SIZE | Gauge | The total number of callbacks pending in the callback queue. |
+| bookie_journal_JOURNAL_ADD_ENTRY | Summary | The summary of request latency of adding entries to the journal. |
+| bookie_journal_JOURNAL_SYNC | Summary | The summary of fsync latency of syncing data to the journal disk. |
+
+### Storage metrics
+
+| Name | Type | Description |
+|---|---|---|
+| bookie_ledgers_count | Gauge | The total number of ledgers stored in the bookie. |
+| bookie_entries_count | Gauge | The total number of entries stored in the bookie. |
+| bookie_write_cache_size | Gauge | The bookie write cache size (in bytes). |
+| bookie_read_cache_size | Gauge | The bookie read cache size (in bytes). |
+| bookie_DELETED_LEDGER_COUNT | Counter | The total number of ledgers deleted since the bookie has started. |
+| bookie_ledger_writable_dirs | Gauge | The number of writable directories in the bookie. |
+
+## Broker
+
+The broker metrics are exposed under "/metrics" at port 8080. You can change the port by updating `webServicePort` to a different port
+in the `broker.conf` configuration file.
+
+All the metrics exposed by a broker are labelled with `cluster=${pulsar_cluster}`. The value of `${pulsar_cluster}` is the pulsar cluster
+name you configured in `broker.conf`.
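+
+A minimal sketch of the related `conf/broker.conf` entries is shown below; the cluster name is a placeholder.
+
+```properties
+
+# Sketch: where broker metrics are served and how they are labelled
+webServicePort=8080
+clusterName=my-cluster
+
+```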
+
+Broker has the following kinds of metrics:
+
+* [Namespace metrics](#namespace-metrics)
+  * [Replication metrics](#replication-metrics)
+* [Topic metrics](#topic-metrics)
+  * [Replication metrics](#replication-metrics-1)
+* [ManagedLedgerCache metrics](#managedledgercache-metrics)
+* [ManagedLedger metrics](#managedledger-metrics)
+* [LoadBalancing metrics](#loadbalancing-metrics)
+  * [BundleUnloading metrics](#bundleunloading-metrics)
+  * [BundleSplit metrics](#bundlesplit-metrics)
+* [Subscription metrics](#subscription-metrics)
+* [Consumer metrics](#consumer-metrics)
+* [ManagedLedger bookie client metrics](#managed-ledger-bookie-client-metrics)
+* [Jetty metrics](#jetty-metrics)
+
+### Namespace metrics
+
+> Namespace metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to `false`.
+
+All the namespace metrics are labelled with the following labels:
+
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
+
+| Name | Type | Description |
+|---|---|---|
+| pulsar_topics_count | Gauge | The number of Pulsar topics of the namespace owned by this broker. |
+| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the namespace served by this broker. |
+| pulsar_producers_count | Gauge | The number of active producers of the namespace connected to this broker. |
+| pulsar_consumers_count | Gauge | The number of active consumers of the namespace connected to this broker. |
+| pulsar_rate_in | Gauge | The total message rate of the namespace coming into this broker (messages/second). |
+| pulsar_rate_out | Gauge | The total message rate of the namespace going out from this broker (messages/second). |
+| pulsar_throughput_in | Gauge | The total throughput of the namespace coming into this broker (bytes/second). |
+| pulsar_throughput_out | Gauge | The total throughput of the namespace going out from this broker (bytes/second). |
+| pulsar_storage_size | Gauge | The total storage size of the topics in this namespace owned by this broker (bytes). |
+| pulsar_storage_backlog_size | Gauge | The total backlog size of the topics of this namespace owned by this broker (messages). |
+| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this namespace offloaded to the tiered storage (bytes). |
+| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this namespace (message batches / second). |
+| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this namespace (message batches / second). |
+| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) delayed for dispatching. |
+| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a namespace that the storage write latency is smaller with a given threshold.<br /> Available thresholds: <br /><ul><li>pulsar_storage_write_latency_le_0_5: <= 0.5ms </li><li>pulsar_storage_write_latency_le_1: <= 1ms</li><li>pulsar_storage_write_latency_le_5: <= 5ms</li><li>pulsar_storage_write_latency_le_10: <= 10ms</li><li>pulsar_storage_write_latency_le_20: <= 20ms</li><li>pulsar_storage_write_latency_le_50: <= 50ms</ [...]
+| pulsar_entry_size_le_* | Histogram | The entry rate of a namespace that the entry size is smaller with a given threshold.<br /> Available thresholds: <br /><ul><li>pulsar_entry_size_le_128: <= 128 bytes </li><li>pulsar_entry_size_le_512: <= 512 bytes</li><li>pulsar_entry_size_le_1_kb: <= 1 KB</li><li>pulsar_entry_size_le_2_kb: <= 2 KB</li><li>pulsar_entry_size_le_4_kb: <= 4 KB</li><li>pulsar_entry_size_le_16_kb: <= 16 KB</li><li>pulsar_entry_size_le_100_kb: <= 100 KB</li><li>pulsar_ent [...]
+
+#### Replication metrics
+
+If a namespace is configured to be replicated between multiple Pulsar clusters, the corresponding replication metrics will also be exposed when `replicationMetricsEnabled` is enabled.
+
+All the replication metrics will also be labelled with `remoteCluster=${pulsar_remote_cluster}`.
+
+| Name | Type | Description |
+|---|---|---|
+| pulsar_replication_rate_in | Gauge | The total message rate of the namespace replicating from remote cluster (messages/second). |
+| pulsar_replication_rate_out | Gauge | The total message rate of the namespace replicating to remote cluster (messages/second). |
+| pulsar_replication_throughput_in | Gauge | The total throughput of the namespace replicating from remote cluster (bytes/second). |
+| pulsar_replication_throughput_out | Gauge | The total throughput of the namespace replicating to remote cluster (bytes/second). |
+| pulsar_replication_backlog | Gauge | The total backlog of the namespace replicating to remote cluster (messages). |
+
+### Topic metrics
+
+> Topic metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to true.
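+
+As a sketch, per-topic metrics are switched on with the following flag in `conf/broker.conf`; the consumer-level flag is shown commented out because it is only needed for the consumer metrics section further below.
+
+```properties
+
+# Sketch of conf/broker.conf flags controlling metric granularity
+exposeTopicLevelMetricsInPrometheus=true
+# Required, together with the flag above, for consumer-level metrics
+# exposeConsumerLevelMetricsInPrometheus=true
+
+```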
+
+All the topic metrics are labelled with the following labels:
+
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
+- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
+
+| Name | Type | Description |
+|---|---|---|
+| pulsar_subscriptions_count | Gauge | The number of Pulsar subscriptions of the topic served by this broker. |
+| pulsar_producers_count | Gauge | The number of active producers of the topic connected to this broker. |
+| pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. |
+| pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). |
+| pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). |
+| pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). |
+| pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). |
+| pulsar_storage_size | Gauge | The total storage size of this topic owned by this broker (bytes). |
+| pulsar_storage_backlog_size | Gauge | The total backlog size of this topic owned by this broker (messages). |
+| pulsar_storage_offloaded_size | Gauge | The total amount of the data in this topic offloaded to the tiered storage (bytes). |
+| pulsar_storage_backlog_quota_limit | Gauge | The backlog quota limit for this topic (bytes). |
+| pulsar_storage_write_rate | Gauge | The total message batches (entries) written to the storage for this topic (message batches / second). |
+| pulsar_storage_read_rate | Gauge | The total message batches (entries) read from the storage for this topic (message batches / second). |
+| pulsar_subscription_delayed | Gauge | The total number of message batches (entries) delayed for dispatching. |
+| pulsar_storage_write_latency_le_* | Histogram | The entry rate of a topic that the storage write latency is smaller with a given threshold.<br /> Available thresholds: <br /><ul><li>pulsar_storage_write_latency_le_0_5: <= 0.5ms </li><li>pulsar_storage_write_latency_le_1: <= 1ms</li><li>pulsar_storage_write_latency_le_5: <= 5ms</li><li>pulsar_storage_write_latency_le_10: <= 10ms</li><li>pulsar_storage_write_latency_le_20: <= 20ms</li><li>pulsar_storage_write_latency_le_50: <= 50ms</li>< [...]
+| pulsar_entry_size_le_* | Histogram | The entry rate of a topic that the entry size is smaller with a given threshold.<br /> Available thresholds: <br /><ul><li>pulsar_entry_size_le_128: <= 128 bytes </li><li>pulsar_entry_size_le_512: <= 512 bytes</li><li>pulsar_entry_size_le_1_kb: <= 1 KB</li><li>pulsar_entry_size_le_2_kb: <= 2 KB</li><li>pulsar_entry_size_le_4_kb: <= 4 KB</li><li>pulsar_entry_size_le_16_kb: <= 16 KB</li><li>pulsar_entry_size_le_100_kb: <= 100 KB</li><li>pulsar_entry_s [...]
+| pulsar_in_bytes_total | Counter | The total number of bytes received for this topic |
+| pulsar_in_messages_total | Counter | The total number of messages received for this topic |
+| pulsar_out_bytes_total | Counter | The total number of bytes read from this topic |
+| pulsar_out_messages_total | Counter | The total number of messages read from this topic |
+
+#### Replication metrics
+
+If a namespace that a topic belongs to is configured to be replicated between multiple Pulsar clusters, the corresponding replication metrics will also be exposed when `replicationMetricsEnabled` is enabled.
+
+All the replication metrics will also be labelled with `remoteCluster=${pulsar_remote_cluster}`.
+
+| Name | Type | Description |
+|---|---|---|
+| pulsar_replication_rate_in | Gauge | The total message rate of the topic replicating from remote cluster (messages/second). |
+| pulsar_replication_rate_out | Gauge | The total message rate of the topic replicating to remote cluster (messages/second). |
+| pulsar_replication_throughput_in | Gauge | The total throughput of the topic replicating from remote cluster (bytes/second). |
+| pulsar_replication_throughput_out | Gauge | The total throughput of the topic replicating to remote cluster (bytes/second). |
+| pulsar_replication_backlog | Gauge | The total backlog of the topic replicating to remote cluster (messages). |
+
+### ManagedLedgerCache metrics
+All the ManagedLedgerCache metrics are labelled with the following labels:
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+
+| Name | Type | Description |
+| --- | --- | --- |
+| pulsar_ml_cache_evictions | Gauge | The number of cache evictions during the last minute. |
+| pulsar_ml_cache_hits_rate | Gauge | The number of cache hits per second. |
+| pulsar_ml_cache_hits_throughput | Gauge | The amount of data retrieved from the cache (bytes/s) |
+| pulsar_ml_cache_misses_rate | Gauge | The number of cache misses per second |
+| pulsar_ml_cache_misses_throughput | Gauge | The amount of data requested but not found in the cache (bytes/s) |
+| pulsar_ml_cache_pool_active_allocations | Gauge | The number of currently active allocations in direct arena |
+| pulsar_ml_cache_pool_active_allocations_huge | Gauge | The number of currently active huge allocations in direct arena |
+| pulsar_ml_cache_pool_active_allocations_normal | Gauge | The number of currently active normal allocations in direct arena |
+| pulsar_ml_cache_pool_active_allocations_small | Gauge | The number of currently active small allocations in direct arena |
+| pulsar_ml_cache_pool_active_allocations_tiny | Gauge | The number of currently active tiny allocations in direct arena |
+| pulsar_ml_cache_pool_allocated | Gauge | The total allocated memory of chunk lists in direct arena |
+| pulsar_ml_cache_pool_used | Gauge | The total used memory of chunk lists in direct arena |
+| pulsar_ml_cache_used_size | Gauge | The size in bytes used to store the entry payloads |
+| pulsar_ml_count | Gauge | The number of currently opened managed ledgers  |
+
+### ManagedLedger metrics
+All the managedLedger metrics are labelled with the following labels:
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
+- *quantile*: `quantile=${quantile}`. The quantile label is only present on `Histogram` type metrics and represents the threshold for the given bucket.
+
+| Name | Type | Description |
+| --- | --- | --- |
+| pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added |
+| pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed |
+| pulsar_ml_AddEntryLatencyBuckets | Histogram | The add entry latency of a ledger with a given quantile (threshold).<br /> Available quantile: <br /><ul><li> quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]</li> <li>quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]</li><li>quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]</li><li>quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]</li><li>quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]</ [...]
+| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The add entry latency > 1s |
+| pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added |
+| pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded |
+| pulsar_ml_EntrySizeBuckets | Histogram | The add entry size of a ledger with given quantile.<br /> Available quantile: <br /><ul><li>quantile="0.0_128.0" is EntrySize between (0byte, 128byte]</li><li>quantile="128.0_512.0" is EntrySize between (128byte, 512byte]</li><li>quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]</li><li>quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]</li><li>quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]</li><li>quantile="4096.0_1638 [...]
+| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge  | The add entry size > 1MB |
+| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with given quantile. <br /> Available quantile: <br /><ul><li>quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]</li><li>quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]</li><li>quantile="1.0_5.0" is EntrySize between (1ms, 5ms]</li><li>quantile="5.0_10.0" is EntrySize between (5ms, 10ms]</li><li>quantile="10.0_20.0" is EntrySize between (10ms, 20ms]</li><li>quantile="20.0_50.0" is EntrySize between (20m [...]
+| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The ledger switch latency > 1s |
+| pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete ops/s |
+| pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers |
+| pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read |
+| pulsar_ml_ReadEntriesErrors | Gauge | The number of readEntries requests that failed |
+| pulsar_ml_ReadEntriesRate | Gauge | The msg/s rate of messages read |
+| pulsar_ml_ReadEntriesSucceeded | Gauge | The number of readEntries requests that succeeded |
+| pulsar_ml_StoredMessagesSize | Gauge | The total size of the messages in active ledgers (accounting for the multiple copies stored) |
+
+### LoadBalancing metrics
+All the loadbalancing metrics are labelled with the following labels:
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+- *broker*: `broker=${broker}`. `${broker}` is the IP address of the broker.
+- *metric*: `metric="loadBalancing"`.
+
+| Name | Type | Description |
+| --- | --- | --- |
+| pulsar_lb_bandwidth_in_usage | Gauge | The broker inbound bandwidth usage (in percent). |
+| pulsar_lb_bandwidth_out_usage | Gauge | The broker outbound bandwidth usage (in percent). |
+| pulsar_lb_cpu_usage | Gauge | The broker CPU usage (in percent). |
+| pulsar_lb_directMemory_usage | Gauge | The broker process direct memory usage (in percent). |
+| pulsar_lb_memory_usage | Gauge | The broker process memory usage (in percent). |
+
+#### BundleUnloading metrics
+All the bundleUnloading metrics are labelled with the following labels:
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+- *metric*: `metric="bundleUnloading"`.
+
+| Name | Type | Description |
+| --- | --- | --- |
+| pulsar_lb_unload_broker_count | Counter | Unload broker count in this bundle unloading |
+| pulsar_lb_unload_bundle_count | Counter | Bundle unload count in this bundle unloading |
+
+#### BundleSplit metrics
+All the bundleSplit metrics are labelled with the following labels:
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+- *metric*: `metric="bundlesSplit"`.
+
+| Name | Type | Description |
+| --- | --- | --- |
+| pulsar_lb_bundles_split_count | Counter | The bundle split count in this bundle splitting check interval |
+
+### Subscription metrics
+
+> Subscription metrics are only exposed when `exposeTopicLevelMetricsInPrometheus` is set to true.
+
+All the subscription metrics are labelled with the following labels:
+
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
+- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
+- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
+
+| Name | Type | Description |
+|---|---|---|
+| pulsar_subscription_back_log | Gauge | The total backlog of a subscription (messages). |
+| pulsar_subscription_delayed | Gauge | The total number of messages delayed to be dispatched for a subscription (messages). |
+| pulsar_subscription_msg_rate_redeliver | Gauge | The total message rate for message being redelivered (messages/second). |
+| pulsar_subscription_unacked_messages | Gauge | The total number of unacknowledged messages of a subscription (messages). |
+| pulsar_subscription_blocked_on_unacked_messages | Gauge | Indicates whether a subscription is blocked on unacknowledged messages or not. <br /> <ul><li>1 means the subscription is blocked on waiting for unacknowledged messages to be acked.</li><li>0 means the subscription is not blocked on waiting for unacknowledged messages to be acked.</li></ul> |
+| pulsar_subscription_msg_rate_out | Gauge | The total message dispatch rate for a subscription (messages/second). |
+| pulsar_subscription_msg_throughput_out | Gauge | The total message dispatch throughput for a subscription (bytes/second). |
+
+### Consumer metrics
+
+> Consumer metrics are only exposed when both `exposeTopicLevelMetricsInPrometheus` and `exposeConsumerLevelMetricsInPrometheus`
+> are set to true.
+
+All the consumer metrics are labelled with the following labels:
+
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
+- *topic*: `topic=${pulsar_topic}`. `${pulsar_topic}` is the topic name.
+- *subscription*: `subscription=${subscription}`. `${subscription}` is the topic subscription name.
+- *consumer_name*: `consumer_name=${consumer_name}`. `${consumer_name}` is the topic consumer name.
+- *consumer_id*: `consumer_id=${consumer_id}`. `${consumer_id}` is the topic consumer id.
+
+| Name | Type | Description |
+|---|---|---|
+| pulsar_consumer_msg_rate_redeliver | Gauge | The total message rate for message being redelivered (messages/second). |
+| pulsar_consumer_unacked_messages | Gauge | The total number of unacknowledged messages of a consumer (messages). |
+| pulsar_consumer_blocked_on_unacked_messages | Gauge | Indicates whether a consumer is blocked on unacknowledged messages or not. <br /> <ul><li>1 means the consumer is blocked on waiting for unacknowledged messages to be acked.</li><li>0 means the consumer is not blocked on waiting for unacknowledged messages to be acked.</li></ul> |
+| pulsar_consumer_msg_rate_out | Gauge | The total message dispatch rate for a consumer (messages/second). |
+| pulsar_consumer_msg_throughput_out | Gauge | The total message dispatch throughput for a consumer (bytes/second). |
+| pulsar_consumer_available_permits | Gauge | The available permits for a consumer. |
+
+### Managed ledger bookie client metrics
+
+All the managed ledger bookie client metrics are labelled with the following labels:
+
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+
+| Name | Type | Description |
+| --- | --- | --- |
+| pulsar_managedLedger_client_bookkeeper_ml_scheduler_completed_tasks_* | Gauge |  The number of tasks completed by the scheduler executor. <br />One metric is exposed per scheduler thread, as configured by `managedLedgerNumSchedulerThreads` in `broker.conf`. <br /> |
+| pulsar_managedLedger_client_bookkeeper_ml_scheduler_queue_* | Gauge | The number of tasks queued in the scheduler executor's queue. <br />One metric is exposed per scheduler thread, as configured by `managedLedgerNumSchedulerThreads` in `broker.conf`. <br /> |
+| pulsar_managedLedger_client_bookkeeper_ml_scheduler_total_tasks_* | Gauge | The total number of tasks received by the scheduler executor. <br />One metric is exposed per scheduler thread, as configured by `managedLedgerNumSchedulerThreads` in `broker.conf`. <br /> |
+| pulsar_managedLedger_client_bookkeeper_ml_workers_completed_tasks_* | Gauge | The number of tasks completed by the worker executor. <br />One metric is exposed per worker thread, as configured by `managedLedgerNumWorkerThreads` in `broker.conf`. <br /> |
+| pulsar_managedLedger_client_bookkeeper_ml_workers_queue_* | Gauge | The number of tasks queued in the worker executor's queue. <br />One metric is exposed per worker thread, as configured by `managedLedgerNumWorkerThreads` in `broker.conf`. <br /> |
+| pulsar_managedLedger_client_bookkeeper_ml_workers_total_tasks_* | Gauge | The total number of tasks received by the worker executor. <br />One metric is exposed per worker thread, as configured by `managedLedgerNumWorkerThreads` in `broker.conf`. <br /> |
+| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_execution | Summary | The scheduler task execution latency calculated in milliseconds. |
+| pulsar_managedLedger_client_bookkeeper_ml_scheduler_task_queued | Summary | The scheduler task queued latency calculated in milliseconds. |
+| pulsar_managedLedger_client_bookkeeper_ml_workers_task_execution | Summary | The worker task execution latency calculated in milliseconds. |
+| pulsar_managedLedger_client_bookkeeper_ml_workers_task_queued | Summary | The worker task queued latency calculated in milliseconds. |
+
+### Jetty metrics
+
+> For a functions-worker running separately from brokers, its Jetty metrics are only exposed when `includeStandardPrometheusMetrics` is set to `true`.
+
+All the jetty metrics are labelled with the following labels:
+
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you have configured in the `broker.conf` file.
+
+| Name | Type | Description |
+|---|---|---|
+| jetty_requests_total | Counter | Number of requests. |
+| jetty_requests_active | Gauge | Number of requests currently active. |
+| jetty_requests_active_max | Gauge | Maximum number of requests that have been active at once. |
+| jetty_request_time_max_seconds | Gauge | Maximum time spent handling requests. |
+| jetty_request_time_seconds_total | Counter | Total time spent in all request handling. |
+| jetty_dispatched_total | Counter | Number of dispatches. |
+| jetty_dispatched_active | Gauge | Number of dispatches currently active. |
+| jetty_dispatched_active_max | Gauge | Maximum number of active dispatches being handled. |
+| jetty_dispatched_time_max | Gauge | Maximum time spent in dispatch handling. |
+| jetty_dispatched_time_seconds_total | Counter | Total time spent in dispatch handling. |
+| jetty_async_requests_total | Counter | Total number of async requests. |
+| jetty_async_requests_waiting | Gauge | Currently waiting async requests. |
+| jetty_async_requests_waiting_max | Gauge | Maximum number of waiting async requests. |
+| jetty_async_dispatches_total | Counter | Number of requests that have been asynchronously dispatched. |
+| jetty_expires_total | Counter | Number of async requests that have expired. |
+| jetty_responses_total | Counter | Number of responses, labeled by status code. The `code` label can be "1xx", "2xx", "3xx", "4xx", or "5xx". |
+| jetty_stats_seconds | Gauge | Time in seconds stats have been collected for. |
+| jetty_responses_bytes_total | Counter | Total number of bytes across all responses. |
+
+## Pulsar Functions
+
+All the Pulsar Functions metrics are labelled with the following labels:
+
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+- *namespace*: `namespace=${pulsar_namespace}`. `${pulsar_namespace}` is the namespace name.
+
+| Name | Type | Description |
+|---|---|---|
+| pulsar_function_processed_successfully_total | Counter | Total number of messages processed successfully. |
+| pulsar_function_processed_successfully_total_1min | Counter | Total number of messages processed successfully in the last 1 minute. |
+| pulsar_function_system_exceptions_total | Counter | Total number of system exceptions. |
+| pulsar_function_system_exceptions_total_1min | Counter | Total number of system exceptions in the last 1 minute. |
+| pulsar_function_user_exceptions_total | Counter | Total number of user exceptions. |
+| pulsar_function_user_exceptions_total_1min | Counter | Total number of user exceptions in the last 1 minute. |
+| pulsar_function_process_latency_ms | Summary | Process latency in milliseconds. |
+| pulsar_function_process_latency_ms_1min | Summary | Process latency in milliseconds in the last 1 minute. |
+| pulsar_function_last_invocation | Gauge | The timestamp of the last invocation of the function. |
+| pulsar_function_received_total | Counter | Total number of messages received from source. |
+| pulsar_function_received_total_1min | Counter | Total number of messages received from source in the last 1 minute. |
+
+## Proxy
+
+All the proxy metrics are labelled with the following labels:
+
+- *cluster*: `cluster=${pulsar_cluster}`. `${pulsar_cluster}` is the cluster name that you configured in `broker.conf`.
+- *kubernetes_pod_name*: `kubernetes_pod_name=${kubernetes_pod_name}`. `${kubernetes_pod_name}` is the kubernetes pod name.
+
+| Name | Type | Description |
+|---|---|---|
+| pulsar_proxy_active_connections | Gauge | Number of connections currently active in the proxy. |
+| pulsar_proxy_new_connections | Counter | Counter of connections being opened in the proxy. |
+| pulsar_proxy_rejected_connections | Counter | Counter for connections rejected due to throttling. |
+| pulsar_proxy_binary_ops | Counter | Counter of proxy operations. |
+| pulsar_proxy_binary_bytes | Counter | Counter of proxy bytes. |
+
+# Pulsar SQL Worker
+
+| Name | Type | Description |
+|---|---|---|
+| split_bytes_read | Counter | Number of bytes read from BookKeeper. |
+| split_num_messages_deserialized | Counter | Number of messages deserialized. |
+| split_num_record_deserialized | Counter | Number of records deserialized. |
+| split_bytes_read_per_query | Summary | Total number of bytes read per query. |
+| split_entry_deserialize_time | Summary | Time spent on deserializing entries. |
+| split_entry_deserialize_time_per_query | Summary | Time spent on deserializing entries per query. |
+| split_entry_queue_dequeue_wait_time | Summary | Time spent waiting to get an entry from the entry queue because it is empty. |
+| split_entry_queue_dequeue_wait_time_per_query | Summary | Total time spent waiting to get an entry from the entry queue per query. |
+| split_message_queue_dequeue_wait_time_per_query | Summary | Time spent waiting to dequeue from the message queue because it is empty, per query. |
+| split_message_queue_enqueue_wait_time | Summary | Time spent on waiting for message queue enqueue because the message queue is full. |
+| split_message_queue_enqueue_wait_time_per_query | Summary | Time spent on waiting for message queue enqueue because the message queue is full per query. |
+| split_num_entries_per_batch | Summary | Number of entries per batch. |
+| split_num_entries_per_query | Summary | Number of entries per query. |
+| split_num_messages_deserialized_per_entry | Summary | Number of messages deserialized per entry. |
+| split_num_messages_deserialized_per_query | Summary | Number of messages deserialized per query. |
+| split_read_attempts | Summary | Number of read attempts (fail if queues are full). |
+| split_read_attempts_per_query | Summary | Number of read attempts per query. |
+| split_read_latency_per_batch | Summary | Latency of reads per batch. |
+| split_read_latency_per_query | Summary | Total read latency per query. |
+| split_record_deserialize_time | Summary | Time spent on deserializing message to record. For example, Avro, JSON, and so on. |
+| split_record_deserialize_time_per_query | Summary | Time spent on deserializing message to record per query. |
+| split_total_execution_time | Summary | Total execution time. |
+
+## Monitor
+
+You can [set up a Prometheus instance](https://prometheus.io/) to collect all the metrics exposed at Pulsar components and set up [Grafana](https://grafana.com/) dashboards to display the metrics and monitor your Pulsar cluster.
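+
+Before wiring up Prometheus, a quick way to verify that metrics are being exposed is to query a broker's metrics endpoint directly. The broker address below is a placeholder; replace it with one of your own brokers (the `/metrics/` path and port 8080 assume the default broker web service configuration):
+
+```bash
+
+$ curl http://broker.example.com:8080/metrics/ | grep "^pulsar_"
+
+```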
+
+The following are some examples of Grafana dashboards:
+
+- [pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): A Grafana dashboard that displays metrics collected in Prometheus for Pulsar clusters running on Kubernetes.
+- [apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard): A collection of Grafana dashboard templates for different Pulsar components running on both Kubernetes and on-premises machines.
diff --git a/site2/website-next/versioned_docs/version-2.6.0/reference-pulsar-admin.md b/site2/website-next/versioned_docs/version-2.6.0/reference-pulsar-admin.md
new file mode 100644
index 0000000..7d39b35
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.6.0/reference-pulsar-admin.md
@@ -0,0 +1,3084 @@
+---
+id: pulsar-admin
+title: Pulsar admin CLI
+sidebar_label: "Pulsar Admin CLI"
+original_id: pulsar-admin
+---
+
+> **Important**
+>
+> This page is deprecated and not updated anymore. For the latest and complete information about `pulsar-admin`, including commands, flags, descriptions, and more, see [pulsar-admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
+
+The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more.
+
+Usage
+
+```bash
+
+$ pulsar-admin command
+
+```
+
+Commands
+* `broker-stats`
+* `brokers`
+* `clusters`
+* `functions`
+* `functions-worker`
+* `namespaces`
+* `ns-isolation-policy`
+* `sources`
+
+  For more information, see [here](io-cli.md#sources)
+* `sinks`
+  
+  For more information, see [here](io-cli.md#sinks)
+* `topics`
+* `tenants`
+* `resource-quotas`
+* `schemas`
+
+## `broker-stats`
+
+Operations to collect broker statistics
+
+```bash
+
+$ pulsar-admin broker-stats subcommand
+
+```
+
+Subcommands
+* `allocator-stats`
+* `topics(destinations)`
+* `mbeans`
+* `monitoring-metrics`
+* `load-report`
+
+
+### `allocator-stats`
+
+Dump allocator stats
+
+Usage
+
+```bash
+
+$ pulsar-admin broker-stats allocator-stats allocator-name
+
+```
+
+### `topics(destinations)`
+
+Dump topic stats
+
+Usage
+
+```bash
+
+$ pulsar-admin broker-stats topics options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--indent`|Indent JSON output|false|
+
+### `mbeans`
+
+Dump Mbean stats
+
+Usage
+
+```bash
+
+$ pulsar-admin broker-stats mbeans options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--indent`|Indent JSON output|false|
+
+
+### `monitoring-metrics`
+
+Dump metrics for monitoring
+
+Usage
+
+```bash
+
+$ pulsar-admin broker-stats monitoring-metrics options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--indent`|Indent JSON output|false|
+
+
+### `load-report`
+
+Dump broker load-report
+
+Usage
+
+```bash
+
+$ pulsar-admin broker-stats load-report
+
+```
+
+## `brokers`
+
+Operations about brokers
+
+```bash
+
+$ pulsar-admin brokers subcommand
+
+```
+
+Subcommands
+* `list`
+* `namespaces`
+* `update-dynamic-config`
+* `list-dynamic-config`
+* `get-all-dynamic-config`
+* `get-internal-config`
+* `get-runtime-config`
+* `healthcheck`
+
+### `list`
+List active brokers of the cluster
+
+Usage
+
+```bash
+
+$ pulsar-admin brokers list cluster-name
+
+```
+
+### `namespaces`
+List namespaces owned by the broker
+
+Usage
+
+```bash
+
+$ pulsar-admin brokers namespaces cluster-name options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--url`|The URL for the broker||
+
+
+### `update-dynamic-config`
+Update a broker's dynamic service configuration
+
+Usage
+
+```bash
+
+$ pulsar-admin brokers update-dynamic-config options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--config`|Service configuration parameter name||
+|`--value`|Value for the configuration parameter specified using the `--config` flag||
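+
+Example
+
+The following sketch updates one dynamic setting; the parameter name and value are only illustrative, and any name returned by `list-dynamic-config` can be used:
+
+```bash
+
+$ pulsar-admin brokers update-dynamic-config \
+--config dispatchThrottlingRatePerTopicInMsg \
+--value 1000
+
+```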
+
+
+### `list-dynamic-config`
+Get the list of updatable configuration names
+
+Usage
+
+```bash
+
+$ pulsar-admin brokers list-dynamic-config
+
+```
+
+### `delete-dynamic-config`
+Delete the dynamic service configuration of a broker
+
+Usage
+
+```bash
+
+$ pulsar-admin brokers delete-dynamic-config options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--config`|Service configuration parameter name||
+
+
+### `get-all-dynamic-config`
+Get all overridden dynamic-configuration values
+
+Usage
+
+```bash
+
+$ pulsar-admin brokers get-all-dynamic-config
+
+```
+
+### `get-internal-config`
+Get internal configuration information
+
+Usage
+
+```bash
+
+$ pulsar-admin brokers get-internal-config
+
+```
+
+### `get-runtime-config`
+Get runtime configuration values
+
+Usage
+
+```bash
+
+$ pulsar-admin brokers get-runtime-config
+
+```
+
+### `healthcheck`
+Run a health check against the broker
+
+Usage
+
+```bash
+
+$ pulsar-admin brokers healthcheck
+
+```
+
+## `clusters`
+Operations about clusters
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters subcommand
+
+```
+
+Subcommands
+* `get`
+* `create`
+* `update`
+* `delete`
+* `list`
+* `update-peer-clusters`
+* `get-peer-clusters`
+* `get-failure-domain`
+* `create-failure-domain`
+* `update-failure-domain`
+* `delete-failure-domain`
+* `list-failure-domains`
+
+
+### `get`
+Get the configuration data for the specified cluster
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters get cluster-name
+
+```
+
+### `create`
+Provisions a new cluster. This operation requires Pulsar super-user privileges.
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters create cluster-name options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--broker-url`|The URL for the broker service.||
+|`--broker-url-secure`|The broker service URL for a secure connection||
+|`--url`|The web service URL for the cluster||
+|`--url-secure`|The web service URL for a secure connection||
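+
+Example
+
+The cluster name and service URLs below are placeholders for your own broker and web service addresses:
+
+```bash
+
+$ pulsar-admin clusters create cluster-1 \
+--url http://broker.example.com:8080 \
+--broker-url pulsar://broker.example.com:6650
+
+```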
+
+
+### `update`
+Update the configuration for a cluster
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters update cluster-name options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--broker-url`|The URL for the broker service.||
+|`--broker-url-secure`|The broker service URL for a secure connection||
+|`--url`|The web service URL for the cluster||
+|`--url-secure`|The web service URL for a secure connection||
+
+
+### `delete`
+Deletes an existing cluster
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters delete cluster-name
+
+```
+
+### `list`
+List the existing clusters
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters list
+
+```
+
+### `update-peer-clusters`
+Update peer cluster names
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters update-peer-clusters cluster-name options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--peer-clusters`|Comma separated peer cluster names (Pass empty string "" to delete list)||
+
+### `get-peer-clusters`
+Get list of peer clusters
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters get-peer-clusters
+
+```
+
+### `get-failure-domain`
+Get the configuration brokers of a failure domain
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters get-failure-domain cluster-name options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
+
+### `create-failure-domain`
+Create a new failure domain for a cluster (updates it if already created)
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters create-failure-domain cluster-name options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--broker-list`|Comma separated broker list||
+|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
+
+### `update-failure-domain`
+Update failure domain for a cluster (creates a new one if not exist)
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters update-failure-domain cluster-name options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--broker-list`|Comma separated broker list||
+|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
+
+### `delete-failure-domain`
+Delete an existing failure domain
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters delete-failure-domain cluster-name options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
+
+### `list-failure-domains`
+List the existing failure domains for a cluster
+
+Usage
+
+```bash
+
+$ pulsar-admin clusters list-failure-domains cluster-name
+
+```
+
+## `functions`
+
+A command-line interface for Pulsar Functions
+
+Usage
+
+```bash
+
+$ pulsar-admin functions subcommand
+
+```
+
+Subcommands
+* `localrun`
+* `create`
+* `delete`
+* `update`
+* `get`
+* `restart`
+* `stop`
+* `start`
+* `status`
+* `stats`
+* `list`
+* `querystate`
+* `putstate`
+* `trigger`
+
+
+### `localrun`
+Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster)
+
+
+Usage
+
+```bash
+
+$ pulsar-admin functions localrun options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/Docker runtime)||
+|`--disk`|The disk space in bytes that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
+|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer||
+|`--broker-service-url `|The URL of the Pulsar broker||
+|`--classname`|The function's class name||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
+|`--client-auth-params`|Client authentication param||
+|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker||
+|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
+|`--hostname-verification-enabled`|Enable hostname verification|false|
+|`--instance-id-offset`|Start the instanceIds from this offset|0|
+|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
+|`--log-topic`|The topic to which the function's logs are produced||
+|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--output`|The function's output topic (If none is specified, no output is written)||
+|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
+|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
+|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python)||
+|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
+|`--sliding-interval-count`|The number of messages after which the window slides||
+|`--sliding-interval-duration-ms`|The time duration after which the window slides||
+|`--state-storage-service-url`|The URL for the state storage service (by default Apache BookKeeper)||
+|`--tenant`|The function’s tenant||
+|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)||
+|`--user-config`|User-defined config key/values||
+|`--window-length-count`|The number of messages per window||
+|`--window-length-duration-ms`|The time duration of the window in milliseconds||
+|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent||
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--max-message-retries`|How many times should we try to process a message before giving up||
+|`--retain-ordering`|Function consumes and processes messages in order||
+|`--timeout-ms`|The message timeout in milliseconds||
+|`--tls-allow-insecure`|Allow insecure tls connection|false|
+|`--tls-trust-cert-path`|The tls trust cert file path||
+|`--use-tls`|Use tls connection|false|
+
+
+### `create`
+Create a Pulsar Function in cluster mode (i.e. deploy it on a Pulsar cluster)
+
+Usage
+
+```bash
+
+$ pulsar-admin functions create options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/Docker runtime)||
+|`--disk`|The disk space in bytes that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
+|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer||
+|`--classname`|The function's class name||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
+|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
+|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
+|`--log-topic`|The topic to which the function's logs are produced||
+|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The function's name||
+|`--namespace`|The function’s namespace||
+|`--output`|The function's output topic (If none is specified, no output is written)||
+|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
+|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
+|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python)||
+|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
+|`--sliding-interval-count`|The number of messages after which the window slides||
+|`--sliding-interval-duration-ms`|The time duration after which the window slides||
+|`--tenant`|The function’s tenant||
+|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)||
+|`--user-config`|User-defined config key/values||
+|`--window-length-count`|The number of messages per window||
+|`--window-length-duration-ms`|The time duration of the window in milliseconds||
+|`--dead-letter-topic`|The topic where all messages which could not be processed||
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--max-message-retries`|How many times should we try to process a message before giving up||
+|`--retain-ordering`|Function consumes and processes messages in order||
+|`--timeout-ms`|The message timeout in milliseconds||
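+
+Example
+
+As an illustration, a Java function packaged in a jar could be deployed as follows; the tenant, namespace, topics, jar path, and class name are all hypothetical:
+
+```bash
+
+$ pulsar-admin functions create \
+--tenant public \
+--namespace default \
+--name exclamation \
+--jar /path/to/my-functions.jar \
+--classname org.example.ExclamationFunction \
+--inputs persistent://public/default/in-topic \
+--output persistent://public/default/out-topic
+
+```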
+
+
+### `delete`
+Delete a Pulsar Function that's running on a Pulsar cluster
+
+Usage
+
+```bash
+
+$ pulsar-admin functions delete options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `update`
+Update a Pulsar Function that's been deployed to a Pulsar cluster
+
+Usage
+
+```bash
+
+$ pulsar-admin functions update options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/Docker runtime)||
+|`--disk`|The disk space in bytes that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
+|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer||
+|`--classname`|The function's class name||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
+|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
+|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
+|`--log-topic`|The topic to which the function's logs are produced||
+|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The function's name||
+|`--namespace`|The function’s namespace||
+|`--output`|The function's output topic (If none is specified, no output is written)||
+|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
+|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
+|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python)||
+|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
+|`--sliding-interval-count`|The number of messages after which the window slides||
+|`--sliding-interval-duration-ms`|The time duration after which the window slides||
+|`--tenant`|The function’s tenant||
+|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (supported for java fun only)||
+|`--user-config`|User-defined config key/values||
+|`--window-length-count`|The number of messages per window||
+|`--window-length-duration-ms`|The time duration of the window in milliseconds||
+|`--dead-letter-topic`|The topic where all messages which could not be processed||
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--max-message-retries`|How many times should we try to process a message before giving up||
+|`--retain-ordering`|Function consumes and processes messages in order||
+|`--timeout-ms`|The message timeout in milliseconds||
+
+
+### `get`
+Fetch information about a Pulsar Function
+
+Usage
+
+```bash
+
+$ pulsar-admin functions get options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `restart`
+Restart function instance
+
+Usage
+
+```bash
+
+$ pulsar-admin functions restart options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `stop`
+Stops function instance
+
+Usage
+
+```bash
+
+$ pulsar-admin functions stop options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `start`
+Starts a stopped function instance
+
+Usage
+
+```bash
+
+$ pulsar-admin functions start options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `status`
+Check the current status of a Pulsar Function
+
+Usage
+
+```bash
+
+$ pulsar-admin functions status options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--instance-id`|The function instanceId (Get-status of all instances if instance-id is not provided)||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `stats`
+Get the current stats of a Pulsar Function
+
+Usage
+
+```bash
+
+$ pulsar-admin functions stats options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--instance-id`|The function instanceId (Get-stats of all instances if instance-id is not provided)||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+### `list`
+List all of the Pulsar Functions running under a specific tenant and namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin functions list options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `querystate`
+Fetch the current state associated with a Pulsar Function running in cluster mode
+
+Usage
+
+```bash
+
+$ pulsar-admin functions querystate options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`-k`, `--key`|The key for the state you want to fetch||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false|
+
+### `putstate`
+Put a key/value pair to the state associated with a Pulsar Function
+
+Usage
+
+```bash
+
+$ pulsar-admin functions putstate options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the Pulsar Function||
+|`--name`|The name of a Pulsar Function||
+|`--namespace`|The namespace of a Pulsar Function||
+|`--tenant`|The tenant of a Pulsar Function||
+|`-s`, `--state`|The FunctionState that needs to be put||
+
+### `trigger`
+Triggers the specified Pulsar Function with a supplied value
+
+Usage
+
+```bash
+
+$ pulsar-admin functions trigger options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+|`--topic`|The specific topic name that the function consumes from that you want to inject the data to||
+|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function||
+|`--trigger-value`|The value with which you want to trigger the function||
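+
+Example
+
+The function name and trigger value below are placeholders:
+
+```bash
+
+$ pulsar-admin functions trigger \
+--tenant public \
+--namespace default \
+--name exclamation \
+--trigger-value "hello"
+
+```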
+
+
+## `functions-worker`
+Operations to collect function-worker statistics
+
+```bash
+
+$ pulsar-admin functions-worker subcommand
+
+```
+
+Subcommands
+
+* `function-stats`
+* `get-cluster`
+* `get-cluster-leader`
+* `get-function-assignments`
+* `monitoring-metrics`
+
+### `function-stats`
+
+Dump stats for all functions running on this broker
+
+Usage
+
+```bash
+
+$ pulsar-admin functions-worker function-stats
+
+```
+
+### `get-cluster`
+
+Get all workers belonging to this cluster
+
+Usage
+
+```bash
+
+$ pulsar-admin functions-worker get-cluster
+
+```
+
+### `get-cluster-leader`
+
+Get the leader of the worker cluster
+
+Usage
+
+```bash
+
+$ pulsar-admin functions-worker get-cluster-leader
+
+```
+
+### `get-function-assignments`
+
+Get the assignments of the functions across the worker cluster
+
+Usage
+
+```bash
+
+$ pulsar-admin functions-worker get-function-assignments
+
+```
+
+### `monitoring-metrics`
+
+Dump metrics for Monitoring
+
+Usage
+
+```bash
+
+$ pulsar-admin functions-worker monitoring-metrics
+
+```
+
+## `namespaces`
+
+Operations for managing namespaces
+
+```bash
+
+$ pulsar-admin namespaces subcommand
+
+```
+
+Subcommands
+* `list`
+* `topics`
+* `policies`
+* `create`
+* `delete`
+* `set-deduplication`
+* `set-auto-topic-creation`
+* `remove-auto-topic-creation`
+* `set-auto-subscription-creation`
+* `remove-auto-subscription-creation`
+* `permissions`
+* `grant-permission`
+* `revoke-permission`
+* `grant-subscription-permission`
+* `revoke-subscription-permission`
+* `set-clusters`
+* `get-clusters`
+* `get-backlog-quotas`
+* `set-backlog-quota`
+* `remove-backlog-quota`
+* `get-persistence`
+* `set-persistence`
+* `get-message-ttl`
+* `set-message-ttl`
+* `get-anti-affinity-group`
+* `set-anti-affinity-group`
+* `get-anti-affinity-namespaces`
+* `delete-anti-affinity-group`
+* `get-retention`
+* `set-retention`
+* `unload`
+* `split-bundle`
+* `set-dispatch-rate`
+* `get-dispatch-rate`
+* `set-replicator-dispatch-rate`
+* `get-replicator-dispatch-rate`
+* `set-subscribe-rate`
+* `get-subscribe-rate`
+* `set-subscription-dispatch-rate`
+* `get-subscription-dispatch-rate`
+* `set-subscription-expiration-time`
+* `get-subscription-expiration-time`
+* `clear-backlog`
+* `unsubscribe`
+* `set-encryption-required`
+* `set-delayed-delivery`
+* `get-delayed-delivery`
+* `set-subscription-auth-mode`
+* `get-max-producers-per-topic`
+* `set-max-producers-per-topic`
+* `get-max-consumers-per-topic`
+* `set-max-consumers-per-topic`
+* `get-max-consumers-per-subscription`
+* `set-max-consumers-per-subscription`
+* `get-max-unacked-messages-per-subscription`
+* `set-max-unacked-messages-per-subscription`
+* `get-max-unacked-messages-per-consumer`
+* `set-max-unacked-messages-per-consumer`
+* `get-compaction-threshold`
+* `set-compaction-threshold`
+* `get-offload-threshold`
+* `set-offload-threshold`
+* `get-offload-deletion-lag`
+* `set-offload-deletion-lag`
+* `clear-offload-deletion-lag`
+* `get-schema-autoupdate-strategy`
+* `set-schema-autoupdate-strategy`
+* `set-offload-policies`
+* `get-offload-policies`
+
+
+### `list`
+Get the namespaces for a tenant
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces list tenant-name
+
+```
+
+### `topics`
+Get the list of topics for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces topics tenant/namespace
+
+```
+
+### `policies`
+Get the configuration policies of a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces policies tenant/namespace
+
+```
+
+### `create`
+Create a new namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces create tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-b`, `--bundles`|The number of bundles to activate|0|
+|`-c`, `--clusters`|List of clusters this namespace will be assigned||
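+
+Example
+
+The following sketch creates a namespace with 16 bundles assigned to a single cluster; the tenant, namespace, and cluster names are placeholders:
+
+```bash
+
+$ pulsar-admin namespaces create my-tenant/my-ns \
+--bundles 16 \
+--clusters cluster-1
+
+```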
+
+
+### `delete`
+Deletes a namespace. The namespace needs to be empty
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces delete tenant/namespace
+
+```
+
+### `set-deduplication`
+Enable or disable message deduplication on a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-deduplication tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--enable`, `-e`|Enable message deduplication on the specified namespace|false|
+|`--disable`, `-d`|Disable message deduplication on the specified namespace|false|
+
+### `set-auto-topic-creation`
+Enable or disable autoTopicCreation for a namespace, overriding broker settings
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-auto-topic-creation tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--enable`, `-e`|Enable allowAutoTopicCreation on namespace|false|
+|`--disable`, `-d`|Disable allowAutoTopicCreation on namespace|false|
+|`--type`, `-t`|Type of topic to be auto-created. Possible values: (partitioned, non-partitioned)|non-partitioned|
+|`--num-partitions`, `-n`|Default number of partitions of topic to be auto-created, applicable to partitioned topics only||
+
+### `remove-auto-topic-creation`
+Remove override of autoTopicCreation for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces remove-auto-topic-creation tenant/namespace
+
+```
+
+### `set-auto-subscription-creation`
+Enable autoSubscriptionCreation for a namespace, overriding broker settings
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-auto-subscription-creation tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--enable`, `-e`|Enable allowAutoSubscriptionCreation on namespace|false|
+
+### `remove-auto-subscription-creation`
+Remove override of autoSubscriptionCreation for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces remove-auto-subscription-creation tenant/namespace
+
+```
+
+### `permissions`
+Get the permissions on a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces permissions tenant/namespace
+
+```
+
+### `grant-permission`
+Grant permissions on a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces grant-permission tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--actions`|Actions to be granted (`produce` or `consume`)||
+|`--role`|The client role to which to grant the permissions||
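+
+Example
+
+The following sketch grants both produce and consume permissions to a hypothetical role `my-app-role`:
+
+```bash
+
+$ pulsar-admin namespaces grant-permission my-tenant/my-ns \
+--role my-app-role \
+--actions produce,consume
+
+```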
+
+
+### `revoke-permission`
+Revoke permissions on a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces revoke-permission tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--role`|The client role to which to revoke the permissions||
+
+### `grant-subscription-permission`
+Grant permissions to access subscription admin-api
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--roles`|The client roles to which to grant the permissions (comma separated roles)||
+|`--subscription`|The subscription name for which permission will be granted to roles||
+
+### `revoke-subscription-permission`
+Revoke permissions to access subscription admin-api
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--role`|The client role to which to revoke the permissions||
+|`--subscription`|The subscription name for which permission will be revoked from roles||
+
+### `set-clusters`
+Set replication clusters for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-clusters tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--clusters`|Replication clusters ID list (comma-separated values)||
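+
+Example
+
+The following sketch replicates a namespace across two clusters; the namespace and cluster names are placeholders:
+
+```bash
+
+$ pulsar-admin namespaces set-clusters my-tenant/my-ns \
+--clusters cluster-1,cluster-2
+
+```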
+
+
+### `get-clusters`
+Get replication clusters for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-clusters tenant/namespace
+
+```
+
+### `get-backlog-quotas`
+Get the backlog quota policies for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-backlog-quotas tenant/namespace
+
+```
+
+### `set-backlog-quota`
+Set a backlog quota policy for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-backlog-quota tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
+|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
+
+Example
+
+```bash
+
+$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
+--limit 2G \
+--policy producer_request_hold
+
+```
+
+### `remove-backlog-quota`
+Remove a backlog quota policy from a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces remove-backlog-quota tenant/namespace
+
+```
+
+### `get-persistence`
+Get the persistence policies for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-persistence tenant/namespace
+
+```
+
+### `set-persistence`
+Set the persistence policies for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-persistence tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-a`, `--bookkeeper-ack-quorum`|The number of acks (guaranteed copies) to wait for each entry|0|
+|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0|
+|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0|
+|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)||
+
+
+### `get-message-ttl`
+Get the message TTL for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-message-ttl tenant/namespace
+
+```
+
+### `set-message-ttl`
+Set the message TTL for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-message-ttl tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-ttl`, `--messageTTL`|Message TTL in seconds. When the value is set to `0`, TTL is disabled. TTL is disabled by default. |0|
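+
+Example
+
+The following sketch expires unacknowledged messages after two hours (7200 seconds); the namespace name is a placeholder:
+
+```bash
+
+$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
+--messageTTL 7200
+
+```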
+
+### `get-anti-affinity-group`
+Get Anti-affinity group name for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace
+
+```
+
+### `set-anti-affinity-group`
+Set Anti-affinity group name for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-g`, `--group`|Anti-affinity group name||
+
+### `get-anti-affinity-namespaces`
+Get Anti-affinity namespaces grouped with the given anti-affinity group name
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-anti-affinity-namespaces options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--cluster`|Cluster name||
+|`-g`, `--group`|Anti-affinity group name||
+|`-p`, `--tenant`|The tenant is only used for authorization. The client has to be an admin of the tenant to access this API||
+
+### `delete-anti-affinity-group`
+Remove Anti-affinity group name for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace
+
+```
+
+### `get-retention`
+Get the retention policy for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-retention tenant/namespace
+
+```
+
+### `set-retention`
+Set the retention policy for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-retention tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-s`, `--size`|The retention size limits (for example 10M, 16G or 3T). 0 means no retention and -1 means infinite size retention||
+|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention||
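+
+Example
+
+The following sketch retains data for 7 days and up to a size of 10G; the tenant and namespace names are placeholders:
+
+```bash
+
+$ pulsar-admin namespaces set-retention my-tenant/my-ns \
+--time 7d \
+--size 10G
+
+```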
+
+
+### `unload`
+Unload a namespace or namespace bundle from the current serving broker.
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces unload tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
+
+### `split-bundle`
+Split a namespace-bundle from the current serving broker
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces split-bundle tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
+|`-u`, `--unload`|Unload newly split bundles after splitting old bundle|false|
+
+### `set-dispatch-rate`
+Set message-dispatch-rate for all topics of the namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1 if not specified)|-1|
+|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not specified)|1|
+|`-md`, `--msg-dispatch-rate`|The message dispatch rate (defaults to -1 if not specified)|-1|
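+
+Example
+
+The following sketch limits dispatch to 1000 messages per second across all topics in a namespace; the namespace name is a placeholder:
+
+```bash
+
+$ pulsar-admin namespaces set-dispatch-rate my-tenant/my-ns \
+--msg-dispatch-rate 1000 \
+--dispatch-rate-period 1
+
+```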
+
+### `get-dispatch-rate`
+Get configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0)
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-dispatch-rate tenant/namespace
+
+```
+
+### `set-replicator-dispatch-rate`
+Set replicator message-dispatch-rate for all topics of the namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-replicator-dispatch-rate tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1 if not specified)|-1|
+|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not specified)|1|
+|`-md`, `--msg-dispatch-rate`|The message dispatch rate (defaults to -1 if not specified)|-1|
+
+### `get-replicator-dispatch-rate`
+Get replicator configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0)
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-replicator-dispatch-rate tenant/namespace
+
+```
+
+### `set-subscribe-rate`
+Set subscribe-rate per consumer for all topics of the namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-sr`, `--subscribe-rate`|The subscribe rate (defaults to -1 if not specified)|-1|
+|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (defaults to 30 seconds if not specified)|30|
+
+### `get-subscribe-rate`
+Get configured subscribe-rate per consumer for all topics of the namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-subscribe-rate tenant/namespace
+
+```
+
+### `set-subscription-dispatch-rate`
+Set subscription message-dispatch-rate for all subscriptions of the namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1 if not specified)|-1|
+|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not specified)|1|
+|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (defaults to -1 if not specified)|-1|
+
+### `get-subscription-dispatch-rate`
+Get subscription configured message-dispatch-rate for all topics of the namespace (Disabled if value < 0)
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace
+
+```
+
+### `set-subscription-expiration-time`
+Set the subscription expiration time for a namespace (in minutes).
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-subscription-expiration-time tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-t`, `--time`|Subscription expiration time in minutes|0|
+
+### `get-subscription-expiration-time`
+Get the subscription expiration time for a namespace (in minutes).
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-subscription-expiration-time tenant/namespace
+
+```
+
+### `clear-backlog`
+Clear the backlog for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces clear-backlog tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
+|`-force`, `--force`|Whether to force a clear backlog without prompt|false|
+|`-s`, `--sub`|The subscription name||
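+
+Example
+
+The following sketch clears the backlog of a single subscription without the confirmation prompt; the namespace and subscription names are placeholders:
+
+```bash
+
+$ pulsar-admin namespaces clear-backlog my-tenant/my-ns \
+--sub my-subscription \
+--force
+
+```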
+
+
+### `unsubscribe`
+Unsubscribe the given subscription on all destinations on a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces unsubscribe tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
+|`-s`, `--sub`|The subscription name||
+
+### `set-encryption-required`
+Enable or disable message encryption required for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-encryption-required tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-d`, `--disable`|Disable message encryption required|false|
+|`-e`, `--enable`|Enable message encryption required|false|
+
+### `set-delayed-delivery`
+Set the delayed delivery policy on a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-delayed-delivery tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-d`, `--disable`|Disable delayed delivery messages|false|
+|`-e`, `--enable`|Enable delayed delivery messages|false|
+|`-t`, `--time`|The tick time used when retrying delayed message delivery|1s|
+
+
+### `get-delayed-delivery`
+Get the delayed delivery policy on a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-delayed-delivery-time tenant/namespace
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-t`, `--time`|The tick time used when retrying delayed message delivery|1s|
+
+
+### `set-subscription-auth-mode`
+Set subscription auth mode on a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. Valid options are: [None, Prefix]||
+
+### `get-max-producers-per-topic`
+Get maxProducersPerTopic for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace
+
+```
+
+### `set-max-producers-per-topic`
+Set maxProducersPerTopic for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0|
+
+### `get-max-consumers-per-topic`
+Get maxConsumersPerTopic for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace
+
+```
+
+### `set-max-consumers-per-topic`
+Set maxConsumersPerTopic for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0|
+
+### `get-max-consumers-per-subscription`
+Get maxConsumersPerSubscription for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace
+
+```
+
+### `set-max-consumers-per-subscription`
+Set maxConsumersPerSubscription for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0|
+
+### `get-max-unacked-messages-per-subscription`
+Get maxUnackedMessagesPerSubscription for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-max-unacked-messages-per-subscription tenant/namespace
+
+```
+
+### `set-max-unacked-messages-per-subscription`
+Set maxUnackedMessagesPerSubscription for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-max-unacked-messages-per-subscription tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--max-unacked-messages-per-subscription`|maxUnackedMessagesPerSubscription for a namespace|-1|
+
+### `get-max-unacked-messages-per-consumer`
+Get maxUnackedMessagesPerConsumer for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-max-unacked-messages-per-consumer tenant/namespace
+
+```
+
+### `set-max-unacked-messages-per-consumer`
+Set maxUnackedMessagesPerConsumer for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-max-unacked-messages-per-consumer tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--max-unacked-messages-per-consumer`|maxUnackedMessagesPerConsumer for a namespace|-1|
+
+
+### `get-compaction-threshold`
+Get compactionThreshold for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-compaction-threshold tenant/namespace
+
+```
+
+### `set-compaction-threshold`
+Set compactionThreshold for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 0 disables automatic compaction|0|
+
+
+### `get-offload-threshold`
+Get offloadThreshold for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-offload-threshold tenant/namespace
+
+```
+
+### `set-offload-threshold`
+Set offloadThreshold for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-offload-threshold tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-s`, `--size`|Maximum number of bytes stored in the Pulsar cluster for a topic before data starts being automatically offloaded to long-term storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1|
+
+### `get-offload-deletion-lag`
+Get offloadDeletionLag, in minutes, for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace
+
+```
+
+### `set-offload-deletion-lag`
+Set offloadDeletionLag for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-l`, `--lag`|Duration to wait after offloading a ledger segment, before deleting the copy of that segment from cluster local storage. (eg: 10m, 5h, 3d, 2w).|-1|
+
+### `clear-offload-deletion-lag`
+Clear offloadDeletionLag for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace
+
+```
+
+### `get-schema-autoupdate-strategy`
+Get the schema auto-update strategy for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace
+
+```
+
+### `set-schema-autoupdate-strategy`
+Set the schema auto-update strategy for a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full|
+|`-d`, `--disabled`|Disable automatic schema updates.|false|
+
+### `get-publish-rate`
+Get the message publish rate for each topic in a namespace, in bytes as well as messages per second 
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces get-publish-rate tenant/namespace
+
+```
+
+### `set-publish-rate`
+Set the message publish rate for each topic in a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin namespaces set-publish-rate tenant/namespace options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-m`, `--msg-publish-rate`|Threshold for number of messages per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1|
+|`-b`, `--byte-publish-rate`|Threshold for number of bytes per second per topic in the namespace (-1 implies not set, 0 for no limit).|-1|
+
+## `ns-isolation-policy`
+Operations for managing namespace isolation policies.
+
+Usage
+
+```bash
+
+$ pulsar-admin ns-isolation-policy subcommand
+
+```
+
+Subcommands
+* `set`
+* `get`
+* `list`
+* `delete`
+* `brokers`
+* `broker`
+
+### `set`
+Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges.
+
+Usage
+
+```bash
+
+$ pulsar-admin ns-isolation-policy set cluster-name policy-name options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]|
+|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]|
+|`--namespaces`|Comma-separated namespaces regex list|[]|
+|`--primary`|Comma-separated primary broker regex list|[]|
+|`--secondary`|Comma-separated secondary broker regex list|[]|
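+
+Example
+
+The following sketch pins namespaces matching a regex to a primary broker group with a `min_available` auto failover policy; the cluster name, policy name, and all regexes and parameter values are placeholders:
+
+```bash
+
+$ pulsar-admin ns-isolation-policy set cluster-1 my-policy \
+--namespaces "my-tenant/my-ns.*" \
+--primary "broker1.*" \
+--secondary "broker2.*" \
+--auto-failover-policy-type min_available \
+--auto-failover-policy-params min_limit=1,usage_threshold=80
+
+```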
+
+
+### `get`
+Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges.
+
+Usage
+
+```bash
+
+$ pulsar-admin ns-isolation-policy get cluster-name policy-name
+
+```
+
+### `list`
+List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges.
+
+Usage
+
+```bash
+
+$ pulsar-admin ns-isolation-policy list cluster-name
+
+```
+
+### `delete`
+Delete namespace isolation policy of a cluster. This operation requires superuser privileges.
+
+Usage
+
+```bash
+
+$ pulsar-admin ns-isolation-policy delete
+
+```
+
+### `brokers`
+List all brokers with namespace-isolation policies attached to them. This operation requires Pulsar super-user privileges.
+
+Usage
+
+```bash
+
+$ pulsar-admin ns-isolation-policy brokers cluster-name
+
+```
+
+### `broker`
+Get broker with namespace-isolation policies attached to it. This operation requires Pulsar super-user privileges.
+
+Usage
+
+```bash
+
+$ pulsar-admin ns-isolation-policy broker cluster-name options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`--broker`|Broker name to get namespace-isolation policies attached to it||
+
+## `topics`
+Operations for managing Pulsar topics (both persistent and non-persistent)
+
+Usage
+
+```bash
+
+$ pulsar-admin topics subcommand
+
+```
+
+Subcommands
+* `compact`
+* `compaction-status`
+* `offload`
+* `offload-status`
+* `create-partitioned-topic`
+* `create-missed-partitions`
+* `delete-partitioned-topic`
+* `create`
+* `get-partitioned-topic-metadata`
+* `update-partitioned-topic`
+* `list-partitioned-topics`
+* `list`
+* `terminate`
+* `permissions`
+* `grant-permission`
+* `revoke-permission`
+* `lookup`
+* `bundle-range`
+* `delete`
+* `unload`
+* `create-subscription`
+* `subscriptions`
+* `unsubscribe`
+* `stats`
+* `stats-internal`
+* `info-internal`
+* `partitioned-stats`
+* `partitioned-stats-internal`
+* `skip`
+* `clear-backlog`
+* `expire-messages`
+* `expire-messages-all-subscriptions`
+* `peek-messages`
+* `reset-cursor`
+* `get-message-by-id`
+* `last-message-id`
+
+### `compact`
+Run compaction on the specified topic (persistent topics only)
+
+Usage
+
+```bash
+
+$ pulsar-admin topics compact persistent://tenant/namespace/topic
+
+```
+
+### `compaction-status`
+Check the status of a topic compaction (persistent topics only)
+
+Usage
+
+```bash
+
+$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic
+
+```
+
+Options
+
+|Flag|Description|Default|
+|----|---|---|
+|`-w`, `--wait-complete`|Wait for compaction to complete|false|
+
+
+### `offload`
+Trigger offload of data from a topic to long-term storage (e.g. Amazon S3)
+
+Usage
+
+```bash
+
+$ pulsar-admin topics offload persistent://tenant/namespace/topic options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic||
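+
+Example
+
+The following sketch triggers offloading so that at most 10G of the topic's data is kept in BookKeeper; the topic name is a placeholder:
+
+```bash
+
+$ pulsar-admin topics offload persistent://my-tenant/my-ns/my-topic \
+--size-threshold 10G
+
+```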
+
+
+### `offload-status`
+Check the status of data offloading from a topic to long-term storage
+
+Usage
+
+```bash
+
+$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-w`, `--wait-complete`|Wait for offloading to complete|false|
+
+
+### `create-partitioned-topic`
+Create a partitioned topic. A partitioned topic must be created before producers can publish to it.
+
+:::note
+
+By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to prevent generating trash data.
+To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
+To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
+For more information about these two parameters, see [here](reference-configuration.md#broker).
+
+:::
+
+Usage
+
+```bash
+
+$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-p`, `--partitions`|The number of partitions for the topic|0|
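+
+Example
+
+The following sketch creates a persistent topic with 4 partitions; the tenant, namespace, and topic names are placeholders:
+
+```bash
+
+$ pulsar-admin topics create-partitioned-topic persistent://my-tenant/my-ns/my-topic \
+--partitions 4
+
+```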
+
+### `create-missed-partitions`
+Try to create partitions for a partitioned topic. The partitions of a partitioned topic have to be created before they can be used; this command can be used to repair missing partitions when topic auto-creation is disabled.
+
+Usage
+
+```bash
+
+$ pulsar-admin topics create-missed-partitions persistent://tenant/namespace/topic
+
+```
+
+### `delete-partitioned-topic`
+Delete a partitioned topic. This will also delete all the partitions of the topic if they exist.
+
+Usage
+
+```bash
+
+$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic
+
+```
+
+### `create`
+Creates a non-partitioned topic. A non-partitioned topic must explicitly be created by the user if allowAutoTopicCreation or createIfMissing is disabled.
+
+:::note
+
+By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to prevent generating trash data.
+To disable this feature, set `brokerDeleteInactiveTopicsEnabled`  to `false`.
+To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
+For more information about these two parameters, see [here](reference-configuration.md#broker).
+
+:::
+
+Usage
+
+```bash
+
+$ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic
+
+```
+
+### `get-partitioned-topic-metadata`
+Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this will return an empty topic with zero partitions.
+
+Usage
+
+```bash
+
+$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic
+
+```
+
+### `update-partitioned-topic`
+Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions.
+
+Usage
+
+```bash
+
+$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
+
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`-p`, `--partitions`|The number of partitions for the topic|0|
+
+### `list-partitioned-topics`
+Get the list of partitioned topics under a namespace.
+
+Usage
+
+```bash
+
+$ pulsar-admin topics list-partitioned-topics tenant/namespace
+
+```
+
+### `list`
+Get the list of topics under a namespace
+
+Usage
+
+```bash
+
+$ pulsar-admin topics list tenant/cluster/namespace
+
+```
+
+### `terminate`
+Terminate a persistent topic (disallow further messages from being published on the topic)
+
+Usage
+
+```bash
+
+$ pulsar-admin topics terminate persistent://tenant/namespace/topic
+
+```
+
+### `permissions`
+Get the permissions on a topic. Retrieve the effective permissions for a destination. These permissions are defined by the permissions set at the namespace level combined (union) with any eventual specific permissions set on the topic.
+
+Usage
+
+```bash
+
+$ pulsar-admin topics permissions topic
+
+```
+
+### `grant-permission`
+Grant a new permission to a client role on a single topic
+
+Usage
+
+```bash
+
... 1102 lines suppressed ...