Posted to commits@pulsar.apache.org by mm...@apache.org on 2019/05/31 15:35:07 UTC

[pulsar] branch master updated: Version docs for 2.3.2 release (#4417)

This is an automated email from the ASF dual-hosted git repository.

mmerli pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new 7de6c24  Version docs for 2.3.2 release (#4417)
7de6c24 is described below

commit 7de6c24422fdb0a50675bb806dd8584cc1465666
Author: Boyang Jerry Peng <je...@gmail.com>
AuthorDate: Fri May 31 08:35:00 2019 -0700

    Version docs for 2.3.2 release (#4417)
---
 site2/website/releases.json                        |    1 +
 .../versioned_docs/version-2.3.2/adaptors-spark.md |   77 +
 .../version-2.3.2/admin-api-namespaces.md          |  759 ++++++
 .../version-2.3.2/administration-geo.md            |  129 +
 .../version-2.3.2/concepts-clients.md              |   80 +
 .../version-2.3.2/concepts-messaging.md            |  381 +++
 .../version-2.3.2/deploy-bare-metal.md             |  444 ++++
 .../version-2.3.2/functions-guarantees.md          |   42 +
 .../version-2.3.2/functions-worker.md              |  241 ++
 .../version-2.3.2/getting-started-clients.md       |   57 +
 .../version-2.3.2/getting-started-docker.md        |  171 ++
 .../version-2.3.2/getting-started-standalone.md    |  221 ++
 .../versioned_docs/version-2.3.2/io-connectors.md  |   30 +
 .../versioned_docs/version-2.3.2/io-redis.md       |   28 +
 .../version-2.3.2/reference-cli-tools.md           |  698 ++++++
 .../version-2.3.2/reference-configuration.md       |  490 ++++
 .../version-2.3.2/reference-pulsar-admin.md        | 2552 ++++++++++++++++++++
 .../version-2.3.2/security-kerberos.md             |  284 +++
 .../version-2.3.2/security-overview.md             |   42 +
 .../versioned_sidebars/version-2.3.2-sidebars.json |  127 +
 site2/website/versions.json                        |    1 +
 21 files changed, 6855 insertions(+)

diff --git a/site2/website/releases.json b/site2/website/releases.json
index b55858b..06ee139 100644
--- a/site2/website/releases.json
+++ b/site2/website/releases.json
@@ -1,4 +1,5 @@
 [
+  "2.3.2",
   "2.3.1",
   "2.3.0",
   "2.2.1",
diff --git a/site2/website/versioned_docs/version-2.3.2/adaptors-spark.md b/site2/website/versioned_docs/version-2.3.2/adaptors-spark.md
new file mode 100644
index 0000000..b28faf0
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/adaptors-spark.md
@@ -0,0 +1,77 @@
+---
+id: version-2.3.2-adaptors-spark
+title: Pulsar adaptor for Apache Spark
+sidebar_label: Apache Spark
+original_id: adaptors-spark
+---
+
+The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive data from Pulsar.
+
+An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming Pulsar receiver and can process it in a variety of ways.
+
+## Prerequisites
+
+To use the receiver, include a dependency on the `pulsar-spark` library in your build configuration.
+
+### Maven
+
+If you're using Maven, add this to your `pom.xml`:
+
+```xml
+<!-- in your <properties> block -->
+<pulsar.version>{{pulsar:version}}</pulsar.version>
+
+<!-- in your <dependencies> block -->
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-spark</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+```
+
+### Gradle
+
+If you're using Gradle, add this to your `build.gradle` file:
+
+```groovy
+def pulsarVersion = "{{pulsar:version}}"
+
+dependencies {
+    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
+}
+```
+
+## Usage
+
+Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
+
+```java
+String serviceUrl = "pulsar://localhost:6650/";
+String topic = "persistent://public/default/test_src";
+String subs = "test_sub";
+
+SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");
+
+JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
+
+ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData<>();
+
+Set<String> set = new HashSet<>();
+set.add(topic);
+pulsarConf.setTopicNames(set);
+pulsarConf.setSubscriptionName(subs);
+
+SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
+    serviceUrl,
+    pulsarConf,
+    new AuthenticationDisabled());
+
+JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
+```
+
+
+## Example
+
+You can find a complete example [here](https://github.com/apache/pulsar/tree/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java).
+This example counts the number of received messages that contain the string "Pulsar".
+
diff --git a/site2/website/versioned_docs/version-2.3.2/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.3.2/admin-api-namespaces.md
new file mode 100644
index 0000000..9d8b080
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/admin-api-namespaces.md
@@ -0,0 +1,759 @@
+---
+id: version-2.3.2-admin-api-namespaces
+title: Managing Namespaces
+sidebar_label: Namespaces
+original_id: admin-api-namespaces
+---
+
+Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic).
+
+Namespaces can be managed via:
+
+* The [`namespaces`](reference-pulsar-admin.md#namespaces) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool
+* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API
+* The `namespaces` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md)
+
+## Namespaces resources
+
+### Create
+
+You can create new namespaces under a given [tenant](reference-terminology.md#tenant).
+
+#### pulsar-admin
+
+Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name:
+
+```shell
+$ pulsar-admin namespaces create test-tenant/test-namespace
+```
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace}
+
+#### Java
+
+```java
+admin.namespaces().createNamespace(namespace);
+```
+
+### Get policies
+
+You can fetch the current policies associated with a namespace at any time.
+
+#### pulsar-admin
+
+Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace:
+
+```shell
+$ pulsar-admin namespaces policies test-tenant/test-namespace
+{
+  "auth_policies": {
+    "namespace_auth": {},
+    "destination_auth": {}
+  },
+  "replication_clusters": [],
+  "bundles_activated": true,
+  "bundles": {
+    "boundaries": [
+      "0x00000000",
+      "0xffffffff"
+    ],
+    "numBundles": 1
+  },
+  "backlog_quota_map": {},
+  "persistence": null,
+  "latency_stats_sample_rate": {},
+  "message_ttl_in_seconds": 0,
+  "retention_policies": null,
+  "deleted": false
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies}
+
+#### Java
+
+```java
+admin.namespaces().getPolicies(namespace);
+```
+
+### List namespaces within a tenant
+
+You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant).
+
+#### pulsar-admin
+
+Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant:
+
+```shell
+$ pulsar-admin namespaces list test-tenant
+test-tenant/ns1
+test-tenant/ns2
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces}
+
+#### Java
+
+```java
+admin.namespaces().getNamespaces(tenant);
+```
+
+
+### Delete
+
+You can delete existing namespaces from a tenant.
+
+#### pulsar-admin
+
+Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace:
+
+```shell
+$ pulsar-admin namespaces delete test-tenant/ns1
+```
+
+#### REST
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace}
+
+#### Java
+
+```java
+admin.namespaces().deleteNamespace(namespace);
+```
+
+
+#### set replication cluster
+
+It sets the replication clusters for a namespace, so that Pulsar can internally replicate published messages from one cluster to another.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-clusters test-tenant/ns1 \
+  --clusters cl1
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters}
+```
+
+###### Java
+
+```java
+admin.namespaces().setNamespaceReplicationClusters(namespace, clusters);
+```
+
+#### get replication cluster
+
+It gives a list of replication clusters for a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-clusters test-tenant/ns1
+```
+
+```
+cl2
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/replication|operation/getNamespaceReplicationClusters}
+```
+
+###### Java
+
+```java
+admin.namespaces().getNamespaceReplicationClusters(namespace)
+```
+
+#### set backlog quota policies
+
+Backlog quotas help the broker restrict the bandwidth/storage of a namespace once it reaches a certain threshold limit. The admin can set this limit and choose one of the following actions to take after the limit is reached:
+
+  1.  producer_request_hold: the broker will hold the produce request and not persist its payload
+
+  2.  producer_exception: the broker will disconnect the client by raising an exception
+
+  3.  consumer_backlog_eviction: the broker will start discarding backlog messages
+
+  A backlog quota restriction is applied by defining a restriction of backlog-quota-type: destination_storage
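As a mental model, the quota check reduces to a small decision function over the current backlog size, the limit, and the configured policy. The sketch below is illustrative only; the class and method names are hypothetical, not Pulsar broker internals:

```java
// Hypothetical sketch of backlog-quota enforcement; not actual Pulsar code.
class BacklogQuotaSketch {

    enum RetentionPolicy { producer_request_hold, producer_exception, consumer_backlog_eviction }

    /** Decide what the broker does once backlogSize exceeds the quota limit. */
    static String enforce(long backlogSize, long limit, RetentionPolicy policy) {
        if (backlogSize <= limit) {
            return "accept";                     // under quota: nothing to do
        }
        switch (policy) {
            case producer_request_hold:     return "hold";       // hold and don't persist the payload
            case producer_exception:        return "disconnect"; // fail the producer with an exception
            case consumer_backlog_eviction: return "evict";      // discard oldest backlog messages
            default:                        return "accept";
        }
    }

    public static void main(String[] args) {
        System.out.println(enforce(15, 10, RetentionPolicy.producer_request_hold));
    }
}
```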
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-backlog-quota --limit 10 --policy producer_request_hold test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/backlogQuota|operation/setBacklogQuota}
+```
+
+###### Java
+
+```java
+admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, policy))
+```
+
+#### get backlog quota policies
+
+It shows a configured backlog quota for a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1
+```
+
+```json
+{
+  "destination_storage": {
+    "limit": 10,
+    "policy": "producer_request_hold"
+  }
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/backlogQuotaMap|operation/getBacklogQuotaMap}
+```
+
+###### Java
+
+```java
+admin.namespaces().getBacklogQuotaMap(namespace);
+```
+
+#### remove backlog quota policies
+
+It removes the backlog quota policies for a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|DELETE|/admin/v2/namespaces/{tenant}/{namespace}/backlogQuota|operation/removeBacklogQuota}
+```
+
+###### Java
+
+```java
+admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType)
+```
+
+#### set persistence policies
+
+Persistence policies allow you to configure the persistence level for all topic messages under a given namespace.
+
+  -   Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0
+
+  -   Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0
+
+  -   Bookkeeper-write-quorum: How many writes to make of each entry, default: 0
+
+  -   Ml-mark-delete-max-rate: Throttling rate of mark-delete operation (0 means no throttle), default: 0.0
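These values are subject to the usual BookKeeper invariant: the ensemble must be at least as large as the write quorum, which must be at least as large as the ack quorum. A minimal validation sketch (the helper below is hypothetical, not a Pulsar API):

```java
// Hypothetical validation sketch; BookKeeper requires ensemble >= write quorum >= ack quorum.
class PersistencePolicySketch {

    static boolean isValid(int ensemble, int writeQuorum, int ackQuorum) {
        // Each entry is written to writeQuorum bookies out of the ensemble,
        // and acknowledged once ackQuorum of those writes have succeeded.
        return ensemble >= writeQuorum && writeQuorum >= ackQuorum && ackQuorum >= 1;
    }

    public static void main(String[] args) {
        System.out.println(isValid(3, 2, 2)); // a valid 3/2/2 configuration
    }
}
```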
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/persistence|operation/setPersistence}
+```
+
+###### Java
+
+```java
+admin.namespaces().setPersistence(namespace, new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum, bookkeeperAckQuorum, managedLedgerMaxMarkDeleteRate))
+```
+
+
+#### get persistence policies
+
+It shows configured persistence policies of a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-persistence test-tenant/ns1
+```
+
+```json
+{
+  "bookkeeperEnsemble": 3,
+  "bookkeeperWriteQuorum": 2,
+  "bookkeeperAckQuorum": 2,
+  "managedLedgerMaxMarkDeleteRate": 0
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/persistence|operation/getPersistence}
+```
+
+###### Java
+
+```java
+admin.namespaces().getPersistence(namespace)
+```
+
+
+#### unload namespace bundle
+
+A namespace bundle is a virtual group of topics that belong to the same namespace. If a broker becomes overloaded with a large number of bundles, this command can unload a heavy bundle from that broker so that it can be served by some other, less-loaded broker. A namespace bundle is defined by its start and end hash range, such as 0x00000000 and 0xffffffff.
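Conceptually, a topic is assigned to the bundle whose hash range contains the hash of the topic's name (the `boundaries` list in the policies output earlier). The sketch below models that lookup; it uses Java's `String.hashCode()` as a stand-in for the broker's actual 32-bit hash function, so both the class and the hash choice are illustrative:

```java
// Illustrative sketch of topic-to-bundle assignment; not Pulsar broker code.
class BundleRangeSketch {

    /** Return the index of the bundle whose [lower, upper) range contains the topic's hash. */
    static int bundleFor(String topic, long[] boundaries) {
        long hash = Integer.toUnsignedLong(topic.hashCode()); // stand-in 32-bit hash
        for (int i = 0; i + 1 < boundaries.length; i++) {
            if (hash >= boundaries[i] && hash < boundaries[i + 1]) {
                return i;
            }
        }
        return boundaries.length - 2; // hash == 0xffffffff falls into the last bundle
    }

    public static void main(String[] args) {
        long[] boundaries = {0x00000000L, 0x80000000L, 0xffffffffL}; // two bundles
        System.out.println(bundleFor("persistent://test-tenant/ns1/my-topic", boundaries));
    }
}
```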
+
+###### CLI
+
+```
+$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|PUT|/admin/v2/namespaces/{tenant}/{namespace}/{bundle}/unload|operation/unloadNamespaceBundle}
+```
+
+###### Java
+
+```java
+admin.namespaces().unloadNamespaceBundle(namespace, bundle)
+```
+
+
+#### set message-ttl
+
+It configures the time-to-live (TTL) duration, in seconds, for messages in the namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/messageTTL|operation/setNamespaceMessageTTL}
+```
+
+###### Java
+
+```java
+admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)
+```
+
+#### get message-ttl
+
+It shows the configured message TTL for a namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-message-ttl test-tenant/ns1
+```
+
+```
+100
+```
+
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/messageTTL|operation/getNamespaceMessageTTL}
+```
+
+###### Java
+
+```java
+admin.namespaces().getNamespaceMessageTTL(namespace)
+```
+
+
+#### split bundle
+
+Each namespace bundle can contain multiple topics, and each bundle is served by exactly one broker. If a bundle becomes heavy with multiple live topics, it puts load on that broker; to resolve this, the admin can split the bundle using this command.
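The default split divides a bundle's hash range at its midpoint, yielding two contiguous bundles that together cover the original range. A minimal sketch of that arithmetic (illustrative, not broker code):

```java
// Sketch of a midpoint bundle split; hypothetical helper, not a Pulsar API.
class BundleSplitSketch {

    /** Split the range [lower, upper] into two contiguous halves at the midpoint. */
    static long[][] split(long lower, long upper) {
        long mid = lower + (upper - lower) / 2;
        return new long[][] { {lower, mid}, {mid, upper} };
    }

    public static void main(String[] args) {
        long[][] halves = split(0x00000000L, 0xffffffffL);
        System.out.printf("0x%08x_0x%08x and 0x%08x_0x%08x%n",
                halves[0][0], halves[0][1], halves[1][0], halves[1][1]);
    }
}
```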
+
+###### CLI
+
+```
+$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|PUT|/admin/v2/namespaces/{tenant}/{namespace}/{bundle}/split|operation/splitNamespaceBundle}
+```
+
+###### Java
+
+```java
+admin.namespaces().splitNamespaceBundle(namespace, bundle)
+```
+
+
+#### clear backlog
+
+It clears the message backlog of all the topics that belong to a specific namespace. You can also clear the backlog of a specific subscription.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/clearBacklog|operation/clearNamespaceBacklogForSubscription}
+```
+
+###### Java
+
+```java
+admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)
+```
+
+
+#### clear bundle backlog
+
+It clears the message backlog of all the topics that belong to a specific namespace bundle. You can also clear the backlog of a specific subscription.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces clear-backlog  --bundle 0x00000000_0xffffffff  --sub my-subscription test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/{bundle}/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription}
+```
+
+###### Java
+
+```java
+admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)
+```
+
+
+#### set retention
+
+Each namespace contains multiple topics, and each topic's retention (the storage kept for acknowledged messages) can be bounded by a size threshold and a time duration. This command configures the retention size and time for the topics in a given namespace.
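One way to picture retention is as a predicate over an acknowledged message's age and the stored size, where -1 means unlimited for that dimension. The names below are hypothetical, chosen only for this sketch:

```java
// Illustrative model of retention limits; not Pulsar internals.
class RetentionSketch {

    /**
     * An acknowledged message is kept while it is within BOTH limits:
     * younger than retentionTimeInMin and within retentionSizeInMB of storage.
     * A limit of -1 means "unlimited" for that dimension.
     */
    static boolean isRetained(long ageMinutes, long storedMB,
                              long retentionTimeInMin, long retentionSizeInMB) {
        boolean withinTime = retentionTimeInMin == -1 || ageMinutes <= retentionTimeInMin;
        boolean withinSize = retentionSizeInMB == -1 || storedMB <= retentionSizeInMB;
        return withinTime && withinSize;
    }

    public static void main(String[] args) {
        System.out.println(isRetained(50, 5, 100, 10));  // within both limits
        System.out.println(isRetained(150, 5, 100, 10)); // older than the time limit
    }
}
```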
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-retention --size 10 --time 100 test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/retention|operation/setRetention}
+```
+
+###### Java
+
+```java
+admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB))
+```
+
+
+#### get retention
+
+It shows retention information of a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-retention test-tenant/ns1
+```
+
+```json
+{
+  "retentionTimeInMinutes": 100,
+  "retentionSizeInMB": 10
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/retention|operation/getRetention}
+```
+
+###### Java
+
+```java
+admin.namespaces().getRetention(namespace)
+```
+
+#### set dispatch throttling
+
+It sets the message dispatch rate for all the topics under a given namespace.
+The dispatch rate can be restricted by the number of messages per period (`msg-dispatch-rate`) or by the number of message-bytes per period (`byte-dispatch-rate`).
+The period, in seconds, is configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
+disables the throttling.
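The throttling model can be sketched as a per-period permit counter: each period grants `msg-dispatch-rate` message permits and `byte-dispatch-rate` byte permits, and -1 disables a dimension. This is an illustrative model, not the broker's implementation:

```java
// Hypothetical per-period dispatch throttling sketch; not Pulsar broker code.
class DispatchRateSketch {

    final long msgRate;   // messages allowed per period, -1 = unlimited
    final long byteRate;  // bytes allowed per period, -1 = unlimited
    long msgsUsed, bytesUsed;

    DispatchRateSketch(long msgRate, long byteRate) {
        this.msgRate = msgRate;
        this.byteRate = byteRate;
    }

    /** Try to dispatch one message of the given size within the current period. */
    boolean tryDispatch(long msgBytes) {
        if (msgRate != -1 && msgsUsed + 1 > msgRate) return false;
        if (byteRate != -1 && bytesUsed + msgBytes > byteRate) return false;
        msgsUsed += 1;
        bytesUsed += msgBytes;
        return true;
    }

    /** Called every dispatch-rate-period seconds to refill the permits. */
    void resetPeriod() { msgsUsed = 0; bytesUsed = 0; }

    public static void main(String[] args) {
        DispatchRateSketch limiter = new DispatchRateSketch(1000, 1048576);
        System.out.println(limiter.tryDispatch(512));
    }
}
```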
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \
+  --msg-dispatch-rate 1000 \
+  --byte-dispatch-rate 1048576 \
+  --dispatch-rate-period 1
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/dispatchRate|operation/setDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
+```
+
+#### get configured message-rate
+
+It shows the configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second).
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1
+```
+
+```json
+{
+  "dispatchThrottlingRatePerTopicInMsg" : 1000,
+  "dispatchThrottlingRatePerTopicInByte" : 1048576,
+  "ratePeriodInSecond" : 1
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/dispatchRate|operation/getDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().getDispatchRate(namespace)
+```
+
+
+#### set dispatch throttling for subscription
+
+It sets the message dispatch rate for all the subscriptions of topics under a given namespace.
+The dispatch rate can be restricted by the number of messages per period (`msg-dispatch-rate`) or by the number of message-bytes per period (`byte-dispatch-rate`).
+The period, in seconds, is configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
+disables the throttling.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \
+  --msg-dispatch-rate 1000 \
+  --byte-dispatch-rate 1048576 \
+  --dispatch-rate-period 1
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/subscriptionDispatchRate|operation/setDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
+```
+
+#### get configured subscription message-rate
+
+It shows the configured subscription message-rate for the namespace (subscriptions of topics under this namespace can dispatch this many messages per second).
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1
+```
+
+```json
+{
+  "dispatchThrottlingRatePerTopicInMsg" : 1000,
+  "dispatchThrottlingRatePerTopicInByte" : 1048576,
+  "ratePeriodInSecond" : 1
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/subscriptionDispatchRate|operation/getDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().getSubscriptionDispatchRate(namespace)
+```
+
+#### set dispatch throttling for replicator
+
+It sets the message dispatch rate for all the replicators between replication clusters under a given namespace.
+The dispatch rate can be restricted by the number of messages per period (`msg-dispatch-rate`) or by the number of message-bytes per period (`byte-dispatch-rate`).
+The period, in seconds, is configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
+disables the throttling.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \
+  --msg-dispatch-rate 1000 \
+  --byte-dispatch-rate 1048576 \
+  --dispatch-rate-period 1
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/replicatorDispatchRate|operation/setDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
+```
+
+#### get configured replicator message-rate
+
+It shows the configured replicator message-rate for the namespace (replicators to other clusters can dispatch this many messages per second).
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1
+```
+
+```json
+{
+  "dispatchThrottlingRatePerTopicInMsg" : 1000,
+  "dispatchThrottlingRatePerTopicInByte" : 1048576,
+  "ratePeriodInSecond" : 1
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/replicatorDispatchRate|operation/getDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().getReplicatorDispatchRate(namespace)
+```
+
+### Namespace isolation
+
+Coming soon.
+
+### Unloading from a broker
+
+You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it.
+
+#### pulsar-admin
+
+Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command.
+
+###### CLI
+
+```shell
+$ pulsar-admin namespaces unload my-tenant/my-ns
+```
+
+###### REST
+
+```
+{@inject: endpoint|PUT|/admin/v2/namespaces/{tenant}/{namespace}/unload|operation/unloadNamespace}
+```
+
+###### Java
+
+```java
+admin.namespaces().unload(namespace)
+```
diff --git a/site2/website/versioned_docs/version-2.3.2/administration-geo.md b/site2/website/versioned_docs/version-2.3.2/administration-geo.md
new file mode 100644
index 0000000..c8b4c4d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/administration-geo.md
@@ -0,0 +1,129 @@
+---
+id: version-2.3.2-administration-geo
+title: Pulsar geo-replication
+sidebar_label: Geo-replication
+original_id: administration-geo
+---
+
+*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+
+## How it works
+
+The diagram below illustrates the process of geo-replication across Pulsar clusters:
+
+![Replication Diagram](assets/geo-replication.png)
+
+In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
+
+Without geo-replication, **C1** and **C2** consumers are not able to consume messages published by **P3** producer.
+
+## Geo-replication and Pulsar properties
+
+Geo-replication must be enabled on a per-tenant basis in Pulsar. Geo-replication can be enabled between clusters only when a tenant has been created that allows access to both clusters.
+
+Although geo-replication must be enabled between two clusters, it's actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
+
+* [Enable geo-replication namespaces](#enabling-geo-replication-namespaces)
+* Configure that namespace to replicate across two or more provisioned clusters
+
+Any message published on *any* topic in that namespace will be replicated to all clusters in the specified set.
+
+## Local persistence and forwarding
+
+When messages are produced on a Pulsar topic, they are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions.
+
+Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
+
+> #### Subscriptions are local to a cluster
+> While producers and consumers can publish to and consume from any cluster in a Pulsar instance, subscriptions are local to the clusters in which they are created and cannot be transferred between clusters. If you do need to transfer a subscription, you’ll need to create a new subscription in the desired cluster.
+
+In the aforementioned example, the **T1** topic is being replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
+
+All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers will receive all messages published by **P1**, **P2**, and **P3** producers. Ordering is still guaranteed on a per-producer basis.
+
+## Configuring replication
+
+As stated in [Geo-replication and Pulsar properties](#geo-replication-and-pulsar-properties) section, geo-replication in Pulsar is managed at the [tenant](reference-terminology.md#tenant) level.
+
+### Granting permissions to properties
+
+To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create it or at a later time.
+
+Specify all the intended clusters when creating a tenant:
+
+```shell
+$ bin/pulsar-admin tenants create my-tenant \
+  --admin-roles my-admin-role \
+  --allowed-clusters us-west,us-east,us-cent
+```
+
+To update permissions of an existing tenant, use `update` instead of `create`.
+
+### Enabling geo-replication namespaces
+
+You can create a namespace with the following command:
+
+```shell
+$ bin/pulsar-admin namespaces create my-tenant/my-namespace
+```
+
+Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand:
+
+```shell
+$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \
+  --clusters us-west,us-east,us-cent
+```
+
+The replication clusters for a namespace can be changed at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes.
+
+### Using topics with geo-replication
+
+Once you've created a geo-replication namespace, any topics that producers or consumers create within that namespace will be replicated across clusters. Typically, each application will use the `serviceUrl` for the local cluster.
+
+#### Selective replication
+
+By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message will be replicated only to the subset in the replication list.
+
+The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when constructing the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object:
+
+```java
+List<String> restrictReplicationTo = Arrays.asList(
+        "us-west",
+        "us-east"
+);
+
+Producer<byte[]> producer = client.newProducer()
+        .topic("some-topic")
+        .create();
+
+producer.newMessage()
+        .value("my-payload".getBytes())
+        .setReplicationClusters(restrictReplicationTo)
+        .send();
+```
+
+#### Topic stats
+
+Topic-specific statistics for geo-replication topics are available via the [`pulsar-admin`](reference-pulsar-admin.md) tool and {@inject: rest:REST:/} API:
+
+```shell
+$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic
+```
+
+Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs.
+
+#### Deleting a geo-replication topic
+
+Given that geo-replication topics exist in multiple regions, it's not possible to directly delete a geo-replication topic. Instead, you should rely on automatic topic garbage collection.
+
+In Pulsar, a topic is automatically deleted when all three of the following conditions hold:
+- no producers or consumers are connected to it;
+- there are no subscriptions to it;
+- no more messages are kept for retention.
+For geo-replication topics, each region uses a fault-tolerant mechanism to decide when it's safe to delete the topic locally.
+
+You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker).
+
+To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic.
diff --git a/site2/website/versioned_docs/version-2.3.2/concepts-clients.md b/site2/website/versioned_docs/version-2.3.2/concepts-clients.md
new file mode 100644
index 0000000..92114f6
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/concepts-clients.md
@@ -0,0 +1,80 @@
+---
+id: version-2.3.2-concepts-clients
+title: Pulsar Clients
+sidebar_label: Clients
+original_id: concepts-clients
+---
+
+Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md), and [C++](client-libraries-cpp.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications.
+
+Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff.
+
+> #### Custom client libraries
+> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md).
+
+
+## Client setup phase
+
+When an application wants to create a producer/consumer, the Pulsar client library will initiate a setup phase that is composed of two steps:
+
+1. The client will attempt to determine the owner of the topic by sending an HTTP lookup request to a broker. The request could reach one of the active brokers which, by looking at the (cached) ZooKeeper metadata, will know who is serving the topic or, in case nobody is serving it, will try to assign it to the least-loaded broker.
+1. Once the client library has the broker address, it will create a TCP connection (or reuse an existing connection from the pool) and authenticate it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client will send a command to create producer/consumer to the broker, which will comply after having validated the authorization policy.
+
+Whenever the TCP connection breaks, the client will immediately re-initiate this setup phase and will keep trying with exponential backoff to re-establish the producer or consumer until the operation succeeds.
+
+## Reader interface
+
+In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they've been processed.  Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription will begin reading with the first message created afterwards.  Whenever a consumer  [...]
+
+The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:
+
+* The **earliest** available message in the topic
+* The **latest** available message in the topic
+* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.
+
+The reader interface is helpful for use cases like using Pulsar to provide [effectively-once](https://streaml.io/blog/exactly-once/) processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.
+
+![The Pulsar consumer and reader interfaces](assets/pulsar-reader-consumer-interfaces.png)
+
+> ### Non-partitioned topics only
+> The reader interface for Pulsar cannot currently be used with [partitioned topics](concepts-messaging.md#partitioned-topics).
+
+Here's a Java example that begins reading from the earliest available message on a topic:
+
+```java
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.MessageId;
+import org.apache.pulsar.client.api.Reader;
+
+// Create a reader on a topic and for a specific message (and onward)
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic("reader-api-test")
+    .startMessageId(MessageId.earliest)
+    .create();
+
+while (true) {
+    Message<byte[]> message = reader.readNext();
+
+    // Process the message
+}
+```
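+
+If you want to read only up to the current end of the topic instead of blocking forever, the Java `Reader` interface also exposes a `hasMessageAvailable()` check (in newer client versions). A sketch, assuming the `reader` created above:
+
+```java
+// Drain the topic up to the last currently-available message, then stop
+while (reader.hasMessageAvailable()) {
+    Message<byte[]> message = reader.readNext();
+    // Process the message
+}
+```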
+
+To create a reader that will read from the latest available message:
+
+```java
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic(topic)
+    .startMessageId(MessageId.latest)
+    .create();
+```
+
+To create a reader that will read from some message between earliest and latest:
+
+```java
+byte[] msgIdBytes = // Some byte array
+MessageId id = MessageId.fromByteArray(msgIdBytes);
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic(topic)
+    .startMessageId(id)
+    .create();
+```
diff --git a/site2/website/versioned_docs/version-2.3.2/concepts-messaging.md b/site2/website/versioned_docs/version-2.3.2/concepts-messaging.md
new file mode 100644
index 0000000..24599da
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/concepts-messaging.md
@@ -0,0 +1,381 @@
+---
+id: version-2.3.2-concepts-messaging
+title: Messaging Concepts
+sidebar_label: Messaging
+original_id: concepts-messaging
+---
+
+Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern, aka pub-sub. In this pattern, [producers](#producers) publish messages to [topics](#topics). [Consumers](#consumers) can then [subscribe](#subscription-modes) to those topics, process incoming messages, and send an acknowledgement when processing is complete.
+
+Once a subscription has been created, all messages will be [retained](concepts-architecture-overview.md#persistent-storage) by Pulsar, even if the consumer gets disconnected. Retained messages will be discarded only when a consumer acknowledges that they've been successfully processed.
+
+## Messages
+
+Messages are the basic "unit" of Pulsar. They're what producers publish to topics and what consumers then consume from topics (and acknowledge when the message has been processed). Messages are the analogue of letters in a postal service system.
+
+Component | Purpose
+:---------|:-------
+Value / data payload | The data carried by the message. All Pulsar messages carry raw bytes, although message data can also conform to data [schemas](concepts-schema-registry.md)
+Key | Messages can optionally be tagged with keys, which can be useful for things like [topic compaction](concepts-topic-compaction.md)
+Properties | An optional key/value map of user-defined properties
+Producer name | The name of the producer that produced the message (producers are automatically given default names, but you can apply your own explicitly as well)
+Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. A message's sequence ID is its ordering in that sequence.
+Publish time | The timestamp of when the message was published (automatically applied by the producer)
+Event time | An optional timestamp that applications can attach to the message representing when something happened, e.g. when the message was processed. The event time of a message is 0 if none is explicitly set.
+
+
+> For a more in-depth breakdown of Pulsar message contents, see the documentation on Pulsar's [binary protocol](developing-binary-protocol.md).
+
+## Producers
+
+A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker) for processing.
+
+### Send modes
+
+Producers can send messages to brokers either synchronously (sync) or asynchronously (async).
+
+| Mode       | Description                                                                                                                                                                                                                                                                                                                                                              |
+|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Sync send  | The producer will wait for acknowledgement from the broker after sending each message. If acknowledgment isn't received then the producer will consider the send operation a failure.                                                                                                                                                                                    |
+| Async send | The producer will put the message in a blocking queue and return immediately. The client library will then send the message to the broker in the background. If the queue is full (max size [configurable](reference-configuration.md#broker)), the producer could be blocked or fail immediately when calling the API, depending on arguments passed to the producer. |
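+
+As a sketch (a fragment assuming a `Producer<byte[]>` named `producer`, already built with the Java client), the two modes look like this:
+
+```java
+// Sync send: blocks until the broker acknowledges the message;
+// throws PulsarClientException if the send fails
+producer.send("sync-message".getBytes());
+
+// Async send: returns immediately with a CompletableFuture that
+// completes with the MessageId once the broker acknowledges the message
+producer.sendAsync("async-message".getBytes())
+        .thenAccept(msgId -> System.out.println("Published: " + msgId));
+```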
+
+### Compression
+
+Messages published by producers can be compressed during transportation in order to save bandwidth. Pulsar currently supports the following types of compression:
+
+* [LZ4](https://github.com/lz4/lz4)
+* [ZLIB](https://zlib.net/)
+* [ZSTD](https://facebook.github.io/zstd/)
+* [SNAPPY](https://google.github.io/snappy/)
+
+### Batching
+
+If batching is enabled, the producer will accumulate and send a batch of messages in a single request. Batching size is defined by the maximum number of messages and maximum publish latency.
+
+## Consumers
+
+A consumer is a process that attaches to a topic via a subscription and then receives messages.
+
+### Receive modes
+
+Messages can be received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).
+
+| Mode          | Description                                                                                                                                                                                                   |
+|:--------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Sync receive  | A sync receive will be blocked until a message is available.                                                                                                                                                  |
+| Async receive | An async receive will return immediately with a future value---a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java, for example---that completes once a new message is available. |
+
+### Listeners
+
+Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.
+
+### Acknowledgement
+
+When a consumer has consumed a message successfully, the consumer sends an acknowledgement request to the broker, so that the broker will discard the message. Otherwise, it [stores](concepts-architecture-overview.md#persistent-storage) the message.
+
+Messages can be acknowledged either one by one or cumulatively. With cumulative acknowledgement, the consumer only needs to acknowledge the last message it received. All messages in the stream up to (and including) the provided message will not be re-delivered to that consumer.
+
+
+> Cumulative acknowledgement cannot be used with [shared subscription mode](#subscription-modes), because shared mode involves multiple consumers having access to the same subscription.
+
+In the shared subscription mode, messages can be acknowledged individually.
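+
+As a fragment (assuming a `Consumer<byte[]>` named `consumer`, created with the Java client), the two acknowledgement styles look like this:
+
+```java
+Message<byte[]> msg = consumer.receive();
+
+// Individual acknowledgement: marks only this message as processed
+consumer.acknowledge(msg);
+
+// Alternatively, cumulative acknowledgement: marks this message and every
+// earlier message on the subscription as processed (not allowed in shared mode)
+consumer.acknowledgeCumulative(msg);
+```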
+
+### Negative acknowledgement
+
+When a consumer fails to process a message and wants it to be redelivered, the consumer can send a negative acknowledgement to the broker, which then redelivers the message.
+
+Messages can be negatively acknowledged one by one or cumulatively, depending on the subscription mode.
+
+In the exclusive and failover subscription modes, consumers can only negatively acknowledge the last message they have received.
+
+In the shared and Key_Shared subscription modes, you can negatively acknowledge messages individually.
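+
+A minimal usage sketch (a fragment assuming an existing `consumer`; `process` is a hypothetical application method):
+
+```java
+Message<byte[]> msg = consumer.receive();
+try {
+    process(msg);               // hypothetical application logic
+    consumer.acknowledge(msg);
+} catch (Exception e) {
+    // Ask the broker to redeliver this message later
+    consumer.negativeAcknowledge(msg);
+}
+```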
+
+### Acknowledgement timeout
+
+When a message is not consumed successfully and you want the broker to redeliver it automatically, you can use the unacknowledged-message automatic redelivery mechanism. The client tracks unacknowledged messages within the configured acknowledgement timeout window, and automatically sends a `redeliver unacknowledged messages` request to the broker when that window elapses.
+
+> Note    
+> Prefer negative acknowledgement over acknowledgement timeout. Negative acknowledgement controls the redelivery of individual messages more precisely, and avoids unwanted redeliveries when the message processing time exceeds the acknowledgement timeout.
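+
+A fragment showing how the timeout might be configured when building a Java consumer (assuming an existing `pulsarClient`; topic and subscription names are placeholders):
+
+```java
+Consumer<byte[]> consumer = pulsarClient.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        // Redeliver any message left unacknowledged for 10 seconds
+        .ackTimeout(10, TimeUnit.SECONDS)
+        .subscribe();
+```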
+
+### Dead letter topic
+
+Dead letter topic enables you to continue consuming new messages even when some messages cannot be consumed successfully. In this mechanism, messages that fail to be consumed are stored in a separate topic, called a dead letter topic. You can decide how to handle the messages in the dead letter topic.
+
+The following example shows how to enable dead letter topic in Java client.
+
+```java
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+              .topic(topic)
+              .subscriptionName("my-subscription")
+              .subscriptionType(SubscriptionType.Shared)
+              .deadLetterPolicy(DeadLetterPolicy.builder()
+                    .maxRedeliverCount(maxRedeliveryCount)
+                    .build())
+              .subscribe();
+```
+
+Dead letter topic depends on message redelivery, which is triggered by either negative acknowledgement or acknowledgement timeout. Prefer negative acknowledgement, as it controls redelivery more precisely.
+
+> Note    
+> Currently, dead letter topic is enabled only in the shared subscription mode.
+
+## Topics
+
+As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from [producers](reference-terminology.md#producer) to [consumers](reference-terminology.md#consumer). Topic names are URLs that have a well-defined structure:
+
+```http
+{persistent|non-persistent}://tenant/namespace/topic
+```
+
+Topic name component | Description
+:--------------------|:-----------
+`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kind of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics) (persistent is the default, so if you don't specify a type the topic will be persistent). With persistent topics, all messages are durably [persisted](concepts-architecture-overview.md#persistent-storage) on disk (that means on multiple disks unless the broker is standalone) [...]
+`tenant`             | The topic's tenant within the instance. Tenants are essential to multi-tenancy in Pulsar and can be spread across clusters.
+`namespace`          | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant can have multiple namespaces.
+`topic`              | The final part of the name. Topic names are freeform and have no special meaning in a Pulsar instance.
+
+
+> #### No need to explicitly create new topics
+> You don't need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar will automatically create that topic under the [namespace](#namespaces) provided in the [topic name](#topics).
+
+
+## Namespaces
+
+A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. The topic `my-tenant/app1` is a namespace for the application `app1` for `my-tenant`. You can create any number of [topics](#topics) under the namespace.
+
+## Subscription modes
+
+A subscription is a named configuration rule that determines how messages are delivered to consumers. There are three available subscription modes in Pulsar: [exclusive](#exclusive), [shared](#shared), and [failover](#failover). These modes are illustrated in the figure below.
+
+![Subscription modes](assets/pulsar-subscription-modes.png)
+
+### Exclusive
+
+In *exclusive* mode, only a single consumer is allowed to attach to the subscription. If more than one consumer attempts to subscribe to a topic using the same subscription, the consumer receives an error.
+
+In the diagram above, only **Consumer-A** is allowed to consume messages.
+
+> Exclusive mode is the default subscription mode.
+
+![Exclusive subscriptions](assets/pulsar-exclusive-subscriptions.png)
+
+### Failover
+
+In *failover* mode, multiple consumers can attach to the same subscription. The consumers are lexically sorted by name, and the first consumer is initially the only one receiving messages. This consumer is called the *master consumer*.
+
+When the master consumer disconnects, all (non-acked and subsequent) messages will be delivered to the next consumer in line.
+
+In the diagram above, Consumer-C-1 is the master consumer while Consumer-C-2 would be the next in line to receive messages if Consumer-C-1 disconnected.
+
+![Failover subscriptions](assets/pulsar-failover-subscriptions.png)
+
+### Shared
+
+In *shared* or *round robin* mode, multiple consumers can attach to the same subscription. Messages are delivered in a round robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged will be rescheduled for sending to the remaining consumers.
+
+In the diagram above, **Consumer-B-1** and **Consumer-B-2** are able to subscribe to the topic, but **Consumer-C-1** and others could as well.
+
+> #### Limitations of shared mode
+> There are two important things to be aware of when using shared mode:
+> * Message ordering is not guaranteed.
+> * You cannot use cumulative acknowledgment with shared mode.
+
+![Shared subscriptions](assets/pulsar-shared-subscriptions.png)
+
+### Key_shared
+
+In *Key_Shared* mode, multiple consumers can attach to the same subscription. Messages are distributed across the consumers, and all messages with the same key (or the same ordering key) are delivered to the same consumer. No matter how many times a message is redelivered, it is delivered to that consumer. When a consumer connects or disconnects, some message keys are reassigned to a different consumer.
+
+> #### Limitations of Key_Shared mode
+> There are two important things to be aware of when using Key_Shared mode:
+> * You need to specify a key or orderingKey for messages
+> * You cannot use cumulative acknowledgment with Key_Shared mode.
+
+![Key_Shared subscriptions](assets/pulsar-key-shared-subscriptions.png)
+
+**Key_Shared subscription is a beta feature. You can disable it in the [broker configuration](reference-configuration.md#broker).**
+
+## Multi-topic subscriptions
+
+When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:
+
+* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
+* By explicitly defining a list of topics
+
+> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces)
+
+When subscribing to multiple topics, the Pulsar client will automatically make a call to the Pulsar API to discover the topics that match the regex pattern/list and then subscribe to all of them. If any of the topics don't currently exist, the consumer will auto-subscribe to them once the topics are created.
+
+> #### No ordering guarantees
+> When a consumer subscribes to multiple topics, all ordering guarantees normally provided by Pulsar on single topics do not hold. If your use case for Pulsar involves any strict ordering requirements, we would strongly recommend against using this feature.
+
+Here are some multi-topic subscription examples for Java:
+
+```java
+import java.util.regex.Pattern;
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+PulsarClient pulsarClient = // Instantiate Pulsar client object
+
+// Subscribe to all topics in a namespace
+Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
+Consumer<byte[]> allTopicsConsumer = pulsarClient.newConsumer()
+        .topicsPattern(allTopicsInNamespace)
+        .subscriptionName("subscription-1")
+        .subscribe();
+
+// Subscribe to a subset of topics in a namespace, based on regex
+Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
+Consumer<byte[]> someTopicsConsumer = pulsarClient.newConsumer()
+        .topicsPattern(someTopicsInNamespace)
+        .subscriptionName("subscription-1")
+        .subscribe();
+```
+
+For code examples, see:
+
+* [Java](client-libraries-java.md#multi-topic-subscriptions)
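+
+The explicit-list form can be sketched like this (a fragment assuming an existing `pulsarClient`; the topic names are placeholders):
+
+```java
+List<String> topics = Arrays.asList(
+        "persistent://public/default/topic-1",
+        "persistent://public/default/topic-2");
+
+Consumer<byte[]> multiTopicConsumer = pulsarClient.newConsumer()
+        .topics(topics)
+        .subscriptionName("subscription-1")
+        .subscribe();
+```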
+
+## Partitioned topics
+
+Normal topics can be served only by a single broker, which limits the topic's maximum throughput. *Partitioned topics* are a special type of topic that can be handled by multiple brokers, which allows for much higher throughput.
+
+Behind the scenes, a partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.
+
+The diagram below illustrates this:
+
+![](assets/partitioning.png)
+
+Here, the topic **Topic1** has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).
+
+Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription mode](#subscription-modes) determines which messages go to which consumers.
+
+Decisions about routing and subscription modes can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions while subscription decisions should be guided by application semantics.
+
+There is no difference between partitioned topics and normal topics in terms of how subscription modes work, as partitioning only determines what happens between when a message is published by a producer and processed and acknowledged by a consumer.
+
+Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.
+
+### Routing modes
+
+When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.
+
+There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} available:
+
+Mode     | Description 
+:--------|:------------
+`RoundRobinPartition` | If no key is provided, the producer will publish messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; instead, the same partition is used for the duration of the batching delay, to ensure batching is effective. If a key is specified on the message, the partitioned producer will hash the key and assign the message to a particular partition. This is the default mode.
+`SinglePartition`     | If no key is provided, the producer will randomly pick a single partition and publish all messages to that partition. If a key is specified on the message, the partitioned producer will hash the key and assign the message to a particular partition.
+`CustomPartition`     | Use a custom message router implementation that will be called to determine the partition for a particular message. Users can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
+
+### Ordering guarantee
+
+The ordering of messages is related to the `MessageRoutingMode` and the message key. Usually, applications want a per-key-partition ordering guarantee.
+
+If there is a key attached to message, the messages will be routed to corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either `SinglePartition` or `RoundRobinPartition` mode.
+
+Ordering guarantee | Description | Routing Mode and Key
+:------------------|:------------|:------------
+Per-key-partition  | All the messages with the same key will be in order and be placed in same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and Key is provided by each message.
+Per-producer       | All the messages from the same producer will be in order. | Use `SinglePartition` mode, and no Key is provided for each message.
+
+### Hashing scheme
+
+{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message.
+
+There are two standard hashing functions available: `JavaStringHash` and `Murmur3_32Hash`. The default hashing function for the producer is `JavaStringHash`. Note that `JavaStringHash` is not suitable when producers are written in multiple client languages; in that case, `Murmur3_32Hash` is recommended.
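+
+To make the key-to-partition mapping concrete, here is a minimal, self-contained sketch of a `JavaStringHash`-style scheme. This is only an illustration of the general idea; the actual implementation lives inside the Pulsar client, and the Murmur3 variant is omitted:
+
+```java
+public class KeyToPartition {
+
+    // JavaStringHash-style hash: Java's String.hashCode(), masked non-negative
+    static int javaStringHash(String key) {
+        return key.hashCode() & Integer.MAX_VALUE;
+    }
+
+    // The chosen partition is the hash modulo the number of partitions
+    static int choosePartition(String key, int numPartitions) {
+        return javaStringHash(key) % numPartitions;
+    }
+
+    public static void main(String[] args) {
+        // The same key always maps to the same partition, which is what
+        // gives the per-key-partition ordering guarantee
+        System.out.println(choosePartition("user-42", 5));
+        System.out.println(choosePartition("user-42", 5));
+    }
+}
+```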
+
+
+
+## Non-persistent topics
+
+
+By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
+
+Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.
+
+Non-persistent topics have names of this form (note the `non-persistent` in the name):
+
+```http
+non-persistent://tenant/namespace/topic
+```
+
+> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).
+
+In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases [...]
+
+> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.
+
+By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the [`pulsar-admin topics`](reference-pulsar-admin.md#topics) interface.
+
+### Performance
+
+Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages, and instead send acks back to the producer as soon as the message is delivered to all connected subscribers. Producers thus see comparatively low publish latency with non-persistent topics.
+
+### Client API
+
+Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription modes---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.
+
+Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:
+
+```java
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+String npTopic = "non-persistent://public/default/my-topic";
+String subscriptionName = "my-subscription-name";
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic(npTopic)
+        .subscriptionName(subscriptionName)
+        .subscribe();
+```
+
+Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic(npTopic)
+        .create();
+```
+
+## Message retention and expiry
+
+By default, Pulsar message brokers:
+
+* immediately delete *all* messages that have been acknowledged by a consumer, and
+* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.
+
+Pulsar has two features, however, that enable you to override this default behavior:
+
+* Message **retention** enables you to store messages that have been acknowledged by a consumer
+* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged
+
+> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.
+
+The diagram below illustrates both concepts:
+
+![Message retention and expiry](assets/retention-expiry.png)
+
+With message retention, shown at the top, a <span style="color: #89b557;">retention policy</span> applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are <span style="color: #bb3b3e;">deleted</span>. Without a retention policy, *all* of the <span style="color: #19967d;">acknowledged messages</span> would be deleted.
+
+With message expiry, shown at the bottom, some messages are <span style="color: #bb3b3e;">deleted</span>, even though they <span style="color: #337db6;">haven't been acknowledged</span>, because they've expired according to the <span style="color: #e39441;">TTL applied to the namespace</span> (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).
+
+## Message deduplication
+
+Message **duplication** occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message ***de*duplication** is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, *even if the message is received more than once*.
+
+The following diagram illustrates what happens when message deduplication is disabled vs. enabled:
+
+![Pulsar message deduplication](assets/message-deduplication.png)
+
+
+Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.
+
+In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.
+
+> Message deduplication is handled at the namespace level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).
+
+
+### Producer idempotency
+
+The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, message deduplication is handled at the [broker](reference-terminology.md#broker) level, which means that you don't need to modify your Pulsar client code. Instead, you only need to make administrative changes (see the [Managi [...]
+
+### Deduplication and effectively-once semantics
+
+Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide [effectively-once](https://streaml.io/blog/exactly-once) processing semantics. Messaging systems that don't offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplica [...]
+
+> More in-depth information can be found in [this post](https://streaml.io/blog/pulsar-effectively-once/) on the [Streamlio blog](https://streaml.io/blog).
+
+
diff --git a/site2/website/versioned_docs/version-2.3.2/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.3.2/deploy-bare-metal.md
new file mode 100644
index 0000000..7911ff5
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/deploy-bare-metal.md
@@ -0,0 +1,444 @@
+---
+id: version-2.3.2-deploy-bare-metal
+title: Deploying a cluster on bare metal
+sidebar_label: Bare metal
+original_id: deploy-bare-metal
+---
+
+
+> ### Tips
+>
+> 1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you're interested in experimenting with
+> Pulsar or using it in a startup or on a single team, we recommend opting for a single cluster. If you do need to run a multi-cluster Pulsar instance,
+> however, see the guide [here](deploy-bare-metal-multi-cluster.md).
+>
+> 2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
+> package and make sure it is installed under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you
+> run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
+>
+> 3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
+> package and make sure it is installed under the `offloaders` directory in the pulsar directory on every broker node. For details on how to configure
+> this feature, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
+
+Deploying a Pulsar cluster involves doing the following (in order):
+
+* Deploying a [ZooKeeper](#deploying-a-zookeeper-cluster) cluster (optional)
+* Initializing [cluster metadata](#initializing-cluster-metadata)
+* Deploying a [BookKeeper](#deploying-a-bookkeeper-cluster) cluster
+* Deploying one or more Pulsar [brokers](#deploying-pulsar-brokers)
+
+## Preparation
+
+### Requirements
+
+> If you already have an existing ZooKeeper cluster and would like to reuse it, you don't need to prepare the machines
+> for running ZooKeeper.
+
+To run Pulsar on bare metal, we recommend that you have:
+
+* At least 6 Linux machines or VMs
+  * 3 running [ZooKeeper](https://zookeeper.apache.org)
+  * 3 running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie
+* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts
+
+> However, if you don't have enough machines, or want to try out Pulsar in cluster mode and expand the cluster later,
+> you can deploy Pulsar on a single node, running ZooKeeper, a bookie, and a broker on the same machine.
+
+Each machine in your cluster will need to have [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or higher installed.
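
As a hypothetical sketch of how a provisioning script might enforce the Java requirement, the snippet below parses the major version out of a `java -version` banner; the sample line stands in for the real output of `java -version 2>&1 | head -n 1`:

```shell
# Hypothetical sketch: extract the Java major version from a version banner.
# The sample string stands in for real `java -version` output.
VERSION_LINE='openjdk version "1.8.0_252"'

# Handles both legacy ("1.8.0_252") and modern ("11.0.7") version strings
MAJOR=$(echo "$VERSION_LINE" | sed -E 's/.*"(1\.)?([0-9]+).*/\2/')
echo "Java major version: $MAJOR"
```

A script could then refuse to proceed when `MAJOR` is below 8.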
+
+Here's a diagram showing the basic setup:
+
+![alt-text](assets/pulsar-basic-setup.png)
+
+In this diagram, connecting clients need to be able to communicate with the Pulsar cluster using a single URL, in this case `pulsar-cluster.acme.com`, that abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.
+
+### Hardware considerations
+
+When deploying a Pulsar cluster, we have some basic recommendations that you should keep in mind during capacity planning.
+
+#### ZooKeeper
+
+For machines running ZooKeeper, we recommend using lighter-weight machines or VMs. Pulsar uses ZooKeeper only for periodic coordination- and configuration-related tasks, *not* for basic operations. If you're running Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance would likely suffice.
+
+#### Bookies & Brokers
+
+For machines running a bookie and a Pulsar broker, we recommend using more powerful machines. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines we also recommend:
+
+* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
+* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)
+
+## Installing the Pulsar binary package
+
+> You'll need to install the Pulsar binary package on *each machine in the cluster*, including machines running [ZooKeeper](#deploying-a-zookeeper-cluster) and [BookKeeper](#deploying-a-bookkeeper-cluster).
+
+To get started deploying a Pulsar cluster on bare metal, you'll need to download a binary tarball release in one of the following ways:
+
+* By clicking on the link directly below, which will automatically trigger a download:
+  * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>
+* From the Pulsar [downloads page](pulsar:download_page_url)
+* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on [GitHub](https://github.com)
+* Using [wget](https://www.gnu.org/software/wget):
+
+```bash
+$ wget pulsar:binary_release_url
+```
+
+Once you've downloaded the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+$ tar xvzf apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+The untarred directory contains the following subdirectories:
+
+Directory | Contains
+:---------|:--------
+`bin` | Pulsar's [command-line tools](reference-cli-tools.md), such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`data` | The data storage directory used by ZooKeeper and BookKeeper.
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
+`logs` | Logs created by the installation.
+
+## Installing Builtin Connectors (optional)
+
+> Since release `2.1.0-incubating`, Pulsar releases a separate binary distribution containing all the `builtin` connectors.
+> If you would like to enable those `builtin` connectors, follow the instructions below; otherwise you can
+> skip this section for now.
+
+To get started using builtin connectors, you'll need to download the connectors tarball release on every broker node in
+one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:connector_release_url" download>Pulsar IO Connectors {{pulsar:version}} release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:connector_release_url/{connector}-{{pulsar:version}}.nar
+  ```
+
+Once the nar file is downloaded, copy it to the `connectors` directory in the pulsar directory.
+For example, if the connector file `pulsar-io-aerospike-{{pulsar:version}}.nar` is downloaded:
+
+```bash
+$ mkdir connectors
+$ mv pulsar-io-aerospike-{{pulsar:version}}.nar connectors
+
+$ ls connectors
+pulsar-io-aerospike-{{pulsar:version}}.nar
+...
+```
+
+## Installing Tiered Storage Offloaders (optional)
+
+> Since release `2.2.0`, Pulsar releases a separate binary distribution containing the tiered storage offloaders.
+> If you would like to enable the tiered storage feature, follow the instructions below; otherwise you can
+> skip this section for now.
+
+To get started using tiered storage offloaders, you'll need to download the offloaders tarball release on every broker node in
+one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders {{pulsar:version}} release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:offloader_release_url
+  ```
+
+Once the tarball is downloaded, untar the offloaders package and copy the resulting `offloaders` directory into the
+pulsar directory:
+
+```bash
+$ tar xvfz apache-pulsar-offloaders-{{pulsar:version}}-bin.tar.gz
+
+// you will find a directory named `apache-pulsar-offloaders-{{pulsar:version}}` in the pulsar directory
+// then copy the offloaders
+
+$ mv apache-pulsar-offloaders-{{pulsar:version}}/offloaders offloaders
+
+$ ls offloaders
+tiered-storage-jcloud-{{pulsar:version}}.nar
+```
+
+For more details on how to configure the tiered storage feature, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
+
+
+## Deploying a ZooKeeper cluster
+
+> If you already have an existing ZooKeeper cluster and would like to use it, you can skip this section.
+
+[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar cluster you'll need to deploy ZooKeeper first (before all other components). We recommend deploying a 3-node ZooKeeper cluster. Pulsar does not make heavy use of ZooKeeper, so more lightweight machines or VMs should suffice for running ZooKeeper.
+
+To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory you created [above](#installing-the-pulsar-binary-package)). Here's an example:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+> If you have only one machine to deploy Pulsar, you just need to add one server entry in the configuration file.
+
+On each host, you need to specify the ID of the node in each node's `myid` file, which is in each server's `data/zookeeper` folder by default (this can be changed via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed info on `myid` and more.
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
+
+```bash
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+```
+
+On `zk2.us-west.example.com` the command would be `echo 2 > data/zookeeper/myid` and so on.
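
If your ZooKeeper hosts follow a predictable naming scheme, the per-host `myid` step can be scripted. The sketch below is hypothetical and assumes the `zkN.us-west.example.com` naming used above; on a real host you would derive `HOSTNAME_SHORT` from `$(hostname -s)`:

```shell
# Hypothetical sketch: derive this host's ZooKeeper ID from its short hostname.
HOSTNAME_SHORT="zk2"            # normally: $(hostname -s)
ZK_DATA_DIR="data/zookeeper"    # the default dataDir

mkdir -p "$ZK_DATA_DIR"
MYID="${HOSTNAME_SHORT#zk}"     # zk1 -> 1, zk2 -> 2, zk3 -> 3
echo "$MYID" > "$ZK_DATA_DIR/myid"
```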
+
+Once each server has been added to the `zookeeper.conf` configuration and has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start zookeeper
+```
+
+> If you plan to deploy ZooKeeper and a bookie on the same node, you
+> need to start ZooKeeper on a different stats port.
+
+Start ZooKeeper with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool like this:
+
+```bash
+$ PULSAR_EXTRA_OPTS="-Dstats_server_port=8001" bin/pulsar-daemon start zookeeper
+```
+
+## Initializing cluster metadata
+
+Once you've deployed ZooKeeper for your cluster, there is some metadata that needs to be written to ZooKeeper for each cluster in your instance. It only needs to be written **once**.
+
+You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your ZooKeeper cluster. Here's an example:
+
+```shell
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster pulsar-cluster-1 \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2181 \
+  --web-service-url http://pulsar.us-west.example.com:8080 \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+As you can see from the example above, the following needs to be specified:
+
+Flag | Description
+:----|:-----------
+`--cluster` | A name for the cluster
+`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (we don't recommend using a different port).
+`--web-service-url-tls` | If you're using [TLS](security-tls-transport.md), you'll also need to specify a TLS web service URL for the cluster. The default port is 8443 (we don't recommend using a different port).
+`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (we don't recommend using a different port).
+`--broker-service-url-tls` | If you're using [TLS](security-tls-transport.md), you'll also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. The default port is 6651 (we don't recommend using a different port).
+
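All four service URLs share one DNS name and differ only in scheme and port. This hypothetical sketch (the DNS name is an example) makes the pattern explicit, using the default ports recommended above:

```shell
# Hypothetical sketch: derive the four service URLs from one DNS name.
CLUSTER_DNS="pulsar.us-west.example.com"

WEB_URL="http://${CLUSTER_DNS}:8080"
WEB_URL_TLS="https://${CLUSTER_DNS}:8443"
BROKER_URL="pulsar://${CLUSTER_DNS}:6650"
BROKER_URL_TLS="pulsar+ssl://${CLUSTER_DNS}:6651"

echo "$WEB_URL"
echo "$BROKER_URL_TLS"
```
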
+## Deploying a BookKeeper cluster
+
+[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You will need to deploy a cluster of BookKeeper bookies to use Pulsar. We recommend running a **3-bookie BookKeeper cluster**.
+
+BookKeeper bookies can be configured using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. Here's an example:
+
+```properties
+zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+```
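
Since every bookie needs the identical connection string, it can help to generate it from a single host list. The following is a hypothetical sketch (the hostnames are examples):

```shell
# Hypothetical sketch: build the zkServers connection string from a host list.
ZK_HOSTS="zk1.us-west.example.com zk2.us-west.example.com zk3.us-west.example.com"
ZK_PORT=2181

ZK_SERVERS=""
for host in $ZK_HOSTS; do
  # Append a comma only between entries
  ZK_SERVERS="${ZK_SERVERS:+$ZK_SERVERS,}${host}:${ZK_PORT}"
done
echo "zkServers=${ZK_SERVERS}"
```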
+
+Once you've appropriately modified the `zkServers` parameter, you can provide any other configuration modifications you need. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper), although we would recommend consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide.
+
+> ##### NOTES
+>
+> Since the Pulsar 2.1.0 release, Pulsar provides [stateful functions](functions-state.md) for Pulsar Functions. If you would like to enable that feature,
+> you need to enable the table service on BookKeeper by adding the following setting to the `conf/bookkeeper.conf` file.
+>
+> ```conf
+> extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
+> ```
+
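If you script this change across bookies, it is worth making the edit idempotent so re-running it never duplicates the line. A hypothetical sketch, where a sample file stands in for `conf/bookkeeper.conf`:

```shell
# Hypothetical sketch: append the stream-storage setting only if it is not
# already present, so the edit is safe to re-run.
cat > bookkeeper.conf <<'EOF'
zkServers=zk1.us-west.example.com:2181
EOF

SETTING='extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent'
grep -q '^extraServerComponents=' bookkeeper.conf || echo "$SETTING" >> bookkeeper.conf
```
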
+Once you've applied the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
+
+To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start bookie
+```
+
+To start the bookie in the foreground:
+
+```bash
+$ bin/bookkeeper bookie
+```
+
+You can verify that a bookie is working properly by running the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#shell) on it:
+
+```bash
+$ bin/bookkeeper shell bookiesanity
+```
+
+This will create an ephemeral BookKeeper ledger on the local bookie, write a few entries, read them back, and finally delete the ledger.
+
+After you have started all the bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to
+verify that all the bookies in the cluster are up and running.
+
+```bash
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+```
+
+This command will create a `num-bookies` sized ledger on the cluster, write a few entries, and finally delete the ledger.
+
+
+## Deploying Pulsar brokers
+
+Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide Pulsar's administrative interface. We recommend running **3 brokers**, one for each machine that's already running a BookKeeper bookie.
+
+### Configuring Brokers
+
+The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you've deployed. Make sure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are set correctly. In this case, since we only have 1 cluster and no configuration store setup, `configurationStoreServers` points to the same ZooKeeper quorum as `zookeeperServers`.
+
+```properties
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+```
+
+You also need to specify the cluster name (matching the name that you provided when [initializing the cluster's metadata](#initializing-cluster-metadata)):
+
+```properties
+clusterName=pulsar-cluster-1
+```
+
+In addition, you need to match the broker and web service ports provided when initializing the cluster's metadata (especially when using a different port from default):
+
+```properties
+brokerServicePort=6650
+brokerServicePortTls=6651
+webServicePort=8080
+webServicePortTls=8443
+```
+
+> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`
+>
+> ```properties
+> # Number of bookies to use when creating a ledger
+> managedLedgerDefaultEnsembleSize=1
+>
+> # Number of copies to store for each message
+> managedLedgerDefaultWriteQuorum=1
+> 
+> # Number of guaranteed copies (acks to wait before write is complete)
+> managedLedgerDefaultAckQuorum=1
+> ```
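
The three replication settings above can be applied to an existing configuration with a short script. This is a hypothetical sketch in which a sample file stands in for `conf/broker.conf`; `sed -i.bak` works with both GNU and BSD sed:

```shell
# Hypothetical sketch: rewrite the three replication settings to 1 with sed.
cat > broker.conf <<'EOF'
managedLedgerDefaultEnsembleSize=2
managedLedgerDefaultWriteQuorum=2
managedLedgerDefaultAckQuorum=2
EOF

for key in managedLedgerDefaultEnsembleSize \
           managedLedgerDefaultWriteQuorum \
           managedLedgerDefaultAckQuorum; do
  sed -i.bak "s/^${key}=.*/${key}=1/" broker.conf
done
```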
+
+### Enabling Pulsar Functions (optional)
+
+If you want to enable [Pulsar Functions](functions-overview.md), follow the instructions below:
+
+1. Edit `conf/broker.conf` to enable the functions worker by setting `functionsWorkerEnabled` to `true`.
+
+    ```conf
+    functionsWorkerEnabled=true
+    ```
+
+2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provided when [initializing the cluster's metadata](#initializing-cluster-metadata). 
+
+    ```conf
+    pulsarFunctionsCluster: pulsar-cluster-1
+    ```
+
+To learn more about deploying the functions worker, see [Deploy and manage functions worker](functions-worker.md).
+
+### Starting Brokers
+
+You can then provide any other configuration changes that you'd like in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you've decided on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, brokers can be started either in the foreground or in the background, using nohup.
+
+You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:
+
+```bash
+$ bin/pulsar broker
+```
+
+You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start broker
+```
+
+Once you've successfully started up all the brokers you intend to use, your Pulsar cluster should be ready to go!
+
+## Connecting to the running cluster
+
+Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provides a simple way to make sure that your cluster is running properly.
+
+To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You'll need to change the values for `webServiceUrl` and `brokerServiceUrl`, replacing `localhost` (the default) with the DNS name that you've assigned to your broker/bookie hosts. Here's an example:
+
+```properties
+webServiceUrl=http://us-west.example.com:8080/
+brokerServiceUrl=pulsar://us-west.example.com:6650/
+```
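
If you prefer to script the edit, a single substitution is enough. This is a hypothetical sketch in which a sample file stands in for `conf/client.conf`:

```shell
# Hypothetical sketch: point a default client.conf at the cluster's DNS name.
cat > client.conf <<'EOF'
webServiceUrl=http://localhost:8080/
brokerServiceUrl=pulsar://localhost:6650/
EOF

sed -i.bak 's/localhost/us-west.example.com/g' client.conf
```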
+
+Once you've done that, you can publish a message to a Pulsar topic:
+
+```bash
+$ bin/pulsar-client produce \
+  persistent://public/default/test \
+  -n 1 \
+  -m "Hello Pulsar"
+```
+
+> You may need to use a different cluster name in the topic if you specified a cluster name different from `pulsar-cluster-1`.
+
+This will publish a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages, as shown below:
+
+```bash
+$ bin/pulsar-client consume \
+  persistent://public/default/test \
+  -n 100 \
+  -s "consumer-test" \
+  -t "Exclusive"
+```
+
+Once the message above has been successfully published to the topic, you should see it in the standard output:
+
+```bash
+----- got message -----
+Hello Pulsar
+```
+
+## Running Functions
+
+> If you have [enabled](#enabling-pulsar-functions-optional) Pulsar Functions, you can also try out Pulsar Functions now.
+
+Create an ExclamationFunction named `exclamation`:
+
+```bash
+bin/pulsar-admin functions create \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+```
+
+Check if the function is running as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.
+
+```bash
+bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
+```
+
+You will see output like the following:
+
+```shell
+hello world!
+```
diff --git a/site2/website/versioned_docs/version-2.3.2/functions-guarantees.md b/site2/website/versioned_docs/version-2.3.2/functions-guarantees.md
new file mode 100644
index 0000000..cde72c9
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/functions-guarantees.md
@@ -0,0 +1,42 @@
+---
+id: version-2.3.2-functions-guarantees
+title: Processing guarantees
+sidebar_label: Processing guarantees
+original_id: functions-guarantees
+---
+
+Pulsar Functions provides three different messaging semantics that you can apply to any function:
+
+Delivery semantics | Description
+:------------------|:-------
+**At-most-once** delivery | Each message sent to the function will be processed at most once; it may not be processed at all (hence the "at most")
+**At-least-once** delivery | Each message sent to the function will be processed at least once and could be processed more than once (hence the "at least")
+**Effectively-once** delivery | Each message sent to the function will have exactly one output associated with it
+
+## Applying processing guarantees to a function
+
+You can set the processing guarantees for a Pulsar Function when you create the function. This [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command, for example, would apply effectively-once guarantees to the function:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --processing-guarantees EFFECTIVELY_ONCE \
+  # Other function configs
+```
+
+The available options are:
+
+* `ATMOST_ONCE`
+* `ATLEAST_ONCE`
+* `EFFECTIVELY_ONCE`
+
+> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, the function will provide at-least-once guarantees.
+
+## Updating the processing guarantees of a function
+
+You can change the processing guarantees applied to a function after it has been created using the [`update`](reference-pulsar-admin.md#update-1) command. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions update \
+  --processing-guarantees ATMOST_ONCE \
+  # Other function configs
+```
diff --git a/site2/website/versioned_docs/version-2.3.2/functions-worker.md b/site2/website/versioned_docs/version-2.3.2/functions-worker.md
new file mode 100644
index 0000000..1ce7fa4
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/functions-worker.md
@@ -0,0 +1,241 @@
+---
+id: version-2.3.2-functions-worker
+title: Deploy and manage functions worker
+sidebar_label: Functions Worker
+original_id: functions-worker
+---
+
+Pulsar `functions-worker` is a logical component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either of them based on your requirements.
+- [run with brokers](#run-functions-worker-with-brokers)
+- [run separately](#run-functions-worker-separately) on dedicated machines
+
+> Note  
+> The `--- Service Urls---` lines in the following diagrams represent Pulsar service URLs that Pulsar client and admin use to connect to a Pulsar cluster.
+
+## Run Functions-worker with brokers
+
+The following diagram illustrates the deployment of functions-workers running along with brokers.
+
+![assets/functions-worker-corun.png](assets/functions-worker-corun.png)
+
+To enable functions-worker running as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file.
+
+```conf
+functionsWorkerEnabled=true
+```
+
+When you set `functionsWorkerEnabled` to `true`, it means that you start functions-worker as part of a broker. You need to configure the `conf/functions_worker.yml` file to customize your functions_worker.
+
+Before you run Functions-worker with brokers, you have to configure Functions-worker, and then start it with the brokers.
+
+### Configure Functions-Worker to run with brokers
+In this mode, since `functions-worker` is running as part of the broker, most of the settings are already inherited from your broker configuration (for example, configuration store settings, authentication settings, and so on).
+
+Pay attention to the following required settings when configuring functions-worker in this mode.
+
+- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to more than `2`.
+- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).
+
+If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings.
+
+- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name.
+- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name.
+- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters.
+
+### Start Functions-worker with broker
+
+Once you have configured the `functions_worker.yml` file, you can start or restart your broker. 
+
+Then you can use the following command to verify whether `functions-worker` is running:
+
+```bash
+curl <broker-ip>:8080/admin/v2/worker/cluster
+```
+
+After entering the command above, a list of active function workers in the cluster is returned. The output is similar to the following.
+
+```json
+[{"workerId":"<worker-id>","workerHostname":"<worker-hostname>","port":8080}]
+```
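
A health-check script can count the workers in that response. The following is a hypothetical sketch; a saved sample stands in for the live endpoint output, which you would normally obtain from `curl <broker-ip>:8080/admin/v2/worker/cluster`:

```shell
# Hypothetical sketch: count active workers in a saved copy of the response.
RESPONSE='[{"workerId":"c-pulsar-cluster-1-fw","workerHostname":"broker-1","port":8080}]'

# Each worker entry carries exactly one "workerId" field
WORKERS=$(echo "$RESPONSE" | grep -o '"workerId"' | wc -l | tr -d ' ')
echo "active workers: $WORKERS"
```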
+
+## Run Functions-worker separately
+
+This section illustrates how to run `functions-worker` as a separate process on separate machines.
+
+![assets/functions-worker-separated.png](assets/functions-worker-separated.png)
+
+> Note  
+> In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start `functions-worker` with brokers by mistake.
+
+### Configure Functions-worker to run separately
+
+To run function-worker separately, you have to configure the following parameters. 
+
+#### Worker parameters
+
+- `workerId`: A string, unique across the cluster, used to identify a worker machine.
+- `workerHostname`: The hostname of the worker machine.
+- `workerPort`: The port that the worker server listens on. Keep the default value if you don't need to customize it.
+- `workerPortTls`: The TLS port that the worker server listens on. Keep the default value if you don't need to customize it.
+
+#### Function package parameter
+
+- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`.
+
+#### Function metadata parameter
+
+- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster.
+- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster.
+- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).
+
+If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers.
+
+- `clientAuthenticationPlugin`
+- `clientAuthenticationParameters`
+
+#### Security settings
+
+If you want to enable security on functions workers, you *should*:
+- [Enable TLS transport encryption](#enable-tls-transport-encryption)
+- [Enable Authentication Provider](#enable-authentication-provider)
+- [Enable Authorization Provider](#enable-authorization-provider)
+
+**Enable TLS transport encryption**
+
+To enable TLS transport encryption, configure the following settings.
+
+```
+tlsEnabled: true
+tlsCertificateFilePath: /path/to/functions-worker.cert.pem
+tlsKeyFilePath:         /path/to/functions-worker.key-pk8.pem
+tlsTrustCertsFilePath:  /path/to/ca.cert.pem
+```
+
+For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport.md).
+
+**Enable Authentication Provider**
+
+To enable authentication on Functions Worker, configure the following settings.
+> Note  
+Substitute the *providers list* with the providers you want to enable.
+
+```
+authenticationEnabled: true
+authenticationProviders: [ provider1, provider2 ]
+```
+
+For the *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName`
+under `properties` if needed. 
+
+```
+properties:
+  saslJaasClientAllowedIds: .*pulsar.*
+  saslJaasBrokerSectionName: Broker
+```
+
+For the *Token Authentication* provider, add the necessary settings under `properties` if needed.
+See [Token Authentication](security-token-admin.md) for more details.
+```
+properties:
+  tokenSecretKey:       file://my/secret.key 
+  # If using public/private
+  # tokenPublicKey:     file:///path/to/public.key 
+```
+
+**Enable Authorization Provider**
+
+To enable authorization on Functions Worker, you need to configure `authorizationEnabled` and `configurationStoreServers`. The authorization provider connects to `configurationStoreServers` to receive namespace policies.
+
+```yaml
+authorizationEnabled: true
+configurationStoreServers: <configuration-store-servers>
+```
+
+You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example.
+
+```yaml
+superUserRoles:
+  - role1
+  - role2
+  - role3
+```
+
+#### BookKeeper Authentication
+
+If authentication is enabled on the BookKeeper cluster, you should configure the BookKeeper authentication settings as follows:
+
+- `bookkeeperClientAuthenticationPlugin`: the plugin name of BookKeeper client authentication.
+- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name of BookKeeper client authentication.
+- `bookkeeperClientAuthenticationParameters`: the plugin parameters of BookKeeper client authentication.
+
+### Start Functions-worker
+
+Once you have finished configuring the `functions_worker.yml` configuration file, you can use the following command to start a `functions-worker`:
+
+```bash
+bin/pulsar functions-worker
+```
+
+### Configure Proxies for Functions-workers
+
+When you run `functions-worker` in a separate cluster, the admin REST endpoints are split across two clusters: the `functions`, `function-worker`, `source` and `sink` endpoints are served
+by the `functions-worker` cluster, while all remaining endpoints are served by the broker cluster.
+Hence you need to point your `pulsar-admin` at the right service URL for each request.
+
+To avoid this inconvenience, you can start a proxy cluster that routes admin REST requests to the right cluster, giving you one central entry point for your admin service.
+
+If you already have a proxy cluster, continue reading. If you have not set up a proxy cluster before, you can follow the [instructions](http://pulsar.apache.org/docs/en/administration-proxy/) to
+start proxies.    
+
+![assets/functions-worker-separated.png](assets/functions-worker-separated-proxy.png)
+
+To enable routing functions related admin requests to `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings:
+
+```conf
+functionWorkerWebServiceURL=<pulsar-functions-worker-web-service-url>
+functionWorkerWebServiceURLTLS=<pulsar-functions-worker-web-service-url-tls>
+```
+
+## Compare the Run-with-Broker and Run-separately modes
+
+As described above, you can run Functions-worker with brokers, or run it separately. Running functions-workers along with brokers is more convenient, but running them in a separate cluster provides better resource isolation for functions running in `Process` or `Thread` mode.
+
+Refer to the following guidelines to decide which mode to use.
+
+Use the `Run-with-Broker` mode in the following cases:
+- Resource isolation is not required when running functions in `Process` or `Thread` mode.
+- You configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).
+
+Use the `Run-separately` mode in the following cases:
+- You don't have a Kubernetes cluster.
+- You want to run functions and brokers separately.
+
+## Troubleshooting
+
+**Error message: Namespace missing local cluster name in clusters list**
+
+```
+Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]
+```
+
+This error message appears when either of the following occurs:
+- A broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file.
+- You set up a geo-replicated Pulsar cluster with `functionsWorkerEnabled=true`; brokers in one cluster run well, while brokers in the other cluster do not.
+
+**Workaround**
+
+If any of these cases happens, follow the instructions below to fix the problem:
+
+1. Get the current clusters list of `public/functions` namespace.
+
+```bash
+bin/pulsar-admin namespaces get-clusters public/functions
+```
+
+2. Check if the cluster is in the clusters list. If the cluster is not in the list, add it to the list and update the clusters list.
+
+```bash
+bin/pulsar-admin namespaces set-clusters --cluster=<existing-clusters>,<new-cluster> public/functions
+```
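The list passed to `--cluster` in step 2 must contain the existing clusters plus the new one, not just the cluster being added. A minimal Python sketch of that merge (the `merge_clusters` helper is illustrative, not part of Pulsar):

```python
def merge_clusters(existing, new_cluster):
    """Return the full clusters list with new_cluster appended if missing."""
    clusters = list(existing)
    if new_cluster not in clusters:
        clusters.append(new_cluster)
    return clusters

# The --cluster flag takes the merged list as a comma-separated value.
flag_value = ",".join(merge_clusters(["standalone"], "xyz"))
print(flag_value)  # standalone,xyz
```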
+
+3. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file. 
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.3.2/getting-started-clients.md b/site2/website/versioned_docs/version-2.3.2/getting-started-clients.md
new file mode 100644
index 0000000..7a1e15c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/getting-started-clients.md
@@ -0,0 +1,57 @@
+---
+id: version-2.3.2-client-libraries
+title: Pulsar client libraries
+sidebar_label: Use Pulsar with client libraries
+original_id: client-libraries
+---
+
+Pulsar supports the following client libraries:
+
+- [Java client](#java-client)
+- [Go client](#go-client)
+- [Python client](#python-client)
+- [C++ client](#c-client)
+
+## Java client
+
+For instructions on how to use the Pulsar Java client to produce and consume messages, see [Pulsar Java client](client-libraries-java.md).
+
+Two independent sets of Javadoc API docs are available.
+
+Library | Purpose
+:-------|:-------
+[`org.apache.pulsar.client.api`](/api/client) | The [Pulsar Java client](client-libraries-java.md) is used to produce and consume messages on Pulsar topics.
+[`org.apache.pulsar.client.admin`](/api/admin) | The Java client for the [Pulsar admin interface](admin-api-overview.md).
+
+
+## Go client
+
+For a tutorial on using the Pulsar Go client, see [Pulsar Go client](client-libraries-go.md).
+
+
+## Python client
+
+For a tutorial on using the Pulsar Python client, see [Pulsar Python client](client-libraries-python.md).
+
+There are also [pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client [here](/api/python).
+
+## C++ client
+
+For a tutorial on using the Pulsar C++ client, see [Pulsar C++ client](client-libraries-cpp.md).
+
+There are also [Doxygen](http://www.stack.nl/~dimitri/doxygen/)-generated API docs for the C++ client [here](/api/cpp).
+
+## Feature Matrix
+The Pulsar client feature matrix for different languages is available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
+
+## Third-party Clients
+
+Besides the officially released clients, multiple projects are developing Pulsar clients in different languages.
+
+> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
+
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | |
+| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
diff --git a/site2/website/versioned_docs/version-2.3.2/getting-started-docker.md b/site2/website/versioned_docs/version-2.3.2/getting-started-docker.md
new file mode 100644
index 0000000..d6de5ad
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/getting-started-docker.md
@@ -0,0 +1,171 @@
+---
+id: version-2.3.2-standalone-docker
+title: Set up a standalone Pulsar in Docker
+sidebar_label: Run Pulsar in Docker
+original_id: standalone-docker
+---
+
+For local development and testing, you can run Pulsar in standalone
+mode on your own machine within a Docker container.
+
+If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
+and follow the instructions for your OS.
+
+## Start Pulsar in Docker
+
+* For MacOS and Linux:
+
+  ```shell
+  $ docker run -it \
+    -p 6650:6650 \
+    -p 8080:8080 \
+    -v $PWD/data:/pulsar/data \
+    apachepulsar/pulsar:{{pulsar:version}} \
+    bin/pulsar standalone
+  ```
+
+* For Windows (in PowerShell, which uses the backtick for line continuation):  
+  
+  ```shell
+  $ docker run -it `
+    -p 6650:6650 `
+    -p 8080:8080 `
+    -v "$PWD/data:/pulsar/data".ToLower() `
+    apachepulsar/pulsar:{{pulsar:version}} `
+    bin/pulsar standalone
+  ```
+
+A few things to note about this command:
+ * `$PWD/data`: The Docker host directory on Windows must be lowercase. `$PWD/data` expands to a path under your current directory, for example `E:/data`.
+ * `-v $PWD/data:/pulsar/data`: This makes the process inside the container store its
+   data and metadata in the filesystem outside the container, so that it does not start "fresh" every time the container is restarted.
+
+If you start Pulsar successfully, you will see `INFO`-level log messages like this:
+
+```
+2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
+2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
+...
+```
+
+> #### Tip
+> 
+> When you start a local standalone cluster, a `public/default`
+namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces.
+For more information, see [Topics](concepts-messaging.md#topics).
+
+## Use Pulsar in Docker
+
+Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) 
+and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can
+use one of these root URLs to interact with your cluster:
+
+* `pulsar://localhost:6650`
+* `http://localhost:8080`
+
+The following example guides you through getting started with Pulsar quickly by using the [Python](client-libraries-python.md)
+client API.
+
+Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):
+
+```shell
+$ pip install pulsar-client
+```
+
+### Consume a message
+
+Create a consumer and subscribe to the topic:
+
+```python
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+consumer = client.subscribe('my-topic',
+                            subscription_name='my-sub')
+
+try:
+    while True:
+        msg = consumer.receive()
+        print("Received message: '%s'" % msg.data())
+        consumer.acknowledge(msg)
+finally:
+    # The receive loop runs until interrupted; close the client on the way out.
+    client.close()
+```
+
+### Produce a message
+
+Now start a producer to send some test messages:
+
+```python
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+producer = client.create_producer('my-topic')
+
+for i in range(10):
+    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))
+
+client.close()
+```
+
+## Get the topic statistics
+
+In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
+For details on APIs, refer to [Admin API Overview](admin-api-overview.md).
+
+In the simplest example, you can use curl to probe the stats for a particular topic:
+
+```shell
+$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
+```
+
+The output is something like this:
+
+```json
+{
+  "averageMsgSize": 0.0,
+  "msgRateIn": 0.0,
+  "msgRateOut": 0.0,
+  "msgThroughputIn": 0.0,
+  "msgThroughputOut": 0.0,
+  "publishers": [
+    {
+      "address": "/172.17.0.1:35048",
+      "averageMsgSize": 0.0,
+      "clientVersion": "1.19.0-incubating",
+      "connectedSince": "2017-08-09 20:59:34.621+0000",
+      "msgRateIn": 0.0,
+      "msgThroughputIn": 0.0,
+      "producerId": 0,
+      "producerName": "standalone-0-1"
+    }
+  ],
+  "replication": {},
+  "storageSize": 16,
+  "subscriptions": {
+    "my-sub": {
+      "blockedSubscriptionOnUnackedMsgs": false,
+      "consumers": [
+        {
+          "address": "/172.17.0.1:35064",
+          "availablePermits": 996,
+          "blockedConsumerOnUnackedMsgs": false,
+          "clientVersion": "1.19.0-incubating",
+          "connectedSince": "2017-08-09 21:05:39.222+0000",
+          "consumerName": "166111",
+          "msgRateOut": 0.0,
+          "msgRateRedeliver": 0.0,
+          "msgThroughputOut": 0.0,
+          "unackedMessages": 0
+        }
+      ],
+      "msgBacklog": 0,
+      "msgRateExpired": 0.0,
+      "msgRateOut": 0.0,
+      "msgRateRedeliver": 0.0,
+      "msgThroughputOut": 0.0,
+      "type": "Exclusive",
+      "unackedMessages": 0
+    }
+  }
+}
+```
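If you are scripting against these stats, the payload above can be reduced to a compact summary; a hedged Python sketch (field names are taken from the sample output above, and the helper itself is not part of Pulsar):

```python
import json

def summarize_topic_stats(raw):
    """Extract a compact per-subscription summary from a topic stats payload."""
    stats = json.loads(raw)
    return {
        "producers": len(stats.get("publishers", [])),
        "storage_bytes": stats.get("storageSize", 0),
        "subscriptions": {
            name: {
                "consumers": len(sub.get("consumers", [])),
                "backlog": sub.get("msgBacklog", 0),
                "type": sub.get("type"),
            }
            for name, sub in stats.get("subscriptions", {}).items()
        },
    }

# A trimmed-down version of the stats JSON shown above.
sample = ('{"publishers": [{"producerId": 0}], "storageSize": 16, '
          '"subscriptions": {"my-sub": {"consumers": [{}], '
          '"msgBacklog": 0, "type": "Exclusive"}}}')
summary = summarize_topic_stats(sample)
print(summary["subscriptions"]["my-sub"]["backlog"])  # 0
```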
diff --git a/site2/website/versioned_docs/version-2.3.2/getting-started-standalone.md b/site2/website/versioned_docs/version-2.3.2/getting-started-standalone.md
new file mode 100644
index 0000000..2c77cf0
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/getting-started-standalone.md
@@ -0,0 +1,221 @@
+---
+id: version-2.3.2-standalone
+title: Set up a standalone Pulsar locally
+sidebar_label: Run Pulsar locally
+original_id: standalone
+---
+
+For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
+
+> #### Pulsar in production? 
+> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
+
+## Install Pulsar standalone
+
+### System requirements
+
+Pulsar is currently available for **MacOS** and **Linux**. To use Pulsar, you need to install [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html).
+
+### Install Pulsar using binary release
+
+To get started with Pulsar, download a binary tarball release in one of the following ways:
+
+* download from the Apache mirror (<a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>)
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)  
+  
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+  
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:binary_release_url
+  ```
+
+After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
+
+```bash
+$ tar xvfz apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+#### What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md).
+`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
+`examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) examples.
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
+`licenses` | License files, in `.txt` format, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
+
+These directories are created once you begin running Pulsar.
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory used by ZooKeeper and BookKeeper.
+`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
+`logs` | Logs created by the installation.
+
+#### Install other optional components
+
+> #### Tip
+> If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
+> 
+> * [Install builtin connectors (optional)](#install-builtin-connectors-optional)
+> * [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
+> 
+> Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
+
+##### Install builtin connectors (optional)
+
+Since the `2.1.0-incubating` release, Pulsar ships a separate binary distribution that contains all the `builtin` connectors.
+To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:connector_release_url" download>Pulsar IO Connectors {{pulsar:version}} release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
+
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:connector_release_url/{connector}-{{pulsar:version}}.nar
+  ```
+
+After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. 
+For example, if you download the `pulsar-io-aerospike-{{pulsar:version}}.nar` connector file, enter the following commands:
+
+```bash
+$ mkdir connectors
+$ mv pulsar-io-aerospike-{{pulsar:version}}.nar connectors
+
+$ ls connectors
+pulsar-io-aerospike-{{pulsar:version}}.nar
+...
+```
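After copying nar files around, a quick sanity check is to list what actually landed in the `connectors` directory; a small illustrative sketch (the helper is not part of Pulsar, and the scratch directory stands in for the pulsar directory):

```python
import tempfile
from pathlib import Path

def installed_connectors(connectors_dir):
    """Return the sorted .nar file names found in a connectors directory."""
    return sorted(p.name for p in Path(connectors_dir).glob("*.nar"))

# Exercise the check against a scratch directory.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "pulsar-io-aerospike-2.3.2.nar").touch()
    (Path(d) / "README.txt").touch()  # non-nar files are ignored
    print(installed_connectors(d))  # ['pulsar-io-aerospike-2.3.2.nar']
```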
+
+> #### Note
+>
+> * If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in the pulsar directory of every broker
+> (or of every function-worker, if you are running a separate worker cluster for Pulsar Functions).
+> 
+> * If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos.md)),
+> you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+##### Install tiered storage offloaders (optional)
+
+> #### Tip
+>
+> Since the `2.2.0` release, Pulsar ships a separate binary distribution containing the tiered storage offloaders.
+> To enable the tiered storage feature, follow the instructions below; otherwise skip this section.
+
+To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders {{pulsar:version}} release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
+
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:offloader_release_url
+  ```
+
+After you download the tarball, untar the offloaders package and copy the `offloaders` directory
+into the pulsar directory:
+
+```bash
+$ tar xvfz apache-pulsar-offloaders-{{pulsar:version}}-bin.tar.gz
+
+// you will find a directory named `apache-pulsar-offloaders-{{pulsar:version}}` in the pulsar directory
+// then copy the offloaders
+
+$ mv apache-pulsar-offloaders-{{pulsar:version}}/offloaders offloaders
+
+$ ls offloaders
+tiered-storage-jcloud-{{pulsar:version}}.nar
+```
+
+For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).
+
+> #### Note
+>
+> * If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory.
+> 
+> * If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos.md)),
+> you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
+
+## Start Pulsar standalone
+
+Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode.
+
+```bash
+$ bin/pulsar standalone
+```
+
+If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
+
+```bash
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Configuration Store cache started
+2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
+```
+
+> #### Tip
+> 
+> * The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.  
+You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
+> 
+> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
+
+## Use Pulsar standalone
+
+Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to produce messages to and consume messages from a Pulsar topic in a running cluster.
+
+### Consume a message
+
+The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
+
+```bash
+$ bin/pulsar-client consume my-topic -s "first-subscription"
+```
+
+If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
+
+```
+09:56:55.566 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
+```
+
+> #### Tip
+>  
+> Note that we do not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
+
+### Produce a message
+
+The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
+
+```bash
+$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
+```
+
+If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
+
+```
+13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
+```
+
+## Stop Pulsar standalone
+
+Press `Ctrl+C` to stop a local standalone Pulsar.
+
+> #### Tip
+> 
+> If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone`  command to stop the service.
+> 
+> For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
diff --git a/site2/website/versioned_docs/version-2.3.2/io-connectors.md b/site2/website/versioned_docs/version-2.3.2/io-connectors.md
new file mode 100644
index 0000000..eeeaa4b
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/io-connectors.md
@@ -0,0 +1,30 @@
+---
+id: version-2.3.2-io-connectors
+title: Builtin Connectors
+sidebar_label: Builtin Connectors
+original_id: io-connectors
+---
+
+Pulsar distribution includes a set of common connectors that have been packaged and tested with the rest of Apache Pulsar.
+These connectors import and export data from some of the most commonly used data systems. Using any of these connectors is
+as easy as writing a simple connector configuration and running the connector locally or submitting the connector to a
+Pulsar Functions cluster.
+
+- [Aerospike Sink Connector](io-aerospike.md)
+- [Cassandra Sink Connector](io-cassandra.md)
+- [Kafka Sink Connector](io-kafka.md#sink)
+- [Kafka Source Connector](io-kafka.md#source)
+- [Kinesis Sink Connector](io-kinesis.md#sink)
+- [RabbitMQ Source Connector](io-rabbitmq.md#source)
+- [RabbitMQ Sink Connector](io-rabbitmq.md#sink)
+- [Twitter Firehose Source Connector](io-twitter.md)
+- [CDC Source Connector based on Debezium](io-cdc.md)
+- [Netty Source Connector](io-netty.md#source)
+- [Hbase Sink Connector](io-hbase.md#sink)
+- [ElasticSearch Sink Connector](io-elasticsearch.md#sink)
+- [File Source Connector](io-file.md#source)
+- [Hdfs Sink Connector](io-hdfs.md#sink)
+- [MongoDB Sink Connector](io-mongo.md#sink)
+- [Redis Sink Connector](io-redis.md#sink)
+- [Solr Sink Connector](io-solr.md#sink)
+- [InfluxDB Sink Connector](io-influxdb.md#sink)
diff --git a/site2/website/versioned_docs/version-2.3.2/io-redis.md b/site2/website/versioned_docs/version-2.3.2/io-redis.md
new file mode 100644
index 0000000..071b568
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/io-redis.md
@@ -0,0 +1,28 @@
+---
+id: version-2.3.2-io-redis
+title: redis Connector
+sidebar_label: redis Connector
+original_id: io-redis
+---
+
+## Sink
+
+The Redis sink connector pulls messages from Pulsar topics and persists the messages
+to a Redis database.
+
+## Sink Configuration Options
+
+| Name | Default | Required | Description |
+|------|---------|----------|-------------|
+| `redisHosts` | `null` | `true` | A comma separated list of Redis hosts to connect to. |
+| `redisPassword` | `null` | `false` | The password used to connect to Redis. |
+| `redisDatabase` | `0` | `true` | The Redis database to connect to. |
+| `clientMode` | `Standalone` | `false` | The client mode to use when interacting with the Redis cluster. Possible values [Standalone, Cluster]. |
+| `autoReconnect` | `true` | `false` | Flag to determine if the Redis client should automatically reconnect. |
+| `requestQueue` | `2147483647` | `false` | The maximum number of queued requests to Redis. |
+| `tcpNoDelay` | `false` | `false` | Flag to determine whether TCP no-delay should be used. |
+| `keepAlive` | `false` | `false` | Flag to enable a keepalive to Redis. |
+| `connectTimeout` | `10000` | `false` | The amount of time in milliseconds to wait before timing out when connecting. |
+| `operationTimeout` | `10000` | `false` | The amount of time in milliseconds before an operation is marked as timed out. |
+| `batchTimeMs` | `1000` | `false` | The Redis operation time in milliseconds. |
+| `batchSize` | `1000` | `false` | The batch size of write to Redis database. |
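When assembling a sink configuration from this table, it helps to enforce the required keys and fill in the documented defaults up front; a hedged Python sketch (the helper and constant names are illustrative, not part of Pulsar, though the defaults mirror the table above):

```python
REDIS_SINK_DEFAULTS = {
    "redisPassword": None,
    "clientMode": "Standalone",
    "autoReconnect": True,
    "requestQueue": 2147483647,
    "tcpNoDelay": False,
    "keepAlive": False,
    "connectTimeout": 10000,
    "operationTimeout": 10000,
    "batchTimeMs": 1000,
    "batchSize": 1000,
}
REQUIRED = ("redisHosts", "redisDatabase")

def build_redis_sink_config(user_config):
    """Merge user settings over the documented defaults, enforcing required keys."""
    missing = [k for k in REQUIRED if k not in user_config]
    if missing:
        raise ValueError("missing required Redis sink settings: %s" % ", ".join(missing))
    return {**REDIS_SINK_DEFAULTS, **user_config}

config = build_redis_sink_config({"redisHosts": "localhost:6379", "redisDatabase": 0})
print(config["clientMode"])  # Standalone
```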
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.3.2/reference-cli-tools.md b/site2/website/versioned_docs/version-2.3.2/reference-cli-tools.md
new file mode 100644
index 0000000..f90b17a
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/reference-cli-tools.md
@@ -0,0 +1,698 @@
+---
+id: version-2.3.2-reference-cli-tools
+title: Pulsar command-line tools
+sidebar_label: Pulsar CLI tools
+original_id: reference-cli-tools
+---
+
+Pulsar offers several command-line tools that you can use for managing Pulsar installations, performance testing, using command-line producers and consumers, and more.
+
+All Pulsar command-line tools can be run from the `bin` directory of your [installed Pulsar package](getting-started-standalone.md). The following tools are currently documented:
+
+* [`pulsar`](#pulsar)
+* [`pulsar-client`](#pulsar-client)
+* [`pulsar-daemon`](#pulsar-daemon)
+* [`pulsar-perf`](#pulsar-perf)
+* [`bookkeeper`](#bookkeeper)
+
+> ### Getting help
+> You can get help for any CLI tool, command, or subcommand using the `--help` flag, or `-h` for short. Here's an example:
+> ```shell
+> $ bin/pulsar broker --help
+> ```
+
+## `pulsar`
+
+The `pulsar` tool is used to start Pulsar components, such as bookies and ZooKeeper, in the foreground.
+
+These processes can also be started in the background, using `nohup`, via the `pulsar-daemon` tool, which has the same command interface as `pulsar`.
+
+Usage:
+```bash
+$ pulsar command
+```
+Commands:
+* `bookie`
+* `broker`
+* `compact-topic`
+* `discovery`
+* `configuration-store`
+* `initialize-cluster-metadata`
+* `proxy`
+* `standalone`
+* `websocket`
+* `zookeeper`
+* `zookeeper-shell`
+
+Example:
+```bash
+$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker
+```
+
+The table below lists the environment variables that you can use to configure the `pulsar` tool.
+
+|Variable|Description|Default|
+|---|---|---|
+|`PULSAR_LOG_CONF`|Log4j configuration file|`conf/log4j2.yaml`|
+|`PULSAR_BROKER_CONF`|Configuration file for broker|`conf/broker.conf`|
+|`PULSAR_BOOKKEEPER_CONF`|Configuration file for bookie|`conf/bookkeeper.conf`|
+|`PULSAR_ZK_CONF`|Configuration file for zookeeper|`conf/zookeeper.conf`|
+|`PULSAR_CONFIGURATION_STORE_CONF`|Configuration file for the configuration store|`conf/global_zookeeper.conf`|
+|`PULSAR_DISCOVERY_CONF`|Configuration file for discovery service|`conf/discovery.conf`|
+|`PULSAR_WEBSOCKET_CONF`|Configuration file for websocket proxy|`conf/websocket.conf`|
+|`PULSAR_STANDALONE_CONF`|Configuration file for standalone|`conf/standalone.conf`|
+|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the jvm||
+|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath||
+|`PULSAR_PID_DIR`|Folder where the pulsar server PID file should be stored||
+|`PULSAR_STOP_TIMEOUT`|Wait time before forcefully killing the Bookie server instance if attempts to stop it are not successful||
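Each variable overrides the default shown in the table when set. The lookup the launcher performs can be sketched in Python as (a simplified model, not the actual launcher code):

```python
def resolve_conf(env, var, default):
    """Return the config file path from the environment, falling back to the packaged default."""
    return env.get(var, default)

# With no override, the broker reads the packaged conf/broker.conf.
print(resolve_conf({}, "PULSAR_BROKER_CONF", "conf/broker.conf"))  # conf/broker.conf
# An explicit override, as in the example invocation above, takes precedence.
print(resolve_conf({"PULSAR_BROKER_CONF": "/path/to/broker.conf"},
                   "PULSAR_BROKER_CONF", "conf/broker.conf"))  # /path/to/broker.conf
```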
+
+
+
+### `bookie`
+
+Starts up a bookie server
+
+Usage:
+```bash
+$ pulsar bookie options
+```
+
+Options
+
+|Option|Description|Default|
+|---|---|---|
+|`-readOnly`|Force start a read-only bookie server|false|
+|`-withAutoRecovery`|Start auto-recovery service bookie server|false|
+
+
+Example
+```bash
+$ PULSAR_BOOKKEEPER_CONF=/path/to/bookkeeper.conf pulsar bookie \
+  -readOnly \
+  -withAutoRecovery
+```
+
+### `broker`
+
+Starts up a Pulsar broker
+
+Usage
+```bash
+$ pulsar broker options
+```
+
+Options
+|Option|Description|Default|
+|---|---|---|
+|`-bc` , `--bookie-conf`|Configuration file for BookKeeper||
+|`-rb` , `--run-bookie`|Run a BookKeeper bookie on the same host as the Pulsar broker|false|
+|`-ra` , `--run-bookie-autorecovery`|Run a BookKeeper autorecovery daemon on the same host as the Pulsar broker|false|
+
+Example
+```bash
+$ PULSAR_BROKER_CONF=/path/to/broker.conf pulsar broker
+```
+
+### `compact-topic`
+
+Run compaction against a Pulsar topic (in a new process)
+
+Usage
+```bash
+$ pulsar compact-topic options
+```
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-t` , `--topic`|The Pulsar topic that you would like to compact||
+
+Example
+```bash
+$ pulsar compact-topic --topic topic-to-compact
+```
+
+### `discovery`
+
+Run a discovery server
+
+Usage
+```bash
+$ pulsar discovery
+```
+
+Example
+```bash
+$ PULSAR_DISCOVERY_CONF=/path/to/discovery.conf pulsar discovery
+```
+
+### `configuration-store`
+
+Starts up the Pulsar configuration store
+
+Usage
+```bash
+$ pulsar configuration-store
+```
+
+Example
+```bash
+$ PULSAR_CONFIGURATION_STORE_CONF=/path/to/configuration_store.conf pulsar configuration-store
+```
+
+### `initialize-cluster-metadata`
+
+One-time cluster metadata initialization
+
+Usage
+```bash
+$ pulsar initialize-cluster-metadata options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-ub` , `--broker-service-url`|The broker service URL for the new cluster||
+|`-tb` , `--broker-service-url-tls`|The broker service URL for the new cluster with TLS encryption||
+|`-c` , `--cluster`|Cluster name||
+|`--configuration-store`|The configuration store quorum connection string||
+|`-uw` , `--web-service-url`|The web service URL for the new cluster||
+|`-tw` , `--web-service-url-tls`|The web service URL for the new cluster with TLS encryption||
+|`-zk` , `--zookeeper`|The local ZooKeeper quorum connection string||
+
+
+### `proxy`
+
+Manages the Pulsar proxy
+
+Usage
+```bash
+$ pulsar proxy options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--configuration-store`|Configuration store connection string||
+|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string||
+
+Example
+```bash
+$ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \
+  --zookeeper-servers zk-0,zk-1,zk-2 \
+  --configuration-store zk-0,zk-1,zk-2
+```
+
+### `standalone`
+
+Run a broker service with local bookies and local ZooKeeper
+
+Usage
+```bash
+$ pulsar standalone options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-a` , `--advertised-address`|The standalone broker advertised address||
+|`--bookkeeper-dir`|Local bookies’ base data directory|data/standalone/bookkeeper|
+|`--bookkeeper-port`|Local bookies’ base port|3181|
+|`--no-broker`|Only start ZooKeeper and BookKeeper services, not the broker|false|
+|`--num-bookies`|The number of local bookies|1|
+|`--only-broker`|Only start the Pulsar broker service (not ZooKeeper or BookKeeper)||
+|`--wipe-data`|Clean up previous ZooKeeper/BookKeeper data||
+|`--zookeeper-dir`|Local ZooKeeper’s data directory|data/standalone/zookeeper|
+|`--zookeeper-port` |Local ZooKeeper’s port|2181|
+
+Example
+```bash
+$ PULSAR_STANDALONE_CONF=/path/to/standalone.conf pulsar standalone
+```
+
+### `websocket`
+
+Starts up the Pulsar websocket proxy
+
+Usage
+```bash
+$ pulsar websocket
+```
+
+Example
+```bash
+$ PULSAR_WEBSOCKET_CONF=/path/to/websocket.conf pulsar websocket
+```
+
+### `zookeeper`
+
+Starts up a ZooKeeper cluster
+
+Usage
+```bash
+$ pulsar zookeeper
+```
+
+Example
+```bash
+$ PULSAR_ZK_CONF=/path/to/zookeeper.conf pulsar zookeeper
+```
+
+
+### `zookeeper-shell`
+
+Connects to a running ZooKeeper cluster using the ZooKeeper shell
+
+Usage
+```bash
+$ pulsar zookeeper-shell options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--conf`|Configuration file for ZooKeeper||
+
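+Example (the configuration file path below is illustrative):
+```bash
+$ pulsar zookeeper-shell -c /path/to/zookeeper.conf
+```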
+
+
+## `pulsar-client`
+
+The `pulsar-client` tool enables you to produce messages to, and consume messages from, Pulsar topics directly from the command line.
+
+Usage
+```bash
+$ pulsar-client command
+```
+
+Commands
+* `produce`
+* `consume`
+
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"||
+|`--auth-plugin`|Authentication plugin class name||
+|`--url`|Broker URL to which to connect|pulsar://localhost:6650/|
+
+
+### `produce`
+Send a message or messages to a specific broker and topic
+
+Usage
+```bash
+$ pulsar-client produce topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-f`, `--files`|Comma-separated file paths to send; either -m or -f must be specified|[]|
+|`-m`, `--messages`|Comma-separated string of messages to send; either -m or -f must be specified|[]|
+|`-n`, `--num-produce`|The number of times to send the message(s); the count of messages/files * num-produce should be below 1000|1|
+|`-r`, `--rate`|Rate (in messages per second) at which to produce; a value 0 means to produce messages as fast as possible|0.0|
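+
+Example (the topic name and message below are illustrative):
+```bash
+$ pulsar-client produce my-topic \
+  --messages "hello-pulsar" \
+  --num-produce 1
+```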
+
+
+### `consume`
+Consume messages from a specific broker and topic
+
+Usage
+```bash
+$ pulsar-client consume topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--hex`|Display binary messages in hexadecimal format.|false|
+|`-n`, `--num-messages`|Number of messages to consume, 0 means to consume forever.|0|
+|`-r`, `--rate`|Rate (in messages per second) at which to consume; a value 0 means to consume messages as fast as possible|0.0|
+|`-s`, `--subscription-name`|Subscription name||
+|`-t`, `--subscription-type`|The type of the subscription. Possible values: Exclusive, Shared, Failover.|Exclusive|
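+
+Example (the topic and subscription names below are illustrative):
+```bash
+$ pulsar-client consume my-topic \
+  --subscription-name my-subscription \
+  --num-messages 10
+```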
+
+
+
+## `pulsar-daemon`
+A wrapper around the `pulsar` tool that’s used to start and stop processes, such as ZooKeeper, bookies, and Pulsar brokers, in the background using nohup.
+
+`pulsar-daemon` has a similar interface to the `pulsar` command but adds `start` and `stop` commands for various services. For a listing of those services, run `pulsar-daemon` without arguments to see the help output, or see the documentation for the [`pulsar`](#pulsar) command.
+
+Usage
+```bash
+$ pulsar-daemon command
+```
+
+Commands
+* `start`
+* `stop`
+
+
+### `start`
+Start a service in the background using nohup.
+
+Usage
+```bash
+$ pulsar-daemon start service
+```
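+
+Example (starting the standalone service in the background):
+```bash
+$ pulsar-daemon start standalone
+```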
+
+### `stop`
+Stop a service that’s already been started using start.
+
+Usage
+```bash
+$ pulsar-daemon stop service options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-force`|Stop the service forcefully if not stopped by normal shutdown.|false|
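+
+Example (stopping a previously started standalone service; `-force` is optional):
+```bash
+$ pulsar-daemon stop standalone -force
+```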
+
+
+
+## `pulsar-perf`
+A tool for performance testing a Pulsar broker.
+
+Usage
+```bash
+$ pulsar-perf command
+```
+
+Commands
+* `consume`
+* `produce`
+* `read`
+* `websocket-producer`
+* `managed-ledger`
+* `monitor-brokers`
+* `simulation-client`
+* `simulation-controller`
+* `help`
+
+Environment variables
+
+The table below lists the environment variables that you can use to configure the pulsar-perf tool.
+
+|Variable|Description|Default|
+|---|---|---|
+|`PULSAR_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml|
+|`PULSAR_CLIENT_CONF`|Configuration file for the client|conf/client.conf|
+|`PULSAR_EXTRA_OPTS`|Extra options to be passed to the JVM||
+|`PULSAR_EXTRA_CLASSPATH`|Extra paths for Pulsar's classpath||
+
+
+### `consume`
+Run a consumer
+
+Usage
+```
+$ pulsar-perf consume options
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--auth_params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"||
+|`--auth_plugin`|Authentication plugin class name||
+|`--acks-delay-millis`|Acknowledgments grouping delay in millis|100|
+|`-k`, `--encryption-key-name`|The private key name to decrypt payload||
+|`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload||
+|`-h`, `--help`|Help message|false|
+|`--conf-file`|Configuration file||
+|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
+|`-n`, `--num-consumers`|Number of consumers (per topic)|1|
+|`-t`, `--num-topic`|The number of topics|1|
+|`-r`, `--rate`|Simulate a slow message consumer (rate in msg/s)|0|
+|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000|
+|`-u`, `--service-url`|Pulsar service URL||
+|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled|0|
+|`-s`, `--subscriber-name`|Subscriber name prefix|sub|
+|`-st`, `--subscription-type`|The type of the subscription. Possible values are Exclusive, Shared, Failover.|Exclusive|
+|`--trust-cert-file`|Path for the trusted TLS certificate file||
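+
+Example (the topic name and subscriber prefix below are illustrative):
+```bash
+$ pulsar-perf consume my-topic \
+  --num-consumers 2 \
+  --subscriber-name test-sub
+```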
+
+
+### `produce`
+Run a producer
+
+Usage
+```bash
+$ pulsar-perf produce options
+```
+
+Options
+
+|Flag|Description|Default|
+|---|---|---|
+|`--auth_params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"||
+|`--auth_plugin`|Authentication plugin class name||
+|`-b`, `--batch-time-window`|Batch messages in a window of the specified number of milliseconds|1|
+|`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.||
+|`--conf-file`|Configuration file||
+|`-k`, `--encryption-key-name`|The public key name to encrypt payload||
+|`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload||
+|`-h`, `--help`|Help message|false|
+|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
+|`-o`, `--max-outstanding`|Max number of outstanding messages|1000|
+|`-p`, `--max-outstanding-across-partitions`|Max number of outstanding messages across partitions|50000|
+|`-m`, `--num-messages`|Number of messages to publish in total. If set to 0, it will keep publishing.|0|
+|`-n`, `--num-producers`|The number of producers (per topic)|1|
+|`-t`, `--num-topic`|The number of topics|1|
+|`-f`, `--payload-file`|Use payload from a file instead of an empty buffer||
+|`-r`, `--rate`|Publish rate msg/s across topics|100|
+|`-u`, `--service-url`|Pulsar service URL||
+|`-s`, `--size`|Message size (in bytes)|1024|
+|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0|
+|`-time`, `--test-duration`|Test duration in secs. If set to 0, it will keep publishing.|0|
+|`--trust-cert-file`|Path for the trusted TLS certificate file||
+|`--warmup-time`|Warm-up time in seconds|1|
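+
+Example (the topic name, rate, and message size below are illustrative):
+```bash
+$ pulsar-perf produce my-topic \
+  --rate 1000 \
+  --size 512
+```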
+
+
+### `read`
+Run a topic reader
+
+Usage
+```bash
+$ pulsar-perf read options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--auth_params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"||
+|`--auth_plugin`|Authentication plugin class name||
+|`--conf-file`|Configuration file||
+|`-h`, `--help`|Help message|false|
+|`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
+|`-t`, `--num-topic`|The number of topics|1|
+|`-r`, `--rate`|Simulate a slow message reader (rate in msg/s)|0|
+|`-q`, `--receiver-queue-size`|Size of the receiver queue|1000|
+|`-u`, `--service-url`|Pulsar service URL||
+|`-m`, `--start-message-id`|Start message id. This can be either 'earliest', 'latest' or a specific message id by using 'lid:eid'|earliest|
+|`-i`, `--stats-interval-seconds`|Statistics interval seconds. If 0, statistics will be disabled.|0|
+|`--trust-cert-file`|Path for the trusted TLS certificate file||
+|`--use-tls`|Use TLS encryption on the connection|false|
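+
+Example (the topic name below is illustrative):
+```bash
+$ pulsar-perf read my-topic \
+  --start-message-id earliest
+```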
+
+
+### `websocket-producer`
+Run a websocket producer
+
+Usage
+```bash
+$ pulsar-perf websocket-producer options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--auth_params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class, for example "key1:val1,key2:val2" or "{\"key1\":\"val1\",\"key2\":\"val2\"}"||
+|`--auth_plugin`|Authentication plugin class name||
+|`--conf-file`|Configuration file||
+|`-h`, `--help`|Help message|false|
+|`-m`, `--num-messages`|Number of messages to publish in total. If 0, it will keep publishing|0|
+|`-t`, `--num-topic`|The number of topics|1|
+|`-f`, `--payload-file`|Use payload from a file instead of empty buffer||
+|`-u`, `--proxy-url`|Pulsar Proxy URL, e.g., "ws://localhost:8080/"||
+|`-r`, `--rate`|Publish rate msg/s across topics|100|
+|`-s`, `--size`|Message size in bytes|1024|
+|`-time`, `--test-duration`|Test duration in secs. If 0, it will keep publishing|0|
+
+
+### `managed-ledger`
+Write directly on managed-ledgers
+
+Usage
+```bash
+$ pulsar-perf managed-ledger options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-a`, `--ack-quorum`|Ledger ack quorum|1|
+|`-dt`, `--digest-type`|BookKeeper digest type. Possible Values: [CRC32, MAC, CRC32C, DUMMY]|CRC32C|
+|`-e`, `--ensemble-size`|Ledger ensemble size|1|
+|`-h`, `--help`|Help message|false|
+|`-c`, `--max-connections`|Max number of TCP connections to a single bookie|1|
+|`-o`, `--max-outstanding`|Max number of outstanding requests|1000|
+|`-m`, `--num-messages`|Number of messages to publish in total. If 0, it will keep publishing|0|
+|`-t`, `--num-topic`|Number of managed ledgers|1|
+|`-r`, `--rate`|Write rate msg/s across managed ledgers|100|
+|`-s`, `--size`|Message size in bytes|1024|
+|`-time`, `--test-duration`|Test duration in secs. If 0, it will keep publishing|0|
+|`--threads`|Number of threads writing|1|
+|`-w`, `--write-quorum`|Ledger write quorum|1|
+|`-zk`, `--zookeeperServers`|ZooKeeper connection string||
+
+
+### `monitor-brokers`
+Continuously receive broker data and/or load reports
+
+Usage
+```bash
+$ pulsar-perf monitor-brokers options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--connect-string`|A connection string for one or more ZooKeeper servers||
+|`-h`, `--help`|Help message|false|
+
+
+### `simulation-client`
+Run a simulation server acting as a Pulsar client. Uses the client configuration specified in `conf/client.conf`.
+
+Usage
+```bash
+$ pulsar-perf simulation-client options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--port`|Port to listen on for controller|0|
+|`--service-url`|Pulsar Service URL||
+|`-h`, `--help`|Help message|false|
+
+### `simulation-controller`
+Run a simulation controller to give commands to servers
+
+Usage
+```bash
+$ pulsar-perf simulation-controller options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--client-port`|The port that the clients are listening on|0|
+|`--clients`|Comma-separated list of client hostnames||
+|`--cluster`|The cluster to test on||
+|`-h`, `--help`|Help message|false|
+
+
+### `help`
+Prints the help message for the `pulsar-perf` tool
+
+Usage
+```bash
+$ pulsar-perf help
+```
+
+
+## `bookkeeper`
+A tool for managing BookKeeper.
+
+Usage
+```bash
+$ bookkeeper command
+```
+
+Commands
+* `auto-recovery`
+* `bookie`
+* `localbookie`
+* `upgrade`
+* `shell`
+
+
+Environment variables
+
+The table below lists the environment variables that you can use to configure the bookkeeper tool.
+
+|Variable|Description|Default|
+|---|---|---|
+|`BOOKIE_LOG_CONF`|Log4j configuration file|conf/log4j2.yaml|
+|`BOOKIE_CONF`|BookKeeper configuration file|conf/bk_server.conf|
+|`BOOKIE_EXTRA_OPTS`|Extra options to be passed to the JVM||
+|`BOOKIE_EXTRA_CLASSPATH`|Extra paths for BookKeeper's classpath||
+|`ENTRY_FORMATTER_CLASS`|The Java class used to format entries||
+|`BOOKIE_PID_DIR`|Folder where the BookKeeper server PID file should be stored||
+|`BOOKIE_STOP_TIMEOUT`|Wait time before forcefully killing the bookie server instance if attempts to stop it are not successful||
+
+
+### `auto-recovery`
+Runs an auto-recovery service daemon
+
+Usage
+```bash
+$ bookkeeper auto-recovery options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--conf`|Configuration for the auto-recovery daemon||
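+
+Example (the configuration file path below is illustrative):
+```bash
+$ bookkeeper auto-recovery --conf /path/to/bk_server.conf
+```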
+
+
+### `bookie`
+Starts up a BookKeeper server (aka bookie)
+
+Usage
+```bash
+$ bookkeeper bookie options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--conf`|Configuration file for the bookie server||
+|`-readOnly`|Force start a read-only bookie server|false|
+|`-withAutoRecovery`|Start auto-recovery service bookie server|false|
+
+
+### `localbookie`
+Runs a test ensemble of N bookies locally
+
+Usage
+```bash
+$ bookkeeper localbookie N
+```
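+
+Example (running a local test ensemble of 3 bookies):
+```bash
+$ bookkeeper localbookie 3
+```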
+
+### `upgrade`
+Upgrade the bookie’s filesystem
+
+Usage
+```bash
+$ bookkeeper upgrade options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--conf`|Configuration file for the upgrade||
+|`-u`, `--upgrade`|Upgrade the bookie’s directories||
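+
+Example (the configuration file path below is illustrative):
+```bash
+$ bookkeeper upgrade --upgrade --conf /path/to/bk_server.conf
+```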
+
+
+### `shell`
+Run shell for admin commands. To see a full listing of those commands, run `bookkeeper shell` without an argument.
+
+Usage
+```bash
+$ bookkeeper shell
+```
+
+Example
+```bash
+$ bookkeeper shell bookiesanity
+```
+
diff --git a/site2/website/versioned_docs/version-2.3.2/reference-configuration.md b/site2/website/versioned_docs/version-2.3.2/reference-configuration.md
new file mode 100644
index 0000000..fe91495
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/reference-configuration.md
@@ -0,0 +1,490 @@
+---
+id: version-2.3.2-reference-configuration
+title: Pulsar configuration
+sidebar_label: Pulsar configuration
+original_id: reference-configuration
+---
+
+<style type="text/css">
+  table{
+    font-size: 80%;
+  }
+</style>
+
+
+Pulsar configuration is managed via a series of configuration files contained in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md).
+
+* [BookKeeper](#bookkeeper)
+* [Broker](#broker)
+* [Client](#client)
+* [Service discovery](#service-discovery)
+* [Log4j](#log4j)
+* [Log4j shell](#log4j-shell)
+* [Standalone](#standalone)
+* [WebSocket](#websocket)
+* [ZooKeeper](#zookeeper)
+
+## BookKeeper
+
+BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages.
+
+
+|Name|Description|Default|
+|---|---|---|
+|bookiePort|The port on which the bookie server listens.|3181|
+|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (i.e. the interface used to establish its identity). By default, loopback interfaces are not allowed as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cl [...]
+|listeningInterface|The network interface on which the bookie listens. If not set, the bookie will listen on all interfaces.|eth0|
+|journalDirectory|The directory where Bookkeeper outputs its write-ahead log (WAL)|data/bookkeeper/journal|
+|ledgerDirectories|The directory where Bookkeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by comma, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers|
+|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical|
+|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers|
+|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage|
+|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true|
+|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log file will be created when the old one reaches the file size limitation.|2147483648|
+|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2|
+|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled.|3600|
+|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the major compaction is disabled.|0.5|
+|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled.|86400|
+|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entrylog and the new offsets are cached in memory. Once the entrylog is flushed the index is updated with the new offsets. This parameter controls the number of entries added to the entrylog before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This pa [...]
+|compactionRate|The rate at which compaction will read entries, in adds per second.|1000|
+|isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
+|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000|
+|compactionRateByBytes|Set the rate at which compaction will read entries. The unit is bytes added per second.|1000000|
+|journalMaxSizeMB|Max file size of journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation.|2048|
+|journalMaxBackups|The max number of old journal files to keep. Keeping a number of old journal files would help data recovery in special cases.|5|
+|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal, in megabytes.|16|
+|journalWriteBufferSizeKB|The size of the write buffers used for the journal, in kilobytes.|64|
+|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true|
+|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
+|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
+|journalAlignmentSize|All the journal writes and commits should be aligned to given size|4096|
+|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping|524288|
+|journalFlushWhenQueueEmpty|If we should flush the journal when journal queue is empty|false|
+|numJournalCallbackThreads|The number of threads that should handle journal callbacks|8|
+|rereplicationEntryBatchSize|The number of max entries to keep in fragment for re-replication|5000|
+|gcWaitTime|The interval between garbage collection runs, in milliseconds. Since garbage collection runs in the background, overly frequent GC hurts performance. If there is enough disk capacity, a longer GC interval is preferable.|900000|
+|gcOverreplicatedLedgerWaitTime|The interval between garbage collection runs for overreplicated ledgers, in milliseconds. This should not run very frequently, since the metadata for all the ledgers on the bookie is read from ZooKeeper.|86400000|
+|flushInterval|How long the interval to flush ledger index pages to disk, in milliseconds. Flushing index files will introduce much random disk I/O. If separating journal dir and ledger dirs each on different devices, flushing would not affect performance. But if putting journal dir and ledger dirs on same device, performance degrade significantly on too frequent flushing. You can consider increment flush interval to get better performance, but you need to pay more time on bookie server  [...]
+|bookieDeathWatchInterval|Interval to watch whether bookie is dead or not, in milliseconds|1000|
+|zkServers|A list of one or more servers on which ZooKeeper is running, as comma-separated values, for example: `zkServers=zk1:2181,zk2:2181,zk3:2181`.|localhost:2181|
+|zkTimeout|ZooKeeper client session timeout, in milliseconds. The bookie server exits if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for longer than the session timeout. JVM garbage collection and disk I/O can cause SESSION_EXPIRED; increasing this value can help avoid the issue.|30000|
+|serverTcpNoDelay|This setting is used to enable/disable Nagle’s algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting `serverTcpNoDelay` to false to enable the Nagle algorithm can provide better performance.|true|
+|openFileLimit|The maximum number of ledger index files that can be open in the bookie server. If the number of open ledger index files reaches this limit, the bookie server starts swapping some ledgers from memory to disk. Too-frequent swapping affects performance; tune this number according to your requirements.|0|
+|pageSize|The size of an index page in the ledger cache, in bytes. A larger index page improves performance when writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have a similar number of entries. If you have a large number of ledgers and each ledger has fewer entries, a smaller index page improves memory usage.|8192|
+|pageLimit|How many index pages provided in ledger cache If number of index pages reaches this limitation, bookie server starts to swap some ledgers from memory to disk. You can increment this value when you found swap became more frequent. But make sure pageLimit*pageSize should not more than JVM max memory limitation, otherwise you would got OutOfMemoryException. In general, incrementing pageLimit, using smaller index page would gain bettern performance in lager number of ledgers with  [...]
+|readOnlyModeEnabled|If all configured ledger directories are full, serve only read requests from clients. If `readOnlyModeEnabled=true`, then when all ledger disks are full the bookie is converted to read-only mode and serves only read requests; otherwise the bookie is shut down.|true|
+|diskUsageThreshold|For each ledger dir, the maximum disk space which can be used. The default is 0.95, i.e. at most 95% of the disk can be used, after which nothing will be written to that partition. If all ledger dir partitions are full, the bookie turns read-only if `readOnlyModeEnabled=true` is set; otherwise it shuts down. Valid values are between 0 and 1 (exclusive).|0.95|
+|diskCheckInterval|Disk check interval, in milliseconds; how often to check ledger directory usage.|10000|
+|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800|
+|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check examines ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, then the ledger containing that entry is marked for recovery. Setting this to 0 disables the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400|
+|numAddWorkerThreads|The number of threads that handle write requests. If zero, writes are handled by Netty threads directly.|0|
+|numReadWorkerThreads|The number of threads that handle read requests. If zero, reads are handled by Netty threads directly.|8|
+|maxPendingReadRequestsPerThread|If read workers threads are enabled, limit the number of pending requests, to avoid the executor queue to grow indefinitely.|2500|
+|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096|
+|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536|
+|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ipaddress for the registration.|false|
+|statsProviderClass|Stats provider class|org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider|
+|prometheusStatsHttpPort|HTTP port for the Prometheus stats endpoint|8000|
+|dbStorage_writeCacheMaxSizeMb|Size of the write cache. Memory is allocated from JVM direct memory. The write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|25% of direct memory|
+|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated from JVM direct memory. This read cache is pre-filled doing read-ahead whenever a cache miss happens|25% of direct memory|
+|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after a read cache miss|1000|
+|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases|10% of direct memory|
+|dbStorage_rocksDB_writeBufferSizeMB||64|
+|dbStorage_rocksDB_sstSizeInMB||64|
+|dbStorage_rocksDB_blockSize||65536|
+|dbStorage_rocksDB_bloomFilterBitsPerKey||10|
+|dbStorage_rocksDB_numLevels||-1|
+|dbStorage_rocksDB_numFilesInLevel0||4|
+|dbStorage_rocksDB_maxSizeInLevel1MB||256|
+
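+As a sketch, a few of the settings above might be overridden in `conf/bookkeeper.conf` like so (the values are illustrative, not recommendations):
+```
+bookiePort=3181
+journalDirectory=data/bookkeeper/journal
+ledgerDirectories=/mnt/disk1/ledgers,/mnt/disk2/ledgers
+readOnlyModeEnabled=true
+diskUsageThreshold=0.95
+```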
+
+
+## Broker
+
+Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more.
+
+|Name|Description|Default|
+|---|---|---|
+|enablePersistentTopics|  Whether persistent topics are enabled on the broker |true|
+|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true|
+|functionsWorkerEnabled|  Whether the Pulsar Functions worker service is enabled in the broker  |false|
+|zookeeperServers|  Zookeeper quorum connection string  ||
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|brokerServicePort| Broker data port  |6650|
+|brokerServicePortTls|  Broker data port for TLS  |6651|
+|webServicePort|  Port to use to serve HTTP requests  |8080|
+|webServicePortTls| Port to use to serve HTTPS requests |8443|
+|webSocketServiceEnabled| Enable the WebSocket API service in broker  |false|
+|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0.  |0.0.0.0|
+|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.  ||
+|clusterName| Name of the cluster to which this broker belongs ||
+|brokerDeduplicationEnabled|  Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis.  |false|
+|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes.  |10000|
+|brokerDeduplicationEntriesInterval|  The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000|
+|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
+|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000|
+|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed  |60000|
+|backlogQuotaCheckEnabled|  Enable backlog quota check. Enforces action on topic when the quota is reached  |true|
+|backlogQuotaCheckIntervalInSeconds|  How often to check for topics that have reached the quota |60|
+|backlogQuotaDefaultLimitGB|  Default per-topic backlog quota limit |10|
+|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics  |true|
+|brokerDeleteInactiveTopicsFrequencySeconds|  How often to check for inactive topics  |60|
+|messageExpiryCheckIntervalInMinutes| How frequently to proactively check and purge expired messages  |5|
+|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to see if topics with compaction policies need to be compacted  |60|
+|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed.  |1000|
+|clientLibraryVersionCheckEnabled|  Enable check for minimum allowed client library version |false|
+|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information  |true|
+|statusFilePath|  Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
+|preferLaterVersions| If true, (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles)  |false|
+|tlsEnabled|  Enable TLS  |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+|tlsTrustCertsFilePath| Path for the trusted TLS certificate file ||
+|tlsAllowInsecureConnection|  Accept untrusted TLS certificate from client  |false|
+|tlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas, e.g. `TLSv1.2`, `TLSv1.1`, `TLSv1` ||
+|tlsCiphers|Specify the TLS ciphers the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas, e.g. `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`||
+|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`||
+|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`||
+|tokenAuthClaim| Specify which of the token's claims will be used as the authentication "principal" or "role". The default "sub" claim will be used if this is left blank ||
+|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages allowed for a consumer on a shared subscription. Once this limit is reached, the broker stops sending messages to the consumer until it starts acknowledging messages. A value of 0 disables the unacked-message limit check, allowing the consumer to receive messages without any restriction  |50000|
+|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. Once this limit is reached, the broker stops dispatching messages to all consumers of the subscription until consumers start acknowledging messages and the unacked count drops to limit/2. A value of 0 disables the unacked-message limit check, allowing the dispatcher to dispatch messages without any restriction  |200000|
+|subscriptionRedeliveryTrackerEnabled| Enable subscription message redelivery tracker |true|
+|maxConcurrentLookupRequest|  Max number of concurrent lookup requests the broker allows, to throttle heavy incoming lookup traffic |50000|
+|maxConcurrentTopicLoadRequest| Max number of concurrent topic loading requests the broker allows, to control the number of ZooKeeper operations |5000|
+|authenticationEnabled| Enable authentication |false|
+|authenticationProviders| Authentication provider name list, which is a comma-separated list of class names  ||
+|authorizationEnabled|  Enforce authorization |false|
+|superUserRoles|  Role names that are treated as “super-user”, meaning they will be able to do all admin operations and publish/consume from all topics ||
+|brokerClientAuthenticationPlugin|  Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters  ||
+|brokerClientAuthenticationParameters|||
+|athenzDomainNames| Supported Athenz provider domain names(comma separated) for authentication  ||
+|bookkeeperClientAuthenticationPlugin|  Authentication plugin to use when connecting to bookies ||
+|bookkeeperClientAuthenticationParametersName|  BookKeeper auth plugin implementation-specific parameter names and values  ||
+|bookkeeperClientAuthenticationParameters|||   
+|bookkeeperClientTimeoutInSeconds|  Timeout for BK add / read operations  |30|
+|bookkeeperClientSpeculativeReadTimeoutInMillis|  Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads |0|
+|bookkeeperClientHealthCheckEnabled|  Enable bookies health check. Bookies that have more than the configured number of failure within the interval will be quarantined for some time. During this period, new ledgers won’t be created on these bookies  |true|
+|bookkeeperClientHealthCheckIntervalSeconds||60|
+|bookkeeperClientHealthCheckErrorThresholdPerInterval||5|
+|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800|
+|bookkeeperClientRackawarePolicyEnabled|  Enable rack-aware bookie selection policy. BK will choose bookies from different racks when forming a new bookie ensemble  |true|
+|bookkeeperClientRegionawarePolicyEnabled|  Enable region-aware bookie selection policy. BK will choose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored  |false|
+|bookkeeperClientReorderReadSequenceEnabled|  Enable/disable reordering read sequence on reading entries.  |false|
+|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker  ||
+|bookkeeperClientSecondaryIsolationGroups| Enable bookie secondary-isolation groups if bookkeeperClientIsolationGroups doesn't have enough bookies available.  ||
+|bookkeeperClientMinAvailableBookiesInIsolationGroups| Minimum number of bookies that should be available in bookkeeperClientIsolationGroups; otherwise the broker will include bookkeeperClientSecondaryIsolationGroups bookies in the isolated list.  ||
+|bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger to be sticky to a single bookie. If this flag is enabled, the client will use one single bookie (by preference) to read  all entries for a ledger. | true |
+|managedLedgerDefaultEnsembleSize|  Number of bookies to use when creating a ledger |2|
+|managedLedgerDefaultWriteQuorum| Number of copies to store for each message  |2|
+|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait before write is complete) |2|
+|managedLedgerCacheSizeMB|  Amount of memory to use for caching data payload in managed ledger. This memory is allocated from JVM direct memory and it’s shared across all the topics running in the same broker. By default, uses 1/5th of available direct memory ||
+|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry payloads when inserting in cache| false|
+|managedLedgerCacheEvictionWatermark| Threshold to which the cache level is brought down when eviction is triggered  |0.9|
+|managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
+|managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed in the cache for longer than this time (in millis) will be evicted | 1000 |
+|managedLedgerCursorBackloggedThreshold| Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' and thus should be set as inactive. | 1000|
+|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumer acking the messages  |1.0|
+|managedLedgerMaxEntriesPerLedger|  Max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered on these conditions: <ul><li>Either the max rollover time has been reached</li><li>or max entries have been written to the ledger and at least min-time has passed</li></ul>|50000|
+|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollover for a topic  |10|
+|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240|
+|managedLedgerCursorMaxEntriesPerLedger|  Max number of entries to append to a cursor ledger  |50000|
+|managedLedgerCursorRolloverTimeInSeconds|  Max time before triggering a rollover on a cursor ledger  |14400|
+|managedLedgerMaxUnackedRangesToPersist|  Max number of “acknowledgment holes” that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing in “ranges” of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redel [...]
+|autoSkipNonRecoverableData|  Skip reading non-recoverable/unreadable data ledgers in the managed ledger's list. This helps when data ledgers get corrupted in BookKeeper and the managed cursor is stuck at that ledger. |false|
+|loadBalancerEnabled| Enable load balancer  |true|
+|loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection|
+|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger load report update  |10|
+|loadBalancerReportUpdateMaxIntervalMinutes|  Maximum interval between load report updates  |15|
+|loadBalancerHostUsageCheckIntervalMinutes| How often (in minutes) to check host usage  |1|
+|loadBalancerSheddingIntervalMinutes| Load shedding interval. Broker periodically checks whether some traffic should be offloaded from overloaded brokers to underloaded brokers  |30|
+|loadBalancerSheddingGracePeriodMinutes|  Prevent the same topics from being shed and moved to another broker more than once within this timeframe |30|
+|loadBalancerBrokerMaxTopics| Usage threshold to allocate max number of topics to broker  |50000|
+|loadBalancerBrokerUnderloadedThresholdPercentage|  Usage threshold to determine a broker as under-loaded |1|
+|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded  |85|
+|loadBalancerResourceQuotaUpdateIntervalMinutes|  Interval to update namespace bundle resource quota |15|
+|loadBalancerBrokerComfortLoadLevelPercentage|  Usage threshold to determine a broker is having just right level of load  |65|
+|loadBalancerAutoBundleSplitEnabled|  Enable/disable automatic namespace bundle splitting  |false|
+|loadBalancerNamespaceBundleMaxTopics|  Maximum number of topics in a bundle before a bundle split is triggered  |1000|
+|loadBalancerNamespaceBundleMaxSessions|  Maximum number of sessions (producers + consumers) in a bundle before a bundle split is triggered  |1000|
+|loadBalancerNamespaceBundleMaxMsgRate| Maximum message rate (in + out) in a bundle before a bundle split is triggered  |1000|
+|loadBalancerNamespaceBundleMaxBandwidthMbytes| Maximum bandwidth (in + out) in a bundle before a bundle split is triggered  |100|
+|loadBalancerNamespaceMaximumBundles| Maximum number of bundles in a namespace  |128|
+|replicationMetricsEnabled| Enable replication metrics  |true|
+|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links.  |16|
+|replicationProducerQueueSize|  Replicator producer queue size  |1000|
+|replicatorPrefix|  Replicator prefix used for replicator producer name and cursor name |pulsar.repl|
+|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false|
+|defaultRetentionTimeInMinutes| Default message retention time  ||
+|defaultRetentionSizeInMB|  Default retention size  |0|
+|keepAliveIntervalSeconds|  How often to check whether the connections are still alive  |30|
+|brokerServicePurgeInactiveFrequencyInSeconds|  How often broker checks for inactive topics to be deleted (topics with no subscriptions and no one connected) |60|
+|loadManagerClassName|  Name of load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl|
+|managedLedgerOffloadDriver|  Driver to use to offload old data to long term storage (Possible values: S3)  ||
+|managedLedgerOffloadMaxThreads|  Maximum number of thread pool threads for ledger offloading |2|
+|s3ManagedLedgerOffloadRegion|  For Amazon S3 ledger offload, AWS region  ||
+|s3ManagedLedgerOffloadBucket|  For Amazon S3 ledger offload, Bucket to place offloaded ledger into ||
+|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, Alternative endpoint to connect to (useful for testing) ||
+|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) |67108864|
+|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default)  |1048576|
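+
+The settings above map directly onto key/value pairs in `conf/broker.conf`. As a minimal sketch (the hostnames, paths, and values below are placeholders for illustration, not recommendations):
+
+```properties
+# Illustrative conf/broker.conf fragment -- values are placeholders
+clusterName=my-cluster
+zookeeperServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+brokerServicePort=6650
+webServicePort=8080
+# Reject messages already stored in the topic (overridable per namespace)
+brokerDeduplicationEnabled=true
+# Enable TLS and restrict negotiation to newer protocol versions
+tlsEnabled=true
+tlsCertificateFilePath=/path/to/broker.cert.pem
+tlsKeyFilePath=/path/to/broker.key-pk8.pem
+tlsProtocols=TLSv1.2,TLSv1.1
+```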
+
+
+
+
+## Client
+
+The [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool can be used to publish messages to Pulsar and consume messages from Pulsar topics. This tool can be used in lieu of a client library.
+
+|Name|Description|Default|
+|---|---|---|
+|webServiceUrl| The web URL for the cluster.  |http://localhost:8080/|
+|brokerServiceUrl|  The Pulsar protocol URL for the cluster.  |pulsar://localhost:6650/|
+|authPlugin|  The authentication plugin.  ||
+|authParams|  The authentication parameters for the cluster, as a comma-separated string. ||
+|useTls|  Whether or not TLS authentication will be enforced in the cluster.  |false|
+|tlsAllowInsecureConnection|||    
+|tlsTrustCertsFilePath|||
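+
+For reference, a minimal `conf/client.conf` sketch using the defaults above (the hostname is a placeholder):
+
+```properties
+# Illustrative conf/client.conf fragment -- hostname is a placeholder
+webServiceUrl=http://broker.example.com:8080/
+brokerServiceUrl=pulsar://broker.example.com:6650/
+# authPlugin and authParams stay empty when authentication is disabled
+useTls=false
+```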
+
+
+## Service discovery
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  Zookeeper quorum connection string (comma-separated)  ||
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout |30000|
+|servicePort| Port to use to serve binary-proto requests  |6650|
+|servicePortTls|  Port to use to serve binary-proto-tls requests  |6651|
+|webServicePort|  Port that the discovery service listens on |8080|
+|webServicePortTls| Port to use to serve HTTPS requests |8443|
+|bindOnLocalhost| Control whether to bind directly on localhost rather than on normal hostname  |false|
+|authenticationEnabled| Enable authentication |false|
+|authenticationProviders| Authentication provider name list, which is comma separated list of class names (comma-separated) ||
+|authorizationEnabled|  Enforce authorization |false|
+|superUserRoles|  Role names that are treated as “super-user”, meaning they will be able to do all admin operations and publish/consume from all topics (comma-separated) ||
+|tlsEnabled|  Enable TLS  |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+
+
+
+## Log4j
+
+
+|Name|Default|
+|---|---|
+|pulsar.root.logger|  WARN,CONSOLE|
+|pulsar.log.dir|  logs|
+|pulsar.log.file| pulsar.log|
+|log4j.rootLogger|  ${pulsar.root.logger}|
+|log4j.appender.CONSOLE|  org.apache.log4j.ConsoleAppender|
+|log4j.appender.CONSOLE.Threshold|  DEBUG|
+|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n|
+|log4j.appender.ROLLINGFILE|  org.apache.log4j.DailyRollingFileAppender|
+|log4j.appender.ROLLINGFILE.Threshold|  DEBUG|
+|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}|
+|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n|
+|log4j.appender.TRACEFILE|  org.apache.log4j.FileAppender|
+|log4j.appender.TRACEFILE.Threshold|  TRACE|
+|log4j.appender.TRACEFILE.File| pulsar-trace.log|
+|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n|
+
+
+## Log4j shell
+
+|Name|Default|
+|---|---|
+|bookkeeper.root.logger|  ERROR,CONSOLE|
+|log4j.rootLogger|  ${bookkeeper.root.logger}|
+|log4j.appender.CONSOLE|  org.apache.log4j.ConsoleAppender|
+|log4j.appender.CONSOLE.Threshold|  DEBUG|
+|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n|
+|log4j.logger.org.apache.zookeeper| ERROR|
+|log4j.logger.org.apache.bookkeeper|  ERROR|
+|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO|
+
+
+## Standalone
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  The quorum connection string for local ZooKeeper  ||
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|brokerServicePort| The port on which the standalone broker listens for connections |6650|
+|webServicePort|  The port used by the standalone broker for HTTP requests  |8080|
+|bindAddress| The hostname or IP address on which the standalone service binds  |0.0.0.0|
+|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.  ||
+|clusterName| The name of the cluster that this broker belongs to. |standalone|
+|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000|
+|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000|
+|backlogQuotaCheckEnabled|  Enable the backlog quota check, which enforces a specified action when the quota is reached.  |true|
+|backlogQuotaCheckIntervalInSeconds|  How often to check for topics that have reached the backlog quota.  |60|
+|backlogQuotaDefaultLimitGB|  The default per-topic backlog quota limit.  |10|
+|ttlDurationDefaultInSeconds|  Default TTL (in seconds) for namespaces if a TTL is not already configured in namespace policies.  |0|
+|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. |true|
+|brokerDeleteInactiveTopicsFrequencySeconds|  How often to check for inactive topics, in seconds. |60|
+|messageExpiryCheckIntervalInMinutes| How often to proactively check and purge expired messages. |5|
+|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed.  |1000|
+|clientLibraryVersionCheckEnabled|  Enable checks for minimum allowed client library version. |false|
+|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information  |true|
+|statusFilePath|  The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs|
+|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached or until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000|
+|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer.  |200000|
+|authenticationEnabled| Enable authentication for the broker. |false|
+|authenticationProviders| A comma-separated list of class names for authentication providers. |false|
+|authorizationEnabled|  Enforce authorization in brokers. |false|
+|superUserRoles|  Role names that are treated as “superusers.” Superusers are authorized to perform all admin tasks. ||  
+|brokerClientAuthenticationPlugin|  The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. ||
+|brokerClientAuthenticationParameters|  The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin.  ||
+|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list.  ||
+|bookkeeperClientAuthenticationPlugin|  Authentication plugin to be used when connecting to bookies (BookKeeper servers). ||
+|bookkeeperClientAuthenticationParametersName|  BookKeeper authentication plugin implementation parameters and values.  ||
+|bookkeeperClientAuthenticationParameters|  Parameters associated with the bookkeeperClientAuthenticationParametersName ||
+|bookkeeperClientTimeoutInSeconds|  Timeout for BookKeeper add and read operations. |30|
+|bookkeeperClientSpeculativeReadTimeoutInMillis|  Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads.  |0|
+|bookkeeperClientHealthCheckEnabled|  Enable bookie health checks.  |true|
+|bookkeeperClientHealthCheckIntervalSeconds|  The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks.  |60|
+|bookkeeperClientHealthCheckErrorThresholdPerInterval|  Error threshold for health checks.  |5|
+|bookkeeperClientHealthCheckQuarantineTimeInSeconds|  If bookies have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds, they are quarantined for this many seconds |1800|
+|bookkeeperClientRackawarePolicyEnabled|    |true|
+|bookkeeperClientRegionawarePolicyEnabled|    |false|
+|bookkeeperClientReorderReadSequenceEnabled|    |false|
+|bookkeeperClientIsolationGroups|||   
+|managedLedgerDefaultEnsembleSize|    |1|
+|managedLedgerDefaultWriteQuorum|   |1|
+|managedLedgerDefaultAckQuorum|   |1|
+|managedLedgerCacheSizeMB|    |1024|
+|managedLedgerCacheEvictionWatermark|   |0.9|
+|managedLedgerDefaultMarkDeleteRateLimit|   |0.1|
+|managedLedgerMaxEntriesPerLedger|    |50000|
+|managedLedgerMinLedgerRolloverTimeMinutes|   |10|
+|managedLedgerMaxLedgerRolloverTimeMinutes|   |240|
+|managedLedgerCursorMaxEntriesPerLedger|    |50000|
+|managedLedgerCursorRolloverTimeInSeconds|    |14400|
+|autoSkipNonRecoverableData|    |false|
+|loadBalancerEnabled|   |false|
+|loadBalancerPlacementStrategy|   |weightedRandomSelection|
+|loadBalancerReportUpdateThresholdPercentage|   |10|
+|loadBalancerReportUpdateMaxIntervalMinutes|    |15|
+|loadBalancerHostUsageCheckIntervalMinutes|  |1|
+|loadBalancerSheddingIntervalMinutes|   |30|
+|loadBalancerSheddingGracePeriodMinutes|    |30|
+|loadBalancerBrokerMaxTopics|   |50000|
+|loadBalancerBrokerUnderloadedThresholdPercentage|    |1|
+|loadBalancerBrokerOverloadedThresholdPercentage|   |85|
+|loadBalancerResourceQuotaUpdateIntervalMinutes|    |15|
+|loadBalancerBrokerComfortLoadLevelPercentage|    |65|
+|loadBalancerAutoBundleSplitEnabled|    |false|
+|loadBalancerNamespaceBundleMaxTopics|    |1000|
+|loadBalancerNamespaceBundleMaxSessions|    |1000|
+|loadBalancerNamespaceBundleMaxMsgRate|   |1000|
+|loadBalancerNamespaceBundleMaxBandwidthMbytes|   |100|
+|loadBalancerNamespaceMaximumBundles|   |128|
+|replicationMetricsEnabled|   |true|
+|replicationConnectionsPerBroker|   |16|
+|replicationProducerQueueSize|    |1000|
+|defaultRetentionTimeInMinutes|   |0|
+|defaultRetentionSizeInMB|    |0|
+|keepAliveIntervalSeconds|    |30|
+|brokerServicePurgeInactiveFrequencyInSeconds|    |60|
+
+
+
+
+
+## WebSocket
+
+|Name|Description|Default|
+|---|---|---|
+|configurationStoreServers    |||
+|zooKeeperSessionTimeoutMillis|   |30000|
+|serviceUrl|||
+|serviceUrlTls|||
+|brokerServiceUrl|||
+|brokerServiceUrlTls|||
+|webServicePort||8080|
+|webServicePortTls||8443|
+|bindAddress||0.0.0.0|
+|clusterName |||
+|authenticationEnabled||false|
+|authenticationProviders|||   
+|authorizationEnabled||false|
+|superUserRoles |||
+|brokerClientAuthenticationPlugin|||
+|brokerClientAuthenticationParameters|||
+|tlsEnabled||false|
+|tlsAllowInsecureConnection||false|
+|tlsCertificateFilePath|||
+|tlsKeyFilePath |||
+|tlsTrustCertsFilePath|||
+
+
+## Pulsar proxy
+
+The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file.
+
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  The ZooKeeper quorum connection string (as a comma-separated list)  ||
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
+|servicePort| The port to use to serve binary Protobuf requests |6650|
+|servicePortTls|  The port to use to serve binary Protobuf TLS requests  |6651|
+|statusFilePath|  Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks ||
+|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy  |false|
+|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
+|authorizationEnabled|  Whether authorization is enforced by the Pulsar proxy |false|
+|authorizationProvider| Authorization provider as a fully qualified class name  |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
+|brokerClientAuthenticationPlugin|  The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientAuthenticationParameters|  The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientTrustCertsFilePath|  The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
+|superUserRoles|  Role names that are treated as “super-users,” meaning that they will be able to perform all admin operations ||
+|forwardAuthorizationCredentials| Whether client authorization credentials are forwarded to the broker for re-authorization. Authentication must be enabled via authenticationEnabled=true for this to take effect.  |false|
+|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy will reject requests beyond that. |10000|
+|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy will error out requests beyond that. |50000|
+|tlsEnabledInProxy| Whether TLS is enabled for the proxy  |false|
+|tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar brokers |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file ||
+|tlsHostnameVerificationEnabled|  Whether the hostname is validated when the proxy creates a TLS connection with brokers  |false|
+|tlsRequireTrustedClientCertOnConnect|  Whether client certificates are required for TLS. Connections are rejected if the client certificate isn’t trusted. |false|
+|tlsProtocols|Specify the TLS protocols the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas, e.g. `TLSv1.2`, `TLSv1.1`, `TLSv1` ||
+|tlsCiphers|Specify the TLS ciphers the broker will use to negotiate during the TLS handshake. Multiple values can be specified, separated by commas, e.g. `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`||
+|tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`||
+|tokenPublicKey| Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`||
+|tokenAuthClaim| Specify the token claim that will be used as the authentication "principal" or "role". The "subject" field will be used if this is left blank ||
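+
+As an illustrative sketch, a `conf/proxy.conf` fragment combining the settings above might look like this (hostnames are placeholders):
+
+```properties
+# Illustrative conf/proxy.conf fragment -- values are placeholders
+zookeeperServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+configurationStoreServers=zk1.example.com:2181,zk2.example.com:2181
+servicePort=6650
+webServicePort=8080
+# Forward client credentials so brokers can re-authorize each request
+authenticationEnabled=true
+forwardAuthorizationCredentials=true
+```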
+
+## ZooKeeper
+
+ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available:
+
+
+|Name|Description|Default|
+|---|---|---|
+|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
+|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
+|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
+|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
+|clientPort|  The port on which the ZooKeeper server will listen for connections. |2181|
+|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
+|autopurge.purgeInterval| The time interval, in hours, at which the ZooKeeper database purge task is triggered. Setting to a non-zero number enables auto purge; setting to 0 disables it. Read the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) before enabling auto purge. |1|
+|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
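+
+Put together, the parameters above form a minimal `conf/zookeeper.conf` (the data directory is a placeholder):
+
+```properties
+# Illustrative conf/zookeeper.conf fragment
+tickTime=2000
+initLimit=10
+syncLimit=5
+dataDir=data/zookeeper
+clientPort=2181
+# Retain the 3 most recent snapshots; purge the rest every hour
+autopurge.snapRetainCount=3
+autopurge.purgeInterval=1
+maxClientCnxns=60
+```
+
+Note that the tick-based limits are multiples of `tickTime`: with `tickTime=2000`, `initLimit=10` gives followers up to 20 seconds (10 ticks × 2000 ms) to connect and sync.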
+
+
+
+
+In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding
+a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node. Here's an example for a three-node ZooKeeper cluster:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more comprehensive introduction to ZooKeeper configuration.
diff --git a/site2/website/versioned_docs/version-2.3.2/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.3.2/reference-pulsar-admin.md
new file mode 100644
index 0000000..50ac36c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/reference-pulsar-admin.md
@@ -0,0 +1,2552 @@
+---
+id: version-2.3.2-pulsar-admin
+title: Pulsar admin CLI
+sidebar_label: Pulsar Admin CLI
+original_id: pulsar-admin
+---
+
+The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more.
+
+Usage
+```bash
+$ pulsar-admin command
+```
+
+Commands
+* `broker-stats`
+* `brokers`
+* `clusters`
+* `functions`
+* `namespaces`
+* `ns-isolation-policy`
+* `sink`
+* `source`
+* `topics`
+* `tenants`
+* `resource-quotas`
+* `schemas`
+
+## `broker-stats`
+
+Operations to collect broker statistics
+
+```bash
+$ pulsar-admin broker-stats subcommand
+```
+
+Subcommands
+* `allocator-stats`
+* `topics(destinations)`
+* `mbeans`
+* `monitoring-metrics`
+* `load-report`
+
+
+### `allocator-stats`
+
+Dump allocator stats
+
+Usage
+```bash
+$ pulsar-admin broker-stats allocator-stats allocator-name
+```
+
+### `topics(destinations)`
+
+Dump topic stats
+
+Usage
+```bash
+$ pulsar-admin broker-stats topics options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--indent`|Indent JSON output|false|
+
+### `mbeans`
+
+Dump Mbean stats
+
+Usage
+```bash
+$ pulsar-admin broker-stats mbeans options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--indent`|Indent JSON output|false|
+
+
+### `monitoring-metrics`
+
+Dump metrics for monitoring
+
+Usage
+```bash
+$ pulsar-admin broker-stats monitoring-metrics options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--indent`|Indent JSON output|false|
+
+
+### `load-report`
+
+Dump broker load-report
+
+Usage
+```bash
+$ pulsar-admin broker-stats load-report
+```
+
+
+## `brokers`
+
+Operations about brokers
+
+```bash
+$ pulsar-admin brokers subcommand
+```
+
+Subcommands
+* `list`
+* `namespaces`
+* `update-dynamic-config`
+* `list-dynamic-config`
+* `get-all-dynamic-config`
+* `get-internal-config`
+* `get-runtime-config`
+* `healthcheck`
+
+### `list`
+List active brokers of the cluster
+
+Usage
+```bash
+$ pulsar-admin brokers list cluster-name
+```
+
+### `namespaces`
+List namespaces owned by the broker
+
+Usage
+```bash
+$ pulsar-admin brokers namespaces cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--url`|The URL for the broker||
+
+
+### `update-dynamic-config`
+Update a broker's dynamic service configuration
+
+Usage
+```bash
+$ pulsar-admin brokers update-dynamic-config options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--config`|Service configuration parameter name||
+|`--value`|The value for the configuration parameter specified using the `--config` flag||
+
+
+### `list-dynamic-config`
+Get the list of dynamically updatable configuration names
+
+Usage
+```bash
+$ pulsar-admin brokers list-dynamic-config
+```
+
+### `get-all-dynamic-config`
+Get all overridden dynamic-configuration values
+
+Usage
+```bash
+$ pulsar-admin brokers get-all-dynamic-config
+```
+
+### `get-internal-config`
+Get internal configuration information
+
+Usage
+```bash
+$ pulsar-admin brokers get-internal-config
+```
+
+### `get-runtime-config`
+Get runtime configuration values
+
+Usage
+```bash
+$ pulsar-admin brokers get-runtime-config
+```
+
+### `healthcheck`
+Run a health check against the broker
+
+Usage
+```bash
+$ pulsar-admin brokers healthcheck
+```
+
+
+## `clusters`
+Operations about clusters
+
+Usage
+```bash
+$ pulsar-admin clusters subcommand
+```
+
+Subcommands
+* `get`
+* `create`
+* `update`
+* `delete`
+* `list`
+* `update-peer-clusters`
+* `get-peer-clusters`
+* `get-failure-domain`
+* `create-failure-domain`
+* `update-failure-domain`
+* `delete-failure-domain`
+* `list-failure-domains`
+
+
+### `get`
+Get the configuration data for the specified cluster
+
+Usage
+```bash
+$ pulsar-admin clusters get cluster-name
+```
+
+### `create`
+Provision a new cluster. This operation requires Pulsar super-user privileges.
+
+Usage
+```bash
+$ pulsar-admin clusters create cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--broker-url`|The URL for the broker service||
+|`--broker-url-secure`|The broker service URL for a secure connection||
+|`--url`|The service URL for the cluster||
+|`--url-secure`|The service URL for a secure connection||
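+
+Example (the cluster name and URLs below are illustrative placeholders; substitute your own endpoints):
+```bash
+$ pulsar-admin clusters create us-west \
+--url http://pulsar.us-west.example.com:8080 \
+--broker-url pulsar://pulsar.us-west.example.com:6650
+```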
+
+
+### `update`
+Update the configuration for a cluster
+
+Usage
+```bash
+$ pulsar-admin clusters update cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--broker-url`|The URL for the broker service||
+|`--broker-url-secure`|The broker service URL for a secure connection||
+|`--url`|The service URL for the cluster||
+|`--url-secure`|The service URL for a secure connection||
+
+
+### `delete`
+Delete an existing cluster
+
+Usage
+```bash
+$ pulsar-admin clusters delete cluster-name
+```
+
+### `list`
+List the existing clusters
+
+Usage
+```bash
+$ pulsar-admin clusters list
+```
+
+### `update-peer-clusters`
+Update peer cluster names
+
+Usage
+```bash
+$ pulsar-admin clusters update-peer-clusters cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--peer-clusters`|Comma separated peer cluster names (Pass empty string "" to delete list)||
+
+### `get-peer-clusters`
+Get list of peer clusters
+
+Usage
+```bash
+$ pulsar-admin clusters get-peer-clusters
+```
+
+### `get-failure-domain`
+Get the broker configuration of a failure domain
+
+Usage
+```bash
+$ pulsar-admin clusters get-failure-domain cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
+
+### `create-failure-domain`
+Create a new failure domain for a cluster (updates it if already created)
+
+Usage
+```bash
+$ pulsar-admin clusters create-failure-domain cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--broker-list`|Comma separated broker list||
+|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
+
+### `update-failure-domain`
+Update a failure domain for a cluster (creates a new one if it does not exist)
+
+Usage
+```bash
+$ pulsar-admin clusters update-failure-domain cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--broker-list`|Comma separated broker list||
+|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
+
+### `delete-failure-domain`
+Delete an existing failure domain
+
+Usage
+```bash
+$ pulsar-admin clusters delete-failure-domain cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--domain-name`|The failure domain name, which is a logical domain under a Pulsar cluster||
+
+### `list-failure-domains`
+List the existing failure domains for a cluster
+
+Usage
+```bash
+$ pulsar-admin clusters list-failure-domains cluster-name
+```
+
+
+## `functions`
+
+A command-line interface for Pulsar Functions
+
+Usage
+```bash
+$ pulsar-admin functions subcommand
+```
+
+Subcommands
+* `localrun`
+* `create`
+* `delete`
+* `update`
+* `get`
+* `restart`
+* `stop`
+* `start`
+* `status`
+* `stats`
+* `list`
+* `querystate`
+* `trigger`
+
+
+### `localrun`
+Run the Pulsar Function locally (rather than deploying it to the Pulsar cluster)
+
+
+Usage
+```bash
+$ pulsar-admin functions localrun options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/Docker runtimes)||
+|`--disk`|The disk space in bytes that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
+|`--subs-name`|The Pulsar source subscription name to use if you want a specific subscription name for the input-topic consumer||
+|`--broker-service-url`|The URL of the Pulsar broker||
+|`--classname`|The function's class name||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
+|`--client-auth-params`|Client authentication parameters||
+|`--client-auth-plugin`|The client authentication plugin the function process uses to connect to the broker||
+|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
+|`--hostname-verification-enabled`|Enable hostname verification|false|
+|`--instance-id-offset`|Start the instanceIds from this offset|0|
+|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
+|`--log-topic`|The topic to which the function's logs are produced||
+|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--output`|The function's output topic (If none is specified, no output is written)||
+|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
+|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
+|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python)||
+|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
+|`--sliding-interval-count`|The number of messages after which the window slides||
+|`--sliding-interval-duration-ms`|The time duration after which the window slides||
+|`--state-storage-service-url`|The URL for the state storage service (by default Apache BookKeeper)||
+|`--tenant`|The function’s tenant||
+|`--topics-pattern`|The topic pattern used to consume from the list of topics under a namespace that match the pattern. `--inputs` and `--topics-pattern` are mutually exclusive. Add a SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only)||
+|`--user-config`|User-defined config key/values||
+|`--window-length-count`|The number of messages per window||
+|`--window-length-duration-ms`|The time duration of the window in milliseconds||
+|`--dead-letter-topic`|The topic where all messages which could not be processed successfully are sent||
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--max-message-retries`|The number of times to retry processing a message before giving up||
+|`--retain-ordering`|Function consumes and processes messages in order||
+|`--timeout-ms`|The message timeout in milliseconds||
+|`--tls-allow-insecure`|Allow insecure tls connection|false|
+|`--tls-trust-cert-path`|The tls trust cert file path||
+|`--use-tls`|Use tls connection|false|
+
+
+### `create`
+Create a Pulsar Function in cluster mode (i.e. deploy it on a Pulsar cluster)
+
+Usage
+```
+$ pulsar-admin functions create options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/Docker runtimes)||
+|`--disk`|The disk space in bytes that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
+|`--subs-name`|The Pulsar source subscription name to use if you want a specific subscription name for the input-topic consumer||
+|`--classname`|The function's class name||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
+|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
+|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
+|`--log-topic`|The topic to which the function's logs are produced||
+|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The function's name||
+|`--namespace`|The function’s namespace||
+|`--output`|The function's output topic (If none is specified, no output is written)||
+|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
+|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
+|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python)||
+|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
+|`--sliding-interval-count`|The number of messages after which the window slides||
+|`--sliding-interval-duration-ms`|The time duration after which the window slides||
+|`--tenant`|The function’s tenant||
+|`--topics-pattern`|The topic pattern used to consume from the list of topics under a namespace that match the pattern. `--inputs` and `--topics-pattern` are mutually exclusive. Add a SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only)||
+|`--user-config`|User-defined config key/values||
+|`--window-length-count`|The number of messages per window||
+|`--window-length-duration-ms`|The time duration of the window in milliseconds||
+|`--dead-letter-topic`|The topic where all messages that could not be processed successfully are sent||
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--max-message-retries`|The number of times to retry processing a message before giving up||
+|`--retain-ordering`|Function consumes and processes messages in order||
+|`--timeout-ms`|The message timeout in milliseconds||
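+
+Example (the tenant, namespace, topics, jar path, and class name below are hypothetical; adjust them for your own function):
+```bash
+$ pulsar-admin functions create \
+--tenant public \
+--namespace default \
+--name my-function \
+--jar /path/to/my-function.jar \
+--classname com.example.MyFunction \
+--inputs persistent://public/default/input-topic \
+--output persistent://public/default/output-topic
+```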
+
+
+### `delete`
+Delete a Pulsar Function that's running on a Pulsar cluster
+
+Usage
+```bash
+$ pulsar-admin functions delete options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `update`
+Update a Pulsar Function that's been deployed to a Pulsar cluster
+
+Usage
+```bash
+$ pulsar-admin functions update options
+```
+
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--cpu`|The CPU in cores that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--ram`|The RAM in bytes that needs to be allocated per function instance (applicable only to the process/Docker runtimes)||
+|`--disk`|The disk space in bytes that needs to be allocated per function instance (applicable only to the Docker runtime)||
+|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
+|`--subs-name`|The Pulsar source subscription name to use if you want a specific subscription name for the input-topic consumer||
+|`--classname`|The function's class name||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema class names (as a JSON string)||
+|`--function-config-file`|The path to a YAML config file specifying the function's configuration||
+|`--inputs`|The function's input topic or topics (multiple topics can be specified as a comma-separated list)||
+|`--log-topic`|The topic to which the function's logs are produced||
+|`--jar`|Path to the jar file for the function (if the function is written in Java). It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The function's name||
+|`--namespace`|The function’s namespace||
+|`--output`|The function's output topic (If none is specified, no output is written)||
+|`--output-serde-classname`|The SerDe class to be used for messages output by the function||
+|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the function. Possible Values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]|ATLEAST_ONCE|
+|`--py`|Path to the main Python file/Python Wheel file for the function (if the function is written in Python)||
+|`--schema-type`|The builtin schema type or custom schema class name to be used for messages output by the function||
+|`--sliding-interval-count`|The number of messages after which the window slides||
+|`--sliding-interval-duration-ms`|The time duration after which the window slides||
+|`--tenant`|The function’s tenant||
+|`--topics-pattern`|The topic pattern used to consume from the list of topics under a namespace that match the pattern. `--inputs` and `--topics-pattern` are mutually exclusive. Add a SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only)||
+|`--user-config`|User-defined config key/values||
+|`--window-length-count`|The number of messages per window||
+|`--window-length-duration-ms`|The time duration of the window in milliseconds||
+|`--dead-letter-topic`|The topic where all messages that could not be processed successfully are sent||
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--max-message-retries`|The number of times to retry processing a message before giving up||
+|`--retain-ordering`|Function consumes and processes messages in order||
+|`--timeout-ms`|The message timeout in milliseconds||
+
+
+### `get`
+Fetch information about a Pulsar Function
+
+Usage
+```bash
+$ pulsar-admin functions get options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `restart`
+Restart a function instance
+
+Usage
+```bash
+$ pulsar-admin functions restart options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--instance-id`|The function instanceId (restart all instances if instance-id is not provided)||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `stop`
+Stop a function instance
+
+Usage
+```bash
+$ pulsar-admin functions stop options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--instance-id`|The function instanceId (stop all instances if instance-id is not provided)||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `start`
+Start a stopped function instance
+
+Usage
+```bash
+$ pulsar-admin functions start options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--instance-id`|The function instanceId (start all instances if instance-id is not provided)||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `status`
+Check the current status of a Pulsar Function
+
+Usage
+```bash
+$ pulsar-admin functions status options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--instance-id`|The function instanceId (gets the status of all instances if instance-id is not provided)||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `stats`
+Get the current stats of a Pulsar Function
+
+Usage
+```bash
+$ pulsar-admin functions stats options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--instance-id`|The function instanceId (gets the stats of all instances if instance-id is not provided)||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+### `list`
+List all of the Pulsar Functions running under a specific tenant and namespace
+
+Usage
+```bash
+$ pulsar-admin functions list options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+
+
+### `querystate`
+Fetch the current state associated with a Pulsar Function running in cluster mode
+
+Usage
+```bash
+$ pulsar-admin functions querystate options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`-k`, `--key`|The key for the state you want to fetch||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+|`-w`, `--watch`|Watch for changes in the value associated with a key for a Pulsar Function|false|
+
+
+### `trigger`
+Triggers the specified Pulsar Function with a supplied value
+
+Usage
+```bash
+$ pulsar-admin functions trigger options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--fqfn`|The Fully Qualified Function Name (FQFN) for the function||
+|`--name`|The function's name||
+|`--namespace`|The function's namespace||
+|`--tenant`|The function's tenant||
+|`--topic`|The specific topic name that the function consumes from that you want to inject the data to||
+|`--trigger-file`|The path to the file that contains the data with which you'd like to trigger the function||
+|`--trigger-value`|The value with which you want to trigger the function||
+
+
+## `namespaces`
+
+Operations for managing namespaces
+
+
+```bash
+$ pulsar-admin namespaces subcommand
+```
+
+Subcommands
+* `list`
+* `topics`
+* `policies`
+* `create`
+* `delete`
+* `set-deduplication`
+* `permissions`
+* `grant-permission`
+* `revoke-permission`
+* `grant-subscription-permission`
+* `revoke-subscription-permission`
+* `set-clusters`
+* `get-clusters`
+* `get-backlog-quotas`
+* `set-backlog-quota`
+* `remove-backlog-quota`
+* `get-persistence`
+* `set-persistence`
+* `get-message-ttl`
+* `set-message-ttl`
+* `get-anti-affinity-group`
+* `set-anti-affinity-group`
+* `get-anti-affinity-namespaces`
+* `delete-anti-affinity-group`
+* `get-retention`
+* `set-retention`
+* `unload`
+* `split-bundle`
+* `set-dispatch-rate`
+* `get-dispatch-rate`
+* `set-subscribe-rate`
+* `get-subscribe-rate`
+* `set-subscription-dispatch-rate`
+* `get-subscription-dispatch-rate`
+* `clear-backlog`
+* `unsubscribe`
+* `set-encryption-required`
+* `set-subscription-auth-mode`
+* `get-max-producers-per-topic`
+* `set-max-producers-per-topic`
+* `get-max-consumers-per-topic`
+* `set-max-consumers-per-topic`
+* `get-max-consumers-per-subscription`
+* `set-max-consumers-per-subscription`
+* `get-compaction-threshold`
+* `set-compaction-threshold`
+* `get-offload-threshold`
+* `set-offload-threshold`
+* `get-offload-deletion-lag`
+* `set-offload-deletion-lag`
+* `clear-offload-deletion-lag`
+* `get-schema-autoupdate-strategy`
+* `set-schema-autoupdate-strategy`
+
+
+### `list`
+Get the namespaces for a tenant
+
+Usage
+```bash
+$ pulsar-admin namespaces list tenant-name
+```
+
+### `topics`
+Get the list of topics for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces topics tenant/namespace
+```
+
+### `policies`
+Get the configuration policies of a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces policies tenant/namespace
+```
+
+### `create`
+Create a new namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces create tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-b`, `--bundles`|The number of bundles to activate|0|
+|`-c`, `--clusters`|List of clusters this namespace will be assigned||
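+
+Example (the tenant/namespace name and the `us-west` cluster are illustrative; the cluster must already exist):
+```bash
+$ pulsar-admin namespaces create my-tenant/my-ns \
+--bundles 16 \
+--clusters us-west
+```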
+
+
+### `delete`
+Delete a namespace. The namespace must be empty
+
+Usage
+```bash
+$ pulsar-admin namespaces delete tenant/namespace
+```
+
+### `set-deduplication`
+Enable or disable message deduplication on a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-deduplication tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--enable`, `-e`|Enable message deduplication on the specified namespace|false|
+|`--disable`, `-d`|Disable message deduplication on the specified namespace|false|
+
+
+### `permissions`
+Get the permissions on a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces permissions tenant/namespace
+```
+
+### `grant-permission`
+Grant permissions on a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces grant-permission tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--actions`|Actions to be granted (`produce` or `consume`)||
+|`--role`|The client role to which to grant the permissions||
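+
+Example (the namespace and role names are illustrative placeholders):
+```bash
+$ pulsar-admin namespaces grant-permission my-tenant/my-ns \
+--actions produce,consume \
+--role my-client-role
+```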
+
+
+### `revoke-permission`
+Revoke permissions on a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces revoke-permission tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--role`|The client role to which to revoke the permissions||
+
+### `grant-subscription-permission`
+Grant permissions to access the subscription admin API
+
+Usage
+```bash
+$ pulsar-admin namespaces grant-subscription-permission tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--roles`|The client roles to which to grant the permissions (comma separated roles)||
+|`--subscription`|The subscription name for which permission will be granted to roles||
+
+### `revoke-subscription-permission`
+Revoke permissions to access the subscription admin API
+
+Usage
+```bash
+$ pulsar-admin namespaces revoke-subscription-permission tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--role`|The client role to which to revoke the permissions||
+|`--subscription`|The subscription name for which permission will be revoked from roles||
+
+### `set-clusters`
+Set replication clusters for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-clusters tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--clusters`|Replication clusters ID list (comma-separated values)||
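+
+Example (assuming clusters named `us-west` and `us-east` already exist; the names are illustrative):
+```bash
+$ pulsar-admin namespaces set-clusters my-tenant/my-ns \
+--clusters us-west,us-east
+```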
+
+
+### `get-clusters`
+Get replication clusters for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-clusters tenant/namespace
+```
+
+### `get-backlog-quotas`
+Get the backlog quota policies for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-backlog-quotas tenant/namespace
+```
+
+### `set-backlog-quota`
+Set a backlog quota policy for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-backlog-quota tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
+|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
+
+Example
+```bash
+$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
+--limit 2G \
+--policy producer_request_hold
+```
+
+### `remove-backlog-quota`
+Remove a backlog quota policy from a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces remove-backlog-quota tenant/namespace
+```
+
+### `get-persistence`
+Get the persistence policies for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-persistence tenant/namespace
+```
+
+### `set-persistence`
+Set the persistence policies for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-persistence tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-a`, `--bookkeeper-ack-quorom`|The number of acks (guaranteed copies) to wait for each entry|0|
+|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0|
+|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0|
+|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)||
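+
+Example (a common 3-bookie ensemble with a write quorum of 3 and an ack quorum of 2; the values and namespace name are illustrative):
+```bash
+$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
+--bookkeeper-ensemble 3 \
+--bookkeeper-write-quorum 3 \
+--bookkeeper-ack-quorom 2 \
+--ml-mark-delete-max-rate 0
+```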
+
+
+### `get-message-ttl`
+Get the message TTL for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-message-ttl tenant/namespace
+```
+
+### `set-message-ttl`
+Set the message TTL for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-message-ttl tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-ttl`, `--messageTTL`|Message TTL in seconds|0|
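+
+Example (expires unacknowledged messages after one hour; the namespace name is illustrative):
+```bash
+$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
+--messageTTL 3600
+```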
+
+### `get-anti-affinity-group`
+Get Anti-affinity group name for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-anti-affinity-group tenant/namespace
+```
+
+### `set-anti-affinity-group`
+Set Anti-affinity group name for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-anti-affinity-group tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-g`, `--group`|Anti-affinity group name||
+
+### `get-anti-affinity-namespaces`
+Get Anti-affinity namespaces grouped with the given anti-affinity group name
+
+Usage
+```bash
+$ pulsar-admin namespaces get-anti-affinity-namespaces options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--cluster`|Cluster name||
+|`-g`, `--group`|Anti-affinity group name||
+|`-p`, `--tenant`|The tenant is only used for authorization. The client must be an admin of one of the tenants to access this API||
+
+### `delete-anti-affinity-group`
+Remove Anti-affinity group name for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces delete-anti-affinity-group tenant/namespace
+```
+
+### `get-retention`
+Get the retention policy for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-retention tenant/namespace
+```
+
+### `set-retention`
+Set the retention policy for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-retention tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-s`, `--size`|The retention size limits (for example 10M, 16G or 3T). 0 means no retention and -1 means infinite size retention||
+|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention||
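+
+Example (retains up to 10 GB of backlog for 3 days; the values and namespace name are illustrative):
+```bash
+$ pulsar-admin namespaces set-retention my-tenant/my-ns \
+--size 10G \
+--time 3d
+```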
+
+
+### `unload`
+Unload a namespace or namespace bundle from the current serving broker.
+
+Usage
+```bash
+$ pulsar-admin namespaces unload tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
+
+### `split-bundle`
+Split a namespace-bundle from the current serving broker
+
+Usage
+```bash
+$ pulsar-admin namespaces split-bundle tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
+|`-u`, `--unload`|Unload newly split bundles after splitting old bundle|false|
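+
+Example (splits the full-range bundle and unloads the resulting bundles; the namespace name is illustrative):
+```bash
+$ pulsar-admin namespaces split-bundle my-tenant/my-ns \
+--bundle 0x00000000_0xffffffff \
+--unload
+```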
+
+### `set-dispatch-rate`
+Set message-dispatch-rate for all topics of the namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-dispatch-rate tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1, disabled, if not specified)|-1|
+|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not specified)|1|
+|`-md`, `--msg-dispatch-rate`|The message dispatch rate (defaults to -1, disabled, if not specified)|-1|
+
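The two flags combine: the rate is enforced per `--dispatch-rate-period` window, so the effective ceiling in messages per second is the rate divided by the period. A sketch with illustrative numbers:

```shell
# Illustrative numbers: -md 1000 with -dt 2 allows 1000 messages
# every 2-second window, i.e. an effective ceiling of 500 msg/s.
md=1000
dt=2
echo $(( md / dt ))   # prints 500
```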
+### `get-dispatch-rate`
+Get the configured message-dispatch-rate for all topics of the namespace (disabled if the value is < 0)
+
+Usage
+```bash
+$ pulsar-admin namespaces get-dispatch-rate tenant/namespace
+```
+
+### `set-subscribe-rate`
+Set subscribe-rate per consumer for all topics of the namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-subscribe-rate tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-sr`, `--subscribe-rate`|The subscribe rate (defaults to -1, disabled, if not specified)|-1|
+|`-st`, `--subscribe-rate-period`|The subscribe rate period, in seconds (defaults to 30 seconds if not specified)|30|
+
+### `get-subscribe-rate`
+Get configured subscribe-rate per consumer for all topics of the namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-subscribe-rate tenant/namespace
+```
+
+### `set-subscription-dispatch-rate`
+Set the subscription message-dispatch-rate for all subscriptions of the namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-subscription-dispatch-rate tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-bd`, `--byte-dispatch-rate`|The byte dispatch rate (defaults to -1, disabled, if not specified)|-1|
+|`-dt`, `--dispatch-rate-period`|The dispatch rate period, in seconds (defaults to 1 second if not specified)|1|
+|`-md`, `--sub-msg-dispatch-rate`|The message dispatch rate (defaults to -1, disabled, if not specified)|-1|
+
+### `get-subscription-dispatch-rate`
+Get the configured subscription message-dispatch-rate for all topics of the namespace (disabled if the value is < 0)
+
+Usage
+```bash
+$ pulsar-admin namespaces get-subscription-dispatch-rate tenant/namespace
+```
+
+### `clear-backlog`
+Clear the backlog for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces clear-backlog tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
+|`-force`, `--force`|Whether to force a clear backlog without prompt|false|
+|`-s`, `--sub`|The subscription name||
+
+
+### `unsubscribe`
+Unsubscribe the given subscription from all destinations in a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces unsubscribe tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|{start-boundary}_{end-boundary} (e.g. 0x00000000_0xffffffff)||
+|`-s`, `--sub`|The subscription name||
+
+### `set-encryption-required`
+Enable or disable message encryption required for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-encryption-required tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-d`, `--disable`|Disable message encryption required|false|
+|`-e`, `--enable`|Enable message encryption required|false|
+
+### `set-subscription-auth-mode`
+Set subscription auth mode on a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-subscription-auth-mode tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-m`, `--subscription-auth-mode`|Subscription authorization mode for Pulsar policies. Valid options are: [None, Prefix]||
+
+### `get-max-producers-per-topic`
+Get maxProducersPerTopic for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-max-producers-per-topic tenant/namespace
+```
+
+### `set-max-producers-per-topic`
+Set maxProducersPerTopic for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-max-producers-per-topic tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-p`, `--max-producers-per-topic`|maxProducersPerTopic for a namespace|0|
+
+### `get-max-consumers-per-topic`
+Get maxConsumersPerTopic for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-max-consumers-per-topic tenant/namespace
+```
+
+### `set-max-consumers-per-topic`
+Set maxConsumersPerTopic for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-max-consumers-per-topic tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--max-consumers-per-topic`|maxConsumersPerTopic for a namespace|0|
+
+### `get-max-consumers-per-subscription`
+Get maxConsumersPerSubscription for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-max-consumers-per-subscription tenant/namespace
+```
+
+### `set-max-consumers-per-subscription`
+Set maxConsumersPerSubscription for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-max-consumers-per-subscription tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--max-consumers-per-subscription`|maxConsumersPerSubscription for a namespace|0|
+
+
+### `get-compaction-threshold`
+Get compactionThreshold for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-compaction-threshold tenant/namespace
+```
+
+### `set-compaction-threshold`
+Set compactionThreshold for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (e.g. 10M, 16G, 3T). 0 disables automatic compaction|0|
+
+
+### `get-offload-threshold`
+Get offloadThreshold for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-offload-threshold tenant/namespace
+```
+
+### `set-offload-threshold`
+Set offloadThreshold for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-offload-threshold tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-s`, `--size`|Maximum number of bytes stored in the Pulsar cluster for a topic before data starts being automatically offloaded to long-term storage (e.g. 10M, 16G, 3T, 100). Negative values disable automatic offload; 0 triggers offloading as soon as possible.|-1|
+
+### `get-offload-deletion-lag`
+Get offloadDeletionLag, in minutes, for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-offload-deletion-lag tenant/namespace
+```
+
+### `set-offload-deletion-lag`
+Set offloadDeletionLag for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-offload-deletion-lag tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-l`, `--lag`|Duration to wait after offloading a ledger segment before deleting the copy of that segment from cluster-local storage (e.g. 10m, 5h, 3d, 2w)|-1|
+
+### `clear-offload-deletion-lag`
+Clear offloadDeletionLag for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces clear-offload-deletion-lag tenant/namespace
+```
+
+### `get-schema-autoupdate-strategy`
+Get the schema auto-update strategy for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-schema-autoupdate-strategy tenant/namespace
+```
+
+### `set-schema-autoupdate-strategy`
+Set the schema auto-update strategy for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-schema-autoupdate-strategy tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--compatibility`|Compatibility level required for new schemas created via a Producer. Possible values (Full, Backward, Forward, None).|Full|
+|`-d`, `--disabled`|Disable automatic schema updates.|false|
+
+
+## `ns-isolation-policy`
+Operations for managing namespace isolation policies.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy subcommand
+```
+
+Subcommands
+* `set`
+* `get`
+* `list`
+* `delete`
+* `brokers`
+* `broker`
+
+### `set`
+Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy set cluster-name policy-name options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]|
+|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]|
+|`--namespaces`|Comma-separated namespaces regex list|[]|
+|`--primary`|Comma-separated primary broker regex list|[]|
+|`--secondary`|Comma-separated secondary broker regex list|[]|
+
+
+### `get`
+Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy get cluster-name policy-name
+```
+
+### `list`
+List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy list cluster-name
+```
+
+### `delete`
+Delete the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy delete cluster-name policy-name
+```
+
+### `brokers`
+List all brokers with namespace-isolation policies attached to them. This operation requires Pulsar superuser privileges.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy brokers cluster-name
+```
+
+### `broker`
+Get a broker with the namespace-isolation policies attached to it. This operation requires Pulsar superuser privileges.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy broker cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--broker`|Broker name to get namespace-isolation policies attached to it||
+
+
+## `sink`
+
+An interface for managing Pulsar IO sinks (egress data from Pulsar)
+
+Usage
+```bash
+$ pulsar-admin sink subcommand
+```
+
+Subcommands
+* `create`
+* `update`
+* `delete`
+* `list`
+* `get`
+* `status`
+* `stop`
+* `start`
+* `restart`
+* `localrun`
+* `available-sinks`
+
+
+### `create`
+Submit a Pulsar IO sink connector to run in a Pulsar cluster
+
+Usage
+```bash
+$ pulsar-admin sink create options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--classname`|The sink's class name if archive is file-url-path (file://)||
+|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema types or class names (as a JSON string)||
+|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--inputs`|The sink’s input topic(s) (multiple topics can be specified as a comma-separated list)||
+|`--archive`|Path to the archive file for the sink. It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The sink’s name||
+|`--namespace`|The sink’s namespace||
+|`--parallelism`|The sink’s parallelism factor (i.e. the number of sink instances to run).||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the sink. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.||
+|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--sink-config`|User defined configs key/values||
+|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration||
+|`--sink-type`|The built-in sink's connector provider||
+|`--topics-pattern`|The topics pattern to consume from: a list of topics under a namespace that match the pattern. `--inputs` and `--topics-pattern` are mutually exclusive. Add a SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only)||
+|`--tenant`|The sink’s tenant||
+|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
+|`--timeout-ms`|The message timeout in milliseconds||
+|`--retain-ordering`|Sink consumes and sinks messages in order||
+|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer||
+
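The file passed via `--sink-config-file` is plain YAML of connector-specific keys. A hypothetical example; the keys below are illustrative and depend entirely on the connector being deployed:

```yaml
# Hypothetical sink-config.yaml; keys are connector-specific.
redisHosts: "localhost:6379"
redisDatabase: 0
batchSize: 200
```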
+
+### `update`
+Update a Pulsar IO sink connector
+
+Usage
+```bash
+$ pulsar-admin sink update options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--classname`|The sink's class name if archive is file-url-path (file://)||
+|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema types or class names (as a JSON string)||
+|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--inputs`|The sink’s input topic(s) (multiple topics can be specified as a comma-separated list)||
+|`--archive`|Path to the archive file for the sink. It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The sink’s name||
+|`--namespace`|The sink’s namespace||
+|`--parallelism`|The sink’s parallelism factor (i.e. the number of sink instances to run).||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the sink. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.||
+|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--sink-config`|User defined configs key/values||
+|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration||
+|`--sink-type`|The built-in sink's connector provider||
+|`--topics-pattern`|The topics pattern to consume from: a list of topics under a namespace that match the pattern. `--inputs` and `--topics-pattern` are mutually exclusive. Add a SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only)||
+|`--tenant`|The sink’s tenant||
+|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
+|`--retain-ordering`|Sink consumes and sinks messages in order||
+|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer||
+|`--timeout-ms`|The message timeout in milliseconds||
+
+
+### `delete`
+Delete a Pulsar IO sink connector
+
+Usage
+```bash
+$ pulsar-admin sink delete options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The sink's name||
+|`--namespace`|The sink's namespace||
+|`--tenant`|The sink's tenant||
+
+
+### `list`
+List all running Pulsar IO sink connectors
+
+Usage
+```bash
+$ pulsar-admin sink list options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--namespace`|The sink's namespace||
+|`--tenant`|The sink's tenant||
+
+
+### `get`
+Get information about a Pulsar IO sink connector
+
+Usage
+```bash
+$ pulsar-admin sink get options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The sink's name||
+|`--namespace`|The sink's namespace||
+|`--tenant`|The sink's tenant||
+
+
+### `status`
+Check the current status of a Pulsar Sink
+
+Usage
+```bash
+$ pulsar-admin sink status options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--instance-id`|The sink instanceId (Get-status of all instances if instance-id is not provided)||
+|`--name`|The sink's name||
+|`--namespace`|The sink's namespace||
+|`--tenant`|The sink's tenant||
+
+
+### `stop`
+Stop a sink instance
+
+Usage
+```bash
+$ pulsar-admin sink stop options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--instance-id`|The sink instanceId (stop all instances if instance-id is not provided)||
+|`--name`|The sink's name||
+|`--namespace`|The sink's namespace||
+|`--tenant`|The sink's tenant||
+
+
+### `start`
+Start a sink instance
+
+Usage
+```bash
+$ pulsar-admin sink start options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--instance-id`|The sink instanceId (start all instances if instance-id is not provided)||
+|`--name`|The sink's name||
+|`--namespace`|The sink's namespace||
+|`--tenant`|The sink's tenant||
+
+
+### `restart`
+Restart a sink instance
+
+Usage
+```bash
+$ pulsar-admin sink restart options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--instance-id`|The sink instanceId (restart all instances if instance-id is not provided)||
+|`--name`|The sink's name||
+|`--namespace`|The sink's namespace||
+|`--tenant`|The sink's tenant||
+
+
+### `localrun`
+Run a Pulsar IO sink connector locally (rather than deploying it to the Pulsar cluster)
+
+Usage
+```bash
+$ pulsar-admin sink localrun options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--broker-service-url`|The URL for the Pulsar broker||
+|`--classname`|The sink's class name if archive is file-url-path (file://)||
+|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema types or class names (as a JSON string)||
+|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--inputs`|The sink’s input topic(s) (multiple topics can be specified as a comma-separated list)||
+|`--archive`|Path to the archive file for the sink. It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The sink’s name||
+|`--namespace`|The sink’s namespace||
+|`--parallelism`|The sink’s parallelism factor (i.e. the number of sink instances to run).||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the sink. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.||
+|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--sink-config`|User defined configs key/values||
+|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration||
+|`--sink-type`|The built-in sink's connector provider||
+|`--topics-pattern`|The topics pattern to consume from: a list of topics under a namespace that match the pattern. `--inputs` and `--topics-pattern` are mutually exclusive. Add a SerDe class name for a pattern in `--custom-serde-inputs` (supported for Java functions only)||
+|`--tenant`|The sink’s tenant||
+|`--auto-ack`|Whether or not the framework will automatically acknowledge messages||
+|`--timeout-ms`|The message timeout in milliseconds||
+|`--client-auth-params`|Client authentication param||
+|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker||
+|`--hostname-verification-enabled`|Enable hostname verification|false|
+|`--retain-ordering`|Sink consumes and sinks messages in order||
+|`--subs-name`|Pulsar source subscription name if user wants a specific subscription-name for input-topic consumer||
+|`--tls-allow-insecure`|Allow insecure tls connection|false|
+|`--tls-trust-cert-path`|The tls trust cert file path||
+|`--use-tls`|Use tls connection|false|
+
+
+### `available-sinks`
+Get the list of Pulsar IO connector sinks supported by the Pulsar cluster
+
+Usage
+```bash
+$ pulsar-admin sink available-sinks
+```
+
+
+## `source`
+An interface for managing Pulsar IO sources (ingress data into Pulsar)
+
+Usage
+```bash
+$ pulsar-admin source subcommand
+```
+
+Subcommands
+* `create`
+* `update`
+* `delete`
+* `get`
+* `status`
+* `list`
+* `stop`
+* `start`
+* `restart`
+* `localrun`
+* `available-sources`
+
+
+### `create`
+Submit a Pulsar IO source connector to run in a Pulsar cluster
+
+Usage
+```bash
+$ pulsar-admin source create options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--classname`|The source's class name if archive is file-url-path (file://)||
+|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--deserialization-classname`|The SerDe classname for the source||
+|`--destination-topic-name`|The Pulsar topic to which data is sent||
+|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--archive`|The path to the NAR archive for the Source. It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The source’s name||
+|`--namespace`|The source’s namespace||
+|`--parallelism`|The source’s parallelism factor (i.e. the number of source instances to run).||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the source. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.||
+|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--schema-type`|The schema type (either a built-in schema like 'avro' or 'json', or a custom Schema class name used to encode messages emitted from the source)||
+|`--source-type`|One of the built-in source's connector provider||
+|`--source-config`|Source config key/values||
+|`--source-config-file`|The path to a YAML config file specifying the source’s configuration||
+|`--tenant`|The source’s tenant||
+
+
+### `update`
+Update an already submitted Pulsar IO source connector
+
+Usage
+```bash
+$ pulsar-admin source update options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--classname`|The source's class name if archive is file-url-path (file://)||
+|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--deserialization-classname`|The SerDe classname for the source||
+|`--destination-topic-name`|The Pulsar topic to which data is sent||
+|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--archive`|The path to the NAR archive for the Source. It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The source’s name||
+|`--namespace`|The source’s namespace||
+|`--parallelism`|The source’s parallelism factor (i.e. the number of source instances to run).||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the source. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.||
+|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--schema-type`|The schema type (either a built-in schema like 'avro' or 'json', or a custom Schema class name used to encode messages emitted from the source)||
+|`--source-type`|One of the built-in source's connector provider||
+|`--source-config`|Source config key/values||
+|`--source-config-file`|The path to a YAML config file specifying the source’s configuration||
+|`--tenant`|The source’s tenant||
+
+
+### `delete`
+Delete a Pulsar IO source connector
+
+Usage
+```bash
+$ pulsar-admin source delete options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The source's name||
+|`--namespace`|The source's namespace||
+|`--tenant`|The source's tenant||
+
+
+### `get`
+Get information about a Pulsar IO source connector
+
+Usage
+```bash
+$ pulsar-admin source get options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The source's name||
+|`--namespace`|The source's namespace||
+|`--tenant`|The source's tenant||
+
+
+### `status`
+Check the current status of a Pulsar Source
+
+Usage
+```bash
+$ pulsar-admin source status options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--instance-id`|The source instanceId (Get-status of all instances if instance-id is not provided)||
+|`--name`|The source's name||
+|`--namespace`|The source's namespace||
+|`--tenant`|The source's tenant||
+
+
+### `list`
+List all running Pulsar IO source connectors
+
+Usage
+```bash
+$ pulsar-admin source list options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--namespace`|The source's namespace||
+|`--tenant`|The source's tenant||
+
+
+### `stop`
+Stop a source instance
+
+Usage
+```bash
+$ pulsar-admin source stop options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--instance-id`|The source instanceId (stop all instances if instance-id is not provided)||
+|`--name`|The source's name||
+|`--namespace`|The source's namespace||
+|`--tenant`|The source's tenant||
+
+
+### `start`
+Start a source instance
+
+Usage
+```bash
+$ pulsar-admin source start options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--instance-id`|The source instanceId (start all instances if instance-id is not provided)||
+|`--name`|The source's name||
+|`--namespace`|The source's namespace||
+|`--tenant`|The source's tenant||
+
+
+### `restart`
+Restart a source instance
+
+Usage
+```bash
+$ pulsar-admin source restart options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--instance-id`|The source instanceId (restart all instances if instance-id is not provided)||
+|`--name`|The source's name||
+|`--namespace`|The source's namespace||
+|`--tenant`|The source's tenant||
+
+
+### `localrun`
+Run a Pulsar IO source connector locally (rather than deploying it to the Pulsar cluster)
+
+Usage
+```bash
+$ pulsar-admin source localrun options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--classname`|The source's class name if archive is file-url-path (file://)||
+|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--deserialization-classname`|The SerDe classname for the source||
+|`--destination-topic-name`|The Pulsar topic to which data is sent||
+|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--archive`|The path to the NAR archive for the Source. It also supports url-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package.||
+|`--name`|The source’s name||
+|`--namespace`|The source’s namespace||
+|`--parallelism`|The source’s parallelism factor (i.e. the number of source instances to run).||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the source. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE.||
+|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--schema-type`|The schema type (either a built-in schema like 'avro' or 'json', or a custom Schema class name used to encode messages emitted from the source)||
+|`--source-type`|One of the built-in source's connector provider||
+|`--source-config`|Source config key/values||
+|`--source-config-file`|The path to a YAML config file specifying the source’s configuration||
+|`--tenant`|The source’s tenant||
+|`--broker-service-url`|The URL for the Pulsar broker||
+|`--client-auth-params`|Client authentication param||
+|`--client-auth-plugin`|Client authentication plugin using which function-process can connect to broker||
+|`--hostname-verification-enabled`|Enable hostname verification|false|
+|`--tls-allow-insecure`|Allow insecure tls connection|false|
+|`--tls-trust-cert-path`|The tls trust cert file path||
+|`--use-tls`|Use tls connection||
+
+
+### `available-sources`
+Get the list of Pulsar IO connector sources supported by the Pulsar cluster
+
+Usage
+```bash
+$ pulsar-admin source available-sources
+```
+
+
+## `topics`
+Operations for managing Pulsar topics (both persistent and non-persistent)
+
+Usage
+```bash
+$ pulsar-admin topics subcommand
+```
+
+Subcommands
+* `compact`
+* `compaction-status`
+* `offload`
+* `offload-status`
+* `create-partitioned-topic`
+* `delete-partitioned-topic`
+* `create`
+* `get-partitioned-topic-metadata`
+* `update-partitioned-topic`
+* `list`
+* `list-in-bundle`
+* `terminate`
+* `permissions`
+* `grant-permission`
+* `revoke-permission`
+* `lookup`
+* `bundle-range`
+* `delete`
+* `unload`
+* `subscriptions`
+* `unsubscribe`
+* `stats`
+* `stats-internal`
+* `info-internal`
+* `partitioned-stats`
+* `skip`
+* `skip-all`
+* `expire-messages`
+* `expire-messages-all-subscriptions`
+* `peek-messages`
+* `reset-cursor`
+
+
+### `compact`
+Run compaction on the specified topic (persistent topics only)
+
+Usage
+```bash
+$ pulsar-admin topics compact persistent://tenant/namespace/topic
+```
+
+### `compaction-status`
+Check the status of a topic compaction (persistent topics only)
+
+Usage
+```bash
+$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-w`, `--wait-complete`|Wait for compaction to complete|false|
+
+
+### `offload`
+Trigger offload of data from a topic to long-term storage (e.g. Amazon S3)
+
+Usage
+```bash
+$ pulsar-admin topics offload persistent://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic||
+
+
+### `offload-status`
+Check the status of data offloading from a topic to long-term storage
+
+Usage
+```bash
+$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-w`, `--wait-complete`|Wait for offloading to complete|false|
+
+
+### `create-partitioned-topic`
+Create a partitioned topic. A partitioned topic must be created before producers can publish to it.
+
+Usage
+```bash
+$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-p`, `--partitions`|The number of partitions for the topic|0|
+
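Under the hood, a partitioned topic is materialized as N internal topics named `<topic>-partition-<index>`. A sketch of the naming; the topic name is illustrative:

```shell
# Illustrative only: the internal topic names behind a 3-partition topic.
topic="persistent://public/default/orders"
for i in 0 1 2; do
  echo "${topic}-partition-${i}"
done
```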
+### `delete-partitioned-topic`
+Delete a partitioned topic. This will also delete all the partitions of the topic if they exist.
+
+Usage
+```bash
+$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic
+```
+
+### `create`
+Create a non-partitioned topic. A non-partitioned topic must be explicitly created by the user if allowAutoTopicCreation or createIfMissing is disabled.
+
+Usage
+```bash
+$ pulsar-admin topics create {persistent|non-persistent}://tenant/namespace/topic
+```
+
+### `get-partitioned-topic-metadata`
+Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this will return an empty topic with zero partitions.
+
+Usage
+```bash
+$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic
+```
+
+### `update-partitioned-topic`
+Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions.
+
+Usage
+```bash
+$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-p`, `--partitions`|The number of partitions for the topic|0|
+
+### `list`
+Get the list of topics under a namespace
+
+Usage
+```bash
+$ pulsar-admin topics list tenant/namespace
+```
+
+### `list-in-bundle`
+Get a list of non-persistent topics present under a namespace bundle
+
+Usage
+```bash
+$ pulsar-admin topics list-in-bundle tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-b`, `--bundle`|The bundle range||
+
+
+### `terminate`
+Terminate a topic (disallow further messages from being published on the topic)
+
+Usage
+```bash
+$ pulsar-admin topics terminate {persistent|non-persistent}://tenant/namespace/topic
+```
+
+### `permissions`
+Get the permissions on a topic. Retrieve the effective permissions for a destination. These permissions are defined by the permissions set at the namespace level combined (union) with any specific permissions set on the topic.
+
+Usage
+```bash
+$ pulsar-admin topics permissions topic
+```
+
+### `grant-permission`
+Grant a new permission to a client role on a single topic
+
+Usage
+```bash
+$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--actions`|Actions to be granted (`produce` or `consume`)||
+|`--role`|The client role to which to grant the permissions||
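+
+For example, the following grants a role named `my-app` (a placeholder) permission to both produce and consume on a topic:
+
+```bash
+# placeholder role and topic names
+$ pulsar-admin topics grant-permission \
+  persistent://my-tenant/my-ns/my-topic \
+  --role my-app \
+  --actions produce,consume
+```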
+
+
+### `revoke-permission`
+Revoke permissions from a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation returns an error (HTTP status code 412).
+
+Usage
+```bash
+$ pulsar-admin topics revoke-permission topic
+```
+
+### `lookup`
+Look up a topic from the current serving broker
+
+Usage
+```bash
+$ pulsar-admin topics lookup topic
+```
+
+### `bundle-range`
+Get the namespace bundle which contains the given topic
+
+Usage
+```bash
+$ pulsar-admin topics bundle-range topic
+```
+
+### `delete`
+Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic.
+
+Usage
+```bash
+$ pulsar-admin topics delete topic
+```
+
+### `unload`
+Unload a topic
+
+Usage
+```bash
+$ pulsar-admin topics unload topic
+```
+
+### `subscriptions`
+Get the list of subscriptions on the topic
+
+Usage
+```bash
+$ pulsar-admin topics subscriptions topic
+```
+
+### `unsubscribe`
+Delete a durable subscriber from a topic
+
+Usage
+```bash
+$ pulsar-admin topics unsubscribe topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-s`, `--subscription`|The subscription to delete||
+
+
+### `stats`
+Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
+
+Usage
+```bash
+$ pulsar-admin topics stats topic
+```
+
+### `stats-internal`
+Get the internal stats for the topic
+
+Usage
+```bash
+$ pulsar-admin topics stats-internal topic
+```
+
+### `info-internal`
+Get the internal metadata info for the topic
+
+Usage
+```bash
+$ pulsar-admin topics info-internal topic
+```
+
+### `partitioned-stats`
+Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
+
+Usage
+```bash
+$ pulsar-admin topics partitioned-stats topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--per-partition`|Get per-partition stats|false|
+
+
+### `skip`
+Skip some messages for the subscription
+
+Usage
+```bash
+$ pulsar-admin topics skip topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-n`, `--count`|The number of messages to skip|0|
+|`-s`, `--subscription`|The subscription on which to skip messages||
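+
+For example, the following skips 10 messages on a subscription named `my-sub` (a placeholder):
+
+```bash
+# placeholder topic and subscription names
+$ pulsar-admin topics skip \
+  persistent://my-tenant/my-ns/my-topic \
+  --count 10 \
+  --subscription my-sub
+```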
+
+
+### `skip-all`
+Skip all the messages for the subscription
+
+Usage
+```bash
+$ pulsar-admin topics skip-all topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-s`, `--subscription`|The subscription to clear||
+
+
+### `expire-messages`
+Expire messages that are older than the given expiry time (in seconds) for the subscription.
+
+Usage
+```bash
+$ pulsar-admin topics expire-messages topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
+|`-s`, `--subscription`|The subscription to skip messages on||
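+
+For example, the following expires messages older than two hours (7200 seconds) on a subscription named `my-sub` (a placeholder):
+
+```bash
+# placeholder topic and subscription names
+$ pulsar-admin topics expire-messages \
+  persistent://my-tenant/my-ns/my-topic \
+  --expireTime 7200 \
+  --subscription my-sub
+```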
+
+
+### `expire-messages-all-subscriptions`
+Expire messages older than the given expiry time (in seconds) for all subscriptions
+
+Usage
+```bash
+$ pulsar-admin topics expire-messages-all-subscriptions topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
+
+
+### `peek-messages`
+Peek some messages for the subscription.
+
+Usage
+```bash
+$ pulsar-admin topics peek-messages topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-n`, `--count`|The number of messages|0|
+|`-s`, `--subscription`|Subscription to get messages from||
+
+
+### `reset-cursor`
+Reset the subscription position to the position closest to the given timestamp
+
+Usage
+```bash
+$ pulsar-admin topics reset-cursor topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-s`, `--subscription`|Subscription to reset position on||
+|`-t`, `--time`|The amount of time to rewind, expressed as a number with a unit suffix (minutes, hours, days, weeks, etc.). Examples: `100m`, `3h`, `2d`, `5w`.||
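+
+For example, the following rewinds a subscription named `my-sub` (a placeholder) to the position it held one hour ago:
+
+```bash
+# placeholder topic and subscription names
+$ pulsar-admin topics reset-cursor \
+  persistent://my-tenant/my-ns/my-topic \
+  --subscription my-sub \
+  --time 1h
+```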
+
+
+
+## `tenants`
+Operations for managing tenants
+
+Usage
+```bash
+$ pulsar-admin tenants subcommand
+```
+
+Subcommands
+* `list`
+* `get`
+* `create`
+* `update`
+* `delete`
+
+### `list`
+List the existing tenants
+
+Usage
+```bash
+$ pulsar-admin tenants list
+```
+
+### `get`
+Gets the configuration of a tenant
+
+Usage
+```bash
+$ pulsar-admin tenants get tenant-name
+```
+
+### `create`
+Creates a new tenant
+
+Usage
+```bash
+$ pulsar-admin tenants create tenant-name options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-r`, `--admin-roles`|Comma-separated admin roles||
+|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
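+
+For example, the following creates a tenant that the `admin` role can administer and that can use the cluster `us-west` (both names are placeholders):
+
+```bash
+# placeholder tenant, role, and cluster names
+$ pulsar-admin tenants create my-tenant \
+  --admin-roles admin \
+  --allowed-clusters us-west
+```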
+
+### `update`
+Updates a tenant
+
+Usage
+```bash
+$ pulsar-admin tenants update tenant-name options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-r`, `--admin-roles`|Comma-separated admin roles||
+|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
+
+
+### `delete`
+Deletes an existing tenant
+
+Usage
+```bash
+$ pulsar-admin tenants delete tenant-name
+```
+
+
+## `resource-quotas`
+Operations for managing resource quotas
+
+Usage
+```bash
+$ pulsar-admin resource-quotas subcommand
+```
+
+Subcommands
+* `get`
+* `set`
+* `reset-namespace-bundle-quota`
+
+
+### `get`
+Get the resource quota for a specified namespace bundle, or default quota if no namespace/bundle is specified.
+
+Usage
+```bash
+$ pulsar-admin resource-quotas get options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
+|`-n`, `--namespace`|The namespace||
+
+
+### `set`
+Set the resource quota for the specified namespace bundle, or default quota if no namespace/bundle is specified.
+
+Usage
+```bash
+$ pulsar-admin resource-quotas set options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0|
+|`-bo`, `--bandwidthOut`|The expected outbound bandwidth (in bytes/second)|0|
+|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
+|`-d`, `--dynamic`|Whether the quota can be dynamically re-calculated|false|
+|`-mem`, `--memory`|The expected memory usage (in megabytes)|0|
+|`-mi`, `--msgRateIn`|Expected incoming messages per second|0|
+|`-mo`, `--msgRateOut`|Expected outgoing messages per second|0|
+|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with -b/--bundle.||
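+
+For example, the following sets a quota on one bundle of a namespace (the names and values are illustrative placeholders):
+
+```bash
+# placeholder namespace, bundle range, and quota values
+$ pulsar-admin resource-quotas set \
+  --namespace my-tenant/my-ns \
+  --bundle 0x00000000_0x40000000 \
+  --msgRateIn 1000 \
+  --msgRateOut 2000 \
+  --bandwidthIn 1048576 \
+  --bandwidthOut 2097152 \
+  --memory 64 \
+  --dynamic
+```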
+
+
+### `reset-namespace-bundle-quota`
+Reset the specified namespace bundle's resource quota to the default value.
+
+Usage
+```bash
+$ pulsar-admin resource-quotas reset-namespace-bundle-quota options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
+|`-n`, `--namespace`|The namespace||
+
+
+
+## `schemas`
+Operations related to Schemas associated with Pulsar topics.
+
+Usage
+```
+$ pulsar-admin schemas subcommand
+```
+
+Subcommands
+* `upload`
+* `delete`
+* `get`
+* `extract`
+
+
+### `upload`
+Upload the schema definition for a topic
+
+Usage
+```bash
+$ pulsar-admin schemas upload persistent://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--filename`|The path to the schema definition file. An example schema file is available under the `conf` directory.||
+
+
+### `delete`
+Delete the schema definition associated with a topic
+
+Usage
+```bash
+$ pulsar-admin schemas delete persistent://tenant/namespace/topic
+```
+
+
+### `get`
+Retrieve the schema definition associated with a topic (at a given version, if a version is supplied).
+
+Usage
+```bash
+$ pulsar-admin schemas get persistent://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--version`|The version of the schema definition to retrieve for a topic.||
+
+### `extract`
+Provide the schema definition for a topic via a Java class contained in a JAR file
+
+Usage
+```bash
+$ pulsar-admin schemas extract persistent://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-c`, `--classname`|The Java class name||
+|`-j`, `--jar`|A path to the JAR file which contains the above Java class||
+|`-t`, `--type`|The type of the schema (avro or json)||
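+
+For example, the following derives an Avro schema from a class in a JAR file (the class and file names are placeholders):
+
+```bash
+# placeholder topic, JAR path, and class name
+$ pulsar-admin schemas extract \
+  persistent://my-tenant/my-ns/my-topic \
+  --jar /path/to/my-app.jar \
+  --classname com.example.MyRecord \
+  --type avro
+```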
+
+
diff --git a/site2/website/versioned_docs/version-2.3.2/security-kerberos.md b/site2/website/versioned_docs/version-2.3.2/security-kerberos.md
new file mode 100644
index 0000000..e03accf
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/security-kerberos.md
@@ -0,0 +1,284 @@
+---
+id: version-2.3.2-security-kerberos
+title: Authentication using Kerberos
+sidebar_label: Authentication using Kerberos
+original_id: security-kerberos
+---
+
+[Kerberos](https://web.mit.edu/kerberos/) is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography. 
+
+In Pulsar, Kerberos is used together with [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) as an authentication mechanism. Pulsar uses the [Java Authentication and Authorization Service (JAAS)](https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service) for SASL configuration, so you must provide JAAS configurations for Kerberos authentication.
+
+This document describes in detail how to configure Kerberos with SASL between Pulsar clients and brokers, and then how to configure Kerberos for the Pulsar proxy.
+
+## Configuration for Kerberos between Client and Broker
+
+### Prerequisites
+
+To begin, you need a [Key Distribution Center (KDC)](https://en.wikipedia.org/wiki/Key_distribution_center) configured and running.
+
+If your organization is already using a Kerberos server (for example, by using `Active Directory`), there is no need to install a new server for Pulsar. Otherwise you will need to install one. Your Linux vendor likely has packages for `Kerberos` and a short guide on how to install and configure it: ([Ubuntu](https://help.ubuntu.com/community/Kerberos), 
+[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html)).
+
+Note that if you are using Oracle Java, you need to download JCE policy files for your Java version and copy them to the `$JAVA_HOME/jre/lib/security` directory.
+
+#### Kerberos Principals
+
+If you are using an existing Kerberos system, ask your Kerberos administrator for a principal for each broker in your cluster and for every operating system user that will access Pulsar with Kerberos authentication (via clients and tools).
+
+If you have installed your own Kerberos system, you can create these principals with the following commands:
+
+```shell
+### add Principals for broker
+sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
+sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
+### add Principals for client
+sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
+sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"
+```
+Note that it is a *Kerberos* requirement that all your hosts can be resolved with their FQDNs.
+
+#### Configure how to connect to KDC
+
+You need to specify the path to the `krb5.conf` file on both the client and broker sides. The contents of the `krb5.conf` file indicate the default realm and KDC information. See [JDK's Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details.
+
+```shell
+-Djava.security.krb5.conf=/etc/pulsar/krb5.conf
+```
+Here is an example of the `krb5.conf` file, in which `EXAMPLE.COM` is the default realm and `kdc = localhost:62037` is the KDC server URL for the realm `EXAMPLE.COM`:
+
+```
+[libdefaults]
+ default_realm = EXAMPLE.COM
+
+[realms]
+ EXAMPLE.COM  = {
+  kdc = localhost:62037
+ }
+```
+
+Machines configured with Kerberos usually already have a system-wide configuration, in which case this step is optional.
+
+#### JAAS configuration file
+
+A JAAS configuration file is needed on both the client and broker sides. It provides the information used to connect to the KDC. Here is an example named `pulsar_jaas.conf`:
+
+```
+ PulsarBroker {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   storeKey=true
+   useTicketCache=false
+   keyTab="/etc/security/keytabs/pulsarbroker.keytab"
+   principal="broker/localhost@EXAMPLE.COM";
+};
+
+ PulsarClient {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   storeKey=true
+   useTicketCache=false
+   keyTab="/etc/security/keytabs/pulsarclient.keytab"
+   principal="client/localhost@EXAMPLE.COM";
+};
+```
+
+You need to set the JAAS configuration file path as a JVM parameter for both the client and the broker. For example:
+
+```shell
+    -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf 
+```
+
+In the `pulsar_jaas.conf` file above:
+
+1. `PulsarBroker` is a section name in the JAAS file used by each broker. This section tells the broker which principal to use inside Kerberos
+    and the location of the keytab where the principal is stored. It allows the broker to use the keytab specified in this section.
+2. `PulsarClient` is a section name in the JAAS file used by each client. This section tells the client which principal to use inside Kerberos
+    and the location of the keytab where the principal is stored. It allows the client to use the keytab specified in this section.
+
+Alternatively, you can use two separate JAAS configuration files: the broker's file contains only the `PulsarBroker` section, while the client's file contains only the `PulsarClient` section.
+
+### Kerberos configuration for Brokers
+
+1. In the `broker.conf` file, set the Kerberos-related configuration:
+
+ - Set `authenticationEnabled` to `true`.
+ - Set `authenticationProviders` to `AuthenticationProviderSasl`.
+ - Set `saslJaasClientAllowedIds` to a regex matching the principals that are allowed to connect to the broker.
+ - Set `saslJaasBrokerSectionName` to the broker's section name in the JAAS configuration file.
+ 
+ Here is an example:
+
+```
+authenticationEnabled=true
+authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
+saslJaasClientAllowedIds=.*client.*
+saslJaasBrokerSectionName=PulsarBroker
+```
+
+2. Set the JVM parameters for the JAAS configuration file and the krb5 configuration file as additional options:
+```shell
+   -Djava.security.auth.login.config=/etc/pulsar/pulsar_jaas.conf -Djava.security.krb5.conf=/etc/pulsar/krb5.conf 
+```
+You can add these options at the end of `PULSAR_EXTRA_OPTS` in the [`pulsar_env.sh`](https://github.com/apache/pulsar/blob/master/conf/pulsar_env.sh) file.
+
+Make sure that the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file are reachable by the operating system user who is starting the broker.
+
+### Kerberos configuration for clients
+
+On the client side, configure the authentication type to use `AuthenticationSasl` and provide the authentication parameters to it.
+
+Two parameters are needed:
+- `saslJaasClientSectionName` corresponds to the client section in the JAAS configuration file;
+- `serverType` indicates whether this client connects to a broker or a proxy. The client uses this parameter to determine which server-side principal to use.
+
+To authenticate between client and broker with the JAAS configuration file above, set `saslJaasClientSectionName` to `PulsarClient` and `serverType` to `broker`.
+
+The following is an example of creating a Java client:
+ 
+ ```java
+ System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
+ System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");
+
+ Map<String, String> clientSaslConfig = Maps.newHashMap();
+ clientSaslConfig.put("saslJaasClientSectionName", "PulsarClient");
+ clientSaslConfig.put("serverType", "broker");
+
+ Authentication saslAuth = AuthenticationFactory
+         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), clientSaslConfig);
+ 
+ PulsarClient client = PulsarClient.builder()
+         .serviceUrl("pulsar://my-broker.com:6650")
+         .authentication(saslAuth)
+         .build();
+ ```
+
+Make sure that the keytabs configured in the `pulsar_jaas.conf` file and the KDC server configured in the `krb5.conf` file are reachable by the operating system user who is starting the Pulsar client.
+
+## Kerberos configuration for working with Pulsar Proxy
+
+With the above configuration, client and broker can authenticate each other using Kerberos.
+
+Connecting to the Pulsar Proxy is a little different: the client (as a SASL client in Kerberos) is first authenticated by the Pulsar Proxy (as a SASL server), and then the Pulsar Proxy is authenticated by the Pulsar broker.
+
+Building on the client-and-broker configuration above, the following sections show how to configure the Pulsar Proxy.
+
+### Create principal for Pulsar Proxy in Kerberos
+
+In addition to the principals above, you need a new principal for the Pulsar Proxy. If you already have principals for the client and broker, you only need to add the proxy principal here.
+
+```shell
+### add Principals for Pulsar Proxy
+sudo /usr/sbin/kadmin.local -q 'addprinc -randkey proxy/{hostname}@{REALM}'
+sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{proxy-keytabname}.keytab proxy/{hostname}@{REALM}"
+### add Principals for broker
+sudo /usr/sbin/kadmin.local -q 'addprinc -randkey broker/{hostname}@{REALM}'
+sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{broker-keytabname}.keytab broker/{hostname}@{REALM}"
+### add Principals for client
+sudo /usr/sbin/kadmin.local -q 'addprinc -randkey client/{hostname}@{REALM}'
+sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{client-keytabname}.keytab client/{hostname}@{REALM}"
+```
+
+### Add a section in JAAS configuration file for Pulsar Proxy
+
+In addition to the sections above, add a new section for the Pulsar Proxy to the JAAS configuration file.
+
+Here is an example named `pulsar_jaas.conf`:
+
+```
+ PulsarBroker {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   storeKey=true
+   useTicketCache=false
+   keyTab="/etc/security/keytabs/pulsarbroker.keytab"
+   principal="broker/localhost@EXAMPLE.COM";
+};
+
+ PulsarProxy {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   storeKey=true
+   useTicketCache=false
+   keyTab="/etc/security/keytabs/pulsarproxy.keytab"
+   principal="proxy/localhost@EXAMPLE.COM";
+};
+
+ PulsarClient {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   storeKey=true
+   useTicketCache=false
+   keyTab="/etc/security/keytabs/pulsarclient.keytab"
+   principal="client/localhost@EXAMPLE.COM";
+};
+```
+
+### Proxy Client configuration
+
+The client configuration is similar to the client-and-broker configuration above, except that `serverType` is set to `proxy` instead of `broker`, because the client needs to perform Kerberos authentication with the proxy.
+
+ ```java
+ System.setProperty("java.security.auth.login.config", "/etc/pulsar/pulsar_jaas.conf");
+ System.setProperty("java.security.krb5.conf", "/etc/pulsar/krb5.conf");
+
+ Map<String, String> clientSaslConfig = Maps.newHashMap();
+ clientSaslConfig.put("saslJaasClientSectionName", "PulsarClient");
+ clientSaslConfig.put("serverType", "proxy");        // ** this is the difference **
+
+ Authentication saslAuth = AuthenticationFactory
+         .create(org.apache.pulsar.client.impl.auth.AuthenticationSasl.class.getName(), clientSaslConfig);
+ 
+ PulsarClient client = PulsarClient.builder()
+         .serviceUrl("pulsar://my-broker.com:6650")
+         .authentication(saslAuth)
+         .build();
+ ```
+
+### Kerberos configuration for Pulsar Proxy service
+
+In the `proxy.conf` file, set the Kerberos-related configuration. Here is an example:
+```shell
+## related to authenticate client.
+authenticationEnabled=true
+authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
+saslJaasClientAllowedIds=.*client.*
+saslJaasBrokerSectionName=PulsarProxy
+
+## related to be authenticated by broker
+brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationSasl
+brokerClientAuthenticationParameters=saslJaasClientSectionName:PulsarProxy,serverType:broker
+forwardAuthorizationCredentials=true
+```
+
+The first part configures authentication between the client and the Pulsar Proxy. In this phase, the client acts as the SASL client, while the Pulsar Proxy acts as the SASL server.
+
+The second part configures authentication between the Pulsar Proxy and the Pulsar broker. In this phase, the Pulsar Proxy acts as the SASL client, while the Pulsar broker acts as the SASL server.
+
+### Broker side configuration
+
+The broker side configuration file is the same `broker.conf` as above; you do not need any special configuration for the Pulsar Proxy.
+
+```
+authenticationEnabled=true
+authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderSasl
+saslJaasClientAllowedIds=.*client.*
+saslJaasBrokerSectionName=PulsarBroker
+```
+
+## Regarding authorization and role token
+
+For Kerberos authentication, the authenticated principal is used as the role token for Pulsar authorization. For more information about authorization in Pulsar, see [security authorization](security-authorization.md).
+
+## Regarding authorization between BookKeeper and ZooKeeper
+
+Setting the `bookkeeperClientAuthenticationPlugin` parameter in `broker.conf` is a prerequisite for the broker (as a Kerberos client) to be authenticated by the bookie (as a Kerberos server):
+
+```
+bookkeeperClientAuthenticationPlugin=org.apache.bookkeeper.sasl.SASLClientProviderFactory
+```
+
+For more details on how to configure Kerberos for BookKeeper and ZooKeeper, refer to the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/security/sasl/).
+
diff --git a/site2/website/versioned_docs/version-2.3.2/security-overview.md b/site2/website/versioned_docs/version-2.3.2/security-overview.md
new file mode 100644
index 0000000..f7539ab
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.3.2/security-overview.md
@@ -0,0 +1,42 @@
+---
+id: version-2.3.2-security-overview
+title: Pulsar Security Overview
+sidebar_label: Overview
+original_id: security-overview
+---
+
+Apache Pulsar is the central message bus for a business. It is frequently used to store mission-critical data, so enabling security features is crucial.
+
+By default, no encryption, authentication, or authorization is configured, and any client can communicate with Apache Pulsar via plain-text service URLs.
+It is critical to restrict access via these plain-text service URLs to trusted clients only. Network segmentation and/or authorization ACLs can be used
+to restrict access to trusted IPs; if neither is used, the cluster is wide open and can be accessed by anyone.
+
+Pulsar supports a pluggable authentication mechanism that Pulsar clients can use to authenticate with brokers and proxies. Pulsar
+can also be configured to support multiple authentication sources.
+
+It is strongly recommended to secure the service components in your Apache Pulsar deployment.
+
+## Role Tokens
+
+In Pulsar, a *role* is a string, like `admin` or `app1`, that can represent a single client or multiple clients. Roles are used to control permissions for clients
+to produce to or consume from certain topics, administer tenant configuration, and more.
+
+Apache Pulsar uses an [Authentication Provider](#authentication-providers) to establish the identity of a client and then assign that client a *role token*. This
+role token is then used for [Authorization and ACLs](security-authorization.md) to determine what the client is authorized to do.
+
+## Authentication Providers
+
+Pulsar currently supports the following authentication providers:
+
+- [TLS Authentication](security-tls-authentication.md)
+- [Athenz](security-athenz.md)
+- [Kerberos](security-kerberos.md)
+
+## Contents
+
+- [Encryption](security-tls-transport.md) and [Authentication](security-tls-authentication.md) using TLS
+- [Authentication using Athenz](security-athenz.md)
+- [Authentication using Kerberos](security-kerberos.md)
+- [Authorization and ACLs](security-authorization.md)
+- [End-to-End Encryption](security-encryption.md)
+
diff --git a/site2/website/versioned_sidebars/version-2.3.2-sidebars.json b/site2/website/versioned_sidebars/version-2.3.2-sidebars.json
new file mode 100644
index 0000000..f221ae3
--- /dev/null
+++ b/site2/website/versioned_sidebars/version-2.3.2-sidebars.json
@@ -0,0 +1,127 @@
+{
+  "version-2.3.2-docs": {
+    "Getting started": [
+      "version-2.3.2-pulsar-2.0",
+      "version-2.3.2-standalone",
+      "version-2.3.2-standalone-docker",
+      "version-2.3.2-client-libraries"
+    ],
+    "Concepts and Architecture": [
+      "version-2.3.2-concepts-overview",
+      "version-2.3.2-concepts-messaging",
+      "version-2.3.2-concepts-architecture-overview",
+      "version-2.3.2-concepts-clients",
+      "version-2.3.2-concepts-replication",
+      "version-2.3.2-concepts-multi-tenancy",
+      "version-2.3.2-concepts-authentication",
+      "version-2.3.2-concepts-topic-compaction",
+      "version-2.3.2-concepts-tiered-storage",
+      "version-2.3.2-concepts-schema-registry"
+    ],
+    "Pulsar Functions": [
+      "version-2.3.2-functions-overview",
+      "version-2.3.2-functions-quickstart",
+      "version-2.3.2-functions-api",
+      "version-2.3.2-functions-deploying",
+      "version-2.3.2-functions-guarantees",
+      "version-2.3.2-functions-state",
+      "version-2.3.2-functions-metrics",
+      "version-2.3.2-functions-worker"
+    ],
+    "Pulsar IO": [
+      "version-2.3.2-io-overview",
+      "version-2.3.2-io-quickstart",
+      "version-2.3.2-io-managing",
+      "version-2.3.2-io-connectors",
+      "version-2.3.2-io-develop",
+      "version-2.3.2-io-cdc"
+    ],
+    "Pulsar SQL": [
+      "version-2.3.2-sql-overview",
+      "version-2.3.2-sql-getting-started",
+      "version-2.3.2-sql-deployment-configurations"
+    ],
+    "Deployment": [
+      "version-2.3.2-deploy-aws",
+      "version-2.3.2-deploy-kubernetes",
+      "version-2.3.2-deploy-bare-metal",
+      "version-2.3.2-deploy-bare-metal-multi-cluster",
+      "version-2.3.2-deploy-dcos",
+      "version-2.3.2-deploy-monitoring"
+    ],
+    "Administration": [
+      "version-2.3.2-administration-zk-bk",
+      "version-2.3.2-administration-geo",
+      "version-2.3.2-administration-dashboard",
+      "version-2.3.2-administration-stats",
+      "version-2.3.2-administration-load-balance",
+      "version-2.3.2-administration-proxy"
+    ],
+    "Security": [
+      "version-2.3.2-security-overview",
+      "version-2.3.2-security-tls-transport",
+      "version-2.3.2-security-tls-authentication",
+      "version-2.3.2-security-token-client",
+      "version-2.3.2-security-token-admin",
+      "version-2.3.2-security-athenz",
+      "version-2.3.2-security-kerberos",
+      "version-2.3.2-security-authorization",
+      "version-2.3.2-security-encryption",
+      "version-2.3.2-security-extending"
+    ],
+    "Client libraries": [
+      "version-2.3.2-client-libraries-java",
+      "version-2.3.2-client-libraries-go",
+      "version-2.3.2-client-libraries-python",
+      "version-2.3.2-client-libraries-cpp",
+      "version-2.3.2-client-libraries-websocket"
+    ],
+    "Admin API": [
+      "version-2.3.2-admin-api-overview",
+      "version-2.3.2-admin-api-clusters",
+      "version-2.3.2-admin-api-tenants",
+      "version-2.3.2-admin-api-brokers",
+      "version-2.3.2-admin-api-namespaces",
+      "version-2.3.2-admin-api-permissions",
+      "version-2.3.2-admin-api-persistent-topics",
+      "version-2.3.2-admin-api-non-persistent-topics",
+      "version-2.3.2-admin-api-partitioned-topics",
+      "version-2.3.2-admin-api-schemas"
+    ],
+    "Adaptors": [
+      "version-2.3.2-adaptors-kafka",
+      "version-2.3.2-adaptors-spark",
+      "version-2.3.2-adaptors-storm"
+    ],
+    "Cookbooks": [
+      "version-2.3.2-cookbooks-tiered-storage",
+      "version-2.3.2-cookbooks-compaction",
+      "version-2.3.2-cookbooks-deduplication",
+      "version-2.3.2-cookbooks-non-persistent",
+      "version-2.3.2-cookbooks-partitioned",
+      "version-2.3.2-cookbooks-retention-expiry",
+      "version-2.3.2-cookbooks-encryption",
+      "version-2.3.2-cookbooks-message-queue",
+      "version-2.3.2-cookbooks-bookkeepermetadata"
+    ],
+    "Development": [
+      "version-2.3.2-develop-tools",
+      "version-2.3.2-develop-binary-protocol",
+      "version-2.3.2-develop-schema",
+      "version-2.3.2-develop-load-manager",
+      "version-2.3.2-develop-cpp"
+    ],
+    "Reference": [
+      "version-2.3.2-reference-terminology",
+      "version-2.3.2-reference-cli-tools",
+      "version-2.3.2-pulsar-admin",
+      "version-2.3.2-reference-configuration"
+    ]
+  },
+  "version-2.3.2-docs-other": {
+    "First Category": [
+      "version-2.3.2-doc4",
+      "version-2.3.2-doc5"
+    ]
+  }
+}
diff --git a/site2/website/versions.json b/site2/website/versions.json
index 483f289..988a4e5 100644
--- a/site2/website/versions.json
+++ b/site2/website/versions.json
@@ -1,4 +1,5 @@
 [
+  "2.3.2",
   "2.3.1",
   "2.3.0",
   "2.2.1",