Posted to commits@pulsar.apache.org by li...@apache.org on 2021/12/06 02:42:33 UTC

[pulsar] branch master updated: [website][upgrade]feat: website upgrade / docs migration - 2.5.0 Kubernetes (Helm)/Deployment/Administration (#13099)

This is an automated email from the ASF dual-hosted git repository.

liuyu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new 8715ed2  [website][upgrade]feat: website upgrade / docs migration - 2.5.0 Kubernetes (Helm)/Deployment/Administration (#13099)
8715ed2 is described below

commit 8715ed29fb0e63d951910c349cdd0ca3c7db08ac
Author: Yan <ya...@streamnative.io>
AuthorDate: Mon Dec 6 10:41:36 2021 +0800

    [website][upgrade]feat: website upgrade / docs migration - 2.5.0 Kubernetes (Helm)/Deployment/Administration (#13099)
    
    * commit 2.5.0 chapter Get Started Concepts and Architecture Pulsar Schema
    
    * commit version 2.5.0 Pulsar Functions/Pulsar IO/Pulsar SQL
    
    * Kubernetes (Helm)/Deployment/Administration
    
    Co-authored-by: Anonymitaet <50...@users.noreply.github.com>
---
 .../version-2.5.0/administration-geo.md            | 213 +++++++++
 .../version-2.5.0/administration-load-balance.md   | 200 ++++++++
 .../version-2.5.0/administration-proxy.md          | 115 +++++
 .../version-2.5.0/administration-pulsar-manager.md | 205 ++++++++
 .../version-2.5.0/administration-stats.md          |  64 +++
 .../version-2.5.0/administration-upgrade.md        | 168 +++++++
 .../version-2.5.0/administration-zk-bk.md          | 349 ++++++++++++++
 .../versioned_docs/version-2.5.0/deploy-aws.md     | 268 +++++++++++
 .../deploy-bare-metal-multi-cluster.md             | 479 +++++++++++++++++++
 .../version-2.5.0/deploy-bare-metal.md             | 529 +++++++++++++++++++++
 .../versioned_docs/version-2.5.0/deploy-dcos.md    | 200 ++++++++
 .../version-2.5.0/deploy-kubernetes.md             |  12 +
 .../version-2.5.0/deploy-monitoring.md             | 103 ++++
 .../versioned_docs/version-2.5.0/helm-deploy.md    | 440 +++++++++++++++++
 .../versioned_docs/version-2.5.0/helm-install.md   |  40 ++
 .../versioned_docs/version-2.5.0/helm-overview.md  | 115 +++++
 .../versioned_docs/version-2.5.0/helm-prepare.md   |  85 ++++
 .../versioned_docs/version-2.5.0/helm-tools.md     |  43 ++
 .../versioned_docs/version-2.5.0/helm-upgrade.md   |  43 ++
 .../versioned_sidebars/version-2.5.0-sidebars.json |  94 ++++
 20 files changed, 3765 insertions(+)

diff --git a/site2/website-next/versioned_docs/version-2.5.0/administration-geo.md b/site2/website-next/versioned_docs/version-2.5.0/administration-geo.md
new file mode 100644
index 0000000..cb2ba73
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/administration-geo.md
@@ -0,0 +1,213 @@
+---
+id: administration-geo
+title: Pulsar geo-replication
+sidebar_label: "Geo-replication"
+original_id: administration-geo
+---
+
+*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+
+## How geo-replication works
+
+The diagram below illustrates the process of geo-replication across Pulsar clusters:
+
+![Replication Diagram](/assets/geo-replication.png)
+
+In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
+
+Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
+
+## Geo-replication and Pulsar properties
+
+You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
+
+Although geo-replication must be enabled between two clusters, it is actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
+
+* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
+* Configure that namespace to replicate across two or more provisioned clusters
+
+Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
+
+## Local persistence and forwarding
+
+When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the end-to-end delivery latency is defined by the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions.
+
+Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
+
+Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created; they can also be transferred between clusters after replicated subscriptions are enabled. Once replicated subscriptions are enabled, you can keep subscription state in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming from the failure point in a different cluster.
+
+In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
+
+All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
+
+## Configure replication
+
+As stated in [Geo-replication and Pulsar properties](#geo-replication-and-pulsar-properties) section, geo-replication in Pulsar is managed at the [tenant](reference-terminology.md#tenant) level.
+
+The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
+
+### Connect replication clusters
+
+To replicate data among clusters, you need to configure each cluster to connect to the other. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection.
+
+**Example**
+
+Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`.
+
+1. Configure the connection from `us-west` to `us-east`.
+
+   Run the following command on `us-west`.
+
+```shell
+
+$ bin/pulsar-admin clusters create \
+  --broker-url pulsar://<DNS-OF-US-EAST>:<PORT>	\
+  --url http://<DNS-OF-US-EAST>:<PORT> \
+  us-east
+
+```
+
+:::tip
+
+If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/).
+
+:::
+
+2. Configure the connection from `us-west` to `us-cent`.
+
+   Run the following command on `us-west`.
+
+```shell
+
+$ bin/pulsar-admin clusters create \
+  --broker-url pulsar://<DNS-OF-US-CENT>:<PORT>	\
+  --url http://<DNS-OF-US-CENT>:<PORT> \
+  us-cent
+
+```
+
+3. Run similar commands on `us-east` and `us-cent` to create connections among clusters.
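+
+   For example, a sketch of the corresponding commands to run on `us-east` (DNS names and ports are placeholders, as above):
+
+```shell
+
+$ bin/pulsar-admin clusters create \
+  --broker-url pulsar://<DNS-OF-US-WEST>:<PORT> \
+  --url http://<DNS-OF-US-WEST>:<PORT> \
+  us-west
+
+$ bin/pulsar-admin clusters create \
+  --broker-url pulsar://<DNS-OF-US-CENT>:<PORT> \
+  --url http://<DNS-OF-US-CENT>:<PORT> \
+  us-cent
+
+```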
+
+### Grant permissions to properties
+
+To replicate to a cluster, the tenant needs permission to use that cluster. You can grant permission to the tenant when you create the tenant, or grant it later.
+
+Specify all the intended clusters when you create a tenant:
+
+```shell
+
+$ bin/pulsar-admin tenants create my-tenant \
+  --admin-roles my-admin-role \
+  --allowed-clusters us-west,us-east,us-cent
+
+```
+
+To update permissions of an existing tenant, use `update` instead of `create`.
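+
+For example, a sketch of updating an existing tenant to allow an additional cluster (the tenant and role names follow the example above):
+
+```shell
+
+$ bin/pulsar-admin tenants update my-tenant \
+  --admin-roles my-admin-role \
+  --allowed-clusters us-west,us-east,us-cent
+
+```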
+
+### Enable geo-replication namespaces
+
+You can create a namespace with the following example command.
+
+```shell
+
+$ bin/pulsar-admin namespaces create my-tenant/my-namespace
+
+```
+
+Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand:
+
+```shell
+
+$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \
+  --clusters us-west,us-east,us-cent
+
+```
+
+You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes.
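+
+To check which clusters a namespace currently replicates to, you can use the `get-clusters` subcommand:
+
+```shell
+
+$ bin/pulsar-admin namespaces get-clusters my-tenant/my-namespace
+
+```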
+
+### Use topics with geo-replication
+
+Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace are replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster.
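+
+As a quick illustration, you can publish through the local cluster and read the replicated copy from another cluster with the `pulsar-client` tool; a sketch (broker addresses, topic, and subscription names are placeholders):
+
+```shell
+
+# Publish a message via the local us-west cluster.
+$ bin/pulsar-client --url pulsar://<DNS-OF-US-WEST>:6650 \
+  produce persistent://my-tenant/my-namespace/my-topic -m "hello-geo"
+
+# Consume the replicated message from the us-east cluster.
+$ bin/pulsar-client --url pulsar://<DNS-OF-US-EAST>:6650 \
+  consume persistent://my-tenant/my-namespace/my-topic -s my-subscription -n 1
+
+```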
+
+#### Selective replication
+
+By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list.
+
+The following is an example for the [Java API](client-libraries-java). Note the use of the `replicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object:
+
+```java
+
+List<String> restrictReplicationTo = Arrays.asList(
+        "us-west",
+        "us-east"
+);
+
+Producer producer = client.newProducer()
+        .topic("some-topic")
+        .create();
+
+producer.newMessage()
+        .value("my-payload".getBytes())
+        .replicationClusters(restrictReplicationTo)
+        .send();
+
+```
+
+#### Topic stats
+
+Topic-specific statistics for geo-replication topics are available via the [`pulsar-admin`](reference-pulsar-admin) tool and {@inject: rest:REST:/} API:
+
+```shell
+
+$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic
+
+```
+
+Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs.
+
+#### Delete a geo-replication topic
+
+Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection.
+
+In Pulsar, a topic is automatically deleted when the topic meets the following three conditions:
+- no producers or consumers are connected to it;
+- there are no subscriptions to it;
+- no more messages are kept for retention.
+
+For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe.
+
+You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker).
+
+To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic.
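+
+A sketch of the per-cluster cleanup, assuming a single subscription named `my-subscription` (run against each replication cluster's admin endpoint):
+
+```shell
+
+# List the remaining subscriptions on the topic.
+$ bin/pulsar-admin topics subscriptions persistent://my-tenant/my-namespace/my-topic
+
+# Delete each local subscription.
+$ bin/pulsar-admin topics unsubscribe --subscription my-subscription \
+  persistent://my-tenant/my-namespace/my-topic
+
+```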
+
+## Replicated subscriptions
+
+Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions.
+
+In case of failover, a consumer can restart consuming from the failure point in a different cluster. 
+
+### Enable replicated subscription
+
+Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer. 
+
+```java
+
+Consumer<String> consumer = client.newConsumer(Schema.STRING)
+            .topic("my-topic")
+            .subscriptionName("my-subscription")
+            .replicateSubscriptionState(true)
+            .subscribe();
+
+```
+
+### Advantages
+
+ * It is easy to implement the logic. 
+ * You can choose to enable or disable replicated subscription.
+ * When you enable it, the overhead is low, and it is easy to configure. 
+ * When you disable it, the overhead is zero.
+
+### Limitations
+
+When you enable replicated subscriptions, Pulsar creates consistent distributed snapshots to establish an association between message IDs from different clusters. The snapshots are taken periodically, with a default interval of `1 second`, which means that a consumer failing over to a different cluster can potentially receive up to 1 second of duplicates. You can also configure the snapshot frequency in the `broker.conf` file.
diff --git a/site2/website-next/versioned_docs/version-2.5.0/administration-load-balance.md b/site2/website-next/versioned_docs/version-2.5.0/administration-load-balance.md
new file mode 100644
index 0000000..3efba60
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/administration-load-balance.md
@@ -0,0 +1,200 @@
+---
+id: administration-load-balance
+title: Pulsar load balance
+sidebar_label: "Load balance"
+original_id: administration-load-balance
+---
+
+## Load balance across Pulsar brokers
+
+Pulsar is a horizontally scalable messaging system, so a core requirement is that the traffic
+in a logical cluster is spread across all the available Pulsar brokers as evenly as possible.
+
+You can use multiple settings and tools to control the traffic distribution, which requires a bit of context to understand how traffic is managed in Pulsar. In most cases, however, the core requirement mentioned above is met out of the box and you should not need to worry about it.
+
+## Pulsar load manager architecture
+
+The following part introduces the basic architecture of the Pulsar load manager.
+
+### Assign topics to brokers dynamically
+
+Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster.
+
+When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best suited broker to acquire ownership of these topics according to the load conditions. 
+
+In case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic.
+
+The assignment is "dynamic" because the assignment changes quickly. For example, if the broker owning the topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning the topic becomes overloaded. In this case, the topic is reassigned to a less loaded broker.
+
+The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage.
+
+#### Assignment granularity
+
+The assignment of topics or partitions to brokers is not done at the topic or partition level, but at the bundle level (a higher level). The reason is to amortize the amount of information that you need to keep track of. Based on CPU, memory, traffic load, and other indicators, topics are assigned to a particular broker dynamically.
+
+Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively this subset is a sharding mechanism.
+
+The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level.
+
+For assignment, a namespace is sharded into a list of "bundles", with each bundle comprising
+a portion of the overall hash range of the namespace.
+
+Topics are assigned to a particular bundle by taking the hash of the topic name and checking which
+bundle the hash falls into.
+
+Each bundle is independent of the others and thus is independently assigned to different brokers.
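+
+To see how a given namespace is currently sharded, you can list its bundle ranges with `pulsar-admin` (tenant and namespace names are placeholders):
+
+```shell
+
+$ bin/pulsar-admin namespaces bundles my-tenant/my-namespace
+
+```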
+
+### Create namespaces and bundles
+
+When you create a new namespace, it is set to use the default number of bundles. You can configure this default in `conf/broker.conf`:
+
+```properties
+
+# When a namespace is created without specifying the number of bundle, this
+# value will be used as the default
+defaultNumberOfNamespaceBundles=4
+
+```
+
+You can either change the system default, or override it when you create a new namespace:
+
+```shell
+
+$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16
+
+```
+
+With this command, you create a namespace with 16 initial bundles. Therefore, the topics for this namespace can immediately be spread across up to 16 brokers.
+
+In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution.
+
+On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers.
+
+### Unload topics and bundles
+
+You can "unload" a topic in Pulsar with an admin operation. Unloading means closing a topic,
+releasing ownership, and reassigning the topic to a new broker, based on current load.
+
+When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.
+
+Unloading is the mechanism that the load manager uses to perform load shedding, but you can also trigger the unloading manually, for example to correct the assignments and redistribute traffic even before any broker becomes overloaded.
+
+Unloading a topic has no effect on the assignment, but just closes and reopens the particular topic:
+
+```shell
+
+pulsar-admin topics unload persistent://tenant/namespace/topic
+
+```
+
+To unload all topics for a namespace and trigger reassignments:
+
+```shell
+
+pulsar-admin namespaces unload tenant/namespace
+
+```
+
+### Split namespace bundles 
+
+Since the load on the topics in a bundle might change over time, and predicting it upfront might be hard, brokers can split a bundle into two. The new, smaller bundles can then be reassigned to different brokers.
+
+The splitting happens based on some tunable thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default, the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution.
+
+```properties
+
+# enable/disable namespace bundle auto split
+loadBalancerAutoBundleSplitEnabled=true
+
+# enable/disable automatic unloading of split bundles
+loadBalancerAutoUnloadSplitBundlesEnabled=true
+
+# maximum topics in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxTopics=1000
+
+# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxSessions=1000
+
+# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxMsgRate=30000
+
+# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxBandwidthMbytes=100
+
+# maximum number of bundles in a namespace (for auto-split)
+loadBalancerNamespaceMaximumBundles=128
+
+```
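+
+You can also split a bundle manually; a sketch using `pulsar-admin` (the bundle range shown is an example, and `--unload` immediately offloads the newly split bundles):
+
+```shell
+
+$ bin/pulsar-admin namespaces split-bundle my-tenant/my-namespace \
+  --bundle 0x00000000_0x40000000 \
+  --unload
+
+```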
+
+### Shed load automatically
+
+The support for automatic load shedding is available in the load manager of Pulsar. This means that whenever the system recognizes a particular broker is overloaded, the system forces some traffic to be reassigned to less loaded brokers.
+
+When a broker is identified as overloaded, the broker is forced to "unload" a subset of the bundles, the
+ones with higher traffic, that account for the overload percentage.
+
+For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`.
+
+Given that the selection of bundles to offload is based on traffic (as a proxy measure for CPU, network,
+and memory usage), the broker unloads bundles that account for at least 15% of its traffic.
+
+The automatic load shedding is enabled by default and you can disable the automatic load shedding with this setting:
+
+```properties
+
+# Enable/disable automatic bundle unloading for load-shedding
+loadBalancerSheddingEnabled=true
+
+```
+
+Additional settings that apply to shedding:
+
+```properties
+
+# Load shedding interval. The broker periodically checks whether some traffic should be offloaded from
+# an over-loaded broker to other under-loaded brokers
+loadBalancerSheddingIntervalMinutes=1
+
+# Prevent the same topics from being shed and moved to other brokers more than once within this timeframe
+loadBalancerSheddingGracePeriodMinutes=30
+
+```
+
+#### Broker overload thresholds
+
+The determination of when a broker is overloaded is based on thresholds of CPU, network, and memory usage. Whenever any of those metrics reaches the threshold, the system triggers the shedding (if enabled).
+
+By default, the overload threshold is set at 85%:
+
+```properties
+
+# Usage threshold to determine a broker as over-loaded
+loadBalancerBrokerOverloadedThresholdPercentage=85
+
+```
+
+Pulsar gathers the usage stats from the system metrics.
+
+For network utilization, in some cases the network interface speed that Linux reports is
+not correct and needs to be manually overridden. This is the case for AWS EC2 instances with 1Gbps
+NIC speed for which the OS reports 10Gbps speed.
+
+Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down.
+
+You can use the following setting to correct the max NIC speed:
+
+```properties
+
+# Override the auto-detection of the network interfaces max speed.
+# This option is useful in some environments (eg: EC2 VMs) where the max speed
+# reported by Linux is not reflecting the real bandwidth available to the broker.
+# Since the network usage is employed by the load manager to decide when a broker
+# is overloaded, it is important to make sure the info is correct or override it
+# with the right value here. The configured value can be a double (eg: 0.8) and that
+# can be used to trigger load-shedding even before hitting on NIC limits.
+loadBalancerOverrideBrokerNicSpeedGbps=
+
+```
+
+When the value is empty, Pulsar uses the value that the OS reports.
+
diff --git a/site2/website-next/versioned_docs/version-2.5.0/administration-proxy.md b/site2/website-next/versioned_docs/version-2.5.0/administration-proxy.md
new file mode 100644
index 0000000..02359dc
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/administration-proxy.md
@@ -0,0 +1,115 @@
+---
+id: administration-proxy
+title: The Pulsar proxy
+sidebar_label: "Pulsar proxy"
+original_id: administration-proxy
+---
+
+The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) is an optional gateway that you can run in front of the brokers in a Pulsar cluster. You can run a Pulsar proxy in cases when direct connections between clients and Pulsar brokers are either infeasible, undesirable, or both, for example when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform.
+
+## Configure the proxy
+
+The proxy must have some way to find the addresses of the brokers of the cluster. You can do this by either configuring the proxy to connect directly to service discovery or by specifying a broker URL in the configuration. 
+
+### Option 1: Use service discovery
+
+Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
+
+```properties
+
+zookeeperServers=zk-0,zk-1,zk-2
+configurationStoreServers=zk-0:2184,zk-remote:2184
+
+```
+
+> If you use service discovery, the network ACL must allow the proxy to talk to the ZooKeeper nodes on the zookeeper client port, which is usually 2181, and on the configuration store client port, which is 2184 by default. Opening the network ACLs means that if someone compromises a proxy, they have full access to ZooKeeper. For this reason, using broker URLs to configure the proxy is more secure.
+
+### Option 2: Use broker URLs
+
+The more secure method of configuring the proxy is to specify a URL to connect to the brokers.
+
+> [Authorization](security-authorization#enable-authorization-and-assign-superusers) at the proxy requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you should disable proxy-level authorization. Brokers still authorize requests after the proxy forwards them.
+
+You can configure the broker URLs in `conf/proxy.conf` as follows.
+
+```properties
+
+brokerServiceURL=pulsar://brokers.example.com:6650
+brokerWebServiceURL=http://brokers.example.com:8080
+functionWorkerWebServiceURL=http://function-workers.example.com:8080
+
+```
+
+Or if you use TLS:
+
+```properties
+
+brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
+brokerWebServiceURLTLS=https://brokers.example.com:8443
+functionWorkerWebServiceURL=https://function-workers.example.com:8443
+
+```
+
+The hostname in the URLs provided should be a DNS entry that points to multiple brokers, or a virtual IP address that is backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.
+
+The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.
+
+Note that if you do not use functions, then you do not need to configure `functionWorkerWebServiceURL`.
+
+## Start the proxy
+
+To start the proxy:
+
+```bash
+
+$ cd /path/to/pulsar/directory
+$ bin/pulsar proxy
+
+```
+
+> You can run as many instances of the Pulsar proxy in a cluster as you want.
+
+
+## Stop the proxy
+
+The Pulsar proxy runs by default in the foreground. To stop the proxy, simply stop the process in which the proxy is running.
+
+## Proxy frontends
+
+You can run the Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
+
+## Use Pulsar clients with the proxy
+
+Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, then the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.
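+
+For example, a quick smoke test with the `pulsar-client` CLI against that address might look like the following (the topic name is a placeholder):
+
+```shell
+
+$ bin/pulsar-client --url pulsar://pulsar.cluster.default:6650 \
+  produce my-topic -m "hello-through-the-proxy"
+
+```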
+
+## Proxy configuration
+
+You can configure the Pulsar proxy using the [`proxy.conf`](reference-configuration.md#proxy) configuration file. The following parameters are available in that file:
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  The ZooKeeper quorum connection string (as a comma-separated list)  ||
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
+|servicePort| The port to use to serve binary Protobuf requests |6650|
+|servicePortTls|  The port to use to serve binary Protobuf TLS requests  |6651|
+|statusFilePath | Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks ||
+|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy  |false|
+|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true|
+|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
+|authorizationEnabled|  Whether authorization is enforced by the Pulsar proxy |false|
+|authorizationProvider| Authorization provider as a fully qualified class name  |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
+|brokerClientAuthenticationPlugin|  The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientAuthenticationParameters|  The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientTrustCertsFilePath|  The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
+|superUserRoles|  Role names that are treated as “super-users,” meaning that they are able to perform all admin operations ||
+|forwardAuthorizationCredentials| Whether client authorization credentials are forwarded to the broker for re-authorization. Authentication must be enabled via authenticationEnabled=true for this to take effect.  |false|
+|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy rejects requests beyond that. |10000|
+|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy errors out requests beyond that. |50000|
+|tlsEnabledInProxy| Whether TLS is enabled for the proxy  |false|
+|tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar brokers |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file ||
+|tlsHostnameVerificationEnabled|  Whether the hostname is validated when the proxy creates a TLS connection with brokers  |false|
+|tlsRequireTrustedClientCertOnConnect|  Whether client certificates are required for TLS. Connections are rejected if the client certificate is not trusted. |false|
diff --git a/site2/website-next/versioned_docs/version-2.5.0/administration-pulsar-manager.md b/site2/website-next/versioned_docs/version-2.5.0/administration-pulsar-manager.md
new file mode 100644
index 0000000..3e129ae
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/administration-pulsar-manager.md
@@ -0,0 +1,205 @@
+---
+id: administration-pulsar-manager
+title: Pulsar Manager
+sidebar_label: "Pulsar Manager"
+original_id: administration-pulsar-manager
+---
+
+Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments.
+
+:::note
+
+If you monitor your current stats with [Pulsar dashboard](administration-dashboard), you can try to use Pulsar Manager instead. Pulsar dashboard is deprecated.
+
+:::
+
+## Install
+
+The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.
+
+```shell
+
+docker pull apachepulsar/pulsar-manager:v0.2.0
+docker run -it \
+    -p 9527:9527 -p 7750:7750 \
+    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
+    apachepulsar/pulsar-manager:v0.2.0
+
+```
+
+* `SPRING_CONFIGURATION_FILE`: Default configuration file for spring.
+
+### Set administrator account and password
+
+ ```shell
+ 
+ CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
+ curl \
+     -H "X-XSRF-TOKEN: $CSRF_TOKEN" \
+     -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
+     -H "Content-Type: application/json" \
+     -X PUT http://localhost:7750/pulsar-manager/users/superuser \
+     -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'
+ 
+ ```
+
+You can find the Dockerfile in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory of the pulsar-manager repository and build an image from the source code as well:
+
+```
+
+git clone https://github.com/apache/pulsar-manager
+cd pulsar-manager/front-end
+npm install --save
+npm run build:prod
+cd ..
+./gradlew build -x test
+cd ..
+docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=latest --build-arg VERSION=latest -t apachepulsar/pulsar-manager .
+
+```
+
+### Use custom databases
+
+If you have a large amount of data, you can use a custom database. The following is an example using PostgreSQL.
+
+1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
+
+2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add PostgreSQL configuration.
+
+```
+
+spring.datasource.driver-class-name=org.postgresql.Driver
+spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
+spring.datasource.username=postgres
+spring.datasource.password=postgres
+
+```
+
+3. Compile to generate a new executable jar package.
+
+```
+
+./gradlew build -x test
+
+```
+
+### Enable JWT authentication
+
+If you want to turn on JWT authentication, configure the following parameters:
+
+* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
+* `jwt.broker.token.mode`: the mode for generating tokens; one of PUBLIC, PRIVATE, or SECRET.
+* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
+* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
+* `jwt.broker.secret.key`: configure this option if you use the SECRET mode.
+
+For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/).
+
+
+If you want to enable JWT authentication, use one of the following methods.
+
+
+* Method 1: use command-line tool
+
+```
+
+wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/apache-pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
+tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
+cd pulsar-manager
+tar -zxvf pulsar-manager.tar
+cd pulsar-manager
+cp -r ../dist ui
+./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 --insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key
+
+```
+
+Firstly, [set the administrator account and password](#set-administrator-account-and-password).
+
+Secondly, log in to Pulsar Manager through http://localhost:7750/ui/index.html.
+
+* Method 2: configure the application.properties file
+
+```
+
+backend.jwt.token=token
+
+jwt.broker.token.mode=PRIVATE
+jwt.broker.public.key=file:///path/broker-public.key
+jwt.broker.private.key=file:///path/broker-private.key
+
+# Alternatively, use SECRET mode:
+jwt.broker.token.mode=SECRET
+jwt.broker.secret.key=file:///path/broker-secret.key
+
+```
+
+* Method 3: use Docker and enable token authentication.
+
+```
+
+export JWT_TOKEN="your-token"
+docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh
+
+```
+
+* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command.
+* `REDIRECT_HOST`: the IP address of the front-end server.
+* `REDIRECT_PORT`: the port of the front-end server.
+* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
+* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
+* `USERNAME`: the username of PostgreSQL.
+* `PASSWORD`: the password of PostgreSQL.
+* `LOG_LEVEL`: the level of log.
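+
+As a hypothetical illustration of how the `JWT_TOKEN` above can be generated with a secret key (file paths and the subject role are placeholders):
+
+```shell
+
+# Generate a secret key, then create a token for the superuser role.
+$ bin/pulsar tokens create-secret-key --output /path/my-secret.key
+$ bin/pulsar tokens create --secret-key file:///path/my-secret.key --subject admin
+
+```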
+
+* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key.
+
+```
+
+export JWT_TOKEN="your-token"
+export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key"
+export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key"
+docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
+
+```
+
+* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command.
+* `PRIVATE_KEY`: private key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command.
+* `PUBLIC_KEY`: public key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command.
+* `$PWD/secret`: the folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed locally
+* `REDIRECT_HOST`: the IP address of the front-end server.
+* `REDIRECT_PORT`: the port of the front-end server.
+* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
+* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
+* `USERNAME`: the username of PostgreSQL.
+* `PASSWORD`: the password of PostgreSQL.
+* `LOG_LEVEL`: the level of log.
+
+* Method 5: use Docker and turn on **token authentication** and **token management** by secret key.
+
+```
+
+export JWT_TOKEN="your-token"
+export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key"
+docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
+
+```
+
+* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command.
+* `SECRET_KEY`: secret key path mounted in container, generated by `bin/pulsar tokens create-secret-key` command.
+* `$PWD/secret`: the folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command is placed locally.
+* `REDIRECT_HOST`: the IP address of the front-end server.
+* `REDIRECT_PORT`: the port of the front-end server.
+* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
+* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically starts a local instance of the PostgreSQL database.
+* `USERNAME`: the username of PostgreSQL.
+* `PASSWORD`: the password of PostgreSQL.
+* `LOG_LEVEL`: the level of log.
+
+* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README).
+* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
+
+## Log in
+
+[Set the administrator account and password](#set-administrator-account-and-password).
+
+Visit http://localhost:9527 to log in.
diff --git a/site2/website-next/versioned_docs/version-2.5.0/administration-stats.md b/site2/website-next/versioned_docs/version-2.5.0/administration-stats.md
new file mode 100644
index 0000000..ac0c036
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/administration-stats.md
@@ -0,0 +1,64 @@
+---
+id: administration-stats
+title: Pulsar stats
+sidebar_label: "Pulsar statistics"
+original_id: administration-stats
+---
+
+## Partitioned topics
+
+|Stat|Description|
+|---|---|
+|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
+|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
+|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
+|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
+|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.|
+|storageSize| The sum of storage size of the ledgers for this topic.|
+|publishers| The list of all local publishers into the topic. Publishers can be anywhere from zero to thousands.|
+|producerId| Internal identifier for this producer on this topic.|
+|producerName|  Internal identifier for this producer, generated by the client library.|
+|address| IP address and source port for the connection of this producer.|
+|connectedSince| Timestamp when this producer was created or last reconnected.|
+|subscriptions| The list of all local subscriptions to the topic.|
+|my-subscription| The name of this subscription (client defined).|
+|msgBacklog| The count of messages in backlog for this subscription.|
+|type| This subscription type.|
+|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
+|consumers| The list of connected consumers for this subscription.|
+|consumerName| Internal identifier for this consumer, generated by the client library.|
+|availablePermits| The number of messages this consumer has space for in the listen queue of client library. A value of 0 means the queue of client library is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
+|replication| This section gives the stats for cross-colo replication of this topic.|
+|replicationBacklog| The outbound replication backlog in messages.|
+|connected| Whether the outbound replicator is connected.|
+|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
+|inboundConnection| The IP and port of the broker in the publisher connection of remote cluster to this broker. |
+|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|
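+
+You can retrieve these stats for a partitioned topic with `pulsar-admin`; a sketch (the topic name is a placeholder):
+
+```shell
+
+$ bin/pulsar-admin topics partitioned-stats --per-partition \
+  persistent://my-tenant/my-namespace/my-topic
+
+```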
+
+
+## Topics
+
+|Stat|Description|
+|---|---|
+|entriesAddedCounter| Messages published since this broker loads this topic.|
+|numberOfEntries| Total number of messages being tracked.|
+|totalSize| Total storage size in bytes of all messages.|
+|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
+|currentLedgerSize| Size in bytes of messages written to ledger currently open for writing.|
+|lastLedgerCreatedTimestamp| The time when the last ledger was created.|
+|lastLedgerCreationFailureTimestamp| The time when the last ledger creation failed.|
+|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
+|pendingAddEntriesCount| How many messages have (asynchronous) write requests that are pending completion.|
+|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is opened or is being currently opened but has no entries written yet.|
+|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
+|ledgers| The ordered list of all ledgers for this topic holding its messages.|
+|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
+|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
+|readPosition| The latest position of subscriber for reading message.|
+|waitingReadOp| This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
+|pendingReadOps| The counter of outstanding read requests to the bookies that are currently in progress.|
+|messagesConsumedCounter| Number of messages this cursor acks since this broker loads this topic.|
+|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
+|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
+|individuallyDeletedMessages| If Acks are done out of order, shows the ranges of messages Acked between the markDeletePosition and the read-position.|
+|lastLedgerSwitchTimestamp| The last time the cursor ledger is rolled over.|
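+
+Most of these fields are internal, ledger-level stats; a sketch of retrieving them with `pulsar-admin` (the topic name is a placeholder):
+
+```shell
+
+$ bin/pulsar-admin topics stats-internal persistent://my-tenant/my-namespace/my-topic
+
+```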
diff --git a/site2/website-next/versioned_docs/version-2.5.0/administration-upgrade.md b/site2/website-next/versioned_docs/version-2.5.0/administration-upgrade.md
new file mode 100644
index 0000000..72d136b
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/administration-upgrade.md
@@ -0,0 +1,168 @@
+---
+id: administration-upgrade
+title: Upgrade Guide
+sidebar_label: "Upgrade"
+original_id: administration-upgrade
+---
+
+## Upgrade guidelines
+
+Apache Pulsar is comprised of multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have a special requirement. While you upgrade, you need to pay attention to bookies (stateful), and to brokers and proxies (stateless).
+
+The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.
+
+- Backup all your configuration files before upgrading.
+- Read the guide entirely, make a plan, and then execute the plan. When you make the upgrade plan, you need to take your specific requirements and environment into consideration.
+- Pay attention to the upgrading order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients. 
+- If `autorecovery` is enabled, you need to disable `autorecovery` in the upgrade process, and re-enable it after completing the process.
+- Read the release notes carefully for each release. Release notes describe features and configuration changes that might impact your upgrade.
+- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, let them run for a while to ensure that they work correctly.
+- If your cluster runs in multi-cluster replicated mode, upgrade one data center to verify the new version before upgrading all data centers.
+
+> Note: Currently, Apache Pulsar is compatible between versions. 
+
+## Upgrade sequence
+
+To upgrade an Apache Pulsar cluster, follow the upgrade sequence.
+
+1. Upgrade ZooKeeper (optional)  
+- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.  
+- Rolling upgrade: rollout the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
+2. Upgrade bookies  
+- Canary test: test an upgraded version in one or a small set of bookies.
+- Rolling upgrade:  
+  - a. Disable `autorecovery` with the following command.
+
+     ```shell
+     
+     bin/bookkeeper shell autorecovery -disable
+     
+     ```
+
+  
+  - b. Rollout the upgraded version to all bookies in the cluster after you determine that a version is safe after canary.  
+  - c. After you upgrade all bookies, re-enable `autorecovery` with the following command.
+
+     ```shell
+     
+     bin/bookkeeper shell autorecovery -enable
+     
+     ```
+
+3. Upgrade brokers
+- Canary test: test an upgraded version in one or a small set of brokers.
+- Rolling upgrade: rollout the upgraded version to all brokers in the cluster after you determine that a version is safe after canary.
+4. Upgrade proxies
+- Canary test: test an upgraded version in one or a small set of proxies.
+- Rolling upgrade: rollout the upgraded version to all proxies in the cluster after you determine that a version is safe after canary.
+
+## Upgrade ZooKeeper (optional)
+While you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.
+
+### Canary test
+
+You can test an upgraded version in one of ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.
+
+To upgrade ZooKeeper server to a new version, complete the following steps:
+
+1. Stop a ZooKeeper server.
+2. Upgrade the binary and configuration files.
+3. Start the ZooKeeper server with the new binary files.
+4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify if it works as expected.
+5. Run the ZooKeeper server for a few days, observe and make sure the ZooKeeper cluster runs well.
+
+#### Canary rollback
+
+If issues occur during canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart the ZooKeeper with the reverted binary.
+
+### Upgrade all ZooKeeper servers
+
+After you canary test the upgrade on one ZooKeeper server in your cluster, you can upgrade all ZooKeeper servers in your cluster.
+
+You can upgrade all ZooKeeper servers one by one by following steps in canary test.
+
+## Upgrade bookies
+
+While you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
+For more details, you can read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).
+
+### Canary test
+
+You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.
+
+To upgrade a bookie to a new version, complete the following steps:
+
+1. Stop a bookie.
+2. Upgrade the binary and configuration files.
+3. Start the bookie in `ReadOnly` mode to verify if the bookie of this new version runs well for read workload.
+
+   ```shell
+   
+   bin/pulsar bookie --readOnly
+   
+   ```
+
+4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.
+
+   ```shell
+   
+   bin/pulsar bookie
+   
+   ```
+
+5. Observe and make sure the cluster serves both write and read traffic.
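+
+   One possible quick check on the upgraded bookie is the BookKeeper sanity test, which creates a test ledger and writes and reads entries on the local bookie:
+
+   ```shell
+   
+   bin/bookkeeper shell bookiesanity
+   
+   ```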
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie node through autorecovery.
+
+### Upgrade all bookies
+
+After you canary test the upgrade on some bookies in your cluster, you can upgrade all bookies in your cluster.
+
+Before upgrading, you have to decide whether to upgrade the whole cluster at once (a downtime upgrade) or one node at a time (a rolling upgrade).
+
+In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.
+
+While you upgrade in both scenarios, the procedure is the same for each bookie.
+
+1. Stop the bookie. 
+2. Upgrade the software (either new binary or new configuration files).
+3. Start the bookie.
+
+> **Advanced operations**   
+> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.
+
+## Upgrade brokers and proxies
+
+The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.
+
+### Canary test
+
+You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.
+
+To upgrade to a new version, complete the following steps:
+
+1. Stop a broker (or proxy).
+2. Upgrade the binary and configuration file.
+3. Start a broker (or proxy).
+
+#### Canary rollback
+
+If issues occur during canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy).
+
+### Upgrade all brokers or proxies
+
+After you canary test the upgrade on some brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster.
+
+Before upgrading, you have to decide whether to upgrade the whole cluster at once (a downtime upgrade) or one node at a time (a rolling upgrade).
+
+In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during upgrade.
+
+In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.
+
+While you upgrade in both scenarios, the procedure is the same for each broker or proxy.
+
+1. Stop the broker or proxy. 
+2. Upgrade the software (either new binary or new configuration files).
+3. Start the broker or proxy.
diff --git a/site2/website-next/versioned_docs/version-2.5.0/administration-zk-bk.md b/site2/website-next/versioned_docs/version-2.5.0/administration-zk-bk.md
new file mode 100644
index 0000000..512956f
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/administration-zk-bk.md
@@ -0,0 +1,349 @@
+---
+id: administration-zk-bk
+title: ZooKeeper and BookKeeper administration
+sidebar_label: "ZooKeeper and BookKeeper"
+original_id: administration-zk-bk
+---
+
+Pulsar relies on two external systems for essential tasks:
+
+* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
+* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.
+
+ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.
+
+> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar.
+
+
+## ZooKeeper
+
+Each Pulsar instance relies on two separate ZooKeeper quorums.
+
+* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
+* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. 
+
+### Deploy local ZooKeeper
+
+ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.
+
+To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*.
+
+To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:
+
+```properties
+
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+
+```
+
+On each host, you need to specify the node ID in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this:
+
+```shell
+
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+
+```
+
+On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on.
+
+Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+
+$ bin/pulsar-daemon start zookeeper
+
+```
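+
+To confirm that the quorum has formed, you can query each server with ZooKeeper's `srvr` four-letter-word command. This is an optional sanity check and assumes the `nc` utility is available and four-letter-word commands are enabled on your ZooKeeper servers:
+
+```shell
+
+# One server should report "Mode: leader" and the other two "Mode: follower"
+$ echo srvr | nc zk1.us-west.example.com 2181 | grep Mode
+$ echo srvr | nc zk2.us-west.example.com 2181 | grep Mode
+$ echo srvr | nc zk3.us-west.example.com 2181 | grep Mode
+
+```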
+
+### Deploy configuration store
+
+The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
+
+If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks.
+
+#### Single-cluster Pulsar instance
+
+If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.
+
+To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
+
+```properties
+
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+
+```
+
+As before, create the `myid` file for each server in `data/global-zookeeper/myid`.
+
+#### Multi-cluster Pulsar instance
+
+When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
+
+The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.
+
+Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
+
+For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:
+
+```
+
+zk[1-3].${CLUSTER}.example.com
+
+```
+
+In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
+
+This guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
+
+The ZK configuration on all the servers looks like this:
+
+```properties
+
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+server.4=zk1.us-central.example.com:2185:2186
+server.5=zk2.us-central.example.com:2185:2186
+server.6=zk3.us-central.example.com:2185:2186:observer
+server.7=zk1.us-east.example.com:2185:2186
+server.8=zk2.us-east.example.com:2185:2186
+server.9=zk3.us-east.example.com:2185:2186:observer
+server.10=zk1.eu-central.example.com:2185:2186:observer
+server.11=zk2.eu-central.example.com:2185:2186:observer
+server.12=zk3.eu-central.example.com:2185:2186:observer
+server.13=zk1.ap-south.example.com:2185:2186:observer
+server.14=zk2.ap-south.example.com:2185:2186:observer
+server.15=zk3.ap-south.example.com:2185:2186:observer
+
+```
+
+Additionally, ZK observers need to have:
+
+```properties
+
+peerType=observer
+
+```
+
+##### Start the service
+
+Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
+
+```shell
+
+$ bin/pulsar-daemon start configuration-store
+
+```
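+
+As an optional check that the configuration store is serving on its own client port (2184 in the examples above), you can reuse the same four-letter-word approach, assuming `nc` is available and the command is enabled:
+
+```shell
+
+# Quorum participants report leader/follower; observer nodes report "Mode: observer"
+$ echo srvr | nc zk1.us-west.example.com 2184 | grep Mode
+
+```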
+
+### ZooKeeper configuration
+
+In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
+
+#### Local ZooKeeper
+
+The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
+
+|Name|Description|Default|
+|---|---|---|
+|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
+|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
+|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
+|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
+|clientPort|  The port on which the ZooKeeper server listens for connections. |2181|
+|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
+|autopurge.purgeInterval| The time interval, in hours, that triggers the ZooKeeper database purge task. Setting this to a non-zero number enables auto purge; setting it to 0 disables auto purge. Read the ZooKeeper documentation on maintenance before enabling auto purge. |1|
+|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
+
+
+#### Configuration Store
+
+The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for the configuration store. The configuration store accepts the same ZooKeeper parameters described in the table above for local ZooKeeper; in practice it differs mainly in using its own `clientPort` (2184 in the examples in this guide) and its own data directory (`data/global-zookeeper`).
+
+
+## BookKeeper
+
+BookKeeper is responsible for all durable message storage in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs, called ledgers. Individual BookKeeper servers are also called *bookies*.
+
+> For a guide to managing message persistence, retention, and expiry in Pulsar, see [this cookbook](cookbooks-retention-expiry).
+
+### Deploy BookKeeper
+
+BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
+
+Each Pulsar cluster needs its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
+
+### Configure bookies
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for local ZooKeeper of the Pulsar cluster.
+
+### Start up bookies
+
+You can start up a bookie in two ways: in the foreground or as a background daemon.
+
+To start up a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:
+
+```bash
+
+$ bin/bookkeeper bookie
+
+```
+
+To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+
+$ bin/pulsar-daemon start bookie
+
+```
+
+You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
+
+```shell
+
+$ bin/bookkeeper shell bookiesanity
+
+```
+
+This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger.
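+
+Once several bookies are up, you can also check which bookies have registered themselves in ZooKeeper. The following is a sketch that assumes the `listbookies` command of the BookKeeper shell is available in your release:
+
+```shell
+
+# List the writable bookies currently registered in the cluster
+$ bin/bookkeeper shell listbookies -rw
+
+```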
+
+### Hardware considerations
+
+Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, a suitable hardware configuration is essential. There are two key dimensions of bookie hardware capacity:
+
+* Disk I/O capacity read/write
+* Storage capacity
+
+Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:
+
+* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms (a rough way to check this is sketched after this list).
+* A **ledger storage device** is where data is stored until all consumers have acknowledged the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when a consumer drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
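+
+As a rough way to check the sync-write latency of a candidate journal device, you can time a series of small synchronous writes with `dd`. This is only a coarse, illustrative check (use a dedicated benchmark for real sizing) and assumes the journal disk is mounted at `/mnt/journal`:
+
+```shell
+
+# Write 1,000 4 KB blocks, syncing each one to disk, then remove the test file
+$ dd if=/dev/zero of=/mnt/journal/fsync-test bs=4k count=1000 oflag=dsync
+$ rm /mnt/journal/fsync-test
+
+```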
+
+
+
+### Configure BookKeeper
+
+You can find the configurable parameters for BookKeeper bookies in the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) file.
+
+Minimum configuration changes required in `conf/bookkeeper.conf` are:
+
+```properties
+
+# Change to point to journal disk mount point
+journalDirectory=data/bookkeeper/journal
+
+# Point to ledger storage disk mount point
+ledgerDirectories=data/bookkeeper/ledgers
+
+# Point to local ZK quorum
+zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+
+```
+
+To change the ZooKeeper root path that BookKeeper uses, set `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of appending the prefix to the connection string as in `zkServers=localhost:2181/MY-PREFIX`.
+
+> Consult the official [BookKeeper docs](http://bookkeeper.apache.org) for more information about BookKeeper.
+
+## BookKeeper persistence policies
+
+In Pulsar, you can set *persistence policies*, at the namespace level, that determine how BookKeeper handles persistent storage of messages. Policies determine four things:
+
+* The number of acks (guaranteed copies) to wait for each ledger entry.
+* The number of bookies to use for a topic.
+* The number of writes to make for each ledger entry.
+* The throttling rate for mark-delete operations.
+
+### Set persistence policies
+
+You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.
+
+#### Pulsar-admin
+
+Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:
+
+Flag | Description | Default
+:----|:------------|:-------
+`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
+`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
+`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
+`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
+
+The following is an example:
+
+```shell
+
+$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
+  --bookkeeper-ensemble 3 \
+  --bookkeeper-write-quorum 2 \
+  --bookkeeper-ack-quorum 2
+
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}
+
+#### Java
+
+```java
+
+int bkEnsemble = 3;
+int bkWriteQuorum = 2;
+int bkAckQuorum = 2;
+double markDeleteRate = 0.7;
+PersistencePolicies policies =
+  new PersistencePolicies(bkEnsemble, bkWriteQuorum, bkAckQuorum, markDeleteRate);
+admin.namespaces().setPersistence(namespace, policies);
+
+```
+
+### List persistence policies
+
+You can see which persistence policy currently applies to a namespace.
+
+#### Pulsar-admin
+
+Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.
+
+The following is an example:
+
+```shell
+
+$ pulsar-admin namespaces get-persistence my-tenant/my-ns
+{
+  "bookkeeperEnsemble": 1,
+  "bookkeeperWriteQuorum": 1,
+  "bookkeeperAckQuorum", 1,
+  "managedLedgerMaxMarkDeleteRate": 0
+}
+
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}
+
+#### Java
+
+```java
+
+PersistencePolicies policies = admin.namespaces().getPersistence(namespace);
+
+```
+
+## How Pulsar uses ZooKeeper and BookKeeper
+
+This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:
+
+![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)
+
+Each Pulsar cluster consists of one or more message brokers. Each broker relies on an ensemble of bookies.
diff --git a/site2/website-next/versioned_docs/version-2.5.0/deploy-aws.md b/site2/website-next/versioned_docs/version-2.5.0/deploy-aws.md
new file mode 100644
index 0000000..4845db0
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/deploy-aws.md
@@ -0,0 +1,268 @@
+---
+id: deploy-aws
+title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
+sidebar_label: "Amazon Web Services"
+original_id: deploy-aws
+---
+
+> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster).
+
+One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.
+
+## Requirements and setup
+
+In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following things:
+
+* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
+* Python and [pip](https://pip.pypa.io/en/stable/)
+* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts
+
+You also need to make sure that you are currently logged into your AWS account via the `aws` tool:
+
+```bash
+
+$ aws configure
+
+```
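+
+If you want to confirm that your credentials are in place before running Terraform, you can ask AWS which identity the CLI is using (any call that requires valid credentials serves this purpose):
+
+```bash
+
+$ aws sts get-caller-identity
+
+```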
+
+## Installation
+
+You can install Ansible on Linux or macOS using pip.
+
+```bash
+
+$ pip install ansible
+
+```
+
+You can install Terraform using the instructions [here](https://www.terraform.io/intro/getting-started/install.html).
+
+You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine. You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands:
+
+```bash
+
+$ git clone https://github.com/apache/pulsar
+$ cd pulsar/deployment/terraform-ansible/aws
+
+```
+
+## SSH setup
+
+> If you already have an SSH key and want to use it, you can skip the step of generating an SSH key and update the `private_key_file` setting
+> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
+>
+> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
+> follow the steps below:
+>
+> 1. Update `ansible.cfg` with the following value:
+>
+> ```shell
+> 
+> private_key_file=~/.ssh/pulsar_aws
+> 
+> ```
+>
+> 2. Update `terraform.tfvars` with the following value:
+>
+> ```shell
+> 
+> public_key_path=~/.ssh/pulsar_aws.pub
+> 
+> ```
+
+In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:
+
+```bash
+
+$ ssh-keygen -t rsa
+
+```
+
+Do *not* enter a passphrase (hit **Enter** when the prompt appears). Enter the following command to verify that a key has been created:
+
+```bash
+
+$ ls ~/.ssh
+id_rsa               id_rsa.pub
+
+```
+
+## Create AWS resources using Terraform
+
+To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command:
+
+```bash
+
+$ terraform init
+# This will create a .terraform folder
+
+```
+
+After that, you can apply the default Terraform configuration by entering this command:
+
+```bash
+
+$ terraform apply
+
+```
+
+Then you see this prompt below:
+
+```bash
+
+Do you want to perform these actions?
+  Terraform will perform the actions described above.
+  Only 'yes' will be accepted to approve.
+
+  Enter a value:
+
+```
+
+Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the apply finishes, you see `Apply complete!` along with some other information, including the number of resources created.
+
+### Apply a non-default configuration
+
+You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available:
+
+Variable name | Description | Default
+:-------------|:------------|:-------
+`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub`
+`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
+`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a`
+`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses  | `ami-9fa343e7`
+`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
+`num_bookie_nodes` | The number of bookies that run in the cluster | 3
+`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
+`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
+`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
+`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)
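+
+For example, a minimal `terraform.tfvars` that overrides a few of these defaults might look like the following. This is only a sketch: keep any other variables your deployment needs, and adjust the values to your environment:
+
+```bash
+
+$ cat > terraform.tfvars <<'EOF'
+public_key_path   = "~/.ssh/id_rsa.pub"
+region            = "us-west-2"
+availability_zone = "us-west-2a"
+num_broker_nodes  = 3
+EOF
+
+```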
+
+### What is installed
+
+When you run the Ansible playbook, the following AWS resources are used:
+
+* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
+  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
+  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
+  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
+  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) (a [c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
+* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
+* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
+* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
+* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
+* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC
+
+All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.
+
+### Fetch your Pulsar connection URL
+
+When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:
+
+```
+
+pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650
+
+```
+
+You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):
+
+```bash
+
+$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value
+
+```
+
+### Destroy your cluster
+
+At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:
+
+```bash
+
+$ terraform destroy
+
+```
+
+## Setup Disks
+
+Before you run the Pulsar playbook, you need to mount the disks to the correct directories on the bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.
+
+To setup disks on bookie nodes, enter this command:
+
+```bash
+
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  setup-disk.yaml
+
+```
+
+After that, the disks are mounted under `/mnt/journal` as the journal disk and `/mnt/storage` as the ledger disk.
+Remember to enter this command only once. If you enter this command again after you have run the Pulsar playbook, your disks might be erased, causing the bookies to fail to start up.
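+
+As an optional check, you can log in to a bookie node and confirm that both mounts are in place before running the Pulsar playbook (the mount points below are the ones described above; adjust if yours differ):
+
+```bash
+
+$ df -h /mnt/journal /mnt/storage
+
+```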
+
+## Run the Pulsar playbook
+
+Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible. To do so, enter this command:
+
+```bash
+
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  ../deploy-pulsar.yaml
+
+```
+
+If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command:
+
+```bash
+
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  --private-key="~/.ssh/some-non-default-key" \
+  ../deploy-pulsar.yaml
+
+```
+
+## Access the cluster
+
+You can now access your running Pulsar cluster using the unique Pulsar connection URL for your cluster, which you can obtain by following the instructions [above](#fetch-your-pulsar-connection-url).
+
+For a quick demonstration of accessing the cluster, you can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:
+
+```bash
+
+$ pip install pulsar-client
+
+```
+
+Now, open up the Python shell using the `python` command:
+
+```bash
+
+$ python
+
+```
+
+Once you are in the shell, enter the following command:
+
+```python
+
+>>> import pulsar
+>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
+# Make sure to use your connection URL
+>>> producer = client.create_producer('persistent://public/default/test-topic')
+>>> producer.send(('Hello world').encode('utf-8'))
+>>> client.close()
+
+```
+
+If all of these commands are successful, Pulsar clients can now use your cluster!
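+
+If you prefer the bundled CLI tools over Python for a smoke test, you can run a similar check with `pulsar-client` from a Pulsar binary directory. This sketch assumes that `conf/client.conf` has been pointed at your cluster's connection URL:
+
+```bash
+
+# In one terminal: wait for a single message on the test topic
+$ bin/pulsar-client consume persistent://public/default/test-topic -s smoke-test -n 1
+
+# In another terminal: publish a message to the same topic
+$ bin/pulsar-client produce persistent://public/default/test-topic -m 'hello from the CLI'
+
+```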
+
diff --git a/site2/website-next/versioned_docs/version-2.5.0/deploy-bare-metal-multi-cluster.md b/site2/website-next/versioned_docs/version-2.5.0/deploy-bare-metal-multi-cluster.md
new file mode 100644
index 0000000..2665fc9
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/deploy-bare-metal-multi-cluster.md
@@ -0,0 +1,479 @@
+---
+id: deploy-bare-metal-multi-cluster
+title: Deploying a multi-cluster on bare metal
+sidebar_label: "Bare metal multi-cluster"
+original_id: deploy-bare-metal-multi-cluster
+---
+
+:::tip
+
+1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are experimenting with
+Pulsar or using it in a startup or on a single team, a single cluster is the better option. For instructions on deploying a single cluster,
+see the guide [here](deploy-bare-metal).
+2. If you want to use all builtin [Pulsar IO](io-overview) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
+package and install it under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you
+run a separate cluster of function workers for [Pulsar Functions](functions-overview).
+3. If you want to use the [Tiered Storage](concepts-tiered-storage) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
+package and install it under the `offloaders` directory in the pulsar directory on every broker node. For details on how to configure
+this feature, see the [Tiered storage cookbook](cookbooks-tiered-storage).
+
+:::
+
+A Pulsar *instance* consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo). Deploying a multi-cluster Pulsar instance involves the following basic steps:
+
+* Deploying two separate [ZooKeeper](#deploy-zookeeper) quorums: a [local](#deploy-local-zookeeper) quorum for each cluster in the instance and a [configuration store](#configuration-store) quorum for instance-wide tasks
+* Initializing [cluster metadata](#cluster-metadata-initialization) for each cluster
+* Deploying a [BookKeeper cluster](#deploy-bookkeeper) of bookies in each Pulsar cluster
+* Deploying [brokers](#deploy-brokers) in each Pulsar cluster
+
+If you want to deploy a single Pulsar cluster, see [Clusters and Brokers](getting-started-standalone.md#start-the-cluster).
+
+> #### Run Pulsar locally or on Kubernetes?
+> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes) guide.
+
+## System requirement
+Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
+
+## Install Pulsar
+
+To get started running Pulsar, download a binary tarball release in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz
+  
+  ```
+
+Once you download the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+
+$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
+$ cd apache-pulsar-@pulsar:version@
+
+```
+
+## What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview)
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses 
+`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase
+
+The following directories are created once you begin running Pulsar:
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory that ZooKeeper and BookKeeper use
+`instances` | Artifacts created for [Pulsar Functions](functions-overview)
+`logs` | Logs that the installation creates
+
+
+## Deploy ZooKeeper
+
+Each Pulsar instance relies on two separate ZooKeeper quorums.
+
+* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
+* [Configuration Store](#deploy-the-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum.
+
+
+### Deploy local ZooKeeper
+
+ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.
+
+You need to stand up one local ZooKeeper cluster *per Pulsar cluster* for deploying a Pulsar instance. 
+
+To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:
+
+```properties
+
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+
+```
+
+On each host, you need to specify the ID of the node in the `myid` file of that node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
+
+```shell
+
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+
+```
+
+On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.
+
+Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+
+$ bin/pulsar-daemon start zookeeper
+
+```
+
+### Deploy the configuration store 
+
+The ZooKeeper cluster that is configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
+
+If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you should stand up a separate ZooKeeper cluster for configuration tasks.
+
+#### Single-cluster Pulsar instance
+
+If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.
+
+To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
+
+```properties
+
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+
+```
+
+As before, create the `myid` file for each server in `data/global-zookeeper/myid`.
+
+#### Multi-cluster Pulsar instance
+
+When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
+
+The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.
+
+Again, given the very low expected load on the configuration store servers, you can
+share the same hosts used for the local ZooKeeper quorum.
+
+For example, assume a Pulsar instance with the following clusters: `us-west`,
+`us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:
+
+```
+
+zk[1-3].${CLUSTER}.example.com
+
+```
+
+In this scenario, you want to pick the quorum participants from a few clusters and
+let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
+
+This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
+
+The ZK configuration in all the servers looks like:
+
+```properties
+
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+server.4=zk1.us-central.example.com:2185:2186
+server.5=zk2.us-central.example.com:2185:2186
+server.6=zk3.us-central.example.com:2185:2186:observer
+server.7=zk1.us-east.example.com:2185:2186
+server.8=zk2.us-east.example.com:2185:2186
+server.9=zk3.us-east.example.com:2185:2186:observer
+server.10=zk1.eu-central.example.com:2185:2186:observer
+server.11=zk2.eu-central.example.com:2185:2186:observer
+server.12=zk3.eu-central.example.com:2185:2186:observer
+server.13=zk1.ap-south.example.com:2185:2186:observer
+server.14=zk2.ap-south.example.com:2185:2186:observer
+server.15=zk3.ap-south.example.com:2185:2186:observer
+
+```
+
+Additionally, ZK observers need to have the following parameters:
+
+```properties
+
+peerType=observer
+
+```
+
+##### Start the service
+
+Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
+
+```shell
+
+$ bin/pulsar-daemon start configuration-store
+
+```
+
+## Cluster metadata initialization
+
+Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**
+
+You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The following is an example:
+
+```shell
+
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster us-west \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2184 \
+  --web-service-url http://pulsar.us-west.example.com:8080/ \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
+
+```
+
+As you can see from the example above, you need to specify the following:
+
+* The name of the cluster
+* The local ZooKeeper connection string for the cluster
+* The configuration store connection string for the entire instance
+* The web service URL for the cluster
+* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
+
+If you use [TLS](security-tls-transport), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.
+
+Make sure to run `initialize-cluster-metadata` for each cluster in your instance.
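+
+Once the brokers of a cluster are running (see [Deploy brokers](#deploy-brokers) below), you can verify that the cluster metadata was registered as expected, for example with `pulsar-admin`:
+
+```shell
+
+# Requires at least one running broker in the target cluster
+$ bin/pulsar-admin clusters list
+$ bin/pulsar-admin clusters get us-west
+
+```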
+
+## Deploy BookKeeper
+
+BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
+
+Each Pulsar cluster needs its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
+
+### Configure bookies
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper of the Pulsar cluster.
+
+### Start bookies
+
+You can start a bookie in two ways: in the foreground or as a background daemon.
+
+To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+
+$ bin/pulsar-daemon start bookie
+
+```
+
+You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
+
+```shell
+
+$ bin/bookkeeper shell bookiesanity
+
+```
+
+This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger.
+
+After you have started all the bookies, you can use the `simpletest` command of the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all bookies in the cluster are running.
+
+```bash
+
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+
+```
+
+Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, a suitable hardware configuration is essential. The following are the key dimensions of bookie hardware capacity.
+
+* Disk I/O capacity read/write
+* Storage capacity
+
+Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is
+designed to use multiple devices:
+
+* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
+* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when a consumer drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
+
+
+
+## Deploy brokers
+
+Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.
+
+### Broker configuration
+
+You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
+
+The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster).
+
+You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata (especially when you use a different port from default) of the cluster.
+
+The following is an example configuration:
+
+```properties
+
+# Local ZooKeeper servers
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Configuration store quorum connection string.
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+
+clusterName=us-west
+
+# Broker data port
+brokerServicePort=6650
+
+# Broker data port for TLS
+brokerServicePortTls=6651
+
+# Port to use to serve HTTP requests
+webServicePort=8080
+
+# Port to use to serve HTTPS requests
+webServicePortTls=8443
+
+```
+
+### Broker hardware
+
+Pulsar brokers do not require any special hardware since they do not use the local disk. Fast CPUs and a 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) are recommended so that the software can take full advantage of them.
+
+### Start the broker service
+
+You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+
+$ bin/pulsar-daemon start broker
+
+```
+
+You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):
+
+```shell
+
+$ bin/pulsar broker
+
+```
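+
+After the brokers start, you can check that they have registered with the cluster, for example by listing the active brokers with `pulsar-admin` (this assumes `conf/client.conf` points at the cluster's web service URL):
+
+```shell
+
+# List the active brokers in the us-west cluster
+$ bin/pulsar-admin brokers list us-west
+
+```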
+
+## Service discovery
+
+[Clients](getting-started-clients) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions [immediately below](#service-discovery-setup).
+
+You can also use your own service discovery system if you want. If you use your own system, you only need to satisfy one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
+
+> #### Service discovery already provided by many scheduling systems
+> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.
+
+
+### Service discovery setup
+
+The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookup using HTTP as well as the Pulsar [binary protocol](developing-binary-protocol).
+
+To get started setting up the built-in service discovery of Pulsar, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery) configuration file. Set the [`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers) parameter to the ZooKeeper quorum connection string of the cluster and the [`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers) setting to the [configuration store](reference-terminology.md#configuration-store) quorum connection string.
+
+```properties
+
+# Zookeeper quorum connection string
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Global configuration store connection string
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+
+```
+
+To start the discovery service:
+
+```shell
+
+$ bin/pulsar-daemon start discovery
+
+```
+
+## Admin client and verification
+
+At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.
+
+The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:
+
+```properties
+
+serviceUrl=http://pulsar.us-west.example.com:8080/
+
+```
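+
+With `conf/client.conf` in place, a quick way to confirm that the admin client can reach the cluster is to issue a simple read-only call, for example:
+
+```shell
+
+# List the tenants currently defined in the instance
+$ bin/pulsar-admin tenants list
+
+```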
+
+## Provision new tenants
+
+Pulsar is built as a fundamentally multi-tenant system.
+
+
+If a new tenant wants to use the system, you need to create a new one. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:
+
+```shell
+
+$ bin/pulsar-admin tenants create test-tenant \
+  --allowed-clusters us-west \
+  --admin-roles test-admin-role
+
+```
+
+In this command, users who identify with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.
+
+Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.
+
+
+The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.
+
+```shell
+
+$ bin/pulsar-admin namespaces create test-tenant/ns1
+
+```
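+
+You can confirm that the namespace exists (and later inspect its policies) with the corresponding list command:
+
+```shell
+
+# List the namespaces under the new tenant
+$ bin/pulsar-admin namespaces list test-tenant
+
+```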
+
+### Test producer and consumer
+
+
+Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool.
+
+
+You can use a topic in the namespace that you have just created. Topics are automatically created the first time a producer or a consumer tries to use them.
+
+The topic name in this case could be:
+
+```http
+
+persistent://test-tenant/ns1/my-topic
+
+```
+
+Start a consumer that creates a subscription on the topic and waits for messages:
+
+```shell
+
+$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic
+
+```
+
+Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds:
+
+```shell
+
+$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic
+
+```
+
+To report the topic stats:
+
+```shell
+
+$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.5.0/deploy-bare-metal.md b/site2/website-next/versioned_docs/version-2.5.0/deploy-bare-metal.md
new file mode 100644
index 0000000..79c9337
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/deploy-bare-metal.md
@@ -0,0 +1,529 @@
+---
+id: deploy-bare-metal
+title: Deploy a cluster on bare metal
+sidebar_label: "Bare metal"
+original_id: deploy-bare-metal
+---
+
+:::tip
+
+1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are experimenting with
+Pulsar or using it in a startup or on a single team, a single cluster is the better option. If you do need to run a multi-cluster Pulsar instance,
+see the guide [here](deploy-bare-metal-multi-cluster).
+2. If you want to use all builtin [Pulsar IO](io-overview) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
+package and install it under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you
+run a separate cluster of function workers for [Pulsar Functions](functions-overview).
+3. If you want to use the [Tiered Storage](concepts-tiered-storage) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
+package and install it under the `offloaders` directory in the pulsar directory on every broker node. For details on how to configure
+this feature, see the [Tiered storage cookbook](cookbooks-tiered-storage).
+
+:::
+
+Deploying a Pulsar cluster involves doing the following (in order):
+
+* Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional)
+* Initialize [cluster metadata](#initialize-cluster-metadata)
+* Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster
+* Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers)
+
+## Preparation
+
+### Requirements
+
+Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
+
+> If you already have an existing ZooKeeper cluster and want to reuse it, you do not need to prepare the machines
+> for running ZooKeeper.
+
+To run Pulsar on bare metal, the following is recommended:
+
+* At least 6 Linux machines or VMs
+  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
+  * 3 for running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie
+* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts
+
+> If you do not have enough machines, or want to try out Pulsar in cluster mode (and expand the cluster later),
+> you can deploy Pulsar on a single node, where ZooKeeper, the bookie, and the broker run on the same machine.
+
+> If you do not have a DNS server, you can use a multi-host service URL instead.
+
+Each machine in your cluster needs to have [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or higher version of Java installed.
+
+The following is a diagram showing the basic setup:
+
+![alt-text](/assets/pulsar-basic-setup.png)
+
+In this diagram, connecting clients need to be able to communicate with the Pulsar cluster using a single URL (in this case `pulsar-cluster.acme.com`) that abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.
+
+### Hardware considerations
+
+When you deploy a Pulsar cluster, keep the following basic hardware recommendations in mind when you do capacity planning.
+
+#### ZooKeeper
+
+For machines running ZooKeeper, lighter-weight machines or VMs are sufficient. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, *not* for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance is likely to suffice.
+
+#### Bookies and Brokers
+
+For machines running a bookie and a Pulsar broker, use more powerful machines. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines, you can use the following:
+
+* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
+* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)
+
+## Install the Pulsar binary package
+
+> You need to install the Pulsar binary package on *each machine in the cluster*, including machines running [ZooKeeper](#deploy-a-zookeeper-cluster) and [BookKeeper](#deploy-a-bookkeeper-cluster).
+
+To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways:
+
+* By clicking on the link below directly, which automatically triggers a download:
+  * <a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>
+* From the Pulsar [downloads page](pulsar:download_page_url)
+* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on [GitHub](https://github.com)
+* Using [wget](https://www.gnu.org/software/wget):
+
+```bash
+
+$ wget pulsar:binary_release_url
+
+```
+
+Once you download the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+
+$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz
+$ cd apache-pulsar-@pulsar:version@
+
+```
+
+The untarred directory contains the following subdirectories:
+
+Directory | Contains
+:---------|:--------
+`bin` |[command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`data` | The data storage directory that ZooKeeper and BookKeeper use
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
+`logs` | Logs that the installation creates
+
+## [Install Builtin Connectors (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional)
+
+> Since release `2.1.0-incubating`, Pulsar provides a separate binary distribution containing all the `builtin` connectors.
+> If you want to enable those `builtin` connectors, follow the instructions below; otherwise you can
+> skip this section for now.
+
+To get started using builtin connectors, you need to download the connectors tarball release on every broker node in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:connector_release_url" download>Pulsar IO Connectors @pulsar:version@ release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
+  
+  ```
+
+Once you download the nar file, copy the file to the `connectors` directory in the pulsar directory.
+For example, if you download the connector file `pulsar-io-aerospike-@pulsar:version@.nar`:
+
+```bash
+
+$ mkdir connectors
+$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
+
+$ ls connectors
+pulsar-io-aerospike-@pulsar:version@.nar
+...
+
+```
+
+## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional)
+
+> Since release `2.2.0`, Pulsar provides a separate binary distribution containing the tiered storage offloaders.
+> If you want to enable the tiered storage feature, follow the instructions below; otherwise you can
+> skip this section for now.
+
+To get started using tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders @pulsar:version@ release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:offloader_release_url
+  
+  ```
+
+Once you download the tarball, untar the offloaders package in the Pulsar directory and move the extracted offloaders into a directory named `offloaders`:
+
+```bash
+
+$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
+
+// you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
+// then copy the offloaders
+
+$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
+
+$ ls offloaders
+tiered-storage-jcloud-@pulsar:version@.nar
+
+```
+
+For more details on how to configure the tiered storage feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage).
+
+
+## Deploy a ZooKeeper cluster
+
+> If you already have an existing ZooKeeper cluster and want to use it, you can skip this section.
+
+[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first (before all other components). A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so lightweight machines or VMs should suffice for running it.
+
+To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you create [above](#install-the-pulsar-binary-package)). The following is an example:
+
+```properties
+
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+
+```
+
+> If you have only one machine to deploy Pulsar, you just need to add one server entry in the configuration file.
+
+On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this:
+
+```bash
+
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+
+```
+
+On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on.
+
+Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+
+$ bin/pulsar-daemon start zookeeper
+
+```
+
+> If you plan to deploy ZooKeeper and a bookie on the same node, you
+> need to start ZooKeeper with a different stats port.
+
+Start ZooKeeper with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool as follows:
+
+```bash
+
+$ PULSAR_EXTRA_OPTS="-Dstats_server_port=8001" bin/pulsar-daemon start zookeeper
+
+```
+
+## Initialize cluster metadata
+
+Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper for each cluster in your instance. You only need to write this metadata **once**.
+
+You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your ZooKeeper cluster. The following is an example:
+
+```shell
+
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster pulsar-cluster-1 \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2181 \
+  --web-service-url http://pulsar.us-west.example.com:8080 \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
+
+```
+
+As you can see from the example above, you
+need to specify the following:
+
+Flag | Description
+:----|:-----------
+`--cluster` | A name for the cluster
+`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (using a different port is not recommended).
+`--web-service-url-tls` | If you use [TLS](security-tls-transport), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (using a different port is not recommended).
+`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should not use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (using a different port is not recommended).
+`--broker-service-url-tls` | If you use [TLS](security-tls-transport), you also need to specify a TLS broker service URL for the brokers in the cluster. The default port is 6651 (using a different port is not recommended).
+
+
+> If you don't have a DNS server, you can use a multi-host format in the service URLs with the following settings:
+>
+
+> ```properties
+> 
+> --web-service-url http://host1:8080,host2:8080,host3:8080 \
+> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
+> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
+> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
+>
+> 
+> ```
+
+
+## Deploy a BookKeeper cluster
+
+[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. A **3-bookie BookKeeper cluster** is a reasonable starting point.
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string of the ZooKeeper cluster. The following is an example:
+
+```properties
+
+zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+```
+
+Once you appropriately modify the `zkServers` parameter, you can provide any other configuration modifications you need. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper), although consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice.
+
+Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
+
+To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+
+$ bin/pulsar-daemon start bookie
+
+```
+
+To start the bookie in the foreground:
+
+```bash
+
+$ bin/bookkeeper bookie
+
+```
+
+You can verify that a bookie works properly by running the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#shell) on it:
+
+```bash
+
+$ bin/bookkeeper shell bookiesanity
+
+```
+
+This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
+
+After you start all the bookies, you can run the `simpletest` command of the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to
+verify that all the bookies in the cluster are up and running.
+
+```bash
+
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+
+```
+
+This command creates a `num-bookies` sized ledger on the cluster, writes a few entries, and finally deletes the ledger.
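+
+For example, with the 3-bookie cluster described in this guide, you can run:
+
+```bash
+
+$ bin/bookkeeper shell simpletest --ensemble 3 --writeQuorum 3 --ackQuorum 3 --numEntries 100
+
+```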
+
+
+## Deploy Pulsar brokers
+
+Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie.
+
+### Configure Brokers
+
+The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Make sure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are set correctly. In this case, since you only have one cluster and no separate configuration store, the `configurationStoreServers` parameter points to the same ZooKeeper servers as `zookeeperServers`.
+
+```properties
+
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+```
+
+You also need to specify the cluster name (matching the name that you provide when you [initialize the metadata of the cluster](#initialize-cluster-metadata)):
+
+```properties
+
+clusterName=pulsar-cluster-1
+
+```
+
+In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use a different port from default):
+
+```properties
+
+brokerServicePort=6650
+brokerServicePortTls=6651
+webServicePort=8080
+webServicePortTls=8443
+
+```
+
+> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`
+>
+
+> ```properties
+> 
+> # Number of bookies to use when creating a ledger
+> managedLedgerDefaultEnsembleSize=1
+>
+> # Number of copies to store for each message
+> managedLedgerDefaultWriteQuorum=1
+> 
+> # Number of guaranteed copies (acks to wait before write is complete)
+> managedLedgerDefaultAckQuorum=1
+>
+> 
+> ```
+
+
+### Enable Pulsar Functions (optional)
+
+If you want to enable [Pulsar Functions](functions-overview), follow the instructions below:
+
+1. Edit `conf/broker.conf` to enable the functions worker by setting `functionsWorkerEnabled` to `true`.
+
+   ```conf
+   
+   functionsWorkerEnabled=true
+   
+   ```
+
+2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provide when you [initialize the metadata of the cluster](#initialize-cluster-metadata). 
+
+   ```conf
+   
+   pulsarFunctionsCluster: pulsar-cluster-1
+   
+   ```
+
+If you want to learn about more options for deploying the functions worker, check out [Deploy and manage functions worker](functions-worker).
+
+### Start Brokers
+
+You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup.
+
+You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:
+
+```bash
+
+$ bin/pulsar broker
+
+```
+
+You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+
+$ bin/pulsar-daemon start broker
+
+```
+
+Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!
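+
+As a quick check, you can verify that the brokers have registered themselves by listing them with the [`pulsar-admin`](reference-pulsar-admin) tool from any broker node, using the cluster name from the metadata initialization step:
+
+```bash
+
+$ bin/pulsar-admin brokers list pulsar-cluster-1
+
+```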
+
+## Connect to the running cluster
+
+Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provide a simple way to make sure that your cluster runs properly.
+
+To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (the default) with the DNS name that you assign to your broker/bookie hosts. The following is an example:
+
+```properties
+
+webServiceUrl=http://us-west.example.com:8080
+brokerServiceUrl=pulsar://us-west.example.com:6650
+
+```
+
+> If you don't have a DNS server, you can specify multiple hosts in the service URLs as below:
+>
+
+> ```properties
+> 
+> webServiceUrl=http://host1:8080,host2:8080,host3:8080
+> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
+>
+> 
+> ```
+
+
+Once you do that, you can publish a message to a Pulsar topic:
+
+```bash
+
+$ bin/pulsar-client produce \
+  persistent://public/default/test \
+  -n 1 \
+  -m "Hello Pulsar"
+
+```
+
+> You may need to use a different cluster name in the topic if you specify a cluster name different from `pulsar-cluster-1`.
+
+This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages, as below:
+
+```bash
+
+$ bin/pulsar-client consume \
+  persistent://public/default/test \
+  -n 100 \
+  -s "consumer-test" \
+  -t "Exclusive"
+
+```
+
+Once you successfully publish the message above to the topic, you should see it in the standard output:
+
+```bash
+
+----- got message -----
+Hello Pulsar
+
+```
+
+## Run Functions
+
+> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can also try out Pulsar Functions now.
+
+Create an `ExclamationFunction` named `exclamation`.
+
+```bash
+
+bin/pulsar-admin functions create \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+
+```
+
+Check if the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.
+
+```bash
+
+bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
+
+```
+
+You should see the following output:
+
+```shell
+
+hello world!
+
+```
+
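+If the trigger does not return the expected output, a quick way to investigate is to check the status of the function and its instances:
+
+```bash
+
+bin/pulsar-admin functions status \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+
+```
+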
diff --git a/site2/website-next/versioned_docs/version-2.5.0/deploy-dcos.md b/site2/website-next/versioned_docs/version-2.5.0/deploy-dcos.md
new file mode 100644
index 0000000..f5f8d1f
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/deploy-dcos.md
@@ -0,0 +1,200 @@
+---
+id: deploy-dcos
+title: Deploy Pulsar on DC/OS
+sidebar_label: "DC/OS"
+original_id: deploy-dcos
+---
+
+:::tip
+
+If you want to enable all builtin [Pulsar IO](io-overview) connectors in your Pulsar deployment, you can choose to use `apachepulsar/pulsar-all` image instead of
+`apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+:::
+
+[DC/OS](https://dcos.io/) (the <strong>D</strong>ata<strong>C</strong>enter <strong>O</strong>perating <strong>S</strong>ystem) is a distributed operating system for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).
+
+Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.
+
+## Prerequisites
+
+In order to run Pulsar on DC/OS, you need the following:
+
+* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
+* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
+* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
+* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.
+
+  ```bash
+  
+  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
+  
+  ```
+
+Each node in the DC/OS-managed Mesos cluster must have at least:
+
+* 4 CPU
+* 4 GB of memory
+* 60 GB of total persistent disk
+
+Alternatively, you can change the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.
+
+## Deploy Pulsar using the DC/OS command interface
+
+You can deploy Pulsar on DC/OS using this command:
+
+```bash
+
+$ dcos marathon group add PulsarGroups.json
+
+```
+
+This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster:
+
+* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
+* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
+* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance
+
+
+> When you run DC/OS, a ZooKeeper cluster already runs at `master.mesos:2181`, thus you do not have to install or start up ZooKeeper separately.
+
+After executing the `dcos` command above, click on the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying.
+
+![DC/OS command executed](/assets/dcos_command_execute.png)
+
+![DC/OS command executed2](/assets/dcos_command_execute2.png)
+
+## The BookKeeper group
+
+To monitor the status of the BookKeeper cluster deployment, click on the **bookkeeper** group in the parent **pulsar** group.
+
+![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png)
+
+At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as green, which means that the bookies have been deployed successfully and are now running.
+ 
+![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png)
+ 
+You can also click into each bookie instance to get more detailed information, such as the bookie running log.
+
+![DC/OS bookie log](/assets/dcos_bookie_log.png)
+
+To display information about the BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.
+
+![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png)
+
+## The Pulsar broker Group
+
+Similar to the BookKeeper group above, click into the **brokers** to check the status of the Pulsar brokers.
+
+![DC/OS broker status](/assets/dcos_broker_status.png)
+
+![DC/OS broker running](/assets/dcos_broker_run.png)
+
+You can also click into each broker instance to get more detailed information, such as the broker running log.
+
+![DC/OS broker log](/assets/dcos_broker_log.png)
+
+Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.
+
+![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)
+
+## Monitor Group
+
+The **monitor** group consists of Prometheus and Grafana.
+
+![DC/OS monitor status](/assets/dcos_monitor_status.png)
+
+### Prometheus
+
+Click into the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.
+
+![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)
+
+If you click that endpoint, you can see the Prometheus dashboard. The [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL displays all the bookies and brokers.
+
+![DC/OS prom targets](/assets/dcos_prom_targets.png)
+
+### Grafana
+
+Click into `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
+ 
+![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)
+
+If you click that endpoint, you can access the Grafana dashboard.
+
+![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)
+
+## Run a simple Pulsar consumer and producer on DC/OS
+
+Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.
+
+### Download and prepare the Pulsar Java tutorial
+
+You can clone a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file of the repo).
+
+```bash
+
+$ git clone https://github.com/streamlio/pulsar-java-tutorial
+
+```
+
+Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java).
+The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker; you can also use the IP address of the client agent instead.
+
+Now, change the message number from 10 to 10000000 in the main method of [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) so that it produces more messages.
+
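+If you prefer to make these edits from the command line, the following is a sketch using `sed`; it assumes the default service URL string appears literally in both files:
+
+```bash
+
+$ cd pulsar-java-tutorial
+$ sed -i 's|pulsar://localhost:6650|pulsar://a1.dcos:6650|g' \
+    src/main/java/tutorial/ProducerTutorial.java \
+    src/main/java/tutorial/ConsumerTutorial.java
+
+```
+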
+Now compile the project code using the command below:
+
+```bash
+
+$ mvn clean package
+
+```
+
+### Run the consumer and producer
+
+Execute this command to run the consumer:
+
+```bash
+
+$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
+
+```
+
+Execute this command to run the producer:
+
+```bash
+
+$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
+
+```
+
+You can see the producer producing messages and the consumer consuming messages through the DC/OS GUI.
+
+![DC/OS pulsar producer](/assets/dcos_producer.png)
+
+![DC/OS pulsar consumer](/assets/dcos_consumer.png)
+
+### View Grafana metric output
+
+While the producer and consumer run, you can access running metrics information from Grafana.
+
+![DC/OS pulsar dashboard](/assets/dcos_metrics.png)
+
+
+## Uninstall Pulsar
+
+You can shut down and uninstall the `pulsar` application from DC/OS at any time in the following two ways:
+
+1. Using the DC/OS GUI, you can choose **Delete** at the right end of the Pulsar group.
+
+   ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png)
+
+2. You can use the following command:
+
+   ```bash
+   
+   $ dcos marathon group remove /pulsar
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.5.0/deploy-kubernetes.md b/site2/website-next/versioned_docs/version-2.5.0/deploy-kubernetes.md
new file mode 100644
index 0000000..34c2891
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/deploy-kubernetes.md
@@ -0,0 +1,12 @@
+---
+id: deploy-kubernetes
+title: Deploying Pulsar on Kubernetes
+sidebar_label: "Kubernetes"
+original_id: deploy-kubernetes
+---
+
+For those looking to get up and running with these charts as fast
+as possible, in a **non-production** use case, we provide
+a [quick start guide](getting-started-helm) for Proof of Concept (PoC) deployments.
+
+For those looking to configure and install a Pulsar cluster on Kubernetes for production usage, you should follow the complete [Installation Guide](helm-install).
diff --git a/site2/website-next/versioned_docs/version-2.5.0/deploy-monitoring.md b/site2/website-next/versioned_docs/version-2.5.0/deploy-monitoring.md
new file mode 100644
index 0000000..2efc328
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/deploy-monitoring.md
@@ -0,0 +1,103 @@
+---
+id: deploy-monitoring
+title: Monitoring
+sidebar_label: "Monitoring"
+original_id: deploy-monitoring
+---
+
+You can use different ways to monitor a Pulsar cluster, exposing both metrics that relate to the usage of topics and the overall health of the individual components of the cluster.
+
+## Collect metrics
+
+You can collect broker stats, ZooKeeper stats, and BookKeeper stats. 
+
+### Broker stats
+
+You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. The Pulsar broker metrics mainly have two types:
+
+* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below:
+
+  ```shell
+  
+  bin/pulsar-admin broker-stats destinations
+  
+  ```
+
+* *Broker metrics*, which contain the broker information and topic stats aggregated at the namespace level. You can fetch the broker metrics using the command below:
+
+  ```shell
+  
+  bin/pulsar-admin broker-stats monitoring-metrics
+  
+  ```
+
+All the message rates are updated every minute.
+
+The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at:
+
+```shell
+
+http://$BROKER_ADDRESS:8080/metrics/
+
+```
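+
+For example, you can fetch the raw Prometheus output from a broker with `curl` and inspect the first few lines:
+
+```shell
+
+curl -s http://$BROKER_ADDRESS:8080/metrics/ | head -n 20
+
+```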
+
+### ZooKeeper stats
+
+The local ZooKeeper server, the configuration store server, and the clients that are shipped with Pulsar are instrumented to expose detailed stats through Prometheus as well.
+
+```shell
+
+http://$LOCAL_ZK_SERVER:8000/metrics
+http://$GLOBAL_ZK_SERVER:8001/metrics
+
+```
+
+The default port of the local ZooKeeper server is `8000` and the default port of the configuration store is `8001`. You can change these default ports by specifying the `stats_server_port` system property.
+
+### BookKeeper stats
+
+For BookKeeper, you can configure the stats framework by changing the `statsProviderClass` in
+`conf/bookkeeper.conf`.
+
+The default BookKeeper configuration, which is included with the Pulsar distribution, enables the Prometheus exporter.
+
+```shell
+
+http://$BOOKIE_ADDRESS:8000/metrics
+
+```
+
+The default port for bookie is `8000` (instead of `8080`). You can change the port by configuring `prometheusStatsHttpPort` in `conf/bookkeeper.conf`.
+
+## Configure Prometheus
+
+You can use Prometheus to collect and store the metrics data. For details, refer to [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).
+
+When you run Pulsar on bare metal, you can provide the list of nodes that need to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is automatically set up with the [provided](deploy-kubernetes) instructions.
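+
+For example, on bare metal you can point Prometheus at your broker nodes with a scrape configuration along the following lines. This is a minimal sketch; the hostnames are placeholders for your own broker nodes.
+
+```shell
+
+# A minimal sketch of a Prometheus configuration that scrapes three
+# hypothetical broker nodes; replace the hostnames with your own.
+cat > prometheus.yml <<'EOF'
+scrape_configs:
+  - job_name: 'pulsar-brokers'
+    metrics_path: /metrics/
+    static_configs:
+      - targets: ['broker1.example.com:8080', 'broker2.example.com:8080', 'broker3.example.com:8080']
+EOF
+
+```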
+
+## Dashboards
+
+When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode.
+
+For that reason you only need to collect time series of metrics aggregated at the namespace level.
+
+### Pulsar per-topic dashboard
+
+The per-topic dashboard instructions are available at [Dashboard](administration-dashboard).
+
+### Grafana
+
+You can use Grafana to create dashboards driven by the data stored in Prometheus.
+
+When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. You can use this Docker image with the principal dashboards.
+
+Enter the command below to use the dashboard manually:
+
+```shell
+
+docker run -p3000:3000 \
+        -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \
+        apachepulsar/pulsar-grafana:latest
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.5.0/helm-deploy.md b/site2/website-next/versioned_docs/version-2.5.0/helm-deploy.md
new file mode 100644
index 0000000..04cfe68
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/helm-deploy.md
@@ -0,0 +1,440 @@
+---
+id: helm-deploy
+title: Deploying a Pulsar cluster using Helm
+sidebar_label: "Deployment"
+original_id: helm-deploy
+---
+
+Before running `helm install`, you need to make some decisions about how you will run Pulsar.
+Options can be specified using Helm's `--set option.name=value` command line option.
+
+## Selecting configuration options
+
+In each section, collect the options that you want to combine with your `helm install` command.
+
+### Kubernetes Namespace
+
+By default, the chart is installed to a namespace called `pulsar`.
+
+```yaml
+
+namespace: pulsar
+
+```
+
+If you decide to install the chart into a different k8s namespace, you can include this option in your Helm install command:
+
+```bash
+
+--set namespace=<different-k8s-namespace>
+
+```
+
+By default, the chart doesn't create the namespace.
+
+```yaml
+
+namespaceCreate: false
+
+```
+
+If you want the chart to create the k8s namespace automatically, you can include this option in your Helm install command.
+
+```bash
+
+--set namespaceCreate=true
+
+```
+
+### Persistence
+
+By default the chart creates Volume Claims with the expectation that a dynamic provisioner will create the underlying Persistent Volumes.
+
+```yaml
+
+volumes:
+  persistence: true
+  # configure the components to use local persistent volume
+  # the local provisioner should be installed prior to enable local persistent volume
+  local_storage: false
+
+```
+
+If you would like to use local persistent volumes as the persistent storage for your Helm release, you can install [local-storage-provisioner](#install-local-storage-provisioner) and include the following option in your Helm install command. 
+
+```bash
+
+--set volumes.local_storage=true
+
+```
+
+> **Important**: After initial installation, making changes to your storage settings requires manually editing Kubernetes objects,
+> so it's best to plan ahead before installing your production instance of Pulsar to avoid extra storage migration work.
+
+This chart is designed for production use. To use this chart in a development environment (e.g. minikube), you can disable persistence by including this option in your Helm install command.
+
+```bash
+
+--set volumes.persistence=false
+
+```
+
+### Affinity 
+
+By default, `anti-affinity` is turned on to ensure that pods of the same component run on different nodes.
+
+```yaml
+
+affinity:
+  anti_affinity: true
+
+```
+
+If you are planning to use this chart in a development environment (e.g. minikube), you can disable `anti-affinity` by including this option in your Helm install command.
+
+```bash
+
+--set affinity.anti_affinity=false
+
+```
+
+### Components
+
+This chart is designed for production usage. It deploys a production-ready Pulsar cluster including Pulsar core components and monitoring components.
+
+You can customize the components to deploy by turning on/off individual components.
+
+```yaml
+
+## Components
+##
+## Control what components of Apache Pulsar to deploy for the cluster
+components:
+  # zookeeper
+  zookeeper: true
+  # bookkeeper
+  bookkeeper: true
+  # bookkeeper - autorecovery
+  autorecovery: true
+  # broker
+  broker: true
+  # functions
+  functions: true
+  # proxy
+  proxy: true
+  # toolset
+  toolset: true
+  # pulsar manager
+  pulsar_manager: true
+
+## Monitoring Components
+##
+## Control what components of the monitoring stack to deploy for the cluster
+monitoring:
+  # monitoring - prometheus
+  prometheus: true
+  # monitoring - grafana
+  grafana: true
+
+```
+
+### Docker Images
+
+This chart is designed to enable controlled upgrades, so it provides the capability to configure independent image versions for components. You can customize the image of each component individually.
+
+```yaml
+
+## Images
+##
+## Control what images to use for each component
+images:
+  zookeeper:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+    pullPolicy: IfNotPresent
+  bookie:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+    pullPolicy: IfNotPresent
+  autorecovery:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+    pullPolicy: IfNotPresent
+  broker:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+    pullPolicy: IfNotPresent
+  proxy:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+    pullPolicy: IfNotPresent
+  functions:
+    repository: apachepulsar/pulsar-all
+    tag: 2.5.0
+  prometheus:
+    repository: prom/prometheus
+    tag: v1.6.3
+    pullPolicy: IfNotPresent
+  grafana:
+    repository: streamnative/apache-pulsar-grafana-dashboard-k8s
+    tag: 0.0.4
+    pullPolicy: IfNotPresent
+  pulsar_manager:
+    repository: apachepulsar/pulsar-manager
+    tag: v0.1.0
+    pullPolicy: IfNotPresent
+    hasCommand: false
+
+```
+
+### TLS
+
+This Pulsar Chart can be configured to enable TLS to protect all the traffic between components. Before you enable TLS, you have to provision TLS certificates
+for the components that have TLS enabled.
+
+- [Provision TLS certs using `cert-manager`](#provision-tls-certs-using-cert-manager)
+
+#### Provision TLS certs using cert-manager
+
+To use `cert-manager` to provision the TLS certificates, you have to install [cert-manager](#install-cert-manager) before installing the Pulsar chart. After you
+successfully install cert-manager, you can set `certs.internal_issuer.enabled`
+to `true` so that the Pulsar chart uses `cert-manager` to generate `selfsigning` TLS
+certs for the configured components.
+
+```yaml
+
+certs:
+  internal_issuer:
+    enabled: false
+    component: internal-cert-issuer
+    type: selfsigning
+
+```
+
+You can also customize the generated TLS certificates by configuring the following fields.
+
+```yaml
+
+tls:
+  # common settings for generating certs
+  common:
+    # 90d
+    duration: 2160h
+    # 15d
+    renewBefore: 360h
+    organization:
+      - pulsar
+    keySize: 4096
+    keyAlgorithm: rsa
+    keyEncoding: pkcs8
+
+```
+
+#### Enable TLS
+
+After installing `cert-manager`, you can then set `tls.enabled` to `true` to enable TLS encryption for the entire cluster.
+
+```yaml
+
+tls:
+  enabled: false
+
+```
+
+You can also control whether to enable TLS encryption for individual components.
+
+```yaml
+
+tls:
+  # settings for generating certs for proxy
+  proxy:
+    enabled: false
+    cert_name: tls-proxy
+  # settings for generating certs for broker
+  broker:
+    enabled: false
+    cert_name: tls-broker
+  # settings for generating certs for bookies
+  bookie:
+    enabled: false
+    cert_name: tls-bookie
+  # settings for generating certs for zookeeper
+  zookeeper:
+    enabled: false
+    cert_name: tls-zookeeper
+  # settings for generating certs for recovery
+  autorecovery:
+    cert_name: tls-recovery
+  # settings for generating certs for toolset
+  toolset:
+    cert_name: tls-toolset
+
+```
+
+### Authentication
+
+Authentication is disabled by default. You can set `auth.authentication.enabled` to `true` to turn on authentication.
+Currently, this chart only supports the JWT authentication provider. You can set `auth.authentication.provider` to `jwt` to use the JWT authentication provider.
+
+```yaml
+
+# Enable or disable broker authentication and authorization.
+auth:
+  authentication:
+    enabled: false
+    provider: "jwt"
+    jwt:
+      # Enable JWT authentication
+      # If the token is generated by a secret key, set the usingSecretKey as true.
+      # If the token is generated by a private key, set the usingSecretKey as false.
+      usingSecretKey: false
+  superUsers:
+    # broker to broker communication
+    broker: "broker-admin"
+    # proxy to broker communication
+    proxy: "proxy-admin"
+    # pulsar-admin client to broker/proxy communication
+    client: "admin"
+
+```
+
+If you decide to enable authentication, you can run [prepare helm release](#prepare-the-helm-release) to generate token secret keys and tokens for the three super users specified in the `auth.superUsers` field. The generated token keys and super user tokens are uploaded and stored as Kubernetes secrets prefixed with `<pulsar-release-name>-token-`. You can use the following command to find those secrets.
+
+```bash
+
+kubectl get secrets -n <k8s-namespace>
+
+```
+
+### Authorization
+
+Authorization is disabled by default. Authorization can be enabled
+only if Authentication is enabled.
+
+```yaml
+
+auth:
+  authorization:
+    enabled: false
+
+```
+
+You can include this option to turn on authorization.
+
+```bash
+
+--set auth.authorization.enabled=true
+
+```
+
+### CPU and RAM resource requirements
+
+The resource requests and number of replicas for the Pulsar components in this chart are set by default to be adequate for a small production deployment. If you are trying to deploy a non-production instance, you can reduce the defaults to fit into a smaller cluster.
+
+Once you have collected all of your configuration options, you need to install the dependent charts before proceeding to install the Pulsar chart.
+
+## Install Dependent Charts
+
+### Install Local Storage Provisioner
+
+If you decide to use local persistent volumes as the persistent storage, you need to [install a storage provisioner for local persistent volumes](https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/).
+
+One of the easiest ways to get started is to use the local storage provisioner provided along with the Pulsar Helm chart.
+
+```bash
+
+helm repo add streamnative https://charts.streamnative.io
+helm repo update
+helm install pulsar-storage-provisioner streamnative/local-storage-provisioner
+
+```
+
+### Install Cert Manager
+
+The Pulsar Chart uses [cert-manager](https://github.com/jetstack/cert-manager) to automate provisioning and managing TLS certificates. If you decide to enable TLS encryption for brokers or proxies, you need to install cert-manager first.
+
+You can follow the [official instructions](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm) to install cert-manager.
+
+Alternatively, we provide a bash script [install-cert-manager.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/cert-manager/install-cert-manager.sh) to install a cert-manager release to namespace `cert-manager`.
+
+```bash
+
+git clone https://github.com/apache/pulsar-helm-chart
+cd pulsar-helm-chart
+./scripts/cert-manager/install-cert-manager.sh
+
+```
+
+## Prepare the Helm Release
+
+Once you have installed all the dependent charts and collected all of your configuration options, you can run [prepare_helm_release.sh](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh) to prepare the Helm release.
+
+```bash
+
+git clone https://github.com/apache/pulsar-helm-chart
+cd pulsar-helm-chart
+./scripts/pulsar/prepare_helm_release.sh -n <k8s-namespace> -k <helm-release-name>
+
+```
+
+The `prepare_helm_release` script creates the following resources:
+
+- A Kubernetes namespace for installing the Pulsar release
+- A secret for storing the username and password of the control center administrator. The username and password can be passed to `prepare_helm_release.sh` through the `--control-center-admin` and `--control-center-password` flags. The username and password are used for logging into the Grafana dashboard and Pulsar Manager.
+- JWT secret keys and tokens for three superusers: `broker-admin`, `proxy-admin`, and `admin`. By default, the script generates an asymmetric public/private key pair. You can choose to generate a symmetric secret key by specifying `--symmetric`.
+  - `proxy-admin` role is used for proxies to communicate to brokers.
+  - `broker-admin` role is used for inter-broker communications.
+  - `admin` role is used by the admin tools.
+
+## Deploy using Helm
+
+Once you have done the following three things, you can proceed to install a Helm release.
+
+- Collect all of your configuration options
+- Install dependent charts
+- Prepare the Helm release
+
+In this example, we've named our Helm release `pulsar`.
+
+```bash
+
+git clone https://github.com/apache/pulsar-helm-chart
+cd pulsar-helm-chart
+helm upgrade --install pulsar charts/pulsar \
+    --timeout 600 \
+    --set [your configuration options]
+
+```
+
+:::note
+
+For the first deployment, add the `--set initialize=true` option to initialize the bookie and Pulsar cluster metadata.
+
+:::
+
+You can also use the `--version <installation version>` option if you would like to install a specific version of the Pulsar Helm chart.
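+
+For example, a first-time installation into a small development environment might combine the options discussed above like this (a sketch; adjust the values to your own cluster):
+
+```bash
+
+helm upgrade --install pulsar charts/pulsar \
+    --timeout 600 \
+    --set initialize=true \
+    --set namespace=pulsar \
+    --set namespaceCreate=true \
+    --set volumes.persistence=false \
+    --set affinity.anti_affinity=false
+
+```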
+
+## Monitoring the Deployment
+
+Once the deployment finishes, which may take 5-10 minutes, the `helm` command outputs the list of resources that were installed.
+
+You can check the status of the deployment by running `helm status pulsar`, which you can also do in another terminal while the deployment is taking place.
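+
+For example, assuming the release name `pulsar` and the default `pulsar` namespace, you can check the release status and watch the pods come up with:
+
+```bash
+
+helm status pulsar
+kubectl get pods -n pulsar -w
+
+```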
+
+## Accessing the Pulsar Cluster
+
+The default values create a `ClusterIP` service for each of the following resources, which you can use to interact with the cluster.
+
+- Proxy: You can use the IP address to produce and consume messages to the installed Pulsar cluster.
+- Pulsar Manager: You can access the pulsar manager UI at `http://<pulsar-manager-ip>:9527`.
+- Grafana Dashboard: You can access the Grafana dashboard at `http://<grafana-dashboard-ip>:3000`.
+
+To find the IP addresses of those components, use:
+
+```bash
+
+kubectl get service -n <k8s-namespace>
+
+```
+
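+For example, once you know the IP address of the proxy service, you can point any Pulsar client at it. The following is a sketch using the `pulsar-client` CLI from a Pulsar binary distribution; the topic name is an arbitrary example, and you need to add authentication parameters if you enabled JWT authentication:
+
+```bash
+
+bin/pulsar-client --url pulsar://<proxy-ip>:6650 produce my-topic -m "hello" -n 1
+
+```
+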
diff --git a/site2/website-next/versioned_docs/version-2.5.0/helm-install.md b/site2/website-next/versioned_docs/version-2.5.0/helm-install.md
new file mode 100644
index 0000000..957f0da
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/helm-install.md
@@ -0,0 +1,40 @@
+---
+id: helm-install
+title: Install Apache Pulsar using Helm
+sidebar_label: "Install "
+original_id: helm-install
+---
+
+Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.
+
+## Requirements
+
+In order to deploy Apache Pulsar on Kubernetes, the following are required.
+
+1. kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
+2. Helm v3 (3.0.2 or higher)
+3. A Kubernetes cluster, version 1.14 or higher.
+
+## Environment setup
+
+Before deploying Pulsar, you need to prepare your environment.
+
+### Tools
+
+`helm` and `kubectl` need to be [installed on your computer](helm-tools).
+
+## Cloud cluster preparation
+
+> NOTE: Kubernetes 1.14 or higher is required, due to the usage of certain Kubernetes features.
+
+Follow the instructions to create and connect to the Kubernetes cluster of your choice:
+
+- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)
+
+## Deploy Pulsar
+
+With the environment set up and configuration generated, you can now proceed to the [deployment of Pulsar](helm-deploy).
+
+## Upgrade Pulsar
+
+If you are upgrading an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade) instead.
diff --git a/site2/website-next/versioned_docs/version-2.5.0/helm-overview.md b/site2/website-next/versioned_docs/version-2.5.0/helm-overview.md
new file mode 100644
index 0000000..6f68381
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/helm-overview.md
@@ -0,0 +1,115 @@
+---
+id: helm-overview
+title: Apache Pulsar Helm Chart
+sidebar_label: "Overview"
+original_id: helm-overview
+---
+
+This is the officially supported Helm chart to install Apache Pulsar on a cloud-native environment. It is enhanced based on StreamNative's [Helm Chart](https://github.com/streamnative/charts).
+
+## Introduction
+
+The Apache Pulsar Helm chart is one of the most convenient ways 
+to operate Pulsar on Kubernetes. This chart contains all the required components to get started and can scale to large deployments.
+
+This chart includes all the components for a complete experience, but each part can be configured to install separately.
+
+- Pulsar core components:
+  - ZooKeeper
+  - Bookies
+  - Brokers
+  - Function workers
+  - Proxies
+- Control Center:
+  - Pulsar Manager
+  - Prometheus
+  - Grafana
+  - Alert Manager
+
+It includes support for:
+
+- Security
+  - Automatically provisioned TLS certs, using [Jetstack](https://www.jetstack.io/)'s [cert-manager](https://cert-manager.io/docs/)
+      - self-signed
+      - [Let's Encrypt](https://letsencrypt.org/)
+  - TLS Encryption
+      - Proxy
+      - Broker
+      - Toolset
+      - Bookie
+      - ZooKeeper
+  - Authentication
+      - JWT
+  - Authorization
+- Storage
+  - Non-persistence storage
+  - Persistence Volume
+  - Local Persistent Volumes
+- Functions
+  - Kubernetes Runtime
+  - Process Runtime
+  - Thread Runtime
+- Operations
+  - Independent Image Versions for all components, enabling controlled upgrades
+
+## Pulsar Helm chart quick start
+
+For those looking to get up and running with these charts as fast
+as possible, in a **non-production** use case, we provide
+a [quick start guide](getting-started-helm) for Proof of Concept (PoC) deployments.
+
+This guide walks the user through deploying these charts with default
+values & features, but *does not* meet production ready requirements.
+If you wish to deploy these charts into production under sustained load,
+you should follow the complete [Installation Guide](helm-install).
+
+## Troubleshooting
+
+We've done our best to make these charts as seamless as possible, but
+occasionally troubles do surface outside of our control. We've collected
+tips and tricks for troubleshooting common issues. Please examine these first before raising an [issue](https://github.com/apache/pulsar/issues/new/choose), and feel free to add to them by raising a [Pull Request](https://github.com/apache/pulsar/compare)!
+
+## Installation
+
+The Apache Pulsar Helm chart contains all required dependencies.
+
+If you are just looking to deploy a Proof of Concept for testing,
+we strongly suggest you follow our [Quick Start Guide](getting-started-helm) for your first iteration.
+
+1. [Preparation](helm-prepare)
+2. [Deployment](helm-deploy)
+
+## Upgrading
+
+Once your Pulsar Chart is installed, configuration changes and chart
+updates should be done using `helm upgrade`.
+
+```bash
+
+git clone https://github.com/apache/pulsar-helm-chart
+cd pulsar-helm-chart
+helm get values <pulsar-release-name> > pulsar.yaml
+helm upgrade <pulsar-release-name> charts/pulsar -f pulsar.yaml
+
+```
+
+For more detailed information, see [Upgrading](helm-upgrade).
+
+## Uninstall
+
+To uninstall the Pulsar Chart, run the following command:
+
+```bash
+
+helm delete <pulsar-release-name>
+
+```
+
+For the purposes of continuity, these charts have some Kubernetes objects that are not removed when you perform `helm delete`.
+You are required to *consciously* remove these items, as they affect re-deployment should you choose to re-install.
+
+* PVCs for stateful data, which you must *consciously* remove (see the example after this list)
+  - ZooKeeper: This is your metadata.
+  - BookKeeper: This is your data.
+  - Prometheus: This is your metrics data, which can be safely removed.
+* Secrets, if generated by our [prepare release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/prepare_helm_release.sh). They contain secret keys, tokens, etc. You can use [cleanup release script](https://github.com/apache/pulsar-helm-chart/blob/master/scripts/pulsar/cleanup_helm_release.sh) to remove these secrets and tokens as needed.
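+
+For example, to clean up the remaining PVCs after you confirm that you no longer need the data, you can list them and delete them by name (substitute your own namespace and PVC names):
+
+```bash
+
+kubectl get pvc -n <k8s-namespace>
+kubectl delete pvc <pvc-name> -n <k8s-namespace>
+
+```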
diff --git a/site2/website-next/versioned_docs/version-2.5.0/helm-prepare.md b/site2/website-next/versioned_docs/version-2.5.0/helm-prepare.md
new file mode 100644
index 0000000..b8fba09
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/helm-prepare.md
@@ -0,0 +1,85 @@
+---
+id: helm-prepare
+title: Preparing Kubernetes resources
+sidebar_label: "Prepare"
+original_id: helm-prepare
+---
+
+For a fully functional Pulsar cluster, you will need a few resources before deploying the Apache Pulsar Helm chart. The following provides instructions to prepare the Kubernetes cluster before deploying the Pulsar Helm chart.
+
+- [Google Kubernetes Engine](#google-kubernetes-engine)
+
+## Google Kubernetes Engine
+
+To make getting started easier, a script is provided to automate the cluster creation. Alternatively, a cluster can be created manually as well.
+
+- [Manual cluster creation](#manual-cluster-creation)
+- [Scripted cluster creation](#scripted-cluster-creation)
+
+### Manual cluster creation
+
+To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster).
+
+Alternatively you can use the [instructions](#scripted-cluster-creation) below to provision a GKE cluster as needed.
+
+### Scripted cluster creation
+
+A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE.
+
+The script will:
+
+1. Create a new GKE cluster.
+2. Allow the cluster to modify DNS records.
+3. Set up `kubectl` and connect it to the cluster.
+
+Google Cloud SDK is a dependency of this script, so make sure it's [set up correctly](helm-tools.md#connect-to-a-gke-cluster) in order for the script to work.
+
+The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively.
+
+The table below describes all variables.
+
+| **Variable** | **Description** | **Default value** |
+| ------------ | --------------- | ----------------- |
+| PROJECT      | The ID of your GCP project | No defaults, required to be set. |
+| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` |
+| CONFDIR | Configuration directory to store kubernetes config | Defaults to ${HOME}/.config/streamnative |
+| INT_NETWORK | The IP space to use within this cluster | `default` |
+| LOCAL_SSD_COUNT | The number of local SSDs | Defaults to 4 |
+| MACHINE_TYPE | The type of machine to use for nodes | `n1-standard-4` |
+| NUM_NODES | The number of nodes to be created in each of the cluster's zones | 4 |
+| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false |
+| REGION | Compute region for the cluster | `us-east1` |
+| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | Defaults to false |
+| ZONE | Compute zone for the cluster | `us-east1-b` |
+| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` |
+| EXTRA_CREATE_ARGS | Extra arguments passed to create command | |
+
+Run the script by passing in your desired parameters. It can work with the default parameters except for `PROJECT`, which is required:
+
+```bash
+
+PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh up
+
+```
+
+The script can also be used to clean up the created GKE resources:
+
+```bash
+
+PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh down
+
+```
+
+#### Create a cluster with local SSDs
+
+If you are planning to install a Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so using the provided script by specifying `USE_LOCAL_SSD` to be `true`. A sample command is listed as below:
+
+```bash
+
+PROJECT=<gcloud project id> USE_LOCAL_SSD=true LOCAL_SSD_COUNT=<local-ssd-count> scripts/pulsar/gke_bootstrap_script.sh up
+
+```
+
+## Next Steps
+
+Continue with the [installation of the chart](helm-deploy) once you have the cluster up and running.
diff --git a/site2/website-next/versioned_docs/version-2.5.0/helm-tools.md b/site2/website-next/versioned_docs/version-2.5.0/helm-tools.md
new file mode 100644
index 0000000..58d2ddc
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/helm-tools.md
@@ -0,0 +1,43 @@
+---
+id: helm-tools
+title: Required tools for deploying Pulsar Helm Chart
+sidebar_label: "Required Tools"
+original_id: helm-tools
+---
+
+Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally.
+
+## kubectl
+
+kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)).
+
+[Install kubectl locally by following the Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).
+
+The server version of kubectl cannot be obtained until we connect to a cluster. Proceed with setting up Helm.
+
+## Helm
+
+Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3.
+
+### Get Helm
+
+You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/).
+
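+Once both tools are installed, you can verify the client versions with:
+
+```bash
+
+kubectl version --client
+helm version
+
+```
+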
+### Next steps
+
+Once kubectl and Helm are configured, you can continue to configuring your [Kubernetes cluster](helm-prepare).
+
+## Additional information
+
+### Templates
+
+Templating in Helm is done via golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig).
+
+Some information on how all the inner workings behave:
+
+- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/)
+- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/)
+
+### Tips and tricks
+
+The Helm repository has some additional information on developing with Helm in its [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/).
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.5.0/helm-upgrade.md b/site2/website-next/versioned_docs/version-2.5.0/helm-upgrade.md
new file mode 100644
index 0000000..80778c6
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.5.0/helm-upgrade.md
@@ -0,0 +1,43 @@
+---
+id: helm-upgrade
+title: Upgrade a Pulsar Helm release
+sidebar_label: "Upgrade"
+original_id: helm-upgrade
+---
+
+Before upgrading your Pulsar installation, you need to check the changelog corresponding to the specific release you want to upgrade
+to and look for any release notes that might pertain to the new Pulsar chart version.
+
+We also recommend that you provide all values using the `helm upgrade --set key=value` syntax or `-f values.yml` instead of using `--reuse-values`, because some of the current values might be deprecated.
+
+> **NOTE**:
+>
+> You can retrieve your previous `--set` arguments cleanly, with `helm get values <release-name>`. If you direct this into a file (`helm get values <release-name> > pulsar.yaml`), you can safely
+> pass this file via `-f`. Thus `helm upgrade <release-name> charts/pulsar -f pulsar.yaml`. This safely replaces the behavior of `--reuse-values`.
+
+## Steps
+
+The following are the steps to upgrade Apache Pulsar to a newer version:
+
+1. Check the change log for the specific version you would like to upgrade to
+2. Go through [deployment documentation](helm-deploy) step by step
+3. Extract your previous `--set` arguments with
+
+   ```bash
+   
+   helm get values <release-name> > pulsar.yaml
+   
+   ```
+
+4. Decide on all the values you need to set
+5. Perform the upgrade, passing all the `--set` arguments extracted in step 3
+
+   ```bash
+   
+   helm upgrade <release-name> charts/pulsar \
+       --version <new version> \
+       -f pulsar.yaml \
+       --set ...
+   
+   ```
+
diff --git a/site2/website-next/versioned_sidebars/version-2.5.0-sidebars.json b/site2/website-next/versioned_sidebars/version-2.5.0-sidebars.json
index ab379f8..1c005a2 100644
--- a/site2/website-next/versioned_sidebars/version-2.5.0-sidebars.json
+++ b/site2/website-next/versioned_sidebars/version-2.5.0-sidebars.json
@@ -187,6 +187,100 @@
           "id": "version-2.5.0/sql-rest-api"
         }
       ]
+    },
+    {
+      "type": "category",
+      "label": "Kubernetes (Helm)",
+      "items": [
+        {
+          "type": "doc",
+          "id": "version-2.5.0/helm-overview"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/helm-prepare"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/helm-install"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/helm-deploy"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/helm-upgrade"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/helm-tools"
+        }
+      ]
+    },
+    {
+      "type": "category",
+      "label": "Deployment",
+      "items": [
+        {
+          "type": "doc",
+          "id": "version-2.5.0/deploy-aws"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/deploy-kubernetes"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/deploy-bare-metal"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/deploy-bare-metal-multi-cluster"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/deploy-dcos"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/deploy-monitoring"
+        }
+      ]
+    },
+    {
+      "type": "category",
+      "label": "Administration",
+      "items": [
+        {
+          "type": "doc",
+          "id": "version-2.5.0/administration-zk-bk"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/administration-geo"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/administration-pulsar-manager"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/administration-stats"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/administration-load-balance"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/administration-proxy"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.5.0/administration-upgrade"
+        }
+      ]
     }
   ]
 }
\ No newline at end of file