Posted to commits@pulsar.apache.org by ur...@apache.org on 2022/02/17 08:42:37 UTC

[pulsar-site] branch main updated: update all docs

This is an automated email from the ASF dual-hosted git repository.

urfree pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/pulsar-site.git


The following commit(s) were added to refs/heads/main by this push:
     new c7a9510  update all docs
c7a9510 is described below

commit c7a95109cbcead675e0b13c1be49e0745bef8145
Author: LiLi <ur...@apache.org>
AuthorDate: Thu Feb 17 16:42:26 2022 +0800

    update all docs
    
    Signed-off-by: LiLi <ur...@apache.org>
---
 site2/website-next/migrate/migrate-full.js         |  48 +-
 .../client-libraries.md}                           |   0
 .../standalone.md}                                 |   0
 .../client-libraries.md}                           |   0
 .../standalone.md}                                 |   0
 ...ting-started-clients.md => client-libraries.md} |   0
 .../standalone.md}                                 |   0
 .../client-libraries.md}                           |   0
 .../standalone.md}                                 |   0
 .../client-libraries.md}                           |   0
 .../standalone.md}                                 |   0
 .../client-libraries.md}                           |   0
 .../version-2.7.0/getting-started-clients.md       |  39 --
 .../version-2.7.0/getting-started-standalone.md    | 270 ----------
 .../standalone.md}                                 |   0
 .../client-libraries.md}                           |   0
 .../version-2.7.1/getting-started-clients.md       |  35 --
 ...getting-started-standalone.md => standalone.md} |   0
 .../client-libraries.md}                           |   0
 .../version-2.7.2/getting-started-clients.md       |  35 --
 .../version-2.7.2/getting-started-standalone.md    | 272 ----------
 .../standalone.md}                                 |   0
 ...ting-started-clients.md => client-libraries.md} |   0
 ...getting-started-standalone.md => standalone.md} |   0
 .../version-2.8.0/client-libraries.md              | 596 +-------------------
 .../version-2.8.0/developing-binary-protocol.md    | 581 --------------------
 .../version-2.8.0/developing-load-manager.md       | 227 --------
 .../version-2.8.0/developing-tools.md              | 111 ----
 .../version-2.8.0/getting-started-docker.md        | 179 ------
 .../version-2.8.0/getting-started-helm.md          | 438 ---------------
 .../version-2.8.0/getting-started-standalone.md    | 272 ----------
 .../versioned_docs/version-2.8.0/standalone.md     | 325 +++++++----
 .../version-2.8.1/client-libraries.md              | 596 +-------------------
 .../version-2.8.1/developing-binary-protocol.md    | 581 --------------------
 .../version-2.8.1/developing-load-manager.md       | 227 --------
 .../version-2.8.1/developing-tools.md              | 111 ----
 .../version-2.8.1/getting-started-docker.md        | 179 ------
 .../version-2.8.1/getting-started-helm.md          | 438 ---------------
 .../version-2.8.1/getting-started-standalone.md    | 272 ----------
 .../versioned_docs/version-2.8.1/standalone.md     | 325 +++++++----
 .../version-2.8.2/client-libraries.md              | 596 +-------------------
 .../version-2.8.2/developing-binary-protocol.md    | 581 --------------------
 .../version-2.8.2/developing-load-manager.md       | 227 --------
 .../version-2.8.2/developing-tools.md              | 111 ----
 .../version-2.8.2/getting-started-docker.md        | 179 ------
 .../version-2.8.2/getting-started-helm.md          | 438 ---------------
 .../versioned_docs/version-2.8.2/standalone.md     | 325 +++++++----
 .../version-2.9.0/client-libraries.md              | 597 +--------------------
 .../version-2.9.0/developing-binary-protocol.md    | 581 --------------------
 .../version-2.9.0/developing-load-manager.md       | 227 --------
 .../version-2.9.0/developing-tools.md              | 112 ----
 .../version-2.9.0/getting-started-clients.md       |  36 --
 .../version-2.9.0/getting-started-docker.md        | 214 --------
 .../version-2.9.0/getting-started-helm.md          | 441 ---------------
 .../version-2.9.0/getting-started-standalone.md    | 269 ----------
 .../versioned_docs/version-2.9.0/standalone.md     | 357 ++++++------
 .../version-2.9.1/client-libraries.md              | 597 +--------------------
 .../version-2.9.1/developing-binary-protocol.md    | 581 --------------------
 .../version-2.9.1/developing-load-manager.md       | 227 --------
 .../version-2.9.1/developing-tools.md              | 112 ----
 .../version-2.9.1/getting-started-clients.md       |  36 --
 .../version-2.9.1/getting-started-docker.md        | 214 --------
 .../version-2.9.1/getting-started-helm.md          | 441 ---------------
 .../version-2.9.1/getting-started-standalone.md    | 269 ----------
 .../versioned_docs/version-2.9.1/standalone.md     | 357 ++++++------
 site2/website-next/versions.json                   |  17 +-
 66 files changed, 1211 insertions(+), 13088 deletions(-)

diff --git a/site2/website-next/migrate/migrate-full.js b/site2/website-next/migrate/migrate-full.js
index 4de7e08..b21255b 100644
--- a/site2/website-next/migrate/migrate-full.js
+++ b/site2/website-next/migrate/migrate-full.js
@@ -16,29 +16,29 @@ if (typeof require !== "undefined" && require.main === module) {
   migrate([
     "next",
     "2.9.1",
-    // "2.9.0",
-    // "2.8.2",
-    // "2.8.1",
-    // "2.8.0",
-    // "2.7.3",
-    // "2.7.2",
-    // "2.7.1",
-    // "2.7.0",
-    // "2.6.4",
-    // "2.6.3",
-    // "2.6.2",
-    // "2.6.1",
-    // "2.6.0",
-    // "2.5.2",
-    // "2.5.1",
-    // "2.5.0",
-    // "2.4.2",
-    // "2.4.1",
-    // "2.4.0",
-    // "2.3.2",
-    // "2.3.1",
-    // "2.3.0",
-    // "2.2.1",
-    // "2.2.0",
+    "2.9.0",
+    "2.8.2",
+    "2.8.1",
+    "2.8.0",
+    "2.7.3",
+    "2.7.2",
+    "2.7.1",
+    "2.7.0",
+    "2.6.4",
+    "2.6.3",
+    "2.6.2",
+    "2.6.1",
+    "2.6.0",
+    "2.5.2",
+    "2.5.1",
+    "2.5.0",
+    "2.4.2",
+    "2.4.1",
+    "2.4.0",
+    "2.3.2",
+    "2.3.1",
+    "2.3.0",
+    "2.2.1",
+    "2.2.0",
   ]);
 }
diff --git a/site2/website-next/versioned_docs/version-2.6.4/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.6.0/client-libraries.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.4/getting-started-clients.md
rename to site2/website-next/versioned_docs/version-2.6.0/client-libraries.md
diff --git a/site2/website-next/versioned_docs/version-2.7.1/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.6.0/standalone.md
similarity index 100%
copy from site2/website-next/versioned_docs/version-2.7.1/getting-started-standalone.md
copy to site2/website-next/versioned_docs/version-2.6.0/standalone.md
diff --git a/site2/website-next/versioned_docs/version-2.6.3/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.6.1/client-libraries.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.3/getting-started-clients.md
rename to site2/website-next/versioned_docs/version-2.6.1/client-libraries.md
diff --git a/site2/website-next/versioned_docs/version-2.6.4/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.6.1/standalone.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.4/getting-started-standalone.md
rename to site2/website-next/versioned_docs/version-2.6.1/standalone.md
diff --git a/site2/website-next/versioned_docs/version-2.6.2/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.6.2/client-libraries.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.2/getting-started-clients.md
rename to site2/website-next/versioned_docs/version-2.6.2/client-libraries.md
diff --git a/site2/website-next/versioned_docs/version-2.6.3/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.6.2/standalone.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.3/getting-started-standalone.md
rename to site2/website-next/versioned_docs/version-2.6.2/standalone.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.6.3/client-libraries.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.1/getting-started-clients.md
rename to site2/website-next/versioned_docs/version-2.6.3/client-libraries.md
diff --git a/site2/website-next/versioned_docs/version-2.6.2/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.6.3/standalone.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.2/getting-started-standalone.md
rename to site2/website-next/versioned_docs/version-2.6.3/standalone.md
diff --git a/site2/website-next/versioned_docs/version-2.6.0/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.6.4/client-libraries.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.0/getting-started-clients.md
rename to site2/website-next/versioned_docs/version-2.6.4/client-libraries.md
diff --git a/site2/website-next/versioned_docs/version-2.6.1/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.6.4/standalone.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.1/getting-started-standalone.md
rename to site2/website-next/versioned_docs/version-2.6.4/standalone.md
diff --git a/site2/website-next/versioned_docs/version-2.8.2/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.7.0/client-libraries.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.8.2/getting-started-clients.md
rename to site2/website-next/versioned_docs/version-2.7.0/client-libraries.md
diff --git a/site2/website-next/versioned_docs/version-2.7.0/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.7.0/getting-started-clients.md
deleted file mode 100644
index 4194e34..0000000
--- a/site2/website-next/versioned_docs/version-2.7.0/getting-started-clients.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-id: client-libraries
-title: Pulsar client libraries
-sidebar_label: "Overview"
-original_id: client-libraries
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-
-Pulsar supports the following client libraries:
-
-- [Java client](client-libraries-java)
-- [Go client](client-libraries-go)
-- [Python client](client-libraries-python)
-- [C++ client](client-libraries-cpp)
-- [Node.js client](client-libraries-node)
-- [WebSocket client](client-libraries-websocket)
-- [C# client](client-libraries-dotnet)
-
-## Feature matrix
-The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
-
-## Third-party clients
-
-Besides the officially released clients, multiple third-party Pulsar client projects are available in different languages.
-
-> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
-
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | 
-| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell | 
-| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
-| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, type-safe, and reactive Scala client for Apache Pulsar |
-| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
-| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
diff --git a/site2/website-next/versioned_docs/version-2.7.0/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.7.0/getting-started-standalone.md
deleted file mode 100644
index a959e33..0000000
--- a/site2/website-next/versioned_docs/version-2.7.0/getting-started-standalone.md
+++ /dev/null
@@ -1,270 +0,0 @@
----
-slug: /
-id: standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: standalone
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components, all running inside a single Java Virtual Machine (JVM) process.
-
-> #### Pulsar in production? 
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of the installation process.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
-
-:::tip
-
-By default, Pulsar allocates 2G of JVM heap memory to start. You can change this in the `conf/pulsar_env.sh` file under `PULSAR_MEM`, which holds the extra options passed to the JVM.
-
-:::
-
-### Install Pulsar using binary release
-
-To get started with Pulsar, download a binary tarball release in one of the following ways:
-
-* download from the Apache mirror (<a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>)
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)  
-  
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-  
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:binary_release_url
-  
-  ```
-
-After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-#### What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
-`examples` | A Java JAR file containing [Pulsar Functions](functions-overview) examples.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview).
-`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and go to the next step, [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be installed successfully without the builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since the `2.1.0-incubating` release, Pulsar has shipped a separate binary distribution that contains all the `builtin` connectors.
-To enable these `builtin` connectors, download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror <a href="pulsar:connector_release_url" download>Pulsar IO Connectors @pulsar:version@ release</a>
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-  
-  ```
-
-After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. 
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in the pulsar directory of every broker
-(or in the pulsar directory of every function worker, if you run a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a Docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-Since the `2.2.0` release, Pulsar has shipped a separate binary distribution that contains the tiered storage offloaders.
-To enable the tiered storage feature, follow the instructions below; otherwise, skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders @pulsar:version@ release</a>
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:offloader_release_url
-  
-  ```
-
-After you download the tarball, untar the offloaders package and copy the resulting `offloaders` directory
-into the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-// then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a Docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, specifying that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Configuration Store cache started
-2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
-2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service runs in the foreground of your terminal, under your direct control. If you need to run other commands, open a new terminal window.
-
-:::
-
-You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-> 
-> * By default, no encryption, authentication, or authorization is configured. Apache Pulsar can be accessed from a remote server without any authorization. See the [Security Overview](security-overview) document to secure your deployment.
->
-> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster.
-
-### Consume a message
-
-The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-09:56:55.566 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
-
-```
-
-:::tip
-
-Notice that we did not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist also creates that topic automatically.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process started with the `pulsar-daemon start standalone` command, use the `pulsar-daemon stop standalone` command to stop the service.
-For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::
-
diff --git a/site2/website-next/versioned_docs/version-2.6.0/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.7.0/standalone.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.6.0/getting-started-standalone.md
rename to site2/website-next/versioned_docs/version-2.7.0/standalone.md
diff --git a/site2/website-next/versioned_docs/version-2.8.1/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.7.1/client-libraries.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.8.1/getting-started-clients.md
rename to site2/website-next/versioned_docs/version-2.7.1/client-libraries.md
diff --git a/site2/website-next/versioned_docs/version-2.7.1/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.7.1/getting-started-clients.md
deleted file mode 100644
index 23e5a06..0000000
--- a/site2/website-next/versioned_docs/version-2.7.1/getting-started-clients.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-id: client-libraries
-title: Pulsar client libraries
-sidebar_label: "Overview"
-original_id: client-libraries
----
-
-Pulsar supports the following client libraries:
-
-- [Java client](client-libraries-java)
-- [Go client](client-libraries-go)
-- [Python client](client-libraries-python)
-- [C++ client](client-libraries-cpp)
-- [Node.js client](client-libraries-node)
-- [WebSocket client](client-libraries-websocket)
-- [C# client](client-libraries-dotnet)
-
-## Feature matrix
-The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
-
-## Third-party clients
-
-Besides the officially released clients, multiple third-party Pulsar client projects are available in different languages.
-
-> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
-
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | 
-| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell | 
-| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
-| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, type-safe, and reactive Scala client for Apache Pulsar |
-| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
-| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
diff --git a/site2/website-next/versioned_docs/version-2.7.1/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.7.1/standalone.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.7.1/getting-started-standalone.md
rename to site2/website-next/versioned_docs/version-2.7.1/standalone.md
diff --git a/site2/website-next/versioned_docs/version-2.8.0/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.7.2/client-libraries.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.8.0/getting-started-clients.md
rename to site2/website-next/versioned_docs/version-2.7.2/client-libraries.md
diff --git a/site2/website-next/versioned_docs/version-2.7.2/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.7.2/getting-started-clients.md
deleted file mode 100644
index 23e5a06..0000000
--- a/site2/website-next/versioned_docs/version-2.7.2/getting-started-clients.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-id: client-libraries
-title: Pulsar client libraries
-sidebar_label: "Overview"
-original_id: client-libraries
----
-
-Pulsar supports the following client libraries:
-
-- [Java client](client-libraries-java)
-- [Go client](client-libraries-go)
-- [Python client](client-libraries-python)
-- [C++ client](client-libraries-cpp)
-- [Node.js client](client-libraries-node)
-- [WebSocket client](client-libraries-websocket)
-- [C# client](client-libraries-dotnet)
-
-## Feature matrix
-The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
-
-## Third-party clients
-
-Besides the officially released clients, multiple third-party Pulsar client projects are available in different languages.
-
-> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
-
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | 
-| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell | 
-| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
-| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, type-safe, and reactive Scala client for Apache Pulsar |
-| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
-| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
diff --git a/site2/website-next/versioned_docs/version-2.7.2/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.7.2/getting-started-standalone.md
deleted file mode 100644
index c2da381..0000000
--- a/site2/website-next/versioned_docs/version-2.7.2/getting-started-standalone.md
+++ /dev/null
@@ -1,272 +0,0 @@
----
-slug: /
-id: standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: standalone
----
-
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components, all running inside a single Java Virtual Machine (JVM) process.
-
-> #### Pulsar in production? 
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of the installation process.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
-
-:::tip
-
-By default, Pulsar allocates 2G of JVM heap memory to start. You can change this in the `conf/pulsar_env.sh` file under `PULSAR_MEM`, which holds the extra options passed to the JVM.
-
-:::
-
-:::note
-
-The broker is only supported on a 64-bit JVM.
-
-:::
-
-### Install Pulsar using binary release
-
-To get started with Pulsar, download a binary tarball release in one of the following ways:
-
-* download from the Apache mirror (<a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>)
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)  
-  
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-  
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:binary_release_url
-  
-  ```
-
-After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-#### What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
-`examples` | A Java JAR file containing [Pulsar Functions](functions-overview) examples.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview).
-`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and go to the next step, [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be installed successfully without the builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since the `2.1.0-incubating` release, Pulsar has shipped a separate binary distribution that contains all the `builtin` connectors.
-To enable these `builtin` connectors, download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror <a href="pulsar:connector_release_url" download>Pulsar IO Connectors @pulsar:version@ release</a>
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-  
-  ```
-
-After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. 
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in the pulsar directory of every broker
-(or in the pulsar directory of every function worker, if you run a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a Docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-Since the `2.2.0` release, Pulsar has shipped a separate binary distribution that contains the tiered storage offloaders.
-To enable the tiered storage feature, follow the instructions below; otherwise, skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders @pulsar:version@ release</a>
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:offloader_release_url
-  
-  ```
-
-After you download the tarball, untar the offloaders package and copy the resulting `offloaders` directory
-into the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-// then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a Docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, specifying that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Configuration Store cache started
-2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
-2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service runs in the foreground of your terminal, under your direct control. If you need to run other commands, open a new terminal window.
-
-:::
-
-You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-> 
-> * By default, no encryption, authentication, or authorization is configured. Apache Pulsar can be accessed from a remote server without any authorization. See the [Security Overview](security-overview) document to secure your deployment.
->
-> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster.
-
-### Consume a message
-
-The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-09:56:55.566 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
-
-```
-
-:::tip
-
-Notice that we did not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist also creates that topic automatically.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process started with the `pulsar-daemon start standalone` command, use the `pulsar-daemon stop standalone` command to stop the service.
-For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::
-
diff --git a/site2/website-next/versioned_docs/version-2.8.2/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.7.2/standalone.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.8.2/getting-started-standalone.md
rename to site2/website-next/versioned_docs/version-2.7.2/standalone.md
diff --git a/site2/website-next/versioned_docs/version-2.7.3/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.7.3/client-libraries.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.7.3/getting-started-clients.md
rename to site2/website-next/versioned_docs/version-2.7.3/client-libraries.md
diff --git a/site2/website-next/versioned_docs/version-2.7.3/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.7.3/standalone.md
similarity index 100%
rename from site2/website-next/versioned_docs/version-2.7.3/getting-started-standalone.md
rename to site2/website-next/versioned_docs/version-2.7.3/standalone.md
diff --git a/site2/website-next/versioned_docs/version-2.8.0/client-libraries.md b/site2/website-next/versioned_docs/version-2.8.0/client-libraries.md
index c79f7bb..23e5a06 100644
--- a/site2/website-next/versioned_docs/version-2.8.0/client-libraries.md
+++ b/site2/website-next/versioned_docs/version-2.8.0/client-libraries.md
@@ -1,579 +1,35 @@
 ---
-id: client-libraries-cgo
-title: Pulsar CGo client
-sidebar_label: "CGo(deprecated)"
-original_id: client-libraries-cgo
+id: client-libraries
+title: Pulsar client libraries
+sidebar_label: "Overview"
+original_id: client-libraries
 ---
 
-You can use the Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+Pulsar supports the following client libraries:
 
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
+- [Java client](client-libraries-java)
+- [Go client](client-libraries-go)
+- [Python client](client-libraries-python)
+- [C++ client](client-libraries-cpp)
+- [Node.js client](client-libraries-node)
+- [WebSocket client](client-libraries-websocket)
+- [C# client](client-libraries-dotnet)
 
-Currently, the following Go clients are maintained in two repositories.
+## Feature matrix
+The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
 
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library |
-| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-
-> **API docs available as well**  
-> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
-
-## Installation
-
-### Requirements
-
-The Pulsar Go client library is based on the C++ client library. Follow
-the instructions for the [C++ library](client-libraries-cpp) to install the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb), or [Homebrew packages](client-libraries-cpp.md#macos).
-
-### Install go package
-
-> **Compatibility Warning**  
-> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
-
-You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag; it always pulls in the master version of the Go client, so you need a C++ client library that matches master.
-
-```bash
-
-$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
-
-```
-
-Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
-
-```bash
-
-$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@
-
-```
-
-Once installed locally, you can import it into your project:
-
-```go
-
-import "github.com/apache/pulsar/pulsar-client-go/pulsar"
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication) authentication, the URL will look something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
-
-```go
-
-import (
-    "log"
-    "runtime"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-        OperationTimeoutSeconds: 5,
-        MessageListenerThreads: runtime.NumCPU(),
-    })
-
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-}
-
-```
-
-The following configurable parameters are available for Pulsar clients:
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info |
-`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
-`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
-`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
-`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
-`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
-`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
-`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
-`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
-`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
-
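-The TLS and authentication fields from the table combine with the basic options as shown in the sketch below. The sketch is illustrative only: the URL and certificate paths are placeholders, and the exact field types should be confirmed against the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
-
-```go
-
-package main
-
-import (
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Placeholder URL and certificate paths; NewAuthenticationTLS is the helper
-    // shown in the Authentication row of the table above.
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL:                        "pulsar+ssl://pulsar.us-west.example.com:6651",
-        OperationTimeoutSeconds:    30,
-        TLSTrustCertsFilePath:      "/path/to/cacert.pem",
-        TLSAllowInsecureConnection: false,
-        Authentication:             pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-    })
-
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-
-    defer client.Close()
-}
-
-```
-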
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Hello, Pulsar"),
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Producer could not send message: %v", err)
-}
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
-`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call blocks until the message is successfully acknowledged by the Pulsar broker. | (MessageID, error)
-`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
-`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports the message ID of the published message and any error encountered while publishing. | 
-`LastSequenceID()` | Gets the last sequence ID published by this producer. This represents either the automatically assigned or custom sequence ID (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
-`Schema()` | | Schema
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("the %s successfully published", string(msg.Payload))
-        })
-    }
-}
-
-```
-
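-The example above exercises `Send` and `SendAsync` only. The following is a minimal, illustrative sketch of how `Flush()`, `LastSequenceID()`, and `Close()` from the methods table might be used together; the broker URL and topic are placeholders.
-
-```go
-
-package main
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-    defer client.Close()
-
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Queue a few messages asynchronously ...
-    for i := 0; i < 5; i++ {
-        producer.SendAsync(ctx, pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-        })
-    }
-
-    // ... then flush so that everything buffered has been persisted before
-    // reading the last sequence id and closing the producer.
-    if err := producer.Flush(); err != nil { log.Fatal(err) }
-
-    fmt.Printf("last sequence id: %d\n", producer.LastSequenceID())
-
-    if err := producer.Close(); err != nil { log.Fatal(err) }
-}
-
-```
-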
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats. | 
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `puls [...]
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as arguments and returns an integer (the index of the partition to publish to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
-`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
-`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or the batch reaches `BatchingMaxMessages`. | 1ms
-`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch (default: 1000). If set to a value greater than 1, messages will be queued until this threshold is reached or the batch interval has elapsed. | 1000
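-
-As a sketch of how several of these options fit together, here is a producer created with a handful of the settings above. The field names come from the table; the exact field types (for example, `time.Duration` for `SendTimeout` and `BatchingMaxPublishDelay`) and the `pulsar.LZ4` constant are assumptions rather than guarantees.
-
-```go
-
-// A minimal sketch: a producer configured with several of the options listed above.
-// Field names are taken from the configuration table; exact types and constant
-// names are assumptions.
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:                   "my-topic",
-    Name:                    "my-producer",
-    SendTimeout:             30 * time.Second,
-    CompressionType:         pulsar.LZ4,
-    BlockIfQueueFull:        true,
-    Batching:                true,
-    BatchingMaxPublishDelay: 10 * time.Millisecond,
-    BatchingMaxMessages:     1000,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer producer.Close()
-
-```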
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
+## Third-party clients
 
-consumer, err := client.Subscribe(consumerOpts)
+Besides the officially released clients, there are multiple projects developing Pulsar clients in different languages.
 
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s", msg.ID())
-    fmt.Printf("Message value: %s", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
-
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`AckCumulativeID(MessageID)` |Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledge has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
-`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
-`NackID(MessageID)` | Acknowledge the failure to process a single message. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error
-
-#### Receive example
-
-Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client object to instantiate a consumer
-    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-        Topic:            "my-golang-topic",
-        SubscriptionName: "sub-1",
-        Type: pulsar.Exclusive,
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    defer consumer.Close()
-
-    ctx := context.Background()
-
-    // Listen indefinitely on the topic
-    for {
-        msg, err := consumer.Receive(ctx)
-        if err != nil { log.Fatal(err) }
-
-        // Do something with the message
-        err = processMessage(msg)
-
-        if err == nil {
-            // Message processed successfully
-            consumer.Ack(msg)
-        } else {
-            // Failed to process messages
-            consumer.Nack(msg)
-        }
-    }
-}
-
-```
-
-### Consumer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
-`Topics` | Specify a list of topics this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`SubscriptionName` | The subscription name for this consumer |
-`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats|
-`Name` | The name of the consumer |
-`AckTimeout` | Set the timeout for unacked messages | 0
-`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
-`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
-`SubscriptionInitPos` | The initial position at which the cursor will be set when subscribing | Latest
-`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
-`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
-`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
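-
-As a sketch of how several of these options fit together (field names come from the table above; `AckTimeout` and `NackRedeliveryDelay` are assumed to be `time.Duration` values):
-
-```go
-
-// A minimal sketch: a consumer configured with several of the options listed above.
-// Field names are taken from the configuration table; exact types are assumptions.
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:               "my-topic",
-    SubscriptionName:    "my-subscription-1",
-    Type:                pulsar.Shared,
-    AckTimeout:          30 * time.Second,
-    NackRedeliveryDelay: time.Minute,
-    ReceiverQueueSize:   2000,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer consumer.Close()
-
-```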
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic: "my-golang-topic",
-    StartMessageID: pulsar.LatestMessage,
-})
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
-
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`HasNext()` | Check if there is any message available to read from the current position| (bool, error)
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-
-#### "Next" example
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatalf("Could not create client: %v", err) }
-
-    // Use the client to instantiate a reader
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "my-golang-topic",
-        StartMessageID: pulsar.EarliestMessage,
-    })
-
-    if err != nil { log.Fatalf("Could not create reader: %v", err) }
-
-    defer reader.Close()
-
-    ctx := context.Background()
-
-    // Listen on the topic for incoming messages
-    for {
-        msg, err := reader.Next(ctx)
-        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
-
-        // Process the message
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-var lastSavedID []byte // the last saved message ID, read from an external store as []byte
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: DeserializeMessageID(lastSavedID),
-})
-
-```
-
-### Reader configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages 
-`Name` | The name of the reader 
-`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
-`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
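-
-As a sketch combining a few of these options (field names come from the table above):
-
-```go
-
-// A minimal sketch: a reader configured with a few of the options listed above.
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:             "my-golang-topic",
-    Name:              "my-reader",
-    StartMessageID:    pulsar.EarliestMessage,
-    ReceiverQueueSize: 2000,
-    ReadCompacted:     true,
-})
-if err != nil {
-    log.Fatal(err)
-}
-defer reader.Close()
-
-```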
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | `Value` and `Payload` are mutually exclusive; use `Value interface{}` for schema-based messages.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence id to assign to the current message
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport), you'll need to configure your client to do so:
-
- * Use `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## Schema
-
-This example shows how to create a producer and consumer with schema.
-
-```go
-
-// testJson matches the fields declared in exampleSchemaDef
-type testJson struct {
-	ID   int
-	Name string
-}
-
-var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
-	"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
-jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
-// create producer
-producer, err := client.CreateProducerWithSchema(ProducerOptions{
-	Topic: "jsonTopic",
-}, jsonSchema)
-err = producer.Send(context.Background(), ProducerMessage{
-	Value: &testJson{
-		ID:   100,
-		Name: "pulsar",
-	},
-})
-if err != nil {
-	log.Fatal(err)
-}
-defer producer.Close()
-//create consumer
-var s testJson
-consumerJS := NewJsonSchema(exampleSchemaDef, nil)
-consumer, err := client.SubscribeWithSchema(ConsumerOptions{
-	Topic:            "jsonTopic",
-	SubscriptionName: "sub-2",
-}, consumerJS)
-if err != nil {
-	log.Fatal(err)
-}
-msg, err := consumer.Receive(context.Background())
-if err != nil {
-	log.Fatal(err)
-}
-err = msg.GetValue(&s)
-if err != nil {
-	log.Fatal(err)
-}
-fmt.Println(s.ID) // output: 100
-fmt.Println(s.Name) // output: pulsar
-defer consumer.Close()
-
-```
+> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
 
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | |
+| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell | 
+| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
+| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
+| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
+| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
diff --git a/site2/website-next/versioned_docs/version-2.8.0/developing-binary-protocol.md b/site2/website-next/versioned_docs/version-2.8.0/developing-binary-protocol.md
deleted file mode 100644
index b233f10..0000000
--- a/site2/website-next/versioned_docs/version-2.8.0/developing-binary-protocol.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: develop-binary-protocol
-title: Pulsar binary protocol specification
-sidebar_label: "Binary protocol"
-original_id: develop-binary-protocol
----
-
-Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency.
-
-Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below.
-
-> ### Connection sharing
-> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction.
-
-All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand.
-
-## Framing
-
-Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB.
-
-The Pulsar protocol allows for two types of commands:
-
-1. **Simple commands** that do not carry a message payload.
-2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers.
-
-> Message payloads are passed in raw format rather than protobuf format for efficiency reasons.
-
-### Simple commands
-
-Simple (payload-free) commands have this basic structure:
-
-| Component   | Description                                                                             | Size (in bytes) |
-|:------------|:----------------------------------------------------------------------------------------|:----------------|
-| totalSize   | The size of the frame, counting everything that comes after it (in bytes)               | 4               |
-| commandSize | The size of the protobuf-serialized command                                             | 4               |
-| message     | The protobuf message serialized in a raw binary format (rather than in protobuf format) |                 |
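-
-As an illustration of the table above, the following Go sketch frames an already-serialized command into a simple (payload-free) frame. The `serializedCommand` byte slice stands in for the protobuf-encoded `BaseCommand`; this is illustrative code, not taken from any Pulsar client.
-
-```go
-
-import (
-    "bytes"
-    "encoding/binary"
-)
-
-// frameSimpleCommand wraps a protobuf-serialized BaseCommand into a
-// [totalSize][commandSize][command] frame. All sizes are written as
-// 4-byte unsigned big-endian integers.
-func frameSimpleCommand(serializedCommand []byte) []byte {
-    var buf bytes.Buffer
-    commandSize := uint32(len(serializedCommand))
-    totalSize := commandSize + 4 // the commandSize field plus the command itself
-
-    binary.Write(&buf, binary.BigEndian, totalSize)
-    binary.Write(&buf, binary.BigEndian, commandSize)
-    buf.Write(serializedCommand)
-    return buf.Bytes()
-}
-
-```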
-
-### Payload commands
-
-Payload commands have this basic structure:
-
-| Component    | Description                                                                                 | Size (in bytes) |
-|:-------------|:--------------------------------------------------------------------------------------------|:----------------|
-| totalSize    | The size of the frame, counting everything that comes after it (in bytes)                   | 4               |
-| commandSize  | The size of the protobuf-serialized command                                                 | 4               |
-| message      | The protobuf message serialized in a raw binary format (rather than in protobuf format)     |                 |
-| magicNumber  | A 2-byte byte array (`0x0e01`) identifying the current format                               | 2               |
-| checksum     | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4               |
-| metadataSize | The size of the message [metadata](#message-metadata)                                       | 4               |
-| metadata     | The message [metadata](#message-metadata) stored as a binary protobuf message               |                 |
-| payload      | Anything left in the frame is considered the payload and can include any sequence of bytes  |                 |
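-
-The checksummed portion of a payload frame can be built with a standard CRC32-C (Castagnoli) implementation, as in the following illustrative Go sketch (not taken from any client). The `metadata` argument stands in for the protobuf-serialized message metadata; the checksum covers everything that follows it, i.e. the metadata size, metadata, and payload.
-
-```go
-
-import (
-    "bytes"
-    "encoding/binary"
-    "hash/crc32"
-)
-
-// buildChecksummedSection assembles the magicNumber, checksum, metadataSize,
-// metadata, and payload portion of a payload frame.
-func buildChecksummedSection(metadata, payload []byte) []byte {
-    var checksummed bytes.Buffer
-    binary.Write(&checksummed, binary.BigEndian, uint32(len(metadata))) // metadataSize
-    checksummed.Write(metadata)
-    checksummed.Write(payload)
-
-    checksum := crc32.Checksum(checksummed.Bytes(), crc32.MakeTable(crc32.Castagnoli))
-
-    var out bytes.Buffer
-    out.Write([]byte{0x0e, 0x01})                  // magicNumber
-    binary.Write(&out, binary.BigEndian, checksum) // CRC32-C of everything after it
-    out.Write(checksummed.Bytes())
-    return out.Bytes()
-}
-
-```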
-
-## Message metadata
-
-Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer.
-
-| Field                                | Description                                                                                                                                                                                                                                               |
-|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `producer_name`                      | The name of the producer that published the message                                                                                                                                                                                         |
-| `sequence_id`                        | The sequence ID of the message, assigned by producer                                                                                                                                                                                        |
-| `publish_time`                       | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC)                                                                                                                                                    |
-| `properties`                         | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
-| `replicated_from` *(optional)*       | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published                                                                                                             |
-| `partition_key` *(optional)*         | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose                                                                                                                          |
-| `compression` *(optional)*           | Signals that payload has been compressed and with which compression library                                                                                                                                                                               |
-| `uncompressed_size` *(optional)*     | If compression is used, the producer must fill the uncompressed size field with the original payload size                                                                                                                                                 |
-| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch                                                                                                                   |
-
-### Batch messages
-
-When using batch messages, the payload contains a list of entries,
-each with its own metadata, defined by the `SingleMessageMetadata`
-object.
-
-
-For a single batch, the payload format will look like this:
-
-
-| Field         | Description                                                 |
-|:--------------|:------------------------------------------------------------|
-| metadataSizeN | The size of the single message metadata serialized Protobuf |
-| metadataN     | Single message metadata                                     |
-| payloadN      | Message payload passed by application                       |
-
-Each metadata field looks like this:
-
-| Field                      | Description                                             |
-|:---------------------------|:--------------------------------------------------------|
-| properties                 | Application-defined properties                          |
-| partition key *(optional)* | Key to indicate the hashing to a particular partition   |
-| payload_size               | Size of the payload for the single message in the batch |
-
-When compression is enabled, the whole batch will be compressed at once.
-
-## Interactions
-
-### Connection establishment
-
-After opening a TCP connection to a broker, typically on port 6650, the client
-is responsible for initiating the session.
-
-![Connect interaction](/assets/binary-protocol-connect.png)
-
-After receiving a `Connected` response from the broker, the client can
-consider the connection ready to use. Alternatively, if the broker fails to
-validate the client's authentication, it will reply with an `Error` command and
-close the TCP connection.
-
-Example:
-
-```protobuf
-
-message CommandConnect {
-  "client_version" : "Pulsar-Client-Java-v1.15.2",
-  "auth_method_name" : "my-authentication-plugin",
-  "auth_data" : "my-auth-data",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `client_version` → String based identifier. Format is not enforced
- * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
-   enabled
- * `auth_data` → *(optional)* Plugin specific authentication data
- * `protocol_version` → Indicates the protocol version supported by the
-   client. Broker will not send commands introduced in newer revisions of the
-   protocol. Broker might be enforcing a minimum version
-
-```protobuf
-
-message CommandConnected {
-  "server_version" : "Pulsar-Broker-v1.15.2",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `server_version` → String identifier of broker version
- * `protocol_version` → Protocol version supported by the broker. Client
-   must not attempt to send commands introduced in newer revisions of the
-   protocol
-
-### Keep Alive
-
-To identify prolonged network partitions between clients and brokers or cases
-in which a machine crashes without interrupting the TCP connection on the remote
-end (eg: power outage, kernel panic, hard reboot...), we have introduced a
-mechanism to probe for the availability status of the remote peer.
-
-Both clients and brokers send `Ping` commands periodically, and they will
-close the socket if a `Pong` response is not received within a timeout (the default
-used by the broker is 60s).
-
-A valid implementation of a Pulsar client is not required to send the `Ping`
-probe, though it is required to promptly reply after receiving one from the
-broker in order to prevent the remote side from forcibly closing the TCP connection.
-
-
-### Producer
-
-In order to send messages, a client needs to establish a producer. When creating
-a producer, the broker will first verify that this particular client is
-authorized to publish on the topic.
-
-Once the client gets confirmation of the producer creation, it can publish
-messages to the broker, referring to the producer id negotiated before.
-
-![Producer interaction](/assets/binary-protocol-producer.png)
-
-##### Command Producer
-
-```protobuf
-
-message CommandProducer {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "producer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete name of the topic on which you want to create the producer
- * `producer_id` → Client generated producer identifier. Needs to be unique
-    within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-    the originating request. Needs to be unique within the same connection
- * `producer_name` → *(optional)* If a producer name is specified, the name will
-    be used, otherwise the broker will generate a unique name. Generated
-    producer name is guaranteed to be globally unique. Implementations are
-    expected to let the broker generate a new producer name when the producer
-    is initially created, then reuse it when recreating the producer after
-    reconnections.
-
-The broker will reply with either `ProducerSuccess` or `Error` commands.
-
-##### Command ProducerSuccess
-
-```protobuf
-
-message CommandProducerSuccess {
-  "request_id" :  1,
-  "producer_name" : "generated-unique-producer-name"
-}
-
-```
-
-Parameters:
- * `request_id` → Original id of the `CreateProducer` request
- * `producer_name` → Generated globally unique producer name or the name
-    specified by the client, if any.
-
-##### Command Send
-
-Command `Send` is used to publish a new message within the context of an
-already existing producer. This command is used in a frame that includes command
-as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.
-
-```protobuf
-
-message CommandSend {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "num_messages" : 1
-}
-
-```
-
-Parameters:
- * `producer_id` → id of an existing producer
- * `sequence_id` → each message has an associated sequence id which is expected
-   to be implemented with a counter starting at 0. The `SendReceipt` that
-   acknowledges the effective publishing of a message will refer to it by
-   its sequence id.
- * `num_messages` → *(optional)* Used when publishing a batch of messages at
-   once.
-
-##### Command SendReceipt
-
-After a message has been persisted on the configured number of replicas, the
-broker will send the acknowledgment receipt to the producer.
-
-```protobuf
-
-message CommandSendReceipt {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `producer_id` → id of producer originating the send request
- * `sequence_id` → sequence id of the published message
- * `message_id` → message id assigned by the system to the published message
-   Unique within a single cluster. Message id is composed of 2 longs, `ledgerId`
-   and `entryId`, that reflect that this unique id is assigned when appending
-   to a BookKeeper ledger
-
-
-##### Command CloseProducer
-
-**Note**: *This command can be sent by either producer or broker*.
-
-When receiving a `CloseProducer` command, the broker will stop accepting any
-more messages for the producer, wait until all pending messages are persisted
-and then reply `Success` to the client.
-
-The broker can send a `CloseProducer` command to the client when it's performing
-a graceful failover (e.g. the broker is being restarted, or the topic is being unloaded
-by the load balancer to be transferred to a different broker).
-
-When receiving the `CloseProducer`, the client is expected to go through the
-service discovery lookup again and recreate the producer. The TCP
-connection is not affected.
-
-### Consumer
-
-A consumer is used to attach to a subscription and consume messages from it.
-After every reconnection, a client needs to subscribe to the topic. If a
-subscription is not already there, a new one will be created.
-
-![Consumer](/assets/binary-protocol-consumer.png)
-
-#### Flow control
-
-After the consumer is ready, the client needs to *give permission* to the
-broker to push messages. This is done with the `Flow` command.
-
-A `Flow` command gives additional *permits* to send messages to the consumer.
-A typical consumer implementation will use a queue to accumulate these messages
-before the application is ready to consume them.
-
-After the application has dequeued half of the messages in the queue, the consumer
-sends permits to the broker to ask for more messages (equal to half of the queue size).
-
-For example, if the queue size is 1000 and the consumer has consumed 500 messages from
-the queue, the consumer sends permits to the broker to ask for 500 more messages.
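-
-The permit accounting described above can be sketched as follows. This is illustrative only; `sendFlow` is a hypothetical callback standing in for sending a `Flow` command with the given number of permits.
-
-```go
-
-// trackPermits sketches the typical consumer-side flow control loop:
-// grant an initial batch of permits equal to the queue size (an assumption
-// of this sketch), then request queueSize/2 more permits every time half
-// of the queue has been dequeued by the application.
-func trackPermits(queueSize int, dequeued <-chan struct{}, sendFlow func(permits int)) {
-    sendFlow(queueSize)
-
-    dequeuedSinceLastFlow := 0
-    for range dequeued {
-        dequeuedSinceLastFlow++
-        if dequeuedSinceLastFlow >= queueSize/2 {
-            sendFlow(queueSize / 2)
-            dequeuedSinceLastFlow = 0
-        }
-    }
-}
-
-```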
-
-##### Command Subscribe
-
-```protobuf
-
-message CommandSubscribe {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "subscription" : "my-subscription-name",
-  "subType" : "Exclusive",
-  "consumer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete name of the topic on which you want to create the consumer
- * `subscription` → Subscription name
- * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
- * `consumer_id` → Client generated consumer identifier. Needs to be unique
-    within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-    the originating request. Needs to be unique within the same connection
- * `consumer_name` → *(optional)* Clients can specify a consumer name. This
-    name can be used to track a particular consumer in the stats. Also, in
-    Failover subscription type, the name is used to decide which consumer is
-    elected as *master* (the one receiving messages): consumers are sorted by
-    their consumer name and the first one is elected master.
-
-##### Command Flow
-
-```protobuf
-
-message CommandFlow {
-  "consumer_id" : 1,
-  "messagePermits" : 1000
-}
-
-```
-
-Parameters:
-* `consumer_id` → Id of an already established consumer
-* `messagePermits` → Number of additional permits to grant to the broker for
-  pushing more messages
-
-##### Command Message
-
-Command `Message` is used by the broker to push messages to an existing consumer,
-within the limits of the given permits.
-
-
-This command is used in a frame that includes the message payload as well, for
-which the complete format is specified in the [payload commands](#payload-commands)
-section.
-
-```protobuf
-
-message CommandMessage {
-  "consumer_id" : 1,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-##### Command Ack
-
-An `Ack` is used to signal to the broker that a given message has been
-successfully processed by the application and can be discarded by the broker.
-
-In addition, the broker will also maintain the consumer position based on the
-acknowledged messages.
-
-```protobuf
-
-message CommandAck {
-  "consumer_id" : 1,
-  "ack_type" : "Individual",
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `consumer_id` → Id of an already established consumer
- * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
- * `message_id` → Id of the message to acknowledge
- * `validation_error` → *(optional)* Indicates that the consumer has discarded
-   the messages due to: `UncompressedSizeCorruption`,
-   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
-
-##### Command CloseConsumer
-
-**Note**: *This command can be sent by either consumer or broker*.
-
-This command behaves the same as [`CloseProducer`](#command-closeproducer).
-
-##### Command RedeliverUnacknowledgedMessages
-
-A consumer can ask the broker to redeliver some or all of the pending messages
-that were pushed to that particular consumer and not yet acknowledged.
-
-The protobuf object accepts a list of message ids that the consumer wants to
-be redelivered. If the list is empty, the broker will redeliver all the
-pending messages.
-
-On redelivery, messages can be sent to the same consumer or, in the case of a
-shared subscription, spread across all available consumers.
-
-
-##### Command ReachedEndOfTopic
-
-This is sent by a broker to a particular consumer, whenever the topic
-has been "terminated" and all the messages on the subscription were
-acknowledged.
-
-The client should use this command to notify the application that no more
-messages are coming from the consumer.
-
-##### Command ConsumerStats
-
-This command is sent by the client to retrieve Subscriber and Consumer level 
-stats from the broker.
-Parameters:
- * `request_id` → Id of the request, used to correlate the request 
-      and the response.
- * `consumer_id` → Id of an already established consumer.
-
-##### Command ConsumerStatsResponse
-
-This is the broker's response to ConsumerStats request by the client. 
-It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
-If the `error_code` or the `error_message` field is set it indicates that the request has failed.
-
-##### Command Unsubscribe
-
-This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
-Parameters:
- * `request_id` → Id of the request.
- * `consumer_id` → Id of an already established consumer which needs to unsubscribe.
-
-
-## Service discovery
-
-### Topic lookup
-
-Topic lookup needs to be performed each time a client needs to create or
-reconnect a producer or a consumer. Lookup is used to discover which particular
-broker is serving the topic we are about to use.
-
-Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#lookup-of-topic)
-docs.
-
-Since Pulsar-1.16 it is also possible to perform the lookup within the binary
-protocol.
-
-For the sake of example, let's assume we have a service discovery component
-running at `pulsar://broker.example.com:6650`
-
-Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
-`pulsar://broker-2.example.com:6650`, ...
-
-A client can use a connection to the discovery service host to issue a
-`LookupTopic` command. The response can either be a broker hostname to
-connect to, or a broker hostname to which to retry the lookup.
-
-The `LookupTopic` command has to be used in a connection that has already
-gone through the `Connect` / `Connected` initial handshake.
-
-![Topic lookup](/assets/binary-protocol-topic-lookup.png)
-
-```protobuf
-
-message CommandLookupTopic {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1,
-  "authoritative" : false
-}
-
-```
-
-Fields:
- * `topic` → Topic name to lookup
- * `request_id` → Id of the request that will be passed with its response
- * `authoritative` → Initial lookup request should use false. When following a
-   redirect response, client should pass the same value contained in the
-   response
-
-##### LookupTopicResponse
-
-Example of response with successful lookup:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Connect",
-  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-Example of lookup response with redirection:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Redirect",
-  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-In this second case, we need to reissue the `LookupTopic` command request
-to `broker-2.example.com` and this broker will be able to give a definitive
-answer to the lookup request.
-
-### Partitioned topics discovery
-
-Partitioned topics metadata discovery is used to find out if a topic is a
-"partitioned topic" and how many partitions were set up.
-
-If the topic is marked as "partitioned", the client is expected to create
-multiple producers or consumers, one for each partition, using the `partition-X`
-suffix.
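-
-For example, once the client knows the partition count, it can derive the per-partition topic names as in this sketch (illustrative only, following the `partition-X` suffix convention described above):
-
-```go
-
-import "fmt"
-
-// partitionNames derives the per-partition topic names from the base topic
-// name and the partition count returned by the metadata lookup.
-func partitionNames(topic string, partitions int) []string {
-    names := make([]string, partitions)
-    for i := 0; i < partitions; i++ {
-        // e.g. persistent://my-property/my-cluster/my-namespace/my-topic-partition-0
-        names[i] = fmt.Sprintf("%s-partition-%d", topic, i)
-    }
-    return names
-}
-
-```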
-
-This information only needs to be retrieved the first time a producer or
-consumer is created. There is no need to do this after reconnections.
-
-The discovery of partitioned topics metadata works very similarly to the topic
-lookup. The client sends a request to the service discovery address and the
-response will contain the actual metadata.
-
-##### Command PartitionedTopicMetadata
-
-```protobuf
-
-message CommandPartitionedTopicMetadata {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1
-}
-
-```
-
-Fields:
- * `topic` → the topic for which to check the partitions metadata
- * `request_id` → Id of the request that will be passed with its response
-
-
-##### Command PartitionedTopicMetadataResponse
-
-Example of response with metadata:
-
-```protobuf
-
-message CommandPartitionedTopicMetadataResponse {
-  "request_id" : 1,
-  "response" : "Success",
-  "partitions" : 32
-}
-
-```
-
-## Protobuf interface
-
-All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website-next/versioned_docs/version-2.8.0/developing-load-manager.md b/site2/website-next/versioned_docs/version-2.8.0/developing-load-manager.md
deleted file mode 100644
index 509209b..0000000
--- a/site2/website-next/versioned_docs/version-2.8.0/developing-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-id: develop-load-manager
-title: Modular load manager
-sidebar_label: "Modular load manager"
-original_id: develop-load-manager
----
-
-The *modular load manager*, implemented in  [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load  [...]
-
-## Usage
-
-There are two ways that you can enable the modular load manager:
-
-1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
-2. Using the `pulsar-admin` tool. Here's an example:
-
-   ```shell
-   
-   $ pulsar-admin brokers update-dynamic-config \
-    --config loadManagerClassName \
-    --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
-   
-   ```
-
-   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
-
-## Verification
-
-There are a few different ways to determine which load manager is being used:
-
-1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
-
-   ```shell
-   
-   $ bin/pulsar-admin brokers get-all-dynamic-config
-   {
-    "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
-   }
-   
-   ```
-
-   If there is no `loadManagerClassName` element, then the default load manager is used.
-
-2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` will have many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
-
-   ```json
-   
-   {
-     "bandwidthIn": {
-       "limit": 10240000.0,
-       "usage": 4.256510416666667
-     },
-     "bandwidthOut": {
-       "limit": 10240000.0,
-       "usage": 5.287239583333333
-     },
-     "bundles": [],
-     "cpu": {
-       "limit": 2400.0,
-       "usage": 5.7353247655435915
-     },
-     "directMemory": {
-       "limit": 16384.0,
-       "usage": 1.0
-     }
-   }
-   
-   ```
-
-   With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
-
-   ```json
-   
-   {
-     "systemResourceUsage": {
-       "bandwidthIn": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "bandwidthOut": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "cpu": {
-         "limit": 2400.0,
-         "usage": 0.0
-       },
-       "directMemory": {
-         "limit": 16384.0,
-         "usage": 1.0
-       },
-       "memory": {
-         "limit": 8192.0,
-         "usage": 3903.0
-       }
-     }
-   }
-   
-   ```
-
-3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
-
-   Here is an example from the modular load manager:
-
-   ```
-   
-   ===================================================================================================================
-   ||SYSTEM         |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.00           |48.33          |0.01           |0.00           |0.00           |48.33          ||
-   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
-   ||               |4              |4              |0              |2              |4              |0              ||
-   ||LATEST         |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ||SHORT          |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ||LONG           |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ===================================================================================================================
-   
-   ```
-
-   Here is an example from the simple load manager:
-
-   ```
-   
-   ===================================================================================================================
-   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
-   ||               |4              |4              |0              |2              |0              |0              ||
-   ||RAW SYSTEM     |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.25           |47.94          |0.01           |0.00           |0.00           |47.94          ||
-   ||ALLOC SYSTEM   |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.20           |1.89           |               |1.27           |3.21           |3.21           ||
-   ||RAW MSG        |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.01           |0.01           |0.01           ||
-   ||ALLOC MSG      |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |54.84          |134.48         |189.31         |126.54         |320.96         |447.50         ||
-   ===================================================================================================================
-   
-   ```
-
-It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
-
-## Implementation
-
-### Data
-
-The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
-Here, the available data is subdivided into the bundle data and the broker data.
-
-#### Broker
-
-The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
-one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
-data which is written to ZooKeeper by the leader broker.
-
-##### Local Broker Data
-The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
-
-* CPU usage
-* JVM heap memory usage
-* Direct memory usage
-* Bandwidth in/out usage
-* Most recent total message rate in/out across all bundles
-* Total number of topics, bundles, producers, and consumers
-* Names of all bundles assigned to this broker
-* Most recent changes in bundle assignments for this broker
-
-The local broker data is updated periodically according to the service configuration
-`loadBalancerReportUpdateMaxIntervalMinutes`. After any broker updates its local broker data, the leader broker will
-receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
-`/loadbalance/brokers/<broker host/port>`.
-
-##### Historical Broker Data
-
-The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
-
-In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
-
-* Message rate in/out for the entire broker
-* Message throughput in/out for the entire broker
-
-Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as new bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
-
-The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes their local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-##### Bundle Data
-
-The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java). Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. The information maintained in each time frame:
-
-* Message rate in/out for this bundle
-* Message Throughput In/Out for this bundle
-* Current number of samples for this bundle
-
-The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where
-the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval
-for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the
-short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
-data is similarly over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame,
-the average is taken only over the existing samples. When no samples are available, default values are assumed until
-they are overwritten by the first sample. Currently, the default values are
-
-* Message rate in/out: 50 messages per second both ways
-* Message throughput in/out: 50KB per second both ways
-
-The bundle data is updated in memory on the leader broker whenever any broker writes their local data to ZooKeeper.
-Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
-broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
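-For illustration, the following is a minimal Java sketch of how such a bounded-sample average can be maintained. It is
-not the actual `BundleData` implementation; the class and field names are illustrative only.
-
-```java
-
-// Illustrative sketch only -- not the actual Pulsar BundleData implementation.
-// Maintains an average over at most `maxSamples` samples, as described above.
-public class BoundedSampleAverage {
-    private final int maxSamples; // e.g. 10 for short term, 1000 for long term
-    private double average;       // seeded with the default value (e.g. 50)
-    private int numSamples;
-
-    public BoundedSampleAverage(int maxSamples, double defaultValue) {
-        this.maxSamples = maxSamples;
-        this.average = defaultValue;
-    }
-
-    // Called once per local-data update with the latest observed rate or throughput.
-    public void update(double latestSample) {
-        if (numSamples < maxSamples) {
-            numSamples++;
-        }
-        // The first sample overwrites the default; later samples are averaged in, so the
-        // value reflects roughly the last `maxSamples` update intervals.
-        average = (numSamples == 1)
-                ? latestSample
-                : average + (latestSample - average) / numSamples;
-    }
-
-    public double getAverage() {
-        return average;
-    }
-}
-
-```
-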
-### Traffic Distribution
-
-The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](h [...]
-
-#### Least Long Term Message Rate Strategy
-
-As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
-the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
-on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
-resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
-assignment process. This is done by weighting the final message rate according to
-`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
-`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
-that is being utilized by the candidate broker. This multiplier ensures that machines that are more heavily taxed
-by the same message rates receive less load. In particular, it tries to ensure that if one machine is overloaded,
-then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload
-threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
-assigned.
-
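-The following is a minimal Java sketch of the weighting and selection described above. It is illustrative only and is
-not the actual `LeastLongTermMessageRate` code; the candidate data holder and broker names are hypothetical (it uses a
-Java `record` for brevity).
-
-```java
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.concurrent.ThreadLocalRandom;
-
-// Illustrative sketch of the weighting described above.
-public class LeastLongTermRateSketch {
-
-    /** Long-term message rate (in + out) and max resource usage (0.0 - 1.0) of one candidate broker. */
-    record Candidate(String broker, double longTermMsgRate, double maxResourceUsage) {}
-
-    // overloadThreshold corresponds to loadBalancerBrokerOverloadedThresholdPercentage / 100.0
-    static String selectBroker(List<Candidate> candidates, double overloadThreshold) {
-        String best = null;
-        double bestScore = Double.POSITIVE_INFINITY;
-        List<String> all = new ArrayList<>();
-        for (Candidate c : candidates) {
-            all.add(c.broker());
-            if (c.maxResourceUsage() >= overloadThreshold) {
-                continue; // overloaded brokers are not considered for assignment
-            }
-            // Weight the message rate by 1 / (overload_threshold - max_usage): brokers whose
-            // resources are more heavily used look "busier" for the same message rate.
-            double score = c.longTermMsgRate() / (overloadThreshold - c.maxResourceUsage());
-            if (score < bestScore) {
-                bestScore = score;
-                best = c.broker();
-            }
-        }
-        // If every broker is overloaded, assign the bundle to a random broker.
-        return best != null ? best : all.get(ThreadLocalRandom.current().nextInt(all.size()));
-    }
-}
-
-```
-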
diff --git a/site2/website-next/versioned_docs/version-2.8.0/developing-tools.md b/site2/website-next/versioned_docs/version-2.8.0/developing-tools.md
deleted file mode 100644
index b545779..0000000
--- a/site2/website-next/versioned_docs/version-2.8.0/developing-tools.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-id: develop-tools
-title: Simulation tools
-sidebar_label: "Simulation tools"
-original_id: develop-tools
----
-
-It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
-handle the load. The load simulation controller, the load simulation client, and the broker monitor were created to
-make it easier to create this load and observe its effects on the managers.
-
-## Simulation Client
-The simulation client is a machine that creates and subscribes to topics with configurable message rates and sizes.
-Because simulating a large load sometimes requires multiple client machines, the user does not interact
-with the simulation client directly, but instead delegates requests to the simulation controller, which then
-sends signals to clients to start incurring load. The client implementation is in the class
-`org.apache.pulsar.testclient.LoadSimulationClient`.
-
-### Usage
-To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
-
-```
-
-pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
-
-```
-
-The client will then be ready to receive controller commands.
-## Simulation Controller
-The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
-topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class
-`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send
-commands with.
-
-### Usage
-To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:
-
-```
-
-pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for clients>
---clients <comma-separated list of client host names>
-
-```
-
-The clients should already be started before the controller is started. You will then be presented with a simple prompt,
-where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
-names. In all cases, the BASE name of the tenants, namespaces, and topics are used. For example, for the topic
-`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
-`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:
-
-* Create a topic with a producer and a consumer
-  * `trade <tenant> <namespace> <topic> [--rate <message rate per second>]
-  [--rand-rate <lower bound>,<upper bound>]
-  [--size <message size in bytes>]`
-* Create a group of topics with a producer and a consumer
-  * `trade_group <tenant> <group> <num_namespaces> [--rate <message rate per second>]
-  [--rand-rate <lower bound>,<upper bound>]
-  [--separation <separation between creating topics in ms>] [--size <message size in bytes>]
-  [--topics-per-namespace <number of topics to create per namespace>]`
-* Change the configuration of an existing topic
-  * `change <tenant> <namespace> <topic> [--rate <message rate per second>]
-  [--rand-rate <lower bound>,<upper bound>]
-  [--size <message size in bytes>]`
-* Change the configuration of a group of topics
-  * `change_group <tenant> <group> [--rate <message rate per second>] [--rand-rate <lower bound>,<upper bound>]
-  [--size <message size in bytes>] [--topics-per-namespace <number of topics to create per namespace>]`
-* Shutdown a previously created topic
-  * `stop <tenant> <namespace> <topic>`
-* Shutdown a previously created group of topics
-  * `stop_group <tenant> <group>`
-* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that history
-  * `copy <tenant> <source zookeeper> <target zookeeper> [--rate-multiplier value]`
-* Simulate the load of the historical data on the current ZooKeeper (should be same ZooKeeper being simulated on)
-  * `simulate <tenant> <zookeeper> [--rate-multiplier value]`
-* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
-  * `stream <tenant> <zookeeper> [--rate-multiplier value]`
-
-The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
-when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
-with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
-`zookeeper_host:port`.
-
-### Difference Between Copy, Simulate, and Stream
-The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
-you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
-`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
-simulating on, and then it will get the full benefit of the historical data of the source in both load manager
-implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on. It assumes
-that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent
-historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the
-clients. Finally, `stream` takes in an active ZooKeeper different than the ZooKeeper being simulated on and streams
-load data from it and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the
-user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to
-be sent at only `5%` of the rate of the load that is being simulated.
-
-## Broker Monitor
-To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is
-implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor will print tabular load data to the
-console as it is updated using watchers.
-
-### Usage
-To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:
-
-```
-
-pulsar-perf monitor-brokers --connect-string <zookeeper host:port>
-
-```
-
-The console will then continuously print load data until it is interrupted.
-
diff --git a/site2/website-next/versioned_docs/version-2.8.0/getting-started-docker.md b/site2/website-next/versioned_docs/version-2.8.0/getting-started-docker.md
deleted file mode 100644
index 05ac2a1..0000000
--- a/site2/website-next/versioned_docs/version-2.8.0/getting-started-docker.md
+++ /dev/null
@@ -1,179 +0,0 @@
----
-id: standalone-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: standalone-docker
----
-
-For local development and testing, you can run Pulsar in standalone
-mode on your own machine within a Docker container.
-
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
-and follow the instructions for your OS.
-
-## Start Pulsar in Docker
-
-* For MacOS, Linux, and Windows:
-
-  ```shell
-  
-  $ docker run -it \
-  -p 6650:6650 \
-  -p 8080:8080 \
-  --mount source=pulsardata,target=/pulsar/data \
-  --mount source=pulsarconf,target=/pulsar/conf \
-  apachepulsar/pulsar:@pulsar:version@ \
-  bin/pulsar standalone
-  
-  ```
-
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes so that the container does not start "fresh" every
-time it is restarted. For details on the volumes, you can use `docker volume inspect <sourcename>`.
- * For Docker on Windows, make sure to configure it to use Linux containers.
-
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```
-
-2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
-2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
-...
-
-```
-
-:::tip
-
-When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces.
-For more information, see [Topics](concepts-messaging.md#topics).
-
-:::
-
-## Use Pulsar in Docker
-
-Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python) 
-and [C++](client-libraries-cpp). If you're running a local standalone cluster, you can
-use one of these root URLs to interact with your cluster:
-
-* `pulsar://localhost:6650`
-* `http://localhost:8080`
-
-The following example guides you through getting started with Pulsar quickly by using the [Python](client-libraries-python)
-client API.
-
-Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):
-
-```shell
-
-$ pip install pulsar-client
-
-```
-
-### Consume a message
-
-Create a consumer and subscribe to the topic:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('my-topic',
-                            subscription_name='my-sub')
-
-while True:
-    msg = consumer.receive()
-    print("Received message: '%s'" % msg.data())
-    consumer.acknowledge(msg)
-
-client.close()
-
-```
-
-### Produce a message
-
-Now start a producer to send some test messages:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-producer = client.create_producer('my-topic')
-
-for i in range(10):
-    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))
-
-client.close()
-
-```
-
-## Get the topic statistics
-
-In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
-For details on APIs, refer to [Admin API Overview](admin-api-overview).
-
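-For example, you can read the same stats programmatically with the Java admin client. The following is a minimal
-sketch; it assumes the `org.apache.pulsar:pulsar-client-admin` dependency is on the classpath, and the exact accessor
-names on the returned stats object may vary between Pulsar versions.
-
-```java
-
-import org.apache.pulsar.client.admin.PulsarAdmin;
-
-public class TopicStatsExample {
-    public static void main(String[] args) throws Exception {
-        // Connect to the standalone cluster's HTTP service URL.
-        PulsarAdmin admin = PulsarAdmin.builder()
-                .serviceHttpUrl("http://localhost:8080")
-                .build();
-
-        // Fetch and print the stats for the topic; recent versions expose getters
-        // such as getMsgRateIn() / getMsgRateOut() for individual metrics.
-        System.out.println(admin.topics().getStats("persistent://public/default/my-topic"));
-
-        admin.close();
-    }
-}
-
-```
-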
-In the simplest example, you can use curl to probe the stats for a particular topic:
-
-```shell
-
-$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
-
-```
-
-The output is something like this:
-
-```json
-
-{
-  "averageMsgSize": 0.0,
-  "msgRateIn": 0.0,
-  "msgRateOut": 0.0,
-  "msgThroughputIn": 0.0,
-  "msgThroughputOut": 0.0,
-  "publishers": [
-    {
-      "address": "/172.17.0.1:35048",
-      "averageMsgSize": 0.0,
-      "clientVersion": "1.19.0-incubating",
-      "connectedSince": "2017-08-09 20:59:34.621+0000",
-      "msgRateIn": 0.0,
-      "msgThroughputIn": 0.0,
-      "producerId": 0,
-      "producerName": "standalone-0-1"
-    }
-  ],
-  "replication": {},
-  "storageSize": 16,
-  "subscriptions": {
-    "my-sub": {
-      "blockedSubscriptionOnUnackedMsgs": false,
-      "consumers": [
-        {
-          "address": "/172.17.0.1:35064",
-          "availablePermits": 996,
-          "blockedConsumerOnUnackedMsgs": false,
-          "clientVersion": "1.19.0-incubating",
-          "connectedSince": "2017-08-09 21:05:39.222+0000",
-          "consumerName": "166111",
-          "msgRateOut": 0.0,
-          "msgRateRedeliver": 0.0,
-          "msgThroughputOut": 0.0,
-          "unackedMessages": 0
-        }
-      ],
-      "msgBacklog": 0,
-      "msgRateExpired": 0.0,
-      "msgRateOut": 0.0,
-      "msgRateRedeliver": 0.0,
-      "msgThroughputOut": 0.0,
-      "type": "Exclusive",
-      "unackedMessages": 0
-    }
-  }
-}
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.8.0/getting-started-helm.md b/site2/website-next/versioned_docs/version-2.8.0/getting-started-helm.md
deleted file mode 100644
index bbbd307..0000000
--- a/site2/website-next/versioned_docs/version-2.8.0/getting-started-helm.md
+++ /dev/null
@@ -1,438 +0,0 @@
----
-id: kubernetes-helm
-title: Get started in Kubernetes
-sidebar_label: "Run Pulsar in Kubernetes"
-original_id: kubernetes-helm
----
-
-This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections:
-
-- Install the Apache Pulsar on Kubernetes using Helm
-- Start and stop Apache Pulsar
-- Create topics using `pulsar-admin`
-- Produce and consume messages using Pulsar clients
-- Monitor Apache Pulsar status with Prometheus and Grafana
-
-For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy).
-
-## Prerequisite
-
-- Kubernetes server 1.14.0+
-- kubectl 1.14.0+
-- Helm 3.0+
-
-:::tip
-
-For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**.
-
-:::
-
-## Step 0: Prepare a Kubernetes cluster
-
-Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare) to prepare a Kubernetes cluster.
-
-We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps:
-
-1. Create a Kubernetes cluster on Minikube.
-
-   ```bash
-   
-   minikube start --memory=8192 --cpus=4 --kubernetes-version=<k8s-version>
-   
-   ```
-
-   The `<k8s-version>` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`.
-
-2. Set `kubectl` to use Minikube.
-
-   ```bash
-   
-   kubectl config use-context minikube
-   
-   ```
-
-3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below:
-
-   ```bash
-   
-   minikube dashboard
-   
-   ```
-
-   The command automatically triggers opening a webpage in your browser. 
-
-## Step 1: Install Pulsar Helm chart
-
-0. Add Pulsar charts repo.
-
-   ```bash
-   
-   helm repo add apache https://pulsar.apache.org/charts
-   
-   ```
-
-   ```bash
-   
-   helm repo update
-   
-   ```
-
-1. Clone the Pulsar Helm chart repository.
-
-   ```bash
-   
-   git clone https://github.com/apache/pulsar-helm-chart
-   cd pulsar-helm-chart
-   
-   ```
-
-2. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager.
-
-   ```bash
-   
-   ./scripts/pulsar/prepare_helm_release.sh \
-       -n pulsar \
-       -k pulsar-mini \
-       -c
-   
-   ```
-
-3. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes.
-
-   > **NOTE**  
-   > You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar.
-
-   ```bash
-   
-   helm install \
-       --values examples/values-minikube.yaml \
-       --set initialize=true \
-       --namespace pulsar \
-       pulsar-mini apache/pulsar
-   
-   ```
-
-4. Check the status of all pods.
-
-   ```bash
-   
-   kubectl get pods -n pulsar
-   
-   ```
-
-   If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`.
-
-   **Output**
-
-   ```bash
-   
-   NAME                                         READY   STATUS      RESTARTS   AGE
-   pulsar-mini-bookie-0                         1/1     Running     0          9m27s
-   pulsar-mini-bookie-init-5gphs                0/1     Completed   0          9m27s
-   pulsar-mini-broker-0                         1/1     Running     0          9m27s
-   pulsar-mini-grafana-6b7bcc64c7-4tkxd         1/1     Running     0          9m27s
-   pulsar-mini-prometheus-5fcf5dd84c-w8mgz      1/1     Running     0          9m27s
-   pulsar-mini-proxy-0                          1/1     Running     0          9m27s
-   pulsar-mini-pulsar-init-t7cqt                0/1     Completed   0          9m27s
-   pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs   1/1     Running     0          9m27s
-   pulsar-mini-toolset-0                        1/1     Running     0          9m27s
-   pulsar-mini-zookeeper-0                      1/1     Running     0          9m27s
-   
-   ```
-
-5. Check the status of all services in the namespace `pulsar`.
-
-   ```bash
-   
-   kubectl get services -n pulsar
-   
-   ```
-
-   **Output**
-
-   ```bash
-   
-   NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
-   pulsar-mini-bookie           ClusterIP      None             <none>        3181/TCP,8000/TCP             11m
-   pulsar-mini-broker           ClusterIP      None             <none>        8080/TCP,6650/TCP             11m
-   pulsar-mini-grafana          LoadBalancer   10.106.141.246   <pending>     3000:31905/TCP                11m
-   pulsar-mini-prometheus       ClusterIP      None             <none>        9090/TCP                      11m
-   pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   11m
-   pulsar-mini-pulsar-manager   LoadBalancer   10.103.192.175   <pending>     9527:30190/TCP                11m
-   pulsar-mini-toolset          ClusterIP      None             <none>        <none>                        11m
-   pulsar-mini-zookeeper        ClusterIP      None             <none>        2888/TCP,3888/TCP,2181/TCP    11m
-   
-   ```
-
-## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics
-
-`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics.
-
-1. Enter the `toolset` container.
-
-   ```bash
-   
-   kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash
-   
-   ```
-
-2. In the `toolset` container, create a tenant named `apache`.
-
-   ```bash
-   
-   bin/pulsar-admin tenants create apache
-   
-   ```
-
-   Then you can list the tenants to see if the tenant is created successfully.
-
-   ```bash
-   
-   bin/pulsar-admin tenants list
-   
-   ```
-
-   You should see a similar output as below. The tenant `apache` has been successfully created. 
-
-   ```bash
-   
-   "apache"
-   "public"
-   "pulsar"
-   
-   ```
-
-3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`.
-
-   ```bash
-   
-   bin/pulsar-admin namespaces create apache/pulsar
-   
-   ```
-
-   Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully.
-
-   ```bash
-   
-   bin/pulsar-admin namespaces list apache
-   
-   ```
-
-   You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. 
-
-   ```bash
-   
-   "apache/pulsar"
-   
-   ```
-
-4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4
-   
-   ```
-
-5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   bin/pulsar-admin topics list-partitioned-topics apache/pulsar
-   
-   ```
-
-   Then you can see all the partitioned topics in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   "persistent://apache/pulsar/test-topic"
-   
-   ```
-
-## Step 3: Use Pulsar client to produce and consume messages
-
-You can use the Pulsar client to create producers and consumers to produce and consume messages.
-
-By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service.
-
-```bash
-
-kubectl get services -n pulsar | grep pulsar-mini-proxy
-
-```
-
-You will see a similar output as below.
-
-```bash
-
-pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   28m
-
-```
-
-This output tells you which node ports the Pulsar cluster's binary port and HTTP port are mapped to. The port after `80:` is the HTTP port, while the port after `6650:` is the binary port.
-
-Then you can find the IP address and exposed ports of your Minikube server by running the following command.
-
-```bash
-
-minikube service pulsar-mini-proxy -n pulsar
-
-```
-
-**Output**
-
-```bash
-
-|-----------|-------------------|-------------|-------------------------|
-| NAMESPACE |       NAME        | TARGET PORT |           URL           |
-|-----------|-------------------|-------------|-------------------------|
-| pulsar    | pulsar-mini-proxy | http/80     | http://172.17.0.4:32305 |
-|           |                   | pulsar/6650 | http://172.17.0.4:31816 |
-|-----------|-------------------|-------------|-------------------------|
-🏃  Starting tunnel for service pulsar-mini-proxy.
-|-----------|-------------------|-------------|------------------------|
-| NAMESPACE |       NAME        | TARGET PORT |          URL           |
-|-----------|-------------------|-------------|------------------------|
-| pulsar    | pulsar-mini-proxy |             | http://127.0.0.1:61853 |
-|           |                   |             | http://127.0.0.1:61854 |
-|-----------|-------------------|-------------|------------------------|
-
-```
-
-At this point, you have the service URLs that your Pulsar client can connect to. Here are example URLs:
-
-```
-
-webServiceUrl=http://127.0.0.1:61853/
-brokerServiceUrl=pulsar://127.0.0.1:61854/
-
-```
-
-Then you can proceed with the following steps:
-
-1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/en/download/).
-
-2. Decompress the tarball based on your download file.
-
-   ```bash
-   
-   tar -xf <file-name>.tar.gz
-   
-   ```
-
-3. Expose `PULSAR_HOME`.
-
-   (1) Enter the directory of the decompressed download file.
-
-   (2) Expose `PULSAR_HOME` as the environment variable.
-
-   ```bash
-   
-   export PULSAR_HOME=$(pwd)
-   
-   ```
-
-4. Configure the Pulsar client.
-
-   In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps.
-
-5. Create a subscription to consume messages from `apache/pulsar/test-topic`.
-
-   ```bash
-   
-   bin/pulsar-client consume -s sub apache/pulsar/test-topic  -n 0
-   
-   ```
-
-6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic.
-
-   ```bash
-   
-   bin/pulsar-client produce apache/pulsar/test-topic  -m "---------hello apache pulsar-------" -n 10
-   
-   ```
-
-7. Verify the results.
-
-   - From the producer side
-
-       **Output**
-       
-       The messages have been produced successfully.
-
-       ```bash
-       
-       18:15:15.489 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced
-       
-       ```
-
-   - From the consumer side
-
-       **Output**
-
-       At the same time, you can receive the messages as below.
-
-       ```bash
-       
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       
-       ```
-
-## Step 4: Use Pulsar Manager to manage the cluster
-
-[Pulsar Manager](administration-pulsar-manager) is a web-based GUI management tool for managing and monitoring Pulsar.
-
-1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command:
-
-   ```bash
-   
-   minikube service -n pulsar pulsar-mini-pulsar-manager
-   
-   ```
-
-2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager.
-
-3. In Pulsar Manager UI, you can create an environment. 
-
-   - Click `New Environment` button in the top-left corner.
-   - Type `pulsar-mini` for the field `Environment Name` in the popup window.
-   - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window.
-   - Click `Confirm` button in the popup window.
-
-4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces`, and `topics` using the Pulsar Manager.
-
-## Step 5: Use Prometheus and Grafana to monitor cluster
-
-Grafana is an open-source visualization tool, which can be used for visualizing time series data in dashboards.
-
-1. By default, Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command:
-
-   ```bash
-   
-   minikube service pulsar-mini-grafana -n pulsar
-   
-   ```
-
-2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard.
-
-3. You can view dashboards for different components of a Pulsar cluster.
diff --git a/site2/website-next/versioned_docs/version-2.8.0/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.8.0/getting-started-standalone.md
deleted file mode 100644
index c2da381..0000000
--- a/site2/website-next/versioned_docs/version-2.8.0/getting-started-standalone.md
+++ /dev/null
@@ -1,272 +0,0 @@
----
-slug: /
-id: standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: standalone
----
-
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
-
-> #### Pulsar in production? 
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of the installation process.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
-
-:::tip
-
-By default, Pulsar allocates 2G JVM heap memory to start. You can change this in the `conf/pulsar_env.sh` file under `PULSAR_MEM`, which holds extra options passed to the JVM. 
-
-:::
-
-:::note
-
-Broker is only supported on 64-bit JVM.
-
-:::
-
-### Install Pulsar using binary release
-
-To get started with Pulsar, download a binary tarball release in one of the following ways:
-
-* download from the Apache mirror (<a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>)
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)  
-  
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-  
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:binary_release_url
-  
-  ```
-
-After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-#### What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
-`examples` | A Java JAR file containing [Pulsar Functions](functions-overview) example.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview).
-`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror <a href="pulsar:connector_release_url" download>Pulsar IO Connectors @pulsar:version@ release</a>
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-  
-  ```
-
-After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. 
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker
-(or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
-To enable tiered storage feature, follow the instructions below; otherwise skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders @pulsar:version@ release</a>
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:offloader_release_url
-  
-  ```
-
-After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
-in the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-// then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Configuration Store cache started
-2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
-2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.  
-
-:::
-
-You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-> 
-> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from remote server without any authorization. Please do check [Security Overview](security-overview) document to secure your deployment.
->
-> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume and produce messages to a Pulsar topic in a running cluster. 
-
-### Consume a message
-
-The following command consumes a message with the subscription name `first-subscription` from the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-09:56:55.566 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
-
-```
-
-:::tip
-
-As you may have noticed, we did not explicitly create the `my-topic` topic that we consumed the message from. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone`  command to stop the service.
-For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::
-
diff --git a/site2/website-next/versioned_docs/version-2.8.0/standalone.md b/site2/website-next/versioned_docs/version-2.8.0/standalone.md
index 05ac2a1..c2da381 100644
--- a/site2/website-next/versioned_docs/version-2.8.0/standalone.md
+++ b/site2/website-next/versioned_docs/version-2.8.0/standalone.md
@@ -1,179 +1,272 @@
 ---
-id: standalone-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: standalone-docker
+slug: /
+id: standalone
+title: Set up a standalone Pulsar locally
+sidebar_label: "Run Pulsar locally"
+original_id: standalone
 ---
 
-For local development and testing, you can run Pulsar in standalone
-mode on your own machine within a Docker container.
+For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
 
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
-and follow the instructions for your OS.
+> #### Pulsar in production? 
+> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal) guide.
 
-## Start Pulsar in Docker
+## Install Pulsar standalone
 
-* For MacOS, Linux, and Windows:
+This tutorial guides you through every step of the installation process.
+
+### System requirements
+
+Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
+
+:::tip
+
+By default, Pulsar allocates 2G JVM heap memory to start. You can change this in the `conf/pulsar_env.sh` file under `PULSAR_MEM`, which holds extra options passed to the JVM. 
+
+:::
+
+:::note
+
+The broker is only supported on a 64-bit JVM.
+
+:::
+
+### Install Pulsar using binary release
+
+To get started with Pulsar, download a binary tarball release in one of the following ways:
+
+* download from the Apache mirror (<a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>)
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)  
+  
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+  
+* use [wget](https://www.gnu.org/software/wget):
 
   ```shell
   
-  $ docker run -it \
-  -p 6650:6650 \
-  -p 8080:8080 \
-  --mount source=pulsardata,target=/pulsar/data \
-  --mount source=pulsarconf,target=/pulsar/conf \
-  apachepulsar/pulsar:@pulsar:version@ \
-  bin/pulsar standalone
+  $ wget pulsar:binary_release_url
   
   ```
 
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every 
-time the container is restarted. For details on the volumes you can use `docker volume inspect <sourcename>`
- * For Docker on Windows make sure to configure it to use Linux containers
+After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
+
+```bash
 
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
+$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
+$ cd apache-pulsar-@pulsar:version@
 
 ```
 
-2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
-2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
+#### What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
+`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
+`examples` | A Java JAR file containing [Pulsar Functions](functions-overview) example.
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
+`licenses` | License files, in the `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
+
+These directories are created once you begin running Pulsar.
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory used by ZooKeeper and BookKeeper.
+`instances` | Artifacts created for [Pulsar Functions](functions-overview).
+`logs` | Logs created by the installation.
+
+:::tip
+
+If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
+* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
+* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
+Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
+
+:::
+
+### Install builtin connectors (optional)
+
+Since the `2.1.0-incubating` release, Pulsar provides a separate binary distribution that contains all the `builtin` connectors.
+To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:connector_release_url" download>Pulsar IO Connectors @pulsar:version@ release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
+
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
+  
+  ```
+
+After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. 
+For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
+
+```bash
+
+$ mkdir connectors
+$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
+
+$ ls connectors
+pulsar-io-aerospike-@pulsar:version@.nar
 ...
 
 ```
 
+:::note
+
+* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker
+(or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
+* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
+you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+:::
+
+### Install tiered storage offloaders (optional)
+
 :::tip
 
-When you start a local standalone cluster, a `public/default`
+Since the `2.2.0` release, Pulsar provides a separate binary distribution that contains the tiered storage offloaders.
+To enable the tiered storage feature, follow the instructions below; otherwise skip this section.
 
 :::
 
-namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces.
-For more information, see [Topics](concepts-messaging.md#topics).
+To get started with [tiered storage offloaders](concepts-tiered-storage), you need to download the offloaders tarball release on every broker node in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders @pulsar:version@ release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
 
-## Use Pulsar in Docker
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:offloader_release_url
+  
+  ```
 
-Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python) 
-and [C++](client-libraries-cpp). If you're running a local standalone cluster, you can
-use one of these root URLs to interact with your cluster:
+After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
+in the pulsar directory:
 
-* `pulsar://localhost:6650`
-* `http://localhost:8080`
+```bash
 
-The following example will guide you get started with Pulsar quickly by using the [Python](client-libraries-python)
-client API.
+$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
 
-Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):
+// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
+// then copy the offloaders
 
-```shell
+$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
 
-$ pip install pulsar-client
+$ ls offloaders
+tiered-storage-jcloud-@pulsar:version@.nar
 
 ```
 
-### Consume a message
+For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage).
+
+:::note
 
-Create a consumer and subscribe to the topic:
+* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory.
+* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
+you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
 
-```python
+:::
 
-import pulsar
+## Start Pulsar standalone
 
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('my-topic',
-                            subscription_name='my-sub')
+Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode.
 
-while True:
-    msg = consumer.receive()
-    print("Received message: '%s'" % msg.data())
-    consumer.acknowledge(msg)
+```bash
 
-client.close()
+$ bin/pulsar standalone
 
 ```
 
-### Produce a message
+If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
+
+```bash
+
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Configuration Store cache started
+2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
 
-Now start a producer to send some test messages:
+```
 
-```python
+:::tip
 
-import pulsar
+* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.  
 
-client = pulsar.Client('pulsar://localhost:6650')
-producer = client.create_producer('my-topic')
+:::
+
+You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
+> 
+> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Please check the [Security Overview](security-overview) document to secure your deployment.
+>
+> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
+
+## Use Pulsar standalone
+
+Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster. 
+
+### Consume a message
 
-for i in range(10):
-    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))
+The following command consumes a message with the subscription name `first-subscription` from the `my-topic` topic:
 
-client.close()
+```bash
+
+$ bin/pulsar-client consume my-topic -s "first-subscription"
+
+```
+
+If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
 
 ```
 
-## Get the topic statistics
+09:56:55.566 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
+
+```
+
+:::tip
+
+As you may have noticed, we did not explicitly create the `my-topic` topic that we consumed the message from. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
+
+:::
+
+### Produce a message
+
+The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
 
-In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
-For details on APIs, refer to [Admin API Overview](admin-api-overview).
+```bash
 
-In the simplest example, you can use curl to probe the stats for a particular topic:
+$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
 
-```shell
+```
 
-$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
+If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
 
 ```
 
-The output is something like this:
-
-```json
-
-{
-  "averageMsgSize": 0.0,
-  "msgRateIn": 0.0,
-  "msgRateOut": 0.0,
-  "msgThroughputIn": 0.0,
-  "msgThroughputOut": 0.0,
-  "publishers": [
-    {
-      "address": "/172.17.0.1:35048",
-      "averageMsgSize": 0.0,
-      "clientVersion": "1.19.0-incubating",
-      "connectedSince": "2017-08-09 20:59:34.621+0000",
-      "msgRateIn": 0.0,
-      "msgThroughputIn": 0.0,
-      "producerId": 0,
-      "producerName": "standalone-0-1"
-    }
-  ],
-  "replication": {},
-  "storageSize": 16,
-  "subscriptions": {
-    "my-sub": {
-      "blockedSubscriptionOnUnackedMsgs": false,
-      "consumers": [
-        {
-          "address": "/172.17.0.1:35064",
-          "availablePermits": 996,
-          "blockedConsumerOnUnackedMsgs": false,
-          "clientVersion": "1.19.0-incubating",
-          "connectedSince": "2017-08-09 21:05:39.222+0000",
-          "consumerName": "166111",
-          "msgRateOut": 0.0,
-          "msgRateRedeliver": 0.0,
-          "msgThroughputOut": 0.0,
-          "unackedMessages": 0
-        }
-      ],
-      "msgBacklog": 0,
-      "msgRateExpired": 0.0,
-      "msgRateOut": 0.0,
-      "msgRateRedeliver": 0.0,
-      "msgThroughputOut": 0.0,
-      "type": "Exclusive",
-      "unackedMessages": 0
-    }
-  }
-}
+13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
 
 ```
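+
+If you prefer a client library over the CLI, the following minimal Java sketch subscribes to the same topic, produces
+a `hello-pulsar` message, and consumes it. It assumes the `org.apache.pulsar:pulsar-client` dependency is on the
+classpath; see the [Java client](client-libraries-java) documentation for details.
+
+```java
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+public class StandaloneQuickstart {
+    public static void main(String[] args) throws Exception {
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650")
+                .build();
+
+        // Subscribe first so that the message produced below is delivered to this subscription.
+        Consumer<byte[]> consumer = client.newConsumer()
+                .topic("my-topic")
+                .subscriptionName("first-subscription")
+                .subscribe();
+
+        Producer<byte[]> producer = client.newProducer()
+                .topic("my-topic")
+                .create();
+        producer.send("hello-pulsar".getBytes());
+
+        Message<byte[]> msg = consumer.receive();
+        System.out.println("Received: " + new String(msg.getData()));
+        consumer.acknowledge(msg);
+
+        client.close();
+    }
+}
+
+```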
 
+## Stop Pulsar standalone
+
+Press `Ctrl+C` to stop a local standalone Pulsar.
+
+:::tip
+
+If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone`  command to stop the service.
+For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
+
+:::
+
diff --git a/site2/website-next/versioned_docs/version-2.8.1/client-libraries.md b/site2/website-next/versioned_docs/version-2.8.1/client-libraries.md
index c79f7bb..23e5a06 100644
--- a/site2/website-next/versioned_docs/version-2.8.1/client-libraries.md
+++ b/site2/website-next/versioned_docs/version-2.8.1/client-libraries.md
@@ -1,579 +1,35 @@
 ---
-id: client-libraries-cgo
-title: Pulsar CGo client
-sidebar_label: "CGo(deprecated)"
-original_id: client-libraries-cgo
+id: client-libraries
+title: Pulsar client libraries
+sidebar_label: "Overview"
+original_id: client-libraries
 ---
 
-You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+Pulsar supports the following client libraries:
 
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
+- [Java client](client-libraries-java)
+- [Go client](client-libraries-go)
+- [Python client](client-libraries-python)
+- [C++ client](client-libraries-cpp)
+- [Node.js client](client-libraries-node)
+- [WebSocket client](client-libraries-websocket)
+- [C# client](client-libraries-dotnet)
 
-Currently, the following Go clients are maintained in two repositories.
+## Feature matrix
+
+The Pulsar client feature matrix for different languages is available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
 
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library |
-| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-
-> **API docs available as well**  
-> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
-
-## Installation
-
-### Requirements
-
-The Pulsar Go client library is based on the C++ client library. Follow
-the instructions for the [C++ library](client-libraries-cpp) to install the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos).
-
-### Install go package
-
-> **Compatibility Warning**  
-> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
-
-You can install the `pulsar` library locally using `go get`.  Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client.  You'll need a C++ client library that matches master.
-
-```bash
-
-$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
-
-```
-
-Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
-
-```bash
-
-$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@
-
-```
-
-Once installed locally, you can import it into your project:
-
-```go
-
-import "github.com/apache/pulsar/pulsar-client-go/pulsar"
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication) authentication, the URL will look something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
-
-```go
-
-import (
-    "log"
-    "runtime"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-        OperationTimeoutSeconds: 5,
-        MessageListenerThreads: runtime.NumCPU(),
-    })
-
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-}
-
-```
-
-The following configurable parameters are available for Pulsar clients:
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`URL` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info |
-`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
-`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
-`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
-`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
-`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
-`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
-`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
-`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
-`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Hello, Pulsar"),
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Producer could not send message: %v", err)
-}
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
-`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message. This call blocks until the message is successfully acknowledged by the Pulsar broker. | (MessageID, error)
-`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
-`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the message ID of the published message and any error encountered while publishing. | 
-`LastSequenceID()` | Gets the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
-`Schema()` | | Schema
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("the %s successfully published", string(msg.Payload))
-        })
-    }
-}
-
-```
-
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats | 
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | Set the maximum number of pending messages across all the partitions. This setting is used to lower the per-partition limit set by `MaxPendingMessages(int)` if the total exceeds the configured value.|
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `puls [...]
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the index of the partition to publish to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
-`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
-`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or the batch reaches `BatchingMaxMessages`. | 1ms
-`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or batch interval has elapsed | 1000
-
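-As an illustration of how several of these options combine, here is a minimal, non-authoritative sketch of a producer configured with batching, compression, and a shorter send timeout, building on the `client` created earlier. It assumes that `SendTimeout` and `BatchingMaxPublishDelay` take a `time.Duration` and that `pulsar.LZ4` is the constant for LZ4 compression:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:                   "my-topic",
-    SendTimeout:             10 * time.Second,      // assumed to be a time.Duration
-    CompressionType:         pulsar.LZ4,            // assumed constant name for LZ4 compression
-    Batching:                true,
-    BatchingMaxPublishDelay: 10 * time.Millisecond, // assumed to be a time.Duration
-    BatchingMaxMessages:     500,
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-```
-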
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
+## Third-party clients
 
-consumer, err := client.Subscribe(consumerOpts)
+Besides the officially released clients, multiple third-party projects for developing Pulsar clients are available in different languages.
 
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s", msg.ID())
-    fmt.Printf("Message value: %s", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
-
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation fails. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`AckCumulativeID(MessageID)` |Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledge has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
-`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
-`NackID(MessageID)` | Acknowledge the failure to process a single message. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error
-
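-For example, here is a brief, non-authoritative sketch that receives a few messages and then acknowledges them cumulatively with `AckCumulative` (using an exclusive subscription, since cumulative acking cannot be used with a shared subscription). It builds on the `client` created earlier and assumes `pulsar.Message` is the message interface returned by `Receive`:
-
-```go
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-cumulative-sub",
-    Type:             pulsar.Exclusive,
-})
-
-if err != nil {
-    log.Fatal(err)
-}
-
-defer consumer.Close()
-
-ctx := context.Background()
-
-// Receive a handful of messages, remembering the last one
-var lastMsg pulsar.Message
-for i := 0; i < 10; i++ {
-    msg, err := consumer.Receive(ctx)
-    if err != nil {
-        log.Fatal(err)
-    }
-    lastMsg = msg
-}
-
-// Acknowledge the last message and, with it, everything received before it
-if err := consumer.AckCumulative(lastMsg); err != nil {
-    log.Fatal(err)
-}
-
-```
-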
-#### Receive example
-
-Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client object to instantiate a consumer
-    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-        Topic:            "my-golang-topic",
-        SubscriptionName: "sub-1",
-        Type: pulsar.Exclusive,
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    defer consumer.Close()
-
-    ctx := context.Background()
-
-    // Listen indefinitely on the topic
-    for {
-        msg, err := consumer.Receive(ctx)
-        if err != nil { log.Fatal(err) }
-
-        // Do something with the message
-        err = processMessage(msg)
-
-        if err == nil {
-            // Message processed successfully
-            consumer.Ack(msg)
-        } else {
-            // Failed to process messages
-            consumer.Nack(msg)
-        }
-    }
-}
-
-```
-
-### Consumer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
-`Topics` | Specify a list of topics this consumer subscribes to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
-`SubscriptionName` | The subscription name for this consumer |
-`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats|
-`Name` | The name of the consumer |
-`AckTimeout` | Set the timeout for unacked messages | 0
-`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
-`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
-`SubscriptionInitPos` | The initial position at which the cursor is set when subscribing | Latest
-`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
-`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
-`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
-
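-To illustrate a few of these options together, here is a minimal, non-authoritative sketch that subscribes to every topic matching a pattern with a shared subscription and a larger receiver queue, building on the `client` created earlier. It assumes `pulsar.Shared` is the shared subscription type constant and that `ReceiverQueueSize` is an `int`:
-
-```go
-
-consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-    // Subscribe to all matching topics in the namespace (hypothetical pattern)
-    TopicsPattern:     "persistent://public/default/finance-.*",
-    SubscriptionName:  "finance-sub",
-    Type:              pulsar.Shared,
-    ReceiverQueueSize: 2000,
-})
-
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-```
-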
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic: "my-golang-topic",
-    StartMessageId: pulsar.LatestMessage,
-})
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
-
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`HasNext()` | Check if there is any message available to read from the current position| (bool, error)
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-
-#### "Next" example
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatalf("Could not create client: %v", err) }
-
-    // Use the client to instantiate a reader
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "my-golang-topic",
-        StartMessageID: pulsar.EarliestMessage,
-    })
-
-    if err != nil { log.Fatalf("Could not create reader: %v", err) }
-
-    defer reader.Close()
-
-    ctx := context.Background()
-
-    // Listen on the topic for incoming messages
-    for {
-        msg, err := reader.Next(ctx)
-        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
-
-        // Process the message
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-lastSavedId := loadLastMessageID() // hypothetical helper that reads the last saved message id from an external store as []byte
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: DeserializeMessageID(lastSavedId),
-})
-
-```
-
-### Reader configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages 
-`Name` | The name of the reader 
-`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
-`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
-
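-As a short, non-authoritative sketch of how these options combine, the following reader starts from the earliest available message, reads from the compacted view of the topic, and uses a larger receiver queue, building on the `client` created earlier (it assumes `ReadCompacted` is a `bool` and `ReceiverQueueSize` an `int`):
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:             "my-golang-topic",
-    StartMessageID:    pulsar.EarliestMessage,
-    ReadCompacted:     true, // only the latest value per key, up to the compaction horizon
-    ReceiverQueueSize: 2000,
-})
-
-if err != nil {
-    log.Fatalf("Could not create reader: %v", err)
-}
-
-defer reader.Close()
-
-```
-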
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | `Value` and `Payload` are mutually exclusive; use `Value interface{}` for schema messages.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence id to assign to the current message
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport), you'll need to configure your client to do so:
-
- * Use `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## Schema
-
-This example shows how to create a producer and consumer with schema.
-
-```go
-
-var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
-    		"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
-jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
-// create producer
-producer, err := client.CreateProducerWithSchema(ProducerOptions{
-	Topic: "jsonTopic",
-}, jsonSchema)
-err = producer.Send(context.Background(), ProducerMessage{
-	Value: &testJson{
-		ID:   100,
-		Name: "pulsar",
-	},
-})
-if err != nil {
-	log.Fatal(err)
-}
-defer producer.Close()
-//create consumer
-var s testJson
-consumerJS := NewJsonSchema(exampleSchemaDef, nil)
-consumer, err := client.SubscribeWithSchema(ConsumerOptions{
-	Topic:            "jsonTopic",
-	SubscriptionName: "sub-2",
-}, consumerJS)
-if err != nil {
-	log.Fatal(err)
-}
-msg, err := consumer.Receive(context.Background())
-if err != nil {
-	log.Fatal(err)
-}
-err = msg.GetValue(&s)
-if err != nil {
-	log.Fatal(err)
-}
-fmt.Println(s.ID) // output: 100
-fmt.Println(s.Name) // output: pulsar
-defer consumer.Close()
-
-```
+> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
 
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | |
+| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell | 
+| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
+| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
+| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
+| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
diff --git a/site2/website-next/versioned_docs/version-2.8.1/developing-binary-protocol.md b/site2/website-next/versioned_docs/version-2.8.1/developing-binary-protocol.md
deleted file mode 100644
index b233f10..0000000
--- a/site2/website-next/versioned_docs/version-2.8.1/developing-binary-protocol.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: develop-binary-protocol
-title: Pulsar binary protocol specification
-sidebar_label: "Binary protocol"
-original_id: develop-binary-protocol
----
-
-Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency.
-
-Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below.
-
-> ### Connection sharing
-> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction.
-
-All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand.
-
-## Framing
-
-Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB.
-
-The Pulsar protocol allows for two types of commands:
-
-1. **Simple commands** that do not carry a message payload.
-2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers.
-
-> Message payloads are passed in raw format rather than protobuf format for efficiency reasons.
-
-### Simple commands
-
-Simple (payload-free) commands have this basic structure:
-
-| Component   | Description                                                                             | Size (in bytes) |
-|:------------|:----------------------------------------------------------------------------------------|:----------------|
-| totalSize   | The size of the frame, counting everything that comes after it (in bytes)               | 4               |
-| commandSize | The size of the protobuf-serialized command                                             | 4               |
-| message     | The protobuf message serialized in a raw binary format (rather than in protobuf format) |                 |
-
-### Payload commands
-
-Payload commands have this basic structure:
-
-| Component    | Description                                                                                 | Size (in bytes) |
-|:-------------|:--------------------------------------------------------------------------------------------|:----------------|
-| totalSize    | The size of the frame, counting everything that comes after it (in bytes)                   | 4               |
-| commandSize  | The size of the protobuf-serialized command                                                 | 4               |
-| message      | The protobuf message serialized in a raw binary format (rather than in protobuf format)     |                 |
-| magicNumber  | A 2-byte byte array (`0x0e01`) identifying the current format                               | 2               |
-| checksum     | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4               |
-| metadataSize | The size of the message [metadata](#message-metadata)                                       | 4               |
-| metadata     | The message [metadata](#message-metadata) stored as a binary protobuf message               |                 |
-| payload      | Anything left in the frame is considered the payload and can include any sequence of bytes  |                 |
-
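-As an illustrative, non-normative sketch of the framing described above, the following Go snippet assembles a simple (payload-free) command frame by prepending the two 4-byte big-endian size fields to an already protobuf-serialized command:
-
-```go
-
-package main
-
-import (
-    "encoding/binary"
-    "fmt"
-)
-
-// buildSimpleCommandFrame wraps a serialized command into a simple frame:
-// [totalSize][commandSize][command bytes].
-func buildSimpleCommandFrame(command []byte) []byte {
-    commandSize := uint32(len(command))
-    totalSize := 4 + commandSize // the commandSize field plus the command bytes
-
-    frame := make([]byte, 8+len(command))
-    binary.BigEndian.PutUint32(frame[0:4], totalSize)
-    binary.BigEndian.PutUint32(frame[4:8], commandSize)
-    copy(frame[8:], command)
-    return frame
-}
-
-func main() {
-    // Placeholder bytes standing in for a protobuf-serialized command
-    cmd := []byte{0x08, 0x02}
-    fmt.Printf("% x\n", buildSimpleCommandFrame(cmd))
-}
-
-```
-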
-## Message metadata
-
-Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer.
-
-| Field                                | Description                                                                                                                                                                                                                                               |
-|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `producer_name`                      | The name of the producer that published the message                                                                                                                                                                                         |
-| `sequence_id`                        | The sequence ID of the message, assigned by producer                                                                                                                                                                                        |
-| `publish_time`                       | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC)                                                                                                                                                    |
-| `properties`                         | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
-| `replicated_from` *(optional)*       | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published                                                                                                             |
-| `partition_key` *(optional)*         | While publishing on a partitioned topic, if the key is present, the hash of the key is used to determine which partition to choose |
-| `compression` *(optional)*           | Signals that payload has been compressed and with which compression library                                                                                                                                                                               |
-| `uncompressed_size` *(optional)*     | If compression is used, the producer must fill the uncompressed size field with the original payload size                                                                                                                                                 |
-| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch                                                                                                                   |
-
-### Batch messages
-
-When using batch messages, the payload contains a list of entries, each with its
-own individual metadata, defined by the `SingleMessageMetadata` object.
-
-
-For a single batch, the payload format will look like this:
-
-
-| Field         | Description                                                 |
-|:--------------|:------------------------------------------------------------|
-| metadataSizeN | The size of the single message metadata serialized Protobuf |
-| metadataN     | Single message metadata                                     |
-| payloadN      | Message payload passed by application                       |
-
-Each metadata field looks like this:
-
-| Field                      | Description                                             |
-|:---------------------------|:--------------------------------------------------------|
-| properties                 | Application-defined properties                          |
-| partition key *(optional)* | Key to indicate the hashing to a particular partition   |
-| payload_size               | Size of the payload for the single message in the batch |
-
-When compression is enabled, the whole batch will be compressed at once.
-
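-To make the layout concrete, here is a rough, non-normative Go sketch that concatenates batch entries as `[metadataSize][metadata][payload]` triplets, with the size field encoded as a 4-byte big-endian integer as elsewhere in the protocol (it uses `encoding/binary` as in the framing sketch above and assumes the metadata bytes are an already serialized `SingleMessageMetadata`):
-
-```go
-
-// batchEntry holds one serialized SingleMessageMetadata and its payload.
-type batchEntry struct {
-    metadata []byte // serialized SingleMessageMetadata protobuf
-    payload  []byte // application payload
-}
-
-// buildBatchPayload lays out the entries as [metadataSize][metadata][payload], repeated.
-func buildBatchPayload(entries []batchEntry) []byte {
-    var out []byte
-    for _, e := range entries {
-        size := make([]byte, 4)
-        binary.BigEndian.PutUint32(size, uint32(len(e.metadata)))
-        out = append(out, size...)
-        out = append(out, e.metadata...)
-        out = append(out, e.payload...)
-    }
-    return out
-}
-
-```
-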
-## Interactions
-
-### Connection establishment
-
-After opening a TCP connection to a broker, typically on port 6650, the client
-is responsible for initiating the session.
-
-![Connect interaction](/assets/binary-protocol-connect.png)
-
-After receiving a `Connected` response from the broker, the client can
-consider the connection ready to use. Alternatively, if the broker cannot
-validate the client's authentication, it will reply with an `Error` command and
-close the TCP connection.
-
-Example:
-
-```protobuf
-
-message CommandConnect {
-  "client_version" : "Pulsar-Client-Java-v1.15.2",
-  "auth_method_name" : "my-authentication-plugin",
-  "auth_data" : "my-auth-data",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `client_version` → String based identifier. Format is not enforced
- * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
-   enabled
- * `auth_data` → *(optional)* Plugin specific authentication data
- * `protocol_version` → Indicates the protocol version supported by the
-   client. Broker will not send commands introduced in newer revisions of the
-   protocol. Broker might be enforcing a minimum version
-
-```protobuf
-
-message CommandConnected {
-  "server_version" : "Pulsar-Broker-v1.15.2",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `server_version` → String identifier of broker version
- * `protocol_version` → Protocol version supported by the broker. Client
-   must not attempt to send commands introduced in newer revisions of the
-   protocol
-
-### Keep Alive
-
-To identify prolonged network partitions between clients and brokers or cases
-in which a machine crashes without interrupting the TCP connection on the remote
-end (eg: power outage, kernel panic, hard reboot...), we have introduced a
-mechanism to probe for the availability status of the remote peer.
-
-Both clients and brokers send `Ping` commands periodically, and they will
-close the socket if a `Pong` response is not received within a timeout (the
-default used by the broker is 60s).
-
-A valid implementation of a Pulsar client is not required to send the `Ping`
-probe, though it is required to promptly reply after receiving one from the
-broker in order to prevent the remote side from forcibly closing the TCP connection.
-
-
-### Producer
-
-In order to send messages, a client needs to establish a producer. When creating
-a producer, the broker will first verify that this particular client is
-authorized to publish on the topic.
-
-Once the client gets confirmation of the producer creation, it can publish
-messages to the broker, referring to the producer id negotiated before.
-
-![Producer interaction](/assets/binary-protocol-producer.png)
-
-##### Command Producer
-
-```protobuf
-
-message CommandProducer {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "producer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete name of the topic on which to create the producer
- * `producer_id` → Client generated producer identifier. Needs to be unique
-    within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-    the originating request. Needs to be unique within the same connection
- * `producer_name` → *(optional)* If a producer name is specified, the name will
-    be used, otherwise the broker will generate a unique name. Generated
-    producer name is guaranteed to be globally unique. Implementations are
-    expected to let the broker generate a new producer name when the producer
-    is initially created, then reuse it when recreating the producer after
-    reconnections.
-
-The broker will reply with either `ProducerSuccess` or `Error` commands.
-
-##### Command ProducerSuccess
-
-```protobuf
-
-message CommandProducerSuccess {
-  "request_id" :  1,
-  "producer_name" : "generated-unique-producer-name"
-}
-
-```
-
-Parameters:
- * `request_id` → Original id of the `CreateProducer` request
- * `producer_name` → Generated globally unique producer name or the name
-    specified by the client, if any.
-
-##### Command Send
-
-Command `Send` is used to publish a new message within the context of an
-already existing producer. This command is used in a frame that includes command
-as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.
-
-```protobuf
-
-message CommandSend {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "num_messages" : 1
-}
-
-```
-
-Parameters:
- * `producer_id` → id of an existing producer
- * `sequence_id` → each message has an associated sequence id which is expected
-   to be implemented with a counter starting at 0. The `SendReceipt` that
-   acknowledges the effective publishing of a message will refer to it by
-   its sequence id.
- * `num_messages` → *(optional)* Used when publishing a batch of messages at
-   once.
-
-##### Command SendReceipt
-
-After a message has been persisted on the configured number of replicas, the
-broker will send the acknowledgment receipt to the producer.
-
-```protobuf
-
-message CommandSendReceipt {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `producer_id` → id of producer originating the send request
- * `sequence_id` → sequence id of the published message
- * `message_id` → message id assigned by the system to the published message.
-   It is unique within a single cluster. Message id is composed of 2 longs, `ledgerId`
-   and `entryId`, that reflect that this unique id is assigned when appending
-   to a BookKeeper ledger
-
-
-##### Command CloseProducer
-
-**Note**: *This command can be sent by either producer or broker*.
-
-When receiving a `CloseProducer` command, the broker will stop accepting any
-more messages for the producer, wait until all pending messages are persisted
-and then reply `Success` to the client.
-
-The broker can send a `CloseProducer` command to the client when it's performing
-a graceful failover (eg: the broker is being restarted, or the topic is being unloaded
-by the load balancer to be transferred to a different broker).
-
-When receiving the `CloseProducer`, the client is expected to go through the
-service discovery lookup again and recreate the producer. The TCP
-connection is not affected.
-
-### Consumer
-
-A consumer is used to attach to a subscription and consume messages from it.
-After every reconnection, a client needs to subscribe to the topic. If a
-subscription is not already there, a new one will be created.
-
-![Consumer](/assets/binary-protocol-consumer.png)
-
-#### Flow control
-
-After the consumer is ready, the client needs to *give permission* to the
-broker to push messages. This is done with the `Flow` command.
-
-A `Flow` command gives additional *permits* to send messages to the consumer.
-A typical consumer implementation will use a queue to accumulate these messages
-before the application is ready to consume them.
-
-After the application has dequeued half of the messages in the queue, the consumer
-sends permits to the broker to ask for more messages (equal to half of the queue size).
-
-For example, if the queue size is 1000 and the consumer has consumed 500 messages from
-the queue, the consumer sends permits to the broker to ask for another 500 messages.
-
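-The permit arithmetic can be sketched roughly as follows (illustrative only, not part of any client implementation); `sendFlowPermits` is a hypothetical callback that issues the `Flow` command:
-
-```go
-
-const queueSize = 1000
-
-// onMessageDequeued is called each time the application takes one message
-// off the consumer's local queue.
-func onMessageDequeued(dequeuedSinceLastFlow *int, sendFlowPermits func(permits int)) {
-    *dequeuedSinceLastFlow++
-    // Once half of the queue has been consumed, ask the broker for
-    // another half-queue worth of messages.
-    if *dequeuedSinceLastFlow >= queueSize/2 {
-        sendFlowPermits(queueSize / 2)
-        *dequeuedSinceLastFlow = 0
-    }
-}
-
-```
-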
-##### Command Subscribe
-
-```protobuf
-
-message CommandSubscribe {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "subscription" : "my-subscription-name",
-  "subType" : "Exclusive",
-  "consumer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete name of the topic on which to create the consumer
- * `subscription` → Subscription name
- * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
- * `consumer_id` → Client generated consumer identifier. Needs to be unique
-    within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-    the originating request. Needs to be unique within the same connection
- * `consumer_name` → *(optional)* Clients can specify a consumer name. This
-    name can be used to track a particular consumer in the stats. Also, in
-    Failover subscription type, the name is used to decide which consumer is
-    elected as *master* (the one receiving messages): consumers are sorted by
-    their consumer name and the first one is elected master.
-
-##### Command Flow
-
-```protobuf
-
-message CommandFlow {
-  "consumer_id" : 1,
-  "messagePermits" : 1000
-}
-
-```
-
-Parameters:
-* `consumer_id` → Id of an already established consumer
-* `messagePermits` → Number of additional permits to grant to the broker for
-  pushing more messages
-
-##### Command Message
-
-Command `Message` is used by the broker to push messages to an existing consumer,
-within the limits of the given permits.
-
-
-This command is used in a frame that includes the message payload as well, for
-which the complete format is specified in the [payload commands](#payload-commands)
-section.
-
-```protobuf
-
-message CommandMessage {
-  "consumer_id" : 1,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-##### Command Ack
-
-An `Ack` is used to signal to the broker that a given message has been
-successfully processed by the application and can be discarded by the broker.
-
-In addition, the broker will also maintain the consumer position based on the
-acknowledged messages.
-
-```protobuf
-
-message CommandAck {
-  "consumer_id" : 1,
-  "ack_type" : "Individual",
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `consumer_id` → Id of an already established consumer
- * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
- * `message_id` → Id of the message to acknowledge
- * `validation_error` → *(optional)* Indicates that the consumer has discarded
-   the messages due to: `UncompressedSizeCorruption`,
-   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
-
-##### Command CloseConsumer
-
-**Note**: *This command can be sent by either consumer or broker*.
-
-This command behaves the same as [`CloseProducer`](#command-closeproducer)
-
-##### Command RedeliverUnacknowledgedMessages
-
-A consumer can ask the broker to redeliver some or all of the pending messages
-that were pushed to that particular consumer and not yet acknowledged.
-
-The protobuf object accepts a list of message ids that the consumer wants to
-be redelivered. If the list is empty, the broker will redeliver all the
-pending messages.
-
-On redelivery, messages can be sent to the same consumer or, in the case of a
-shared subscription, spread across all available consumers.
-
-
-##### Command ReachedEndOfTopic
-
-This is sent by a broker to a particular consumer, whenever the topic
-has been "terminated" and all the messages on the subscription were
-acknowledged.
-
-The client should use this command to notify the application that no more
-messages are coming from the consumer.
-
-##### Command ConsumerStats
-
-This command is sent by the client to retrieve Subscriber and Consumer level 
-stats from the broker.
-
-Parameters:
- * `request_id` → Id of the request, used to correlate the request 
-      and the response.
- * `consumer_id` → Id of an already established consumer.
-
-##### Command ConsumerStatsResponse
-
-This is the broker's response to a ConsumerStats request from the client.
-It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
-If the `error_code` or the `error_message` field is set, it indicates that the request has failed.
-
-##### Command Unsubscribe
-
-This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
-
-Parameters:
- * `request_id` → Id of the request.
- * `consumer_id` → Id of an already established consumer which needs to unsubscribe.
-
-
-## Service discovery
-
-### Topic lookup
-
-Topic lookup needs to be performed each time a client needs to create or
-reconnect a producer or a consumer. Lookup is used to discover which particular
-broker is serving the topic we are about to use.
-
-Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#lookup-of-topic)
-docs.
-
-Since Pulsar-1.16 it is also possible to perform the lookup within the binary
-protocol.
-
-For the sake of example, let's assume we have a service discovery component
-running at `pulsar://broker.example.com:6650`
-
-Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
-`pulsar://broker-2.example.com:6650`, ...
-
-A client can use a connection to the discovery service host to issue a
-`LookupTopic` command. The response can either be a broker hostname to
-connect to, or a broker hostname against which to retry the lookup.
-
-The `LookupTopic` command has to be used in a connection that has already
-gone through the `Connect` / `Connected` initial handshake.
-
-![Topic lookup](/assets/binary-protocol-topic-lookup.png)
-
-```protobuf
-
-message CommandLookupTopic {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1,
-  "authoritative" : false
-}
-
-```
-
-Fields:
- * `topic` → Topic name to lookup
- * `request_id` → Id of the request that will be passed with its response
- * `authoritative` → Initial lookup request should use false. When following a
-   redirect response, client should pass the same value contained in the
-   response
-
-##### LookupTopicResponse
-
-Example of response with successful lookup:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Connect",
-  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-Example of lookup response with redirection:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Redirect",
-  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-In this second case, we need to reissue the `LookupTopic` command request
-to `broker-2.example.com` and this broker will be able to give a definitive
-answer to the lookup request.
-
-### Partitioned topics discovery
-
-Partitioned topics metadata discovery is used to find out if a topic is a
-"partitioned topic" and how many partitions were set up.
-
-If the topic is marked as "partitioned", the client is expected to create
-multiple producers or consumers, one for each partition, using the `partition-X`
-suffix.
-
-This information only needs to be retrieved the first time a producer or
-consumer is created. There is no need to do this after reconnections.
-
-The discovery of partitioned topics metadata works very similarly to topic
-lookup. The client sends a request to the service discovery address and the
-response will contain the actual metadata.
-
-##### Command PartitionedTopicMetadata
-
-```protobuf
-
-message CommandPartitionedTopicMetadata {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1
-}
-
-```
-
-Fields:
- * `topic` → the topic for which to check the partitions metadata
- * `request_id` → Id of the request that will be passed with its response
-
-
-##### Command PartitionedTopicMetadataResponse
-
-Example of response with metadata:
-
-```protobuf
-
-message CommandPartitionedTopicMetadataResponse {
-  "request_id" : 1,
-  "response" : "Success",
-  "partitions" : 32
-}
-
-```
-
-## Protobuf interface
-
-All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website-next/versioned_docs/version-2.8.1/developing-load-manager.md b/site2/website-next/versioned_docs/version-2.8.1/developing-load-manager.md
deleted file mode 100644
index 509209b..0000000
--- a/site2/website-next/versioned_docs/version-2.8.1/developing-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-id: develop-load-manager
-title: Modular load manager
-sidebar_label: "Modular load manager"
-original_id: develop-load-manager
----
-
-The *modular load manager*, implemented in  [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load  [...]
-
-## Usage
-
-There are two ways that you can enable the modular load manager:
-
-1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
-2. Using the `pulsar-admin` tool. Here's an example:
-
-   ```shell
-   
-   $ pulsar-admin update-dynamic-config \
-    --config loadManagerClassName \
-    --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
-   
-   ```
-
-   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
-
-## Verification
-
-There are a few different ways to determine which load manager is being used:
-
-1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
-
-   ```shell
-   
-   $ bin/pulsar-admin brokers get-all-dynamic-config
-   {
-    "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
-   }
-   
-   ```
-
-   If there is no `loadManagerClassName` element, then the default load manager is used.
-
-2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` has many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
-
-   ```json
-   
-   {
-     "bandwidthIn": {
-       "limit": 10240000.0,
-       "usage": 4.256510416666667
-     },
-     "bandwidthOut": {
-       "limit": 10240000.0,
-       "usage": 5.287239583333333
-     },
-     "bundles": [],
-     "cpu": {
-       "limit": 2400.0,
-       "usage": 5.7353247655435915
-     },
-     "directMemory": {
-       "limit": 16384.0,
-       "usage": 1.0
-     }
-   }
-   
-   ```
-
-   With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
-
-   ```json
-   
-   {
-     "systemResourceUsage": {
-       "bandwidthIn": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "bandwidthOut": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "cpu": {
-         "limit": 2400.0,
-         "usage": 0.0
-       },
-       "directMemory": {
-         "limit": 16384.0,
-         "usage": 1.0
-       },
-       "memory": {
-         "limit": 8192.0,
-         "usage": 3903.0
-       }
-     }
-   }
-   
-   ```
-
-3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
-
-   Here is an example from the modular load manager:
-
-   ```
-   
-   ===================================================================================================================
-   ||SYSTEM         |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.00           |48.33          |0.01           |0.00           |0.00           |48.33          ||
-   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
-   ||               |4              |4              |0              |2              |4              |0              ||
-   ||LATEST         |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ||SHORT          |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ||LONG           |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ===================================================================================================================
-   
-   ```
-
-   Here is an example from the simple load manager:
-
-   ```
-   
-   ===================================================================================================================
-   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
-   ||               |4              |4              |0              |2              |0              |0              ||
-   ||RAW SYSTEM     |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.25           |47.94          |0.01           |0.00           |0.00           |47.94          ||
-   ||ALLOC SYSTEM   |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.20           |1.89           |               |1.27           |3.21           |3.21           ||
-   ||RAW MSG        |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.01           |0.01           |0.01           ||
-   ||ALLOC MSG      |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |54.84          |134.48         |189.31         |126.54         |320.96         |447.50         ||
-   ===================================================================================================================
-   
-   ```
-
-It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
-
-## Implementation
-
-### Data
-
-The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
-Here, the available data is subdivided into the bundle data and the broker data.
-
-#### Broker
-
-The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
-one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
-data which is written to ZooKeeper by the leader broker.
-
-##### Local Broker Data
-The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
-
-* CPU usage
-* JVM heap memory usage
-* Direct memory usage
-* Bandwidth in/out usage
-* Most recent total message rate in/out across all bundles
-* Total number of topics, bundles, producers, and consumers
-* Names of all bundles assigned to this broker
-* Most recent changes in bundle assignments for this broker
-
-The local broker data is updated periodically according to the service configuration
-`loadBalancerReportUpdateMaxIntervalMinutes`. After any broker updates its local broker data, the leader broker
-receives the update immediately via a ZooKeeper watch and reads the local data from the ZooKeeper node
-`/loadbalance/brokers/<broker host/port>`.
-
-##### Historical Broker Data
-
-The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
-
-In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
-
-* Message rate in/out for the entire broker
-* Message throughput in/out for the entire broker
-
-Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as bundles are added or removed. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
-
-The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes their local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-##### Bundle Data
-
-The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java). Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. The information maintained in each time frame:
-
-* Message rate in/out for this bundle
-* Message throughput in/out for this bundle
-* Current number of samples for this bundle
-
-The time frames are implemented by maintaining the average of these values over a fixed, limited number of samples, where
-the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval
-for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the
-short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
-data is similarly maintained over a period of `1000 samples * 2 minutes / sample = 2000 minutes`. Whenever there are not enough samples to satisfy a given time frame,
-the average is taken only over the existing samples. When no samples are available, default values are assumed until
-they are overwritten by the first sample. Currently, the default values are:
-
-* Message rate in/out: 50 messages per second both ways
-* Message throughput in/out: 50KB per second both ways
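-
-A rough, self-contained sketch of this behavior (not the actual `BundleData` implementation; the sample values are made up) follows:
-
-```python
-
-# Keep an average over a fixed number of samples; until enough samples exist
-# the average covers whatever is available, and with no samples at all the
-# documented default of 50 msg/s is assumed.
-from collections import deque
-
-DEFAULT_MSG_RATE = 50.0          # default message rate in/out (msg/s)
-UPDATE_INTERVAL_MIN = 2          # local data update interval in minutes
-SHORT_SAMPLES, LONG_SAMPLES = 10, 1000
-
-def windowed_average(samples, max_samples):
-    window = deque(samples, maxlen=max_samples)   # oldest samples fall off
-    return sum(window) / len(window) if window else DEFAULT_MSG_RATE
-
-print("short window:", SHORT_SAMPLES * UPDATE_INTERVAL_MIN, "minutes")  # 20
-print("long window:", LONG_SAMPLES * UPDATE_INTERVAL_MIN, "minutes")    # 2000
-print(windowed_average([40.0, 60.0, 80.0], SHORT_SAMPLES))              # 60.0
-print(windowed_average([], SHORT_SAMPLES))                              # 50.0
-
-```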
-
-The bundle data is updated in memory on the leader broker whenever any broker writes their local data to ZooKeeper.
-Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
-broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-### Traffic Distribution
-
-The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](h [...]
-
-#### Least Long Term Message Rate Strategy
-
-As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
-the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
-on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
-resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
-assignment process. This is done by weighting the final message rate according to
-`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
-`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
-that is being utilized by the candidate broker. This multiplier ensures that machines that are more heavily taxed
-by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded,
-then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload
-threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
-assigned.
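-
-The following rough sketch (not the actual strategy code; the threshold and broker figures are made-up values) illustrates the weighting and selection described above:
-
-```python
-
-# Weight each candidate's long-term message rate by 1 / (threshold - max_usage)
-# and pick the broker with the lowest weighted rate; overloaded brokers are
-# skipped, and if every broker is overloaded the bundle is assigned randomly.
-import random
-
-OVERLOAD_THRESHOLD = 0.85   # loadBalancerBrokerOverloadedThresholdPercentage / 100
-
-brokers = {
-    # broker -> (long-term message rate in msg/s, max resource usage fraction)
-    "broker-1": (1200.0, 0.30),
-    "broker-2": (900.0, 0.70),
-    "broker-3": (1000.0, 0.90),   # above the threshold, never considered
-}
-
-candidates = {
-    name: rate / (OVERLOAD_THRESHOLD - usage)
-    for name, (rate, usage) in brokers.items()
-    if usage < OVERLOAD_THRESHOLD
-}
-
-chosen = (min(candidates, key=candidates.get) if candidates
-          else random.choice(list(brokers)))
-print(chosen)   # broker-1: more headroom outweighs its higher raw rate
-
-```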
-
diff --git a/site2/website-next/versioned_docs/version-2.8.1/developing-tools.md b/site2/website-next/versioned_docs/version-2.8.1/developing-tools.md
deleted file mode 100644
index b545779..0000000
--- a/site2/website-next/versioned_docs/version-2.8.1/developing-tools.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-id: develop-tools
-title: Simulation tools
-sidebar_label: "Simulation tools"
-original_id: develop-tools
----
-
-It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
-handle the load. The load simulation controller, the load simulation client, and the broker monitor were created to
-make it easier to create this load and observe its effects on the load managers.
-
-## Simulation Client
-The simulation client is a machine that creates and subscribes to topics with configurable message rates and sizes.
-Because simulating a large load sometimes requires multiple client machines, the user does not interact
-with the simulation client directly, but instead delegates requests to the simulation controller, which then
-sends signals to clients to start incurring load. The client implementation is in the class
-`org.apache.pulsar.testclient.LoadSimulationClient`.
-
-### Usage
-To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
-
-```
-
-pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
-
-```
-
-The client will then be ready to receive controller commands.
-
-## Simulation Controller
-
-The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
-topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class
-`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface for sending
-commands.
-
-### Usage
-To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:
-
-```
-
-pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for clients>
---clients <comma-separated list of client host names>
-
-```
-
-The clients should already be started before the controller is started. You will then be presented with a simple prompt,
-where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
-names. In all cases, the BASE name of the tenants, namespaces, and topics are used. For example, for the topic
-`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
-`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:
-
-* Create a topic with a producer and a consumer
-  * `trade <tenant> <namespace> <topic> [--rate <message rate per second>]
-  [--rand-rate <lower bound>,<upper bound>]
-  [--size <message size in bytes>]`
-* Create a group of topics with a producer and a consumer
-  * `trade_group <tenant> <group> <num_namespaces> [--rate <message rate per second>]
-  [--rand-rate <lower bound>,<upper bound>]
-  [--separation <separation between creating topics in ms>] [--size <message size in bytes>]
-  [--topics-per-namespace <number of topics to create per namespace>]`
-* Change the configuration of an existing topic
-  * `change <tenant> <namespace> <topic> [--rate <message rate per second>]
-  [--rand-rate <lower bound>,<upper bound>]
-  [--size <message size in bytes>]`
-* Change the configuration of a group of topics
-  * `change_group <tenant> <group> [--rate <message rate per second>] [--rand-rate <lower bound>,<upper bound>]
-  [--size <message size in bytes>] [--topics-per-namespace <number of topics to create per namespace>]`
-* Shutdown a previously created topic
-  * `stop <tenant> <namespace> <topic>`
-* Shutdown a previously created group of topics
-  * `stop_group <tenant> <group>`
-* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that history
-  * `copy <tenant> <source zookeeper> <target zookeeper> [--rate-multiplier value]`
-* Simulate the load of the historical data on the current ZooKeeper (should be same ZooKeeper being simulated on)
-  * `simulate <tenant> <zookeeper> [--rate-multiplier value]`
-* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
-  * `stream <tenant> <zookeeper> [--rate-multiplier value]`
-
-The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
-when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
-with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
-`zookeeper_host:port`.
-
-### Difference Between Copy, Simulate, and Stream
-The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
-you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
-`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
-simulating on, and then it will get the full benefit of the historical data of the source in both load manager
-implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on. It assumes
-that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent
-historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the
-clients. Finally, `stream` takes in an active ZooKeeper different from the ZooKeeper being simulated on and streams
-load data from it to simulate the real-time load. In all cases, the optional `rate-multiplier` argument allows the
-user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to
-be sent at only `5%` of the rate of the load that is being simulated.
-
-## Broker Monitor
-To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is
-implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor will print tabular load data to the
-console as it is updated using watchers.
-
-### Usage
-To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:
-
-```
-
-pulsar-perf monitor-brokers --connect-string <zookeeper host:port>
-
-```
-
-The console will then continuously print load data until it is interrupted.
-
diff --git a/site2/website-next/versioned_docs/version-2.8.1/getting-started-docker.md b/site2/website-next/versioned_docs/version-2.8.1/getting-started-docker.md
deleted file mode 100644
index 05ac2a1..0000000
--- a/site2/website-next/versioned_docs/version-2.8.1/getting-started-docker.md
+++ /dev/null
@@ -1,179 +0,0 @@
----
-id: standalone-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: standalone-docker
----
-
-For local development and testing, you can run Pulsar in standalone
-mode on your own machine within a Docker container.
-
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
-and follow the instructions for your OS.
-
-## Start Pulsar in Docker
-
-* For MacOS, Linux, and Windows:
-
-  ```shell
-  
-  $ docker run -it \
-  -p 6650:6650 \
-  -p 8080:8080 \
-  --mount source=pulsardata,target=/pulsar/data \
-  --mount source=pulsarconf,target=/pulsar/conf \
-  apachepulsar/pulsar:@pulsar:version@ \
-  bin/pulsar standalone
-  
-  ```
-
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes in order not to start "fresh" every
-time the container is restarted. For details on the volumes, you can use `docker volume inspect <sourcename>`.
- * For Docker on Windows, make sure to configure it to use Linux containers.
-
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```
-
-2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
-2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
-...
-
-```
-
-:::tip
-
-When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces.
-For more information, see [Topics](concepts-messaging.md#topics).
-
-:::
-
-## Use Pulsar in Docker
-
-Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python) 
-and [C++](client-libraries-cpp). If you're running a local standalone cluster, you can
-use one of these root URLs to interact with your cluster:
-
-* `pulsar://localhost:6650`
-* `http://localhost:8080`
-
-The following example guides you through getting started with Pulsar quickly by using the [Python](client-libraries-python)
-client API.
-
-Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):
-
-```shell
-
-$ pip install pulsar-client
-
-```
-
-### Consume a message
-
-Create a consumer and subscribe to the topic:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('my-topic',
-                            subscription_name='my-sub')
-
-while True:
-    msg = consumer.receive()
-    print("Received message: '%s'" % msg.data())
-    consumer.acknowledge(msg)
-
-client.close()
-
-```
-
-### Produce a message
-
-Now start a producer to send some test messages:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-producer = client.create_producer('my-topic')
-
-for i in range(10):
-    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))
-
-client.close()
-
-```
-
-## Get the topic statistics
-
-In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
-For details on APIs, refer to [Admin API Overview](admin-api-overview).
-
-In the simplest example, you can use curl to probe the stats for a particular topic:
-
-```shell
-
-$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
-
-```
-
-The output is something like this:
-
-```json
-
-{
-  "averageMsgSize": 0.0,
-  "msgRateIn": 0.0,
-  "msgRateOut": 0.0,
-  "msgThroughputIn": 0.0,
-  "msgThroughputOut": 0.0,
-  "publishers": [
-    {
-      "address": "/172.17.0.1:35048",
-      "averageMsgSize": 0.0,
-      "clientVersion": "1.19.0-incubating",
-      "connectedSince": "2017-08-09 20:59:34.621+0000",
-      "msgRateIn": 0.0,
-      "msgThroughputIn": 0.0,
-      "producerId": 0,
-      "producerName": "standalone-0-1"
-    }
-  ],
-  "replication": {},
-  "storageSize": 16,
-  "subscriptions": {
-    "my-sub": {
-      "blockedSubscriptionOnUnackedMsgs": false,
-      "consumers": [
-        {
-          "address": "/172.17.0.1:35064",
-          "availablePermits": 996,
-          "blockedConsumerOnUnackedMsgs": false,
-          "clientVersion": "1.19.0-incubating",
-          "connectedSince": "2017-08-09 21:05:39.222+0000",
-          "consumerName": "166111",
-          "msgRateOut": 0.0,
-          "msgRateRedeliver": 0.0,
-          "msgThroughputOut": 0.0,
-          "unackedMessages": 0
-        }
-      ],
-      "msgBacklog": 0,
-      "msgRateExpired": 0.0,
-      "msgRateOut": 0.0,
-      "msgRateRedeliver": 0.0,
-      "msgThroughputOut": 0.0,
-      "type": "Exclusive",
-      "unackedMessages": 0
-    }
-  }
-}
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.8.1/getting-started-helm.md b/site2/website-next/versioned_docs/version-2.8.1/getting-started-helm.md
deleted file mode 100644
index bbbd307..0000000
--- a/site2/website-next/versioned_docs/version-2.8.1/getting-started-helm.md
+++ /dev/null
@@ -1,438 +0,0 @@
----
-id: kubernetes-helm
-title: Get started in Kubernetes
-sidebar_label: "Run Pulsar in Kubernetes"
-original_id: kubernetes-helm
----
-
-This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes, including the following tasks:
-
-- Install Apache Pulsar on Kubernetes using Helm
-- Start and stop Apache Pulsar
-- Create topics using `pulsar-admin`
-- Produce and consume messages using Pulsar clients
-- Monitor Apache Pulsar status with Prometheus and Grafana
-
-For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy).
-
-## Prerequisite
-
-- Kubernetes server 1.14.0+
-- kubectl 1.14.0+
-- Helm 3.0+
-
-:::tip
-
-For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**.
-
-:::
-
-## Step 0: Prepare a Kubernetes cluster
-
-Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare) to prepare a Kubernetes cluster.
-
-We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps:
-
-1. Create a Kubernetes cluster on Minikube.
-
-   ```bash
-   
-   minikube start --memory=8192 --cpus=4 --kubernetes-version=<k8s-version>
-   
-   ```
-
-   The `<k8s-version>` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`.
-
-2. Set `kubectl` to use Minikube.
-
-   ```bash
-   
-   kubectl config use-context minikube
-   
-   ```
-
-3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below:
-
-   ```bash
-   
-   minikube dashboard
-   
-   ```
-
-   The command automatically opens a webpage in your browser.
-
-## Step 1: Install Pulsar Helm chart
-
-0. Add Pulsar charts repo.
-
-   ```bash
-   
-   helm repo add apache https://pulsar.apache.org/charts
-   
-   ```
-
-   ```bash
-   
-   helm repo update
-   
-   ```
-
-1. Clone the Pulsar Helm chart repository.
-
-   ```bash
-   
-   git clone https://github.com/apache/pulsar-helm-chart
-   cd pulsar-helm-chart
-   
-   ```
-
-2. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager.
-
-   ```bash
-   
-   ./scripts/pulsar/prepare_helm_release.sh \
-       -n pulsar \
-       -k pulsar-mini \
-       -c
-   
-   ```
-
-3. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes.
-
-   > **NOTE**  
-   > You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar.
-
-   ```bash
-   
-   helm install \
-       --values examples/values-minikube.yaml \
-       --set initialize=true \
-       --namespace pulsar \
-       pulsar-mini apache/pulsar
-   
-   ```
-
-4. Check the status of all pods.
-
-   ```bash
-   
-   kubectl get pods -n pulsar
-   
-   ```
-
-   If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`.
-
-   **Output**
-
-   ```bash
-   
-   NAME                                         READY   STATUS      RESTARTS   AGE
-   pulsar-mini-bookie-0                         1/1     Running     0          9m27s
-   pulsar-mini-bookie-init-5gphs                0/1     Completed   0          9m27s
-   pulsar-mini-broker-0                         1/1     Running     0          9m27s
-   pulsar-mini-grafana-6b7bcc64c7-4tkxd         1/1     Running     0          9m27s
-   pulsar-mini-prometheus-5fcf5dd84c-w8mgz      1/1     Running     0          9m27s
-   pulsar-mini-proxy-0                          1/1     Running     0          9m27s
-   pulsar-mini-pulsar-init-t7cqt                0/1     Completed   0          9m27s
-   pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs   1/1     Running     0          9m27s
-   pulsar-mini-toolset-0                        1/1     Running     0          9m27s
-   pulsar-mini-zookeeper-0                      1/1     Running     0          9m27s
-   
-   ```
-
-5. Check the status of all services in the namespace `pulsar`.
-
-   ```bash
-   
-   kubectl get services -n pulsar
-   
-   ```
-
-   **Output**
-
-   ```bash
-   
-   NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
-   pulsar-mini-bookie           ClusterIP      None             <none>        3181/TCP,8000/TCP             11m
-   pulsar-mini-broker           ClusterIP      None             <none>        8080/TCP,6650/TCP             11m
-   pulsar-mini-grafana          LoadBalancer   10.106.141.246   <pending>     3000:31905/TCP                11m
-   pulsar-mini-prometheus       ClusterIP      None             <none>        9090/TCP                      11m
-   pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   11m
-   pulsar-mini-pulsar-manager   LoadBalancer   10.103.192.175   <pending>     9527:30190/TCP                11m
-   pulsar-mini-toolset          ClusterIP      None             <none>        <none>                        11m
-   pulsar-mini-zookeeper        ClusterIP      None             <none>        2888/TCP,3888/TCP,2181/TCP    11m
-   
-   ```
-
-## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics
-
-`pulsar-admin` is the CLI (Command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics.
-
-1. Enter the `toolset` container.
-
-   ```bash
-   
-   kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash
-   
-   ```
-
-2. In the `toolset` container, create a tenant named `apache`.
-
-   ```bash
-   
-   bin/pulsar-admin tenants create apache
-   
-   ```
-
-   Then you can list the tenants to see if the tenant is created successfully.
-
-   ```bash
-   
-   bin/pulsar-admin tenants list
-   
-   ```
-
-   You should see output similar to the following. The tenant `apache` has been successfully created. 
-
-   ```bash
-   
-   "apache"
-   "public"
-   "pulsar"
-   
-   ```
-
-3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`.
-
-   ```bash
-   
-   bin/pulsar-admin namespaces create apache/pulsar
-   
-   ```
-
-   Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully.
-
-   ```bash
-   
-   bin/pulsar-admin namespaces list apache
-   
-   ```
-
-   You should see output similar to the following. The namespace `apache/pulsar` has been successfully created. 
-
-   ```bash
-   
-   "apache/pulsar"
-   
-   ```
-
-4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4
-   
-   ```
-
-5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   bin/pulsar-admin topics list-partitioned-topics apache/pulsar
-   
-   ```
-
-   Then you can see all the partitioned topics in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   "persistent://apache/pulsar/test-topic"
-   
-   ```
-
-## Step 3: Use Pulsar client to produce and consume messages
-
-You can use the Pulsar client to create producers and consumers to produce and consume messages.
-
-By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service.
-
-```bash
-
-kubectl get services -n pulsar | grep pulsar-mini-proxy
-
-```
-
-You will see output similar to the following.
-
-```bash
-
-pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   28m
-
-```
-
-This output shows the node ports to which the Pulsar cluster's binary port and HTTP port are mapped. The port after `80:` is the HTTP port, while the port after `6650:` is the binary port.
-
-Then you can find the IP address and exposed ports of your Minikube server by running the following command.
-
-```bash
-
-minikube service pulsar-mini-proxy -n pulsar
-
-```
-
-**Output**
-
-```bash
-
-|-----------|-------------------|-------------|-------------------------|
-| NAMESPACE |       NAME        | TARGET PORT |           URL           |
-|-----------|-------------------|-------------|-------------------------|
-| pulsar    | pulsar-mini-proxy | http/80     | http://172.17.0.4:32305 |
-|           |                   | pulsar/6650 | http://172.17.0.4:31816 |
-|-----------|-------------------|-------------|-------------------------|
-🏃  Starting tunnel for service pulsar-mini-proxy.
-|-----------|-------------------|-------------|------------------------|
-| NAMESPACE |       NAME        | TARGET PORT |          URL           |
-|-----------|-------------------|-------------|------------------------|
-| pulsar    | pulsar-mini-proxy |             | http://127.0.0.1:61853 |
-|           |                   |             | http://127.0.0.1:61854 |
-|-----------|-------------------|-------------|------------------------|
-
-```
-
-At this point, you can get the service URLs for your Pulsar client to connect to. Here are URL examples:
-
-```
-
-webServiceUrl=http://127.0.0.1:61853/
-brokerServiceUrl=pulsar://127.0.0.1:61854/
-
-```
-
-Then you can proceed with the following steps:
-
-1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/en/download/).
-
-2. Decompress the tarball based on your download file.
-
-   ```bash
-   
-   tar -xf <file-name>.tar.gz
-   
-   ```
-
-3. Expose `PULSAR_HOME`.
-
-   (1) Enter the directory of the decompressed download file.
-
-   (2) Expose `PULSAR_HOME` as the environment variable.
-
-   ```bash
-   
-   export PULSAR_HOME=$(pwd)
-   
-   ```
-
-4. Configure the Pulsar client.
-
-   In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps.
-
-5. Create a subscription to consume messages from `apache/pulsar/test-topic`.
-
-   ```bash
-   
-   bin/pulsar-client consume -s sub apache/pulsar/test-topic  -n 0
-   
-   ```
-
-6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic.
-
-   ```bash
-   
-   bin/pulsar-client produce apache/pulsar/test-topic  -m "---------hello apache pulsar-------" -n 10
-   
-   ```
-
-7. Verify the results.
-
-   - From the producer side
-
-       **Output**
-       
-       The messages have been produced successfully.
-
-       ```bash
-       
-       18:15:15.489 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced
-       
-       ```
-
-   - From the consumer side
-
-       **Output**
-
-       At the same time, you can receive the messages as below.
-
-       ```bash
-       
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       
-       ```
-
-## Step 4: Use Pulsar Manager to manage the cluster
-
-[Pulsar Manager](administration-pulsar-manager) is a web-based GUI tool for managing and monitoring Pulsar.
-
-1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command:
-
-   ```bash
-   
-   minikube service -n pulsar pulsar-mini-pulsar-manager
-   
-   ```
-
-2. The Pulsar Manager UI opens in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager.
-
-3. In Pulsar Manager UI, you can create an environment. 
-
-   - Click `New Environment` button in the top-left corner.
-   - Type `pulsar-mini` for the field `Environment Name` in the popup window.
-   - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window.
-   - Click `Confirm` button in the popup window.
-
-4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces`, and `topics` using the Pulsar Manager.
-
-## Step 5: Use Prometheus and Grafana to monitor cluster
-
-Grafana is an open-source visualization tool that can be used to visualize time series data in dashboards.
-
-1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command:
-
-   ```bash
-   
-   minikube service pulsar-mini-grafana -n pulsar
-   
-   ```
-
-2. The Grafana UI opens in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard.
-
-3. You can view dashboards for different components of a Pulsar cluster.
diff --git a/site2/website-next/versioned_docs/version-2.8.1/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.8.1/getting-started-standalone.md
deleted file mode 100644
index c2da381..0000000
--- a/site2/website-next/versioned_docs/version-2.8.1/getting-started-standalone.md
+++ /dev/null
@@ -1,272 +0,0 @@
----
-slug: /
-id: standalone
-title: Set up a standalone Pulsar locally
-sidebar_label: "Run Pulsar locally"
-original_id: standalone
----
-
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components, all running inside a single Java Virtual Machine (JVM) process.
-
-> #### Pulsar in production? 
-> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal) guide.
-
-## Install Pulsar standalone
-
-This tutorial guides you through every step of the installation process.
-
-### System requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
-
-:::tip
-
-By default, Pulsar allocates 2G of JVM heap memory to start. You can change this in the `conf/pulsar_env.sh` file under `PULSAR_MEM`, which contains extra options passed to the JVM. 
-
-:::
-
-:::note
-
-Broker is only supported on 64-bit JVM.
-
-:::
-
-### Install Pulsar using binary release
-
-To get started with Pulsar, download a binary tarball release in one of the following ways:
-
-* download from the Apache mirror (<a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>)
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)  
-  
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-  
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:binary_release_url
-  
-  ```
-
-After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-#### What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
-`examples` | A Java JAR file containing [Pulsar Functions](functions-overview) example.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
-
-These directories are created once you begin running Pulsar.
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`instances` | Artifacts created for [Pulsar Functions](functions-overview).
-`logs` | Logs created by the installation.
-
-:::tip
-
-If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
-* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
-* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
-Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
-
-:::
-
-### Install builtin connectors (optional)
-
-Since `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
-
-* download from the Apache mirror <a href="pulsar:connector_release_url" download>Pulsar IO Connectors @pulsar:version@ release</a>
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
-  
-  ```
-
-After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. 
-For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
-
-```bash
-
-$ mkdir connectors
-$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-...
-
-```
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure `connectors` tarball is unzipped in every pulsar directory of the broker
-(or in every pulsar directory of function-worker if you are running a separate worker cluster for Pulsar Functions).
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-### Install tiered storage offloaders (optional)
-
-:::tip
-
-Since `2.2.0` release, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
-To enable tiered storage feature, follow the instructions below; otherwise skip this section.
-
-:::
-
-To get started with [tiered storage offloaders](concepts-tiered-storage), you need to download the offloaders tarball release on every broker node in one of the following ways:
-
-* download from the Apache mirror <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders @pulsar:version@ release</a>
-
-* download from the Pulsar [downloads page](pulsar:download_page_url)
-
-* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
-
-* use [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:offloader_release_url
-  
-  ```
-
-After you download the tarball, untar the offloaders package and copy the offloaders as `offloaders`
-in the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
-
-// you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory
-// then copy the offloaders
-
-$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
-
-$ ls offloaders
-tiered-storage-jcloud-@pulsar:version@.nar
-
-```
-
-For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage).
-
-:::note
-
-* If you are running Pulsar in a bare metal cluster, make sure that `offloaders` tarball is unzipped in every broker's pulsar directory.
-* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
-you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
-
-:::
-
-## Start Pulsar standalone
-
-Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode.
-
-```bash
-
-$ bin/pulsar standalone
-
-```
-
-If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```bash
-
-2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Configuration Store cache started
-2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
-2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
-
-```
-
-:::tip
-
-* The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window.  
-
-:::
-
-You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. Please check the [Security Overview](security-overview) document to secure your deployment.
->
-> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
-
-## Use Pulsar standalone
-
-Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster. 
-
-### Consume a message
-
-The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
-
-```bash
-
-$ bin/pulsar-client consume my-topic -s "first-subscription"
-
-```
-
-If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-09:56:55.566 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
-
-```
-
-:::tip
-
-As you may have noticed, we do not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist will automatically create that topic for you as well.
-
-:::
-
-### Produce a message
-
-The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
-
-```bash
-
-$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
-
-```
-
-If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
-
-```
-
-13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
-
-```
-
-## Stop Pulsar standalone
-
-Press `Ctrl+C` to stop a local standalone Pulsar.
-
-:::tip
-
-If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone`  command to stop the service.
-For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
-
-:::
-
diff --git a/site2/website-next/versioned_docs/version-2.8.1/standalone.md b/site2/website-next/versioned_docs/version-2.8.1/standalone.md
index 05ac2a1..c2da381 100644
--- a/site2/website-next/versioned_docs/version-2.8.1/standalone.md
+++ b/site2/website-next/versioned_docs/version-2.8.1/standalone.md
@@ -1,179 +1,272 @@
 ---
-id: standalone-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: standalone-docker
+slug: /
+id: standalone
+title: Set up a standalone Pulsar locally
+sidebar_label: "Run Pulsar locally"
+original_id: standalone
 ---
 
-For local development and testing, you can run Pulsar in standalone
-mode on your own machine within a Docker container.
+For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components, all running inside a single Java Virtual Machine (JVM) process.
 
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
-and follow the instructions for your OS.
+> #### Pulsar in production? 
+> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal) guide.
 
-## Start Pulsar in Docker
+## Install Pulsar standalone
 
-* For MacOS, Linux, and Windows:
+This tutorial guides you through every step of the installation process.
+
+### System requirements
+
+Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
+
+:::tip
+
+By default, Pulsar allocates 2G of JVM heap memory to start. You can change this in the `conf/pulsar_env.sh` file under `PULSAR_MEM`, which contains extra options passed to the JVM. 
+
+:::
+
+:::note
+
+Broker is only supported on 64-bit JVM.
+
+:::
+
+### Install Pulsar using binary release
+
+To get started with Pulsar, download a binary tarball release in one of the following ways:
+
+* download from the Apache mirror (<a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>)
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)  
+  
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+  
+* use [wget](https://www.gnu.org/software/wget):
 
   ```shell
   
-  $ docker run -it \
-  -p 6650:6650 \
-  -p 8080:8080 \
-  --mount source=pulsardata,target=/pulsar/data \
-  --mount source=pulsarconf,target=/pulsar/conf \
-  apachepulsar/pulsar:@pulsar:version@ \
-  bin/pulsar standalone
+  $ wget pulsar:binary_release_url
   
   ```
 
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every 
-time the container is restarted. For details on the volumes you can use `docker volume inspect <sourcename>`
- * For Docker on Windows make sure to configure it to use Linux containers
+After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
+
+```bash
 
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
+$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
+$ cd apache-pulsar-@pulsar:version@
 
 ```
 
-2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
-2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
+#### What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
+`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
+`examples` | A Java JAR file containing [Pulsar Functions](functions-overview) example.
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
+`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
+
+These directories are created once you begin running Pulsar.
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory used by ZooKeeper and BookKeeper.
+`instances` | Artifacts created for [Pulsar Functions](functions-overview).
+`logs` | Logs created by the installation.
+
+:::tip
+
+If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
+* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
+* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
+Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
+
+:::
+
+### Install builtin connectors (optional)
+
+Since `2.1.0-incubating` release, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
+To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:connector_release_url" download>Pulsar IO Connectors @pulsar:version@ release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
+
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
+  
+  ```
+
+After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. 
+For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
+
+```bash
+
+$ mkdir connectors
+$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
+
+$ ls connectors
+pulsar-io-aerospike-@pulsar:version@.nar
 ...
 
 ```
 
+:::note
+
+* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped into every broker's pulsar directory
+(or into every function worker's pulsar directory if you are running a separate worker cluster for Pulsar Functions).
+* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
+you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles [all builtin connectors](io-overview.md#working-with-connectors).
+
+:::
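+
+Once your standalone cluster is running (see [Start Pulsar standalone](#start-pulsar-standalone) below), you can check which builtin connectors the broker picked up from the `connectors` directory. The commands below are only a quick sketch and assume a locally running cluster with the default admin endpoint:
+
+```bash
+
+# List the builtin source and sink connectors the broker has loaded
+$ bin/pulsar-admin sources available-sources
+$ bin/pulsar-admin sinks available-sinks
+
+```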
+
+### Install tiered storage offloaders (optional)
+
 :::tip
 
-When you start a local standalone cluster, a `public/default`
+Since the `2.2.0` release, Pulsar provides a separate binary distribution that contains the tiered storage offloaders.
+To enable the tiered storage feature, follow the instructions below; otherwise, skip this section.
 
 :::
 
-namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces.
-For more information, see [Topics](concepts-messaging.md#topics).
+To get started with [tiered storage offloaders](concepts-tiered-storage), you need to download the offloaders tarball release on every broker node in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders @pulsar:version@ release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
 
-## Use Pulsar in Docker
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:offloader_release_url
+  
+  ```
 
-Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python) 
-and [C++](client-libraries-cpp). If you're running a local standalone cluster, you can
-use one of these root URLs to interact with your cluster:
+After you download the tarball, untar the offloaders package and copy the extracted `offloaders`
+directory into the pulsar directory:
 
-* `pulsar://localhost:6650`
-* `http://localhost:8080`
+```bash
 
-The following example will guide you get started with Pulsar quickly by using the [Python](client-libraries-python)
-client API.
+$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
 
-Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):
+# you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the pulsar directory,
+# then copy the offloaders
 
-```shell
+$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
 
-$ pip install pulsar-client
+$ ls offloaders
+tiered-storage-jcloud-@pulsar:version@.nar
 
 ```
 
-### Consume a message
+For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage).
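+
+As an illustration only: after you configure an offload driver in `conf/standalone.conf` (or `conf/broker.conf`) as described in the cookbook, you can set a size threshold at which topic data in a namespace is offloaded to tiered storage. The namespace and threshold below are placeholder values:
+
+```bash
+
+# Offload topic data to tiered storage once it exceeds roughly 10 GB (example value)
+$ bin/pulsar-admin namespaces set-offload-threshold --size 10G public/default
+
+```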
+
+:::note
 
-Create a consumer and subscribe to the topic:
+* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped into every broker's pulsar directory.
+* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
+you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles the tiered storage offloaders.
 
-```python
+:::
 
-import pulsar
+## Start Pulsar standalone
 
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('my-topic',
-                            subscription_name='my-sub')
+Once you have an up-to-date local copy of the release, you can start a local cluster by using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode.
 
-while True:
-    msg = consumer.receive()
-    print("Received message: '%s'" % msg.data())
-    consumer.acknowledge(msg)
+```bash
 
-client.close()
+$ bin/pulsar standalone
 
 ```
 
-### Produce a message
+If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
+
+```bash
+
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Configuration Store cache started
+2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
 
-Now start a producer to send some test messages:
+```
 
-```python
+:::tip
 
-import pulsar
+* The service runs in your terminal, under your direct control. If you need to run other commands, open a new terminal window.
 
-client = pulsar.Client('pulsar://localhost:6650')
-producer = client.create_producer('my-topic')
+:::
+
+You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
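+
+For example, a minimal sketch of running standalone in the background, assuming you run it from the pulsar directory:
+
+```bash
+
+# Start standalone Pulsar as a background process
+$ bin/pulsar-daemon start standalone
+
+# Logs for the daemonized process are written to the logs/ directory
+$ ls logs
+
+```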
+
+> * By default, no encryption, authentication, or authorization is configured. Apache Pulsar can be accessed from a remote server without any authorization. Check the [Security Overview](security-overview) document to secure your deployment.
+>
+> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
+
+## Use Pulsar standalone
+
+Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster.
+
+### Consume a message
 
-for i in range(10):
-    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))
+The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
 
-client.close()
+```bash
+
+$ bin/pulsar-client consume my-topic -s "first-subscription"
+
+```
+
+If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
 
 ```
 
-## Get the topic statistics
+09:56:55.566 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
+
+```
+
+:::tip
+
+You may have noticed that we did not explicitly create the `my-topic` topic that we consumed the message from. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist also creates that topic automatically.
+
+:::
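+
+If you want to confirm that the topic was created automatically, you can list the topics in the `public/default` namespace. This is only a quick check and assumes the standalone cluster used throughout this guide:
+
+```bash
+
+# my-topic should appear in the list after the consume command above
+$ bin/pulsar-admin topics list public/default
+
+```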
+
+### Produce a message
+
+The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
 
-In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
-For details on APIs, refer to [Admin API Overview](admin-api-overview).
+```bash
 
-In the simplest example, you can use curl to probe the stats for a particular topic:
+$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
 
-```shell
+```
 
-$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
+If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
 
 ```
 
-The output is something like this:
-
-```json
-
-{
-  "averageMsgSize": 0.0,
-  "msgRateIn": 0.0,
-  "msgRateOut": 0.0,
-  "msgThroughputIn": 0.0,
-  "msgThroughputOut": 0.0,
-  "publishers": [
-    {
-      "address": "/172.17.0.1:35048",
-      "averageMsgSize": 0.0,
-      "clientVersion": "1.19.0-incubating",
-      "connectedSince": "2017-08-09 20:59:34.621+0000",
-      "msgRateIn": 0.0,
-      "msgThroughputIn": 0.0,
-      "producerId": 0,
-      "producerName": "standalone-0-1"
-    }
-  ],
-  "replication": {},
-  "storageSize": 16,
-  "subscriptions": {
-    "my-sub": {
-      "blockedSubscriptionOnUnackedMsgs": false,
-      "consumers": [
-        {
-          "address": "/172.17.0.1:35064",
-          "availablePermits": 996,
-          "blockedConsumerOnUnackedMsgs": false,
-          "clientVersion": "1.19.0-incubating",
-          "connectedSince": "2017-08-09 21:05:39.222+0000",
-          "consumerName": "166111",
-          "msgRateOut": 0.0,
-          "msgRateRedeliver": 0.0,
-          "msgThroughputOut": 0.0,
-          "unackedMessages": 0
-        }
-      ],
-      "msgBacklog": 0,
-      "msgRateExpired": 0.0,
-      "msgRateOut": 0.0,
-      "msgRateRedeliver": 0.0,
-      "msgThroughputOut": 0.0,
-      "type": "Exclusive",
-      "unackedMessages": 0
-    }
-  }
-}
+13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
 
 ```
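+
+To dig a little deeper, you can also probe the statistics of the topic you just produced to through the admin REST API. The sketch below assumes the default web service port (8080); the exact output fields may vary by version:
+
+```bash
+
+# Query the stats of my-topic and pretty-print the JSON response
+$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
+
+```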
 
+## Stop Pulsar standalone
+
+Press `Ctrl+C` to stop a local standalone Pulsar.
+
+:::tip
+
+If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
+For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
+
+:::
+
diff --git a/site2/website-next/versioned_docs/version-2.8.2/client-libraries.md b/site2/website-next/versioned_docs/version-2.8.2/client-libraries.md
index c79f7bb..23e5a06 100644
--- a/site2/website-next/versioned_docs/version-2.8.2/client-libraries.md
+++ b/site2/website-next/versioned_docs/version-2.8.2/client-libraries.md
@@ -1,579 +1,35 @@
 ---
-id: client-libraries-cgo
-title: Pulsar CGo client
-sidebar_label: "CGo(deprecated)"
-original_id: client-libraries-cgo
+id: client-libraries
+title: Pulsar client libraries
+sidebar_label: "Overview"
+original_id: client-libraries
 ---
 
-You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+Pulsar supports the following client libraries:
 
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
+- [Java client](client-libraries-java)
+- [Go client](client-libraries-go)
+- [Python client](client-libraries-python)
+- [C++ client](client-libraries-cpp)
+- [Node.js client](client-libraries-node)
+- [WebSocket client](client-libraries-websocket)
+- [C# client](client-libraries-dotnet)
 
-Currently, the following Go clients are maintained in two repositories.
+## Feature matrix
+The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
 
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library |
-| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-
-> **API docs available as well**  
-> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
-
-## Installation
-
-### Requirements
-
-Pulsar Go client library is based on the C++ client library. Follow
-the instructions for [C++ library](client-libraries-cpp) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos).
-
-### Install go package
-
-> **Compatibility Warning**  
-> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
-
-You can install the `pulsar` library locally using `go get`.  Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client.  You'll need a C++ client library that matches master.
-
-```bash
-
-$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
-
-```
-
-Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
-
-```bash
-
-$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@
-
-```
-
-Once installed locally, you can import it into your project:
-
-```go
-
-import "github.com/apache/pulsar/pulsar-client-go/pulsar"
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication) authentication, the URL will look like something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
-
-```go
-
-import (
-    "log"
-    "runtime"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-        OperationTimeoutSeconds: 5,
-        MessageListenerThreads: runtime.NumCPU(),
-    })
-
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-}
-
-```
-
-The following configurable parameters are available for Pulsar clients:
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info |
-`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
-`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
-`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
-`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
-`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
-`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
-`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
-`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
-`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Hello, Pulsar"),
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Producer could not send message: %v", err)
-}
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
-`SendAndGetMsgID(context.Context, ProducerMessage)`| Send a message, this call will be blocking until is successfully acknowledged by the Pulsar broker. | (MessageID, error)
-`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
-`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Send a message in asynchronous mode. The callback will report back the message being published and the eventual error in publishing | 
-`LastSequenceID()` | Get the last sequence id that was published by this producer. his represent either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
-`Schema()` | | Schema
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("the %s successfully published", string(msg.Payload))
-        })
-    }
-}
-
-```
-
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`Properties`| Attach a set of application defined properties to the producer. This properties will be visible in the topic stats | 
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `puls [...]
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (where the ), i.e. a function signature of `func(Message, TopicMetadata) int`. |
-`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
-`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non zero value, messages will be queued until this time interval or until | 1ms
-`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or batch interval has elapsed | 1000
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
+## Third-party clients
 
-consumer, err := client.Subscribe(consumerOpts)
+Besides the officially released clients, multiple third-party Pulsar client projects are available in different languages.
 
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s", msg.ID())
-    fmt.Printf("Message value: %s", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubcribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking can only be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`AckCumulativeID(MessageID)` |Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledge has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
-`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
-`NackID(MessageID)` | Acknowledge the failure to process a single message. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error
-
-#### Receive example
-
-Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client object to instantiate a consumer
-    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-        Topic:            "my-golang-topic",
-        SubscriptionName: "sub-1",
-        Type: pulsar.Exclusive,
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    defer consumer.Close()
-
-    ctx := context.Background()
-
-    // Listen indefinitely on the topic
-    for {
-        msg, err := consumer.Receive(ctx)
-        if err != nil { log.Fatal(err) }
-
-        // Do something with the message
-        err = processMessage(msg)
-
-        if err == nil {
-            // Message processed successfully
-            consumer.Ack(msg)
-        } else {
-            // Failed to process messages
-            consumer.Nack(msg)
-        }
-    }
-}
-
-```
-
-### Consumer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
-`Topics` | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics or a topics pattern are required when subscribing |
-`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics or a topics pattern are required when subscribing |
-`SubscriptionName` | The subscription name for this consumer |
-`Properties` | Attach a set of application defined properties to the consumer. This properties will be visible in the topic stats|
-`Name` | The name of the consumer |
-`AckTimeout` | Set the timeout for unacked messages | 0
-`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
-`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
-`SubscriptionInitPos` | InitialPosition at which the cursor will be set when subscribe | Latest
-`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
-`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
-`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic: "my-golang-topic",
-    StartMessageId: pulsar.LatestMessage,
-})
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
-
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`HasNext()` | Check if there is any message available to read from the current position| (bool, error)
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-
-#### "Next" example
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatalf("Could not create client: %v", err) }
-
-    // Use the client to instantiate a reader
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "my-golang-topic",
-        StartMessageID: pulsar.EarliestMessage,
-    })
-
-    if err != nil { log.Fatalf("Could not create reader: %v", err) }
-
-    defer reader.Close()
-
-    ctx := context.Background()
-
-    // Listen on the topic for incoming messages
-    for {
-        msg, err := reader.Next(ctx)
-        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
-
-        // Process the message
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-lastSavedId := // Read last saved message id from external store as byte[]
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: DeserializeMessageID(lastSavedId),
-})
-
-```
-
-### Reader configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages 
-`Name` | The name of the reader 
-`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
-`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to producer on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.send(msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following methods parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | Value and payload is mutually exclusive, `Value interface{}` for schema message.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence id to assign to the current message
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport), you'll need to configure your client to do so:
-
- * Use `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## Schema
-
-This example shows how to create a producer and consumer with schema.
-
-```go
-
-var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
-    		"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
-jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
-// create producer
-producer, err := client.CreateProducerWithSchema(ProducerOptions{
-	Topic: "jsonTopic",
-}, jsonSchema)
-err = producer.Send(context.Background(), ProducerMessage{
-	Value: &testJson{
-		ID:   100,
-		Name: "pulsar",
-	},
-})
-if err != nil {
-	log.Fatal(err)
-}
-defer producer.Close()
-//create consumer
-var s testJson
-consumerJS := NewJsonSchema(exampleSchemaDef, nil)
-consumer, err := client.SubscribeWithSchema(ConsumerOptions{
-	Topic:            "jsonTopic",
-	SubscriptionName: "sub-2",
-}, consumerJS)
-if err != nil {
-	log.Fatal(err)
-}
-msg, err := consumer.Receive(context.Background())
-if err != nil {
-	log.Fatal(err)
-}
-err = msg.GetValue(&s)
-if err != nil {
-	log.Fatal(err)
-}
-fmt.Println(s.ID) // output: 100
-fmt.Println(s.Name) // output: pulsar
-defer consumer.Close()
-
-```
+> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
 
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | 
+| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell | 
+| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
+| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
+| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
+| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
diff --git a/site2/website-next/versioned_docs/version-2.8.2/developing-binary-protocol.md b/site2/website-next/versioned_docs/version-2.8.2/developing-binary-protocol.md
deleted file mode 100644
index b233f10..0000000
--- a/site2/website-next/versioned_docs/version-2.8.2/developing-binary-protocol.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: develop-binary-protocol
-title: Pulsar binary protocol specification
-sidebar_label: "Binary protocol"
-original_id: develop-binary-protocol
----
-
-Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency.
-
-Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below.
-
-> ### Connection sharing
-> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction.
-
-All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand.
-
-## Framing
-
-Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB.
-
-The Pulsar protocol allows for two types of commands:
-
-1. **Simple commands** that do not carry a message payload.
-2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers.
-
-> Message payloads are passed in raw format rather than protobuf format for efficiency reasons.
-
-### Simple commands
-
-Simple (payload-free) commands have this basic structure:
-
-| Component   | Description                                                                             | Size (in bytes) |
-|:------------|:----------------------------------------------------------------------------------------|:----------------|
-| totalSize   | The size of the frame, counting everything that comes after it (in bytes)               | 4               |
-| commandSize | The size of the protobuf-serialized command                                             | 4               |
-| message     | The protobuf message serialized in a raw binary format (rather than in protobuf format) |                 |
-
-### Payload commands
-
-Payload commands have this basic structure:
-
-| Component    | Description                                                                                 | Size (in bytes) |
-|:-------------|:--------------------------------------------------------------------------------------------|:----------------|
-| totalSize    | The size of the frame, counting everything that comes after it (in bytes)                   | 4               |
-| commandSize  | The size of the protobuf-serialized command                                                 | 4               |
-| message      | The protobuf message serialized in a raw binary format (rather than in protobuf format)     |                 |
-| magicNumber  | A 2-byte byte array (`0x0e01`) identifying the current format                               | 2               |
-| checksum     | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4               |
-| metadataSize | The size of the message [metadata](#message-metadata)                                       | 4               |
-| metadata     | The message [metadata](#message-metadata) stored as a binary protobuf message               |                 |
-| payload      | Anything left in the frame is considered the payload and can include any sequence of bytes  |                 |
-
-## Message metadata
-
-Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer.
-
-| Field                                | Description                                                                                                                                                                                                                                               |
-|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `producer_name`                      | The name of the producer that published the message                                                                                                                                                                                         |
-| `sequence_id`                        | The sequence ID of the message, assigned by producer                                                                                                                                                                                        |
-| `publish_time`                       | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC)                                                                                                                                                    |
-| `properties`                         | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
-| `replicated_from` *(optional)*       | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published                                                                                                             |
-| `partition_key` *(optional)*         | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose                                                                                                                          |
-| `compression` *(optional)*           | Signals that payload has been compressed and with which compression library                                                                                                                                                                               |
-| `uncompressed_size` *(optional)*     | If compression is used, the producer must fill the uncompressed size field with the original payload size                                                                                                                                                 |
-| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch                                                                                                                   |
-
-### Batch messages
-
-When using batch messages, the payload will be containing a list of entries,
-each of them with its individual metadata, defined by the `SingleMessageMetadata`
-object.
-
-
-For a single batch, the payload format will look like this:
-
-
-| Field         | Description                                                 |
-|:--------------|:------------------------------------------------------------|
-| metadataSizeN | The size of the single message metadata serialized Protobuf |
-| metadataN     | Single message metadata                                     |
-| payloadN      | Message payload passed by application                       |
-
-Each metadata field looks like this;
-
-| Field                      | Description                                             |
-|:---------------------------|:--------------------------------------------------------|
-| properties                 | Application-defined properties                          |
-| partition key *(optional)* | Key to indicate the hashing to a particular partition   |
-| payload_size               | Size of the payload for the single message in the batch |
-
-When compression is enabled, the whole batch will be compressed at once.
-
-## Interactions
-
-### Connection establishment
-
-After opening a TCP connection to a broker, typically on port 6650, the client
-is responsible to initiate the session.
-
-![Connect interaction](/assets/binary-protocol-connect.png)
-
-After receiving a `Connected` response from the broker, the client can
-consider the connection ready to use. Alternatively, if the broker doesn't
-validate the client authentication, it will reply with an `Error` command and
-close the TCP connection.
-
-Example:
-
-```protobuf
-
-message CommandConnect {
-  "client_version" : "Pulsar-Client-Java-v1.15.2",
-  "auth_method_name" : "my-authentication-plugin",
-  "auth_data" : "my-auth-data",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `client_version` → String based identifier. Format is not enforced
- * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
-   enabled
- * `auth_data` → *(optional)* Plugin specific authentication data
- * `protocol_version` → Indicates the protocol version supported by the
-   client. Broker will not send commands introduced in newer revisions of the
-   protocol. Broker might be enforcing a minimum version
-
-```protobuf
-
-message CommandConnected {
-  "server_version" : "Pulsar-Broker-v1.15.2",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `server_version` → String identifier of broker version
- * `protocol_version` → Protocol version supported by the broker. Client
-   must not attempt to send commands introduced in newer revisions of the
-   protocol
-
-### Keep Alive
-
-To identify prolonged network partitions between clients and brokers or cases
-in which a machine crashes without interrupting the TCP connection on the remote
-end (eg: power outage, kernel panic, hard reboot...), we have introduced a
-mechanism to probe for the availability status of the remote peer.
-
-Both clients and brokers are sending `Ping` commands periodically and they will
-close the socket if a `Pong` response is not received within a timeout (default
-used by broker is 60s).
-
-A valid implementation of a Pulsar client is not required to send the `Ping`
-probe, though it is required to promptly reply after receiving one from the
-broker in order to prevent the remote side from forcibly closing the TCP connection.
-
-
-### Producer
-
-In order to send messages, a client needs to establish a producer. When creating
-a producer, the broker will first verify that this particular client is
-authorized to publish on the topic.
-
-Once the client gets confirmation of the producer creation, it can publish
-messages to the broker, referring to the producer id negotiated before.
-
-![Producer interaction](/assets/binary-protocol-producer.png)
-
-##### Command Producer
-
-```protobuf
-
-message CommandProducer {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "producer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete topic name to where you want to create the producer on
- * `producer_id` → Client generated producer identifier. Needs to be unique
-    within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-    the originating request. Needs to be unique within the same connection
- * `producer_name` → *(optional)* If a producer name is specified, the name will
-    be used, otherwise the broker will generate a unique name. Generated
-    producer name is guaranteed to be globally unique. Implementations are
-    expected to let the broker generate a new producer name when the producer
-    is initially created, then reuse it when recreating the producer after
-    reconnections.
-
-The broker will reply with either `ProducerSuccess` or `Error` commands.
-
-##### Command ProducerSuccess
-
-```protobuf
-
-message CommandProducerSuccess {
-  "request_id" :  1,
-  "producer_name" : "generated-unique-producer-name"
-}
-
-```
-
-Parameters:
- * `request_id` → Original id of the `CreateProducer` request
- * `producer_name` → Generated globally unique producer name or the name
-    specified by the client, if any.
-
-##### Command Send
-
-Command `Send` is used to publish a new message within the context of an
-already existing producer. This command is used in a frame that includes command
-as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.
-
-```protobuf
-
-message CommandSend {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "num_messages" : 1
-}
-
-```
-
-Parameters:
- * `producer_id` → id of an existing producer
- * `sequence_id` → each message has an associated sequence id which is expected
-   to be implemented with a counter starting at 0. The `SendReceipt` that
-   acknowledges the effective publishing of a messages will refer to it by
-   its sequence id.
- * `num_messages` → *(optional)* Used when publishing a batch of messages at
-   once.
-
-##### Command SendReceipt
-
-After a message has been persisted on the configured number of replicas, the
-broker will send the acknowledgment receipt to the producer.
-
-```protobuf
-
-message CommandSendReceipt {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `producer_id` → id of producer originating the send request
- * `sequence_id` → sequence id of the published message
- * `message_id` → message id assigned by the system to the published message.
-   Unique within a single cluster. Message id is composed of 2 longs, `ledgerId`
-   and `entryId`, that reflect that this unique id is assigned when appending
-   to a BookKeeper ledger.
-
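-For illustration, the send/receipt round trip is easiest to observe with an
-asynchronous publish, where the client surfaces the acknowledged message id to a
-callback. A minimal sketch with the Python client (the wire-level `Send` and
-`SendReceipt` commands are handled by the library; names below are examples):
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-producer = client.create_producer('persistent://public/default/my-topic')
-
-def on_receipt(result, msg_id):
-    # Invoked once the broker has persisted the message and sent SendReceipt.
-    print('result: %s, message id: %s' % (result, msg_id))
-
-for i in range(10):
-    producer.send_async(('msg-%d' % i).encode('utf-8'), on_receipt)
-
-producer.flush()  # wait for all outstanding receipts
-client.close()
-
-```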
-
-##### Command CloseProducer
-
-**Note**: *This command can be sent by either producer or broker*.
-
-When receiving a `CloseProducer` command, the broker will stop accepting any
-more messages for the producer, wait until all pending messages are persisted
-and then reply `Success` to the client.
-
-The broker can send a `CloseProducer` command to the client when it is performing
-a graceful failover (e.g. the broker is being restarted, or the topic is being unloaded
-by the load balancer to be transferred to a different broker).
-
-When receiving the `CloseProducer`, the client is expected to go through the
-service discovery lookup again and recreate the producer. The TCP
-connection is not affected.
-
-### Consumer
-
-A consumer is used to attach to a subscription and consume messages from it.
-After every reconnection, a client needs to subscribe to the topic. If a
-subscription is not already there, a new one will be created.
-
-![Consumer](/assets/binary-protocol-consumer.png)
-
-#### Flow control
-
-After the consumer is ready, the client needs to *give permission* to the
-broker to push messages. This is done with the `Flow` command.
-
-A `Flow` command gives additional *permits* to send messages to the consumer.
-A typical consumer implementation will use a queue to accumulate these messages
-before the application is ready to consume them.
-
-After the application has dequeued half of the messages in the queue, the consumer
-sends permits to the broker to ask for more messages (equal to half of the queue size).
-
-For example, if the queue size is 1000 and the application has consumed 500 messages
-from the queue, the consumer sends the broker permits for another 500 messages.
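-
-The queue size, and therefore the permits granted through `Flow`, is configured on
-the consumer. A minimal sketch with the Python client (the value of
-`receiver_queue_size` here is only an example):
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-
-# The consumer pre-fetches up to 1000 messages; the library issues Flow
-# commands as the application drains the queue with receive().
-consumer = client.subscribe('persistent://public/default/my-topic',
-                            subscription_name='my-sub',
-                            receiver_queue_size=1000)
-
-msg = consumer.receive()
-consumer.acknowledge(msg)
-
-client.close()
-
-```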
-
-##### Command Subscribe
-
-```protobuf
-
-message CommandSubscribe {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "subscription" : "my-subscription-name",
-  "subType" : "Exclusive",
-  "consumer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete topic name on which to create the consumer
- * `subscription` → Subscription name
- * `subType` → Subscription type: Exclusive, Shared, Failover, Key_Shared
- * `consumer_id` → Client generated consumer identifier. Needs to be unique
-    within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-    the originating request. Needs to be unique within the same connection
- * `consumer_name` → *(optional)* Clients can specify a consumer name. This
-    name can be used to track a particular consumer in the stats. Also, in
-    Failover subscription type, the name is used to decide which consumer is
-    elected as *master* (the one receiving messages): consumers are sorted by
-    their consumer name and the first one is elected master.
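-
-These fields map directly onto options of the high-level clients. As a hedged
-sketch with the Python client, subscribing with a Failover subscription and an
-explicit consumer name (topic and names are illustrative):
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-
-# subType = Failover; consumer_name takes part in master election.
-consumer = client.subscribe('persistent://public/default/my-topic',
-                            subscription_name='my-subscription-name',
-                            consumer_type=pulsar.ConsumerType.Failover,
-                            consumer_name='consumer-a')
-
-client.close()
-
-```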
-
-##### Command Flow
-
-```protobuf
-
-message CommandFlow {
-  "consumer_id" : 1,
-  "messagePermits" : 1000
-}
-
-```
-
-Parameters:
-* `consumer_id` → Id of an already established consumer
-* `messagePermits` → Number of additional permits to grant to the broker for
-  pushing more messages
-
-##### Command Message
-
-Command `Message` is used by the broker to push messages to an existing consumer,
-within the limits of the given permits.
-
-
-This command is used in a frame that includes the message payload as well, for
-which the complete format is specified in the [payload commands](#payload-commands)
-section.
-
-```protobuf
-
-message CommandMessage {
-  "consumer_id" : 1,
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-##### Command Ack
-
-An `Ack` is used to signal to the broker that a given message has been
-successfully processed by the application and can be discarded by the broker.
-
-In addition, the broker will also maintain the consumer position based on the
-acknowledged messages.
-
-```protobuf
-
-message CommandAck {
-  "consumer_id" : 1,
-  "ack_type" : "Individual",
-  "message_id" : {
-    "ledgerId" : 123,
-    "entryId" : 456
-  }
-}
-
-```
-
-Parameters:
- * `consumer_id` → Id of an already established consumer
- * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
- * `message_id` → Id of the message to acknowledge
- * `validation_error` → *(optional)* Indicates that the consumer has discarded
-   the messages due to: `UncompressedSizeCorruption`,
-   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
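-
-Both acknowledgment types are exposed directly by the client libraries. A short
-Python sketch (topic and subscription names are placeholders):
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('persistent://public/default/my-topic',
-                            subscription_name='my-sub')
-
-msg = consumer.receive()
-
-# ack_type = Individual: only this message is marked as processed.
-consumer.acknowledge(msg)
-
-# ack_type = Cumulative: would acknowledge this message and every earlier one
-# on the subscription (not permitted on Shared subscriptions).
-# consumer.acknowledge_cumulative(msg)
-
-client.close()
-
-```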
-
-##### Command CloseConsumer
-
-**Note**: *This command can be sent by either consumer or broker*.
-
-This command behaves the same as [`CloseProducer`](#command-closeproducer).
-
-##### Command RedeliverUnacknowledgedMessages
-
-A consumer can ask the broker to redeliver some or all of the pending messages
-that were pushed to that particular consumer and not yet acknowledged.
-
-The protobuf object accepts a list of message ids that the consumer wants to
-be redelivered. If the list is empty, the broker will redeliver all the
-pending messages.
-
-On redelivery, messages can be sent to the same consumer or, in the case of a
-shared subscription, spread across all available consumers.
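-
-In the high-level clients this is normally triggered through negative
-acknowledgments or an ack timeout rather than by building the command by hand.
-A hedged Python sketch (the library sends `RedeliverUnacknowledgedMessages` for
-the negatively acknowledged message ids; the delay value is an example):
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('persistent://public/default/my-topic',
-                            subscription_name='my-sub',
-                            negative_ack_redelivery_delay_ms=1000)
-
-msg = consumer.receive()
-
-# Processing failed: ask the broker to redeliver this message later.
-consumer.negative_acknowledge(msg)
-
-client.close()
-
-```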
-
-
-##### Command ReachedEndOfTopic
-
-This is sent by a broker to a particular consumer, whenever the topic
-has been "terminated" and all the messages on the subscription were
-acknowledged.
-
-The client should use this command to notify the application that no more
-messages will be delivered by this consumer.
-
-##### Command ConsumerStats
-
-This command is sent by the client to retrieve Subscriber and Consumer level
-stats from the broker.
-
-Parameters:
- * `request_id` → Id of the request, used to correlate the request 
-      and the response.
- * `consumer_id` → Id of an already established consumer.
-
-##### Command ConsumerStatsResponse
-
-This is the broker's response to ConsumerStats request by the client. 
-It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
-If the `error_code` or the `error_message` field is set it indicates that the request has failed.
-
-##### Command Unsubscribe
-
-This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
-
-Parameters:
- * `request_id` → Id of the request.
- * `consumer_id` → Id of an already established consumer which needs to unsubscribe.
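-
-In the client libraries this is exposed on the consumer object. A minimal Python
-sketch (topic and subscription names are placeholders):
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('persistent://public/default/my-topic',
-                            subscription_name='my-sub')
-
-# Sends the Unsubscribe command and removes the subscription from the topic.
-consumer.unsubscribe()
-
-client.close()
-
-```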
-
-
-## Service discovery
-
-### Topic lookup
-
-Topic lookup needs to be performed each time a client needs to create or
-reconnect a producer or a consumer. Lookup is used to discover which particular
-broker is serving the topic we are about to use.
-
-Lookup can be done with a REST call as described in the [admin API](admin-api-topics.md#lookup-of-topic)
-docs.
-
-Since Pulsar-1.16 it is also possible to perform the lookup within the binary
-protocol.
-
-For the sake of example, let's assume we have a service discovery component
-running at `pulsar://broker.example.com:6650`
-
-Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
-`pulsar://broker-2.example.com:6650`, ...
-
-A client can use a connection to the discovery service host to issue a
-`LookupTopic` command. The response can either be a broker hostname to
-connect to, or a broker hostname to which retry the lookup.
-
-The `LookupTopic` command has to be used in a connection that has already
-gone through the `Connect` / `Connected` initial handshake.
-
-![Topic lookup](/assets/binary-protocol-topic-lookup.png)
-
-```protobuf
-
-message CommandLookupTopic {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1,
-  "authoritative" : false
-}
-
-```
-
-Fields:
- * `topic` → Topic name to lookup
- * `request_id` → Id of the request that will be passed with its response
- * `authoritative` → Initial lookup request should use false. When following a
-   redirect response, client should pass the same value contained in the
-   response
-
-##### LookupTopicResponse
-
-Example of response with successful lookup:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Connect",
-  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-Example of lookup response with redirection:
-
-```protobuf
-
-message CommandLookupTopicResponse {
-  "request_id" : 1,
-  "response" : "Redirect",
-  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
-  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
-  "authoritative" : true
-}
-
-```
-
-In this second case, we need to reissue the `LookupTopic` command request
-to `broker-2.example.com` and this broker will be able to give a definitive
-answer to the lookup request.
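-
-Client libraries perform this lookup, and follow any redirects, automatically. To
-observe the result yourself, the same information is exposed through the REST
-lookup endpoint; a hedged sketch using Python's `requests` against a broker's HTTP
-port, with a v2-style topic name as an example:
-
-```python
-
-import requests
-
-# REST counterpart of the binary LookupTopic command.
-url = ('http://broker.example.com:8080'
-       '/lookup/v2/topic/persistent/public/default/my-topic')
-
-resp = requests.get(url)
-resp.raise_for_status()
-print(resp.json())  # JSON describing which broker serves the topic
-
-```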
-
-### Partitioned topics discovery
-
-Partitioned topics metadata discovery is used to find out if a topic is a
-"partitioned topic" and how many partitions were set up.
-
-If the topic is marked as "partitioned", the client is expected to create
-multiple producers or consumers, one for each partition, using the `partition-X`
-suffix.
-
-This information only needs to be retrieved the first time a producer or
-consumer is created. There is no need to do this after reconnections.
-
-The discovery of partitioned topics metadata works very similarly to the topic
-lookup. The client sends a request to the service discovery address and the
-response contains the actual metadata.
-
-##### Command PartitionedTopicMetadata
-
-```protobuf
-
-message CommandPartitionedTopicMetadata {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "request_id" : 1
-}
-
-```
-
-Fields:
- * `topic` → the topic for which to check the partitions metadata
- * `request_id` → Id of the request that will be passed with its response
-
-
-##### Command PartitionedTopicMetadataResponse
-
-Example of response with metadata:
-
-```protobuf
-
-message CommandPartitionedTopicMetadataResponse {
-  "request_id" : 1,
-  "response" : "Success",
-  "partitions" : 32
-}
-
-```
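-
-In the clients this metadata query happens automatically before producers or
-consumers are created, and it can also be issued explicitly. A minimal Python
-sketch (the topic name is an example, and it assumes a client version that
-exposes `get_topic_partitions`):
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-
-# Returns one entry per partition (using the partition-X suffix),
-# or a single entry if the topic is not partitioned.
-partitions = client.get_topic_partitions('persistent://public/default/my-topic')
-print(partitions)
-
-client.close()
-
-```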
-
-## Protobuf interface
-
-All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website-next/versioned_docs/version-2.8.2/developing-load-manager.md b/site2/website-next/versioned_docs/version-2.8.2/developing-load-manager.md
deleted file mode 100644
index 509209b..0000000
--- a/site2/website-next/versioned_docs/version-2.8.2/developing-load-manager.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-id: develop-load-manager
-title: Modular load manager
-sidebar_label: "Modular load manager"
-original_id: develop-load-manager
----
-
-The *modular load manager*, implemented in  [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load  [...]
-
-## Usage
-
-There are two ways that you can enable the modular load manager:
-
-1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
-2. Using the `pulsar-admin` tool. Here's an example:
-
-   ```shell
-   
-   $ pulsar-admin brokers update-dynamic-config \
-    --config loadManagerClassName \
-    --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
-   
-   ```
-
-   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
-
-## Verification
-
-There are a few different ways to determine which load manager is being used:
-
-1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
-
-   ```shell
-   
-   $ bin/pulsar-admin brokers get-all-dynamic-config
-   {
-    "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
-   }
-   
-   ```
-
-   If there is no `loadManagerClassName` element, then the default load manager is used.
-
-2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` will have many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
-
-   ```json
-   
-   {
-     "bandwidthIn": {
-       "limit": 10240000.0,
-       "usage": 4.256510416666667
-     },
-     "bandwidthOut": {
-       "limit": 10240000.0,
-       "usage": 5.287239583333333
-     },
-     "bundles": [],
-     "cpu": {
-       "limit": 2400.0,
-       "usage": 5.7353247655435915
-     },
-     "directMemory": {
-       "limit": 16384.0,
-       "usage": 1.0
-     }
-   }
-   
-   ```
-
-   With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
-
-   ```json
-   
-   {
-     "systemResourceUsage": {
-       "bandwidthIn": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "bandwidthOut": {
-         "limit": 10240000.0,
-         "usage": 0.0
-       },
-       "cpu": {
-         "limit": 2400.0,
-         "usage": 0.0
-       },
-       "directMemory": {
-         "limit": 16384.0,
-         "usage": 1.0
-       },
-       "memory": {
-         "limit": 8192.0,
-         "usage": 3903.0
-       }
-     }
-   }
-   
-   ```
-
-3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
-
-   Here is an example from the modular load manager:
-
-   ```
-   
-   ===================================================================================================================
-   ||SYSTEM         |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.00           |48.33          |0.01           |0.00           |0.00           |48.33          ||
-   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
-   ||               |4              |4              |0              |2              |4              |0              ||
-   ||LATEST         |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ||SHORT          |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ||LONG           |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
-   ===================================================================================================================
-   
-   ```
-
-   Here is an example from the simple load manager:
-
-   ```
-   
-   ===================================================================================================================
-   ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
-   ||               |4              |4              |0              |2              |0              |0              ||
-   ||RAW SYSTEM     |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.25           |47.94          |0.01           |0.00           |0.00           |47.94          ||
-   ||ALLOC SYSTEM   |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
-   ||               |0.20           |1.89           |               |1.27           |3.21           |3.21           ||
-   ||RAW MSG        |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |0.00           |0.00           |0.00           |0.01           |0.01           |0.01           ||
-   ||ALLOC MSG      |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
-   ||               |54.84          |134.48         |189.31         |126.54         |320.96         |447.50         ||
-   ===================================================================================================================
-   
-   ```
-
-It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
-
-## Implementation
-
-### Data
-
-The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
-Here, the available data is subdivided into the bundle data and the broker data.
-
-#### Broker
-
-The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
-one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
-data which is written to ZooKeeper by the leader broker.
-
-##### Local Broker Data
-The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
-
-* CPU usage
-* JVM heap memory usage
-* Direct memory usage
-* Bandwidth in/out usage
-* Most recent total message rate in/out across all bundles
-* Total number of topics, bundles, producers, and consumers
-* Names of all bundles assigned to this broker
-* Most recent changes in bundle assignments for this broker
-
-The local broker data is updated periodically according to the service configuration
-"loadBalancerReportUpdateMaxIntervalMinutes". After any broker updates their local broker data, the leader broker will
-receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
-`/loadbalance/brokers/<broker host/port>`
-
-##### Historical Broker Data
-
-The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
-
-In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
-
-* Message rate in/out for the entire broker
-* Message throughput in/out for the entire broker
-
-Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
-
-The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes their local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-##### Bundle Data
-
-The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java). Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. The information maintained in each time frame:
-
-* Message rate in/out for this bundle
-* Message Throughput In/Out for this bundle
-* Current number of samples for this bundle
-
-The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where
-the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval
-for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the
-short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
-data is similarly over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame,
-the average is taken only over the existing samples. When no samples are available, default values are assumed until
-they are overwritten by the first sample. Currently, the default values are
-
-* Message rate in/out: 50 messages per second both ways
-* Message throughput in/out: 50KB per second both ways
-
-The bundle data is updated in memory on the leader broker whenever any broker writes their local data to ZooKeeper.
-Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
-broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
-
-### Traffic Distribution
-
-The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](h [...]
-
-#### Least Long Term Message Rate Strategy
-
-As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
-the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
-on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
-resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
-assignment process. This is done by weighting the final message rate according to
-`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
-`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
-that is being utilized by the candidate broker. This multiplier ensures that machines that are being more heavily taxed
-by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded,
-then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload
-threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
-assigned.
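-
-To make the weighting concrete, here is a small, self-contained Python sketch of
-the scoring described above (broker names and numbers are made up; the actual
-implementation lives in `LeastLongTermMessageRate`):
-
-```python
-
-OVERLOAD_THRESHOLD = 0.85  # loadBalancerBrokerOverloadedThresholdPercentage / 100
-
-# (long-term message rate in+out, max system resource usage) per candidate broker
-brokers = {
-    'broker-1': (1200.0, 0.40),
-    'broker-2': (900.0, 0.70),
-    'broker-3': (1500.0, 0.90),  # above the threshold, so never considered
-}
-
-def weighted_rate(rate, max_usage):
-    # Higher resource usage inflates the effective rate, making the broker
-    # less attractive for new bundles.
-    return rate / (OVERLOAD_THRESHOLD - max_usage)
-
-candidates = {name: weighted_rate(rate, usage)
-              for name, (rate, usage) in brokers.items()
-              if usage < OVERLOAD_THRESHOLD}
-
-print(min(candidates, key=candidates.get))  # broker with the least weighted rate
-
-```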
-
diff --git a/site2/website-next/versioned_docs/version-2.8.2/developing-tools.md b/site2/website-next/versioned_docs/version-2.8.2/developing-tools.md
deleted file mode 100644
index b545779..0000000
--- a/site2/website-next/versioned_docs/version-2.8.2/developing-tools.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-id: develop-tools
-title: Simulation tools
-sidebar_label: "Simulation tools"
-original_id: develop-tools
----
-
-It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
-handle the load. The load simulation controller, the load simulation client, and the broker monitor were created as an
-effort to make creating this load and observing its effects on the managers easier.
-
-## Simulation Client
-The simulation client is a machine which will create and subscribe to topics with configurable message rates and sizes.
-Because simulating a large load sometimes requires multiple client machines, the user does not interact
-with the simulation client directly, but instead delegates their requests to the simulation controller, which will then
-send signals to clients to start incurring load. The client implementation is in the class
-`org.apache.pulsar.testclient.LoadSimulationClient`.
-
-### Usage
-To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
-
-```
-
-pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
-
-```
-
-The client will then be ready to receive controller commands.
-
-## Simulation Controller
-The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
-topics, change the load incurred by topics, as well as several other tasks. It is implemented in the class
-`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send
-commands with.
-
-### Usage
-To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:
-
-```
-
-pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for clients>
---clients <comma-separated list of client host names>
-
-```
-
-The clients should already be started before the controller is started. You will then be presented with a simple prompt,
-where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
-names. In all cases, the BASE name of the tenants, namespaces, and topics are used. For example, for the topic
-`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
-`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:
-
-* Create a topic with a producer and a consumer
-  * `trade <tenant> <namespace> <topic> [--rate <message rate per second>]
-  [--rand-rate <lower bound>,<upper bound>]
-  [--size <message size in bytes>]`
-* Create a group of topics with a producer and a consumer
-  * `trade_group <tenant> <group> <num_namespaces> [--rate <message rate per second>]
-  [--rand-rate <lower bound>,<upper bound>]
-  [--separation <separation between creating topics in ms>] [--size <message size in bytes>]
-  [--topics-per-namespace <number of topics to create per namespace>]`
-* Change the configuration of an existing topic
-  * `change <tenant> <namespace> <topic> [--rate <message rate per second>]
-  [--rand-rate <lower bound>,<upper bound>]
-  [--size <message size in bytes>]`
-* Change the configuration of a group of topics
-  * `change_group <tenant> <group> [--rate <message rate per second>] [--rand-rate <lower bound>,<upper bound>]
-  [--size <message size in bytes>] [--topics-per-namespace <number of topics to create per namespace>]`
-* Shutdown a previously created topic
-  * `stop <tenant> <namespace> <topic>`
-* Shutdown a previously created group of topics
-  * `stop_group <tenant> <group>`
-* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that history
-  * `copy <tenant> <source zookeeper> <target zookeeper> [--rate-multiplier value]`
-* Simulate the load of the historical data on the current ZooKeeper (should be same ZooKeeper being simulated on)
-  * `simulate <tenant> <zookeeper> [--rate-multiplier value]`
-* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
-  * `stream <tenant> <zookeeper> [--rate-multiplier value]`
-
-The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
-when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
-with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
-`zookeeper_host:port`.
-
-### Difference Between Copy, Simulate, and Stream
-The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
-you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
-`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
-simulating on, and then it will get the full benefit of the historical data of the source in both load manager
-implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on. It assumes
-that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent
-historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the
-clients. Finally, `stream` takes in an active ZooKeeper different than the ZooKeeper being simulated on and streams
-load data from it and simulates the real-time load. In all cases, the optional `rate-multiplier` argument allows the
-user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to
-be sent at only `5%` of the rate of the load that is being simulated.
-
-## Broker Monitor
-To observe the behavior of the load manager in these simulations, one may utilize the broker monitor, which is
-implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor will print tabular load data to the
-console as it is updated using watchers.
-
-### Usage
-To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:
-
-```
-
-pulsar-perf monitor-brokers --connect-string <zookeeper host:port>
-
-```
-
-The console will then continuously print load data until it is interrupted.
-
diff --git a/site2/website-next/versioned_docs/version-2.8.2/getting-started-docker.md b/site2/website-next/versioned_docs/version-2.8.2/getting-started-docker.md
deleted file mode 100644
index 05ac2a1..0000000
--- a/site2/website-next/versioned_docs/version-2.8.2/getting-started-docker.md
+++ /dev/null
@@ -1,179 +0,0 @@
----
-id: standalone-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: standalone-docker
----
-
-For local development and testing, you can run Pulsar in standalone
-mode on your own machine within a Docker container.
-
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
-and follow the instructions for your OS.
-
-## Start Pulsar in Docker
-
-* For MacOS, Linux, and Windows:
-
-  ```shell
-  
-  $ docker run -it \
-  -p 6650:6650 \
-  -p 8080:8080 \
-  --mount source=pulsardata,target=/pulsar/data \
-  --mount source=pulsarconf,target=/pulsar/conf \
-  apachepulsar/pulsar:@pulsar:version@ \
-  bin/pulsar standalone
-  
-  ```
-
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every
-time the container is restarted. For details on the volumes, you can use `docker volume inspect <sourcename>`.
- * For Docker on Windows, make sure to configure it to use Linux containers.
-
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
-
-```
-
-2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
-2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
-...
-
-```
-
-:::tip
-
-When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces.
-For more information, see [Topics](concepts-messaging.md#topics).
-
-:::
-
-## Use Pulsar in Docker
-
-Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python) 
-and [C++](client-libraries-cpp). If you're running a local standalone cluster, you can
-use one of these root URLs to interact with your cluster:
-
-* `pulsar://localhost:6650`
-* `http://localhost:8080`
-
-The following example guides you through getting started with Pulsar quickly by using the [Python](client-libraries-python)
-client API.
-
-Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):
-
-```shell
-
-$ pip install pulsar-client
-
-```
-
-### Consume a message
-
-Create a consumer and subscribe to the topic:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('my-topic',
-                            subscription_name='my-sub')
-
-while True:
-    msg = consumer.receive()
-    print("Received message: '%s'" % msg.data())
-    consumer.acknowledge(msg)
-
-client.close()
-
-```
-
-### Produce a message
-
-Now start a producer to send some test messages:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-producer = client.create_producer('my-topic')
-
-for i in range(10):
-    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))
-
-client.close()
-
-```
-
-## Get the topic statistics
-
-In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
-For details on APIs, refer to [Admin API Overview](admin-api-overview).
-
-In the simplest example, you can use curl to probe the stats for a particular topic:
-
-```shell
-
-$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
-
-```
-
-The output is something like this:
-
-```json
-
-{
-  "averageMsgSize": 0.0,
-  "msgRateIn": 0.0,
-  "msgRateOut": 0.0,
-  "msgThroughputIn": 0.0,
-  "msgThroughputOut": 0.0,
-  "publishers": [
-    {
-      "address": "/172.17.0.1:35048",
-      "averageMsgSize": 0.0,
-      "clientVersion": "1.19.0-incubating",
-      "connectedSince": "2017-08-09 20:59:34.621+0000",
-      "msgRateIn": 0.0,
-      "msgThroughputIn": 0.0,
-      "producerId": 0,
-      "producerName": "standalone-0-1"
-    }
-  ],
-  "replication": {},
-  "storageSize": 16,
-  "subscriptions": {
-    "my-sub": {
-      "blockedSubscriptionOnUnackedMsgs": false,
-      "consumers": [
-        {
-          "address": "/172.17.0.1:35064",
-          "availablePermits": 996,
-          "blockedConsumerOnUnackedMsgs": false,
-          "clientVersion": "1.19.0-incubating",
-          "connectedSince": "2017-08-09 21:05:39.222+0000",
-          "consumerName": "166111",
-          "msgRateOut": 0.0,
-          "msgRateRedeliver": 0.0,
-          "msgThroughputOut": 0.0,
-          "unackedMessages": 0
-        }
-      ],
-      "msgBacklog": 0,
-      "msgRateExpired": 0.0,
-      "msgRateOut": 0.0,
-      "msgRateRedeliver": 0.0,
-      "msgThroughputOut": 0.0,
-      "type": "Exclusive",
-      "unackedMessages": 0
-    }
-  }
-}
-
-```
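-
-The same endpoint can be queried from code. A short Python sketch using `requests`
-(illustrative only, against the standalone defaults above):
-
-```python
-
-import requests
-
-url = 'http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats'
-stats = requests.get(url).json()
-
-print(stats['msgRateIn'], stats['msgRateOut'])
-print(list(stats['subscriptions'].keys()))
-
-```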
-
diff --git a/site2/website-next/versioned_docs/version-2.8.2/getting-started-helm.md b/site2/website-next/versioned_docs/version-2.8.2/getting-started-helm.md
deleted file mode 100644
index bbbd307..0000000
--- a/site2/website-next/versioned_docs/version-2.8.2/getting-started-helm.md
+++ /dev/null
@@ -1,438 +0,0 @@
----
-id: kubernetes-helm
-title: Get started in Kubernetes
-sidebar_label: "Run Pulsar in Kubernetes"
-original_id: kubernetes-helm
----
-
-This section guides you through every step of installing and running Apache Pulsar with Helm on Kubernetes quickly, including the following sections:
-
-- Install the Apache Pulsar on Kubernetes using Helm
-- Start and stop Apache Pulsar
-- Create topics using `pulsar-admin`
-- Produce and consume messages using Pulsar clients
-- Monitor Apache Pulsar status with Prometheus and Grafana
-
-For deploying a Pulsar cluster for production usage, read the documentation on [how to configure and install a Pulsar Helm chart](helm-deploy).
-
-## Prerequisite
-
-- Kubernetes server 1.14.0+
-- kubectl 1.14.0+
-- Helm 3.0+
-
-:::tip
-
-For the following steps, step 2 and step 3 are for **developers** and step 4 and step 5 are for **administrators**.
-
-:::
-
-## Step 0: Prepare a Kubernetes cluster
-
-Before installing a Pulsar Helm chart, you have to create a Kubernetes cluster. You can follow [the instructions](helm-prepare) to prepare a Kubernetes cluster.
-
-We use [Minikube](https://minikube.sigs.k8s.io/docs/start/) in this quick start guide. To prepare a Kubernetes cluster, follow these steps:
-
-1. Create a Kubernetes cluster on Minikube.
-
-   ```bash
-   
-   minikube start --memory=8192 --cpus=4 --kubernetes-version=<k8s-version>
-   
-   ```
-
-   The `<k8s-version>` can be any [Kubernetes version supported by your Minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/), such as `v1.16.1`.
-
-2. Set `kubectl` to use Minikube.
-
-   ```bash
-   
-   kubectl config use-context minikube
-   
-   ```
-
-3. To use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with the local Kubernetes cluster on Minikube, enter the command below:
-
-   ```bash
-   
-   minikube dashboard
-   
-   ```
-
-   The command automatically triggers opening a webpage in your browser. 
-
-## Step 1: Install Pulsar Helm chart
-
-0. Add Pulsar charts repo.
-
-   ```bash
-   
-   helm repo add apache https://pulsar.apache.org/charts
-   
-   ```
-
-   ```bash
-   
-   helm repo update
-   
-   ```
-
-1. Clone the Pulsar Helm chart repository.
-
-   ```bash
-   
-   git clone https://github.com/apache/pulsar-helm-chart
-   cd pulsar-helm-chart
-   
-   ```
-
-2. Run the script `prepare_helm_release.sh` to create secrets required for installing the Apache Pulsar Helm chart. The username `pulsar` and password `pulsar` are used for logging into the Grafana dashboard and Pulsar Manager.
-
-   ```bash
-   
-   ./scripts/pulsar/prepare_helm_release.sh \
-       -n pulsar \
-       -k pulsar-mini \
-       -c
-   
-   ```
-
-3. Use the Pulsar Helm chart to install a Pulsar cluster to Kubernetes.
-
-   > **NOTE**  
-   > You need to specify `--set initialize=true` when installing Pulsar the first time. This command installs and starts Apache Pulsar.
-
-   ```bash
-   
-   helm install \
-       --values examples/values-minikube.yaml \
-       --set initialize=true \
-       --namespace pulsar \
-       pulsar-mini apache/pulsar
-   
-   ```
-
-4. Check the status of all pods.
-
-   ```bash
-   
-   kubectl get pods -n pulsar
-   
-   ```
-
-   If all pods start up successfully, you can see that the `STATUS` is changed to `Running` or `Completed`.
-
-   **Output**
-
-   ```bash
-   
-   NAME                                         READY   STATUS      RESTARTS   AGE
-   pulsar-mini-bookie-0                         1/1     Running     0          9m27s
-   pulsar-mini-bookie-init-5gphs                0/1     Completed   0          9m27s
-   pulsar-mini-broker-0                         1/1     Running     0          9m27s
-   pulsar-mini-grafana-6b7bcc64c7-4tkxd         1/1     Running     0          9m27s
-   pulsar-mini-prometheus-5fcf5dd84c-w8mgz      1/1     Running     0          9m27s
-   pulsar-mini-proxy-0                          1/1     Running     0          9m27s
-   pulsar-mini-pulsar-init-t7cqt                0/1     Completed   0          9m27s
-   pulsar-mini-pulsar-manager-9bcbb4d9f-htpcs   1/1     Running     0          9m27s
-   pulsar-mini-toolset-0                        1/1     Running     0          9m27s
-   pulsar-mini-zookeeper-0                      1/1     Running     0          9m27s
-   
-   ```
-
-5. Check the status of all services in the namespace `pulsar`.
-
-   ```bash
-   
-   kubectl get services -n pulsar
-   
-   ```
-
-   **Output**
-
-   ```bash
-   
-   NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
-   pulsar-mini-bookie           ClusterIP      None             <none>        3181/TCP,8000/TCP             11m
-   pulsar-mini-broker           ClusterIP      None             <none>        8080/TCP,6650/TCP             11m
-   pulsar-mini-grafana          LoadBalancer   10.106.141.246   <pending>     3000:31905/TCP                11m
-   pulsar-mini-prometheus       ClusterIP      None             <none>        9090/TCP                      11m
-   pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   11m
-   pulsar-mini-pulsar-manager   LoadBalancer   10.103.192.175   <pending>     9527:30190/TCP                11m
-   pulsar-mini-toolset          ClusterIP      None             <none>        <none>                        11m
-   pulsar-mini-zookeeper        ClusterIP      None             <none>        2888/TCP,3888/TCP,2181/TCP    11m
-   
-   ```
-
-## Step 2: Use pulsar-admin to create Pulsar tenants/namespaces/topics
-
-`pulsar-admin` is the CLI (command-Line Interface) tool for Pulsar. In this step, you can use `pulsar-admin` to create resources, including tenants, namespaces, and topics.
-
-1. Enter the `toolset` container.
-
-   ```bash
-   
-   kubectl exec -it -n pulsar pulsar-mini-toolset-0 -- /bin/bash
-   
-   ```
-
-2. In the `toolset` container, create a tenant named `apache`.
-
-   ```bash
-   
-   bin/pulsar-admin tenants create apache
-   
-   ```
-
-   Then you can list the tenants to see if the tenant is created successfully.
-
-   ```bash
-   
-   bin/pulsar-admin tenants list
-   
-   ```
-
-   You should see a similar output as below. The tenant `apache` has been successfully created. 
-
-   ```bash
-   
-   "apache"
-   "public"
-   "pulsar"
-   
-   ```
-
-3. In the `toolset` container, create a namespace named `pulsar` in the tenant `apache`.
-
-   ```bash
-   
-   bin/pulsar-admin namespaces create apache/pulsar
-   
-   ```
-
-   Then you can list the namespaces of tenant `apache` to see if the namespace is created successfully.
-
-   ```bash
-   
-   bin/pulsar-admin namespaces list apache
-   
-   ```
-
-   You should see a similar output as below. The namespace `apache/pulsar` has been successfully created. 
-
-   ```bash
-   
-   "apache/pulsar"
-   
-   ```
-
-4. In the `toolset` container, create a topic `test-topic` with `4` partitions in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4
-   
-   ```
-
-5. In the `toolset` container, list all the partitioned topics in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   bin/pulsar-admin topics list-partitioned-topics apache/pulsar
-   
-   ```
-
-   Then you can see all the partitioned topics in the namespace `apache/pulsar`.
-
-   ```bash
-   
-   "persistent://apache/pulsar/test-topic"
-   
-   ```
-
-## Step 3: Use Pulsar client to produce and consume messages
-
-You can use the Pulsar client to create producers and consumers to produce and consume messages.
-
-By default, the Pulsar Helm chart exposes the Pulsar cluster through a Kubernetes `LoadBalancer`. In Minikube, you can use the following command to check the proxy service.
-
-```bash
-
-kubectl get services -n pulsar | grep pulsar-mini-proxy
-
-```
-
-You will see a similar output as below.
-
-```bash
-
-pulsar-mini-proxy            LoadBalancer   10.97.240.109    <pending>     80:32305/TCP,6650:31816/TCP   28m
-
-```
-
-This output shows the node ports to which the Pulsar cluster's binary port and HTTP port are mapped. The port after `80:` is the HTTP port, while the port after `6650:` is the binary port.
-
-Then you can find the IP address and exposed ports of your Minikube server by running the following command.
-
-```bash
-
-minikube service pulsar-mini-proxy -n pulsar
-
-```
-
-**Output**
-
-```bash
-
-|-----------|-------------------|-------------|-------------------------|
-| NAMESPACE |       NAME        | TARGET PORT |           URL           |
-|-----------|-------------------|-------------|-------------------------|
-| pulsar    | pulsar-mini-proxy | http/80     | http://172.17.0.4:32305 |
-|           |                   | pulsar/6650 | http://172.17.0.4:31816 |
-|-----------|-------------------|-------------|-------------------------|
-🏃  Starting tunnel for service pulsar-mini-proxy.
-|-----------|-------------------|-------------|------------------------|
-| NAMESPACE |       NAME        | TARGET PORT |          URL           |
-|-----------|-------------------|-------------|------------------------|
-| pulsar    | pulsar-mini-proxy |             | http://127.0.0.1:61853 |
-|           |                   |             | http://127.0.0.1:61854 |
-|-----------|-------------------|-------------|------------------------|
-
-```
-
-At this point, you can get the service URLs to connect to your Pulsar client. Here are URL examples:
-
-```
-
-webServiceUrl=http://127.0.0.1:61853/
-brokerServiceUrl=pulsar://127.0.0.1:61854/
-
-```
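-
-If you want to verify connectivity from code before setting up the CLI, a minimal
-sketch with the Python client also works against these URLs (assuming
-`pip install pulsar-client`; the port is the tunnelled binary port shown in the
-example output above):
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://127.0.0.1:61854')
-producer = client.create_producer('persistent://apache/pulsar/test-topic')
-producer.send(b'hello apache pulsar')
-client.close()
-
-```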
-
-Then you can proceed with the following steps:
-
-1. Download the Apache Pulsar tarball from the [downloads page](https://pulsar.apache.org/en/download/).
-
-2. Decompress the tarball based on your download file.
-
-   ```bash
-   
-   tar -xf <file-name>.tar.gz
-   
-   ```
-
-3. Expose `PULSAR_HOME`.
-
-   (1) Enter the directory of the decompressed download file.
-
-   (2) Expose `PULSAR_HOME` as the environment variable.
-
-   ```bash
-   
-   export PULSAR_HOME=$(pwd)
-   
-   ```
-
-4. Configure the Pulsar client.
-
-   In the `${PULSAR_HOME}/conf/client.conf` file, replace `webServiceUrl` and `brokerServiceUrl` with the service URLs you get from the above steps.
-
-5. Create a subscription to consume messages from `apache/pulsar/test-topic`.
-
-   ```bash
-   
-   bin/pulsar-client consume -s sub apache/pulsar/test-topic  -n 0
-   
-   ```
-
-6. Open a new terminal. In the new terminal, create a producer and send 10 messages to the `test-topic` topic.
-
-   ```bash
-   
-   bin/pulsar-client produce apache/pulsar/test-topic  -m "---------hello apache pulsar-------" -n 10
-   
-   ```
-
-7. Verify the results.
-
-   - From the producer side
-
-       **Output**
-       
-       The messages have been produced successfully.
-
-       ```bash
-       
-       18:15:15.489 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 10 messages successfully produced
-       
-       ```
-
-   - From the consumer side
-
-       **Output**
-
-       At the same time, you can receive the messages as below.
-
-       ```bash
-       
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       ----- got message -----
-       ---------hello apache pulsar-------
-       
-       ```
-
-## Step 4: Use Pulsar Manager to manage the cluster
-
-[Pulsar Manager](administration-pulsar-manager) is a web-based GUI management tool for managing and monitoring Pulsar.
-
-1. By default, the `Pulsar Manager` is exposed as a separate `LoadBalancer`. You can open the Pulsar Manager UI using the following command:
-
-   ```bash
-   
-   minikube service -n pulsar pulsar-mini-pulsar-manager
-   
-   ```
-
-2. The Pulsar Manager UI will be open in your browser. You can use the username `pulsar` and password `pulsar` to log into Pulsar Manager.
-
-3. In Pulsar Manager UI, you can create an environment. 
-
-   - Click `New Environment` button in the top-left corner.
-   - Type `pulsar-mini` for the field `Environment Name` in the popup window.
-   - Type `http://pulsar-mini-broker:8080` for the field `Service URL` in the popup window.
-   - Click `Confirm` button in the popup window.
-
-4. After successfully creating an environment, you are redirected to the `tenants` page of that environment. Then you can create `tenants`, `namespaces`, and `topics` using the Pulsar Manager.
-
-## Step 5: Use Prometheus and Grafana to monitor cluster
-
-Grafana is an open-source visualization tool, which can be used for visualizing time series data in dashboards.
-
-1. By default, the Grafana is exposed as a separate `LoadBalancer`. You can open the Grafana UI using the following command:
-
-   ```bash
-   
-   minikube service pulsar-mini-grafana -n pulsar
-   
-   ```
-
-2. The Grafana UI is open in your browser. You can use the username `pulsar` and password `pulsar` to log into the Grafana Dashboard.
-
-3. You can view dashboards for different components of a Pulsar cluster.
diff --git a/site2/website-next/versioned_docs/version-2.8.2/standalone.md b/site2/website-next/versioned_docs/version-2.8.2/standalone.md
index 05ac2a1..c2da381 100644
--- a/site2/website-next/versioned_docs/version-2.8.2/standalone.md
+++ b/site2/website-next/versioned_docs/version-2.8.2/standalone.md
@@ -1,179 +1,272 @@
 ---
-id: standalone-docker
-title: Set up a standalone Pulsar in Docker
-sidebar_label: "Run Pulsar in Docker"
-original_id: standalone-docker
+slug: /
+id: standalone
+title: Set up a standalone Pulsar locally
+sidebar_label: "Run Pulsar locally"
+original_id: standalone
 ---
 
-For local development and testing, you can run Pulsar in standalone
-mode on your own machine within a Docker container.
+For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components running inside a single Java Virtual Machine (JVM) process.
 
-If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
-and follow the instructions for your OS.
+> #### Pulsar in production? 
+> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal) guide.
 
-## Start Pulsar in Docker
+## Install Pulsar standalone
 
-* For MacOS, Linux, and Windows:
+This tutorial guides you through every step of the installation process.
+
+### System requirements
+
+Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
+
+:::tip
+
+By default, Pulsar allocates 2G of JVM heap memory to start. This can be changed in the `conf/pulsar_env.sh` file under `PULSAR_MEM`, which holds extra options passed to the JVM.
+
+:::
+
+:::note
+
+Broker is only supported on 64-bit JVM.
+
+:::
+
+### Install Pulsar using binary release
+
+To get started with Pulsar, download a binary tarball release in one of the following ways:
+
+* download from the Apache mirror (<a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>)
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)  
+  
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+  
+* use [wget](https://www.gnu.org/software/wget):
 
   ```shell
   
-  $ docker run -it \
-  -p 6650:6650 \
-  -p 8080:8080 \
-  --mount source=pulsardata,target=/pulsar/data \
-  --mount source=pulsarconf,target=/pulsar/conf \
-  apachepulsar/pulsar:@pulsar:version@ \
-  bin/pulsar standalone
+  $ wget pulsar:binary_release_url
   
   ```
 
-A few things to note about this command:
- * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every 
-time the container is restarted. For details on the volumes you can use `docker volume inspect <sourcename>`
- * For Docker on Windows make sure to configure it to use Linux containers
+After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
+
+```bash
 
-If you start Pulsar successfully, you will see `INFO`-level log messages like this:
+$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
+$ cd apache-pulsar-@pulsar:version@
 
 ```
 
-2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
-2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
+#### What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
+`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
+`examples` | A Java JAR file containing [Pulsar Functions](functions-overview) example.
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
+`licenses` | License files, in the`.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
+
+These directories are created once you begin running Pulsar.
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory used by ZooKeeper and BookKeeper.
+`instances` | Artifacts created for [Pulsar Functions](functions-overview).
+`logs` | Logs created by the installation.
+
+:::tip
+
+If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
+* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
+* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
+Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
+
+:::
+
+### Install builtin connectors (optional)
+
+Since the `2.1.0-incubating` release, Pulsar releases a separate binary distribution containing all the `builtin` connectors.
+To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:connector_release_url" download>Pulsar IO Connectors @pulsar:version@ release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
+
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
+  
+  ```
+
+After you download the nar file, copy the file to the `connectors` directory in the pulsar directory. 
+For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
+
+```bash
+
+$ mkdir connectors
+$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
+
+$ ls connectors
+pulsar-io-aerospike-@pulsar:version@.nar
 ...
 
 ```
 
+:::note
+
+* If you are running Pulsar in a bare metal cluster, make sure that the `connectors` tarball is unzipped in the Pulsar directory of every broker
+(or in the Pulsar directory of every function worker, if you run a separate worker cluster for Pulsar Functions).
+* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a Docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
+you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles [all builtin connectors](io-overview.md#working-with-connectors).
+
+:::
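+
+As a minimal sketch of the Docker-based option, the following command starts a standalone cluster from the bundled `apachepulsar/pulsar-all` image. The published ports and the `latest` tag are assumptions; adjust them to match your deployment:
+
+```bash
+
+# Ports 6650 (broker) and 8080 (web service) and the `latest` tag are assumptions
+$ docker run -it -p 6650:6650 -p 8080:8080 apachepulsar/pulsar-all:latest bin/pulsar standalone
+
+```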
+
+### Install tiered storage offloaders (optional)
+
 :::tip
 
-When you start a local standalone cluster, a `public/default`
+Since the `2.2.0` release, Pulsar has provided a separate binary distribution that contains the tiered storage offloaders.
+To enable the tiered storage feature, follow the instructions below; otherwise, skip this section.
 
 :::
 
-namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces.
-For more information, see [Topics](concepts-messaging.md#topics).
+To get started with [tiered storage offloaders](concepts-tiered-storage), you need to download the offloaders tarball release on every broker node in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders @pulsar:version@ release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
 
-## Use Pulsar in Docker
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:offloader_release_url
+  
+  ```
 
-Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python) 
-and [C++](client-libraries-cpp). If you're running a local standalone cluster, you can
-use one of these root URLs to interact with your cluster:
+After you download the tarball, untar the offloaders package and copy the offloaders directory into the Pulsar directory as `offloaders`:
 
-* `pulsar://localhost:6650`
-* `http://localhost:8080`
+```bash
 
-The following example will guide you get started with Pulsar quickly by using the [Python](client-libraries-python)
-client API.
+$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
 
-Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):
+# you will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the Pulsar directory,
+# then copy the offloaders
 
-```shell
+$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
 
-$ pip install pulsar-client
+$ ls offloaders
+tiered-storage-jcloud-@pulsar:version@.nar
 
 ```
 
-### Consume a message
+For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage).
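+
+As a rough illustration only, the snippet below sketches what AWS S3 offload settings can look like in `conf/broker.conf` (or `conf/standalone.conf` for standalone mode). The bucket and region values are placeholders, and you should verify the property names against the cookbook for your release:
+
+```conf
+
+# Offload old data to AWS S3 (driver name and properties as documented in the tiered storage cookbook)
+managedLedgerOffloadDriver=aws-s3
+
+# Placeholder bucket and region; replace them with your own values
+s3ManagedLedgerOffloadBucket=my-pulsar-offload-bucket
+s3ManagedLedgerOffloadRegion=us-west-2
+
+```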
+
+:::note
 
-Create a consumer and subscribe to the topic:
+* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in the Pulsar directory of every broker.
+* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a Docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
+you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles the tiered storage offloaders.
 
-```python
+:::
 
-import pulsar
+## Start Pulsar standalone
 
-client = pulsar.Client('pulsar://localhost:6650')
-consumer = client.subscribe('my-topic',
-                            subscription_name='my-sub')
+Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, specifying that you want to start Pulsar in standalone mode.
 
-while True:
-    msg = consumer.receive()
-    print("Received message: '%s'" % msg.data())
-    consumer.acknowledge(msg)
+```bash
 
-client.close()
+$ bin/pulsar standalone
 
 ```
 
-### Produce a message
+If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
+
+```bash
+
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Configuration Store cache started
+2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
 
-Now start a producer to send some test messages:
+```
 
-```python
+:::tip
 
-import pulsar
+* The service runs in the foreground of your terminal and remains under your direct control. If you need to run other commands, open a new terminal window.
 
-client = pulsar.Client('pulsar://localhost:6650')
-producer = client.create_producer('my-topic')
+:::
+
+You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
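+
+For example, assuming you run the commands from the Pulsar directory:
+
+```bash
+
+# Start the standalone service in the background
+$ bin/pulsar-daemon start standalone
+
+# Stop the background service when you are done
+$ bin/pulsar-daemon stop standalone
+
+```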
+
+:::note
+
+* By default, no encryption, authentication, or authorization is configured, so Apache Pulsar can be accessed from a remote server without any authorization. See the [Security Overview](security-overview) document to secure your deployment.
+
+* When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
+
+:::
+
+## Use Pulsar standalone
+
+Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster.
+
+### Consume a message
 
-for i in range(10):
-    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))
+The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
 
-client.close()
+```bash
+
+$ bin/pulsar-client consume my-topic -s "first-subscription"
+
+```
+
+If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
 
 ```
 
-## Get the topic statistics
+09:56:55.566 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
+
+```
+
+:::tip
+
+Note that we do not explicitly create the `my-topic` topic from which we consume the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist also creates that topic automatically.
+
+:::
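+
+If you want to confirm which topics have been created automatically, you can list the topics in the `public/default` namespace with the bundled `pulsar-admin` tool. This assumes the standalone defaults:
+
+```bash
+
+$ bin/pulsar-admin topics list public/default
+
+```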
+
+### Produce a message
+
+The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
 
-In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
-For details on APIs, refer to [Admin API Overview](admin-api-overview).
+```bash
 
-In the simplest example, you can use curl to probe the stats for a particular topic:
+$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
 
-```shell
+```
 
-$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
+If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
 
 ```
 
-The output is something like this:
-
-```json
-
-{
-  "averageMsgSize": 0.0,
-  "msgRateIn": 0.0,
-  "msgRateOut": 0.0,
-  "msgThroughputIn": 0.0,
-  "msgThroughputOut": 0.0,
-  "publishers": [
-    {
-      "address": "/172.17.0.1:35048",
-      "averageMsgSize": 0.0,
-      "clientVersion": "1.19.0-incubating",
-      "connectedSince": "2017-08-09 20:59:34.621+0000",
-      "msgRateIn": 0.0,
-      "msgThroughputIn": 0.0,
-      "producerId": 0,
-      "producerName": "standalone-0-1"
-    }
-  ],
-  "replication": {},
-  "storageSize": 16,
-  "subscriptions": {
-    "my-sub": {
-      "blockedSubscriptionOnUnackedMsgs": false,
-      "consumers": [
-        {
-          "address": "/172.17.0.1:35064",
-          "availablePermits": 996,
-          "blockedConsumerOnUnackedMsgs": false,
-          "clientVersion": "1.19.0-incubating",
-          "connectedSince": "2017-08-09 21:05:39.222+0000",
-          "consumerName": "166111",
-          "msgRateOut": 0.0,
-          "msgRateRedeliver": 0.0,
-          "msgThroughputOut": 0.0,
-          "unackedMessages": 0
-        }
-      ],
-      "msgBacklog": 0,
-      "msgRateExpired": 0.0,
-      "msgRateOut": 0.0,
-      "msgRateRedeliver": 0.0,
-      "msgThroughputOut": 0.0,
-      "type": "Exclusive",
-      "unackedMessages": 0
-    }
-  }
-}
+13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
 
 ```
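+
+To inspect the topic further, you can also query its stats over the admin REST API. The sketch below assumes the standalone defaults, with the web service listening on `localhost:8080` and the topic living in the `public/default` namespace:
+
+```bash
+
+$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats
+
+```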
 
+## Stop Pulsar standalone
+
+Press `Ctrl+C` to stop a local standalone Pulsar.
+
+:::tip
+
+If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone` command to stop the service.
+For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
+
+:::
+
diff --git a/site2/website-next/versioned_docs/version-2.9.0/client-libraries.md b/site2/website-next/versioned_docs/version-2.9.0/client-libraries.md
index c79f7bb..997f426 100644
--- a/site2/website-next/versioned_docs/version-2.9.0/client-libraries.md
+++ b/site2/website-next/versioned_docs/version-2.9.0/client-libraries.md
@@ -1,579 +1,36 @@
 ---
-id: client-libraries-cgo
-title: Pulsar CGo client
-sidebar_label: "CGo(deprecated)"
-original_id: client-libraries-cgo
+id: client-libraries
+title: Pulsar client libraries
+sidebar_label: "Overview"
+original_id: client-libraries
 ---
 
-You can use Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+Pulsar supports the following client libraries:
 
-All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
+- [Java client](client-libraries-java)
+- [Go client](client-libraries-go)
+- [Python client](client-libraries-python)
+- [C++ client](client-libraries-cpp)
+- [Node.js client](client-libraries-node)
+- [WebSocket client](client-libraries-websocket)
+- [C# client](client-libraries-dotnet)
 
-Currently, the following Go clients are maintained in two repositories.
+## Feature matrix
+
+The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
 
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library |
-| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-
-> **API docs available as well**  
-> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
-
-## Installation
-
-### Requirements
-
-Pulsar Go client library is based on the C++ client library. Follow
-the instructions for [C++ library](client-libraries-cpp) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos).
-
-### Install go package
-
-> **Compatibility Warning**  
-> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
-
-You can install the `pulsar` library locally using `go get`.  Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client.  You'll need a C++ client library that matches master.
-
-```bash
-
-$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
-
-```
-
-Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
-
-```bash
-
-$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@
-
-```
-
-Once installed locally, you can import it into your project:
-
-```go
-
-import "github.com/apache/pulsar/pulsar-client-go/pulsar"
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication) authentication, the URL will look like something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Create a client
-
-In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
-
-```go
-
-import (
-    "log"
-    "runtime"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-        OperationTimeoutSeconds: 5,
-        MessageListenerThreads: runtime.NumCPU(),
-    })
-
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-}
-
-```
-
-The following configurable parameters are available for Pulsar clients:
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info |
-`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
-`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
-`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
-`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
-`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
-`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
-`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
-`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
-`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Hello, Pulsar"),
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Producer could not send message: %v", err)
-}
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
-`SendAndGetMsgID(context.Context, ProducerMessage)`| Send a message, this call will be blocking until is successfully acknowledged by the Pulsar broker. | (MessageID, error)
-`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
-`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Send a message in asynchronous mode. The callback will report back the message being published and the eventual error in publishing | 
-`LastSequenceID()` | Get the last sequence id that was published by this producer. his represent either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
-`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
-`Schema()` | | Schema
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("the %s successfully published", string(msg.Payload))
-        })
-    }
-}
-
-```
-
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`Properties`| Attach a set of application defined properties to the producer. This properties will be visible in the topic stats | 
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `puls [...]
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (where the ), i.e. a function signature of `func(Message, TopicMetadata) int`. |
-`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
-`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 1ms) if batch messages are enabled. If set to a non zero value, messages will be queued until this time interval or until | 1ms
-`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. (default: 1000) If set to a value greater than 1, messages will be queued until this threshold is reached or batch interval has elapsed | 1000
-
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
+## Third-party clients
 
-consumer, err := client.Subscribe(consumerOpts)
+Besides the officially released clients, multiple third-party projects for developing Pulsar clients are available in different languages.
 
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s", msg.ID())
-    fmt.Printf("Message value: %s", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubcribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking can only be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`AckCumulativeID(MessageID)` |Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledge has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
-`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
-`NackID(MessageID)` | Acknowledge the failure to process a single message. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error
-
-#### Receive example
-
-Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client object to instantiate a consumer
-    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-        Topic:            "my-golang-topic",
-        SubscriptionName: "sub-1",
-        Type: pulsar.Exclusive,
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    defer consumer.Close()
-
-    ctx := context.Background()
-
-    // Listen indefinitely on the topic
-    for {
-        msg, err := consumer.Receive(ctx)
-        if err != nil { log.Fatal(err) }
-
-        // Do something with the message
-        err = processMessage(msg)
-
-        if err == nil {
-            // Message processed successfully
-            consumer.Ack(msg)
-        } else {
-            // Failed to process messages
-            consumer.Nack(msg)
-        }
-    }
-}
-
-```
-
-### Consumer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
-`Topics` | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics or a topics pattern are required when subscribing |
-`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics or a topics pattern are required when subscribing |
-`SubscriptionName` | The subscription name for this consumer |
-`Properties` | Attach a set of application defined properties to the consumer. This properties will be visible in the topic stats|
-`Name` | The name of the consumer |
-`AckTimeout` | Set the timeout for unacked messages | 0
-`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
-`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
-`SubscriptionInitPos` | InitialPosition at which the cursor will be set when subscribe | Latest
-`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
-`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
-`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic: "my-golang-topic",
-    StartMessageId: pulsar.LatestMessage,
-})
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
-
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`HasNext()` | Check if there is any message available to read from the current position| (bool, error)
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-
-#### "Next" example
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatalf("Could not create client: %v", err) }
-
-    // Use the client to instantiate a reader
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "my-golang-topic",
-        StartMessageID: pulsar.EarliestMessage,
-    })
-
-    if err != nil { log.Fatalf("Could not create reader: %v", err) }
-
-    defer reader.Close()
-
-    ctx := context.Background()
-
-    // Listen on the topic for incoming messages
-    for {
-        msg, err := reader.Next(ctx)
-        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
-
-        // Process the message
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-lastSavedId := // Read last saved message id from external store as byte[]
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: DeserializeMessageID(lastSavedId),
-})
-
-```
-
-### Reader configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages 
-`Name` | The name of the reader 
-`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
-`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to producer on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.send(msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following methods parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Value` | Value and payload is mutually exclusive, `Value interface{}` for schema message.
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-`SequenceID` | Set the sequence id to assign to the current message
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport), you'll need to configure your client to do so:
-
- * Use `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
-## Schema
-
-This example shows how to create a producer and consumer with schema.
-
-```go
-
-var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
-    		"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
-jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
-// create producer
-producer, err := client.CreateProducerWithSchema(ProducerOptions{
-	Topic: "jsonTopic",
-}, jsonSchema)
-err = producer.Send(context.Background(), ProducerMessage{
-	Value: &testJson{
-		ID:   100,
-		Name: "pulsar",
-	},
-})
-if err != nil {
-	log.Fatal(err)
-}
-defer producer.Close()
-//create consumer
-var s testJson
-consumerJS := NewJsonSchema(exampleSchemaDef, nil)
-consumer, err := client.SubscribeWithSchema(ConsumerOptions{
-	Topic:            "jsonTopic",
-	SubscriptionName: "sub-2",
-}, consumerJS)
-if err != nil {
-	log.Fatal(err)
-}
-msg, err := consumer.Receive(context.Background())
-if err != nil {
-	log.Fatal(err)
-}
-err = msg.GetValue(&s)
-if err != nil {
-	log.Fatal(err)
-}
-fmt.Println(s.ID) // output: 100
-fmt.Println(s.Name) // output: pulsar
-defer consumer.Close()
-
-```
+> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
 
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | 
+| Haskell | [supernova](https://github.com/cr-org/supernova) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Native Pulsar client for Haskell | 
+| Scala | [neutron](https://github.com/cr-org/neutron) | [Chatroulette](https://github.com/cr-org) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Purely functional Apache Pulsar client for Scala built on top of Fs2 |
+| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
+| Rust | [pulsar-rs](https://github.com/wyyerd/pulsar-rs) | [Wyyerd Group](https://github.com/wyyerd) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Future-based Rust bindings for Apache Pulsar |
+| .NET | [pulsar-client-dotnet](https://github.com/fsharplang-ru/pulsar-client-dotnet) | [Lanayx](https://github.com/Lanayx) | [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native .NET client for C#/F#/VB |
+| Node.js | [pulsar-flex](https://github.com/ayeo-flex-org/pulsar-flex) | [Daniel Sinai](https://github.com/danielsinai), [Ron Farkash](https://github.com/ronfarkash), [Gal Rosenberg](https://github.com/galrose)| [![GitHub](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) | Native Node.js client |
diff --git a/site2/website-next/versioned_docs/version-2.9.0/developing-binary-protocol.md b/site2/website-next/versioned_docs/version-2.9.0/developing-binary-protocol.md
deleted file mode 100644
index 74ef751..0000000
--- a/site2/website-next/versioned_docs/version-2.9.0/developing-binary-protocol.md
+++ /dev/null
@@ -1,581 +0,0 @@
----
-id: develop-binary-protocol
-title: Pulsar binary protocol specification
-sidebar_label: "Binary protocol"
-original_id: develop-binary-protocol
----
-
-Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency.
-
-Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below.
-
-> ### Connection sharing
-> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction.
-
-All commands associated with Pulsar's protocol are contained in a [`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand.
-
-## Framing
-
-Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB.
-
-The Pulsar protocol allows for two types of commands:
-
-1. **Simple commands** that do not carry a message payload.
-2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers.
-
-> Message payloads are passed in raw format rather than protobuf format for efficiency reasons.
-
-### Simple commands
-
-Simple (payload-free) commands have this basic structure:
-
-| Component   | Description                                                                             | Size (in bytes) |
-|:------------|:----------------------------------------------------------------------------------------|:----------------|
-| totalSize   | The size of the frame, counting everything that comes after it (in bytes)               | 4               |
-| commandSize | The size of the protobuf-serialized command                                             | 4               |
-| message     | The protobuf message serialized in a raw binary format (rather than in protobuf format) |                 |
-
-### Payload commands
-
-Payload commands have this basic structure:
-
-| Component    | Description                                                                                 | Size (in bytes) |
-|:-------------|:--------------------------------------------------------------------------------------------|:----------------|
-| totalSize    | The size of the frame, counting everything that comes after it (in bytes)                   | 4               |
-| commandSize  | The size of the protobuf-serialized command                                                 | 4               |
-| message      | The protobuf message serialized in a raw binary format (rather than in protobuf format)     |                 |
-| magicNumber  | A 2-byte byte array (`0x0e01`) identifying the current format                               | 2               |
-| checksum     | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4               |
-| metadataSize | The size of the message [metadata](#message-metadata)                                       | 4               |
-| metadata     | The message [metadata](#message-metadata) stored as a binary protobuf message               |                 |
-| payload      | Anything left in the frame is considered the payload and can include any sequence of bytes  |                 |
-
-## Message metadata
-
-Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer.
-
-| Field                                | Description                                                                                                                                                                                                                                               |
-|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `producer_name`                      | The name of the producer that published the message                                                                                                                                                                                         |
-| `sequence_id`                        | The sequence ID of the message, assigned by producer                                                                                                                                                                                        |
-| `publish_time`                       | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC)                                                                                                                                                    |
-| `properties`                         | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
-| `replicated_from` *(optional)*       | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published                                                                                                             |
-| `partition_key` *(optional)*         | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose. Partition key is used as the message key.                                                                                                                          |
-| `compression` *(optional)*           | Signals that payload has been compressed and with which compression library                                                                                                                                                                               |
-| `uncompressed_size` *(optional)*     | If compression is used, the producer must fill the uncompressed size field with the original payload size                                                                                                                                                 |
-| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch                                                                                                                   |
-
-### Batch messages
-
-When using batch messages, the payload will be containing a list of entries,
-each of them with its individual metadata, defined by the `SingleMessageMetadata`
-object.
-
-
-For a single batch, the payload format will look like this:
-
-
-| Field         | Description                                                 |
-|:--------------|:------------------------------------------------------------|
-| metadataSizeN | The size of the single message metadata serialized Protobuf |
-| metadataN     | Single message metadata                                     |
-| payloadN      | Message payload passed by application                       |
-
-Each metadata field looks like this;
-
-| Field                      | Description                                             |
-|:---------------------------|:--------------------------------------------------------|
-| properties                 | Application-defined properties                          |
-| partition key *(optional)* | Key to indicate the hashing to a particular partition   |
-| payload_size               | Size of the payload for the single message in the batch |
-
-When compression is enabled, the whole batch will be compressed at once.
-
-## Interactions
-
-### Connection establishment
-
-After opening a TCP connection to a broker, typically on port 6650, the client
-is responsible to initiate the session.
-
-![Connect interaction](/assets/binary-protocol-connect.png)
-
-After receiving a `Connected` response from the broker, the client can
-consider the connection ready to use. Alternatively, if the broker doesn't
-validate the client authentication, it will reply with an `Error` command and
-close the TCP connection.
-
-Example:
-
-```protobuf
-
-message CommandConnect {
-  "client_version" : "Pulsar-Client-Java-v1.15.2",
-  "auth_method_name" : "my-authentication-plugin",
-  "auth_data" : "my-auth-data",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `client_version` → String based identifier. Format is not enforced
- * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
-   enabled
- * `auth_data` → *(optional)* Plugin specific authentication data
- * `protocol_version` → Indicates the protocol version supported by the
-   client. Broker will not send commands introduced in newer revisions of the
-   protocol. Broker might be enforcing a minimum version
-
-```protobuf
-
-message CommandConnected {
-  "server_version" : "Pulsar-Broker-v1.15.2",
-  "protocol_version" : 6
-}
-
-```
-
-Fields:
- * `server_version` → String identifier of broker version
- * `protocol_version` → Protocol version supported by the broker. Client
-   must not attempt to send commands introduced in newer revisions of the
-   protocol
-
-### Keep Alive
-
-To identify prolonged network partitions between clients and brokers or cases
-in which a machine crashes without interrupting the TCP connection on the remote
-end (eg: power outage, kernel panic, hard reboot...), we have introduced a
-mechanism to probe for the availability status of the remote peer.
-
-Both clients and brokers are sending `Ping` commands periodically and they will
-close the socket if a `Pong` response is not received within a timeout (default
-used by broker is 60s).
-
-A valid implementation of a Pulsar client is not required to send the `Ping`
-probe, though it is required to promptly reply after receiving one from the
-broker in order to prevent the remote side from forcibly closing the TCP connection.
-
-
-### Producer
-
-In order to send messages, a client needs to establish a producer. When creating
-a producer, the broker will first verify that this particular client is
-authorized to publish on the topic.
-
-Once the client gets confirmation of the producer creation, it can publish
-messages to the broker, referring to the producer id negotiated before.
-
-![Producer interaction](/assets/binary-protocol-producer.png)
-
-##### Command Producer
-
-```protobuf
-
-message CommandProducer {
-  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
-  "producer_id" : 1,
-  "request_id" : 1
-}
-
-```
-
-Parameters:
- * `topic` → Complete topic name to where you want to create the producer on
- * `producer_id` → Client generated producer identifier. Needs to be unique
-    within the same connection
- * `request_id` → Identifier for this request. Used to match the response with
-    the originating request. Needs to be unique within the same connection
- * `producer_name` → *(optional)* If a producer name is specified, the name will
-    be used, otherwise the broker will generate a unique name. Generated
-    producer name is guaranteed to be globally unique. Implementations are
-    expected to let the broker generate a new producer name when the producer
-    is initially created, then reuse it when recreating the producer after
-    reconnections.
-
-The broker will reply with either `ProducerSuccess` or `Error` commands.
-
-##### Command ProducerSuccess
-
-```protobuf
-
-message CommandProducerSuccess {
-  "request_id" :  1,
-  "producer_name" : "generated-unique-producer-name"
-}
-
-```
-
-Parameters:
- * `request_id` → Original id of the `CreateProducer` request
- * `producer_name` → Generated globally unique producer name or the name
-    specified by the client, if any.
-
-##### Command Send
-
-Command `Send` is used to publish a new message within the context of an
-already existing producer. This command is used in a frame that includes command
-as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.
-
-```protobuf
-
-message CommandSend {
-  "producer_id" : 1,
-  "sequence_id" : 0,
-  "num_messages" : 1
-}
-
... 5093 lines suppressed ...