Posted to commits@pulsar.apache.org by ur...@apache.org on 2022/10/26 01:30:17 UTC

[pulsar-site] branch main updated: [cleanup][doc] Remove incubating documents from the site repo (#267)

This is an automated email from the ASF dual-hosted git repository.

urfree pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/pulsar-site.git


The following commit(s) were added to refs/heads/main by this push:
     new be877680f19 [cleanup][doc] Remove incubating documents from the site repo (#267)
be877680f19 is described below

commit be877680f19ca4fbbc562c78c85a6ad712e29c95
Author: tison <wa...@gmail.com>
AuthorDate: Wed Oct 26 09:30:13 2022 +0800

    [cleanup][doc] Remove incubating documents from the site repo (#267)
---
 site2/website-next/scripts/replace.js              |    2 +-
 .../version-2.1.0-incubating/about.md              |   56 -
 .../version-2.1.0-incubating/adaptors-kafka.md     |  269 --
 .../version-2.1.0-incubating/adaptors-spark.md     |   72 -
 .../version-2.1.0-incubating/adaptors-storm.md     |  111 -
 .../version-2.1.0-incubating/admin-api-brokers.md  |  174 -
 .../version-2.1.0-incubating/admin-api-clusters.md |  237 --
 .../admin-api-namespaces.md                        |  782 -----
 .../admin-api-non-persistent-topics.md             |  277 --
 .../version-2.1.0-incubating/admin-api-overview.md |   78 -
 .../admin-api-partitioned-topics.md                |  362 ---
 .../admin-api-permissions.md                       |  136 -
 .../admin-api-persistent-topics.md                 |  665 ----
 .../version-2.1.0-incubating/admin-api-schemas.md  |  109 -
 .../version-2.1.0-incubating/admin-api-tenants.md  |   98 -
 .../administration-dashboard.md                    |   57 -
 .../version-2.1.0-incubating/administration-geo.md |  137 -
 .../administration-load-distribution.md            |  235 --
 .../administration-proxy.md                        |   69 -
 .../administration-stats.md                        |   64 -
 .../administration-zk-bk.md                        |  350 --
 .../client-libraries-cpp.md                        |  210 --
 .../client-libraries-go.md                         |  497 ---
 .../client-libraries-java.md                       |  534 ---
 .../client-libraries-python.md                     |  110 -
 .../client-libraries-websocket.md                  |  455 ---
 .../version-2.1.0-incubating/client-libraries.md   |   76 -
 .../concepts-architecture-overview.md              |  160 -
 .../concepts-authentication.md                     |    9 -
 .../version-2.1.0-incubating/concepts-clients.md   |   87 -
 .../version-2.1.0-incubating/concepts-messaging.md |  318 --
 .../concepts-multi-tenancy.md                      |   45 -
 .../version-2.1.0-incubating/concepts-overview.md  |   32 -
 .../concepts-replication.md                        |    9 -
 .../concepts-schema-registry.md                    |   84 -
 .../concepts-tiered-storage.md                     |   18 -
 .../concepts-topic-compaction.md                   |   37 -
 .../cookbooks-compaction.md                        |  147 -
 .../cookbooks-deduplication.md                     |  132 -
 .../cookbooks-encryption.md                        |  184 --
 .../cookbooks-message-queue.md                     |  101 -
 .../cookbooks-non-persistent.md                    |   63 -
 .../cookbooks-partitioned.md                       |   86 -
 .../cookbooks-retention-expiry.md                  |  317 --
 .../cookbooks-tiered-storage.md                    |  154 -
 .../version-2.1.0-incubating/deploy-aws.md         |  271 --
 .../deploy-bare-metal-multi-cluster.md             |  459 ---
 .../version-2.1.0-incubating/deploy-bare-metal.md  |  399 ---
 .../version-2.1.0-incubating/deploy-dcos.md        |  200 --
 .../version-2.1.0-incubating/deploy-kubernetes.md  |  463 ---
 .../version-2.1.0-incubating/deploy-monitoring.md  |  110 -
 .../version-2.1.0-incubating/develop-cpp.md        |  115 -
 .../develop-load-manager.md                        |  227 --
 .../version-2.1.0-incubating/develop-schema.md     |   62 -
 .../version-2.1.0-incubating/develop-tools.md      |  111 -
 .../developing-binary-protocol.md                  |  578 ----
 .../version-2.1.0-incubating/functions-api.md      |  775 -----
 .../functions-deploying.md                         |  261 --
 .../functions-guarantees.md                        |   47 -
 .../version-2.1.0-incubating/functions-metrics.md  |   48 -
 .../version-2.1.0-incubating/functions-overview.md |  499 ---
 .../functions-quickstart.md                        |  316 --
 .../version-2.1.0-incubating/functions-state.md    |  131 -
 .../getting-started-concepts-and-architecture.md   |   16 -
 .../getting-started-docker.md                      |  183 --
 .../getting-started-standalone.md                  |  205 --
 .../version-2.1.0-incubating/io-aerospike.md       |   22 -
 .../version-2.1.0-incubating/io-cassandra.md       |   23 -
 .../version-2.1.0-incubating/io-connectors.md      |   19 -
 .../version-2.1.0-incubating/io-develop.md         |  205 --
 .../version-2.1.0-incubating/io-kafka.md           |   41 -
 .../version-2.1.0-incubating/io-kinesis.md         |   38 -
 .../version-2.1.0-incubating/io-managing.md        |  193 --
 .../version-2.1.0-incubating/io-overview.md        |   40 -
 .../version-2.1.0-incubating/io-quickstart.md      |  473 ---
 .../version-2.1.0-incubating/io-rabbitmq.md        |   20 -
 .../version-2.1.0-incubating/io-twitter.md         |   25 -
 .../version-2.1.0-incubating/pulsar-2.0.md         |   71 -
 .../version-2.1.0-incubating/pulsar-admin.md       | 3394 --------------------
 .../reference-cli-tools.md                         |  734 -----
 .../reference-configuration.md                     |  466 ---
 .../reference-pulsar-admin.md                      | 2045 ------------
 .../reference-rest-api-overview.md                 |   18 -
 .../reference-terminology.md                       |  162 -
 .../version-2.1.0-incubating/security-athenz.md    |   98 -
 .../security-authorization.md                      |  128 -
 .../security-encryption.md                         |  184 --
 .../version-2.1.0-incubating/security-extending.md |  217 --
 .../version-2.1.0-incubating/security-overview.md  |   40 -
 .../security-tls-authentication.md                 |  167 -
 .../security-tls-transport.md                      |  226 --
 .../version-2.1.0-incubating/standalone.md         |  205 --
 .../version-2.1.1-incubating/about.md              |   56 -
 .../version-2.1.1-incubating/adaptors-kafka.md     |  269 --
 .../version-2.1.1-incubating/adaptors-spark.md     |   90 -
 .../version-2.1.1-incubating/adaptors-storm.md     |   95 -
 .../version-2.1.1-incubating/admin-api-brokers.md  |  285 --
 .../version-2.1.1-incubating/admin-api-clusters.md |  317 --
 .../admin-api-namespaces.md                        | 1400 --------
 .../admin-api-non-persistent-topics.md             |    7 -
 .../version-2.1.1-incubating/admin-api-overview.md |  143 -
 .../admin-api-partitioned-topics.md                |    7 -
 .../admin-api-permissions.md                       |  188 --
 .../admin-api-persistent-topics.md                 |    7 -
 .../version-2.1.1-incubating/admin-api-schemas.md  |    6 -
 .../version-2.1.1-incubating/admin-api-tenants.md  |  241 --
 .../administration-dashboard.md                    |   76 -
 .../version-2.1.1-incubating/administration-geo.md |  280 --
 .../administration-load-distribution.md            |  235 --
 .../administration-proxy.md                        |   69 -
 .../administration-stats.md                        |   63 -
 .../administration-zk-bk.md                        |  377 ---
 .../client-libraries-cpp.md                        |  210 --
 .../client-libraries-go.md                         | 1063 ------
 .../client-libraries-java.md                       | 1581 ---------
 .../client-libraries-python.md                     |  110 -
 .../client-libraries-websocket.md                  |  663 ----
 .../version-2.1.1-incubating/client-libraries.md   |   57 -
 .../concepts-architecture-overview.md              |  175 -
 .../concepts-authentication.md                     |    8 -
 .../version-2.1.1-incubating/concepts-clients.md   |   91 -
 .../version-2.1.1-incubating/concepts-messaging.md |  956 ------
 .../concepts-multi-tenancy.md                      |   58 -
 .../version-2.1.1-incubating/concepts-overview.md  |   30 -
 .../concepts-replication.md                        |   68 -
 .../concepts-schema-registry.md                    |    5 -
 .../concepts-tiered-storage.md                     |   17 -
 .../concepts-topic-compaction.md                   |   36 -
 .../cookbooks-compaction.md                        |  141 -
 .../cookbooks-deduplication.md                     |  150 -
 .../cookbooks-encryption.md                        |  183 --
 .../cookbooks-message-queue.md                     |  126 -
 .../cookbooks-non-persistent.md                    |   62 -
 .../cookbooks-partitioned.md                       |    6 -
 .../cookbooks-retention-expiry.md                  |  519 ---
 .../cookbooks-tiered-storage.md                    |  154 -
 .../version-2.1.1-incubating/deploy-aws.md         |  270 --
 .../deploy-bare-metal-multi-cluster.md             |  452 ---
 .../version-2.1.1-incubating/deploy-bare-metal.md  |  399 ---
 .../version-2.1.1-incubating/deploy-kubernetes.md  |   10 -
 .../version-2.1.1-incubating/deploy-monitoring.md  |  137 -
 .../version-2.1.1-incubating/develop-cpp.md        |  115 -
 .../develop-load-manager.md                        |  226 --
 .../version-2.1.1-incubating/develop-schema.md     |   62 -
 .../version-2.1.1-incubating/develop-tools.md      |  110 -
 .../developing-binary-protocol.md                  |  625 ----
 .../version-2.1.1-incubating/functions-api.md      |  769 -----
 .../functions-deploying.md                         |  261 --
 .../functions-guarantees.md                        |   47 -
 .../version-2.1.1-incubating/functions-metrics.md  |    6 -
 .../version-2.1.1-incubating/functions-overview.md |  499 ---
 .../functions-quickstart.md                        |  316 --
 .../version-2.1.1-incubating/functions-state.md    |    5 -
 .../getting-started-docker.md                      |  211 --
 .../getting-started-standalone.md                  |  222 --
 .../version-2.1.1-incubating/io-aerospike.md       |   22 -
 .../version-2.1.1-incubating/io-cassandra.md       |   23 -
 .../version-2.1.1-incubating/io-connectors.md      |   19 -
 .../version-2.1.1-incubating/io-develop.md         |  422 ---
 .../version-2.1.1-incubating/io-kafka.md           |   41 -
 .../version-2.1.1-incubating/io-kinesis.md         |   39 -
 .../version-2.1.1-incubating/io-managing.md        |  193 --
 .../version-2.1.1-incubating/io-overview.md        |  163 -
 .../version-2.1.1-incubating/io-quickstart.md      |  473 ---
 .../version-2.1.1-incubating/io-rabbitmq.md        |   20 -
 .../version-2.1.1-incubating/io-twitter.md         |   25 -
 .../version-2.1.1-incubating/pulsar-2.0.md         |   71 -
 .../version-2.1.1-incubating/pulsar-admin.md       | 3394 --------------------
 .../reference-cli-tools.md                         | 1038 ------
 .../reference-configuration.md                     |  466 ---
 .../reference-pulsar-admin.md                      | 2045 ------------
 .../reference-rest-api-overview.md                 |   18 -
 .../reference-terminology.md                       |  167 -
 .../version-2.1.1-incubating/security-athenz.md    |   97 -
 .../security-authorization.md                      |  129 -
 .../security-encryption.md                         |  334 --
 .../version-2.1.1-incubating/security-extending.md |   82 -
 .../version-2.1.1-incubating/security-overview.md  |   36 -
 .../security-tls-authentication.md                 |  221 --
 .../security-tls-transport.md                      |  312 --
 .../version-2.1.1-incubating/standalone.md         |  222 --
 .../version-2.1.0-incubating-sidebars.json         |  406 ---
 .../version-2.1.1-incubating-sidebars.json         |  406 ---
 183 files changed, 1 insertion(+), 49240 deletions(-)

diff --git a/site2/website-next/scripts/replace.js b/site2/website-next/scripts/replace.js
index f114cc5b1ca..dbaf6c4ae1a 100644
--- a/site2/website-next/scripts/replace.js
+++ b/site2/website-next/scripts/replace.js
@@ -256,4 +256,4 @@ for (let _v of versions) {
     dry: false,
   };
   doReplace(opts);
-}
\ No newline at end of file
+}
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/about.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/about.md
deleted file mode 100644
index 6ed04e87053..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/about.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-slug: /
-id: about
-title: Welcome to the doc portal!
-sidebar_label: "About"
----
-
-import BlockLinks from "@site/src/components/BlockLinks";
-import BlockLink from "@site/src/components/BlockLink";
-import { docUrl } from "@site/src/utils/index";
-
-
-# Welcome to the doc portal!
-***
-
-This portal holds a variety of support documents to help you work with Pulsar. If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.
-
-If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for.
-
-## Get Started Now
-<BlockLinks>
-    <BlockLink title="About Pulsar" url="/docs/2.1.0-incubating/concepts-overview/" />
-    <BlockLink title="Get Started" url="/docs/2.1.0-incubating/getting-started-standalone/" />
-    <BlockLink title="Install, Deploy, Upgrade" url="/docs/2.1.0-incubating/deploy-aws/" />
-    <BlockLink title="Pulsar for Developers" url="/docs/2.1.0-incubating/develop-tools/" />
-    <BlockLink title="How To" url="/docs/2.1.0-incubating/functions-develop/" />
-    <BlockLink title="References" url="/docs/2.1.0-incubating/reference-terminology/" />
-</BlockLinks>
-
-## Navigation
-***
-
-There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of the topic to view it.
-
-In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic.
-
-Links to related topics can be found at the bottom of each topic page. Click the link to view the topic.
-
-![Page Linking](/assets/page-linking.png)
-
-## Continuous Improvement
-***
-As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.
-
-## Help Improve These Documents
-***
-
-You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential!
-
-## Join the Community!
-***
-
-The Pulsar community on GitHub is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).
-
-An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many GitHub Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends.
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/adaptors-kafka.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/adaptors-kafka.md
deleted file mode 100644
index e98e8a0100b..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/adaptors-kafka.md
+++ /dev/null
@@ -1,269 +0,0 @@
----
-id: adaptors-kafka
-title: Pulsar adaptor for Apache Kafka
-sidebar_label: "Kafka client wrapper"
-original_id: adaptors-kafka
----
-
-
-Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
-
-## Using the Pulsar Kafka compatibility wrapper
-
-In an existing application, change the regular Kafka client dependency and replace it with the Pulsar Kafka wrapper. Remove:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.kafka</groupId>
-  <artifactId>kafka-clients</artifactId>
-  <version>0.10.2.1</version>
-</dependency>
-
-```
-
-Then include this dependency for the Pulsar Kafka wrapper:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-With the new dependency, the existing code should work without any changes. The only
-adjustment needed is in the configuration: point the producers and consumers to the
-Pulsar service rather than to Kafka, and use a Pulsar topic.
-
-## Using the Pulsar Kafka compatibility wrapper together with the existing Kafka client
-
-When migrating from Kafka to Pulsar, the application might have to use the original Kafka client
-and the Pulsar Kafka wrapper together during the migration. In that case, consider using the
-unshaded Pulsar Kafka client wrapper.
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client-kafka-original</artifactId>
-  <version>@pulsar:version@</version>
-</dependency>
-
-```
-
-When using this dependency, you need to construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
-instead of `org.apache.kafka.clients.producer.KafkaProducer`, and consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer`.
-
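-For example, constructing a producer with the unshaded wrapper might look like the
-following sketch (the `Properties` values mirror the producer example below;
-`PulsarKafkaProducer` is assumed to accept the same `Properties`-based constructor
-as `KafkaProducer`):
-
-```java
-
-Properties props = new Properties();
-// Point to a Pulsar service instead of a Kafka broker
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-// With the unshaded wrapper, construct PulsarKafkaProducer rather than KafkaProducer
-Producer<Integer, String> producer = new PulsarKafkaProducer<>(props);
-
-```
-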
-## Producer example
-
-```java
-
-// Topic needs to be a regular Pulsar topic
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-
-props.put("key.serializer", IntegerSerializer.class.getName());
-props.put("value.serializer", StringSerializer.class.getName());
-
-Producer<Integer, String> producer = new KafkaProducer(props);
-
-for (int i = 0; i < 10; i++) {
-    producer.send(new ProducerRecord<Integer, String>(topic, i, "hello-" + i));
-    log.info("Message {} sent successfully", i);
-}
-
-producer.close();
-
-```
-
-## Consumer example
-
-```java
-
-String topic = "persistent://public/default/my-topic";
-
-Properties props = new Properties();
-// Point to a Pulsar service
-props.put("bootstrap.servers", "pulsar://localhost:6650");
-props.put("group.id", "my-subscription-name");
-props.put("enable.auto.commit", "false");
-props.put("key.deserializer", IntegerDeserializer.class.getName());
-props.put("value.deserializer", StringDeserializer.class.getName());
-
-Consumer<Integer, String> consumer = new KafkaConsumer(props);
-consumer.subscribe(Arrays.asList(topic));
-
-while (true) {
-    ConsumerRecords<Integer, String> records = consumer.poll(100);
-    records.forEach(record -> {
-        log.info("Received record: {}", record);
-    });
-
-    // Commit last offset
-    consumer.commitSync();
-}
-
-```
-
-## Complete Examples
-
-You can find the complete producer and consumer examples [here](https://github.com/apache/incubator-pulsar/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
-
-## Compatibility matrix
-
-Currently the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
-
-#### Producer
-
-APIs:
-
-| Producer Method                                                               | Supported | Notes                                                                    |
-|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record)`                    | Yes       | Currently there is no support for explicitly setting the partition id when publishing |
-| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes       |                                                                          |
-| `void flush()`                                                                | Yes       |                                                                          |
-| `List<PartitionInfo> partitionsFor(String topic)`                             | No        |                                                                          |
-| `Map<MetricName, ? extends Metric> metrics()`                                 | No        |                                                                          |
-| `void close()`                                                                | Yes       |                                                                          |
-| `void close(long timeout, TimeUnit unit)`                                     | Yes       |                                                                          |
-
-Properties:
-
-| Config property                         | Supported | Notes                                                                         |
-|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
-| `acks`                                  | Ignored   | Durability and quorum writes are configured at the namespace level            |
-| `batch.size`                            | Ignored   |                                                                               |
-| `block.on.buffer.full`                  | Yes       | If true, the producer blocks when the buffer is full; otherwise it fails with an error |
-| `bootstrap.servers`                     | Yes       | Needs to point to a single Pulsar service URL                                 |
-| `buffer.memory`                         | Ignored   |                                                                               |
-| `client.id`                             | Ignored   |                                                                               |
-| `compression.type`                      | Yes       | Allows `gzip` and `lz4`. No `snappy`.                                         |
-| `connections.max.idle.ms`               | Ignored   |                                                                               |
-| `interceptor.classes`                   | Ignored   |                                                                               |
-| `key.serializer`                        | Yes       |                                                                               |
-| `linger.ms`                             | Yes       | Controls the group commit time when batching messages                         |
-| `max.block.ms`                          | Ignored   |                                                                               |
-| `max.in.flight.requests.per.connection` | Ignored   | In Pulsar ordering is maintained even with multiple requests in flight        |
-| `max.request.size`                      | Ignored   |                                                                               |
-| `metric.reporters`                      | Ignored   |                                                                               |
-| `metrics.num.samples`                   | Ignored   |                                                                               |
-| `metrics.sample.window.ms`              | Ignored   |                                                                               |
-| `partitioner.class`                     | Ignored   |                                                                               |
-| `receive.buffer.bytes`                  | Ignored   |                                                                               |
-| `reconnect.backoff.ms`                  | Ignored   |                                                                               |
-| `request.timeout.ms`                    | Ignored   |                                                                               |
-| `retries`                               | Ignored   | Pulsar client retries with exponential backoff until the send timeout expires |
-| `send.buffer.bytes`                     | Ignored   |                                                                               |
-| `timeout.ms`                            | Ignored   |                                                                               |
-| `value.serializer`                      | Yes       |                                                                               |
-
-
-#### Consumer
-
-APIs:
-
-| Consumer Method                                                                                         | Supported | Notes |
-|:--------------------------------------------------------------------------------------------------------|:----------|:------|
-| `Set<TopicPartition> assignment()`                                                                      | No        |       |
-| `Set<String> subscription()`                                                                            | Yes       |       |
-| `void subscribe(Collection<String> topics)`                                                             | Yes       |       |
-| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)`                         | No        |       |
-| `void assign(Collection<TopicPartition> partitions)`                                                    | No        |       |
-| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)`                                   | No        |       |
-| `void unsubscribe()`                                                                                    | Yes       |       |
-| `ConsumerRecords<K, V> poll(long timeoutMillis)`                                                        | Yes       |       |
-| `void commitSync()`                                                                                     | Yes       |       |
-| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)`                                       | Yes       |       |
-| `void commitAsync()`                                                                                    | Yes       |       |
-| `void commitAsync(OffsetCommitCallback callback)`                                                       | Yes       |       |
-| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)`       | Yes       |       |
-| `void seek(TopicPartition partition, long offset)`                                                      | Yes       |       |
-| `void seekToBeginning(Collection<TopicPartition> partitions)`                                           | Yes       |       |
-| `void seekToEnd(Collection<TopicPartition> partitions)`                                                 | Yes       |       |
-| `long position(TopicPartition partition)`                                                               | Yes       |       |
-| `OffsetAndMetadata committed(TopicPartition partition)`                                                 | Yes       |       |
-| `Map<MetricName, ? extends Metric> metrics()`                                                           | No        |       |
-| `List<PartitionInfo> partitionsFor(String topic)`                                                       | No        |       |
-| `Map<String, List<PartitionInfo>> listTopics()`                                                         | No        |       |
-| `Set<TopicPartition> paused()`                                                                          | No        |       |
-| `void pause(Collection<TopicPartition> partitions)`                                                     | No        |       |
-| `void resume(Collection<TopicPartition> partitions)`                                                    | No        |       |
-| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No        |       |
-| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)`                     | No        |       |
-| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)`                           | No        |       |
-| `void close()`                                                                                          | Yes       |       |
-| `void close(long timeout, TimeUnit unit)`                                                               | Yes       |       |
-| `void wakeup()`                                                                                         | No        |       |
-
-Properties:
-
-| Config property                 | Supported | Notes                                                 |
-|:--------------------------------|:----------|:------------------------------------------------------|
-| `group.id`                      | Yes       | Maps to a Pulsar subscription name                    |
-| `max.poll.records`              | Ignored   |                                                       |
-| `max.poll.interval.ms`          | Ignored   | Messages are "pushed" from broker                     |
-| `session.timeout.ms`            | Ignored   |                                                       |
-| `heartbeat.interval.ms`         | Ignored   |                                                       |
-| `bootstrap.servers`             | Yes       | Needs to point to a single Pulsar service URL         |
-| `enable.auto.commit`            | Yes       |                                                       |
-| `auto.commit.interval.ms`       | Ignored   | With auto-commit, acks are sent immediately to broker |
-| `partition.assignment.strategy` | Ignored   |                                                       |
-| `auto.offset.reset`             | Ignored   |                                                       |
-| `fetch.min.bytes`               | Ignored   |                                                       |
-| `fetch.max.bytes`               | Ignored   |                                                       |
-| `fetch.max.wait.ms`             | Ignored   |                                                       |
-| `metadata.max.age.ms`           | Ignored   |                                                       |
-| `max.partition.fetch.bytes`     | Ignored   |                                                       |
-| `send.buffer.bytes`             | Ignored   |                                                       |
-| `receive.buffer.bytes`          | Ignored   |                                                       |
-| `client.id`                     | Ignored   |                                                       |
-
-
-## Custom Pulsar configurations
-
-You can configure the Pulsar authentication provider directly from the Kafka properties.
-
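-For example, enabling TLS transport and a TLS-based authentication provider through the
-Kafka properties might look like the following sketch (the `pulsar.*` keys are listed in
-the tables below; the service URL and file path are placeholders):
-
-```java
-
-Properties props = new Properties();
-// Use the TLS service URL of the Pulsar cluster
-props.put("bootstrap.servers", "pulsar+ssl://localhost:6651");
-// Pulsar-specific settings passed through the Kafka properties
-props.put("pulsar.authentication.class", "org.apache.pulsar.client.impl.auth.AuthenticationTls");
-props.put("pulsar.use.tls", "true");
-props.put("pulsar.tls.trust.certs.file.path", "/path/to/cacert.pem");
-
-```
-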
-### Pulsar client properties:
-
-| Config property                        | Default | Notes                                                                                  |
-|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
-| `pulsar.authentication.class`          |         | The authentication provider to use, e.g. `org.apache.pulsar.client.impl.auth.AuthenticationTls` |
-| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-)                       | `false` | Enable TLS transport encryption                                                        |
-| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-)   |         | Path for the TLS trust certificate store                                               |
-| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers                                           |
-| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout |
-| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval |
-| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | Number of Netty IO threads to use |
-| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | Max number of connections to open to each broker |
-| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay |
-| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | Max number of concurrent topic lookups |
-| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | Threshold of errors to forcefully close a connection |
-
-
-### Pulsar producer properties
-
-| Config property                        | Default | Notes                                                                                  |
-|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
-| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify producer name |
-| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) |  | Specify baseline for sequence id for this producer |
-| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the max size of the queue holding messages awaiting an acknowledgment from the broker |
-| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the number of max pending messages across all the partitions  |
-| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer |
-| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages permitted in a batch |
-
-
-### Pulsar consumer Properties
-
-| Config property                        | Default | Notes                                                                                  |
-|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
-| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Set the consumer name |
-| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Sets the size of the consumer receive queue |
-| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the max total receiver queue size across partitions |
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/adaptors-spark.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/adaptors-spark.md
deleted file mode 100644
index 230a5a6ba46..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/adaptors-spark.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-id: adaptors-spark
-title: Pulsar adaptor for Apache Spark
-sidebar_label: "Apache Spark"
-original_id: adaptors-spark
----
-
-The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar.
-
-An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways.
-
-## Prerequisites
-
-To use the receiver, include a dependency for the `pulsar-spark` library in your build configuration.
-
-### Maven
-
-If you're using Maven, add this to your `pom.xml`:
-
-```xml
-
-<!-- in your <properties> block -->
-<pulsar.version>@pulsar:version@</pulsar.version>
-
-<!-- in your <dependencies> block -->
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-spark</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-### Gradle
-
-If you're using Gradle, add this to your `build.gradle` file:
-
-```groovy
-
-def pulsarVersion = "@pulsar:version@"
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
-}
-
-```
-
-## Usage
-
-Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
-
-```java
-
-SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("pulsar-spark");
-JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));
-
-ClientConfiguration clientConf = new ClientConfiguration();
-ConsumerConfiguration consConf = new ConsumerConfiguration();
-String url = "pulsar://localhost:6650/";
-String topic = "persistent://public/default/topic1";
-String subs = "sub1";
-
-JavaReceiverInputDStream<byte[]> msgs = jssc
-        .receiverStream(new SparkStreamingPulsarReceiver(clientConf, consConf, url, topic, subs));
-
-```
-
-## Example
-
-You can find a complete example [here](https://github.com/apache/incubator-pulsar/tree/master/tests/pulsar-spark-test/src/test/java/org/apache/pulsar/spark/example/SparkStreamingPulsarReceiverExample.java).
-In this example, the number of received messages that contain the string "Pulsar" is counted.
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/adaptors-storm.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/adaptors-storm.md
deleted file mode 100644
index 8ca5ec4a8c9..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/adaptors-storm.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-id: adaptors-storm
-title: Pulsar adaptor for Apache Storm
-sidebar_label: "Apache Storm"
-original_id: adaptors-storm
----
-
-Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.
-
-An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.
-
-## Using the Pulsar Storm Adaptor
-
-Include dependency for Pulsar Storm Adaptor:
-
-```xml
-
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-storm</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-## Pulsar Spout
-
-The Pulsar Spout allows for the data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client.
-
-Tuples that fail to be processed by the downstream bolts are re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which the message is acknowledged by the consumer. Here's an example construction of a spout:
-
-```java
-
-// Configure a Pulsar Client
-ClientConfiguration clientConf = new ClientConfiguration();
-
-// Configure a Pulsar Consumer
-ConsumerConfiguration consumerConf = new ConsumerConfiguration();  
-
-@SuppressWarnings("serial")
-MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {
-
-    @Override
-    public Values toValues(Message msg) {
-        return new Values(new String(msg.getData()));
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-        declarer.declare(new Fields("string"));
-    }
-};
-
-// Configure a Pulsar Spout
-PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
-spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
-spoutConf.setSubscriptionName("my-subscriber-name1");
-spoutConf.setMessageToValuesMapper(messageToValuesMapper);
-
-// Create a Pulsar Spout
-PulsarSpout spout = new PulsarSpout(spoutConf, clientConf, consumerConf);
-
-```
-
-## Pulsar Bolt
-
-The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.
-
-A partitioned topic can also be used to publish messages on different partitions. In the implementation of the `TupleToMessageMapper`, a "key" needs to be provided in the message; messages with the same key are routed to the same partition. Here's an example bolt:
-
-```java
-
-// Configure a Pulsar Client
-ClientConfiguration clientConf = new ClientConfiguration();
-
-// Configure a Pulsar Producer  
-ProducerConfiguration producerConf = new ProducerConfiguration();
-
-@SuppressWarnings("serial")
-TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() {
-
-    @Override
-    public Message toMessage(Tuple tuple) {
-        String receivedMessage = tuple.getString(0);
-        // message processing
-        String processedMsg = receivedMessage + "-processed";
-        return MessageBuilder.create().setContent(processedMsg.getBytes()).build();
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        // declare the output fields
-    }
-};
-
-// Configure a Pulsar Bolt
-PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration();
-boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
-boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2");
-boltConf.setTupleToMessageMapper(tupleToMessageMapper);
-
-// Create a Pulsar Bolt
-PulsarBolt bolt = new PulsarBolt(boltConf, clientConf);
-
-```
-
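-When publishing to a partitioned topic, the mapper above could also attach a key so that
-tuples carrying the same key land on the same partition. A minimal sketch (assuming the
-`MessageBuilder.setKey` method is available in this client version):
-
-```java
-
-return MessageBuilder.create()
-        .setContent(processedMsg.getBytes())
-        // route all tuples carrying the same key to the same partition
-        .setKey(tuple.getString(0))
-        .build();
-
-```
-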
-## Example
-
-You can find a complete example [here](https://github.com/apache/incubator-pulsar/tree/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/example/StormExample.java).
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-brokers.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-brokers.md
deleted file mode 100644
index 57454d9d572..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-brokers.md
+++ /dev/null
@@ -1,174 +0,0 @@
----
-id: admin-api-brokers
-title: Managing Brokers
-sidebar_label: "Brokers"
-original_id: admin-api-brokers
----
-
-Pulsar brokers consist of two components:
-
-1. An HTTP server exposing a {@inject: rest:REST:/} interface for administration and [topic](reference-terminology.md#topic) lookup.
-2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers.
-
-[Brokers](reference-terminology.md#broker) can be managed via:
-
-* The [`brokers`](reference-pulsar-admin.md#brokers) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool
-* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API
-* The `brokers` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin.html} object in the [Java API](client-libraries-java.md)
-
-In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration).
-
-> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters.
-
-## Brokers resources
-
-### List active brokers
-
-Fetch all available active brokers that are serving traffic.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin brokers list use
-
-```
-
-```
-
-broker1.use.org.com:8080
-
-```
-
-###### REST
-
-{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers}
-
-###### Java
-
-```java
-
-admin.brokers().getActiveBrokers(clusterName)
-
-```
-
-#### List of namespaces owned by a given broker
-
-This command finds all namespaces owned and served by a given broker.
-
-###### CLI
-
-```shell
-
-$ pulsar-admin brokers namespaces use \
-  --url broker1.use.org.com:8080
-
-```
-
-```json
-
-{
-  "my-property/use/my-ns/0x00000000_0xffffffff": {
-    "broker_assignment": "shared",
-    "is_controlled": false,
-    "is_active": true
-  }
-}
-
-```
-
-###### REST
-
-{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaces}
-
-###### Java
-
-```java
-
-admin.brokers().getOwnedNamespaces(cluster,brokerUrl);
-
-```
-
-### Dynamic broker configuration
-
-One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker).
-
-But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values.
-
-* The [`brokers`](reference-pulsar-admin.md#brokers) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that let you manipulate a broker's configuration dynamically, for example to [update config values](#update-dynamic-configuration).
-* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint.
-
-### Update dynamic configuration
-
-#### pulsar-admin
-
-The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand will update existing configuration. It takes two arguments: the name of the parameter and the new value. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter:
-
-```shell
-
-$ pulsar-admin brokers update-dynamic-config brokerShutdownTimeoutMs 100
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration}
-
-#### Java
-
-```java
-
-admin.brokers().updateDynamicConfiguration(configName, configValue);
-
-```
-
-### List updated values
-
-Fetch a list of all potentially updatable configuration parameters.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin brokers list-dynamic-config
-brokerShutdownTimeoutMs
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName}
-
-#### Java
-
-```java
-
-admin.brokers().getDynamicConfigurationNames();
-
-```
-
-### List all
-
-Fetch a list of all parameters that have been dynamically updated.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin brokers get-all-dynamic-config
-brokerShutdownTimeoutMs:100
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations}
-
-#### Java
-
-```java
-
-admin.brokers().getAllDynamicConfigurations();
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-clusters.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-clusters.md
deleted file mode 100644
index 7b3635de58f..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-clusters.md
+++ /dev/null
@@ -1,237 +0,0 @@
----
-id: admin-api-clusters
-title: Managing Clusters
-sidebar_label: "Clusters"
-original_id: admin-api-clusters
----
-
-Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper)
-servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management.
-
-Clusters can be managed via:
-
-* The [`clusters`](reference-pulsar-admin.md#clusters) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool
-* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API
-* The `clusters` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md)
-
-## Clusters resources
-
-### Provision
-
-New clusters can be provisioned using the admin interface.
-
-> Please note that this operation requires superuser privileges.
-
-#### pulsar-admin
-
-You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example:
-
-```shell
-
-$ pulsar-admin clusters create cluster-1 \
-  --url http://my-cluster.org.com:8080 \
-  --broker-url pulsar://my-cluster.org.com:6650
-
-```
-
-#### REST API
-
-{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster}
-
-#### Java
-
-```java
-
-ClusterData clusterData = new ClusterData(
-        serviceUrl,
-        serviceUrlTls,
-        brokerServiceUrl,
-        brokerServiceUrlTls
-);
-admin.clusters().createCluster(clusterName, clusterData);
-
-```
-
-### Initialize cluster metadata
-
-When provisioning a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following:
-
-* The name of the cluster
-* The local ZooKeeper connection string for the cluster
-* The configuration store connection string for the entire instance
-* The web service URL for the cluster
-* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
-
-You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster.
-
-> #### No cluster metadata initialization through the REST API or the Java admin API
->
-> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API
-> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly.
-> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular
-> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command.
-
-Here's an example cluster metadata initialization command:
-
-```shell
-
-bin/pulsar initialize-cluster-metadata \
-  --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
-  --web-service-url http://pulsar.us-west.example.com:8080/ \
-  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
-  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
-  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
-
-```
-
-You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance.
-
-### Get configuration
-
-You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time.
-
-#### pulsar-admin
-
-Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. Here's an example:
-
-```shell
-
-$ pulsar-admin clusters get cluster-1
-{
-    "serviceUrl": "http://my-cluster.org.com:8080/",
-    "serviceUrlTls": null,
-    "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/",
-    "brokerServiceUrlTls": null
-    "peerClusterNames": null
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster}
-
-#### Java
-
-```java
-
-admin.clusters().getCluster(clusterName);
-
-```
-
-### Update
-
-You can update the configuration for an existing cluster at any time.
-
-#### pulsar-admin
-
-Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags.
-
-```shell
-
-$ pulsar-admin clusters update cluster-1 \
-  --url http://my-cluster.org.com:4081 \
-  --broker-url pulsar://my-cluster.org.com:3350
-
-```
-
-#### REST
-
-{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster}
-
-#### Java
-
-```java
-
-ClusterData clusterData = new ClusterData(
-        serviceUrl,
-        serviceUrlTls,
-        brokerServiceUrl,
-        brokerServiceUrlTls
-);
-admin.clusters().updateCluster(clusterName, clusterData);
-
-```
-
-### Delete
-
-Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance).
-
-#### pulsar-admin
-
-Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster.
-
-```
-
-$ pulsar-admin clusters delete cluster-1
-
-```
-
-#### REST API
-
-{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster}
-
-#### Java
-
-```java
-
-admin.clusters().deleteCluster(clusterName);
-
-```
-
-### List
-
-You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance).
-
-#### pulsar-admin
-
-Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand.
-
-```shell
-
-$ pulsar-admin clusters list
-cluster-1
-cluster-2
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters}
-
-#### Java
-
-```java
-
-admin.clusters().getClusters();
-
-```
-
-### Update peer-cluster data
-
-Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance).
-
-#### pulsar-admin
-
-Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names.
-
-```
-
-$ pulsar-admin clusters update-peer-clusters cluster-1 --peer-clusters cluster-2
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames}
-
-#### Java
-
-```java
-
-admin.clusters().updatePeerClusterNames(clusterName, peerClusterList);
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-namespaces.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-namespaces.md
deleted file mode 100644
index 33a278de8de..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-namespaces.md
+++ /dev/null
@@ -1,782 +0,0 @@
----
-id: admin-api-namespaces
-title: Managing Namespaces
-sidebar_label: "Namespaces"
-original_id: admin-api-namespaces
----
-
-Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic).
-
-Namespaces can be managed via:
-
-* The [`namespaces`](reference-pulsar-admin.md#namespaces) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool
-* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API
-* The `namespaces` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md)
-
-## Namespaces resources
-
-### Create
-
-You can create new namespaces under a given [tenant](reference-terminology.md#tenant).
-
-#### pulsar-admin
-
-Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name:
-
-```shell
-
-$ pulsar-admin namespaces create test-tenant/test-namespace
-
-```
-
-#### REST API
-
-{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace}
-
-#### Java
-
-```java
-
-admin.namespaces().createNamespace(namespace);
-
-```
-
-### Get policies
-
-You can fetch the current policies associated with a namespace at any time.
-
-#### pulsar-admin
-
-Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace:
-
-```shell
-
-$ pulsar-admin namespaces policies test-tenant/test-namespace
-{
-  "auth_policies": {
-    "namespace_auth": {},
-    "destination_auth": {}
-  },
-  "replication_clusters": [],
-  "bundles_activated": true,
-  "bundles": {
-    "boundaries": [
-      "0x00000000",
-      "0xffffffff"
-    ],
-    "numBundles": 1
-  },
-  "backlog_quota_map": {},
-  "persistence": null,
-  "latency_stats_sample_rate": {},
-  "message_ttl_in_seconds": 0,
-  "retention_policies": null,
-  "deleted": false
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies}
-
-#### Java
-
-```java
-
-admin.namespaces().getPolicies(namespace);
-
-```
-
-### List namespaces within a tenant
-
-You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant).
-
-#### pulsar-admin
-
-Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant:
-
-```shell
-
-$ pulsar-admin namespaces list test-tenant
-test-tenant/ns1
-test-tenant/ns2
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces}
-
-#### Java
-
-```java
-
-admin.namespaces().getNamespaces(tenant);
-
-```
-
-### Delete
-
-You can delete existing namespaces from a tenant.
-
-#### pulsar-admin
-
-Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace:
-
-```shell
-
-$ pulsar-admin namespaces delete test-tenant/ns1
-
-```
-
-#### REST API
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace}
-
-#### Java
-
-```java
-
-admin.namespaces().deleteNamespace(namespace);
-
-```
-
-#### set replication cluster
-
-It sets the replication clusters for a namespace, so that Pulsar can internally replicate published messages from one colocation facility (colo) to another.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces set-clusters test-tenant/ns1 \
-  --clusters cl1
-
-```
-
-###### REST
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters}
-
-###### Java
-
-```java
-
-admin.namespaces().setNamespaceReplicationClusters(namespace, clusters);
-
-```
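-
-For example, a sketch that enables replication to the clusters `cl1` and `cl2` (assuming both clusters are already registered in the instance and that the method accepts a list of cluster names, as in the one-liner above):
-
-```java
-
-// requires java.util.Arrays and java.util.List
-List<String> clusters = Arrays.asList("cl1", "cl2");
-admin.namespaces().setNamespaceReplicationClusters("test-tenant/ns1", clusters);
-
-```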
-
-#### get replication cluster
-
-It gives a list of replication clusters for a given namespace.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces get-clusters test-tenant/cl1/ns1
-
-```
-
-```
-
-cl2
-
-```
-
-###### REST
-
-```
-
-GET /admin/v2/namespaces/:tenant/:namespace/replication
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().getNamespaceReplicationClusters(namespace)
-
-```
-
-#### set backlog quota policies
-
-Backlog quota helps the broker restrict the bandwidth/storage of a namespace once it reaches a certain threshold limit. The admin can set this limit and one of the following actions to take after the limit is reached:
-
-  1.  producer_request_hold: the broker holds and does not persist produce request payloads
-
-  2.  producer_exception: the broker disconnects the client by throwing an exception
-
-  3.  consumer_backlog_eviction: the broker starts discarding backlog messages
-
-  Backlog quota restriction can be applied by defining the restriction with backlog-quota-type: destination_storage
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces set-backlog-quota --limit 10 --policy producer_request_hold test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-###### REST
-
-```
-
-POST /admin/v2/namespaces/:tenant/:namespace/backlogQuota
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, policy))
-
-```
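-
-For example, a sketch that matches the CLI example above (assuming the `BacklogQuota.RetentionPolicy` enum from the Pulsar policy data classes):
-
-```java
-
-// Hold producer requests once the backlog reaches the limit of 10
-admin.namespaces().setBacklogQuota("test-tenant/ns1",
-        new BacklogQuota(10, BacklogQuota.RetentionPolicy.producer_request_hold));
-
-```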
-
-#### get backlog quota policies
-
-It shows a configured backlog quota for a given namespace.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1
-
-```
-
-```json
-
-{
-  "destination_storage": {
-    "limit": 10,
-    "policy": "producer_request_hold"
-  }
-}
-
-```
-
-###### REST
-
-```
-
-GET /admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().getBacklogQuotaMap(namespace);
-
-```
-
-#### remove backlog quota policies
-
-It removes the backlog quota policies for a given namespace.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-###### REST
-
-```
-
-DELETE /admin/v2/namespaces/:tenant/:namespace/backlogQuota
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType)
-
-```
-
-#### set persistence policies
-
-Persistence policies allow you to configure the persistence level for all topic messages under a given namespace.
-
-  -   Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0
-
-  -   Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0
-
-  -   Bookkeeper-write-quorum: How many writes to make of each entry, default: 0
-
-  -   Ml-mark-delete-max-rate: Throttling rate of mark-delete operation (0 means no throttle), default: 0.0
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-###### REST
-
-```
-
-POST /admin/v2/namespaces/:tenant/:namespace/persistence
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().setPersistence(namespace, new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum, bookkeeperAckQuorum, managedLedgerMaxMarkDeleteRate))
-
-```
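-
-For example, a sketch that matches the CLI example above, following the constructor argument order shown in the one-liner (ensemble, write quorum, ack quorum, mark-delete rate):
-
-```java
-
-// 3 bookies per ledger, 2 copies per entry, wait for 2 acks, no mark-delete throttling
-admin.namespaces().setPersistence("test-tenant/ns1",
-        new PersistencePolicies(3, 2, 2, 0.0));
-
-```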
-
-#### get persistence policies
-
-It shows configured persistence policies of a given namespace.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces get-persistence test-tenant/ns1
-
-```
-
-```json
-
-{
-  "bookkeeperEnsemble": 3,
-  "bookkeeperWriteQuorum": 2,
-  "bookkeeperAckQuorum": 2,
-  "managedLedgerMaxMarkDeleteRate": 0
-}
-
-```
-
-###### REST
-
-```
-
-GET /admin/v2/namespaces/:tenant/:namespace/persistence
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().getPersistence(namespace)
-
-```
-
-#### unload namespace bundle
-
-A namespace bundle is a virtual group of topics that belong to the same namespace. If a broker gets overloaded with a large number of bundles, this command can help unload a heavy bundle from that broker so that it can be served by some other, less loaded broker. A namespace bundle is defined by its start and end hash range, such as 0x00000000 and 0xffffffff.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-###### REST
-
-```
-
-PUT /admin/v2/namespaces/:tenant/:namespace/{bundle}/unload
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().unloadNamespaceBundle(namespace, bundle)
-
-```
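-
-For example, a sketch that unloads the full-range bundle used in the CLI example above:
-
-```java
-
-admin.namespaces().unloadNamespaceBundle("test-tenant/ns1", "0x00000000_0xffffffff");
-
-```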
-
-#### set message-ttl
-
-It configures the time-to-live (TTL) duration, in seconds, for messages in the namespace.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-###### REST
-
-```
-
-POST /admin/v2/namespaces/:tenant/:namespace/messageTTL
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)
-
-```
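-
-For example, a sketch matching the CLI example above, which expires unacknowledged messages after 100 seconds:
-
-```java
-
-admin.namespaces().setNamespaceMessageTTL("test-tenant/ns1", 100);
-
-```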
-
-#### get message-ttl
-
-It shows the message TTL configured for a given namespace.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces get-message-ttl test-tenant/ns1
-
-```
-
-```
-
-100
-
-```
-
-###### REST
-
-```
-
-GET /admin/v2/namespaces/:tenant/:namespace/messageTTL
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().getNamespaceMessageTTL(namespace)
-
-```
-
-#### split bundle
-
-Each namespace bundle can contain multiple topics, and each bundle can be served by only one broker. If a bundle becomes heavy with multiple live topics, it creates load on that broker; to resolve this, an admin can split the bundle using this command.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-###### REST
-
-```
-
-PUT /admin/v2/namespaces/:tenant/:namespace/{bundle}/split
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().splitNamespaceBundle(namespace, bundle)
-
-```
-
-#### clear backlog
-
-It clears the message backlog for all the topics that belong to a specific namespace. You can also clear the backlog for a specific subscription only.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-###### REST
-
-```
-
-POST /admin/v2/namespaces/:tenant/:namespace/clearBacklog
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)
-
-```
-
-#### clear bundle backlog
-
-It clears the message backlog for all the topics that belong to a specific namespace bundle. You can also clear the backlog for a specific subscription only.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces clear-backlog  --bundle 0x00000000_0xffffffff  --sub my-subscription test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-###### REST
-
-```
-
-POST /admin/v2/namespaces/:tenant/:namespace/{bundle}/clearBacklog
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)
-
-```
-
-#### set retention
-
-Each namespace contains multiple topics, and each topic's retention size (storage size) should not exceed a specific threshold, or messages should only be stored for a certain time duration. This command helps configure the retention size and time of topics in a given namespace.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces set-retention --size 10 --time 100 test-tenant/ns1
-
-```
-
-```
-
-N/A
-
-```
-
-###### REST
-
-```
-
-POST /admin/v2/namespaces/:tenant/:namespace/retention
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB))
-
-```
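-
-For example, a sketch that sets a retention of 10 minutes and 100 MB (the values shown in the `get-retention` output below), assuming the `RetentionPolicies(timeInMinutes, sizeInMB)` argument order from the one-liner above:
-
-```java
-
-admin.namespaces().setRetention("test-tenant/ns1",
-        new RetentionPolicies(10, 100));
-
-```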
-
-#### get retention
-
-It shows retention information of a given namespace.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces get-retention test-tenant/ns1
-
-```
-
-```json
-
-{
-  "retentionTimeInMinutes": 10,
-  "retentionSizeInMB": 100
-}
-
-```
-
-###### REST
-
-```
-
-GET /admin/v2/namespaces/:tenant/:namespace/retention
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().getRetention(namespace)
-
-```
-
-#### set dispatch throttling
-
-It sets the message dispatch rate for all the topics under a given namespace.
-The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message bytes per X seconds (`byte-dispatch-rate`).
-The period X is expressed in seconds and can be configured with `dispatch-rate-period`. The default value of `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
-disables the throttling.
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \
-  --msg-dispatch-rate 1000 \
-  --byte-dispatch-rate 1048576 \
-  --dispatch-rate-period 1
-
-```
-
-###### REST
-
-```
-
-POST /admin/v2/namespaces/:tenant/:namespace/dispatchRate
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
-
-```
-
-#### get configured message-rate
-
-It shows the configured message rate for the namespace (topics under this namespace can dispatch this many messages per second).
-
-###### CLI
-
-```
-
-$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1
-
-```
-
-```json
-
-{
-  "dispatchThrottlingRatePerTopicInMsg" : 1000,
-  "dispatchThrottlingRatePerTopicInByte" : 1048576,
-  "ratePeriodInSecond" : 1
-}
-
-```
-
-###### REST
-
-```
-
-GET /admin/v2/namespaces/:tenant/:namespace/dispatchRate
-
-```
-
-###### Java
-
-```java
-
-admin.namespaces().getDispatchRate(namespace)
-
-```
-
-### Namespace isolation
-
-Coming soon.
-
-### Unloading from a broker
-
-You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it.
-
-#### pulsar-admin
-
-Use the [`unload`](reference-pulsar-admin.md#namespaces-unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command.
-
-##### Example
-
-```shell
-
-$ pulsar-admin namespaces unload my-tenant/my-ns
-
-```
-
-#### REST API
-
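-```
-
-PUT /admin/v2/namespaces/:tenant/:namespace/unload
-
-```
-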
-#### Java
-
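-A minimal sketch using the admin client (assuming the namespace-level `unload` method on the `Namespaces` interface):
-
-```java
-
-// Unload all bundles of the namespace from its current broker
-admin.namespaces().unload("my-tenant/my-ns");
-
-```
-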
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-non-persistent-topics.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-non-persistent-topics.md
deleted file mode 100644
index 8f0b52f32ce..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-non-persistent-topics.md
+++ /dev/null
@@ -1,277 +0,0 @@
----
-id: admin-api-non-persistent-topics
-title: Managing non-persistent topics
-sidebar_label: "Non-Persistent topics"
-original_id: admin-api-non-persistent-topics
----
-
-Non-persistent topics can be used in applications that only need to consume messages published in real time and
-do not need persistence guarantees. Skipping persistence can also reduce message-publish latency by removing the overhead of
-persisting messages.
-
-In all of the instructions and commands below, the topic name structure is:
-
-```shell
-
-non-persistent://tenant/namespace/topic
-
-```
-
-## Non-persistent topics resources
-
-### Get stats
-
-It shows current statistics of a given non-partitioned topic.
-
-  -   **msgRateIn**: The sum of all local and replication publishers' publish rates in messages per second
-
-  -   **msgThroughputIn**: Same as above, but in bytes per second instead of messages per second
-
-  -   **msgRateOut**: The sum of all local and replication consumers' dispatch rates in messages per second
-
-  -   **msgThroughputOut**: Same as above, but in bytes per second instead of messages per second
-
-  -   **averageMsgSize**: The average size in bytes of messages published within the last interval
-
-  -   **publishers**: The list of all local publishers into the topic. There can be zero or thousands
-
-  -   **averageMsgSize**: Average message size in bytes from this publisher within the last interval
-
-  -   **producerId**: Internal identifier for this producer on this topic
-
-  -   **producerName**: Internal identifier for this producer, generated by the client library
-
-  -   **address**: IP address and source port for the connection of this producer
-
-  -   **connectedSince**: Timestamp this producer was created or last reconnected
-
-  -   **subscriptions**: The list of all local subscriptions to the topic
-
-  -   **my-subscription**: The name of this subscription (client defined)
-
-  -   **type**: This subscription type
-
-  -   **consumers**: The list of connected consumers for this subscription
-
-  -   **consumerName**: Internal identifier for this consumer, generated by the client library
-
-  -   **availablePermits**: The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and receive() isn't being called. A nonzero value means this consumer is ready to be dispatched messages.
-
-  -   **replication**: This section gives the stats for cross-colo replication of this topic
-
-  -   **connected**: Whether the outbound replicator is connected
-
-  -   **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker
-
-  -   **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.
-
-  -   **msgDropRate**: For publishers, the broker only allows a configured number of in-flight messages per connection and drops any published messages above that threshold. The broker also drops messages for subscriptions when dispatch permits are unavailable or the connection is not writable.
-
-```json
-
-{
-  "msgRateIn": 4641.528542257553,
-  "msgThroughputIn": 44663039.74947473,
-  "msgRateOut": 0,
-  "msgThroughputOut": 0,
-  "averageMsgSize": 1232439.816728665,
-  "storageSize": 135532389160,
-  "msgDropRate" : 0.0,
-  "publishers": [
-    {
-      "msgRateIn": 57.855383881403576,
-      "msgThroughputIn": 558994.7078932219,
-      "averageMsgSize": 613135,
-      "producerId": 0,
-      "producerName": null,
-      "address": null,
-      "connectedSince": null,
-      "msgDropRate" : 0.0
-    }
-  ],
-  "subscriptions": {
-    "my-topic_subscription": {
-      "msgRateOut": 0,
-      "msgThroughputOut": 0,
-      "msgBacklog": 116632,
-      "type": null,
-      "msgRateExpired": 36.98245516804671,
-       "consumers" : [ {
-        "msgRateOut" : 20343.506296021893,
-        "msgThroughputOut" : 2.0979855364233278E7,
-        "msgRateRedeliver" : 0.0,
-        "consumerName" : "fe3c0",
-        "availablePermits" : 950,
-        "unackedMessages" : 0,
-        "blockedConsumerOnUnackedMsgs" : false,
-        "address" : "/10.73.210.249:60578",
-        "connectedSince" : "2017-07-26 15:13:48.026-0700",
-        "clientVersion" : "1.19-incubating-SNAPSHOT"
-      } ],
-      "msgDropRate" : 432.2390921571593
-
-    }
-  },
-  "replication": {}
-}
-
-```
-
-#### pulsar-admin
-
-Topic stats can be fetched using [`stats`](reference-pulsar-admin.md#stats) command.
-
-```shell
-
-$ pulsar-admin non-persistent stats \
-  non-persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/non-persistent/:tenant/:namespace/:topic/stats|operation/getStats}
-
-
-#### Java
-
-```java
-
-String topic = "non-persistent://my-tenant/my-namespace/my-topic";
-admin.nonPersistentTopics().getStats(topic);
-
-```
-
-### Get internal stats
-
-It shows detailed statistics of a topic.
-
-#### pulsar-admin
-
-Topic internal-stats can be fetched using [`stats-internal`](reference-pulsar-admin.md#stats-internal) command.
-
-```shell
-
-$ pulsar-admin non-persistent stats-internal \
-  non-persistent://test-tenant/ns1/tp1
-
-{
-  "entriesAddedCounter" : 48834,
-  "numberOfEntries" : 0,
-  "totalSize" : 0,
-  "cursors" : {
-    "s1" : {
-      "waitingReadOp" : false,
-      "pendingReadOps" : 0,
-      "messagesConsumedCounter" : 0,
-      "cursorLedger" : 0,
-      "cursorLedgerLastEntry" : 0
-    }
-  }
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/non-persistent/:tenant/:namespace/:topic/internalStats|operation/getInternalStats}
-
-#### Java
-
-```java
-
-String topic = "non-persistent://my-tenant/my-namespace/my-topic";
-admin.nonPersistentTopics().getInternalStats(topic);
-
-```
-
-### Create partitioned topic
-
-Partitioned topics in Pulsar must be explicitly created. When creating a new partitioned topic you need to provide a name for the topic as well as the desired number of partitions.
-
-#### pulsar-admin
-
-```shell
-
-$ bin/pulsar-admin non-persistent create-partitioned-topic \
-  non-persistent://my-tenant/my-namespace/my-topic \
-  --partitions 4
-
-```
-
-#### REST API
-
-{@inject: endpoint|PUT|/admin/v2/non-persistent/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic}
-
-#### Java
-
-```java
-
-String topicName = "non-persistent://my-tenant/my-namespace/my-topic";
-int numPartitions = 4;
-admin.nonPersistentTopics().createPartitionedTopic(topicName, numPartitions);
-
-```
-
-### Get metadata
-
-Partitioned topics have metadata associated with them that you can fetch as a JSON object. The following metadata fields are currently available:
-
-Field | Meaning
-:-----|:-------
-`partitions` | The number of partitions into which the topic is divided
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin non-persistent get-partitioned-topic-metadata \
-  non-persistent://my-tenant/my-namespace/my-topic
-{
-  "partitions": 4
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/non-persistent/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata}
-
-
-#### Java
-
-```java
-
-String topicName = "non-persistent://my-tenant/my-namespace/my-topic";
-admin.nonPersistentTopics().getPartitionedTopicMetadata(topicName);
-
-```
-
-### Unload topic
-
-It unloads a topic.
-
-#### pulsar-admin
-
-Topic can be unloaded using [`unload`](reference-pulsar-admin.md#unload) command.
-
-```shell
-
-$ pulsar-admin non-persistent unload \
-  non-persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|PUT|/admin/v2/non-persistent/:tenant/:namespace/:topic/unload|operation/unloadTopic}
-
-#### Java
-
-```java
-
-String topic = "non-persistent://my-tenantmy-namespace/my-topic";
-admin.nonPersistentTopics().unload(topic);
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-overview.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-overview.md
deleted file mode 100644
index 182697cc803..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-overview.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-id: admin-api-overview
-title: The Pulsar admin interface
-sidebar_label: "Overview"
-original_id: admin-api-overview
----
-
-The Pulsar admin interface enables you to manage all of the important entities in a Pulsar [instance](reference-terminology.md#instance), such as [tenants](reference-terminology.md#tenant), [topics](reference-terminology.md#topic), and [namespaces](reference-terminology.md#namespace).
-
-You can currently interact with the admin interface via:
-
-- Making HTTP calls against the admin {@inject: rest:REST:/} API provided by Pulsar [brokers](reference-terminology.md#broker). Some REST calls may be redirected to the broker that owns the topic
-   with a [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), so HTTP callers should handle `307 Temporary Redirect` responses. If you are using `curl`, specify `-L`
-   to follow redirects.
-- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your [Pulsar installation](getting-started-standalone.md):
-
-```shell
-
-$ bin/pulsar-admin
-
-```
-
-Full documentation for this tool can be found in the [Pulsar command-line tools](reference-pulsar-admin.md) doc.
-
-- A Java client interface.
-
-> #### The REST API is the admin interface
-> Under the hood, both the `pulsar-admin` CLI tool and the Java client use the REST API. If you’d like to implement your own admin interface client, you should use the REST API as well. Full documentation can be found here.
-
-In this document, examples from each of the three available interfaces will be shown.
-
-## Admin setup
-
-Each of Pulsar's three admin interfaces---the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool, the [Java admin API](/api/admin), and the {@inject: rest:REST:/} API ---requires some special setup if you have [authentication](security-overview.md#authentication-providers) enabled in your Pulsar [instance](reference-terminology.md#instance).
-
-### pulsar-admin
-
-If you have [authentication](security-overview.md#authentication-providers) enabled, you will need to provide an auth configuration to use the [`pulsar-admin`](reference-pulsar-admin.md) tool. By default, the configuration for the `pulsar-admin` tool is found in the [`conf/client.conf`](reference-configuration.md#client) file. Here are the available parameters:
-
-|Name|Description|Default|
-|----|-----------|-------|
-|webServiceUrl|The web URL for the cluster.|http://localhost:8080/|
-|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/|
-|authPlugin|The authentication plugin.| |
-|authParams|The authentication parameters for the cluster, as a comma-separated string.| |
-|useTls|Whether or not TLS authentication will be enforced in the cluster.|false|
-|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false|
-|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| |
-
-### REST API
-
-You can find documentation for the REST API exposed by Pulsar [brokers](reference-terminology.md#broker) in this reference {@inject: rest:document:/}.
-
-### Java admin client
-
-To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, specifying a URL for a Pulsar [broker](reference-terminology.md#broker) and a {@inject: javadoc:ClientConfiguration:/admin/org/apache/pulsar/client/admin/ClientConfiguration}. Here's a minimal example using `localhost`:
-
-```java
-
-URL url = new URL("http://localhost:8080");
-// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
-String authPluginClassName = "com.org.MyAuthPluginClass"; 
-// Pass auth-param if auth-plugin class requires it
-String authParams = "param1=value1";
-boolean useTls = false;
-boolean tlsAllowInsecureConnection = false;
-String tlsTrustCertsFilePath = null;
-
-ClientConfiguration config = new ClientConfiguration();
-config.setAuthentication(authPluginClassName, authParams);
-config.setUseTls(useTls);
-config.setTlsAllowInsecureConnection(tlsAllowInsecureConnection);
-config.setTlsTrustCertsFilePath(tlsTrustCertsFilePath);
-
-PulsarAdmin admin = new PulsarAdmin(url, config);
-
-```
-
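-Once constructed, the `admin` object can be used for any of the operations described in this guide. For example, a sketch (assuming the broker above is reachable and the instance already has clusters and tenants defined):
-
-```java
-
-// List the clusters and tenants known to this Pulsar instance
-System.out.println(admin.clusters().getClusters());
-System.out.println(admin.tenants().getTenants());
-
-// Release the underlying HTTP client resources when done
-admin.close();
-
-```
-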
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-partitioned-topics.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-partitioned-topics.md
deleted file mode 100644
index 4051f494b7a..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-partitioned-topics.md
+++ /dev/null
@@ -1,362 +0,0 @@
----
-id: admin-api-partitioned-topics
-title: Managing partitioned topics
-sidebar_label: "Partitioned topics"
-original_id: admin-api-partitioned-topics
----
-
-
-You can use Pulsar's [admin API](admin-api-overview.md) to create and manage partitioned topics.
-
-In all of the instructions and commands below, the topic name structure is:
-
-```shell
-
-persistent://tenant/namespace/topic
-
-```
-
-## Partitioned topics resources
-
-### Create
-
-Partitioned topics in Pulsar must be explicitly created. When creating a new partitioned topic you
-need to provide a name for the topic as well as the desired number of partitions.
-
-#### pulsar-admin
-
-You can create partitioned topics using the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic)
-command and specifying the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag.
-Here's an example:
-
-```shell
-
-$ bin/pulsar-admin topics create-partitioned-topic \
-  persistent://my-tenant/my-namespace/my-topic \
-  --partitions 4
-
-```
-
-#### REST API
-
-{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic}
-
-#### Java
-
-```java
-
-String topicName = "persistent://my-tenant/my-namespace/my-topic";
-int numPartitions = 4;
-admin.persistentTopics().createPartitionedTopic(topicName, numPartitions);
-
-```
-
-### Get metadata
-
-Partitioned topics have metadata associated with them that you can fetch as a JSON object.
-The following metadata fields are currently available:
-
-Field | Meaning
-:-----|:-------
-`partitions` | The number of partitions into which the topic is divided
-
-#### pulsar-admin
-
-You can see the number of partitions in a partitioned topic using the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata)
-subcommand. Here's an example:
-
-```shell
-
-$ pulsar-admin topics get-partitioned-topic-metadata \
-  persistent://my-tenant/my-namespace/my-topic
-{
-  "partitions": 4
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata}
-
-#### Java
-
-```java
-
-String topicName = "persistent://my-tenant/my-namespace/my-topic";
-admin.persistentTopics().getPartitionedTopicMetadata(topicName);
-
-```
-
-### Update
-
-You can update the number of partitions on an existing partitioned topic
-*if* the topic is non-global. To update, the new number of partitions must be greater
-than the existing number.
-
-Decrementing the number of partitions would require deleting the topic, which is not supported in Pulsar.
-
-Producers and consumers created before the update cannot see the newly added partitions; they need to be recreated in the
-application so that newly created producers and consumers can connect
-to the newly added partitions as well. Therefore, partition ordering at the producers can be violated until
-all producers are restarted in the application.
-
-#### pulsar-admin
-
-Partitioned topics can be updated using the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command.
-
-```shell
-
-$ pulsar-admin topics update-partitioned-topic \
-  persistent://my-tenant/my-namespace/my-topic \
-  --partitions 8
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/updatePartitionedTopic}
-
-#### Java
-
-```java
-
-admin.persistentTopics().updatePartitionedTopic(persistentTopic, numPartitions);
-
-```
-
-### Delete
-
-#### pulsar-admin
-
-Partitioned topics can be deleted using the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, specifying the topic by name:
-
-```shell
-
-$ bin/pulsar-admin topics delete-partitioned-topic \
-  persistent://my-tenant/my-namespace/my-topic
-
-```
-
-#### REST API
-
-{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/deletePartitionedTopic}
-
-#### Java
-
-```java
-
-admin.persistentTopics().delete(persistentTopic);
-
-```
-
-### List
-
-It provides a list of persistent topics existing under a given namespace.  
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin topics list tenant/namespace
-persistent://tenant/namespace/topic1
-persistent://tenant/namespace/topic2
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList}
-
-#### Java
-
-```java
-
-admin.persistentTopics().getList(namespace);
-
-```
-
-### Stats
-
-It shows current statistics of a given partitioned topic. Here's an example payload:
-
-```json
-
-{
-  "msgRateIn": 4641.528542257553,
-  "msgThroughputIn": 44663039.74947473,
-  "msgRateOut": 0,
-  "msgThroughputOut": 0,
-  "averageMsgSize": 1232439.816728665,
-  "storageSize": 135532389160,
-  "publishers": [
-    {
-      "msgRateIn": 57.855383881403576,
-      "msgThroughputIn": 558994.7078932219,
-      "averageMsgSize": 613135,
-      "producerId": 0,
-      "producerName": null,
-      "address": null,
-      "connectedSince": null
-    }
-  ],
-  "subscriptions": {
-    "my-topic_subscription": {
-      "msgRateOut": 0,
-      "msgThroughputOut": 0,
-      "msgBacklog": 116632,
-      "type": null,
-      "msgRateExpired": 36.98245516804671,
-      "consumers": []
-    }
-  },
-  "replication": {}
-}
-
-```
-
-The following stats are available:
-
-|Stat|Description|
-|----|-----------|
-|msgRateIn|The sum of all local and replication publishers’ publish rates in messages per second|
-|msgThroughputIn|Same as msgRateIn but in bytes per second instead of messages per second|
-|msgRateOut|The sum of all local and replication consumers’ dispatch rates in messages per second|
-|msgThroughputOut|Same as msgRateOut but in bytes per second instead of messages per second|
-|averageMsgSize|Average message size, in bytes, from this publisher within the last interval|
-|storageSize|The sum of the ledgers’ storage size for this topic|
-|publishers|The list of all local publishers into the topic. There can be anywhere from zero to thousands.|
-|producerId|Internal identifier for this producer on this topic|
-|producerName|Internal identifier for this producer, generated by the client library|
-|address|IP address and source port for the connection of this producer|
-|connectedSince|Timestamp this producer was created or last reconnected|
-|subscriptions|The list of all local subscriptions to the topic|
-|my-subscription|The name of this subscription (client defined)|
-|msgBacklog|The count of messages in backlog for this subscription|
-|type|This subscription type|
-|msgRateExpired|The rate at which messages were discarded instead of dispatched from this subscription due to TTL|
-|consumers|The list of connected consumers for this subscription|
-|consumerName|Internal identifier for this consumer, generated by the client library|
-|availablePermits|The number of messages this consumer has space for in the client library’s listen queue. A value of 0 means the client library’s queue is full and receive() isn’t being called. A nonzero value means this consumer is ready to be dispatched messages.|
-|replication|This section gives the stats for cross-colo replication of this topic|
-|replicationBacklog|The outbound replication backlog in messages|
-|connected|Whether the outbound replicator is connected|
-|replicationDelayInSeconds|How long the oldest message has been waiting to be sent through the connection, if connected is true|
-|inboundConnection|The IP and port of the broker in the remote cluster’s publisher connection to this broker|
-|inboundConnectedSince|The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.|
-
-#### pulsar-admin
-
-The stats for the partitioned topic and its connected producers and consumers can be fetched by using the [`partitioned-stats`](reference-pulsar-admin.md#partitioned-stats) command, specifying the topic by name:
-
-```shell
-
-$ pulsar-admin topics partitioned-stats \
-  persistent://test-tenant/namespace/topic \
-  --per-partition
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats}
-
-#### Java
-
-```java
-
-admin.persistentTopics().getStats(persistentTopic);
-
-```
-
-### Internal stats
-
-It shows detailed statistics of a topic.
-
-|Stat|Description|
-|----|-----------|
-|entriesAddedCounter|Messages published since this broker loaded this topic|
-|numberOfEntries|Total number of messages being tracked|
-|totalSize|Total storage size in bytes of all messages|
-|currentLedgerEntries|Count of messages written to the ledger currently open for writing|
-|currentLedgerSize|Size in bytes of messages written to ledger currently open for writing|
-|lastLedgerCreatedTimestamp|Time when last ledger was created|
-|lastLedgerCreationFailureTimestamp|Time when the last ledger creation failed|
-|waitingCursorsCount|How many cursors are caught up and waiting for a new message to be published|
-|pendingAddEntriesCount|How many messages have (asynchronous) write requests we are waiting on completion|
-|lastConfirmedEntry|The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger has been opened or is currently being opened but has no entries written yet.|
-|state|The state of the cursor ledger. Open means we have a cursor ledger for saving updates of the markDeletePosition.|
-|ledgers|The ordered list of all ledgers for this topic holding its messages|
-|cursors|The list of all cursors on this topic. There will be one for every subscription you saw in the topic stats.|
-|markDeletePosition|The ack position: the last message the subscriber acknowledged receiving|
-|readPosition|The latest position of subscriber for reading message|
-|waitingReadOp|This is true when the subscription has read the latest message published to the topic and is waiting on new messages to be published.|
-|pendingReadOps|The counter for how many outstanding read requests to the BookKeepers we have in progress|
-|messagesConsumedCounter|Number of messages this cursor has acked since this broker loaded this topic|
-|cursorLedger|The ledger being used to persistently store the current markDeletePosition|
-|cursorLedgerLastEntry|The last entryid used to persistently store the current markDeletePosition|
-|individuallyDeletedMessages|If Acks are being done out of order, shows the ranges of messages Acked between the markDeletePosition and the read-position|
-|lastLedgerSwitchTimestamp|The last time the cursor ledger was rolled over|
-
-```json
-
-{
-  "entriesAddedCounter": 20449518,
-  "numberOfEntries": 3233,
-  "totalSize": 331482,
-  "currentLedgerEntries": 3233,
-  "currentLedgerSize": 331482,
-  "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825",
-  "lastLedgerCreationFailureTimestamp": null,
-  "waitingCursorsCount": 1,
-  "pendingAddEntriesCount": 0,
-  "lastConfirmedEntry": "324711539:3232",
-  "state": "LedgerOpened",
-  "ledgers": [
-    {
-      "ledgerId": 324711539,
-      "entries": 0,
-      "size": 0
-    }
-  ],
-  "cursors": {
-    "my-subscription": {
-      "markDeletePosition": "324711539:3133",
-      "readPosition": "324711539:3233",
-      "waitingReadOp": true,
-      "pendingReadOps": 0,
-      "messagesConsumedCounter": 20449501,
-      "cursorLedger": 324702104,
-      "cursorLedgerLastEntry": 21,
-      "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]",
-      "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313",
-      "state": "Open"
-    }
-  }
-}
-
-```
-
-#### pulsar-admin
-
-The internal stats for the partitioned topic can be fetched by using the [`stats-internal`](reference-pulsar-admin.md#stats-internal) command, specifying the topic by name:
-
-```shell
-
-$ pulsar-admin topics stats-internal \
-  persistent://test-tenant/namespace/topic
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats}
-
-#### Java
-
-```java
-
-admin.persistentTopics().getInternalStats(persistentTopic);
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-permissions.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-permissions.md
deleted file mode 100644
index c7ab724eb12..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-permissions.md
+++ /dev/null
@@ -1,136 +0,0 @@
----
-id: admin-api-permissions
-title: Managing permissions
-sidebar_label: "Permissions"
-original_id: admin-api-permissions
----
-
-Permissions in Pulsar are managed at the [namespace](reference-terminology.md#namespace) level
-(that is, within [tenants](reference-terminology.md#tenant) and [clusters](reference-terminology.md#cluster)).
-
-## Grant permissions
-
-You can grant permissions to specific roles for lists of operations such as `produce` and `consume`.
-
-### pulsar-admin
-
-Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag:
-
-```shell
-
-$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
-  --actions produce,consume \
-  --role admin10
-
-```
-
-Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`.
-
-e.g.
-
-```shell
-
-$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
-                        --actions produce,consume \
-                        --role 'my.role.*'
-
-```
-
-Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume.  
-
-```shell
-
-$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
-                        --actions produce,consume \
-                        --role '*.role.my'
-
-```
-
-Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume.
-
-**Note**: A wildcard matching works at **the beginning or end of the role name only**.
-
-e.g.
-
-```shell
-
-$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
-                        --actions produce,consume \
-                        --role 'my.*.role'
-
-```
-
-In this case, only the role `my.*.role` has permissions.  
-Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume.
-
-### REST API
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace}
-
-### Java
-
-```java
-
-admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions));
-
-```
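-
-A more concrete sketch of the same call (assuming the `AuthAction` enum from `org.apache.pulsar.common.policies.data` and `java.util.EnumSet`):
-
-```java
-
-// Allow the role "admin10" to both produce to and consume from the namespace
-Set<AuthAction> actions = EnumSet.of(AuthAction.produce, AuthAction.consume);
-admin.namespaces().grantPermissionOnNamespace("test-tenant/ns1", "admin10", actions);
-
-```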
-
-## Get permissions
-
-You can see which permissions have been granted to which roles in a namespace.
-
-### pulsar-admin
-
-Use the [`permissions`](reference-pulsar-admin#permissions) subcommand and specify a namespace:
-
-```shell
-
-$ pulsar-admin namespaces permissions test-tenant/ns1
-{
-  "admin10": [
-    "produce",
-    "consume"
-  ]
-}
-
-```
-
-### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions}
-
-### Java
-
-```java
-
-admin.namespaces().getPermissions(namespace);
-
-```
-
-## Revoke permissions
-
-You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace.
-
-### pulsar-admin
-
-Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag:
-
-```shell
-
-$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \
-  --role admin10
-
-```
-
-### REST API
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace}
-
-### Java
-
-```java
-
-admin.namespaces().revokePermissionsOnNamespace(namespace, role);
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-persistent-topics.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-persistent-topics.md
deleted file mode 100644
index 1a815bbe59b..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-persistent-topics.md
+++ /dev/null
@@ -1,665 +0,0 @@
----
-id: admin-api-persistent-topics
-title: Managing persistent topics
-sidebar_label: "Persistent topics"
-original_id: admin-api-persistent-topics
----
-
-Persistent topics are logical endpoints for publishing and consuming messages. Producers publish messages to a topic, and consumers subscribe to the topic to consume the messages published to it.
-
-In all of the instructions and commands below, the topic name structure is:
-
-```shell
-
-persistent://tenant/namespace/topic
-
-```
-
-## Persistent topics resources
-
-### List of topics
-
-It provides a list of persistent topics existing under a given namespace.
-
-#### pulsar-admin
-
-List of topics can be fetched using [`list`](../../reference/CliTools#list) command.
-
-```shell
-
-$ pulsar-admin topics list \
-  my-tenant/my-namespace
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList}
-
-#### Java
-
-```java
-
-String namespace = "my-tenant/my-namespace";
-admin.persistentTopics().getList(namespace);
-
-```
-
-### Grant permission
-
-It grants permissions on a client role to perform specific actions on a given topic.
-
-#### pulsar-admin
-
-Permission can be granted using [`grant-permission`](../../reference/CliTools#grant-permission) command.
-
-```shell
-
-$ pulsar-admin topics grant-permission \
-  --actions produce,consume --role application1 \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String role = "test-role";
-Set<AuthAction> actions  = Sets.newHashSet(AuthAction.produce, AuthAction.consume);
-admin.persistentTopics().grantPermission(topic, role, actions);
-
-```
-
-### Get permission
-
-Permission can be fetched using [`permissions`](../../reference/CliTools#permissions) command.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin topics permissions \
-  persistent://test-tenant/ns1/tp1
-
-{
-    "application1": [
-        "consume",
-        "produce"
-    ]
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.persistentTopics().getPermissions(topic);
-
-```
-
-### Revoke permission
-
-It revokes a permission which was granted on a client role.
-
-#### pulsar-admin
-
-Permission can be revoked using [`revoke-permission`](../../reference/CliTools#revoke-permission) command.
-
-```shell
-
-$ pulsar-admin topics revoke-permission \
-  --role application1 \
-  persistent://test-tenant/ns1/tp1
-
-{
-  "application1": [
-    "consume",
-    "produce"
-  ]
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String role = "test-role";
-admin.persistentTopics().revokePermissions(topic, role);
-
-```
-
-### Delete topic
-
-It deletes a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to it.
-
-#### pulsar-admin
-
-Topic can be deleted using [`delete`](../../reference/CliTools#delete) command.
-
-```shell
-
-$ pulsar-admin topics delete \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.persistentTopics().delete(topic);
-
-```
-
-### Unload topic
-
-It unloads a topic.
-
-#### pulsar-admin
-
-Topic can be unloaded using [`unload`](../../reference/CliTools#unload) command.
-
-```shell
-
-$ pulsar-admin topics unload \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.persistentTopics().unload(topic);
-
-```
-
-### Get stats
-
-It shows current statistics of a given non-partitioned topic.
-
-  -   **msgRateIn**: The sum of all local and replication publishers' publish rates in messages per second
-
-  -   **msgThroughputIn**: Same as above, but in bytes per second instead of messages per second
-
-  -   **msgRateOut**: The sum of all local and replication consumers' dispatch rates in messages per second
-
-  -   **msgThroughputOut**: Same as above, but in bytes per second instead of messages per second
-
-  -   **averageMsgSize**: The average size in bytes of messages published within the last interval
-
-  -   **storageSize**: The sum of the ledgers' storage size for this topic
-
-  -   **publishers**: The list of all local publishers into the topic. There can be zero or thousands
-
-  -   **averageMsgSize**: Average message size in bytes from this publisher within the last interval
-
-  -   **producerId**: Internal identifier for this producer on this topic
-
-  -   **producerName**: Internal identifier for this producer, generated by the client library
-
-  -   **address**: IP address and source port for the connection of this producer
-
-  -   **connectedSince**: Timestamp this producer was created or last reconnected
-
-  -   **subscriptions**: The list of all local subscriptions to the topic
-
-  -   **my-subscription**: The name of this subscription (client defined)
-
-  -   **msgBacklog**: The count of messages in backlog for this subscription
-
-  -   **type**: This subscription type
-
-  -   **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL
-
-  -   **consumers**: The list of connected consumers for this subscription
-
-  -   **consumerName**: Internal identifier for this consumer, generated by the client library
-
-  -   **availablePermits**: The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and receive() isn't being called. A nonzero value means this consumer is ready to be dispatched messages.
-
-  -   **replication**: This section gives the stats for cross-colo replication of this topic
-
-  -   **replicationBacklog**: The outbound replication backlog in messages
-
-  -   **connected**: Whether the outbound replicator is connected
-
-  -   **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is true
-
-  -   **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker
-
-  -   **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.
-
-```json
-
-{
-  "msgRateIn": 4641.528542257553,
-  "msgThroughputIn": 44663039.74947473,
-  "msgRateOut": 0,
-  "msgThroughputOut": 0,
-  "averageMsgSize": 1232439.816728665,
-  "storageSize": 135532389160,
-  "publishers": [
-    {
-      "msgRateIn": 57.855383881403576,
-      "msgThroughputIn": 558994.7078932219,
-      "averageMsgSize": 613135,
-      "producerId": 0,
-      "producerName": null,
-      "address": null,
-      "connectedSince": null
-    }
-  ],
-  "subscriptions": {
-    "my-topic_subscription": {
-      "msgRateOut": 0,
-      "msgThroughputOut": 0,
-      "msgBacklog": 116632,
-      "type": null,
-      "msgRateExpired": 36.98245516804671,
-      "consumers": []
-    }
-  },
-  "replication": {}
-}
-
-```
-
-#### pulsar-admin
-
-Topic stats can be fetched using [`stats`](../../reference/CliTools#stats) command.
-
-```shell
-
-$ pulsar-admin topics stats \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.persistentTopics().getStats(topic);
-
-```
-
-### Get internal stats
-
-It shows detailed statistics of a topic.
-
-  -   **entriesAddedCounter**: Messages published since this broker loaded this topic
-
-  -   **numberOfEntries**: Total number of messages being tracked
-
-  -   **totalSize**: Total storage size in bytes of all messages
-
-  -   **currentLedgerEntries**: Count of messages written to the ledger currently open for writing
-
-  -   **currentLedgerSize**: Size in bytes of messages written to ledger currently open for writing
-
-  -   **lastLedgerCreatedTimestamp**: Time when the last ledger was created
-
-  -   **lastLedgerCreationFailureTimestamp**: Time when the last ledger creation failed
-
-  -   **waitingCursorsCount**: How many cursors are "caught up" and waiting for a new message to be published
-
-  -   **pendingAddEntriesCount**: How many messages have (asynchronous) write requests we are waiting on completion
-
-  -   **lastConfirmedEntry**: The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger has been opened or is currently being opened but has no entries written yet.
-
-  -   **state**: The state of this ledger for writing. LedgerOpened means we have a ledger open for saving published messages.
-
-  -   **ledgers**: The ordered list of all ledgers for this topic holding its messages
-
-  -   **cursors**: The list of all cursors on this topic. There will be one for every subscription you saw in the topic stats.
-
-  -   **markDeletePosition**: The ack position: the last message the subscriber acknowledged receiving
-
-  -   **readPosition**: The latest position of subscriber for reading message
-
-  -   **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting on new messages to be published.
-
-  -   **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers we have in progress
-
-  -   **messagesConsumedCounter**: Number of messages this cursor has acked since this broker loaded this topic
-
-  -   **cursorLedger**: The ledger being used to persistently store the current markDeletePosition
-
-  -   **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition
-
-  -   **individuallyDeletedMessages**: If Acks are being done out of order, shows the ranges of messages Acked between the markDeletePosition and the read-position
-
-  -   **lastLedgerSwitchTimestamp**: The last time the cursor ledger was rolled over
-
-  -   **state**: The state of the cursor ledger: Open means we have a cursor ledger for saving updates of the markDeletePosition.
-
-```json
-
-{
-    "entriesAddedCounter": 20449518,
-    "numberOfEntries": 3233,
-    "totalSize": 331482,
-    "currentLedgerEntries": 3233,
-    "currentLedgerSize": 331482,
-    "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825",
-    "lastLedgerCreationFailureTimestamp": null,
-    "waitingCursorsCount": 1,
-    "pendingAddEntriesCount": 0,
-    "lastConfirmedEntry": "324711539:3232",
-    "state": "LedgerOpened",
-    "ledgers": [
-        {
-            "ledgerId": 324711539,
-            "entries": 0,
-            "size": 0
-        }
-    ],
-    "cursors": {
-        "my-subscription": {
-            "markDeletePosition": "324711539:3133",
-            "readPosition": "324711539:3233",
-            "waitingReadOp": true,
-            "pendingReadOps": 0,
-            "messagesConsumedCounter": 20449501,
-            "cursorLedger": 324702104,
-            "cursorLedgerLastEntry": 21,
-            "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]",
-            "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313",
-            "state": "Open"
-        }
-    }
-}
-
-```
-
-#### pulsar-admin
-
-Topic internal-stats can be fetched using [`stats-internal`](../../reference/CliTools#stats-internal) command.
-
-```shell
-
-$ pulsar-admin topics stats-internal \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.persistentTopics().getInternalStats(topic);
-
-```
-
-### Peek messages
-
-It peeks N messages for a specific subscription of a given topic.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin topics peek-messages \
-  --count 10 --subscription my-subscription \
-  persistent://test-tenant/ns1/tp1
-
-Message ID: 315674752:0
-Properties:  {  "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451"  }
-msg-payload
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subName = "my-subscription";
-int numMessages = 1;
-admin.persistentTopics().peekMessages(topic, subName, numMessages);
-
-```
-
-### Skip messages
-
-It skips N messages for a specific subscription of a given topic.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin topics skip \
-  --count 10 --subscription my-subscription \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subName = "my-subscription";
-int numMessages = 1;
-admin.persistentTopics().skipMessages(topic, subName, numMessages);
-
-```
-
-### Skip all messages
-
-You can skip all the old messages for a specific subscription of a given topic.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin topics skip-all \
-  --subscription my-subscription \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages}
-
-[More info](../../reference/RestApi#/admin/persistent/:tenant/:namespace/:topic/subscription/:subName/skip_all)
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subName = "my-subscription";
-admin.persistentTopics().skipAllMessages(topic, subName);
-
-```
-
-### Reset cursor
-
-You can reset a subscription's cursor position back to the position that was recorded X minutes earlier. This essentially calculates the time and position of the cursor X minutes ago and resets the cursor to that position.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin topics reset-cursor \
-  --subscription my-subscription --time 10 \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subName = "my-subscription";
-long timestamp = 2342343L;
-admin.persistentTopics().resetCursor(topic, subName, timestamp);
-
-```
-
-### Lookup of topic
-
-You can locate the broker URL that is serving the given topic.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin topics lookup \
-  persistent://test-tenant/ns1/tp1
-
- "pulsar://broker1.org.com:4480"
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/lookup/v2/topic/:schema/:tenant/:namespace/:topic|/}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.lookups().lookupTopic(topic);
-
-```
-
-### Get bundle
-
-You can get the range of the bundle that contains the given topic.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin topics bundle-range \
-  persistent://test-tenant/ns1/tp1
-
- "0x00000000_0xffffffff"
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.lookups().getBundleRange(topic);
-
-```
-
-### Get subscriptions
-
-You can list all the subscription names for a given topic.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin topics subscriptions \
-  persistent://test-tenant/ns1/tp1
-
- my-subscription
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.persistentTopics().getSubscriptions(topic);
-
-```
-
-### Unsubscribe
-
-You can also unsubscribe a subscription that is no longer processing messages.
-
-#### pulsar-admin
-
-```shell
-
-$ pulsar-admin topics unsubscribe \
-  --subscription my-subscription \
-  persistent://test-tenant/ns1/tp1
-
-```
-
-#### REST API
-
-{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName|operation/deleteSubscription}
-
-#### Java
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-String subscriptionName = "my-subscription";
-admin.persistentTopics().deleteSubscription(topic, subscriptionName);
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-schemas.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-schemas.md
deleted file mode 100644
index 161f1ee3a34..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-schemas.md
+++ /dev/null
@@ -1,109 +0,0 @@
----
-id: admin-api-schemas
-title: Managing Schemas
-sidebar_label: "Schemas"
-original_id: admin-api-schemas
----
-
-Schemas, like other entities in Pulsar, can be managed using the [admin API](admin-api-overview.md). 
-
-## Schema resources
-
-A Pulsar schema is a fairly simple data structure stored in Pulsar for representing the structure of messages stored in a Pulsar topic. The schema structure consists of:
-
-- *Name*: A schema's name is the topic that the schema is associated with.
-- *Type*: A schema's type represents the type of the schema. The predefined schema types can be found [here](concepts-schema-registry.md#supported-schema-formats). If it
-  is a customized schema, the type is left as an empty string.
-- *Payload*: A binary representation of the schema. How to interpret it is up to the implementation of the schema.
-- *Properties*: A set of user-defined properties as a string/string map. Applications can use this bag to carry any application-specific information. Possible properties
-  might be the Git hash associated with the schema, an environment string like `dev` or `prod`, etc.
-
-All schemas are versioned. You can retrieve the schema definition of a given version as long as that version has not been deleted.
-
-### Upload Schema
-
-#### pulsar-admin
-
-You can upload a new schema using the [`upload`](reference-pulsar-admin.md#get-5) subcommand:
-
-```shell
-
-$ pulsar-admin schemas upload <topic-name> --filename /path/to/schema-definition-file
-
-```
-
-The schema definition file should contain a JSON string like the following, which defines the schema:
-
-```json
-
-{
-    "type": "STRING",
-    "schema": "",
-    "properties": {
-        "key1" : "value1"
-    }
-}
-
-```
-
-An example of the schema definition file can be found at {@inject: github:SchemaExample:/conf/schema_example.conf}.
-
-#### REST
-
-{@inject: endpoint|POST|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/uploadSchema}
-
-### Get Schema
-
-#### pulsar-admin
-
-You can get the latest version of a schema using the [`get`](reference-pulsar-admin.md#get-5) subcommand.
-
-```shell
-
-$ pulsar-admin schemas get <topic-name>
-{
-    "version": 0,
-    "type": "String",
-    "timestamp": 0,
-    "data": "string",
-    "properties": {
-        "property1": "string",
-        "property2": "string"
-    }
-}
-
-```
-
-You can also retrieve the schema of a given version by specifying the `--version` option.
-
-```shell
-
-$ pulsar-admin schemas get <topic-name> --version <version>
-
-```
-
-#### REST API
-
-Retrieve the latest version of the schema:
-
-{@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/getSchema}
-
-Retrieve the schema of a given version:
-
-{@inject: endpoint|GET|/admin/v2/schemas/:tenant/:namespace/:topic/schema/:version|operation/getSchema}
-
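-#### Java
-
-The schema can also be retrieved with the Java admin client. The following is a minimal sketch, assuming a configured `PulsarAdmin` instance named `admin` (as in the other Java examples in this guide); the schemas API surface may differ slightly between client versions.
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-// Latest version of the schema attached to the topic
-admin.schemas().getSchemaInfo(topic);
-// Schema at a specific version
-admin.schemas().getSchemaInfo(topic, 0);
-
-```
-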
-### Delete Schema
-
-#### pulsar-admin
-
-You can delete a schema using the [`delete`](reference-pulsar-admin.md#delete-8) subcommand.
-
-```shell
-
-$ pulsar-admin schemas delete <topic-name>
-
-```
-
-#### REST API
-
-{@inject: endpoint|DELETE|/admin/v2/schemas/:tenant/:namespace/:topic/schema|operation/deleteSchema}
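-
-#### Java
-
-A minimal sketch with the Java admin client, under the same assumptions as the Get Schema example above:
-
-```java
-
-admin.schemas().deleteSchema("persistent://my-tenant/my-namespace/my-topic");
-
-```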
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-tenants.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-tenants.md
deleted file mode 100644
index 27764d6da38..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/admin-api-tenants.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-id: admin-api-tenants
-title: Managing Tenants
-sidebar_label: "Tenants"
-original_id: admin-api-tenants
----
-
-Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants:
-
-* Admin roles
-* Allowed clusters
-
-## Tenant resources
-
-### List
-
-#### pulsar-admin
-
-You can list all of the tenants associated with an [instance](reference-terminology.md#instance) using the [`list`](reference-pulsar-admin.md#tenants-list) subcommand:
-
-```shell
-
-$ pulsar-admin tenants list
-
-```
-
-That will return a simple list, like this:
-
-```
-
-my-tenant-1
-my-tenant-2
-
-```
-
-### Create
-
-#### pulsar-admin
-
-You can create a new tenant using the [`create`](reference-pulsar-admin.md#tenants-create) subcommand:
-
-```shell
-
-$ pulsar-admin tenants create my-tenant
-
-```
-
-When creating a tenant, you can assign admin roles using the `-r`/`--admin-roles` flag. You can specify multiple roles as a comma-separated list. Here are some examples:
-
-```shell
-
-$ pulsar-admin tenants create my-tenant \
-  --admin-roles role1,role2,role3
-
-$ pulsar-admin tenants create my-tenant \
-  -r role1
-
-```
-
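-#### Java
-
-A tenant can also be created with the Java admin client. This is a minimal sketch, assuming a configured `PulsarAdmin` instance named `admin`; the role and cluster names are placeholders, and the `TenantInfo` constructor shown here may differ between client versions.
-
-```java
-
-// Placeholders: replace with your actual roles and clusters
-Set<String> adminRoles = new HashSet<>(Arrays.asList("role1", "role2"));
-Set<String> allowedClusters = new HashSet<>(Arrays.asList("us-west", "us-east"));
-admin.tenants().createTenant("my-tenant", new TenantInfo(adminRoles, allowedClusters));
-
-```
-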
-### Get configuration
-
-#### pulsar-admin
-
-You can see a tenant's configuration as a JSON object using the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specifying the name of the tenant:
-
-```shell
-
-$ pulsar-admin tenants get my-tenant
-{
-  "adminRoles": [
-    "admin1",
-    "admin2"
-  ],
-  "allowedClusters": [
-    "cl1",
-    "cl2"
-  ]
-}
-
-```
-
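-#### Java
-
-A minimal sketch with the Java admin client, under the same assumptions as the Create example above:
-
-```java
-
-TenantInfo config = admin.tenants().getTenantInfo("my-tenant");
-
-```
-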
-### Delete
-
-#### pulsar-admin
-
-You can delete a tenant using the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specifying the tenant name:
-
-```shell
-
-$ pulsar-admin tenants delete my-tenant
-
-```
-
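-#### Java
-
-A minimal sketch with the Java admin client, under the same assumptions as above:
-
-```java
-
-admin.tenants().deleteTenant("my-tenant");
-
-```
-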
-### Updating
-
-#### pulsar-admin
-
-You can update a tenant's configuration using the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.
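-
-#### Java
-
-For illustration, a minimal sketch with the Java admin client that replaces the admin roles while keeping the allowed clusters, under the same assumptions as above; the role name is a placeholder.
-
-```java
-
-TenantInfo current = admin.tenants().getTenantInfo("my-tenant");
-TenantInfo updated = new TenantInfo(
-        new HashSet<>(Arrays.asList("new-admin-role")),   // placeholder role
-        current.getAllowedClusters());
-admin.tenants().updateTenant("my-tenant", updated);
-
-```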
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-dashboard.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-dashboard.md
deleted file mode 100644
index 7b0538306d2..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-dashboard.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-id: administration-dashboard
-title: The Pulsar dashboard
-sidebar_label: "Dashboard"
-original_id: administration-dashboard
----
-
-The Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form.
-
-The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database.
-
-A [Django](https://www.djangoproject.com) web app is used to render the collected data.
-
-## Install
-
-The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container. A {@inject: github:Dockerfile:/dashboard/Dockerfile} to generate the image is provided.
-
-To generate the Docker image:
-
-```shell
-
-$ docker build -t apachepulsar/pulsar-dashboard dashboard
-
-```
-
-To run the dashboard:
-
-```shell
-
-$ SERVICE_URL=http://broker.example.com:8080/
-$ docker run -p 80:80 \
-  -e SERVICE_URL=$SERVICE_URL \
-  apachepulsar/pulsar-dashboard
-
-```
-
-You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from which it needs to pull metrics. If you're connecting the dashboard to Pulsar running in standalone mode, the URL is `http://<broker-ip>:8080` by default, where `<broker-ip>` is the IP address or hostname of the machine running Pulsar standalone. The IP address or hostname must be reachable from the Docker instance running the dashboard.
-
-Once the Docker container is running, the web dashboard will be accessible via `localhost` or whichever host is being used by Docker.
-
-> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container
-
-If the Pulsar service is running in standalone mode on `localhost`, the `SERVICE_URL` has to
-be the IP of the machine.
-
-Similarly, because Pulsar standalone advertises itself as `localhost` by default, you need to
-explicitly set the advertised address to the host IP. For example:
-
-```shell
-
-$ bin/pulsar standalone --advertised-address 1.2.3.4
-
-```
-
-### Known issues
-
-Pulsar [authentication](security-overview.md#authentication-providers) is not supported at this point. The dashboard's data collector does not pass any authentication-related data and will be denied access if the Pulsar broker requires authentication.
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-geo.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-geo.md
deleted file mode 100644
index 2b30491803c..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-geo.md
+++ /dev/null
@@ -1,137 +0,0 @@
----
-id: administration-geo
-title: Pulsar geo-replication
-sidebar_label: "Geo-replication"
-original_id: administration-geo
----
-
-*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
-
-## How it works
-
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
-
-![Replication Diagram](/assets/geo-replication.png)
-
-In this diagram, whenever producers **P1**, **P2**, and **P3** publish messages to the topic **T1** on clusters **Cluster-A**, **Cluster-B**, and **Cluster-C**, respectively, those messages are instantly replicated across clusters. Once replicated, consumers **C1** and **C2** can consume those messages from their respective clusters.
-
-Without geo-replication, consumers **C1** and **C2** wouldn't be able to consume messages published by producer **P3**.
-
-## Geo-replication and Pulsar properties
-
-Geo-replication must be enabled on a per-tenant basis in Pulsar. Geo-replication can be enabled between clusters only when a tenant has been created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, it's actually managed at the namespace level. You must do the following to enable geo-replication for a namespace:
-
-* [Create a global namespace](#creating-global-namespaces)
-* Configure that namespace to replicate between two or more provisioned clusters
-
-Any message published on *any* topic in that namespace will then be replicated to all clusters in the specified set.
-
-## Local persistence and forwarding
-
-When messages are produced on a Pulsar topic, they are first persisted in the local cluster and then forwarded asynchronously to the remote clusters.
-
-In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions.
-
-Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
-
-> #### Subscriptions are local to a cluster
-> While producers and consumers can publish to and consume from any cluster in a Pulsar instance, subscriptions are local to the clusters in which they are created and cannot be transferred between clusters. If you do need to transfer a subscription, you’ll need to create a new subscription in the desired cluster.
-
-In the example in the image above, the topic **T1** is being replicated between 3 clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
-
-All messages produced in any cluster will be delivered to all subscriptions in all the other clusters. In this case, consumers **C1** and **C2** will receive all messages published by producers **P1**, **P2**, and **P3**. Ordering is still guaranteed on a per-producer basis.
-
-## Configuring replication
-
-As stated [above](#geo-replication-and-pulsar-properties), geo-replication in Pulsar is managed at the [tenant](reference-terminology.md#tenant) level.
-
-### Granting permissions to properties
-
-To establish replication to a cluster, the tenant needs permission to use that cluster. This permission can be granted when the tenant is created or later on.
-
-At creation time, specify all the intended clusters:
-
-```shell
-
-$ bin/pulsar-admin properties create my-tenant \
-  --admin-roles my-admin-role \
-  --allowed-clusters us-west,us-east,us-cent
-
-```
-
-To update permissions of an existing tenant, use `update` instead of `create`.
-
-### Creating global namespaces
-
-Replication must be used with *global* topics, meaning topics that belong to a global namespace and are thus not tied to any particular cluster.
-
-Global namespaces need to be created in the `global` virtual cluster. For example:
-
-```shell
-
-$ bin/pulsar-admin namespaces create my-tenant/my-namespace
-
-```
-
-Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand:
-
-```shell
-
-$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \
-  --clusters us-west,us-east,us-cent
-
-```
-
-The replication clusters for a namespace can be changed at any time, with no disruption to ongoing traffic. Replication channels will be immediately set up or stopped in all the clusters as soon as the configuration changes.
-
-### Using global topics
-
-Once you've created a global namespace, any topics that producers or consumers create within that namespace will be global. Typically, each application will use the `serviceUrl` for the local cluster.
-
-#### Selective replication
-
-By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message. That message will then be replicated only to the subset in the replication list.
-
-Below is an example for the [Java API](client-libraries-java.md). Note the use of the `replicationClusters` method when constructing the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object:
-
-```java
-
-List<String> restrictReplicationTo = Arrays.asList(
-        "us-west",
-        "us-east"
-);
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("some-topic")
-        .create();
-
-producer.newMessage()
-        .value("my-payload".getBytes())
-        .replicationClusters(restrictReplicationTo)
-        .send();
-
-```
-
-#### Topic stats
-
-Topic-specific statistics for global topics are available via the [`pulsar-admin`](reference-pulsar-admin.md) tool and {@inject: rest:REST:/} API:
-
-```shell
-
-$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic
-
-```
-
-Each cluster reports its own local stats, including incoming and outgoing replication rates and backlogs.
-
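-The same per-cluster stats can also be fetched programmatically. A minimal sketch with the Java admin client, assuming a configured `PulsarAdmin` instance named `admin` (as in the other Java examples in this guide):
-
-```java
-
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-admin.persistentTopics().getStats(topic);
-
-```
-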
-#### Deleting a global topic
-
-Given that global topics exist in multiple regions, it's not possible to directly delete a global topic. Instead, you should rely on automatic topic garbage collection.
-
-In Pulsar, a topic is automatically deleted when it's no longer used, that is to say, when no producers or consumers are connected *and* there are no subscriptions *and* no more messages are kept for retention. For global topics, each region will use a fault-tolerant mechanism to decide when it's safe to delete the topic locally.
-
-You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker).
-
-To delete a global topic, close all producers and consumers on the topic and delete all its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic.
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-load-distribution.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-load-distribution.md
deleted file mode 100644
index 07f7e7033b3..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-load-distribution.md
+++ /dev/null
@@ -1,235 +0,0 @@
----
-id: administration-load-distribution
-title: Pulsar load distribution
-sidebar_label: "Load distribution"
-original_id: administration-load-distribution
----
-
-## Load distribution across Pulsar brokers
-
-Pulsar is a horizontally scalable messaging system, so it is a core requirement that the traffic
-in a logical cluster is spread across all the available Pulsar brokers as evenly as possible.
-
-In most cases this works out of the box and you shouldn't need to worry about it. There are, though,
-multiple settings and tools to control the traffic distribution, and they require a bit of
-context to understand how traffic is managed in Pulsar.
-
-## Pulsar load manager architecture
-
-### Dynamic assignment of topics to brokers
-
-Topics are dynamically assigned to brokers based on the load conditions of all brokers in the
-cluster.
-
-When a client starts using new topics that are not assigned to any broker, it triggers a
-process that, given the load conditions, chooses the best-suited broker to acquire ownership
-of those topics.
-
-In case of partitioned topics, different partitions might be assigned to different brokers. We talk
-about "topic" in this context to mean either a non-partitioned topic or one partition of a topic.
-
-The assignment is "dynamic" because it can change very quickly. For example, if the broker owning
-the topic crashes, the topic will be reassigned immediately to another broker. Another scenario is
-that the broker owning the topic becomes overloaded. In this case too, the topic will be
-reassigned to a less loaded broker.
-
-The dynamic assignment is made possible by the stateless nature of brokers. This also ensures that
-we can quickly expand or shrink the cluster based on usage.
-
-### Assignment granularity
-
-The assignment of topics/partitions to brokers is not done at the individual level. The reason
-is to amortize the amount of information that we need to keep track of (e.g. which topics are
-assigned to a particular broker, what the load on a broker's topics is, and similar).
-
-Instead of individual topic/partition assignment, each broker takes ownership of a subset of the
-topics for a namespace. This subset is called a "*bundle*" and effectively it's a sharding
-mechanism.
-
-The namespace is the "administrative" unit: many config knobs or operations are done at the
-namespace level.
-
-For assignment, a namespace is sharded into a list of "bundles", with each bundle comprising
-a portion of the overall hash range of the namespace.
-
-Topics are assigned to a particular bundle by taking the hash of the topic name and checking which
-bundle the hash falls into.
-
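-Conceptually, the mapping from a topic name to a bundle can be sketched as follows. This is illustrative only, assuming equal-width bundle ranges and a stand-in hash function; it is not Pulsar's actual implementation.
-
-```java
-
-// Conceptual sketch only -- not Pulsar's actual code.
-// The namespace hash range [0, 2^32) is split into equal-width bundles and a
-// topic belongs to the bundle whose sub-range contains the hash of its name.
-static int bundleIndex(String topicName, int numBundles) {
-    long fullRange = 1L << 32;                                  // size of the 32-bit hash space
-    long bundleWidth = fullRange / numBundles;                  // one equal-width range per bundle
-    long hash = Integer.toUnsignedLong(topicName.hashCode());   // stand-in hash function
-    return (int) Math.min(hash / bundleWidth, numBundles - 1);
-}
-
-```
-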
-Each bundle is independent of the others and thus is independently assigned to different brokers.
-
-### Creating namespaces and bundles
-
-When you create a new namespace, it is set to use the default number of bundles. This default is set in
-`conf/broker.conf`:
-
-```properties
-
-# When a namespace is created without specifying the number of bundle, this
-# value will be used as the default
-defaultNumberOfNamespaceBundles=4
-
-```
-
-One can either change the system default, or override it when creating a new namespace:
-
-```shell
-
-$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16
-
-```
-
-With this command, we're creating a namespace with 16 initial bundles. Therefore the topics for
-this namespace can immediately be spread across up to 16 brokers.
-
-In general, if the expected traffic and number of topics is known in advance, it's a good idea to
-start with a reasonable number of bundles instead of waiting for the system to auto-correct the
-distribution.
-
-On the same note, it is normally beneficial to start with more bundles than the number of brokers,
-primarily because topics are distributed into bundles by hashing. For example,
-for a namespace with 1000 topics, using something like 64 bundles will achieve a good distribution
-of traffic across 16 brokers.
-
-### Unloading topics and bundles
-
-In Pulsar there is an admin operation for "unloading" a topic. Unloading means closing the topic,
-releasing ownership, and reassigning the topic to a new broker, based on current load.
-
-When unload happens, the client will experience a small latency blip, typically in the order of
-tens of milliseconds, while the topic is reassigned.
-
-Unloading is the mechanism used by the load-manager to perform the load shedding, but it can
-also be triggered manually, for example to correct the assignments and redistribute traffic
-even before having any broker overloaded.
-
-Unloading a single topic has no effect on the bundle assignment; it just closes and reopens the
-particular topic:
-
-```shell
-
-pulsar-admin topics unload persistent://tenant/namespace/topic
-
-```
-
-To unload all topics for a namespace and trigger reassignments:
-
-```shell
-
-pulsar-admin namespaces unload tenant/namespace
-
-```
-
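-The same operations can be triggered from the Java admin client. A minimal sketch, assuming a configured `PulsarAdmin` instance named `admin` (as in the other Java examples in this guide); method names may vary slightly between versions.
-
-```java
-
-// Unload a single topic: it is closed and reopened without changing bundle assignment
-admin.persistentTopics().unload("persistent://tenant/namespace/topic");
-
-// Unload all bundles of a namespace and let the load manager reassign them
-admin.namespaces().unload("tenant/namespace");
-
-```
-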
-### Namespace bundles splitting
-
-Since the load for the topics in a bundle might change over time, or could just be hard to predict
-upfront, bundles can be split in two by brokers. The new, smaller bundles can then be reassigned
-to different brokers.
-
-The splitting happens based on some tunable thresholds. Any existing bundle that exceeds any
-of the thresholds is a candidate to be split. By default, the newly split bundles are also
-immediately offloaded to other brokers, to facilitate the traffic distribution.
-
-```properties
-
-# enable/disable namespace bundle auto split
-loadBalancerAutoBundleSplitEnabled=true
-
-# enable/disable automatic unloading of split bundles
-loadBalancerAutoUnloadSplitBundlesEnabled=true
-
-# maximum topics in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxTopics=1000
-
-# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxSessions=1000
-
-# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxMsgRate=30000
-
-# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
-loadBalancerNamespaceBundleMaxBandwidthMbytes=100
-
-# maximum number of bundles in a namespace (for auto-split)
-loadBalancerNamespaceMaximumBundles=128
-
-```
-
-### Automatic load shedding
-
-Pulsar's load manager supports automatic load shedding. This means that whenever
-the system recognizes that a particular broker is overloaded, it forces some traffic to be
-reassigned to less-loaded brokers.
-
-When a broker is identified as overloaded, it is forced to "unload" a subset of its bundles, the
-ones with higher traffic, that account for the overload percentage.
-
-For example, the default threshold is 85%, and if a broker is over quota at 95% CPU usage, then
-it will unload the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`.
-
-Given that the selection of bundles to offload is based on traffic (as a proxy measure for CPU, network
-and memory), the broker will unload bundles accounting for at least 15% of traffic.
-
-The automatic load shedding is enabled by default and can be disabled with this setting:
-
-```properties
-
-# Enable/disable automatic bundle unloading for load-shedding
-loadBalancerSheddingEnabled=true
-
-```
-
-There are additional settings that apply to shedding:
-
-```properties
-
-# Load shedding interval. Broker periodically checks whether some traffic should be offload from
-# some over-loaded broker to other under-loaded brokers
-loadBalancerSheddingIntervalMinutes=1
-
-# Prevent the same topics from being shed and moved to other brokers more than once within this timeframe
-loadBalancerSheddingGracePeriodMinutes=30
-
-```
-
-#### Broker overload thresholds
-
-The determination of when a broker is overloaded is based on thresholds for CPU, network, and
-memory usage. Whenever any of those metrics reaches the threshold, shedding is triggered
-(if enabled).
-
-By default, overload threshold is set at 85%:
-
-```properties
-
-# Usage threshold to determine a broker as over-loaded
-loadBalancerBrokerOverloadedThresholdPercentage=85
-
-```
-
-The usage stats are gathered by Pulsar from the system metrics.
-
-Regarding network utilization, in some cases the network interface speed reported by Linux is
-not correct and needs to be manually overridden. This is the case on AWS EC2 instances with 1Gbps
-NIC speed, for which the OS reports a 10Gbps speed.
-
-Because of the incorrect max speed, the Pulsar load manager might think the broker has not
-reached the NIC capacity, while in fact it is already using all the bandwidth and the traffic is
-being slowed down.
-
-There is a setting to correct the max NIC speed:
-
-```properties
-
-# Override the auto-detection of the network interfaces max speed.
-# This option is useful in some environments (eg: EC2 VMs) where the max speed
-# reported by Linux is not reflecting the real bandwidth available to the broker.
-# Since the network usage is employed by the load manager to decide when a broker
-# is overloaded, it is important to make sure the info is correct or override it
-# with the right value here. The configured value can be a double (eg: 0.8) and that
-# can be used to trigger load-shedding even before hitting on NIC limits.
-loadBalancerOverrideBrokerNicSpeedGbps=
-
-```
-
-When the value is empty, Pulsar will use the value reported by the OS.
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-proxy.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-proxy.md
deleted file mode 100644
index c3948e979be..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-proxy.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-id: administration-proxy
-title: The Pulsar proxy
-sidebar_label: "Pulsar proxy"
-original_id: administration-proxy
----
-
-The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) is an optional gateway that you can run over the brokers in a Pulsar cluster. We recommend running a Pulsar proxy in cases when direct connections between clients and Pulsar brokers are either infeasible, undesirable, or both, for example when running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform.
-
-## Running the proxy
-
-In order to run the Pulsar proxy, you need to have both a local [ZooKeeper](https://zookeeper.apache.org) and configuration store quorum set up for use by your Pulsar cluster. For instructions, see [this document](deploy-bare-metal.md). Once you have ZooKeeper set up and have connection strings for both ZooKeeper quorums, you can use the [`proxy`](reference-cli-tools.md#pulsar-proxy) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool to start up the proxy (preferably on it [...]
-
-To start the proxy:
-
-```bash
-
-$ cd /path/to/pulsar/directory
-$ bin/pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk-2 \
-  --configuration-store-servers zk-0,zk-1,zk-2
-
-```
-
-> You can run as many instances of the Pulsar proxy in a cluster as you would like.
-
-
-## Stopping the proxy
-
-The Pulsar proxy runs by default in the foreground. To stop the proxy, simply stop the process in which it's running.
-
-## Proxy frontends
-
-We recommend running the Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
-
-## Using Pulsar clients with the proxy
-
-Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address is used by the frontend. If the address were the DNS address `pulsar.cluster.default`, for example, then the connection URL for clients would be `pulsar://pulsar.cluster.default:6650`.
-
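-For example, this is a minimal sketch of building a Java client that connects through the proxy; the DNS name is the placeholder address used above:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://pulsar.cluster.default:6650")
-        .build();
-
-```
-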
-## Proxy configuration
-
-The Pulsar proxy can be configured using the [`proxy.conf`](reference-configuration.md#proxy) configuration file. The following parameters are available in that file:
-
-|Name|Description|Default|
-|---|---|---|
-|zookeeperServers|  The ZooKeeper quorum connection string (as a comma-separated list)  ||
-|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
-|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
-|servicePort| The port to use for server binary Protobuf requests |6650|
-|servicePortTls|  The port to use to server binary Protobuf TLS requests  |6651|
-|statusFilePath | Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks ||
-|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy  |false|
-|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
-|authorizationEnabled|  Whether authorization is enforced by the Pulsar proxy |false|
-|authorizationProvider| Authorization provider as a fully qualified class name  |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
-|brokerClientAuthenticationPlugin|  The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers  ||
-|brokerClientAuthenticationParameters|  The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers  ||
-|brokerClientTrustCertsFilePath|  The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
-|superUserRoles|  Role names that are treated as "super-users," meaning that they will be able to perform all admin operations ||
-|forwardAuthorizationCredentials| Whether client authorization credentials are forwarded to the broker for re-authorization. Authentication must be enabled via authenticationEnabled=true for this to take effect.  |false|
-|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy will reject requests beyond that. |10000|
-|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy will error out requests beyond that. |10000|
-|tlsEnabledInProxy| Whether TLS is enabled for the proxy  |false|
-|tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar brokers |false|
-|tlsCertificateFilePath|  Path for the TLS certificate file ||
-|tlsKeyFilePath|  Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file ||
-|tlsHostnameVerificationEnabled|  Whether the hostname is validated when the proxy creates a TLS connection with brokers  |false|
-|tlsRequireTrustedClientCertOnConnect|  Whether client certificates are required for TLS. Connections are rejected if the client certificate isn’t trusted. |false|
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-stats.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-stats.md
deleted file mode 100644
index 51626747fe1..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-stats.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-id: administration-stats
-title: Pulsar stats
-sidebar_label: "Pulsar statistics"
-original_id: administration-stats
----
-
-## Partitioned topics
-
-|Stat|Description|
-|---|---|
-|msgRateIn| The sum of all local and replication publishers’ publish rates in messages per second|
-|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second|
-|msgRateOut|  The sum of all local and replication consumers’ dispatch rates in messages per second|
-|msgThroughputOut|  Same as msgRateOut but in bytes per second instead of messages per second|
-|averageMsgSize|  Average message size, in bytes, from this publisher within the last interval|
-|storageSize| The sum of the ledgers’ storage size for this topic|
-|publishers|  The list of all local publishers into the topic. There can be anywhere from zero to thousands.|
-|producerId|  Internal identifier for this producer on this topic|
-|producerName|  Internal identifier for this producer, generated by the client library|
-|address| IP address and source port for the connection of this producer|
-|connectedSince|  Timestamp this producer was created or last reconnected|
-|subscriptions| The list of all local subscriptions to the topic|
-|my-subscription| The name of this subscription (client defined)|
-|msgBacklog|  The count of messages in backlog for this subscription|
-|type|  This subscription type|
-|msgRateExpired|  The rate at which messages were discarded instead of dispatched from this subscription due to TTL|
-|consumers| The list of connected consumers for this subscription|
-|consumerName|  Internal identifier for this consumer, generated by the client library|
-|availablePermits|  The number of messages this consumer has space for in the client library’s listen queue. A value of 0 means the client library’s queue is full and receive() isn’t being called. A nonzero value means this consumer is ready to be dispatched messages.|
-|replication| This section gives the stats for cross-colo replication of this topic|
-|replicationBacklog|  The outbound replication backlog in messages|
-|connected| Whether the outbound replicator is connected|
-|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true|
-|inboundConnection| The IP and port of the broker in the remote cluster’s publisher connection to this broker|
-|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.|
-
-
-## Topics
-
-|Stat|Description|
-|---|---|
-|entriesAddedCounter| Messages published since this broker loaded this topic|
-|numberOfEntries| Total number of messages being tracked|
-|totalSize| Total storage size in bytes of all messages|
-|currentLedgerEntries|  Count of messages written to the ledger currently open for writing|
-|currentLedgerSize| Size in bytes of messages written to ledger currently open for writing|
-|lastLedgerCreatedTimestamp|  Time when last ledger was created|
-|lastLedgerCreationFailureTimestamp|  time when last ledger was failed|
-|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published|
-|pendingAddEntriesCount|  How many messages have (asynchronous) write requests we are waiting on completion|
-|lastConfirmedEntry|  The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger has been opened or is currently being opened but has no entries written yet.|
-|state| The state of the cursor ledger. Open means we have a cursor ledger for saving updates of the markDeletePosition.|
-|ledgers| The ordered list of all ledgers for this topic holding its messages|
-|cursors| The list of all cursors on this topic. There will be one for every subscription you saw in the topic stats.|
-|markDeletePosition|  The ack position: the last message the subscriber acknowledged receiving|
-|readPosition|  The latest position of subscriber for reading message|
-|waitingReadOp| This is true when the subscription has read the latest message published to the topic and is waiting on new messages to be published.|
-|pendingReadOps|  The counter for how many outstanding read requests to the BookKeepers we have in progress|
-|messagesConsumedCounter| Number of messages this cursor has acked since this broker loaded this topic|
-|cursorLedger|  The ledger being used to persistently store the current markDeletePosition|
-|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition|
-|individuallyDeletedMessages| If Acks are being done out of order, shows the ranges of messages Acked between the markDeletePosition and the read-position|
-|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over|
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-zk-bk.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-zk-bk.md
deleted file mode 100644
index bdc41b5e07d..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/administration-zk-bk.md
+++ /dev/null
@@ -1,350 +0,0 @@
----
-id: administration-zk-bk
-title: ZooKeeper and BookKeeper administration
-sidebar_label: "ZooKeeper and BookKeeper"
-original_id: administration-zk-bk
----
-
-Pulsar relies on two external systems for essential tasks:
-
-* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration- and coordination-related tasks.
-* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.
-
-ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.
-
-> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar.
-
-## ZooKeeper
-
-Each Pulsar instance relies on two separate ZooKeeper quorums.
-
-* [Local ZooKeeper](#deploying-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
-* [Configuration Store](#deploying-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). The configuration store quorum can be provided by an independent cluster of machines or by the same machines used by local ZooKeeper.
-
-### Deploying local ZooKeeper
-
-ZooKeeper manages a variety of essential coordination- and configuration-related tasks for Pulsar.
-
-Deploying a Pulsar instance requires you to stand up one local ZooKeeper cluster *per Pulsar cluster*. 
-
-To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. Here's an example for a three-node cluster:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```
-
-On each host, you need to specify the ID of the node in each node's `myid` file, which is in each server's `data/zookeeper` folder by default (this can be changed via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
-
-> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed info on `myid` and more.
-
-
-On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
-
-```shell
-
-$ mkdir -p data/zookeeper
-$ echo 1 > data/zookeeper/myid
-
-```
-
-On `zk2.us-west.example.com` the command would be `echo 2 > data/zookeeper/myid` and so on.
-
-Once each server has been added to the `zookeeper.conf` configuration and has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```shell
-
-$ bin/pulsar-daemon start zookeeper
-
-```
-
-### Deploying the configuration store {#configuration-store}
-
-The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster used to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
-
-If you're deploying a [single-cluster](#single-cluster-pulsar-instance) instance, then you will not need a separate cluster for the configuration store. If, however, you're deploying a [multi-cluster](#multi-cluster-pulsar-instance) instance, then you should stand up a separate ZooKeeper cluster for configuration tasks.
-
-#### Single-cluster Pulsar instance
-
-If your Pulsar instance will consist of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but running on different TCP ports.
-
-To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers used by the local quorum to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). Here's an example that uses port 2184 for a three-node ZooKeeper cluster:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-
-```
-
-As before, create the `myid` files for each server on `data/global-zookeeper/myid`.
-
-#### Multi-cluster Pulsar instance
-
-When deploying a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
-
-The key here is to make sure the ZK quorum members are spread across at least 3
-regions and that other regions are running as observers.
-
-Again, given the very low expected load on the configuration store servers, we can
-share the same hosts used for the local ZooKeeper quorum.
-
-For example, let's assume a Pulsar instance with the following clusters: `us-west`,
-`us-east`, `us-central`, `eu-central`, and `ap-south`. Also let's assume that each cluster
-will have its own local ZK servers named like this:
-
-```
-
-zk[1-3].${CLUSTER}.example.com
-
-```
-
-In this scenario we want to pick the quorum participants from a few clusters and
-let all the others be ZK observers. For example, to form a 7-server quorum, we
-can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
-
-This guarantees that writes to the configuration store will be possible even if one
-of these regions is unreachable.
-
-The ZK configuration in all the servers will look like:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-server.4=zk1.us-central.example.com:2185:2186
-server.5=zk2.us-central.example.com:2185:2186
-server.6=zk3.us-central.example.com:2185:2186:observer
-server.7=zk1.us-east.example.com:2185:2186
-server.8=zk2.us-east.example.com:2185:2186
-server.9=zk3.us-east.example.com:2185:2186:observer
-server.10=zk1.eu-central.example.com:2185:2186:observer
-server.11=zk2.eu-central.example.com:2185:2186:observer
-server.12=zk3.eu-central.example.com:2185:2186:observer
-server.13=zk1.ap-south.example.com:2185:2186:observer
-server.14=zk2.ap-south.example.com:2185:2186:observer
-server.15=zk3.ap-south.example.com:2185:2186:observer
-
-```
-
-Additionally, ZK observers will need to have:
-
-```properties
-
-peerType=observer
-
-```
-
-##### Starting the service
-
-Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon)
-
-```shell
-
-$ bin/pulsar-daemon start configuration-store
-
-```
-
-### ZooKeeper configuration
-
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files found in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
-
-#### Local ZooKeeper
-
-Configuration for local ZooKeeper is handled by the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. The table below shows the available parameters:
-
-|Name|Description|Default|
-|---|---|---|
-|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
-|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort|  The port on which the ZooKeeper server will listen for connections. |2181|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
-|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable. Read this guide before enabling auto purge. |1|
-|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
-
-
-#### Configuration Store
-
-Configuration for the configuration store is handled by the [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file.
-
-
-## BookKeeper
-
-BookKeeper is responsible for all durable message storage in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called ledgers. Individual BookKeeper servers are also called *bookies*.
-
-> For a guide to managing message persistence, retention, and expiry in Pulsar, see [this cookbook](cookbooks-retention-expiry.md).
-
-### Deploying BookKeeper
-
-BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
-
-Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
-
-### Configuring bookies
-
-BookKeeper bookies can be configured using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the Pulsar cluster's local ZooKeeper.
-
-### Starting up bookies
-
-You can start up a bookie in two ways: in the foreground or as a background daemon.
-
-To start up a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool. To start it as a background daemon, use [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
-
-```shell
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-You can verify that the bookie is working properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
-
-```shell
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This will create a new ledger on the local bookie, write a few entries, read them back and finally delete the ledger.
-
-### Hardware considerations
-
-Bookie hosts are responsible for storing message data on disk. In order for bookies to provide optimal performance, it's essential that they have a suitable hardware configuration. There are two key dimensions to bookie hardware capacity:
-
-* Disk I/O capacity read/write
-* Storage capacity
-
-Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is
-designed to use multiple devices:
-
-* A **journal** to ensure durability. For sequential writes, it's critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of  [...]
-* A **ledger storage device** is where data is stored until all consumers have acknowledged the message. Writes will happen in the background, so write I/O is not a big concern. Reads will happen sequentially most of the time and the backlog is drained only in case of consumer drain. To store large amounts of data, a typical configuration will involve multiple HDDs with a RAID controller.
-
-
-
-### Configuring BookKeeper
-
-Configurable parameters for BookKeeper bookies can be found in the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) file.
-
-Minimum configuration changes required in `conf/bookkeeper.conf` are:
-
-```properties
-
-# Change to point to journal disk mount point
-journalDirectory=data/bookkeeper/journal
-
-# Point to ledger storage disk mount point
-ledgerDirectories=data/bookkeeper/ledgers
-
-# Point to local ZK quorum
-zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
-
-#It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
-advertisedAddress=
-
-```
-
-> Consult the official [BookKeeper docs](http://bookkeeper.apache.org) for more information about BookKeeper.
-
-## BookKeeper persistence policies
-
-In Pulsar, you can set *persistence policies*, at the namespace level, that determine how BookKeeper handles persistent storage of messages. Policies determine four things:
-
-* The number of acks (guaranteed copies) to wait for each ledger entry
-* The number of bookies to use for a topic
-* How many writes to make for each ledger entry
-* The throttling rate for mark-delete operations
-
-### Set persistence policies
-
-You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.
-
-#### pulsar-admin
-
-Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:
-
-Flag | Description | Default
-:----|:------------|:-------
-`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
-`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
-`-w`, `--bookkeeper-write-quorum` | How many writes to make for each entry | 0
-`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
-
-##### Example
-
-```shell
-
-$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
-  --bookkeeper-ack-quorum 3 \
-  --bookkeeper-ensemble 2
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence}
-
-#### Java
-
-```java
-
-int bkEnsemble = 2;
-int bkQuorum = 3;
-int bkAckQuorum = 2;
-double markDeleteRate = 0.7;
-PersistencePolicies policies =
-  new PersistencePolicies(bkEnsemble, bkQuorum, bkAckQuorum, markDeleteRate);
-admin.namespaces().setPersistence(namespace, policies);
-
-```
-
-### List persistence policies
-
-You can see which persistence policy currently applies to a namespace.
-
-#### pulsar-admin
-
-Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.
-
-##### Example
-
-```shell
-
-$ pulsar-admin namespaces get-persistence my-tenant/my-ns
-{
-  "bookkeeperEnsemble": 1,
-  "bookkeeperWriteQuorum": 1,
-  "bookkeeperAckQuorum", 1,
-  "managedLedgerMaxMarkDeleteRate": 0
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence}
-
-#### Java
-
-```java
-
-PersistencePolicies policies = admin.namespaces().getPersistence(namespace);
-
-```
-
-## How Pulsar uses ZooKeeper and BookKeeper
-
-This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:
-
-![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)
-
-Each Pulsar cluster consists of one or more message brokers. Each broker relies on an ensemble of bookies.
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-cpp.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-cpp.md
deleted file mode 100644
index dc925903ea1..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-cpp.md
+++ /dev/null
@@ -1,210 +0,0 @@
----
-id: client-libraries-cpp
-title: The Pulsar C++ client
-sidebar_label: "C++"
-original_id: client-libraries-cpp
----
-
-## Supported platforms
-
-The Pulsar C++ client has been successfully tested on **MacOS** and **Linux**.
-
-## Linux
-
-### Install
-
-> Since the 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. You can choose to download
-> and install those packages instead of building them yourself.
-
-#### RPM
-
-| Link | Crypto files |
-|------|--------------|
-| [client](@pulsar:rpm:client@) | [asc](@pulsar:rpm:client@.asc), [sha512](@pulsar:rpm:client@.sha512) |
-| [client-debuginfo](@pulsar:rpm:client-debuginfo@) | [asc](@pulsar:rpm:client-debuginfo@.asc),  [sha512](@pulsar:rpm:client-debuginfo@.sha512) |
-| [client-devel](@pulsar:rpm:client-devel@) | [asc](@pulsar:rpm:client-devel@.asc),  [sha512](@pulsar:rpm:client-devel@.sha512) |
-
-To install an RPM package, download the RPM packages and install them using the following command:
-
-```bash
-
-$ rpm -ivh apache-pulsar-client*.rpm
-
-```
-
-#### DEB
-
-| Link | Crypto files |
-|------|--------------|
-| [client](@pulsar:deb:client@) | [asc](@pulsar:deb:client@.asc), [sha1](@pulsar:deb:client@.sha1), [sha512](@pulsar:deb:client@.sha512) |
-| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:deb:client-devel@.asc), [sha1](@pulsar:deb:client-devel@.sha1), [sha512](@pulsar:deb:client-devel@.sha512) |
-
-To install a DEB package, download the DEB packages and install them using the following command:
-
-```bash
-
-$ apt install ./apache-pulsar-client*.deb
-
-```
-
-### Build
-
-> If you want to build RPM and Debian packages from the latest master, you can follow the instructions
-> below to do so. All the instructions should be run from the root directory of your cloned Pulsar
-> repo.
-
-There are recipes that build RPM and Debian packages containing a
-statically linked `libpulsar.so` / `libpulsar.a` with all the required
-dependencies.
-
-To build the C++ library packages, first build the Java packages:
-
-```shell
-
-mvn install -DskipTests
-
-```
-
-#### RPM
-
-```shell
-
-pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh
-
-```
-
-This will build the RPM inside a Docker container and it will leave the RPMs
-in `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/`.
-
-| Package name | Content |
-|-----|-----|
-| pulsar-client | Shared library `libpulsar.so` |
-| pulsar-client-devel | Static library `libpulsar.a` and C++ and C headers |
-| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` |
-
-#### Deb
-
-To build Debian packages:
-
-```shell
-
-pulsar-client-cpp/pkg/deb/docker-build-deb.sh
-
-```
-
-Debian packages will be created at `pulsar-client-cpp/pkg/deb/BUILD/DEB/`
-
-| Package name | Content |
-|-----|-----|
-| pulsar-client | Shared library `libpulsar.so` |
-| pulsar-client-dev | Static library `libpulsar.a` and C++ and C headers |
-
-## MacOS
-
-Use the [Homebrew](https://brew.sh/) supplied recipe to build the Pulsar
-client lib on MacOS.
-
-```shell
-
-brew install https://raw.githubusercontent.com/apache/incubator-pulsar/master/pulsar-client-cpp/homebrew/libpulsar.rb
-
-```
-
-If using Python 3 on MacOS, add the flag `--with-python3` to the above command.
-
-This will install the package with the library and headers.
-
-## Connection URLs
-
-
-To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the pulsar URI scheme and have a default port of 6650. Here’s an example for localhost:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you’re using TLS authentication, the URL will look something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Consumer
-
-```c++
-
-Client client("pulsar://localhost:6650");
-
-Consumer consumer;
-Result result = client.subscribe("my-topic", "my-subscription-name", consumer);
-if (result != ResultOk) {
-    LOG_ERROR("Failed to subscribe: " << result);
-    return -1;
-}
-
-Message msg;
-
-while (true) {
-    consumer.receive(msg);
-    LOG_INFO("Received: " << msg
-            << "  with payload '" << msg.getDataAsString() << "'");
-
-    consumer.acknowledge(msg);
-}
-
-client.close();
-
-```
-
-## Producer
-
-```c++
-
-Client client("pulsar://localhost:6650");
-
-Producer producer;
-Result result = client.createProducer("my-topic", producer);
-if (result != ResultOk) {
-    LOG_ERROR("Error creating producer: " << result);
-    return -1;
-}
-
-// Publish 10 messages to the topic
-for (int i = 0; i < 10; i++){
-    Message msg = MessageBuilder().setContent("my-message").build();
-    Result res = producer.send(msg);
-    LOG_INFO("Message sent: " << res);
-}
-client.close();
-
-```
-
-## Authentication
-
-```cpp
-
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-            "/path/to/client-cert.pem", "/path/to/client-key.pem"));
-
-Client client("pulsar+ssl://my-broker.com:6651", config);
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-go.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-go.md
deleted file mode 100644
index 05082c38bfd..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-go.md
+++ /dev/null
@@ -1,497 +0,0 @@
----
-id: client-libraries-go
-title: The Pulsar Go client
-sidebar_label: "Go"
-original_id: client-libraries-go
----
-
-The Pulsar Go client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
-
-> **API docs available as well**  
-> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/incubator-pulsar/pulsar-client-go/pulsar).
-
-
-## Installation
-
-### Requirements
-
-Pulsar Go client library is based on the C++ client library. Follow
-the instructions for [C++ library](client-libraries-cpp.md) for installing the binaries
-through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos).
-
-### Installing go package
-
-You can install the `pulsar` library locally using `go get`:
-
-> **NOTE**
-> 
-> `go get` doesn't support fetching a specific tag, so it will always pull in the Pulsar Go client
-> from the latest master. You need to make sure you have installed the matching Pulsar C++ client library.
-
-```bash
-
-$ go get -u github.com/apache/incubator-pulsar/pulsar-client-go/pulsar
-
-```
-
-Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
-
-```bash
-
-$ dep ensure -add github.com/apache/incubator-pulsar/pulsar-client-go/pulsar@v@pulsar:version@
-
-```
-
-Once installed locally, you can import it into your project:
-
-```go
-
-import "github.com/apache/incubator-pulsar/pulsar-client-go/pulsar"
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Creating a client
-
-In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
-
-```go
-
-import (
-    "log"
-    "runtime"
-
-    "github.com/apache/incubator-pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-        OperationTimeoutSeconds: 5,
-        MessageListenerThreads: runtime.NumCPU(),
-    })
-
-    if err != nil {
-        log.Fatalf("Could not instantiate Pulsar client: %v", err)
-    }
-}
-
-```
-
-The following configurable parameters are available for Pulsar clients:
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`URL` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info |
-`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
-`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
-`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
-`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
-`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
-`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
-`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
-`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
-`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
-
-## Producers
-
-Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
-
-```go
-
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic: "my-topic",
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Hello, Pulsar"),
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Producer could not send message: %v", err)
-}
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
-
-
-### Producer operations
-
-Pulsar Go producers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
-`Name()` | Fetches the producer's name | `string`
-`Send(context.Context, ProducerMessage) error` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
-`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
-`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
-
-Here's a more involved example usage of a producer:
-
-```go
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    "github.com/apache/incubator-pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-        URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client to instantiate a producer
-    producer, err := client.CreateProducer(pulsar.ProducerOptions{
-        Topic: "my-topic",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    ctx := context.Background()
-
-    // Send 10 messages synchronously and 10 messages asynchronously
-    for i := 0; i < 10; i++ {
-        // Create a message
-        msg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("message-%d", i)),
-        }
-
-        // Attempt to send the message
-        if err := producer.Send(ctx, msg); err != nil {
-            log.Fatal(err)
-        }
-
-        // Create a different message to send asynchronously
-        asyncMsg := pulsar.ProducerMessage{
-            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
-        }
-
-        // Attempt to send the message asynchronously and handle the response
-        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
-            if err != nil { log.Fatal(err) }
-
-            fmt.Printf("Message %s successfully published\n", string(msg.Payload))
-        })
-    }
-}
-
-```
-
-### Producer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
-`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
-`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
-`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
-`MaxPendingMessagesAcrossPartitions` | The maximum size of the queue holding pending messages across all the partitions of a topic |
-`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
-`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
-`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `puls [...]
-`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4) and [`ZLIB`](https://zlib.net/). | No compression
-`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the index of the partition to route the message to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
-
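-As an illustrative sketch only (using field names listed in the table above; exact field types and values may vary between client versions), several of these options might be combined like this:
-
-```go
-
-// Illustrative values; adjust for your own topic and throughput needs
-producer, err := client.CreateProducer(pulsar.ProducerOptions{
-    Topic:              "my-topic",
-    Name:               "my-producer", // must be unique across all Pulsar clusters if set explicitly
-    MaxPendingMessages: 1000,
-    BlockIfQueueFull:   true,
-    MessageRoutingMode: pulsar.RoundRobinDistribution,
-})
-
-if err != nil {
-    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
-}
-
-defer producer.Close()
-
-```
-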
-## Consumers
-
-Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
-
-```go
-
-msgChannel := make(chan pulsar.ConsumerMessage)
-
-consumerOpts := pulsar.ConsumerOptions{
-    Topic:            "my-topic",
-    SubscriptionName: "my-subscription-1",
-    Type:             pulsar.Exclusive,
-    MessageChannel:   msgChannel,
-}
-
-consumer, err := client.Subscribe(consumerOpts)
-
-if err != nil {
-    log.Fatalf("Could not establish subscription: %v", err)
-}
-
-defer consumer.Close()
-
-for cm := range msgChannel {
-    msg := cm.Message
-
-    fmt.Printf("Message ID: %s", msg.ID())
-    fmt.Printf("Message value: %s", string(msg.Payload()))
-
-    consumer.Ack(msg)
-}
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
-
-
-### Consumer operations
-
-Pulsar Go consumers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
-`Subscription()` | Returns the consumer's subscription name | `string`
-`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
-`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
-`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
-`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
-`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
-`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
-`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
-
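-To illustrate cumulative acknowledgment from the table above, here is a minimal sketch (assuming an `Exclusive` subscription, as in the receive example below): only the last message of a processed batch needs to be acknowledged.
-
-```go
-
-// Process a batch of messages, then cumulatively ack only the last one,
-// which acknowledges it and every earlier message on the subscription.
-var lastMsg pulsar.Message
-for i := 0; i < 10; i++ {
-    msg, err := consumer.Receive(context.Background())
-    if err != nil { log.Fatal(err) }
-
-    // Do something with the message
-    lastMsg = msg
-}
-
-if err := consumer.AckCumulative(lastMsg); err != nil {
-    log.Fatal(err)
-}
-
-```
-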
-#### Receive example
-
-Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/incubator-pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    // Use the client object to instantiate a consumer
-    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
-        Topic:            "my-golang-topic",
-        SubscriptionName: "sub-1",
-        SubscriptionType: pulsar.Exclusive,
-    })
-
-    if err != nil { log.Fatal(err) }
-
-    defer consumer.Close()
-
-    ctx := context.Background()
-
-    // Listen indefinitely on the topic
-    for {
-        msg, err := consumer.Receive(ctx)
-        if err != nil { log.Fatal(err) }
-
-        // Do something with the message
-
-        consumer.Ack(msg)
-    }
-}
-
-```
-
-### Consumer configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
-`SubscriptionName` | The subscription name for this consumer |
-`Name` | The name of the consumer |
-`AckTimeout` | The timeout for unacknowledged messages | 0
-`SubscriptionType` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
-`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
-`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
-
-## Readers
-
-Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
-
-```go
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic: "my-golang-topic",
-    StartMessageId: pulsar.LatestMessage,
-})
-
-```
-
-> **Blocking operation**  
-> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
-
-
-### Reader operations
-
-Pulsar Go readers have the following methods available:
-
-Method | Description | Return type
-:------|:------------|:-----------
-`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
-`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
-`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
-
-#### "Next" example
-
-Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
-
-```go
-
-import (
-    "context"
-    "log"
-
-    "github.com/apache/incubator-pulsar/pulsar-client-go/pulsar"
-)
-
-func main() {
-    // Instantiate a Pulsar client
-    client, err := pulsar.NewClient(pulsar.ClientOptions{
-            URL: "pulsar://localhost:6650",
-    })
-
-    if err != nil { log.Fatalf("Could not create client: %v", err) }
-
-    // Use the client to instantiate a reader
-    reader, err := client.CreateReader(pulsar.ReaderOptions{
-        Topic:          "my-golang-topic",
-        StartMessageID: pulsar.EarliestMessage,
-    })
-
-    if err != nil { log.Fatalf("Could not create reader: %v", err) }
-
-    defer reader.Close()
-
-    ctx := context.Background()
-
-    // Listen on the topic for incoming messages
-    for {
-        msg, err := reader.Next(ctx)
-        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
-
-        // Process the message
-    }
-}
-
-```
-
-In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
-
-```go
-
-lastSavedId := // Read last saved message id from external store as byte[]
-
-reader, err := client.CreateReader(pulsar.ReaderOptions{
-    Topic:          "my-golang-topic",
-    StartMessageID: DeserializeMessageID(lastSavedId),
-})
-
-```
-
-### Reader configuration
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
-`Name` | The name of the reader |
-`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
-`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
-`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
-`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
-
-## Messages
-
-The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
-
-```go
-
-msg := pulsar.ProducerMessage{
-    Payload: []byte("Here is some message data"),
-    Key: "message-key",
-    Properties: map[string]string{
-        "foo": "bar",
-    },
-    EventTime: time.Now(),
-    ReplicationClusters: []string{"cluster1", "cluster3"},
-}
-
-if err := producer.Send(context.Background(), msg); err != nil {
-    log.Fatalf("Could not publish message due to: %v", err)
-}
-
-```
-
-The following parameters are available for `ProducerMessage` objects:
-
-Parameter | Description
-:---------|:-----------
-`Payload` | The actual data payload of the message
-`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
-`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
-`EventTime` | The timestamp associated with the message
-`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
-
-## TLS encryption and authentication
-
-In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
-
- * Use `pulsar+ssl` URL type
- * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
- * Configure `Authentication` option
-
-Here's an example:
-
-```go
-
-opts := pulsar.ClientOptions{
-    URL: "pulsar+ssl://my-cluster.com:6651",
-    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
-    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
-}
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-java.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-java.md
deleted file mode 100644
index c0b6c0762a2..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-java.md
+++ /dev/null
@@ -1,534 +0,0 @@
----
-id: client-libraries-java
-title: The Pulsar Java client
-sidebar_label: "Java"
-original_id: client-libraries-java
----
-
-The Pulsar Java client can be used both to create Java producers, consumers, and [readers](#readers) of messages and to perform [administrative tasks](admin-api-overview.md). The current version of the Java client is **@pulsar:version@**.
-
-Javadoc for the Pulsar client is divided up into two domains, by package:
-
-Package | Description | Maven Artifact
-:-------|:------------|:--------------
-[`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar)
-[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar)
-
-This document will focus only on the client API for producing and consuming messages on Pulsar topics. For a guide to using the Java admin client, see [The Pulsar admin interface](admin-api-overview.md).
-
-## Installation
-
-The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.
-
-### Maven
-
-If you're using Maven, add this to your `pom.xml`:
-
-```xml
-
-<!-- in your <properties> block -->
-<pulsar.version>@pulsar:version@</pulsar.version>
-
-<!-- in your <dependencies> block -->
-<dependency>
-  <groupId>org.apache.pulsar</groupId>
-  <artifactId>pulsar-client</artifactId>
-  <version>${pulsar.version}</version>
-</dependency>
-
-```
-
-### Gradle
-
-If you're using Gradle, add this to your `build.gradle` file:
-
-```groovy
-
-def pulsarVersion = '@pulsar:version@'
-
-dependencies {
-    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
-}
-
-```
-
-## Connection URLs
-
-To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
-
-Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
-
-```http
-
-pulsar://localhost:6650
-
-```
-
-A URL for a production Pulsar cluster may look something like this:
-
-```http
-
-pulsar://pulsar.us-west.example.com:6650
-
-```
-
-If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
-
-```http
-
-pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-## Client configuration
-
-You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster), like this:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-```
-
-> #### Default broker URLs for standalone clusters
-> If you're running a cluster in [standalone mode](getting-started-standalone.md), the broker will be available at the `pulsar://localhost:6650` URL by default.
-
-Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full listing of configurable parameters.
-
-> In addition to client-level configuration, you can also apply [producer](#configuring-producers) and [consumer](#configuring-consumers) specific configuration, as you'll see in the sections below.
-
-
-## Producers
-
-In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("my-topic")
-        .create();
-
-// You can then send messages to the broker and topic you specified:
-producer.send("My message".getBytes());
-
-```
-
-By default, producers produce messages that consist of byte arrays. You can produce different types, however, by specifying a message [schema](#schemas).
-
-```java
-
-Producer<String> stringProducer = client.newProducer(Schema.STRING)
-        .topic("my-topic")
-        .create();
-stringProducer.send("My message");
-
-```
-
-> You should always make sure to close your producers, consumers, and clients when they are no longer needed:
-
-> ```java
-> 
-> producer.close();
-> consumer.close();
-> client.close();
->
-> 
-> ```
-
->
-> Close operations can also be asynchronous:
-
-> ```java
-> 
-> producer.closeAsync()
->    .thenRun(() -> System.out.println("Producer closed"))
->    .exceptionally((ex) -> {
->        System.err.println("Failed to close producer: " + ex);
->        return null;
->    });
->
-> 
-> ```
-
-
-### Configuring producers
-
-If you instantiate a `Producer` object specifying only a topic name, as in the example above, the producer will use the default configuration. To use a non-default configuration, there's a variety of configurable parameters that you can set. For a full listing, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. Here's an example:
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-    .topic("my-topic")
-    .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
-    .sendTimeout(10, TimeUnit.SECONDS)
-    .blockIfQueueFull(true)
-    .create();
-
-```
-
-### Message routing
-
-When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more on specifying a routing mode using the Java client, see the [Partitioned Topics](cookbooks-partitioned.md) cookbook.
-
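-As a minimal sketch (assuming the `MessageRoutingMode` enum from `org.apache.pulsar.client.api`), a routing mode can be selected when the producer is built:
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("my-partitioned-topic")
-        // Route all messages without a key to a single, randomly assigned partition
-        .messageRoutingMode(MessageRoutingMode.SinglePartition)
-        .create();
-
-```
-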
-### Async send
-
-You can also publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer will put the message in a blocking queue and return immediately. The client library will then send the message to the broker in the background. If the queue is full (max size configurable), the producer could be blocked or fail immediately when calling the API, depending on arguments passed to the producer.
-
-Here's an example async send operation:
-
-```java
-
-producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
-    System.out.printf("Message with ID %s successfully sent", msgId);
-});
-
-```
-
-As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
-
-### Configuring messages
-
-In addition to a value, it's possible to set additional items on a given message:
-
-```java
-
-producer.newMessage()
-    .key("my-message-key")
-    .value("my-async-message".getBytes())
-    .property("my-key", "my-value")
-    .property("my-other-key", "my-other-value")
-    .send();
-
-```
-
-As for the previous case, it's also possible to terminate the builder chain with `sendAsync()` and
-get a future returned.
-
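-For example, a minimal sketch of the asynchronous variant:
-
-```java
-
-producer.newMessage()
-    .key("my-message-key")
-    .value("my-async-message".getBytes())
-    .sendAsync()
-    .thenAccept(msgId -> System.out.println("Published message with ID " + msgId));
-
-```
-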
-## Consumers
-
-In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
-
-Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscribe();
-
-```
-
-The `subscribe` method will automatically subscribe the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any message that's received, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed:
-
-```java
-
-do {
-  // Wait for a message
-  Message msg = consumer.receive();
-
-  System.out.printf("Message received: %s", new String(msg.getData()));
-
-  // Acknowledge the message so that it can be deleted by the message broker
-  consumer.acknowledge(msg);
-} while (true);
-
-```
-
-### Configuring consumers
-
-If you instantiate a `Consumer` object specifying only a topic and subscription name, as in the example above, the consumer will use the default configuration. To use a non-default configuration, there's a variety of configurable parameters that you can set. For a full listing, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class. Here's an example configuration:
-
-```java
-
-Consumer consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .ackTimeout(10, TimeUnit.SECONDS)
-        .subscriptionType(SubscriptionType.Exclusive)
-        .subscribe();
-
-```
-
-### Async receive
-
-The `receive` method will receive messages synchronously (the consumer process will be blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which will return immediately with a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object that completes once a new message is available.
-
-Here's an example:
-
-```java
-
-CompletableFuture<Message> asyncMessage = consumer.receiveAsync();
-
-```
-
-Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
-
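-For example, a sketch of chaining a handler onto the returned future (acknowledging asynchronously once the message arrives):
-
-```java
-
-consumer.receiveAsync().thenAccept(msg -> {
-    System.out.printf("Message received: %s", new String(msg.getData()));
-    consumer.acknowledgeAsync(msg);
-});
-
-```
-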
-### Multi-topic subscriptions
-
-In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.
-
-Here are some examples:
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-import java.util.Arrays;
-import java.util.List;
-import java.util.regex.Pattern;
-
-ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
-        .subscriptionName(subscription);
-
-// Subscribe to all topics in a namespace
-Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
-Consumer allTopicsConsumer = consumerBuilder
-        .topicsPattern(allTopicsInNamespace)
-        .subscribe();
-
-// Subscribe to a subset of topics in a namespace, based on regex
-Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
-Consumer someTopicsConsumer = consumerBuilder
-        .topicsPattern(someTopicsInNamespace)
-        .subscribe();
-
-```
-
-You can also subscribe to an explicit list of topics (across namespaces if you wish):
-
-```java
-
-List<String> topics = Arrays.asList(
-        "topic-1",
-        "topic-2",
-        "topic-3"
-);
-
-Consumer multiTopicConsumer = consumerBuilder
-        .topics(topics)
-        .subscribe();
-
-// Alternatively:
-Consumer multiTopicConsumer = consumerBuilder
-        .topics(
-            "topic-1",
-            "topic-2",
-            "topic-3"
-        )
-        .subscribe();
-
-```
-
-You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. Here's an example:
-
-```java
-
-Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
-consumerBuilder
-        .topicsPattern(allTopicsInNamespace)
-        .subscribeAsync()
-        .thenAccept(consumer -> {
-            do {
-                try {
-                    Message msg = consumer.receive();
-                    // Do something with the received message
-                } catch (PulsarClientException e) {
-                    e.printStackTrace();
-                }
-            } while (true);
-        });
-
-```
-
-## Reader interface {#readers}
-
-With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic, reading all messages from a specified message onward. The Pulsar API for Java enables you to create  {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic, a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}, and {@inject javadoc:ReaderConfiguration:/client/org/apache/pulsar/client/api/ [...]
-
-Here's an example:
-
-```java
-
-ReaderConfiguration conf = new ReaderConfiguration();
-byte[] msgIdBytes = // Some message ID byte array
-MessageId id = MessageId.fromByteArray(msgIdBytes);
-Reader reader = pulsarClient.newReader()
-        .topic(topic)
-        .startMessageId(id)
-        .create();
-
-while (true) {
-    Message message = reader.readNext();
-    // Process message
-}
-
-```
-
-In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader then iterates over each message in the topic after the message identified by `msgIdBytes` (how that value is obtained depends on the application).
-
-The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.
-
-## Schemas
-
-In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](concepts-schema-registry.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producers) without specifying a schema, then the producer can only produce messages of type `byte[]`. Here's an example:
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic(topic)
-        .create();
-
-```
-
-The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).
-
-### Schema example
-
-Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:
-
-```java
-
-public class SensorReading {
-    public float temperature;
-
-    public SensorReading(float temperature) {
-        this.temperature = temperature;
-    }
-
-    // A no-arg constructor is required
-    public SensorReading() {
-    }
-
-    public float getTemperature() {
-        return temperature;
-    }
-
-    public void setTemperature(float temperature) {
-        this.temperature = temperature;
-    }
-}
-
-```
-
-You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like so:
-
-```java
-
-Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
-        .topic("sensor-readings")
-        .create();
-
-```
-
-The following schema formats are currently available for Java:
-
-* No schema or the byte array schema (which can be applied using `Schema.BYTES`):
-
-  ```java
-  
-  Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
-      .topic("some-raw-bytes-topic")
-      .create();
-  
-  ```
-
-  Or, equivalently:
-
-  ```java
-  
-  Producer<byte[]> bytesProducer = client.newProducer()
-      .topic("some-raw-bytes-topic")
-      .create();
-  
-  ```
-
-* `String` for normal UTF-8-encoded string data. This schema can be applied using `Schema.STRING`:
-
-  ```java
-  
-  Producer<String> stringProducer = client.newProducer(Schema.STRING)
-      .topic("some-string-topic")
-      .create();
-  
-  ```
-
-* JSON schemas can be created for POJOs using the `JSONSchema` class. Here's an example:
-
-  ```java
-  
-  Schema<MyPojo> pojoSchema = JSONSchema.of(MyPojo.class);
-  Producer<MyPojo> pojoProducer = client.newProducer(pojoSchema)
-      .topic("some-pojo-topic")
-      .create();
-  
-  ```
-
-## Authentication
-
-Pulsar currently supports two authentication schemes: [TLS](security-tls-authentication.md) and [Athenz](security-athenz.md). The Pulsar Java client can be used with both.
-
-### TLS Authentication
-
-To use [TLS](security-tls-authentication.md), note that the `enableTls` method is deprecated; instead, use a `pulsar+ssl://` URL in `serviceUrl` to enable TLS, point your Pulsar client to a trusted TLS certificate path, and provide paths to the client certificate and key files.
-
-Here's an example configuration:
-
-```java
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tlsCertFile", "/path/to/client-cert.pem");
-authParams.put("tlsKeyFile", "/path/to/client-key.pem");
-
-Authentication tlsAuth = AuthenticationFactory
-        .create(AuthenticationTls.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(tlsAuth)
-        .build();
-
-```
-
-### Athenz
-
-To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:
-
-* `tenantDomain`
-* `tenantService`
-* `providerDomain`
-* `privateKey`
-
-You can also set an optional `keyId`. Here's an example configuration:
-
-```java
-
-Map<String, String> authParams = new HashMap<>();
-authParams.put("tenantDomain", "shopping"); // Tenant domain name
-authParams.put("tenantService", "some_app"); // Tenant service name
-authParams.put("providerDomain", "pulsar"); // Provider domain name
-authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
-authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")
-
-Authentication athenzAuth = AuthenticationFactory
-        .create(AuthenticationAthenz.class.getName(), authParams);
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar+ssl://my-broker.com:6651")
-        .tlsTrustCertsFilePath("/path/to/cacert.pem")
-        .authentication(athenzAuth)
-        .build();
-
-```
-
-> #### Supported pattern formats
-> The `privateKey` parameter supports the following three pattern formats:
-> * `file:///path/to/file`
-> * `file:/path/to/file`
-> * `data:application/x-pem-file;base64,<base64-encoded value>`
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-python.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-python.md
deleted file mode 100644
index 0729e80326f..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-python.md
+++ /dev/null
@@ -1,110 +0,0 @@
----
-id: client-libraries-python
-title: The Pulsar Python client
-sidebar_label: "Python"
-original_id: client-libraries-python
----
-
-The Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [`pulsar-client-python` GitHub repository](https://github.com/apache/pulsar-client-python).
-
-## Installation
-
-You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from source.
-
-### Installation using pip
-
-To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager:
-
-```shell
-
-$ pip install pulsar-client==@pulsar:version_number@
-
-```
-
-Installation via PyPi is available for the following Python versions:
-
-Platform | Supported Python versions
-:--------|:-------------------------
-MacOS 10.12 (Sierra) and 10.13 (High Sierra) | 2.7, 3.6
-Linux | 2.7, 3.3, 3.4, 3.5, 3.6
-
-### Installing from source
-
-To install the `pulsar-client` library by building from source, follow [these instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That will also build the Python binding for the library.
-
-To install the built Python bindings:
-
-```shell
-
-$ git clone https://github.com/apache/pulsar
-$ cd pulsar/pulsar-client-cpp/python
-$ sudo python setup.py install
-
-```
-
-## API Reference
-
-The complete Python API reference is available at [api/python](/api/python).
-
-## Examples
-
-Below you'll find a variety of Python code examples for the `pulsar-client` library.
-
-### Producer example
-
-This creates a Python producer for the `my-topic` topic and sends 10 messages on that topic:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-
-producer = client.create_producer('my-topic')
-
-for i in range(10):
-    producer.send(('Hello-%d' % i).encode('utf-8'))
-
-client.close()
-
-```
-
-### Consumer example
-
-This creates a consumer with the `my-subscription` subscription on the `my-topic` topic, listens for incoming messages, prints the content and ID of each message that arrives, and acknowledges each message to the Pulsar broker:
-
-```python
-
-import pulsar
-
-client = pulsar.Client('pulsar://localhost:6650')
-
-consumer = client.subscribe('my-topic', 'my-subscription')
-
-while True:
-    msg = consumer.receive()
-    print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
-    consumer.acknowledge(msg)
-
-client.close()
-
-```
-
-### Reader interface example
-
-You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example:
-
-```python
-
-# MessageId taken from a previously fetched message
-msg_id = msg.message_id()
-
-reader = client.create_reader('my-topic', msg_id)
-
-while True:
-    msg = reader.read_next()
-    print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
-    # No acknowledgment
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-websocket.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-websocket.md
deleted file mode 100644
index dfc9cbde2f4..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries-websocket.md
+++ /dev/null
@@ -1,455 +0,0 @@
----
-id: client-libraries-websocket
-title: Pulsar's WebSocket API
-sidebar_label: "WebSocket"
-original_id: client-libraries-websocket
----
-
-Pulsar's [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API is meant to provide a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSockets you can publish and consume messages and use all the features available in the [Java](client-libraries-java.md), [Python](client-libraries-python.md), and [C++](client-libraries-cpp.md) client libraries.
-
-
-> You can use Pulsar's WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples).
-
-## Running the WebSocket service
-
-The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled.
-
-In non-standalone mode, there are two ways to deploy the WebSocket service:
-
-* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker
-* as a [separate component](#as-a-separate-component)
-
-### Embedded with a Pulsar broker
-
-In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation.
-
-```properties
-
-webSocketServiceEnabled=true
-
-```
-
-### As a separate component
-
-In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters:
-
-* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers)
-* [`webServicePort`](reference-configuration.md#websocket-webServicePort)
-* [`clusterName`](reference-configuration.md#websocket-clusterName)
-
-Here's an example:
-
-```properties
-
-configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
-webServicePort=8080
-clusterName=my-cluster
-
-```
-
-### Starting the broker
-
-When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool:
-
-```shell
-
-$ bin/pulsar-daemon start websocket
-
-```
-
-## API Reference
-
-Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages and [reading](#reader-endpoint) messages.
-
-All exchanges via the WebSocket API use JSON.
-
-### Producer endpoint
-
-The producer endpoint requires you to specify a tenant, namespace, and topic in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs)
-`batchingEnabled` | boolean | no | Enable batching of messages (default: false)
-`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000)
-`maxPendingMessages` | int | no | Set the max size of the internal-queue holding the messages (default: 1000)
-`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms)
-`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.incubator.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition`
-`compressionType` | string | no | Compression [type](https://pulsar.incubator.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB`
-`producerName` | string | no | Specify the name for the producer. Pulsar enforces that only one producer with the same name can publish on a topic
-`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer.
-`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash`
-
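-For example, the query parameters above are appended to the producer URL in the usual way. The following is only an illustrative sketch for a topic in the `public/default` namespace:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/producer/persistent/public/default/my-topic?sendTimeoutMillis=10000&batchingEnabled=true
-
-```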
-
-#### Publishing a message
-
-```json
-
-{
-  "payload": "SGVsbG8gV29ybGQ=",
-  "properties": {"key1": "value1", "key2": "value2"},
-  "context": "1"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`payload` | string | yes | Base-64 encoded payload
-`properties` | key-value pairs | no | Application-defined properties
-`context` | string | no | Application-defined request identifier
-`key` | string | no | For partitioned topics, decides which partition to use
-`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name
-
-
-##### Example success response
-
-```json
-
-{
-   "result": "ok",
-   "messageId": "CAAQAw==",
-   "context": "1"
- }
-
-```
-
-##### Example failure response
-
-```json
-
- {
-   "result": "send-error:3",
-   "errorMsg": "Failed to de-serialize from JSON",
-   "context": "1"
- }
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`result` | string | yes | `ok` if successful or an error message if unsuccessful
-`messageId` | string | yes | Message ID assigned to the published message
-`context` | string | no | Application-defined request identifier
-
-
-### Consumer endpoint
-
-The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0)
-`subscriptionType` | string | no | [Subscription type](https://pulsar.incubator.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`
-`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
-`consumerName` | string | no | Consumer name
-`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer
-
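-For example, a consumer URL that requests a shared subscription and a smaller receive queue might look like this (names and values are illustrative):
-
-```http
-
-ws://broker-service-url:8080/ws/v2/consumer/persistent/my-tenant/my-namespace/my-topic/my-subscription?subscriptionType=Shared&receiverQueueSize=500
-
-```
-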
-##### Receiving messages
-
-The server pushes messages to the client over the WebSocket session:
-
-```json
-
-{
-  "messageId": "CAAQAw==",
-  "payload": "SGVsbG8gV29ybGQ=",
-  "properties": {"key1": "value1", "key2": "value2"},
-  "publishTime": "2016-08-30 16:45:57.785"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId` | string | yes | Message ID
-`payload` | string | yes | Base-64 encoded payload
-`publishTime` | string | yes | Publish timestamp
-`properties` | key-value pairs | no | Application-defined properties
-`key` | string | no |  Original routing key set by producer
-
-#### Acknowledging the message
-
-The consumer needs to acknowledge the successful processing of a message so that the Pulsar broker can delete it.
-
-```json
-
-{
-  "messageId": "CAAQAw=="
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId`| string | yes | Message ID of the processed message
-
-
-### Reader endpoint
-
-The reader endpoint requires you to specify a tenant, namespace, and topic in the URL:
-
-```http
-
-ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic
-
-```
-
-##### Query param
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`readerName` | string | no | Reader name
-`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
-`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`)
-
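-For example, to start a reader from the earliest available message on a topic, you might connect to a URL like this (names are illustrative):
-
-```http
-
-ws://broker-service-url:8080/ws/v2/reader/persistent/my-tenant/my-namespace/my-topic?messageId=earliest
-
-```
-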
-##### Receiving messages
-
-The server pushes messages to the client over the WebSocket session:
-
-```json
-
-{
-  "messageId": "CAAQAw==",
-  "payload": "SGVsbG8gV29ybGQ=",
-  "properties": {"key1": "value1", "key2": "value2"},
-  "publishTime": "2016-08-30 16:45:57.785"
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId` | string | yes | Message ID
-`payload` | string | yes | Base-64 encoded payload
-`publishTime` | string | yes | Publish timestamp
-`properties` | key-value pairs | no | Application-defined properties
-`key` | string | no |  Original routing key set by producer
-
-#### Acknowledging the message
-
-**In WebSocket**, the reader needs to acknowledge the successful processing of a message so that the Pulsar WebSocket service can update its count of pending messages.
-If you don't send acknowledgements, the Pulsar WebSocket service will stop sending messages after reaching the pending messages limit.
-
-```json
-
-{
-  "messageId": "CAAQAw=="
-}
-
-```
-
-Key | Type | Required? | Explanation
-:---|:-----|:----------|:-----------
-`messageId`| string | yes | Message ID of the processed message
-
-
-### Error codes
-
-In case of error, the server closes the WebSocket session using one of the following error codes:
-
-Error Code | Error Message
-:----------|:-------------
-1 | Failed to create producer
-2 | Failed to subscribe
-3 | Failed to deserialize from JSON
-4 | Failed to serialize to JSON
-5 | Failed to authenticate client
-6 | Client is not authorized
-7 | Invalid payload encoding
-8 | Unknown error
-
-> The application is responsible for re-establishing a new WebSocket session after a backoff period.
-
-## Client examples
-
-Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs).
-
-### Python
-
-This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip):
-
-```shell
-
-$ pip install websocket-client
-
-```
-
-You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client).
-
-#### Python producer
-
-Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic):
-
-```python
-
-import websocket, base64, json
-
-TOPIC = 'ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic'
-
-ws = websocket.create_connection(TOPIC)
-
-# Send one message as JSON
-ws.send(json.dumps({
-    'payload': base64.b64encode(b'Hello World').decode('ascii'),
-    'properties': {
-        'key1': 'value1',
-        'key2': 'value2'
-    },
-    'context': '5'
-}))
-
-response = json.loads(ws.recv())
-if response['result'] == 'ok':
-    print('Message published successfully')
-else:
-    print('Failed to publish message:', response)
-ws.close()
-
-```
-
-#### Python consumer
-
-Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives:
-
-```python
-
-import websocket, base64, json
-
-TOPIC = 'ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
-
-ws = websocket.create_connection(TOPIC)
-
-while True:
-    msg = json.loads(ws.recv())
-    if not msg: break
-
-    print "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))
-
-    # Acknowledge successful processing
-    ws.send(json.dumps({'messageId' : msg['messageId']}))
-
-ws.close()
-
-```
-
-#### Python reader
-
-Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives:
-
-```python
-
-import websocket, base64, json
-
-TOPIC = 'ws://localhost:8080/ws/v2/reader/persistent/public/default/my-topic'
-
-ws = websocket.create_connection(TOPIC)
-
-while True:
-    msg = json.loads(ws.recv())
-    if not msg: break
-
-    print "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))
-
-    # Acknowledge successful processing
-    ws.send(json.dumps({'messageId' : msg['messageId']}))
-
-ws.close()
-
-```
-
-### Node.js
-
-This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/):
-
-```shell
-
-$ npm install ws
-
-```
-
-#### Node.js producer
-
-Here's an example Node.js producer that sends a simple message to a Pulsar topic:
-
-```javascript
-
-var WebSocket = require('ws'),
-    topic = "ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic",
-    ws = new WebSocket(topic);
-
-var message = {
-  "payload" : new Buffer("Hello World").toString('base64'),
-  "properties": {
-    "key1" : "value1",
-    "key2" : "value2"
-  },
-  "context" : "1"
-};
-
-ws.on('open', function() {
-  // Send one message
-  ws.send(JSON.stringify(message));
-});
-
-ws.on('message', function(message) {
-  console.log('received ack: %s', message);
-});
-
-```
-
-#### Node.js consumer
-
-Here's an example Node.js consumer that listens on the same topic used by the producer above:
-
-```javascript
-
-var WebSocket = require('ws'),
-    topic = "ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub",
-    ws = new WebSocket(topic);
-
-ws.on('message', function(message) {
-    var receiveMsg = JSON.parse(message);
-    console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
-    var ackMsg = {"messageId" : receiveMsg.messageId};
-    ws.send(JSON.stringify(ackMsg));
-});
-
-```
-
-#### Node.js reader
-
-Here's an example Node.js reader that reads messages from a Pulsar topic:
-
-```javascript
-
-var WebSocket = require('ws'),
-    topic = "ws://localhost:8080/ws/v2/reader/persistent/public/default/my-topic",
-    ws = new WebSocket(topic);
-
-ws.on('message', function(message) {
-    var receiveMsg = JSON.parse(message);
-    console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
-    var ackMsg = {"messageId" : receiveMsg.messageId};
-    ws.send(JSON.stringify(ackMsg));
-});
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries.md
deleted file mode 100644
index c8f9d4b83a0..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/client-libraries.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-id: client-libraries
-title: Pulsar client libraries
-sidebar_label: "Client libraries"
-original_id: client-libraries
----
-
-Pulsar currently has client libraries available for the following languages:
-
-* [Java](#java-client)
-* [Go](#go-client)
-* [Python](#python-client)
-* [C++](#c-client)
-
-## Java client
-
-For a tutorial on using the Pulsar Java client to produce and consume messages, see [The Pulsar Java client](client-libraries-java.md).
-
-There are also two independent sets of Javadoc API docs available:
-
-Library | Purpose
-:-------|:-------
-[`org.apache.pulsar.client.api`](/api/client) | The [Pulsar Java client](client-libraries-java.md) for producing and consuming messages on Pulsar topics
-[`org.apache.pulsar.client.admin`](/api/admin) | The Java client for the [Pulsar admin interface](admin-api-overview.md)
-
-
-## Go client
-
-For a tutorial on using the Pulsar Go client, see [The Pulsar Go client](client-libraries-go.md).
-
-
-## Python client
-
-For a tutorial on using the Pulsar Python client, see [The Pulsar Python client](client-libraries-python.md).
-
-There are also [pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client [here](/api/python).
-
-## C++ client
-
-For a tutorial on using the Pulsar C++ client, see [The Pulsar C++ client](client-libraries-cpp.md).
-
-There are also [Doxygen](http://www.stack.nl/~dimitri/doxygen/)-generated API docs for the C++ client [here](/api/cpp).
-
-## Feature Matrix
-
-This matrix lists the features supported by each client language in the Pulsar @pulsar:version@ release.
-
-| Feature                                   | Java | C++ | Go | Python | WebSocket |
-|:------------------------------------------|:----:|:---:|:--:|:------:|:---------:|
-| Partitioned topics                        |  ✅   |  ✅  | ✅  |   ✅    |     ✅     |
-| Batching                                  |  ✅   |  ✅  | ✅  |   ✅    |     ✅     |
-| Compression                               |  ✅   |  ✅  | ✅  |   ✅    |     ✅     |
-| TLS                                       |  ✅   |  ✅  | ✅  |   ✅    |     ✅     |
-| Authentication                            |  ✅   |  ✅  | ✅  |   ✅    |     ✅     |
-| Reader API                                |  ✅   |  ✅  | ✅  |   ✅    |     ✅     |
-| Proxy Support                             |  ✅   |  ✅  | ✅  |   ✅    |     ✅     |
-| Effectively-Once                          |  ✅   |  ✅  | ✅  |   ✅    |     ❌     |
-| Schema                                    |  ✅   |      |     |         |     ❌     |
-| Consumer seek                             |  ✅   |  ✅  |     |   ✅    |     ❌     |
-| Multi-topics consumer                     |  ✅   |      |     |         |     ❌     |
-| Topics regex consumer                     |  ✅   |      |     |         |     ❌     |
-| Compacted topics                          |  ✅   |  ✅  |     |   ✅    |      ❌    |
-| User defined properties producer/consumer |  ✅   |      |     |         |     ❌     |
-| Reader hasMessageAvailable                |  ✅   |  ✅  |     |   ✅    |     ❌     |
-| Hostname verification                     |  ✅   |      |     |         |     ❌     |
-
-## Thirdparty Clients
-
-Besides the officially released clients, there are also several community projects developing Pulsar clients in other languages.
-
-> If you have developed a Pulsar client that doesn't show up here, feel free to submit a pull request to add it to the list below.
-
-| Language | Project | Maintainer | License | Description |
-|----------|---------|------------|---------|-------------|
-| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
-| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | |
-| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-architecture-overview.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-architecture-overview.md
deleted file mode 100644
index 7dd93f7115e..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-architecture-overview.md
+++ /dev/null
@@ -1,160 +0,0 @@
----
-id: concepts-architecture-overview
-title: Architecture Overview
-sidebar_label: "Architecture"
-original_id: concepts-architecture-overview
----
-
-At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication.md) data amongst themselves.
-
-In a Pulsar cluster:
-
-* One or more brokers handle and load-balance incoming messages from producers, dispatch messages to consumers, communicate with the Pulsar configuration store to handle various coordination tasks, store messages in BookKeeper instances (aka bookies), rely on a cluster-specific ZooKeeper cluster for certain tasks, and more.
-* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages.
-* A ZooKeeper cluster specific to that cluster handles cluster-level configuration and coordination tasks.
-
-The diagram below provides an illustration of a Pulsar cluster:
-
-![Pulsar architecture diagram](/assets/pulsar-system-architecture.png)
-
-At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication.md).
-
-## Brokers
-
-The Pulsar message broker is a stateless component that's primarily responsible for running two other components:
-
-* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers
-* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol.md) used for all data transfers
-
-Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker will start reading entries from BookKeeper.
-
-Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java.md).
-
-> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers.md) guide.
-
-## Clusters
-
-A Pulsar instance consists of one or more Pulsar *clusters*. Clusters, in turn, consist of:
-
-* One or more Pulsar [brokers](#brokers)
-* A ZooKeeper quorum used for cluster-level configuration and coordination
-* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages
-
-Clusters can replicate amongst themselves using [geo-replication](concepts-replication.md).
-
-> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters.md) guide.
-
-## Metadata store
-
-Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. In a Pulsar instance:
-
-* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
-* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as ownership metadata, broker load reports, BookKeeper ledger metadata, and more.
-
-## Persistent storage
-
-Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target.
-
-This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server.
-
-### Apache BookKeeper
-
-Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar:
-
-* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time.
-* It offers very efficient storage for sequential data that handles entry replication.
-* It guarantees read consistency of ledgers in the presence of various system failures.
-* It offers even distribution of I/O across bookies.
-* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster.
-* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices (one for the journal and another for general storage), bookies are able to isolate the effects of read operations from the latency of ongoing write operations.
-
-In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion.
-
-At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example:
-
-```http
-
-persistent://my-tenant/my-namespace/my-topic
-
-```
-
-> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage.
-
-
-You can see an illustration of how brokers and bookies interact in the diagram below:
-
-![Brokers and bookies](/assets/broker-bookie.png)
-
-
-### Ledgers
-
-A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics:
-
-* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger.
-* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode.
-* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies).
-
-#### Ledger read consistency
-
-The main strength of BookKeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without needing to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content.
-
-#### Managed ledgers
-
-Given that BookKeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position.
-
-Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers:
-
-1. After a failure, a ledger is no longer writable and a new one needs to be created.
-2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers.
-
-### Journal storage
-
-In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter).
-
-## Pulsar proxy
-
-One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible.
-
-The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which, again, is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers.
-
-> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like.
-
-Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example:
-
-```bash
-
-$ bin/pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk-2 \
-  --configuration-store-servers zk-0,zk-1,zk-2
-
-```
-
-> #### Pulsar proxy docs
-> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy.md).
-
-
-Some important things to know about the Pulsar proxy:
-
-* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy).
-* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication.md) are supported by the Pulsar proxy
-
-## Service discovery
-
-[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions in the [Deploying a Pulsar instance](deploy-bare-metal.md#service-discovery-setup) guide.
-
-You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
-
-The diagram below illustrates Pulsar service discovery:
-
-![alt-text](/assets/pulsar-service-discovery.png)
-
-In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python.md), for example, could access this Pulsar cluster like this:
-
-```python
-
-from pulsar import Client
-
-client = Client('pulsar://pulsar-cluster.acme.com:6650')
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-authentication.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-authentication.md
deleted file mode 100644
index d48e25a1dff..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-authentication.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-id: concepts-authentication
-title: Authentication and Authorization
-sidebar_label: "Authentication and Authorization"
-original_id: concepts-authentication
----
-
-Pulsar supports a pluggable [authentication](security-overview.md) mechanism that can be configured at the broker. Pulsar also supports authorization, which identifies clients and their access rights on topics and tenants.
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-clients.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-clients.md
deleted file mode 100644
index f3b5ae38932..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-clients.md
+++ /dev/null
@@ -1,87 +0,0 @@
----
-id: concepts-clients
-title: Pulsar Clients
-sidebar_label: "Clients"
-original_id: concepts-clients
----
-
-Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md) and [C++](client-libraries-cpp.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications.
-
-Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff.
-
-> #### Custom client libraries
-> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md)
-
-
-## Client setup phase
-
-When an application wants to create a producer/consumer, the Pulsar client library will initiate a setup phase that is composed of two steps:
-
-1. The client will attempt to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) ZooKeeper metadata, will know who is serving the topic or, in case nobody is serving it, will try to assign it to the least loaded broker.
-1. Once the client library has the broker address, it will create a TCP connection (or reuse an existing connection from the pool) and authenticate it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client will send a command to create producer/consumer to the broker, which will comply after having validated the authorization policy.
-
-Whenever the TCP connection breaks, the client will immediately re-initiate this setup phase and will keep trying with exponential backoff to re-establish the producer or consumer until the operation succeeds.
-
-## Reader interface
-
-In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they've been processed. Whenever a consumer connects to a topic, it automatically begins reading from the earliest un-acked message onward because the topic's cursor is automatically managed by Pulsar.
-
-The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:
-
-* The **earliest** available message in the topic
-* The **latest** available message in the topic
-* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.
-
-The reader interface is helpful for use cases like using Pulsar to provide [effectively-once](https://streaml.io/blog/exactly-once/) processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.
-
-![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png)
-
-> ### Non-partitioned topics only
-> The reader interface for Pulsar cannot currently be used with [partitioned topics](concepts-messaging.md#partitioned-topics).
-
-Here's a Java example that begins reading from the earliest available message on a topic:
-
-```java
-
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.MessageId;
-import org.apache.pulsar.client.api.Reader;
-
-// Create a reader on a topic and for a specific message (and onward)
-Reader<byte[]> reader = pulsarClient.newReader()
-    .topic("reader-api-test")
-    .startMessageId(MessageId.earliest)
-    .create();
-
-while (true) {
-    Message message = reader.readNext();
-
-    // Process the message
-}
-
-```
-
-To create a reader that will read from the latest available message:
-
-```java
-
-Reader<byte[]> reader = pulsarClient.newReader()
-    .topic(topic)
-    .startMessageId(MessageId.latest)
-    .create();
-
-```
-
-To create a reader that will read from some message between earliest and latest:
-
-```java
-
-byte[] msgIdBytes = // Some byte array
-MessageId id = MessageId.fromByteArray(msgIdBytes);
-Reader<byte[]> reader = pulsarClient.newReader()
-    .topic(topic)
-    .startMessageId(id)
-    .create();
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-messaging.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-messaging.md
deleted file mode 100644
index 9f66ba5e5c7..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-messaging.md
+++ /dev/null
@@ -1,318 +0,0 @@
----
-id: concepts-messaging
-title: Messaging Concepts
-sidebar_label: "Messaging"
-original_id: concepts-messaging
----
-
-Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern, aka pub-sub. In this pattern, [producers](#producers) publish messages to [topics](#topics). [Consumers](#consumers) can then [subscribe](#subscription-types) to those topics, process incoming messages, and send an acknowledgement when processing is complete.
-
-Once a subscription has been created, all messages will be [retained](concepts-architecture-overview.md#persistent-storage) by Pulsar, even if the consumer gets disconnected. Retained messages will be discarded only when a consumer acknowledges that they've been successfully processed.
-
-## Messages
-
-Messages are the basic "unit" of Pulsar. They're what producers publish to topics and what consumers then consume from topics (and acknowledge when the message has been processed). Messages are the analogue of letters in a postal service system.
-
-Component | Purpose
-:---------|:-------
-Value / data payload | The data carried by the message. All Pulsar messages carry raw bytes, although message data can also conform to data [schemas](concepts-schema-registry.md)
-Key | Messages can optionally be tagged with keys, which can be useful for things like [topic compaction](concepts-topic-compaction.md)
-Properties | An optional key/value map of user-defined properties
-Producer name | The name of the producer that produced the message (producers are automatically given default names, but you can apply your own explicitly as well)
-Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. A message's sequence ID is its ordering in that sequence.
-Publish time | The timestamp of when the message was published (automatically applied by the producer)
-Event time | An optional timestamp that applications can attach to the message representing when something happened, e.g. when the message was processed. The event time of a message is 0 if none is explicitly set.
-
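-As a rough sketch of how these components map onto the producer API, here's how a message with a key, application-defined properties, and an event time might be built and published with the [Java client](client-libraries-java.md) (the topic name and an existing `PulsarClient` named `client` are assumptions for illustration):
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("my-topic")
-        .create();
-
-// Attach a key, application-defined properties, and an event time to the message
-producer.newMessage()
-        .key("my-key")
-        .value("Hello Pulsar".getBytes())
-        .property("key1", "value1")
-        .eventTime(System.currentTimeMillis())
-        .send();
-
-```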
-
-> For a more in-depth breakdown of Pulsar message contents, see the documentation on Pulsar's [binary protocol](developing-binary-protocol.md).
-
-## Producers
-
-A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker) for processing.
-
-### Send modes
-
-Producers can send messages to brokers either synchronously (sync) or asynchronously (async).
-
-| Mode       | Description                                                                                                                                                                                                                                                                                                                                                              |
-|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Sync send  | The producer will wait for acknowledgement from the broker after sending each message. If acknowledgment isn't received then the producer will consider the send operation a failure.                                                                                                                                                                                    |
-| Async send | The producer will put the message in a blocking queue and return immediately. The client library will then send the message to the broker in the background. If the queue is full (max size is [configurable](reference-configuration.md#broker)), the producer could be blocked or fail immediately when calling the API, depending on the arguments passed to the producer. |
-
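-To make the difference concrete, here is a minimal sketch of both send modes using the Java client (an existing `Producer<byte[]>` named `producer` is assumed):
-
-```java
-
-// Sync send: blocks until the broker acknowledges the message
-producer.send("sync message".getBytes());
-
-// Async send: returns a CompletableFuture that completes with the message ID
-producer.sendAsync("async message".getBytes())
-        .thenAccept(messageId -> System.out.println("Published: " + messageId));
-
-```
-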
-### Compression
-
-Messages published by producers can be compressed during transportation in order to save bandwidth. Pulsar currently supports two types of compression:
-
-* [LZ4](https://github.com/lz4/lz4)
-* [ZLIB](https://zlib.net/)
-
-### Batching
-
-If batching is enabled, the producer will accumulate and send a batch of messages in a single request. Batching size is defined by the maximum number of messages and maximum publish latency.
-
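-For illustration, the batching limits can be configured on the producer builder in the Java client roughly like this (the values shown are only examples; `TimeUnit` is `java.util.concurrent.TimeUnit`):
-
-```java
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("my-topic")
-        .enableBatching(true)
-        .batchingMaxMessages(1000)                           // max messages per batch
-        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)  // max delay before a batch is sent
-        .create();
-
-```
-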
-## Consumers
-
-A consumer is a process that attaches to a topic via a subscription and then receives messages.
-
-### Receive modes
-
-Messages can be received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).
-
-| Mode          | Description                                                                                                                                                                                                   |
-|:--------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Sync receive  | A sync receive will be blocked until a message is available.                                                                                                                                                  |
-| Async receive | An async receive will return immediately with a future value---a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java, for example---that completes once a new message is available. |
-
-### Acknowledgement
-
-When a consumer has successfully processed a message, it needs to send an acknowledgement to the broker so that the broker can discard the message (otherwise it [stores](concepts-architecture-overview.md#persistent-storage) the message).
-
-Messages can be acknowledged either one by one or cumulatively. With cumulative acknowledgement, the consumer only needs to acknowledge the last message it received. All messages in the stream up to (and including) the provided message will not be re-delivered to that consumer.
-
-
-> Cumulative acknowledgement cannot be used with [shared subscription type](#subscription-types), because shared mode involves multiple consumers having access to the same subscription.
-
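-For example, with the Java client, individual and cumulative acknowledgement look roughly like this (an existing consumer is assumed):
-
-```java
-
-Message<byte[]> msg = consumer.receive();
-
-// Acknowledge only this message
-consumer.acknowledge(msg);
-
-// Or, on a non-shared subscription, acknowledge this message and all earlier ones at once
-consumer.acknowledgeCumulative(msg);
-
-```
-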
-### Listeners
-
-Client libraries can provide their own listener implementations for consumers. The [Java client](client-libraries-java.md), for example, provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.
-
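-A rough sketch of registering a listener with the Java client might look like this (topic and subscription names are placeholders):
-
-```java
-
-Consumer<byte[]> consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .messageListener((c, msg) -> {
-            // Called whenever a new message arrives; process it, then acknowledge it
-            System.out.println("Received: " + new String(msg.getData()));
-            c.acknowledgeAsync(msg);
-        })
-        .subscribe();
-
-```
-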
-## Topics
-
-As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from [producers](reference-terminology.md#producer) to [consumers](reference-terminology.md#consumer). Topic names are URLs that have a well-defined structure:
-
-```http
-
-{persistent|non-persistent}://tenant/namespace/topic
-
-```
-
-Topic name component | Description
-:--------------------|:-----------
-`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics) (persistent is the default, so if you don't specify a type the topic will be persistent). With persistent topics, all messages are durably [persisted](concepts-architecture-overview.md#persistent-storage) on disk (that means on multiple disks unless the broker is standalone), whereas data for non-persistent topics isn't persisted to storage disks.
-`tenant`             | The topic's tenant within the instance. Tenants are essential to multi-tenancy in Pulsar and can be spread across clusters.
-`namespace`          | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant can have multiple namespaces.
-`topic`              | The final part of the name. Topic names are freeform and have no special meaning in a Pulsar instance.
-
-
-> **No need to explicitly create new topics**  
-> You don't need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar will automatically create that topic under the [namespace](#namespaces) provided in the [topic name](#topics).
-
-
-## Namespaces
-
-A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. For example, `my-tenant/app1` is a namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace.
-
-## Subscription types
-
-A subscription is a named configuration rule that determines how messages are delivered to consumers. There are three available subscription types in Pulsar: [exclusive](#exclusive), [shared](#shared), and [failover](#failover). These types are illustrated in the figure below.
-
-![Subscription types](/assets/pulsar-subscription-types.png)
-
-### Exclusive
-
-In *exclusive* type, only a single consumer is allowed to attach to the subscription. If more than one consumer attempts to subscribe to a topic using the same subscription, the consumer receives an error.
-
-In the diagram above, only **Consumer A-0** is allowed to consume messages.
-
-> Exclusive is the default subscription type.
-
-![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)
-
-### Shared
-
-In *shared* or *round robin* mode, multiple consumers can attach to the same subscription. Messages are delivered in a round robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged will be rescheduled for sending to the remaining consumers.
-
-In the diagram above, **Consumer-B-1** and **Consumer-B-2** are able to subscribe to the topic, but **Consumer-C-1** and others could as well.
-
-> **Limitations of Shared type**  
-> Be aware when using Shared type:
-> * Message ordering is not guaranteed.
-> * You cannot use cumulative acknowledgment with Shared type.
-
-![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)
-
-### Failover
-
-In *Failover* type, multiple consumers can attach to the same subscription. The consumers will be lexically sorted by the consumer's name and the first consumer will initially be the only one receiving messages. This consumer is called the *master consumer*.
-
-When the master consumer disconnects, all (non-acked and subsequent) messages will be delivered to the next consumer in line.
-
-In the diagram above, Consumer-C-1 is the master consumer while Consumer-C-2 would be the next in line to receive messages if Consumer-C-1 disconnected.
-
-![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)
-
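-As an illustrative sketch, the subscription type is chosen when a consumer subscribes; with the Java client this might look like the following (names are placeholders):
-
-```java
-
-Consumer<byte[]> consumer = client.newConsumer()
-        .topic("my-topic")
-        .subscriptionName("my-subscription")
-        .subscriptionType(SubscriptionType.Shared)  // or Exclusive (default) / Failover
-        .subscribe();
-
-```
-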
-## Multi-topic subscriptions
-
-When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:
-
-* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
-* By explicitly defining a list of topics
-
-> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces)
-
-When subscribing to multiple topics, the Pulsar client will automatically make a call to the Pulsar API to discover the topics that match the regex pattern/list and then subscribe to all of them. If any of the topics don't currently exist, the consumer will auto-subscribe to them once the topics are created.
-
-> **No ordering guarantees across multiple topics**  
-> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends messages to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.
-
-Here are some multi-topic subscription examples for Java:
-
-```java
-
-import java.util.regex.Pattern;
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient pulsarClient = // Instantiate Pulsar client object
-
-// Subscribe to all topics in a namespace
-Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
-Consumer allTopicsConsumer = pulsarClient.subscribe(allTopicsInNamespace, "subscription-1");
-
-// Subscribe to a subsets of topics in a namespace, based on regex
-Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
-Consumer someTopicsConsumer = pulsarClient.subscribe(someTopicsInNamespace, "subscription-1");
-
-```
-
-For code examples, see:
-
-* [Java](client-libraries-java.md#multi-topic-subscriptions)
-
-## Partitioned topics
-
-Normal topics can be served only by a single broker, which limits the topic's maximum throughput. *Partitioned topics* are a special type of topic that can be handled by multiple brokers, which allows for much higher throughput.
-
-Behind the scenes, a partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.
-
-The diagram below illustrates this:
-
-![](/assets/partitioning.png)
-
-Here, the topic **Topic1** has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).
-
-Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription type](#subscription-types) determines which messages go to which consumers.
-
-Decisions about routing and subscription types can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions while subscription decisions should be guided by application semantics.
-
-There is no difference between partitioned topics and normal topics in terms of how subscription types work, as partitioning only determines what happens between when a message is published by a producer and processed and acknowledged by a consumer.
-
-Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.
-
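-For example, a four-partition topic might be created with the [`pulsar-admin`](reference-pulsar-admin.md) tool along these lines (tenant, namespace, and topic names are placeholders):
-
-```shell
-
-$ bin/pulsar-admin topics create-partitioned-topic \
-  persistent://my-tenant/my-namespace/my-topic \
-  --partitions 4
-
-```
-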
-### Routing modes
-
-When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.
-
-There are three routing modes available by default:
-
-Mode | Description | Ordering guarantee
-:----|:------------|:------------------
-Key hash | If a key property has been specified on the message, the partitioned producer will hash the key and assign it to a particular partition. | Per-key-bucket ordering
-Single default partition | If no key is provided, each producer's message will be routed to a dedicated partition, initially selected at random | Per-producer ordering
-Round robin distribution | If no key is provided, all messages will be routed to different partitions in round-robin fashion to achieve maximum throughput. | None
-
-In addition to these default modes, you can also create a custom routing mode if you're using the [Java client](client-libraries-java.md) by implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
-
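-As a sketch, a built-in routing mode can be selected on the producer builder, and a custom router can be supplied by implementing `MessageRouter` (the partition-selection logic below is purely illustrative):
-
-```java
-
-// Select a built-in routing mode
-Producer<byte[]> producer = client.newProducer()
-        .topic("persistent://my-tenant/my-namespace/my-partitioned-topic")
-        .messageRoutingMode(MessageRoutingMode.RoundRobinPartition)
-        .create();
-
-// Or plug in a custom router
-Producer<byte[]> customRoutedProducer = client.newProducer()
-        .topic("persistent://my-tenant/my-namespace/my-partitioned-topic")
-        .messageRouter(new MessageRouter() {
-            @Override
-            public int choosePartition(Message<?> msg, TopicMetadata metadata) {
-                // Illustrative only: route keyed messages by key hash, unkeyed messages to partition 0
-                String key = msg.hasKey() ? msg.getKey() : "";
-                return Math.abs(key.hashCode() % metadata.numPartitions());
-            }
-        })
-        .create();
-
-```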
-
-
-## Non-persistent topics
-
-
-By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
-
-Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.
-
-Non-persistent topics have names of this form (note the `non-persistent` in the name):
-
-```http
-
-non-persistent://tenant/namespace/topic
-
-```
-
-> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).
-
-In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.
-
-> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.
-
-By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the [`pulsar-admin topics`](reference-pulsar-admin.md#topics-1) interface.
-
-### Performance
-
-Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as the message is delivered to all connected subscribers. Producers thus see comparatively low publish latency with non-persistent topics.
-
-### Client API
-
-Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription types---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.
-
-Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:
-
-```java
-
-PulsarClient client = PulsarClient.create("pulsar://localhost:6650");
-String npTopic = "non-persistent://public/default/my-topic";
-String subscriptionName = "my-subscription-name";
-
-Consumer consumer = client.subscribe(npTopic, subscriptionName);
-
-```
-
-Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:
-
-```java
-
-Producer producer = client.createProducer(npTopic);
-
-```
-
-## Message retention and expiry
-
-By default, Pulsar message brokers:
-
-* immediately delete *all* messages that have been acknowledged by a consumer, and
-* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.
-
-Pulsar has two features, however, that enable you to override this default behavior:
-
-* Message **retention** enables you to store messages that have been acknowledged by a consumer
-* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged
-
-> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.
-
-The diagram below illustrates both concepts:
-
-![Message retention and expiry](/assets/retention-expiry.png)
-
-With message retention, shown at the top, a <span style={{color: " #89b557"}}>retention policy</span> applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are <span style={{color: " #bb3b3e"}}>deleted</span>. Without a retention policy, *all* of the <span style={{color: " #19967d"}}>acknowledged messages</span> would be deleted.
-
-With message expiry, shown at the bottom, some messages are <span style={{color: " #bb3b3e"}}>deleted</span>, even though they <span style={{color: " #337db6"}}>haven't been acknowledged</span>, because they've expired according to the <span style={{color: " #e39441"}}>TTL applied to the namespace</span> (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).
-
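-As a sketch of what these namespace-level policies look like in practice, retention and TTL might be configured with [`pulsar-admin`](reference-pulsar-admin.md) along these lines (the namespace and values are placeholders; see the cookbook linked above for details):
-
-```shell
-
-# Keep acknowledged messages for up to 3 hours or 10 GB per topic
-$ bin/pulsar-admin namespaces set-retention my-tenant/my-namespace \
-  --time 3h --size 10G
-
-# Expire unacknowledged messages after 120 seconds
-$ bin/pulsar-admin namespaces set-message-ttl my-tenant/my-namespace \
-  --messageTTL 120
-
-```
-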
-## Message deduplication
-
-Message **duplication** occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message **deduplication** is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, *even if the message is received more than once*.
-
-The following diagram illustrates what happens when message deduplication is disabled vs. enabled:
-
-![Pulsar message deduplication](/assets/message-deduplication.png)
-
-
-Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.
-
-In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.
-
-> Message deduplication is handled at the namespace level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).
-
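-For illustration, deduplication might be enabled for a single namespace with a command along these lines (the namespace name is a placeholder; the cookbook above covers the full procedure, including broker and producer settings):
-
-```shell
-
-$ bin/pulsar-admin namespaces set-deduplication my-tenant/my-namespace --enable
-
-```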
-
-### Producer idempotency
-
-The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, this is handled at the [broker](reference-terminology.md#broker) level, which means that you don't need to modify your Pulsar client code. Instead, you only need to make administrative changes (see the [Managing message deduplication](cookbooks-deduplication.md) cookbook for instructions).
-
-### Deduplication and effectively-once semantics
-
-Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide [effectively-once](https://streaml.io/blog/exactly-once) processing semantics. Messaging systems that don't offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication.
-
-> More in-depth information can be found in [this post](https://streaml.io/blog/pulsar-effectively-once/) on the [Streamlio blog](https://streaml.io/blog)
-
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-multi-tenancy.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-multi-tenancy.md
deleted file mode 100644
index 8cba09547d2..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-multi-tenancy.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-id: concepts-multi-tenancy
-title: Multi Tenancy
-sidebar_label: "Multi Tenancy"
-original_id: concepts-multi-tenancy
----
-
-Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.
-
-The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:
-
-```http
-
-persistent://tenant/namespace/topic
-
-```
-
-As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).
-
-## Tenants
-
-To each tenant in a Pulsar instance you can assign:
-
-* An [authorization](security-authorization.md) scheme
-* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies
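-
-For illustration only (the tenant name, admin role, and cluster names below are hypothetical), both of these can be supplied when a tenant is created with `pulsar-admin`:
-
-```bash
-
-# Create a tenant with an admin role and the clusters it may use
-$ bin/pulsar-admin tenants create my-tenant \
-  --admin-roles my-admin-role \
-  --allowed-clusters us-west,us-east
-
-```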
-
-## Namespaces
-
-Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.
-
-* Pulsar is provisioned for specified tenants with appropriate capacity allocated to the tenant.
-* A namespace is the administrative unit within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.
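-
-As a minimal sketch (the tenant and namespace names are illustrative), per-application namespaces can be created with the `pulsar-admin` CLI mentioned above:
-
-```bash
-
-# One namespace per application under the same tenant
-$ bin/pulsar-admin namespaces create my-tenant/app1
-$ bin/pulsar-admin namespaces create my-tenant/app2
-
-```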
-
-Names for topics in the same namespace will look like this:
-
-```http
-
-persistent://tenant/app1/topic-1
-
-persistent://tenant/app1/topic-2
-
-persistent://tenant/app1/topic-3
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-overview.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-overview.md
deleted file mode 100644
index 02b4ef3d5bb..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-overview.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-id: concepts-overview
-title: Pulsar Overview
-sidebar_label: "Overview"
-original_id: concepts-overview
----
-
-Pulsar is a multi-tenant, high-performance solution for server-to-server messaging originally developed by [Yahoo](http://yahoo.github.io/) and now under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
-
-Pulsar's key features include:
-
-* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters
-* Very low publish and end-to-end latency
-* Seamless scalability out to over a million topics
-* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Python](client-libraries-python.md), and [C++](client-libraries-cpp.md)
-* Multiple [subscription types](concepts-messaging.md#subscription-types) for topics ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover))
-* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/)
-* A serverless, lightweight computing framework, [Pulsar Functions](functions-overview.md), that offers stream-native data processing.
-* A serverless connector framework, [Pulsar IO](io-overview.md), built on top of Pulsar Functions to make moving data in and out of Apache Pulsar easier.
-* [Tiered Storage](concepts-tiered-storage.md), which offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) as the data ages out.
-
-## Contents
-
-- [Messaging Concepts](concepts-messaging.md)
-- [Architecture Overview](concepts-architecture-overview.md)
-- [Pulsar Clients](concepts-clients.md)
-- [Geo Replication](concepts-replication.md)
-- [Multi Tenancy](concepts-multi-tenancy.md)
-- [Authentication and Authorization](concepts-authentication.md)
-- [Topic Compaction](concepts-topic-compaction.md)
-- [Tiered Storage](concepts-tiered-storage.md)
-- [Schema Registry](concepts-schema-registry.md)
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-replication.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-replication.md
deleted file mode 100644
index 799f0eb4d92..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-id: concepts-replication
-title: Geo Replication
-sidebar_label: "Geo Replication"
-original_id: concepts-replication
----
-
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.
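-
-As a minimal sketch (the tenant, namespace, and cluster names are illustrative; see the [geo-replication guide](administration-geo.md) for the full procedure), replication is typically enabled by assigning a namespace to multiple clusters:
-
-```bash
-
-# Replicate all topics in this namespace between the listed clusters
-$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \
-  --clusters us-west,us-east
-
-```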
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-schema-registry.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-schema-registry.md
deleted file mode 100644
index d08c0cfa0a9..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-schema-registry.md
+++ /dev/null
@@ -1,84 +0,0 @@
----
-id: concepts-schema-registry
-title: Schema Registry
-sidebar_label: "Schema Registry"
-original_id: concepts-schema-registry
----
-
-Type safety is extremely important in any application built around a message bus like Pulsar. Producers and consumers need some kind of mechanism for coordinating types at the topic level lest a wide variety of potential problems arise (for example serialization and deserialization issues). Applications typically adopt one of two basic approaches to type safety in messaging:
-
-1. A "client-side" approach in which message producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics. If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as, say, moisture sensor readings.
-1. A "server-side" approach in which producers and consumers inform the system which data types can be transmitted via the topic. With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.
-
-Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.
-
-1. For the "client-side" approach, producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.
-1. For the "server-side" approach, Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.
-
-> The Pulsar schema registry is currently available only for the [Java client](client-libraries-java.md).
-
-## Basic architecture
-
-Schemas are automatically uploaded when you create a typed Producer with a Schema. Additionally, Schemas can be manually uploaded to, fetched from, and updated via Pulsar's {@inject: rest:REST:tag/schemas} API.
-
-> #### Other schema registry backends
-> Out of the box, Pulsar uses the [Apache BookKeeper](concepts-architecture-overview.md#persistent-storage) log storage system for schema storage. You can, however, use different backends if you wish. Documentation for custom schema storage logic is coming soon.
-
-## How schemas work
-
-Pulsar schemas are applied and enforced *at the topic level* (schemas cannot be applied at the namespace or tenant level). Producers and consumers upload schemas to Pulsar brokers.
-
-Pulsar schemas are fairly simple data structures that consist of:
-
-* A **name**. In Pulsar, a schema's name is the topic to which the schema is applied.
-* A **payload**, which is a binary representation of the schema
-* A schema [**type**](#supported-schema-formats)
-* User-defined **properties** as a string/string map. Usage of properties is wholly application specific. Possible properties might be the Git hash associated with a schema, an environment like `dev` or `prod`, etc.
-
-## Schema versions
-
-In order to illustrate how schema versioning works, let's walk through an example. Imagine that the Pulsar [Java client](client-libraries-java.md) created using the code below attempts to connect to Pulsar and begin sending messages:
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
-        .topic("sensor-data")
-        .sendTimeout(3, TimeUnit.SECONDS)
-        .create();
-
-```
-
-The table below lists the possible scenarios when this connection attempt occurs and what will happen in light of each scenario:
-
-Scenario | What happens
-:--------|:------------
-No schema exists for the topic | The producer is created using the given schema. The schema is transmitted to the broker and stored (since no existing schema is "compatible" with the `SensorReading` schema). Any consumer created using the same schema/topic can consume messages from the `sensor-data` topic.
-A schema already exists; the producer connects using the same schema that's already stored | The schema is transmitted to the Pulsar broker. The broker determines that the schema is compatible. The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it's then used to tag produced messages.
-A schema already exists; the producer connects using a new schema that is compatible | The producer transmits the schema to the broker. The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number).
-
-> Schemas are versioned in succession. Schema storage happens in the broker that handles the associated topic so that version assignments can be made. Once a version is assigned to (or fetched for) a schema, all subsequent messages produced by that producer are tagged with the appropriate version.
-
-
-## Supported schema formats
-
-The following formats are supported by the Pulsar schema registry:
-
-* None. If no schema is specified for a topic, producers and consumers will handle raw bytes.
-* `String` (used for UTF-8-encoded strings)
-* [JSON](https://www.json.org/)
-* [Protobuf](https://developers.google.com/protocol-buffers/)
-* [Avro](https://avro.apache.org/)
-
-For usage instructions, see the documentation for your preferred client library:
-
-* [Java](client-libraries-java.md#schemas)
-
-> Support for other schema formats will be added in future releases of Pulsar.
-
-## Managing Schemas
-
-You can use Pulsar admin tools to manage schemas for topics.
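-
-For example, assuming the `schemas` subcommand is available in your `pulsar-admin` build (the topic name below is illustrative), you can fetch the schema currently associated with a topic:
-
-```bash
-
-# Retrieve the current schema for a topic
-$ bin/pulsar-admin schemas get persistent://public/default/sensor-data
-
-```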
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-tiered-storage.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-tiered-storage.md
deleted file mode 100644
index 3a67f094905..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-tiered-storage.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-id: concepts-tiered-storage
-title: Tiered Storage
-sidebar_label: "Tiered Storage"
-original_id: concepts-tiered-storage
----
-
-Pulsar's segment-oriented architecture allows topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.
-
-One way to alleviate this cost is to use Tiered Storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.
-
-![Tiered Storage](/assets/pulsar-tiered-storage.png)
-
-> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long-term storage. Long-term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
-
-Pulsar currently supports S3 as a long-term store. Offloading to S3 is triggered via a REST API or the command-line interface. The user passes in the amount of topic data they wish to retain in BookKeeper, and the broker copies the backlog data to S3. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).
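-
-As a quick sketch (the threshold and topic name are illustrative; see the cookbook linked below for details), an offload can be triggered from the command line by specifying how much data should remain in BookKeeper:
-
-```bash
-
-# Keep roughly 10 GB in BookKeeper and offload the rest of the backlog
-$ bin/pulsar-admin topics offload --size-threshold 10G \
-  persistent://my-tenant/my-namespace/my-topic
-
-```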
-
-> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-topic-compaction.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-topic-compaction.md
deleted file mode 100644
index 34b7ed7fbbd..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/concepts-topic-compaction.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-id: concepts-topic-compaction
-title: Topic Compaction
-sidebar_label: "Topic Compaction"
-original_id: concepts-topic-compaction
----
-
-Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases but it can also be very time int [...]
-
-> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).
-
-For some use cases consumers don't need a complete "image" of the topic log. They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message assoc [...]
-
-Pulsar's topic compaction feature:
-
-* Allows for faster "rewind" through topic logs
-* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
-* Is triggered automatically when the backlog reaches a certain size, or can be triggered manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md)
-* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.
-
-> #### Topic compaction example: the stock ticker
-> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be high [...]
-
-
-## How topic compaction works
-
-When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar will iterate over the entire topic from beginning to end. For each key that it encounters, the compaction routine will keep a record of the latest occurrence of that key.
-
-After that, the broker will create a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and make a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata will be written to the newly created ledger. If the key doesn't match the latest then the message will be skipped and left alone. If any given message has an empty payload, it will be skipped and con [...]
-
-After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog. When such changes occur:
-
-* Clients (consumers and readers) that have `readCompacted` enabled will attempt to read messages from a topic and either:
-  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon) or
-  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)
-
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-compaction.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-compaction.md
deleted file mode 100644
index 58ee52a622e..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-compaction.md
+++ /dev/null
@@ -1,147 +0,0 @@
----
-id: cookbooks-compaction
-title: Topic compaction
-sidebar_label: "Topic compaction"
-original_id: cookbooks-compaction
----
-
-Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).
-
-To use compaction:
-
-* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when)). Messages without keys will be left alone by the compaction process.
-* Compaction can be configured to run [automatically](#automatic), or you can manually [trigger](#trigger) compaction using the Pulsar administrative API.
-* Your consumers must be [configured](#config) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic.
-
-
-> Compaction only works on messages that have keys (as in the stock ticker example the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.
-
-## When should I use compacted topics? {#when}
-
-The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks. Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options:
-
-* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages.
-* They can read from the compacted topic if they only want to see the most up-to-date messages.
-
-Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#config).
-
-> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected.
-
-
-## Configuring compaction to run automatically {#automatic}
-
-Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered.
-
-For example, to trigger compaction when the backlog reaches 100MB:
-
-```bash
-
-$ bin/pulsar-admin namespaces set-compaction-threshold \
-  --threshold 100M my-tenant/my-namespace
-
-```
-
-Configuring the compaction threshold on a namespace will apply to all topics within that namespace.
-
-## Triggering compaction manually {#trigger}
-
-In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command for the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:
-
-```bash
-
-$ bin/pulsar-admin topics compact \
-  persistent://my-tenant/my-namespace/my-topic
-
-```
-
-The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example:
-
-```bash
-
-$ bin/pulsar compact-topic \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-```
-
-> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through  [...]
-
-The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker). You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration:
-
-```bash
-
-$ bin/pulsar compact-topic \
-  --broker-conf /path/to/broker.conf \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-# If the configuration is in conf/broker.conf
-$ bin/pulsar compact-topic \
-  --topic persistent://my-tenant/my-namespace/my-topic
-
-```
-
-#### When should I trigger compaction?
-
-How often you [trigger compaction](#trigger) will vary widely based on the use case. If you want a compacted topic to be extremely speedy on read, then you should run compaction fairly frequently.
-
-## Consumer configuration {#config}
-
-Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients.
-
-
-> #### Java only
-> Currently, only [Java](#java) clients can consume messages from compacted topics.
-
-
-### Java
-
-In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic:
-
-```java
-
-Consumer<byte[]> compactedTopicConsumer = client.newConsumer()
-        .topic("some-compacted-topic")
-        // A subscription name is required before subscribing; the name here is illustrative
-        .subscriptionName("my-subscription")
-        .readCompacted(true)
-        .subscribe();
-
-```
-
-As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys will be ignored by the compaction process. Here's an example Pulsar message with a key:
-
-```java
-
-import org.apache.pulsar.client.api.Message;
-import org.apache.pulsar.client.api.MessageBuilder;
-
-Message<byte[]> msg = MessageBuilder.create()
-        .setContent(someByteArray)
-        .setKey("some-key")
-        .build();
-
-```
-
-The example below shows a message with a key being produced on a compacted Pulsar topic:
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-
-Producer<byte[]> compactedTopicProducer = client.newProducer()
-        .topic("some-compacted-topic")
-        .create();
-
-// Build and send a message with a key; compaction operates on this key
-compactedTopicProducer.newMessage()
-        .key("some-key")
-        .value(someByteArray)
-        .send();
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-deduplication.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-deduplication.md
deleted file mode 100644
index eb2327dc9b2..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-deduplication.md
+++ /dev/null
@@ -1,132 +0,0 @@
----
-id: cookbooks-deduplication
-title: Message deduplication
-sidebar_label: "Message deduplication"
-original_id: cookbooks-deduplication
----
-
-**Message deduplication** is a feature of Pulsar that, when enabled, ensures that each message produced on Pulsar topics is persisted to disk *only once*, even if the message is produced more than once. Message deduplication essentially unburdens Pulsar applications of the responsibility of ensuring deduplication and instead handles it automatically on the server side.
-
-Using message deduplication in Pulsar involves making some [configuration changes](#configuration) to your Pulsar brokers as well as some minor changes to the behavior of Pulsar [clients](#clients).
-
-> For a more thorough theoretical explanation of message deduplication, see the [Concepts and Architecture](concepts-messaging.md#message-deduplication) document.
-
-
-## How it works
-
-Message deduplication can be enabled and disabled on a per-namespace basis. By default, it is *disabled* on all namespaces and can be enabled in the following ways:
-
-* Using the [`pulsar-admin namespaces`](#enabling) interface
-* As a broker-level [default](#default) for all namespaces
-
-## Configuration for message deduplication
-
-You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available:
-
-Parameter | Description | Default
-:---------|:------------|:-------
-`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar [broker](reference-terminology.md#broker). If set to `true`, message deduplication will be enabled by default on all namespaces; if set to `false` (the default), deduplication will have to be [enabled](#enabling) and [disabled](#disabling) on a per-namespace basis. | `false`
-`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information will be stored for deduplication purposes. | `10000`
-`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
-`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. | `360` (6 hours)
-
-### Setting the broker-level default {#default}
-
-By default, message deduplication is *disabled* on all Pulsar namespaces. To enable it by default on all namespaces, set the `brokerDeduplicationEnabled` parameter to `true` and re-start the broker.
-
-Regardless of the value of `brokerDeduplicationEnabled`, [enabling](#enabling) and [disabling](#disabling) via the CLI will override the broker-level default.
-
-### Enabling message deduplication {#enabling}
-
-You can enable message deduplication on specific namespaces, regardless of the [default](#default) for the broker, using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace. Here's an example with `<tenant>/<namespace>`:
-
-```bash
-
-$ bin/pulsar-admin namespaces set-deduplication \
-  public/default \
-  --enable # or just -e
-
-```
-
-### Disabling message deduplication {#disabling}
-
-You can disable message deduplication on a specific namespace using the same method shown [above](#enabling), except using the `--disable`/`-d` flag instead. Here's an example with `<tenant>/<namespace>`:
-
-```bash
-
-$ bin/pulsar-admin namespaces set-deduplication \
-  public/default \
-  --disable # or just -d
-
-```
-
-## Message deduplication and Pulsar clients {#clients}
-
-If you enable message deduplication in your Pulsar brokers, you won't need to make any major changes to your Pulsar clients. There are, however, two settings that you need to provide for your client producers:
-
-1. The producer must be given a name
-1. The message send timeout needs to be set to infinity (i.e. no timeout)
-
-Instructions for [Java](#java), [Python](#python), and [C++](#cpp) clients can be found below.
-
-### Java clients {#java}
-
-To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter and set the timeout to 0 using the `sendTimeout` setter. Here's an example:
-
-```java
-
-import org.apache.pulsar.client.api.Producer;
-import org.apache.pulsar.client.api.PulsarClient;
-import java.util.concurrent.TimeUnit;
-
-PulsarClient pulsarClient = PulsarClient.builder()
-        .serviceUrl("pulsar://localhost:6650")
-        .build();
-Producer producer = pulsarClient.newProducer()
-        .producerName("producer-1")
-        .topic("persistent://public/default/topic-1")
-        .sendTimeout(0, TimeUnit.SECONDS)
-        .create();
-
-```
-
-### Python clients {#python}
-
-To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name` and the timeout to 0 using `send_timeout_millis`. Here's an example:
-
-```python
-
-import pulsar
-
-client = pulsar.Client("pulsar://localhost:6650")
-producer = client.create_producer(
-    "persistent://public/default/topic-1",
-    producer_name="producer-1",
-    send_timeout_millis=0)
-
-```
-
-### C++ clients {#cpp}
-
-To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName` and the timeout to 0 using `setSendTimeout`. Here's an example:
-
-```cpp
-
-#include <pulsar/Client.h>
-
-using namespace pulsar;
-
-std::string serviceUrl = "pulsar://localhost:6650";
-std::string topic = "persistent://some-tenant/ns1/topic-1";
-std::string producerName = "producer-1";
-
-Client client(serviceUrl);
-
-ProducerConfiguration producerConfig;
-producerConfig.setSendTimeout(0);
-producerConfig.setProducerName(producerName);
-
-Producer producer;
-
-Result result = client.createProducer(topic, producerConfig, producer);
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-encryption.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-encryption.md
deleted file mode 100644
index f0d8fb8735e..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-encryption.md
+++ /dev/null
@@ -1,184 +0,0 @@
----
-id: cookbooks-encryption
-title: Pulsar Encryption
-sidebar_label: "Encryption"
-original_id: cookbooks-encryption
----
-
-Pulsar encryption allows applications to encrypt messages at the producer and decrypt at the consumer. Encryption is performed using the public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.
-
-## Asymmetric and symmetric encryption
-
-Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (data key) is encrypted using the application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone.
-
-The key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the key pair.
-
-The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header. Only entities with the private key (in this case the consumer) will be able to decrypt the data key, which is used to decrypt the message.
-
-A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.
-
-Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose/delete the private key, your message is irretrievably lost and unrecoverable.
-
-## Producer
-![alt text](/assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")
-
-## Consumer
-![alt text](/assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")
-
-## Here are the steps to get started:
-
-1. Create your ECDSA or RSA public/private key pair.
-
-```shell
-
-openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
-openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem
-
-```
-
-2. Add the public and private key to your key management system, and configure your producer clients to retrieve public keys and your consumer clients to retrieve private keys.
-3. Implement the CryptoKeyReader::getPublicKey() interface for the producer and the CryptoKeyReader::getPrivateKey() interface for the consumer; these are invoked by the Pulsar client to load the keys.
-4. Add the encryption key to the producer configuration: `conf.addEncryptionKey("myapp.key")`
-5. Add the CryptoKeyReader implementation to the producer/consumer config: `conf.setCryptoKeyReader(keyReader)`
-6. Sample producer application:
-
-```java
-
-import org.apache.pulsar.client.api.CryptoKeyReader;
-import org.apache.pulsar.client.api.EncryptionKeyInfo;
-
-import java.io.IOException;
-import java.nio.file.Files;
-import java.nio.file.Paths;
-import java.util.Map;
-
-class RawFileKeyReader implements CryptoKeyReader {
-
-    String publicKeyFile = "";
-    String privateKeyFile = "";
-
-    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
-        publicKeyFile = pubKeyFile;
-        privateKeyFile = privKeyFile;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-}
-PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
-
-ProducerConfiguration prodConf = new ProducerConfiguration();
-prodConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
-prodConf.addEncryptionKey("myappkey");
-
-Producer producer = pulsarClient.createProducer("persistent://my-tenant/my-ns/my-topic", prodConf);
-
-for (int i = 0; i < 10; i++) {
-    producer.send("my-message".getBytes());
-}
-
-pulsarClient.close();
-
-```
-
-7. Sample Consumer Application:
-
-```java
-
-class RawFileKeyReader implements CryptoKeyReader {
-
-    String publicKeyFile = "";
-    String privateKeyFile = "";
-
-    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
-        publicKeyFile = pubKeyFile;
-        privateKeyFile = privKeyFile;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-
-    @Override
-    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
-        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
-        try {
-            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
-        } catch (IOException e) {
-            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
-            e.printStackTrace();
-        }
-        return keyInfo;
-    }
-}
-
-ConsumerConfiguration consConf = new ConsumerConfiguration();
-consConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
-PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
-Consumer consumer = pulsarClient.subscribe("persistent://my-tenant/my-ns/my-topic", "my-subscriber-name", consConf);
-Message msg = null;
-
-for (int i = 0; i < 10; i++) {
-    msg = consumer.receive();
-    // do something
-    System.out.println("Received: " + new String(msg.getData()));
-}
-
-// Acknowledge the consumption of all messages at once
-consumer.acknowledgeCumulative(msg);
-pulsarClient.close();
-
-```
-
-## Key rotation
-Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. The asymmetric public key is automatically fetched by the producer every 4 hours by calling CryptoKeyReader::getPublicKey() to retrieve the latest version.
-
-## Enabling encryption at the producer application:
-If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
-1. The consumer application provides you access to their public key, which you add to your producer keys
-1. You grant access to one of the private keys from the pairs used by the producer
-
-In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the config. The consumer will be able to decrypt the message as long as it has access to at least one of the keys.
-
-For example, if messages need to be encrypted using two keys, `myapp.messagekey1` and `myapp.messagekey2`:
-
-```java
-
-conf.addEncryptionKey("myapp.messagekey1");
-conf.addEncryptionKey("myapp.messagekey2");
-
-```
-
-## Decrypting encrypted messages at the consumer application:
-Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application so that it can encrypt messages using your public key.
-
-## Handling Failures:
-* Producer/consumer loses access to the key
-  * The producer action will fail, indicating the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call conf.setCryptoFailureAction(ProducerCryptoFailureAction) to control the producer behavior. The default behavior is to fail the request.
-  * If consumption fails due to a decryption failure or missing keys on the consumer, the application has the option to consume the encrypted message or discard it. Call conf.setCryptoFailureAction(ConsumerCryptoFailureAction) to control the consumer behavior. The default behavior is to fail the request. The application will never be able to decrypt the messages if the private key is permanently lost.
-* Batch messaging
-  * If decryption fails and the message contains batch messages, the client will not be able to retrieve individual messages in the batch, so message consumption fails even if conf.setCryptoFailureAction() is set to CONSUME.
-* If decryption fails, message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key to decrypt the message, the only option is to skip/discard the backlogged messages.
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-message-queue.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-message-queue.md
deleted file mode 100644
index 6897dd9b00b..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-message-queue.md
+++ /dev/null
@@ -1,101 +0,0 @@
----
-id: cookbooks-message-queue
-title: Using Pulsar as a message queue
-sidebar_label: "Message queue"
-original_id: cookbooks-message-queue
----
-
-Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.
-
-Pulsar is a great choice for a message queue because:
-
-* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
-* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)
-
-> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).
-
-
-# Client configuration changes
-
-To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:
-
-* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
-* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection. Setti [...]
-
-   The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers, and it cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.
-
-## Java clients
-
-Here's an example Java consumer configuration that uses a shared subscription:
-
-```java
-
-import org.apache.pulsar.client.api.Consumer;
-import org.apache.pulsar.client.api.PulsarClient;
-import org.apache.pulsar.client.api.SubscriptionType;
-
-String SERVICE_URL = "pulsar://localhost:6650";
-String TOPIC = "persistent://public/default/mq-topic-1";
-String subscription = "sub-1";
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl(SERVICE_URL)
-        .build();
-
-Consumer consumer = client.newConsumer()
-        .topic(TOPIC)
-        .subscriptionName(subscription)
-        .subscriptionType(SubscriptionType.Shared)
-        // If you'd like to restrict the receiver queue size
-        .receiverQueueSize(10)
-        .subscribe();
-
-```
-
-## Python clients
-
-Here's an example Python consumer configuration that uses a shared subscription:
-
-```python
-
-from pulsar import Client, ConsumerType
-
-SERVICE_URL = "pulsar://localhost:6650"
-TOPIC = "persistent://public/default/mq-topic-1"
-SUBSCRIPTION = "sub-1"
-
-client = Client(SERVICE_URL)
-consumer = client.subscribe(
-    TOPIC,
-    SUBSCRIPTION,
-    # If you'd like to restrict the receiver queue size
-    receiver_queue_size=10,
-    consumer_type=ConsumerType.Shared)
-
-```
-
-## C++ clients
-
-Here's an example C++ consumer configuration that uses a shared subscription:
-
-```cpp
-
-#include <pulsar/Client.h>
-
-using namespace pulsar;
-
-std::string serviceUrl = "pulsar://localhost:6650";
-std::string topic = "persistent://public/default/mq-topic-1";
-std::string subscription = "sub-1";
-
-Client client(serviceUrl);
-
-ConsumerConfiguration consumerConfig;
-consumerConfig.setConsumerType(ConsumerShared);
-// If you'd like to restrict the receiver queue size
-consumerConfig.setReceiverQueueSize(10);
-
-Consumer consumer;
-
-Result result = client.subscribe(topic, subscription, consumerConfig, consumer);
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-non-persistent.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-non-persistent.md
deleted file mode 100644
index f8311196816..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-non-persistent.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-id: cookbooks-non-persistent
-title: Non-persistent messaging
-sidebar_label: "Non-persistent messaging"
-original_id: cookbooks-non-persistent
----
-
-**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and kept only in memory. This cookbook provides:
-
-* A basic [conceptual overview](#overview) of non-persistent topics
-* Information about [configurable parameters](#configuration) related to non-persistent topics
-* A guide to the [CLI interface](#cli) for managing non-persistent topics
-
-## Overview
-
-By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
-
-Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.
-
-Non-persistent topics have names of this form (note the `non-persistent` in the name):
-
-```http
-
-non-persistent://tenant/namespace/topic
-
-```
-
-> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation.
-
-## Using
-
-> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration.
-
-In order to use non-persistent topics, you only need to differentiate them by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster:
-
-```bash
-
-$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \
-  --num-produce 1 \
-  --messages "This message will be stored only in memory"
-
-```
-
-> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-non-persistent-topics.md) guide.
-
-## Enabling
-
-In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) parameter must be set to `true`. This is the default, so you won't need to take any action to enable non-persistent messaging.
-
-
-> #### Configuration for standalone mode
-> If you're running Pulsar in standalone mode, the same configurable parameters are available but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file. 
-
-If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`.
-
-## Managing with the CLI
-
-Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [create a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), get [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [list](reference-pulsar-admin.md) non-persistent topics under a namespace, and more.
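-
-For example (the topic names and partition count below are illustrative), you could create a partitioned non-persistent topic and fetch stats for a non-persistent topic:
-
-```bash
-
-# Create a partitioned non-persistent topic
-$ bin/pulsar-admin non-persistent create-partitioned-topic \
-  non-persistent://public/default/example-np-partitioned-topic \
-  --partitions 4
-
-# Get stats for a non-persistent topic
-$ bin/pulsar-admin non-persistent stats \
-  non-persistent://public/default/example-np-topic
-
-```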
-
-## Using with Pulsar clients
-
-You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type.
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-partitioned.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-partitioned.md
deleted file mode 100644
index b033d547904..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-partitioned.md
+++ /dev/null
@@ -1,86 +0,0 @@
----
-id: cookbooks-partitioned
-title: Partitioned topics
-sidebar_label: "Partitioned Topics"
-original_id: cookbooks-partitioned
----
-
-By default, Pulsar topics are served by a single broker. Using only a single broker, however, limits a topic's maximum throughput. *Partitioned topics* are a special type of topic that can span multiple brokers and thus allow for much higher throughput. For an explanation of how partitioned topics work, see the [Partitioned Topics](concepts-messaging.md#partitioned-topics) concepts.
-
-You can [publish](#publishing-to-partitioned-topics) to partitioned topics using Pulsar's client libraries and you can [create and manage](#managing-partitioned-topics) partitioned topics using Pulsar's [admin API](admin-api-overview.md).
-
-## Publishing to partitioned topics
-
-When publishing to partitioned topics, the only difference from non-partitioned topics is that you need to specify a [routing mode](concepts-messaging.md#routing-modes) when you create a new [producer](reference-terminology.md#producer). Examples for [Java](#java) are below.
-
-### Java
-
-Publishing messages to partitioned topics in the Java client works much like [publishing to normal topics](client-libraries-java.md#using-producers). The difference is that you need to specify either one of the currently available message routers or a custom router.
-
-#### Routing mode
-
-You can specify the routing mode when you build the producer (as shown in the example below). You have three options:
-
-* `SinglePartition`
-* `RoundRobinPartition`
-* `CustomPartition`
-
-Here's an example:
-
-```java
-
-String pulsarBrokerRootUrl = "pulsar://localhost:6650";
-String topic = "persistent://my-tenant/my-namespace/my-topic";
-
-PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic(topic)
-        .messageRoutingMode(MessageRoutingMode.SinglePartition)
-        .create();
-producer.send("Partitioned topic message".getBytes());
-
-```
-
-#### Custom message router
-
-To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method:
-
-```java
-
-public interface MessageRouter extends Serializable {
-    int choosePartition(Message msg);
-}
-
-```
-
-Here's a (not very useful!) router that routes every message to partition 10:
-
-```java
-
-public class AlwaysTenRouter implements MessageRouter {
-    public int choosePartition(Message msg) {
-        return 10;
-    }
-}
-
-```
-
-With that implementation in hand, you can send messages like this:
-
-```java
-
-String pulsarBrokerRootUrl = "pulsar://localhost:6650";
-String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic";
-
-PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
-Producer<byte[]> producer = pulsarClient.newProducer()
-        .topic(topic)
-        .messageRouter(new AlwaysTenRouter())
-        .create();
-producer.send("Partitioned topic message".getBytes());
-
-```
-
-## Managing partitioned topics
-
-You can use Pulsar's [admin API](admin-api-overview.md) to create and manage [partitioned topics](admin-api-partitioned-topics.md).
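-
-For example (the topic name and partition count are illustrative), a partitioned topic can be created from the command line like this:
-
-```bash
-
-# Create a topic with 4 partitions
-$ bin/pulsar-admin topics create-partitioned-topic \
-  persistent://my-tenant/my-namespace/my-topic \
-  --partitions 4
-
-```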
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-retention-expiry.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-retention-expiry.md
deleted file mode 100644
index ea2fd385e2e..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-retention-expiry.md
+++ /dev/null
@@ -1,317 +0,0 @@
----
-id: cookbooks-retention-expiry
-title: Message retention and expiry
-sidebar_label: "Message retention and expiry"
-original_id: cookbooks-retention-expiry
----
-
-Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, brokers:
-
-* immediately delete all messages that have been acknowledged on every subscription, and
-* persistently store all unacknowledged messages in a [backlog](#backlog-quotas).
-
-In Pulsar, you can override both of these default behaviors, at the namespace level, in two ways:
-
-* You can persistently store messages that have already been consumed and acknowledged for a minimum time by setting [retention policies](#retention-policies).
-* Messages that are not acknowledged within a specified timeframe can be automatically marked as consumed by specifying a [time to live](#time-to-live-ttl) (TTL).
-
-Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL at the namespace level (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).
-
-
-> #### Retention and TTL are solving two different problems
-> * Message retention: Keep the data for at least X hours (even if acknowledged)
-> * Time-to-live: Discard data after some time (by automatically acknowledging)
->
-> In most cases, applications will want to use either one or the other (or none). 
-
-
-## Retention policies
-
-By default, when a Pulsar message arrives at a broker it will be stored until it has been acknowledged by a consumer, at which point it will be deleted. You can override this behavior and retain even messages that have already been acknowledged by setting a *retention policy* on all the topics in a given namespace. When you set a retention policy you can set either a *size limit* or a *time limit*.
-
-When you set a size limit of, say, 10 gigabytes, then messages in all topics in the namespace, *even acknowledged messages*, will be retained until the size limit for the topic is reached; if you set a time limit of, say, 1 day, then messages for all topics in the namespace will be retained for 24 hours.
-
-It is also possible to set *infinite* retention time or size, by setting `-1` for either time or
-size retention.
-
-### Defaults
-
-There are two configuration parameters that you can use to set [instance](reference-terminology.md#instance)-wide defaults for message retention: [`defaultRetentionTimeInMinutes=0`](reference-configuration.md#broker-defaultRetentionTimeInMinutes) and [`defaultRetentionSizeInMB=0`](reference-configuration.md#broker-defaultRetentionSizeInMB).
-
-Both of these parameters are in the [`broker.conf`](reference-configuration.md#broker) configuration file.
-
-### Set retention policy
-
-You can set a retention policy for a namespace by specifying the namespace as well as both a size limit *and* a time limit.
-
-#### pulsar-admin
-
-Use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag.
-
-##### Examples
-
-To set a size limit of 10 gigabytes and a time limit of 3 hours for the `my-tenant/my-ns` namespace:
-
-```shell
-
-$ pulsar-admin namespaces set-retention my-tenant/my-ns \
-  --size 10G \
-  --time 3h
-
-```
-
-To set retention with infinite time and a size limit:
-
-```shell
-
-$ pulsar-admin namespaces set-retention my-tenant/my-ns \
-  --size 1T \
-  --time -1
-
-```
-
-Similarly, the size limit can also be set to unlimited:
-
-```shell
-
-$ pulsar-admin namespaces set-retention my-tenant/my-ns \
-  --size -1 \
-  --time -1
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention}
-
-#### Java
-
-```java
-
-int retentionTime = 10; // 10 minutes
-int retentionSize = 500; // 500 megabytes
-RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize);
-admin.namespaces().setRetention(namespace, policies);
-
-```
-
-### Get retention policy
-
-You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`.
-
-#### pulsar-admin
-
-Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace.
-
-##### Example
-
-```shell
-
-$ pulsar-admin namespaces get-retention my-tenant/my-ns
-{
-  "retentionTimeInMinutes": 10,
-  "retentionSizeInMB": 0
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention}
-
-#### Java
-
-```java
-
-admin.namespaces().getRetention(namespace);
-
-```
-
-## Backlog quotas
-
-*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged.
-
-You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting:
-
-* an allowable *size threshold* for each topic in the namespace
-* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded.
-
-The following retention policies are available:
-
-Policy | Action
-:------|:------
-`producer_request_hold` | The broker will hold and not persist produce request payload
-`producer_exception` | The broker will disconnect from the client by throwing an exception
-`consumer_backlog_eviction` | The broker will begin discarding backlog messages
-
-
-> #### Beware the distinction between retention policy types
-> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar, one that applies to persistent storage of already-acknowledged messages and one that applies to backlogs.
-
-
-Backlog quotas are handled at the namespace level. They can be managed as follows:
-
-### Set size thresholds and backlog retention policies
-
-You can set a size threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit, and a policy by name.
-
-#### pulsar-admin
-
-Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag, and a retention policy using the `-p`/`--policy` flag.
-
-##### Example
-
-```shell
-
-$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
-  --limit 2G \
-  --policy producer_request_hold
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap}
-
-#### Java
-
-```java
-
-long sizeLimit = 2147483648L;
-BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold;
-BacklogQuota quota = new BacklogQuota(sizeLimit, policy);
-admin.namespaces().setBacklogQuota(namespace, quota);
-
-```
-
-### Get backlog threshold and backlog retention policy
-
-You can see which size threshold and backlog retention policy has been applied to a namespace.
-
-#### pulsar-admin
-
-Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example:
-
-```shell
-
-$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns
-{
-  "destination_storage": {
-    "limit" : 2147483648,
-    "policy" : "producer_request_hold"
-  }
-}
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap}
-
-#### Java
-
-```java
-
-Map<BacklogQuota.BacklogQuotaType,BacklogQuota> quotas =
-  admin.namespaces().getBacklogQuotas(namespace);
-
-```
-
-### Remove backlog quotas
-
-#### pulsar-admin
-
-Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace. Here's an example:
-
-```shell
-
-$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns
-
-```
-
-#### REST API
-
-{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota}
-
-#### Java
-
-```java
-
-admin.namespaces().removeBacklogQuota(namespace);
-
-```
-
-### Clear backlog
-
-#### pulsar-admin
-
-Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand.
-
-##### Example
-
-```shell
-
-$ pulsar-admin namespaces clear-backlog my-tenant/my-ns
-
-```
-
-By default, you will be prompted to confirm that you really want to clear the backlog for the namespace. You can skip the prompt using the `-f`/`--force` flag.
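-
-#### Java
-
-Assuming your Java admin client version exposes the `clearNamespaceBacklog` method (it mirrors the `clear-backlog` subcommand), a minimal sketch looks like this:
-
-```java
-
-admin.namespaces().clearNamespaceBacklog(namespace);
-
-```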
-
-## Time to live (TTL)
-
-By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained.
-
-### Set the TTL for a namespace
-
-#### pulsar-admin
-
-Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag.
-
-##### Example
-
-```shell
-
-$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
-  --messageTTL 120 # TTL of 2 minutes
-
-```
-
-#### REST API
-
-{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL}
-
-#### Java
-
-```java
-
-admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);
-
-```
-
-### Get the TTL configuration for a namespace
-
-#### pulsar-admin
-
-Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.
-
-##### Example
-
-```shell
-
-$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
-60
-
-```
-
-#### REST API
-
-{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL}
-
-#### Java
-
-```java
-
-admin.namespaces().getNamespaceMessageTTL(namespace);
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-tiered-storage.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-tiered-storage.md
deleted file mode 100644
index 328b7c6d64e..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/cookbooks-tiered-storage.md
+++ /dev/null
@@ -1,154 +0,0 @@
----
-id: cookbooks-tiered-storage
-title: Tiered Storage
-sidebar_label: "Tiered Storage"
-original_id: cookbooks-tiered-storage
----
-
-Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.
-
-## When should I use Tiered Storage?
-
-Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.
-
-## The offloading mechanism
-
-A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed, and the data within them is immutable. This is known as a segment-oriented architecture.
-
-![Tiered storage](/assets/pulsar-tiered-storage.png "Tiered Storage")
-
-The Tiered Storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.
-
-## Amazon S3
-
-Tiered storage currently supports S3 for long-term storage. On the broker, the administrator must configure an S3 bucket and the AWS region where the bucket exists. Offloaded data will be placed into this bucket.
-
-The configured S3 bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.
-
-Pulsar uses multipart objects to upload the segment data. It is possible that a broker could crash while uploading the data. We recommend you add a lifecycle rule to your S3 bucket to expire incomplete multipart uploads after a day or two, to avoid getting charged for incomplete uploads.
-
-### Configuring the broker
-
-Offloading is configured in ```broker.conf```. 
-
-At a minimum, the user must configure the driver, the region and the bucket.
-
-```conf
-
-managedLedgerOffloadDriver=S3
-s3ManagedLedgerOffloadRegion=eu-west-3
-s3ManagedLedgerOffloadBucket=pulsar-topic-offload
-
-```
-
-It is also possible to specify the s3 endpoint directly, using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful if you are using a non-AWS storage service which provides an S3 compatible API. 
-
-> If the endpoint is specified directly, then the region must _not_ be set.
-
-> The broker.conf of all brokers must have the same configuration for driver, region and bucket for offload to avoid data becoming unavailable as topics move from one broker to another.
-
-Pulsar also provides some knobs to configure the size of requests sent to S3.
-
-- `s3ManagedLedgerOffloadMaxBlockSizeInBytes` configures the maximum size of a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
-- `s3ManagedLedgerOffloadReadBufferSizeInBytes` configures the block size for each individual read when reading back data from S3. Default is 1MB.
-
-In both cases, these should not be touched unless you know what you are doing.
-
-> The broker must be rebooted for any changes in the configuration to take effect.
-
-### Authenticating with S3
-
-To be able to access S3, you need to authenticate with S3. Pulsar does not provide any direct means of configuring authentication for S3, but relies on the mechanisms supported by the [DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).
-
-Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.
-
-1. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.
-
-```bash
-
-export AWS_ACCESS_KEY_ID=ABC123456789
-export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-```
-
-> "export" is important so that the variables are made available in the environment of spawned processes.
-
-
-2. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.
-
-```bash
-
-PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacityPerThread=4096"
-
-```
-
-3. Set the access credentials in ```~/.aws/credentials```.
-
-```conf
-
-[default]
-aws_access_key_id=ABC123456789
-aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
-
-```
-
-If you are running in EC2 you can also use instance profile credentials, provided through the EC2 metadata service, but that is out of scope for this cookbook.
-
-> The broker must be rebooted for credentials specified in pulsar_env to take effect.
-
-## Configuring offload to run automatically
-
-Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored in the Pulsar cluster. Once the topic reaches the threshold, an offload operation will be triggered. Setting a negative value for the threshold will disable automatic offloading. Setting the threshold to 0 will cause the broker to offload data as soon as it possibly can.
-
-```bash
-
-$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
-
-```
-
-> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not be triggered until the current segment is full.
-
-
-## Triggering offload manually
-
-Offloading can be manually triggered through a REST endpoint on the Pulsar broker. We provide a CLI which will call this REST endpoint for you.
-
-When triggering offload, you must specify the maximum size, in bytes, of backlog that will be retained locally in BookKeeper. The offload mechanism will offload segments from the start of the topic backlog until this condition is met.
-
-```bash
-
-$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
-Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
-
-```
-
-The command to trigger an offload will not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.
-
-```bash
-
-$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
-Offload is currently running
-
-```
-
-To wait for offload to complete, add the -w flag.
-
-```bash
-
-$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
-Offload was a success
-
-```
-
-If there is an error offloading, the error will be propagated to the offload-status command.
-
-```bash
-
-$ bin/pulsar-admin topics offload-status persistent://public/default/topic1                                                                                                       
-Error in offload
-null
-
-Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads.  Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhr [...]
-
-```
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-aws.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-aws.md
deleted file mode 100644
index 488a8de2804..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-aws.md
+++ /dev/null
@@ -1,271 +0,0 @@
----
-id: deploy-aws
-title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
-sidebar_label: "Amazon Web Services"
-original_id: deploy-aws
----
-
-> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).
-
-One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary to run the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install [...]
-
-## Requirements and setup
-
-In order to install a Pulsar cluster on AWS using Terraform and Ansible, you'll need:
-
-* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
-* Python and [pip](https://pip.pypa.io/en/stable/)
-* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts
-
-You'll also need to make sure that you're currently logged into your AWS account via the `aws` tool:
-
-```bash
-
-$ aws configure
-
-```
-
-## Installation
-
-You can install Ansible on Linux or macOS using pip.
-
-```bash
-
-$ pip install ansible
-
-```
-
-You can install Terraform using the instructions [here](https://learn.hashicorp.com/tutorials/terraform/install-cli).
-
-You'll also need to have the Terraform and Ansible configurations for Pulsar locally on your machine. They're contained in Pulsar's [GitHub repository](https://github.com/apache/incubator-pulsar), which you can fetch using Git:
-
-```bash
-
-$ git clone https://github.com/apache/incubator-pulsar
-$ cd incubator-pulsar/deployment/terraform-ansible/aws
-
-```
-
-## SSH setup
-
-> If you already have an SSH key and would like to use it, you can skip generating the SSH keys and update the `private_key_file` setting
-> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
->
-> For example, if you already had a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
-> you can do the following:
->
-> 1. update `ansible.cfg` with the following values:
->
->    ```shell
->    private_key_file=~/.ssh/pulsar_aws
->    ```
->
-> 2. update `terraform.tfvars` with the following values:
->
->    ```shell
->    public_key_path=~/.ssh/pulsar_aws.pub
->    ```
-
-
-In order to create the necessary AWS resources using Terraform, you'll need to create an SSH key. To create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:
-
-```bash
-
-$ ssh-keygen -t rsa
-
-```
-
-Do *not* enter a passphrase (hit **Enter** when prompted instead). To verify that a key has been created:
-
-```bash
-
-$ ls ~/.ssh
-id_rsa               id_rsa.pub
-
-```
-
-## Creating AWS resources using Terraform
-
-To get started building AWS resources with Terraform, you'll need to install all Terraform dependencies:
-
-```bash
-
-$ terraform init
-# This will create a .terraform folder
-
-```
-
-Once you've done that, you can apply the default Terraform configuration:
-
-```bash
-
-$ terraform apply
-
-```
-
-You should then see this prompt:
-
-```bash
-
-Do you want to perform these actions?
-  Terraform will perform the actions described above.
-  Only 'yes' will be accepted to approve.
-
-  Enter a value:
-
-```
-
-Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When it's finished, you should see `Apply complete!` along with some other information, including the number of resources created.
-
-### Applying a non-default configuration
-
-You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available:
-
-Variable name | Description | Default
-:-------------|:------------|:-------
-`public_key_path` | The path of the public key that you've generated. | `~/.ssh/id_rsa.pub`
-`region` | The AWS region in which the Pulsar cluster will run | `us-west-2`
-`availability_zone` | The AWS availability zone in which the Pulsar cluster will run | `us-west-2a`
-`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that will be used by the cluster | `ami-9fa343e7`
-`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
-`num_bookie_nodes` | The number of bookies that will run in the cluster | 3
-`num_broker_nodes` | The number of Pulsar brokers that will run in the cluster | 2
-`num_proxy_nodes` | The number of Pulsar proxies that will run in the cluster | 1
-`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that will be used by network assets for the cluster | `10.0.0.0/16`
-`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)
-
-### What is installed
-
-When you run the Ansible playbook, the following AWS resources will be used:
-
-* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
-  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
-  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
-  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
-  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) (a [c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
-* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
-* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
-* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
-* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
-* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC
-
-All EC2 instances for the cluster will run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.
-
-### Fetching your Pulsar connection URL
-
-When you apply the Terraform configuration by running `terraform apply`, Terraform will output a value for the `pulsar_service_url`. It should look something like this:
-
-```
-
-pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650
-
-```
-
-You can fetch that value at any time by running `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename doesn't reflect that):
-
-```bash
-
-$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value
-
-```
-
-### Destroying your cluster
-
-At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:
-
-```bash
-
-$ terraform destroy
-
-```
-
-## Setup Disks
-
-Before you run the Pulsar playbook, you need to mount the disks to the correct directories on the bookie nodes.
-Since different types of machines have different disk layouts, if you change the `instance_types` in your Terraform
-config, you need to update the task defined in the `setup-disk.yaml` file.
-
-To setup disks on bookie nodes, use this command:
-
-```bash
-
-$ ansible-playbook \
-  --user='ec2-user' \
-  --inventory=`which terraform-inventory` \
-  setup-disk.yaml
-
-```
-
-After running this command, the disks will be mounted under `/mnt/journal` as the journal disk and `/mnt/storage` as the ledger disk.
-It is important to run this command only once! If you run this command again after you have run the Pulsar playbook,
-it might erase your disks and cause the bookies to fail to start up.
-
-## Running the Pulsar playbook
-
-Once you've created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible. To do so, use this command:
-
-```bash
-
-$ ansible-playbook \
-  --user='ec2-user' \
-  --inventory=`which terraform-inventory` \
-  ../deploy-pulsar.yaml
-
-```
-
-If you've created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag:
-
-```bash
-
-$ ansible-playbook \
-  --user='ec2-user' \
-  --inventory=`which terraform-inventory` \
-  --private-key="~/.ssh/some-non-default-key" \
-  ../deploy-pulsar.yaml
-
-```
-
-## Accessing the cluster
-
-You can now access your running Pulsar using the unique Pulsar connection URL for your cluster, which you can obtain using the instructions [above](#fetching-your-pulsar-connection-url).
-
-For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:
-
-```bash
-
-$ pip install pulsar-client
-
-```
-
-Now, open up the Python shell using the `python` command:
-
-```bash
-
-$ python
-
-```
-
-Once in the shell, run the following:
-
-```python
-
->>> import pulsar
->>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
-# Make sure to use your connection URL
->>> producer = client.create_producer('persistent://public/default/test-topic')
->>> producer.send('Hello world')
->>> client.close()
-
-```
-
-If all of these commands are successful, your cluster can now be used by Pulsar clients!
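-
-If you prefer the Java client, here is an equivalent producer sketch using the standard Pulsar Java client API (substitute the `pulsar_service_url` value that Terraform printed for your cluster; the topic name is the same illustrative one used above):
-
-```java
-
-// Make sure to use your own connection URL
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650")
-        .build();
-
-Producer<byte[]> producer = client.newProducer()
-        .topic("persistent://public/default/test-topic")
-        .create();
-
-producer.send("Hello world".getBytes());
-client.close();
-
-```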
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-bare-metal-multi-cluster.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-bare-metal-multi-cluster.md
deleted file mode 100644
index 407f04deba5..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-bare-metal-multi-cluster.md
+++ /dev/null
@@ -1,459 +0,0 @@
----
-id: deploy-bare-metal-multi-cluster
-title: Deploying a multi-cluster on bare metal
-sidebar_label: "Bare metal multi-cluster"
-original_id: deploy-bare-metal-multi-cluster
----
-
-:::tip
-
-1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you're interested in experimenting with
-Pulsar or using it in a startup or on a single team, we recommend opting for a single cluster. For instructions on deploying a single cluster,
-see the guide [here](deploy-bare-metal.md).
-2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download `apache-pulsar-io-connectors`
-package and make sure it is installed under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you
-have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
-
-:::
-
-A Pulsar *instance* consists of multiple Pulsar clusters working in unison. Clusters can be distributed across data centers or geographical regions and can replicate amongst themselves using [geo-replication](administration-geo.md). Deploying a multi-cluster Pulsar instance involves the following basic steps:
-
-* Deploying two separate [ZooKeeper](#deploying-zookeeper) quorums: a [local](#deploying-local-zookeeper) quorum for each cluster in the instance and a [configuration store](#configuration-store) quorum for instance-wide tasks
-* Initializing [cluster metadata](#cluster-metadata-initialization) for each cluster
-* Deploying a [BookKeeper cluster](#deploying-bookkeeper) of bookies in each Pulsar cluster
-* Deploying [brokers](#deploying-brokers) in each Pulsar cluster
-
-If you're deploying a single Pulsar cluster, see the [Clusters and Brokers](getting-started-standalone.md#starting-the-cluster) guide.
-
-> #### Running Pulsar locally or on Kubernetes?
-> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you'd like to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you're looking to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes on [Google Kubernetes Engine](deploy-kubernetes#pulsar- [...]
-
-## System requirement
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
-
-## Installing Pulsar
-
-To get started running Pulsar, download a binary tarball release in one of the following ways:
-
-* by clicking the link below and downloading the release from an Apache mirror:
-
-  * <a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>
-
-* from the Pulsar [downloads page](pulsar:download_page_url)
-* from the Pulsar [releases page](https://github.com/apache/incubator-pulsar/releases/latest)
-* using [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=incubator/pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:version@-bin.tar.gz' -O apache-pulsar-@pulsar:version@-bin.tar.gz
-  
-  ```
-
-Once the tarball is downloaded, untar it and `cd` into the resulting directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-## What your package contains
-
-The Pulsar binary package initially contains the following directories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's [command-line tools](reference-cli-tools.md), such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
-`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
-`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md)
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar
-`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase
-
-These directories will be created once you begin running Pulsar:
-
-Directory | Contains
-:---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper
-`instances` | Artifacts created for [Pulsar Functions](functions-overview.md)
-`logs` | Logs created by the installation
-
-
-## Deploying ZooKeeper
-
-Each Pulsar instance relies on two separate ZooKeeper quorums.
-
-* [Local ZooKeeper](#deploying-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
-* [Configuration Store](#deploying-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). The configuration store quorum can be provided by an independent cluster of machines or by the same machines used by local ZooKeeper.
-
-### Deploying local ZooKeeper
-
-ZooKeeper manages a variety of essential coordination- and configuration-related tasks for Pulsar.
-
-Deploying a Pulsar instance requires you to stand up one local ZooKeeper cluster *per Pulsar cluster*. 
-
-To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. Here's an example for a three-node cluster:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```
-
-On each host, you need to specify the ID of the node in each node's `myid` file, which is in each server's `data/zookeeper` folder by default (this can be changed via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
-
-> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed info on `myid` and more.
-
-On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
-
-```shell
-
-$ mkdir -p data/zookeeper
-$ echo 1 > data/zookeeper/myid
-
-```
-
-On `zk2.us-west.example.com` the command would be `echo 2 > data/zookeeper/myid` and so on.
-
-Once each server has been added to the `zookeeper.conf` configuration and has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```shell
-
-$ bin/pulsar-daemon start zookeeper
-
-```
-
-### Deploying the configuration store 
-
-The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster used to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
-
-If you're deploying a [single-cluster](#single-cluster-pulsar-instance) instance, then you will not need a separate cluster for the configuration store. If, however, you're deploying a [multi-cluster](#multi-cluster-pulsar-instance) instance, then you should stand up a separate ZooKeeper cluster for configuration tasks.
-
-#### Single-cluster Pulsar instance
-
-If your Pulsar instance will consist of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but running on different TCP ports.
-
-To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers used by the local quorum to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). Here's an example that uses port 2184 for a three-node ZooKeeper cluster:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-
-```
-
-As before, create the `myid` file for each server at `data/global-zookeeper/myid`.
-
-#### Multi-cluster Pulsar instance
-
-When deploying a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
-
-The key here is to make sure the ZK quorum members are spread across at least 3
-regions and that other regions are running as observers.
-
-Again, given the very low expected load on the configuration store servers, we can
-share the same hosts used for the local ZooKeeper quorum.
-
-For example, let's assume a Pulsar instance with the following clusters: `us-west`,
-`us-east`, `us-central`, `eu-central`, and `ap-south`. Also, let's assume each cluster
-has its own local ZK servers named like this:
-
-```
-
-zk[1-3].${CLUSTER}.example.com
-
-```
-
-In this scenario, we want to pick the quorum participants from a few clusters and
-let all the others be ZK observers. For example, to form a 7-server quorum, we
-can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
-
-This will guarantee that writes to the configuration store remain possible even if one
-of these regions is unreachable.
-
-The ZK configuration in all the servers will look like:
-
-```properties
-
-clientPort=2184
-server.1=zk1.us-west.example.com:2185:2186
-server.2=zk2.us-west.example.com:2185:2186
-server.3=zk3.us-west.example.com:2185:2186
-server.4=zk1.us-central.example.com:2185:2186
-server.5=zk2.us-central.example.com:2185:2186
-server.6=zk3.us-central.example.com:2185:2186:observer
-server.7=zk1.us-east.example.com:2185:2186
-server.8=zk2.us-east.example.com:2185:2186
-server.9=zk3.us-east.example.com:2185:2186:observer
-server.10=zk1.eu-central.example.com:2185:2186:observer
-server.11=zk2.eu-central.example.com:2185:2186:observer
-server.12=zk3.eu-central.example.com:2185:2186:observer
-server.13=zk1.ap-south.example.com:2185:2186:observer
-server.14=zk2.ap-south.example.com:2185:2186:observer
-server.15=zk3.ap-south.example.com:2185:2186:observer
-
-```
-
-Additionally, ZK observers will need to have:
-
-```properties
-
-peerType=observer
-
-```
-
-##### Starting the service
-
-Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
-
-```shell
-
-$ bin/pulsar-daemon start configuration-store
-
-```
-
-## Cluster metadata initialization
-
-Once you've set up the cluster-specific ZooKeeper and configuration store quorums for your instance, there is some metadata that needs to be written to ZooKeeper for each cluster in your instance. **It only needs to be written once**.
-
-You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. Here's an example:
-
-```shell
-
-$ bin/pulsar initialize-cluster-metadata \
-  --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
-  --web-service-url http://pulsar.us-west.example.com:8080/ \
-  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
-  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
-  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
-
-```
-
-As you can see from the example above, the following needs to be specified:
-
-* The name of the cluster
-* The local ZooKeeper connection string for the cluster
-* The configuration store connection string for the entire instance
-* The web service URL for the cluster
-* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
-
-If you're using [TLS](security-tls-transport.md), you'll also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.
-
-Make sure to run `initialize-cluster-metadata` for each cluster in your instance.
-
-## Deploying BookKeeper
-
-BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
-
-Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
-
-### Configuring bookies
-
-BookKeeper bookies can be configured using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the Pulsar cluster's local ZooKeeper.
-
-### Starting up bookies
-
-You can start up a bookie in two ways: in the foreground or as a background daemon.
-
-To start up a bookie as a background daemon, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```shell
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-You can verify that the bookie is working properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
-
-```shell
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This will create a new ledger on the local bookie, write a few entries, read them back and finally delete the ledger.
-
-### Hardware considerations
-
-Bookie hosts are responsible for storing message data on disk. In order for bookies to provide optimal performance, it's essential that they have a suitable hardware configuration. There are two key dimensions to bookie hardware capacity:
-
-* Disk I/O capacity read/write
-* Storage capacity
-
-Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is
-designed to use multiple devices:
-
-* A **journal** to ensure durability. For sequential writes, it's critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of [...]
-* A **ledger storage device** is where data is stored until all consumers have acknowledged the message. Writes will happen in the background, so write I/O is not a big concern. Reads will happen sequentially most of the time and the backlog is drained only in case of consumer drain. To store large amounts of data, a typical configuration will involve multiple HDDs with a RAID controller.
-
-
-
-## Deploying brokers
-
-Once you've set up ZooKeeper, initialized cluster metadata, and spun up BookKeeper bookies, you can deploy brokers.
-
-### Broker configuration
-
-Brokers can be configured using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
-
-The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you'll need to specify only tho [...]
-
-You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter.
-
-Here's an example configuration:
-
-```properties
-
-# Local ZooKeeper servers
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-# Configuration store quorum connection string.
-configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
-
-clusterName=us-west
-
-```
-
-### Broker hardware
-
-Pulsar brokers do not require any special hardware since they don't use the local disk. Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) are recommended since the software can take full advantage of that.
-
-### Starting the broker service
-
-You can start a broker in the background using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```shell
-
-$ bin/pulsar-daemon start broker
-
-```
-
-You can also start brokers in the foreground using [`pulsar broker`](reference-cli-tools.md#pulsar-broker):
-
-```shell
-
-$ bin/pulsar broker
-
-```
-
-## Service discovery
-
-[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions [immediately below](#service-discovery-setup).
-
-You can also use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
-
-> #### Service discovery already provided by many scheduling systems
-> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you're running Pulsar on such a system, you may not need to provide your own service discovery mechanism.
-
-
-### Service discovery setup
-
-The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookup using HTTP and also Pulsar's [binary protocol](developing-binary-protocol.md).
-
-To get started setting up Pulsar's built-in service discovery, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery) configuration file. Set the [`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers) parameter to the cluster's ZooKeeper quorum connection string and the [`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers) setting to the [configuration
-store](reference-terminology.md#configuration-store) quorum connection string.
-
-```properties
-
-# Zookeeper quorum connection string
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-# Global configuration store connection string
-configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
-
-```
-
-To start the discovery service:
-
-```shell
-
-$ bin/pulsar-daemon start discovery
-
-```
-
-## Admin client and verification
-
-At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.
-
-The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:
-
-```properties
-
-serviceUrl=http://pulsar.us-west.example.com:8080/
-
-```
-
-## Provisioning new tenants
-
-Pulsar was built as a fundamentally multi-tenant system.
-
-To allow a new tenant to use the system, we need to create a new one. You can create a new tenant using the [`pulsar-admin`](reference-pulsar-admin.md#tenants-create) CLI tool:
-
-```shell
-
-$ bin/pulsar-admin tenants create test-tenant \
-  --allowed-clusters us-west \
-  --admin-roles test-admin-role
-
-```
-
-This will allow users who identify with the role `test-admin-role` to administer the configuration for the tenant `test-tenant`, which will only be allowed to use the cluster `us-west`. From now on, this tenant will be able to self-manage its resources.
-
-Once a tenant has been created, you will need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.
-
-The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.
-
-```shell
-
-$ bin/pulsar-admin namespaces create test-tenant/ns1
-
-```
-
-##### Testing producer and consumer
-
-Everything is now ready to send and receive messages. The quickest way to test
-the system is through the `pulsar-perf` client tool.
-
-Let's use a topic in the namespace we just created. Topics are automatically
-created the first time a producer or a consumer tries to use them.
-
-The topic name in this case could be:
-
-```http
-
-persistent://test-tenant/ns1/my-topic
-
-```
-
-Start a consumer that will create a subscription on the topic and will wait
-for messages:
-
-```shell
-
-$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic
-
-```
-
-Start a producer that publishes messages at a fixed rate and report stats every
-10 seconds:
-
-```shell
-
-$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic
-
-```
-
-To report the topic stats:
-
-```shell
-
-$ bin/pulsar-admin persistent stats persistent://test-tenant/ns1/my-topic
-
-```
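-
-You can also exercise the topic programmatically. Here is a minimal consumer sketch using the standard Pulsar Java client API (the service URL matches the broker service URL configured earlier; the subscription name is illustrative):
-
-```java
-
-PulsarClient client = PulsarClient.builder()
-        .serviceUrl("pulsar://pulsar.us-west.example.com:6650")
-        .build();
-
-Consumer<byte[]> consumer = client.newConsumer()
-        .topic("persistent://test-tenant/ns1/my-topic")
-        .subscriptionName("my-subscription") // illustrative subscription name
-        .subscribe();
-
-Message<byte[]> msg = consumer.receive();
-System.out.println("Received: " + new String(msg.getData()));
-consumer.acknowledge(msg);
-client.close();
-
-```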
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-bare-metal.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-bare-metal.md
deleted file mode 100644
index 7bc83e01769..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-bare-metal.md
+++ /dev/null
@@ -1,399 +0,0 @@
----
-id: deploy-bare-metal
-title: Deploying a cluster on bare metal
-sidebar_label: "Bare metal"
-original_id: deploy-bare-metal
----
-
-:::tip
-
-1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you're interested in experimenting with
-Pulsar or using it in a startup or on a single team, we recommend opting for a single cluster. If you do need to run a multi-cluster Pulsar instance,
-however, see the guide [here](deploy-bare-metal-multi-cluster.md).
-2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download `apache-pulsar-io-connectors`
-package and make sure it is installed under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you
-have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
-
-:::
-
-Deploying a Pulsar cluster involves doing the following (in order):
-
-* Deploying a [ZooKeeper](#deploying-a-zookeeper-cluster) cluster (optional)
-* Initializing [cluster metadata](#initializing-cluster-metadata)
-* Deploying a [BookKeeper](#deploying-a-bookkeeper-cluster) cluster
-* Deploying one or more Pulsar [brokers](#deploying-pulsar-brokers)
-
-## Preparation
-
-### Requirements
-
-Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
-
-> If you already have an existing ZooKeeper cluster and would like to reuse it, you don't need to prepare the machines
-> for running ZooKeeper.
-
-To run Pulsar on bare metal, you will need:
-
-* At least 6 Linux machines or VMs
-  * 3 running [ZooKeeper](https://zookeeper.apache.org)
-  * 3 running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie
-* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts
-
-Each machine in your cluster will need to have [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or higher installed.
-
-Here's a diagram showing the basic setup:
-
-![alt-text](/assets/pulsar-basic-setup.png)
-
-In this diagram, connecting clients need to be able to communicate with the Pulsar cluster using a single URL, in this case `pulsar-cluster.acme.com`, that abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.
-
-### Hardware considerations
-
-When deploying a Pulsar cluster, we have some basic recommendations that you should keep in mind when capacity planning.
-
-#### ZooKeeper
-
-For machines running ZooKeeper, we recommend using lighter-weight machines or VMs. Pulsar uses ZooKeeper only for periodic coordination- and configuration-related tasks, *not* for basic operations. If you're running Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance would likely suffice.
-
-#### Bookies & Brokers
-
-For machines running a bookie and a Pulsar broker, we recommend using more powerful machines. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines we also recommend:
-
-* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
-* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)
-
-## Installing the Pulsar binary package
-
-> You'll need to install the Pulsar binary package on *each machine in the cluster*, including machines running [ZooKeeper](#deploying-a-zookeeper-cluster) and [BookKeeper](#deploying-a-bookkeeper-cluster).
-
-To get started deploying a Pulsar cluster on bare metal, you'll need to download a binary tarball release in one of the following ways:
-
-* By clicking on the link directly below, which will automatically trigger a download:
-  * <a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>
-* From the Pulsar [downloads page](pulsar:download_page_url)
-* From the Pulsar [releases page](https://github.com/apache/incubator-pulsar/releases/latest) on [GitHub](https://github.com)
-* Using [wget](https://www.gnu.org/software/wget):
-
-```bash
-
-$ wget pulsar:binary_release_url
-
-```
-
-Once you've downloaded the tarball, untar it and `cd` into the resulting directory:
-
-```bash
-
-$ tar xvzf apache-pulsar-@pulsar:version@-bin.tar.gz
-$ cd apache-pulsar-@pulsar:version@
-
-```
-
-The untarred directory contains the following subdirectories:
-
-Directory | Contains
-:---------|:--------
-`bin` | Pulsar's [command-line tools](reference-cli-tools.md), such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
-`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
-`logs` | Logs created by the installation.
-
-## Installing Builtin Connectors (optional)
-
-> Since release `2.1.0-incubating`, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
-> If you would like to enable those `builtin` connectors, you can follow the instructions as below; otherwise you can
-> skip this section for now.
-
-To get started using builtin connectors, you'll need to download the connectors tarball release on every broker node in
-one of the following ways:
-
-* by clicking the link below and downloading the release from an Apache mirror:
-
-  * <a href="pulsar:connector_release_url" download>Pulsar IO Connectors @pulsar:version@ release</a>
-
-* from the Pulsar [downloads page](pulsar:download_page_url)
-* from the Pulsar [releases page](https://github.com/apache/incubator-pulsar/releases/latest)
-* using [wget](https://www.gnu.org/software/wget):
-
-  ```shell
-  
-  $ wget pulsar:connector_release_url
-  
-  ```
-
-Once the tarball is downloaded, untar the io-connectors package in the pulsar directory and move the resulting `connectors`
-directory into the pulsar directory:
-
-```bash
-
-$ tar xvfz apache-pulsar-io-connectors-@pulsar:version@-bin.tar.gz
-
-// you will find a directory named `apache-pulsar-io-connectors-@pulsar:version@` in the pulsar directory
-// then copy the connectors
-
-$ mv apache-pulsar-io-connectors-@pulsar:version@/connectors connectors
-
-$ ls connectors
-pulsar-io-aerospike-@pulsar:version@.nar
-pulsar-io-cassandra-@pulsar:version@.nar
-pulsar-io-kafka-@pulsar:version@.nar
-pulsar-io-kinesis-@pulsar:version@.nar
-pulsar-io-rabbitmq-@pulsar:version@.nar
-pulsar-io-twitter-@pulsar:version@.nar
-...
-
-```
-
-## Deploying a ZooKeeper cluster
-
-> If you already have an existing ZooKeeper cluster and would like to use it, you can skip this section.
-
-[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar cluster you'll need to deploy ZooKeeper first (before all other components). We recommend deploying a 3-node ZooKeeper cluster. Pulsar does not make heavy use of ZooKeeper, so more lightweight machines or VMs should suffice for running ZooKeeper.
-
-To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory you created [above](#installing-the-pulsar-binary-package)). Here's an example:
-
-```properties
-
-server.1=zk1.us-west.example.com:2888:3888
-server.2=zk2.us-west.example.com:2888:3888
-server.3=zk3.us-west.example.com:2888:3888
-
-```
-
-On each host, you need to specify the node's ID in that node's `myid` file, which is in the server's `data/zookeeper` folder by default (this can be changed via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
-
-> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed info on `myid` and more.
-
-On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
-
-```bash
-
-$ mkdir -p data/zookeeper
-$ echo 1 > data/zookeeper/myid
-
-```
-
-On `zk2.us-west.example.com` the command would be `echo 2 > data/zookeeper/myid` and so on.
-
-Once each server has been added to the `zookeeper.conf` configuration and has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start zookeeper
-
-```
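-
-If you want a quick check that each ZooKeeper server is answering, you can send it the `ruok` four-letter command (a sketch, assuming `nc` is available and that four-letter-word commands are enabled, as they are by default in the ZooKeeper 3.4.x line referenced above):
-
-```bash
-
-$ echo ruok | nc zk1.us-west.example.com 2181
-imok
-
-```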
-
-## Initializing cluster metadata
-
-Once you've deployed ZooKeeper for your cluster, there is some metadata that needs to be written to ZooKeeper for each cluster in your instance. It only needs to be written **once**.
-
-You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your ZooKeeper cluster. Here's an example:
-
-```shell
-
-$ bin/pulsar initialize-cluster-metadata \
-  --cluster pulsar-cluster-1 \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2181 \
-  --web-service-url http://pulsar.us-west.example.com:8080 \
-  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
-  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
-  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
-
-```
-
-As you can see from the example above, the following needs to be specified:
-
-Flag | Description
-:----|:-----------
-`--cluster` | A name for the cluster
-`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
-`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
-`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (we don't recommend using a different port).
-`--web-service-url-tls` | If you're using [TLS](security-tls-transport.md), you'll also need to specify a TLS web service URL for the cluster. The default port is 8443 (we don't recommend using a different port).
-`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (we don't recommend using a different port).
-`--broker-service-url-tls` | If you're using [TLS](security-tls-transport.md), you'll also need to specify a TLS broker service URL for the brokers in the cluster. The default port is 6651 (we don't recommend using a different port).
-
-## Deploying a BookKeeper cluster
-
-[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You will need to deploy a cluster of BookKeeper bookies to use Pulsar. We recommend running a **3-bookie BookKeeper cluster**.
-
-BookKeeper bookies can be configured using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. Here's an example:
-
-```properties
-
-zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-```
-
-Once you've appropriately modified the `zkServers` parameter, you can provide any other configuration modifications you need. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper), although we would recommend consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide.
-
-Once you've applied the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
-
-To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start bookie
-
-```
-
-To start the bookie in the foreground:
-
-```bash
-
-$ bin/bookkeeper bookie
-
-```
-
-You can verify that a bookie is working properly by running the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#shell) on it:
-
-```bash
-
-$ bin/bookkeeper shell bookiesanity
-
-```
-
-This will create an ephemeral BookKeeper ledger on the local bookie, write a few entries, read them back, and finally delete the ledger.
-
-After you have started all the bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to
-verify that all the bookies in the cluster are up and running.
-
-```bash
-
-$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
-
-```
-
-This command will create a `num-bookies` sized ledger on the cluster, write a few entries, and finally delete the ledger.
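-
-For example, on the 3-bookie cluster recommended above, a run might look like this (the entry count is arbitrary):
-
-```bash
-
-$ bin/bookkeeper shell simpletest --ensemble 3 --writeQuorum 3 --ackQuorum 3 --numEntries 100
-
-```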
-
-
-## Deploying Pulsar brokers
-
-Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide Pulsar's administrative interface. We recommend running **3 brokers**, one for each machine that's already running a BookKeeper bookie.
-
-### Configuring Brokers
-
-The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you've deployed. Make sure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters point to your ZooKeeper cluster. In this case, since we only have one cluster and no separate configuration store, `configurationStoreServers` points to the same connection string as `zookeeperServers`.
-
-```properties
-
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-
-```
-
-You also need to specify the cluster name (matching the name that you provided when [initializing the cluster's metadata](#initializing-cluster-metadata)):
-
-```properties
-
-clusterName=pulsar-cluster-1
-
-```
-
-### Enabling Pulsar Functions (optional)
-
-If you want to enable [Pulsar Functions](functions-overview.md), follow the instructions below:
-
-1. Edit `conf/broker.conf` to enable the functions worker by setting `functionsWorkerEnabled` to `true`.
-
-   ```conf
-   
-   functionsWorkerEnabled=true
-   
-   ```
-
-2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provided when [initializing the cluster's metadata](#initializing-cluster-metadata). 
-
-   ```conf
-   
-   pulsarFunctionsCluster: pulsar-cluster-1
-   
-   ```
-
-### Starting Brokers
-
-You can then provide any other configuration changes that you'd like in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you've decided on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, brokers can be started either in the foreground or in the background, using nohup.
-
-You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:
-
-```bash
-
-$ bin/pulsar broker
-
-```
-
-You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
-
-```bash
-
-$ bin/pulsar-daemon start broker
-
-```
-
-Once you've successfully started up all the brokers you intend to use, your Pulsar cluster should be ready to go!
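-
-As a quick check that the brokers have registered themselves, you can list them with the [`pulsar-admin`](reference-pulsar-admin.md) tool from any broker machine (a sketch; the argument is the cluster name you chose when initializing the cluster metadata):
-
-```bash
-
-$ bin/pulsar-admin brokers list pulsar-cluster-1
-
-```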
-
-## Connecting to the running cluster
-
-Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provides a simple way to make sure that your cluster is running properly.
-
-To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You'll need to change the values for `webServiceUrl` and `brokerServiceUrl`, replacing `localhost` (the default) with the DNS name that you've assigned to your broker/bookie hosts. Here's an example:
-
-```properties
-
-webServiceUrl=http://us-west.example.com:8080/
-brokerServiceUrl=pulsar://us-west.example.com:6650/
-
-```
-
-Once you've done that, you can publish a message to a Pulsar topic:
-
-```bash
-
-$ bin/pulsar-client produce \
-  persistent://public/default/test \
-  -n 1 \
-  -m "Hello, Pulsar"
-
-```
-
-> You may need to use a different cluster name in the topic if you specified a cluster name different from `pulsar-cluster-1`.
-
-This will publish a single message to the Pulsar topic.
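-
-To confirm that messages flow end to end, you can also attach a consumer with the same tool. A minimal sketch (the subscription name `test-sub` is arbitrary; start the consumer in a second terminal before producing, since a brand-new subscription only receives messages published after it is created):
-
-```bash
-
-$ bin/pulsar-client consume \
-  persistent://public/default/test \
-  -n 1 \
-  -s "test-sub"
-
-```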
-
-## Running Functions
-
-> If you have [enabled](#enabling-pulsar-functions-optional) Pulsar Functions, you can also try out Pulsar Functions now.
-
-Create an `ExclamationFunction` named `exclamation`:
-
-```bash
-
-bin/pulsar-admin functions create \
-  --jar examples/api-examples.jar \
-  --className org.apache.pulsar.functions.api.examples.ExclamationFunction \
-  --inputs persistent://public/default/exclamation-input \
-  --output persistent://public/default/exclamation-output \
-  --tenant public \
-  --namespace default \
-  --name exclamation
-
-```
-
-Check whether the function is running as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) it:
-
-```bash
-
-bin/pulsar-admin functions trigger --name exclamation --triggerValue "hello world"
-
-```
-
-You should see output like the following:
-
-```shell
-
-hello world!
-
-```
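-
-Since the function reads from `exclamation-input` and writes to `exclamation-output`, you can also exercise it with the `pulsar-client` tool used earlier. This is only a sketch (run the consumer on the output topic first, in a separate terminal, then publish to the input topic; the subscription name is arbitrary):
-
-```bash
-
-# in one terminal: subscribe to the function's output topic
-$ bin/pulsar-client consume persistent://public/default/exclamation-output -n 1 -s "exclamation-sub"
-
-# in another terminal: publish to the function's input topic
-$ bin/pulsar-client produce persistent://public/default/exclamation-input -n 1 -m "hello pulsar"
-
-```
-
-The consumer should then receive a single message whose payload is `hello pulsar!`.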
-
diff --git a/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-dcos.md b/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-dcos.md
deleted file mode 100644
index 79919a525c3..00000000000
--- a/site2/website-next/versioned_docs/version-2.1.0-incubating/deploy-dcos.md
+++ /dev/null
@@ -1,200 +0,0 @@
----
-id: deploy-dcos
-title: Deploying Pulsar on DC/OS
-sidebar_label: "DC/OS"
-original_id: deploy-dcos
----
-
-:::tip
-
-If you want to enable all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you can choose to use `apachepulsar/pulsar-all` image instead of
-`apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
-
-:::
-
-[DC/OS](https://dcos.io/) (the <strong>D</strong>ata<strong>C</strong>enter <strong>O</strong>perating <strong>S</strong>ystem) is a distributed operating system used for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).
-
-Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.
-
-## Prerequisites
-
-In order to run Pulsar on DC/OS, you will need the following:
-
-* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
-* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
-* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
-* The [`PulsarGroups.json`](https://github.com/apache/incubator-pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.
-
-  ```bash
-  
-  $ curl -O https://raw.githubusercontent.com/apache/incubator-pulsar/master/deployment/dcos/PulsarGroups.json
-  
-  ```
-
-Each node in the DC/OS-managed Mesos cluster must have at least:
-
-* 4 CPU
-* 4 GB of memory
-* 60 GB of total persistent disk
-
-Alternatively, you can change the configuration in `PulsarGroups.json` to match your DC/OS cluster's resources.
-
-## Deploy Pulsar using the DC/OS command interface
-
-You can deploy Pulsar on DC/OS using this command:
-
-```bash
-
-$ dcos marathon group add PulsarGroups.json
-
-```
-
-This command will deploy Docker container instances in three groups, which together comprise a Pulsar cluster:
-
-* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
-* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
-* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance
-
-
-> When running DC/OS, a ZooKeeper cluster is already running at `master.mesos:2181`, thus there's no need to install or start up ZooKeeper separately.
-
-After executing the `dcos` command above, click on the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying.
-
-![DC/OS command executed](/assets/dcos_command_execute.png)
-
-![DC/OS command executed2](/assets/dcos_command_execute2.png)
-
-## The BookKeeper group
-
-To monitor the status of the BookKeeper cluster deployment, click on the **bookkeeper** group in the parent **pulsar** group.
-
-![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png)
-
-At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as green, which means that they have been deployed successfully and are now running.
- 
-![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png)
- 
-You can also click into each bookie instance to get more detailed info, such as the bookie running log.
-
-![DC/OS bookie log](/assets/dcos_bookie_log.png)
-
-To see BookKeeper's metadata in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, there are 3 bookies under the `available` directory.
-
-![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png)
-
-## The Pulsar broker group
-
-Similar to the BookKeeper group above, click into the **brokers** group to check the status of the Pulsar brokers.
-
-![DC/OS broker status](/assets/dcos_broker_status.png)
-
-![DC/OS broker running](/assets/dcos_broker_run.png)
-
-You can also click into each broker instance to get more detailed info, such as the broker running log.
-
-![DC/OS broker log](/assets/dcos_broker_log.png)
-
-Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.
-
-![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)
-
-## The monitor group
-
-The **monitor** group consists of Prometheus and Grafana.
-
-![DC/OS monitor status](/assets/dcos_monitor_status.png)
-
-### Prometheus
-
-Click into the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.
-
-![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)
-
-If you click that endpoint, you'll see the Prometheus dashboard. The [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL will display all the bookies and brokers.
-
-![DC/OS prom targets](/assets/dcos_prom_targets.png)
-
-### Grafana
-
-Click into `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
- 
-![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)
-
-If you click that endpoint, you can access the Grafana dashboard.
-
-![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)
-
-## Run a simple Pulsar consumer and producer on DC/OS
-
-Now that we have a fully deployed Pulsar cluster, we can run a simple consumer and producer to show Pulsar on DC/OS in action.
-
-### Download and prepare the Pulsar Java tutorial
-
-There's a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo that you can clone. This repo contains a simple Pulsar consumer and producer (more info can be found in the repo's `README` file).
-
-```bash
-
-$ git clone https://github.com/streamlio/pulsar-java-tutorial
-
-```
-
-Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java).
-The `pulsar://a1.dcos:6650` endpoint is for the broker service. Endpoint details for each broker instance can be fetched from the DC/OS GUI. `a1.dcos` is a DC/OS client agent, which runs a broker. This can also be replaced by the client agent IP address.
-
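-If you prefer to make that substitution from the command line instead of editing the files by hand, something like the following should work from the cloned `pulsar-java-tutorial` directory (a sketch using GNU `sed`; on macOS/BSD `sed` the in-place flag is `-i ''`):
-
-```bash
-
-$ sed -i 's|pulsar://localhost:6650|pulsar://a1.dcos:6650|g' \
-  src/main/java/tutorial/ConsumerTutorial.java \
-  src/main/java/tutorial/ProducerTutorial.java
-
-```
-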
-Now, change the message number from 10 to 10000000 in the main method of [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) so that it produces more messages.
-
-Now compile the project code with the following command:
-
-```bash
-
-$ mvn clean package
-
-```
-
-### Run the consumer and producer
-
-Execute this command to run the consumer:
-
-```bash
-
-$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
-
-```
-
-Execute this command to run the producer:
-
-```bash
-
-$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
-
-```
-
-You can see the producer producing messages and the consumer consuming messages through the DC/OS GUI.
-
-![DC/OS pulsar producer](/assets/dcos_producer.png)
... 40550 lines suppressed ...