Posted to commits@pulsar.apache.org by zh...@apache.org on 2020/06/27 23:48:48 UTC

[pulsar] branch master updated: Add full document for version 2.6.0 (#7310)

This is an automated email from the ASF dual-hosted git repository.

zhaijia pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new 9c01a07  Add full document for version 2.6.0 (#7310)
9c01a07 is described below

commit 9c01a072e1f62baacf54478bb8072604600be900
Author: Guangning <gu...@apache.org>
AuthorDate: Sun Jun 28 07:48:38 2020 +0800

    Add full document for version 2.6.0 (#7310)
    
    Master Issue: #7083
    
    ### Motivation
    
    Currently, the website is released in a versioned way, but each versioned directory contains only part of the md files, which makes it hard to maintain the content of each version.
    It would be better to make each versioned directory contain all the md files.
    
    
    ### Modifications
    
    * Generate full document for 2.6.0
---
 .../versioned_docs/version-2.6.0/adaptors-kafka.md | 265 ++++++
 .../versioned_docs/version-2.6.0/adaptors-spark.md |  77 ++
 .../versioned_docs/version-2.6.0/adaptors-storm.md |  91 ++
 .../version-2.6.0/admin-api-clusters.md            | 210 +++++
 .../version-2.6.0/admin-api-functions.md           | 546 ++++++++++++
 .../version-2.6.0/admin-api-namespaces.md          | 759 ++++++++++++++++
 .../admin-api-non-partitioned-topics.md            | 160 ++++
 .../admin-api-non-persistent-topics.md             | 264 ++++++
 .../version-2.6.0/admin-api-overview.md            |  89 ++
 .../version-2.6.0/admin-api-partitioned-topics.md  | 377 ++++++++
 .../version-2.6.0/admin-api-permissions.md         | 115 +++
 .../version-2.6.0/admin-api-schemas.md             |   7 +
 .../version-2.6.0/admin-api-tenants.md             |  86 ++
 .../version-2.6.0/administration-dashboard.md      |  63 ++
 .../version-2.6.0/administration-geo.md            | 157 ++++
 .../version-2.6.0/administration-load-balance.md   | 182 ++++
 .../version-2.6.0/administration-pulsar-manager.md | 136 +++
 .../version-2.6.0/administration-stats.md          |  64 ++
 .../version-2.6.0/administration-upgrade.md        | 151 ++++
 .../version-2.6.0/client-libraries-cgo.md          | 545 ++++++++++++
 .../version-2.6.0/client-libraries-cpp.md          | 253 ++++++
 .../version-2.6.0/client-libraries-go.md           | 661 ++++++++++++++
 .../version-2.6.0/client-libraries-node.md         | 404 +++++++++
 .../version-2.6.0/client-libraries-python.md       | 291 +++++++
 .../version-2.6.0/client-libraries-websocket.md    | 444 ++++++++++
 .../version-2.6.0/concepts-authentication.md       |   9 +
 .../version-2.6.0/concepts-multi-tenancy.md        |  40 +
 .../version-2.6.0/concepts-overview.md             |  31 +
 .../version-2.6.0/concepts-replication.md          |   9 +
 .../version-2.6.0/concepts-tiered-storage.md       |  18 +
 .../version-2.6.0/concepts-topic-compaction.md     |  37 +
 .../version-2.6.0/cookbooks-bookkeepermetadata.md  |  21 +
 .../version-2.6.0/cookbooks-compaction.md          | 127 +++
 .../version-2.6.0/cookbooks-deduplication.md       | 121 +++
 .../version-2.6.0/cookbooks-encryption.md          | 170 ++++
 .../version-2.6.0/cookbooks-message-queue.md       |  95 ++
 .../version-2.6.0/cookbooks-non-persistent.md      |  59 ++
 .../version-2.6.0/cookbooks-partitioned.md         |  93 ++
 .../version-2.6.0/cookbooks-retention-expiry.md    | 291 +++++++
 .../versioned_docs/version-2.6.0/deploy-aws.md     | 224 +++++
 .../deploy-bare-metal-multi-cluster.md             | 426 +++++++++
 .../version-2.6.0/deploy-bare-metal.md             | 459 ++++++++++
 .../versioned_docs/version-2.6.0/deploy-dcos.md    | 183 ++++
 .../version-2.6.0/deploy-monitoring.md             |  90 ++
 .../version-2.6.0/developing-binary-protocol.md    | 556 ++++++++++++
 .../versioned_docs/version-2.6.0/developing-cpp.md | 101 +++
 .../version-2.6.0/developing-tools.md              | 106 +++
 .../version-2.6.0/functions-debug.md               | 461 ++++++++++
 .../version-2.6.0/functions-deploy.md              | 211 +++++
 .../version-2.6.0/functions-metrics.md             |   7 +
 .../version-2.6.0/functions-overview.md            | 192 +++++
 .../version-2.6.0/functions-runtime.md             | 183 ++++
 .../version-2.6.0/functions-worker.md              | 242 ++++++
 .../getting-started-concepts-and-architecture.md   |  16 +
 .../version-2.6.0/getting-started-docker.md        | 161 ++++
 .../version-2.6.0/getting-started-standalone.md    | 226 +++++
 .../version-2.6.0/io-aerospike-sink.md             |  26 +
 .../version-2.6.0/io-canal-source.md               | 203 +++++
 .../version-2.6.0/io-cassandra-sink.md             |  54 ++
 .../version-2.6.0/io-cdc-debezium.md               | 475 ++++++++++
 .../website/versioned_docs/version-2.6.0/io-cdc.md |  26 +
 .../website/versioned_docs/version-2.6.0/io-cli.md | 601 +++++++++++++
 .../versioned_docs/version-2.6.0/io-debug.md       | 329 +++++++
 .../versioned_docs/version-2.6.0/io-develop.md     | 241 ++++++
 .../version-2.6.0/io-elasticsearch-sink.md         | 140 +++
 .../versioned_docs/version-2.6.0/io-file-source.md | 138 +++
 .../versioned_docs/version-2.6.0/io-flume-sink.md  |  52 ++
 .../version-2.6.0/io-flume-source.md               |  52 ++
 .../versioned_docs/version-2.6.0/io-hbase-sink.md  |  64 ++
 .../versioned_docs/version-2.6.0/io-hdfs2-sink.md  |  54 ++
 .../versioned_docs/version-2.6.0/io-hdfs3-sink.md  |  54 ++
 .../version-2.6.0/io-influxdb-sink.md              | 108 +++
 .../versioned_docs/version-2.6.0/io-kafka-sink.md  |  69 ++
 .../version-2.6.0/io-kafka-source.md               | 171 ++++
 .../versioned_docs/version-2.6.0/io-mongo-sink.md  |  52 ++
 .../version-2.6.0/io-netty-source.md               | 205 +++++
 .../versioned_docs/version-2.6.0/io-overview.md    | 136 +++
 .../version-2.6.0/io-rabbitmq-sink.md              |  81 ++
 .../version-2.6.0/io-rabbitmq-source.md            |  78 ++
 .../versioned_docs/version-2.6.0/io-redis-sink.md  |  70 ++
 .../versioned_docs/version-2.6.0/io-solr-sink.md   |  61 ++
 .../version-2.6.0/io-twitter-source.md             |  28 +
 .../versioned_docs/version-2.6.0/io-twitter.md     |   7 +
 .../version-2.6.0/reference-cli-tools.md           | 734 ++++++++++++++++
 .../version-2.6.0/reference-connector-admin.md     |   7 +
 .../version-2.6.0/reference-terminology.md         | 167 ++++
 .../schema-evolution-compatibility.md              | 953 +++++++++++++++++++++
 .../version-2.6.0/schema-get-started.md            |  95 ++
 .../versioned_docs/version-2.6.0/schema-manage.md  | 809 +++++++++++++++++
 .../version-2.6.0/schema-understand.md             | 591 +++++++++++++
 .../version-2.6.0/security-athenz.md               |  93 ++
 .../version-2.6.0/security-authorization.md        | 100 +++
 .../version-2.6.0/security-bouncy-castle.md        | 122 +++
 .../version-2.6.0/security-extending.md            | 194 +++++
 .../version-2.6.0/security-kerberos.md             | 391 +++++++++
 .../version-2.6.0/security-overview.md             |  31 +
 .../version-2.6.0/sql-getting-started.md           | 144 ++++
 .../versioned_docs/version-2.6.0/sql-overview.md   |  18 +
 .../versioned_docs/version-2.6.0/sql-rest-api.md   | 186 ++++
 99 files changed, 19572 insertions(+)

diff --git a/site2/website/versioned_docs/version-2.6.0/adaptors-kafka.md b/site2/website/versioned_docs/version-2.6.0/adaptors-kafka.md
new file mode 100644
index 0000000..c2393df
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/adaptors-kafka.md
@@ -0,0 +1,265 @@
+---
+id: version-2.6.0-adaptors-kafka
+title: Pulsar adaptor for Apache Kafka
+sidebar_label: Kafka client wrapper
+original_id: adaptors-kafka
+---
+
+
+Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
+
+## Using the Pulsar Kafka compatibility wrapper
+
+In an existing application, replace the regular Kafka client dependency with the Pulsar Kafka wrapper. First, remove the following dependency from `pom.xml`:
+
+```xml
+<dependency>
+  <groupId>org.apache.kafka</groupId>
+  <artifactId>kafka-clients</artifactId>
+  <version>0.10.2.1</version>
+</dependency>
+```
+
+Then include this dependency for the Pulsar Kafka wrapper:
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client-kafka</artifactId>
+  <version>{{pulsar:version}}</version>
+</dependency>
+```
+
+With the new dependency, the existing code works without any changes. You only need to
+adjust the configuration to point producers and consumers to a Pulsar service rather
+than a Kafka cluster, and to use a particular Pulsar topic.
+
+## Using the Pulsar Kafka compatibility wrapper together with existing Kafka client
+
+When migrating from Kafka to Pulsar, the application might use the original Kafka client
+and the Pulsar Kafka wrapper together during the migration. In that case, consider using the
+unshaded Pulsar Kafka client wrapper.
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client-kafka-original</artifactId>
+  <version>{{pulsar:version}}</version>
+</dependency>
+```
+
+When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
+instead of `org.apache.kafka.clients.producer.KafkaProducer`, and consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer`
+instead of `org.apache.kafka.clients.consumer.KafkaConsumer`.
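+
+For example, a producer built with the unshaded wrapper follows the same pattern as the producer example below. This is a minimal sketch; it assumes `PulsarKafkaProducer` mirrors the `KafkaProducer(Properties)` constructor.
+
+```java
+Properties props = new Properties();
+// Point to a Pulsar service instead of a Kafka cluster
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("key.serializer", IntegerSerializer.class.getName());
+props.put("value.serializer", StringSerializer.class.getName());
+
+// Explicitly use the wrapper class rather than KafkaProducer
+Producer<Integer, String> producer = new PulsarKafkaProducer<>(props);
+```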
+
+## Producer example
+
+```java
+// Topic needs to be a regular Pulsar topic
+String topic = "persistent://public/default/my-topic";
+
+Properties props = new Properties();
+// Point to a Pulsar service
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+
+props.put("key.serializer", IntegerSerializer.class.getName());
+props.put("value.serializer", StringSerializer.class.getName());
+
+Producer<Integer, String> producer = new KafkaProducer<>(props);
+
+for (int i = 0; i < 10; i++) {
+    producer.send(new ProducerRecord<Integer, String>(topic, i, "hello-" + i));
+    log.info("Message {} sent successfully", i);
+}
+
+producer.close();
+```
+
+## Consumer example
+
+```java
+String topic = "persistent://public/default/my-topic";
+
+Properties props = new Properties();
+// Point to a Pulsar service
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("group.id", "my-subscription-name");
+props.put("enable.auto.commit", "false");
+props.put("key.deserializer", IntegerDeserializer.class.getName());
+props.put("value.deserializer", StringDeserializer.class.getName());
+
+Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
+consumer.subscribe(Arrays.asList(topic));
+
+while (true) {
+    ConsumerRecords<Integer, String> records = consumer.poll(100);
+    records.forEach(record -> {
+        log.info("Received record: {}", record);
+    });
+
+    // Commit last offset
+    consumer.commitSync();
+}
+```
+
+## Complete Examples
+
+You can find the complete producer and consumer examples
+[here](https://github.com/apache/pulsar/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
+
+## Compatibility matrix
+
+Currently the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
+
+#### Producer
+
+APIs:
+
+| Producer Method                                                               | Supported | Notes                                                                    |
+|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
+| `Future<RecordMetadata> send(ProducerRecord<K, V> record)`                    | Yes       |                                                                          |
+| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes       |                                                                          |
+| `void flush()`                                                                | Yes       |                                                                          |
+| `List<PartitionInfo> partitionsFor(String topic)`                             | No        |                                                                          |
+| `Map<MetricName, ? extends Metric> metrics()`                                 | No        |                                                                          |
+| `void close()`                                                                | Yes       |                                                                          |
+| `void close(long timeout, TimeUnit unit)`                                     | Yes       |                                                                          |
+
+Properties:
+
+| Config property                         | Supported | Notes                                                                         |
+|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
+| `acks`                                  | Ignored   | Durability and quorum writes are configured at the namespace level            |
+| `auto.offset.reset`                     | Yes       | Defaults to `latest` if the user does not provide a specific setting.        |
+| `batch.size`                            | Ignored   |                                                                               |
+| `bootstrap.servers`                     | Yes       |                                 |
+| `buffer.memory`                         | Ignored   |                                                                               |
+| `client.id`                             | Ignored   |                                                                               |
+| `compression.type`                      | Yes       | Allows `gzip` and `lz4`. No `snappy`.                                         |
+| `connections.max.idle.ms`               | Yes       | Supports up to 2,147,483,647,000 (Integer.MAX_VALUE * 1000) ms of idle time. |
+| `interceptor.classes`                   | Yes       |                                                                               |
+| `key.serializer`                        | Yes       |                                                                               |
+| `linger.ms`                             | Yes       | Controls the group commit time when batching messages                         |
+| `max.block.ms`                          | Ignored   |                                                                               |
+| `max.in.flight.requests.per.connection` | Ignored   | In Pulsar ordering is maintained even with multiple requests in flight        |
+| `max.request.size`                      | Ignored   |                                                                               |
+| `metric.reporters`                      | Ignored   |                                                                               |
+| `metrics.num.samples`                   | Ignored   |                                                                               |
+| `metrics.sample.window.ms`              | Ignored   |                                                                               |
+| `partitioner.class`                     | Yes       |                                                                               |
+| `receive.buffer.bytes`                  | Ignored   |                                                                               |
+| `reconnect.backoff.ms`                  | Ignored   |                                                                               |
+| `request.timeout.ms`                    | Ignored   |                                                                               |
+| `retries`                               | Ignored   | Pulsar client retries with exponential backoff until the send timeout expires. |
+| `send.buffer.bytes`                     | Ignored   |                                                                               |
+| `timeout.ms`                            | Yes       |                                                                               |
+| `value.serializer`                      | Yes       |                                                                               |
+
+
+#### Consumer
+
+The following table lists consumer APIs.
+
+| Consumer Method                                                                                         | Supported | Notes |
+|:--------------------------------------------------------------------------------------------------------|:----------|:------|
+| `Set<TopicPartition> assignment()`                                                                      | No        |       |
+| `Set<String> subscription()`                                                                            | Yes       |       |
+| `void subscribe(Collection<String> topics)`                                                             | Yes       |       |
+| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)`                         | No        |       |
+| `void assign(Collection<TopicPartition> partitions)`                                                    | No        |       |
+| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)`                                   | No        |       |
+| `void unsubscribe()`                                                                                    | Yes       |       |
+| `ConsumerRecords<K, V> poll(long timeoutMillis)`                                                        | Yes       |       |
+| `void commitSync()`                                                                                     | Yes       |       |
+| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)`                                       | Yes       |       |
+| `void commitAsync()`                                                                                    | Yes       |       |
+| `void commitAsync(OffsetCommitCallback callback)`                                                       | Yes       |       |
+| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)`       | Yes       |       |
+| `void seek(TopicPartition partition, long offset)`                                                      | Yes       |       |
+| `void seekToBeginning(Collection<TopicPartition> partitions)`                                           | Yes       |       |
+| `void seekToEnd(Collection<TopicPartition> partitions)`                                                 | Yes       |       |
+| `long position(TopicPartition partition)`                                                               | Yes       |       |
+| `OffsetAndMetadata committed(TopicPartition partition)`                                                 | Yes       |       |
+| `Map<MetricName, ? extends Metric> metrics()`                                                           | No        |       |
+| `List<PartitionInfo> partitionsFor(String topic)`                                                       | No        |       |
+| `Map<String, List<PartitionInfo>> listTopics()`                                                         | No        |       |
+| `Set<TopicPartition> paused()`                                                                          | No        |       |
+| `void pause(Collection<TopicPartition> partitions)`                                                     | No        |       |
+| `void resume(Collection<TopicPartition> partitions)`                                                    | No        |       |
+| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No        |       |
+| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)`                     | No        |       |
+| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)`                           | No        |       |
+| `void close()`                                                                                          | Yes       |       |
+| `void close(long timeout, TimeUnit unit)`                                                               | Yes       |       |
+| `void wakeup()`                                                                                         | No        |       |
+
+Properties:
+
+| Config property                 | Supported | Notes                                                 |
+|:--------------------------------|:----------|:------------------------------------------------------|
+| `group.id`                      | Yes       | Maps to a Pulsar subscription name                    |
+| `max.poll.records`              | Yes       |                                                       |
+| `max.poll.interval.ms`          | Ignored   | Messages are "pushed" from broker                     |
+| `session.timeout.ms`            | Ignored   |                                                       |
+| `heartbeat.interval.ms`         | Ignored   |                                                       |
+| `bootstrap.servers`             | Yes       | Needs to point to a single Pulsar service URL         |
+| `enable.auto.commit`            | Yes       |                                                       |
+| `auto.commit.interval.ms`       | Ignored   | With auto-commit, acks are sent immediately to broker |
+| `partition.assignment.strategy` | Ignored   |                                                       |
+| `auto.offset.reset`             | Yes       | Only `earliest` and `latest` are supported.           |
+| `fetch.min.bytes`               | Ignored   |                                                       |
+| `fetch.max.bytes`               | Ignored   |                                                       |
+| `fetch.max.wait.ms`             | Ignored   |                                                       |
+| `interceptor.classes`           | Yes       |                                                       |
+| `metadata.max.age.ms`           | Ignored   |                                                       |
+| `max.partition.fetch.bytes`     | Ignored   |                                                       |
+| `send.buffer.bytes`             | Ignored   |                                                       |
+| `receive.buffer.bytes`          | Ignored   |                                                       |
+| `client.id`                     | Ignored   |                                                       |
+
+
+## Customize Pulsar configurations
+
+You can configure the Pulsar authentication provider directly from the Kafka properties.
+
+### Pulsar client properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-)          |         | Configure the authentication provider. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.|
+| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-)          |         | Map which represents parameters for the Authentication-Plugin. |
+| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-)          |         | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. |
+| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-)                       | `false` | Enable TLS transport encryption.                                                        |
+| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-)   |         | Path for the TLS trust certificate store.                                               |
+| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers.                                           |
+| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. |
+| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. |
+| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. |
+| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connections to each broker. |
+| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. |
+| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. |
+| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. |
+| [`pulsar.keepalive.interval.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection.  |
+
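+For example, enabling TLS transport and TLS authentication through the Kafka-style properties might look like the following sketch. The certificate paths are placeholders, not values from this document.
+
+```java
+Properties props = new Properties();
+props.put("bootstrap.servers", "pulsar+ssl://localhost:6651");
+// Pulsar client settings passed through the Kafka properties
+props.put("pulsar.use.tls", "true");
+props.put("pulsar.tls.trust.certs.file.path", "/path/to/ca.cert.pem");
+props.put("pulsar.authentication.class",
+        "org.apache.pulsar.client.impl.auth.AuthenticationTls");
+props.put("pulsar.authentication.params.string",
+        "tlsCertFile:/path/to/client.cert.pem,tlsKeyFile:/path/to/client.key-pk8.pem");
+```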
+
+### Pulsar producer properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. |
+| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) |  | Specify baseline for sequence ID of this producer. |
+| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the message queue pending to receive an acknowledgment from the broker.  |
+| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions.  |
+| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. |
+| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. |
+| [`pulsar.block.if.producer.queue.full`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Specify whether the producer blocks when the queue is full. |
+
+
+### Pulsar consumer properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. |
+| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. |
+| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum time for which the consumer groups acknowledgments before sending them to the broker. |
+| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. |
+| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. |
diff --git a/site2/website/versioned_docs/version-2.6.0/adaptors-spark.md b/site2/website/versioned_docs/version-2.6.0/adaptors-spark.md
new file mode 100644
index 0000000..cd76b6d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/adaptors-spark.md
@@ -0,0 +1,77 @@
+---
+id: version-2.6.0-adaptors-spark
+title: Pulsar adaptor for Apache Spark
+sidebar_label: Apache Spark
+original_id: adaptors-spark
+---
+
+The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive data from Pulsar.
+
+An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming Pulsar receiver and can process it in a variety of ways.
+
+## Prerequisites
+
+To use the receiver, include a dependency for the `pulsar-spark` library in your build configuration.
+
+### Maven
+
+If you're using Maven, add this to your `pom.xml`:
+
+```xml
+<!-- in your <properties> block -->
+<pulsar.version>{{pulsar:version}}</pulsar.version>
+
+<!-- in your <dependencies> block -->
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-spark</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+```
+
+### Gradle
+
+If you're using Gradle, add this to your `build.gradle` file:
+
+```groovy
+def pulsarVersion = "{{pulsar:version}}"
+
+dependencies {
+    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
+}
+```
+
+## Usage
+
+Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
+
+```java
+    String serviceUrl = "pulsar://localhost:6650/";
+    String topic = "persistent://public/default/test_src";
+    String subs = "test_sub";
+
+    SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");
+
+    JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
+
+    ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData();
+
+    Set<String> set = new HashSet<>();
+    set.add(topic);
+    pulsarConf.setTopicNames(set);
+    pulsarConf.setSubscriptionName(subs);
+
+    SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
+        serviceUrl,
+        pulsarConf,
+        new AuthenticationDisabled());
+
+    JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
+```
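+
+The resulting DStream can then be processed with standard Spark Streaming operations. For example, to count the received messages that contain the word "Pulsar", in the spirit of the complete example linked below (a sketch; decoding the payload as UTF-8 text is an assumption about the message format):
+
+```java
+    JavaDStream<String> lines = lineDStream
+        .map(bytes -> new String(bytes, StandardCharsets.UTF_8));
+
+    // Count the lines containing "Pulsar" in each batch
+    lines.filter(line -> line.contains("Pulsar"))
+        .count()
+        .print();
+
+    jsc.start();
+    jsc.awaitTermination();
+```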
+
+
+## Example
+
+You can find a complete example [here](https://github.com/apache/pulsar/tree/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java).
+In this example, the number of received messages that contain the string "Pulsar" is counted.
+
diff --git a/site2/website/versioned_docs/version-2.6.0/adaptors-storm.md b/site2/website/versioned_docs/version-2.6.0/adaptors-storm.md
new file mode 100644
index 0000000..2a07714
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/adaptors-storm.md
@@ -0,0 +1,91 @@
+---
+id: version-2.6.0-adaptors-storm
+title: Pulsar adaptor for Apache Storm
+sidebar_label: Apache Storm
+original_id: adaptors-storm
+---
+
+Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.
+
+An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.
+
+## Using the Pulsar Storm Adaptor
+
+Include the dependency for the Pulsar Storm adaptor:
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-storm</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+```
+
+## Pulsar Spout
+
+The Pulsar Spout allows data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client.
+
+Tuples that fail to be processed by the downstream bolts are re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or up to a configurable number of retries, whichever comes first, after which the message is acknowledged by the consumer. Here's an example construction of a spout:
+
+```java
+MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {
+
+    @Override
+    public Values toValues(Message msg) {
+        return new Values(new String(msg.getData()));
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        // declare the output fields
+        declarer.declare(new Fields("string"));
+    }
+};
+
+// Configure a Pulsar Spout
+PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
+spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
+spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
+spoutConf.setSubscriptionName("my-subscriber-name1");
+spoutConf.setMessageToValuesMapper(messageToValuesMapper);
+
+// Create a Pulsar Spout
+PulsarSpout spout = new PulsarSpout(spoutConf);
+```
+
+## Pulsar Bolt
+
+The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.
+
+A partitioned topic can also be used to publish messages on different partitions. In the implementation of the `TupleToMessageMapper`, a "key" needs to be provided in the message, which ensures that messages with the same key are routed to the same partition. Here's an example bolt:
+
+```java
+TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() {
+
+    @Override
+    public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
+        String receivedMessage = tuple.getString(0);
+        // message processing
+        String processedMsg = receivedMessage + "-processed";
+        return msgBuilder.value(processedMsg.getBytes());
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        // declare the output fields
+    }
+};
+
+// Configure a Pulsar Bolt
+PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration();
+boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
+boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2");
+boltConf.setTupleToMessageMapper(tupleToMessageMapper);
+
+// Create a Pulsar Bolt
+PulsarBolt bolt = new PulsarBolt(boltConf);
+```
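+
+To take advantage of the key-based routing described above, the mapper can set a key on the message builder via `TypedMessageBuilder#key`. The following is a sketch; using the tuple's first field as the key is an assumption for illustration:
+
+```java
+@Override
+public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
+    String receivedMessage = tuple.getString(0);
+    // Messages with the same key are routed to the same partition
+    return msgBuilder.key(receivedMessage)
+            .value((receivedMessage + "-processed").getBytes());
+}
+```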
+
+## Example
+
+You can find a complete example [here](https://github.com/apache/pulsar/tree/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/example/StormExample.java).
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-clusters.md b/site2/website/versioned_docs/version-2.6.0/admin-api-clusters.md
new file mode 100644
index 0000000..4fe190d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-clusters.md
@@ -0,0 +1,210 @@
+---
+id: version-2.6.0-admin-api-clusters
+title: Managing Clusters
+sidebar_label: Clusters
+original_id: admin-api-clusters
+---
+
+Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper)
+servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management.
+
+Clusters can be managed via:
+
+* The [`clusters`](reference-pulsar-admin.md#clusters) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool
+* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API
+* The `clusters` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md)
+
+## Clusters resources
+
+### Provision
+
+New clusters can be provisioned using the admin interface.
+
+> Please note that this operation requires superuser privileges.
+
+#### pulsar-admin
+
+You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example:
+
+```shell
+$ pulsar-admin clusters create cluster-1 \
+  --url http://my-cluster.org.com:8080 \
+  --broker-url pulsar://my-cluster.org.com:6650
+```
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster}
+
+#### Java
+
+```java
+ClusterData clusterData = new ClusterData(
+        serviceUrl,
+        serviceUrlTls,
+        brokerServiceUrl,
+        brokerServiceUrlTls
+);
+admin.clusters().createCluster(clusterName, clusterData);
+```
+
+### Initialize cluster metadata
+
+When provisioning a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following:
+
+* The name of the cluster
+* The local ZooKeeper connection string for the cluster
+* The configuration store connection string for the entire instance
+* The web service URL for the cluster
+* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
+
+You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster.
+
+> #### No cluster metadata initialization through the REST API or the Java admin API
+>
+> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API
+> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly.
+> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular
+> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command.
+
+Here's an example cluster metadata initialization command:
+
+```shell
+bin/pulsar initialize-cluster-metadata \
+  --cluster us-west \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2184 \
+  --web-service-url http://pulsar.us-west.example.com:8080/ \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
+```
+
+You need to use the `--*-tls` flags only if you use [TLS authentication](security-tls-authentication.md) in your instance.
+
+### Get configuration
+
+You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time.
+
+#### pulsar-admin
+
+Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. Here's an example:
+
+```shell
+$ pulsar-admin clusters get cluster-1
+{
+    "serviceUrl": "http://my-cluster.org.com:8080/",
+    "serviceUrlTls": null,
+    "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/",
+    "brokerServiceUrlTls": null
+    "peerClusterNames": null
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster}
+
+#### Java
+
+```java
+admin.clusters().getCluster(clusterName);
+```
+
+### Update
+
+You can update the configuration for an existing cluster at any time.
+
+#### pulsar-admin
+
+Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags.
+
+```shell
+$ pulsar-admin clusters update cluster-1 \
+  --url http://my-cluster.org.com:4081 \
+  --broker-url pulsar://my-cluster.org.com:3350
+```
+
+#### REST
+
+{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster}
+
+#### Java
+
+```java
+ClusterData clusterData = new ClusterData(
+        serviceUrl,
+        serviceUrlTls,
+        brokerServiceUrl,
+        brokerServiceUrlTls
+);
+admin.clusters().updateCluster(clusterName, clusterData);
+```
+
+### Delete
+
+Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance).
+
+#### pulsar-admin
+
+Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster.
+
+```
+$ pulsar-admin clusters delete cluster-1
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster}
+
+#### Java
+
+```java
+admin.clusters().deleteCluster(clusterName);
+```
+
+### List
+
+You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance).
+
+#### pulsar-admin
+
+Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand.
+
+```shell
+$ pulsar-admin clusters list
+cluster-1
+cluster-2
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters}
+
+#### Java
+
+```java
+admin.clusters().getClusters();
+```
+
+### Update peer-cluster data
+
+Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance).
+
+#### pulsar-admin
+
+Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names.
+
+```
+$ pulsar-admin clusters update-peer-clusters cluster-1 --peer-clusters cluster-2
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames}
+
+#### Java
+
+```java
+admin.clusters().updatePeerClusterNames(clusterName, peerClusterList);
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-functions.md b/site2/website/versioned_docs/version-2.6.0/admin-api-functions.md
new file mode 100644
index 0000000..6ca8cc9
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-functions.md
@@ -0,0 +1,546 @@
+---
+id: version-2.6.0-admin-api-functions
+title: Manage Functions
+sidebar_label: Functions
+original_id: admin-api-functions
+---
+
+**Pulsar Functions** are lightweight compute processes that
+
+* consume messages from one or more Pulsar topics
+* apply a user-supplied processing logic to each message
+* publish the results of the computation to another topic
+
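+A minimal Java function of this shape, a sketch using the `org.apache.pulsar.functions.api.Function` interface and mirroring the `ExclamationFunction` referenced in the examples below, looks like this:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+    @Override
+    public String process(String input, Context context) {
+        // Append an exclamation mark to each consumed message
+        return input + "!";
+    }
+}
+```
+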
+Functions can be managed via the following methods.
+
+Method | Description
+---|---
+**Admin CLI** | The [`functions`](reference-pulsar-admin.md#functions) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool.
+**REST API** | The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API.
+**Java Admin API**| The `functions` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md).
+
+## Function resources
+
+You can perform the following operations on functions.
+
+### Create a function
+
+You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions create \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) \
+  --inputs test-input-topic \
+  --output persistent://public/default/test-output-topic \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --jar /examples/api-examples.jar
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v3/functions/{tenant}/{namespace}/{functionName}}
+
+#### Java Admin API
+
+```java
+FunctionConfig functionConfig = new FunctionConfig();
+functionConfig.setTenant(tenant);
+functionConfig.setNamespace(namespace);
+functionConfig.setName(functionName);
+functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
+functionConfig.setParallelism(1);
+functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction");
+functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE);
+functionConfig.setTopicsPattern(sourceTopicPattern);
+functionConfig.setSubName(subscriptionName);
+functionConfig.setAutoAck(true);
+functionConfig.setOutput(sinkTopic);
+admin.functions().createFunction(functionConfig, fileName);
+```
+
+### Update a function
+
+You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions update \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) \
+  --output persistent://public/default/update-output-topic \
+  # other options
+```
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v3/functions/{tenant}/{namespace}/{functionName}}
+
+#### Java Admin API
+
+```java
+FunctionConfig functionConfig = new FunctionConfig();
+functionConfig.setTenant(tenant);
+functionConfig.setNamespace(namespace);
+functionConfig.setName(functionName);
+functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
+functionConfig.setParallelism(1);
+functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction");
+UpdateOptions updateOptions = new UpdateOptions();
+updateOptions.setUpdateAuthData(updateAuthData);
+admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions);
+```
+
+### Start an instance of a function
+
+You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. 
+
+```shell
+$ pulsar-admin functions start \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) \
+  --instance-id 1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v3/functions/{tenant}/{namespace}/{functionName}/{instanceId}/start}
+
+#### Java Admin API
+
+```java
+admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId));
+```
+
+### Start all instances of a function
+
+You can start all stopped function instances using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions start \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions)
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v3/functions/{tenant}/{namespace}/{functionName}/start}
+
+#### Java Admin API
+
+```java
+admin.functions().startFunction(tenant, namespace, functionName);
+```
+
+### Stop an instance of a function
+
+You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions stop \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) \
+  --instance-id 1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v3/functions/{tenant}/{namespace}/{functionName}/{instanceId}/stop}
+
+#### Java Admin API
+
+```java
+admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId));
+```
+
+### Stop all instances of a function
+
+You can stop all function instances using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions stop \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions)
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v3/functions/{tenant}/{namespace}/{functionName}/stop}
+
+#### Java Admin API
+
+```java
+admin.functions().stopFunction(tenant, namespace, functionName);
+```
+
+### Restart an instance of a function
+
+Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions restart \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) \
+  --instance-id 1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v3/functions/{tenant}/{namespace}/{functionName}/{instanceId}/restart}
+
+#### Java Admin API
+
+```java
+admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId));
+```
+
+### Restart all instances of a function
+
+You can restart all function instances using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions restart \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions)
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v3/functions/{tenant}/{namespace}/{functionName}/restart}
+
+#### Java Admin API
+
+```java
+admin.functions().restartFunction(tenant, namespace, functionName);
+```
+
+### List all functions
+
+You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand.
+
+**Example**
+
+```shell
+$ pulsar-admin functions list \
+  --tenant public \
+  --namespace default
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v3/functions/{tenant}/{namespace}}
+
+#### Java Admin API
+
+```java
+admin.functions().getFunctions(tenant, namespace);
+```
+
+### Delete a function
+
+You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions delete \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) 
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v3/functions/{tenant}/{namespace}/{functionName}}
+
+#### Java Admin API
+
+```java
+admin.functions().deleteFunction(tenant, namespace, functionName);
+```
+
+### Get info about a function
+
+You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions get \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) 
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v3/functions/{tenant}/{namespace}/{functionName}}
+
+#### Java Admin API
+
+```java
+admin.functions().getFunction(tenant, namespace, functionName);
+```
+
+### Get status of an instance of a function
+
+You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions status \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) \
+  --instance-id 1
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v3/functions/{tenant}/{namespace}/{functionName}/{instanceId}/status}
+
+#### Java Admin API
+
+```java
+admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId));
+```
+
+### Get status of all instances of a function
+
+You can get the current status of all instances of a Pulsar function using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions status \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) 
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v3/functions/{tenant}/{namespace}/{functionName}/status}
+
+#### Java Admin API
+
+```java
+admin.functions().getFunctionStatus(tenant, namespace, functionName);
+```
+
+### Get stats of an instance of a function
+
+You can get the current stats of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions stats \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) \
+  --instance-id 1
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v3/functions/{tenant}/{namespace}/{functionName}/{instanceId}/stats}
+
+#### Java Admin API
+
+```java
+admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId));
+```
+
+### Get stats of all instances of a function
+
+You can get the current stats of a Pulsar function using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions stats \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) 
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v3/functions/{tenant}/{namespace}/{functionName}/stats}
+
+#### Java Admin API
+
+```java
+admin.functions().getFunctionStats(tenant, namespace, functionName);
+```
+
+### Trigger a function
+
+You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java Admin API.
+
+#### Admin CLI
+
+Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions trigger \
+  --tenant public \
+  --namespace default \
+  --name (the name of Pulsar Functions) \
+  --topic (the name of input topic) \
+  --trigger-value "hello pulsar"
+  # or --trigger-file (the path of trigger file)
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v3/functions/{tenant}/{namespace}/{functionName}/trigger}
+
+#### Java Admin API
+
+```java
+admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile);
+```
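+
+For example, a minimal sketch that triggers a hypothetical function `my-function` with a string value (assuming an initialized `admin` client; the function and topic names are illustrative); pass `null` for the trigger file when a trigger value is supplied:
+
+```java
+// tenant, namespace, function name, and input topic are illustrative values
+String result = admin.functions().triggerFunction(
+        "public", "default", "my-function",
+        "my-input-topic", "hello pulsar", null /* no trigger file */);
+System.out.println(result);
+```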
+
+### Put state associated with a function
+
+You can put the state associated with a Pulsar Function using the Admin CLI, REST API, or Java Admin API.
+
+#### Admin CLI
+
+Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions putstate \
+  --tenant public \
+  --namespace default \
+  --name (the name of the Pulsar Function) \
+  --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}" 
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v3/functions/{tenant}/{namespace}/{functionName}/state/{key}
+
+#### Java Admin API
+
+```java
+// Deserialize the JSON state string, e.g. {"key":"pulsar", "stringValue":"hello pulsar"},
+// into a FunctionState object, then store it for the function.
+TypeReference<FunctionState> typeRef = new TypeReference<FunctionState>() {};
+FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef);
+admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr);
+```
+
+### Fetch state associated with a function
+
+You can fetch the current state associated with a Pulsar Function using the Admin CLI, REST API, or Java Admin API.
+
+#### Admin CLI
+
+Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand. 
+
+**Example**
+
+```shell
+$ pulsar-admin functions querystate \
+  --tenant public \
+  --namespace default \
+  --name (the name of the Pulsar Function) \
+  --key (the key of the state)
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v3/functions/{tenant}/{namespace}/{functionName}/state/{key}
+
+#### Java Admin API
+
+```java
+admin.functions().getFunctionState(tenant, namespace, functionName, key);
+```
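+
+For example, a minimal sketch that reads a state value back and prints it (assuming an initialized `admin` client; the function name and key are illustrative):
+
+```java
+import org.apache.pulsar.common.functions.FunctionState;
+
+FunctionState state = admin.functions().getFunctionState(
+        "public", "default", "my-function", "pulsar");
+if (state != null) {
+    System.out.println(state.getStringValue());
+}
+```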
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.6.0/admin-api-namespaces.md
new file mode 100644
index 0000000..9b4251d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-namespaces.md
@@ -0,0 +1,759 @@
+---
+id: version-2.6.0-admin-api-namespaces
+title: Managing Namespaces
+sidebar_label: Namespaces
+original_id: admin-api-namespaces
+---
+
+Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic).
+
+Namespaces can be managed via:
+
+* The [`namespaces`](reference-pulsar-admin.md#namespaces) command of the [`pulsar-admin`](reference-pulsar-admin.md) tool
+* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API
+* The `namespaces` method of the {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object in the [Java API](client-libraries-java.md)
+
+## Namespaces resources
+
+### Create
+
+You can create new namespaces under a given [tenant](reference-terminology.md#tenant).
+
+#### pulsar-admin
+
+Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name:
+
+```shell
+$ pulsar-admin namespaces create test-tenant/test-namespace
+```
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace}
+
+#### Java
+
+```java
+admin.namespaces().createNamespace(namespace);
+```
+
+### Get policies
+
+You can fetch the current policies associated with a namespace at any time.
+
+#### pulsar-admin
+
+Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace:
+
+```shell
+$ pulsar-admin namespaces policies test-tenant/test-namespace
+{
+  "auth_policies": {
+    "namespace_auth": {},
+    "destination_auth": {}
+  },
+  "replication_clusters": [],
+  "bundles_activated": true,
+  "bundles": {
+    "boundaries": [
+      "0x00000000",
+      "0xffffffff"
+    ],
+    "numBundles": 1
+  },
+  "backlog_quota_map": {},
+  "persistence": null,
+  "latency_stats_sample_rate": {},
+  "message_ttl_in_seconds": 0,
+  "retention_policies": null,
+  "deleted": false
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies}
+
+#### Java
+
+```java
+admin.namespaces().getPolicies(namespace);
+```
+
+### List namespaces within a tenant
+
+You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant).
+
+#### pulsar-admin
+
+Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant:
+
+```shell
+$ pulsar-admin namespaces list test-tenant
+test-tenant/ns1
+test-tenant/ns2
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces}
+
+#### Java
+
+```java
+admin.namespaces().getNamespaces(tenant);
+```
+
+
+### Delete
+
+You can delete existing namespaces from a tenant.
+
+#### pulsar-admin
+
+Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace:
+
+```shell
+$ pulsar-admin namespaces delete test-tenant/ns1
+```
+
+#### REST
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace}
+
+#### Java
+
+```java
+admin.namespaces().deleteNamespace(namespace);
+```
+
+
+#### set replication cluster
+
+It sets the replication clusters for a namespace, so that Pulsar can internally replicate published messages from one colo to another.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-clusters test-tenant/ns1 \
+  --clusters cl1
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters}
+```
+
+###### Java
+
+```java
+admin.namespaces().setNamespaceReplicationClusters(namespace, clusters);
+```
+
+#### get replication cluster
+
+It lists the replication clusters configured for a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-clusters test-tenant/ns1
+```
+
+```
+cl1
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/replication|operation/getNamespaceReplicationClusters}
+```
+
+###### Java
+
+```java
+admin.namespaces().getNamespaceReplicationClusters(namespace)
+```
+
+#### set backlog quota policies
+
+A backlog quota helps the broker restrict the bandwidth/storage of a namespace once it reaches a certain threshold limit. Admins can set this limit and one of the following actions to take after the limit is reached:
+
+  1.  producer_request_hold: the broker holds and does not persist produce request payloads
+
+  2.  producer_exception: the broker disconnects the client by throwing an exception
+
+  3.  consumer_backlog_eviction: the broker starts discarding backlog messages
+
+  The backlog quota restriction can be applied by defining a restriction of backlog-quota-type: destination_storage
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-backlog-quota --limit 10 --policy producer_request_hold test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/backlogQuota|operation/setBacklogQuota}
+```
+
+###### Java
+
+```java
+admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, policy))
+```
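+
+For a more concrete sketch (assuming an initialized `admin` client; `BacklogQuota` here is the `org.apache.pulsar.common.policies.data.BacklogQuota` class, which takes the limit in bytes and a retention policy):
+
+```java
+import org.apache.pulsar.common.policies.data.BacklogQuota;
+
+// limit the backlog to 10 MB and hold produce requests once the limit is reached
+admin.namespaces().setBacklogQuota("test-tenant/ns1",
+        new BacklogQuota(10 * 1024 * 1024,
+                BacklogQuota.RetentionPolicy.producer_request_hold));
+```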
+
+#### get backlog quota policies
+
+It shows a configured backlog quota for a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1
+```
+
+```json
+{
+  "destination_storage": {
+    "limit": 10,
+    "policy": "producer_request_hold"
+  }
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/backlogQuotaMap|operation/getBacklogQuotaMap}
+```
+
+###### Java
+
+```java
+admin.namespaces().getBacklogQuotaMap(namespace);
+```
+
+#### remove backlog quota policies
+
+It removes the backlog quota policies for a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|DELETE|/admin/v2/namespaces/{tenant}/{namespace}/backlogQuota|operation/removeBacklogQuota}
+```
+
+###### Java
+
+```java
+admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType)
+```
+
+#### set persistence policies
+
+Persistence policies allow you to configure the persistence level for all topic messages under a given namespace.
+
+  -   Bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0
+
+  -   Bookkeeper-ensemble: Number of bookies to use for a topic, default: 0
+
+  -   Bookkeeper-write-quorum: How many writes to make of each entry, default: 0
+
+  -   Ml-mark-delete-max-rate: Throttling rate of mark-delete operation (0 means no throttle), default: 0.0
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/persistence|operation/setPersistence}
+```
+
+###### Java
+
+```java
+admin.namespaces().setPersistence(namespace, new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum, bookkeeperAckQuorum, managedLedgerMaxMarkDeleteRate))
+```
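+
+A concrete sketch matching the CLI example above (assuming an initialized `admin` client; `PersistencePolicies` lives in `org.apache.pulsar.common.policies.data`):
+
+```java
+import org.apache.pulsar.common.policies.data.PersistencePolicies;
+
+// ensemble of 3 bookies, write quorum of 2, ack quorum of 2, no mark-delete throttling
+admin.namespaces().setPersistence("test-tenant/ns1",
+        new PersistencePolicies(3, 2, 2, 0));
+```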
+
+
+#### get persistence policies
+
+It shows the configured persistence policies of a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-persistence test-tenant/ns1
+```
+
+```json
+{
+  "bookkeeperEnsemble": 3,
+  "bookkeeperWriteQuorum": 2,
+  "bookkeeperAckQuorum": 2,
+  "managedLedgerMaxMarkDeleteRate": 0
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/persistence|operation/getPersistence}
+```
+
+###### Java
+
+```java
+admin.namespaces().getPersistence(namespace)
+```
+
+
+#### unload namespace bundle
+
+A namespace bundle is a virtual group of topics that belong to the same namespace. If a broker gets overloaded with a large number of bundles, this command can help unload a heavy bundle from that broker, so that it can be served by some other, less-loaded broker. A namespace bundle is defined by its start and end range, such as 0x00000000 and 0xffffffff.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|PUT|/admin/v2/namespaces/{tenant}/{namespace}/{bundle}/unload|operation/unloadNamespaceBundle}
+```
+
+###### Java
+
+```java
+admin.namespaces().unloadNamespaceBundle(namespace, bundle)
+```
+
+
+#### set message-ttl
+
+It configures the time-to-live duration (in seconds) for messages in the namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/messageTTL|operation/setNamespaceMessageTTL}
+```
+
+###### Java
+
+```java
+admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)
+```
+
+#### get message-ttl
+
+It shows the configured message TTL of a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-message-ttl test-tenant/ns1
+```
+
+```
+100
+```
+
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/messageTTL|operation/getNamespaceMessageTTL}
+```
+
+###### Java
+
+```java
+admin.namespaces().getNamespaceMessageTTL(namespace)
+```
+
+
+#### split bundle
+
+Each namespace bundle can contain multiple topics, and each bundle can be served by only one broker. If a bundle becomes heavy with multiple live topics, it creates load on that broker; to resolve this, an admin can split the bundle using this command.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|PUT|/admin/v2/namespaces/{tenant}/{namespace}/{bundle}/split|operation/splitNamespaceBundle}
+```
+
+###### Java
+
+```java
+admin.namespaces().splitNamespaceBundle(namespace, bundle)
+```
+
+
+#### clear backlog
+
+It clears the message backlog for all topics that belong to a specific namespace. You can also clear the backlog for a specific subscription.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/clearBacklog|operation/clearNamespaceBacklogForSubscription}
+```
+
+###### Java
+
+```java
+admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)
+```
+
+
+#### clear bundle backlog
+
+It clears the message backlog for all topics that belong to a specific namespace bundle. You can also clear the backlog for a specific subscription.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces clear-backlog  --bundle 0x00000000_0xffffffff  --sub my-subscription test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/{bundle}/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription}
+```
+
+###### Java
+
+```java
+admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)
+```
+
+
+#### set retention
+
+Each namespace contains multiple topics, and each topic's retention size (storage size) should not exceed a specific threshold, or messages should only be stored for a certain duration. This command configures the retention size and time of topics in a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1
+```
+
+```
+N/A
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/retention|operation/setRetention}
+```
+
+###### Java
+
+```java
+admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB))
+```
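+
+Note the argument order in the Java API: retention time (in minutes) comes first, then retention size (in MB). A concrete sketch (assuming an initialized `admin` client) matching the `get-retention` output shown below:
+
+```java
+import org.apache.pulsar.common.policies.data.RetentionPolicies;
+
+// retention time of 10 minutes, retention size of 100 MB
+admin.namespaces().setRetention("test-tenant/ns1",
+        new RetentionPolicies(10, 100));
+```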
+
+
+#### get retention
+
+It shows retention information of a given namespace.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-retention test-tenant/ns1
+```
+
+```json
+{
+  "retentionTimeInMinutes": 10,
+  "retentionSizeInMB": 100
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/retention|operation/getRetention}
+```
+
+###### Java
+
+```java
+admin.namespaces().getRetention(namespace)
+```
+
+#### set dispatch throttling
+
+It sets the message dispatch rate for all topics under a given namespace.
+The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message bytes per X seconds (`byte-dispatch-rate`).
+The rate period X is expressed in seconds and configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
+disables the throttling.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \
+  --msg-dispatch-rate 1000 \
+  --byte-dispatch-rate 1048576 \
+  --dispatch-rate-period 1
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/dispatchRate|operation/setDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
+```
+
+#### get configured message-rate
+
+It shows the configured message rate for the namespace (topics under this namespace can dispatch this many messages per second).
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1
+```
+
+```json
+{
+  "dispatchThrottlingRatePerTopicInMsg" : 1000,
+  "dispatchThrottlingRatePerTopicInByte" : 1048576,
+  "ratePeriodInSecond" : 1
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/dispatchRate|operation/getDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().getDispatchRate(namespace)
+```
+
+
+#### set dispatch throttling for subscription
+
+It sets the message dispatch rate for all subscriptions of topics under a given namespace.
+The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message bytes per X seconds (`byte-dispatch-rate`).
+The rate period X is expressed in seconds and configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
+disables the throttling.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \
+  --msg-dispatch-rate 1000 \
+  --byte-dispatch-rate 1048576 \
+  --dispatch-rate-period 1
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/subscriptionDispatchRate|operation/setDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
+```
+
+#### get configured subscription message-rate
+
+It shows the configured message rate for subscriptions in the namespace (subscriptions under this namespace can dispatch this many messages per second).
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1
+```
+
+```json
+{
+  "dispatchThrottlingRatePerTopicInMsg" : 1000,
+  "dispatchThrottlingRatePerTopicInByte" : 1048576,
+  "ratePeriodInSecond" : 1
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/subscriptionDispatchRate|operation/getDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().getSubscriptionDispatchRate(namespace)
+```
+
+#### set dispatch throttling for replicator
+
+It sets the message dispatch rate for all replicators between replication clusters under a given namespace.
+The dispatch rate can be restricted by the number of messages per X seconds (`msg-dispatch-rate`) or by the number of message bytes per X seconds (`byte-dispatch-rate`).
+The rate period X is expressed in seconds and configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
+disables the throttling.
+
+###### CLI
+
+```
+$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \
+  --msg-dispatch-rate 1000 \
+  --byte-dispatch-rate 1048576 \
+  --dispatch-rate-period 1
+```
+
+###### REST
+
+```
+{@inject: endpoint|POST|/admin/v2/namespaces/{tenant}/{namespace}/replicatorDispatchRate|operation/setDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
+```
+
+#### get configured replicator message-rate
+
+It shows the configured message rate for replicators in the namespace (replicators under this namespace can dispatch this many messages per second).
+
+###### CLI
+
+```
+$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1
+```
+
+```json
+{
+  "dispatchThrottlingRatePerTopicInMsg" : 1000,
+  "dispatchThrottlingRatePerTopicInByte" : 1048576,
+  "ratePeriodInSecond" : 1
+}
+```
+
+###### REST
+
+```
+{@inject: endpoint|GET|/admin/v2/namespaces/{tenant}/{namespace}/replicatorDispatchRate|operation/getDispatchRate}
+```
+
+###### Java
+
+```java
+admin.namespaces().getReplicatorDispatchRate(namespace)
+```
+
+### Namespace isolation
+
+Coming soon.
+
+### Unloading from a broker
+
+You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it.
+
+#### pulsar-admin
+
+Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command.
+
+###### CLI
+
+```shell
+$ pulsar-admin namespaces unload my-tenant/my-ns
+```
+
+###### REST
+
+```
+{@inject: endpoint|PUT|/admin/v2/namespaces/{tenant}/{namespace}/unload|operation/unloadNamespace}
+```
+
+###### Java
+
+```java
+admin.namespaces().unload(namespace)
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.6.0/admin-api-non-partitioned-topics.md
new file mode 100644
index 0000000..8625c0c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-non-partitioned-topics.md
@@ -0,0 +1,160 @@
+---
+id: version-2.6.0-admin-api-non-partitioned-topics
+title: Managing non-partitioned topics
+sidebar_label: Non-Partitioned topics
+original_id: admin-api-non-partitioned-topics
+---
+
+
+You can use Pulsar's [admin API](admin-api-overview.md) to create and manage non-partitioned topics.
+
+In all of the instructions and commands below, the topic name structure is:
+
+```shell
+persistent://tenant/namespace/topic
+```
+
+## Non-Partitioned topics resources
+
+### Create
+
+Non-partitioned topics in Pulsar must be explicitly created. When creating a new non-partitioned topic, you
+need to provide a name for the topic.
+
+> #### Note
+>
+> By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to avoid generating trash data.
+>
+> To disable this feature, set `brokerDeleteInactiveTopicsEnabled`  to `false`.
+>
+> To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
+>
+> For more information about these two parameters, see [here](reference-configuration.md#broker).
+
+#### pulsar-admin
+
+You can create non-partitioned topics using the [`create`](reference-pulsar-admin.md#create-3)
+command and specifying the topic name as an argument.
+Here's an example:
+
+```shell
+$ bin/pulsar-admin topics create \
+  persistent://my-tenant/my-namespace/my-topic
+```
+
+> #### Note
+>
+> Creating a non-partitioned topic whose name contains the suffix '-partition-' followed by a numeric value, such as
+> 'xyz-topic-partition-10', is only allowed if a partitioned topic with the same base name ('xyz-topic' in this case) already exists
+> and has a number of partitions larger than that numeric value (11 in this case, since partition indexes start from 0). Otherwise, creation of such a topic fails.
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic|operation/createNonPartitionedTopic}
+
+#### Java
+
+```java
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().createNonPartitionedTopic(topicName);
+```
+
+### Delete
+
+#### pulsar-admin
+
+Non-partitioned topics can be deleted using the
+[`delete`](reference-pulsar-admin.md#delete-4) command, specifying the topic by name:
+
+```shell
+$ bin/pulsar-admin topics delete \
+  persistent://my-tenant/my-namespace/my-topic
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/persistent/:tenant/:namespace/:topic|operation/deleteTopic}
+
+#### Java
+
+```java
+admin.topics().delete(persistentTopic);
+```
+
+### List
+
+It provides a list of the topics that exist under a given namespace.
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin topics list tenant/namespace
+persistent://tenant/namespace/topic1
+persistent://tenant/namespace/topic2
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace|operation/getList}
+
+#### Java
+
+```java
+admin.topics().getList(namespace);
+```
+
+### Stats
+
+It shows the current statistics of a given topic. The following stats are available:
+
+|Stat|Description|
+|----|-----------|
+|msgRateIn|The sum of all local and replication publishers’ publish rates in messages per second|
+|msgThroughputIn|Same as msgRateIn but in bytes per second instead of messages per second|
+|msgRateOut|The sum of all local and replication consumers’ dispatch rates in messages per second|
+|msgThroughputOut|Same as msgRateOut but in bytes per second instead of messages per second|
+|averageMsgSize|Average message size, in bytes, from this publisher within the last interval|
+|storageSize|The sum of the ledgers’ storage size for this topic|
+|publishers|The list of all local publishers into the topic. There can be anywhere from zero to thousands.|
+|producerId|Internal identifier for this producer on this topic|
+|producerName|Internal identifier for this producer, generated by the client library|
+|address|IP address and source port for the connection of this producer|
+|connectedSince|Timestamp this producer was created or last reconnected|
+|subscriptions|The list of all local subscriptions to the topic|
+|my-subscription|The name of this subscription (client defined)|
+|msgBacklog|The count of messages in backlog for this subscription|
+|msgBacklogNoDelayed|The count of messages in backlog without delayed messages for this subscription|
+|type|This subscription type|
+|msgRateExpired|The rate at which messages were discarded instead of dispatched from this subscription due to TTL|
+|consumers|The list of connected consumers for this subscription|
+|consumerName|Internal identifier for this consumer, generated by the client library|
+|availablePermits|The number of messages this consumer has space for in the client library’s listen queue. A value of 0 means the client library’s queue is full and receive() isn’t being called. A nonzero value means this consumer is ready to be dispatched messages.|
+|replication|This section gives the stats for cross-colo replication of this topic|
+|replicationBacklog|The outbound replication backlog in messages|
+|connected|Whether the outbound replicator is connected|
+|replicationDelayInSeconds|How long the oldest message has been waiting to be sent through the connection, if connected is true|
+|inboundConnection|The IP and port of the broker in the remote cluster’s publisher connection to this broker|
+|inboundConnectedSince|The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.|
+
+#### pulsar-admin
+
+The stats for the topic and its connected producers and consumers can be fetched by using the
+[`stats`](reference-pulsar-admin.md#stats) command, specifying the topic by name:
+
+```shell
+$ pulsar-admin topics stats \
+  persistent://test-tenant/namespace/topic \
+  --get-precise-backlog
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/stats|operation/getStats}
+
+#### Java
+
+```java
+admin.topics().getStats(persistentTopic, false /* is precise backlog */);
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.6.0/admin-api-non-persistent-topics.md
new file mode 100644
index 0000000..2dc9681
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-non-persistent-topics.md
@@ -0,0 +1,264 @@
+---
+id: version-2.6.0-admin-api-non-persistent-topics
+title: Managing non-persistent topics
+sidebar_label: Non-Persistent topics
+original_id: admin-api-non-persistent-topics
+---
+
+Non-persistent topics can be used in applications that only need to consume messages published in real time and
+do not need persistence guarantees. Using them can also reduce message-publish latency by removing the overhead of
+persisting messages.
+
+In all of the instructions and commands below, the topic name structure is:
+
+```shell
+non-persistent://tenant/namespace/topic
+```
+
+## Non-persistent topics resources
+
+### Get stats
+
+It shows current statistics of a given non-partitioned topic.
+
+  -   **msgRateIn**: The sum of all local and replication publishers' publish rates in messages per second
+
+  -   **msgThroughputIn**: Same as above, but in bytes per second instead of messages per second
+
+  -   **msgRateOut**: The sum of all local and replication consumers' dispatch rates in messages per second
+
+  -   **msgThroughputOut**: Same as above, but in bytes per second instead of messages per second
+
+  -   **averageMsgSize**: The average size in bytes of messages published within the last interval
+
+  -   **publishers**: The list of all local publishers into the topic. There can be zero or thousands
+
+  -   **averageMsgSize**: Average message size in bytes from this publisher within the last interval
+
+  -   **producerId**: Internal identifier for this producer on this topic
+
+  -   **producerName**: Internal identifier for this producer, generated by the client library
+
+  -   **address**: IP address and source port for the connection of this producer
+
+  -   **connectedSince**: Timestamp this producer was created or last reconnected
+
+  -   **subscriptions**: The list of all local subscriptions to the topic
+
+  -   **my-subscription**: The name of this subscription (client defined)
+
+  -   **type**: This subscription type
+
+  -   **consumers**: The list of connected consumers for this subscription
+
+  -   **consumerName**: Internal identifier for this consumer, generated by the client library
+
+  -   **availablePermits**: The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and receive() isn't being called. A nonzero value means this consumer is ready to be dispatched messages.
+
+  -   **replication**: This section gives the stats for cross-colo replication of this topic
+
+  -   **connected**: Whether the outbound replicator is connected
+
+  -   **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker
+
+  -   **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.
+
+  -   **msgDropRate**: The rate at which the broker drops messages. For publishers, the broker only allows a configured number of in-flight messages per connection and drops all published messages above that threshold. The broker also drops messages for subscriptions when the dispatch limit is exhausted or the connection is not writable.
+
+
+```json
+{
+  "msgRateIn": 4641.528542257553,
+  "msgThroughputIn": 44663039.74947473,
+  "msgRateOut": 0,
+  "msgThroughputOut": 0,
+  "averageMsgSize": 1232439.816728665,
+  "storageSize": 135532389160,
+  "msgDropRate" : 0.0,
+  "publishers": [
+    {
+      "msgRateIn": 57.855383881403576,
+      "msgThroughputIn": 558994.7078932219,
+      "averageMsgSize": 613135,
+      "producerId": 0,
+      "producerName": null,
+      "address": null,
+      "connectedSince": null,
+      "msgDropRate" : 0.0
+    }
+  ],
+  "subscriptions": {
+    "my-topic_subscription": {
+      "msgRateOut": 0,
+      "msgThroughputOut": 0,
+      "msgBacklog": 116632,
+      "type": null,
+      "msgRateExpired": 36.98245516804671,
+       "consumers" : [ {
+        "msgRateOut" : 20343.506296021893,
+        "msgThroughputOut" : 2.0979855364233278E7,
+        "msgRateRedeliver" : 0.0,
+        "consumerName" : "fe3c0",
+        "availablePermits" : 950,
+        "unackedMessages" : 0,
+        "blockedConsumerOnUnackedMsgs" : false,
+        "address" : "/10.73.210.249:60578",
+        "connectedSince" : "2017-07-26 15:13:48.026-0700",
+        "clientVersion" : "1.19-incubating-SNAPSHOT"
+      } ],
+      "msgDropRate" : 432.2390921571593
+
+    }
+  },
+  "replication": {}
+}
+```
+
+#### pulsar-admin
+
+Topic stats can be fetched using [`stats`](reference-pulsar-admin.md#stats) command.
+
+```shell
+$ pulsar-admin non-persistent stats \
+  non-persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/non-persistent/:tenant/:namespace/:topic/stats|operation/getStats}
+
+
+#### Java
+
+```java
+String topic = "non-persistent://my-tenant/my-namespace/my-topic";
+admin.nonPersistentTopics().getStats(topic);
+```
+
+### Get internal stats
+
+It shows detailed statistics of a topic.
+
+#### pulsar-admin
+
+Topic internal-stats can be fetched using [`stats-internal`](reference-pulsar-admin.md#stats-internal) command.
+
+```shell
+$ pulsar-admin non-persistent stats-internal \
+  non-persistent://test-tenant/ns1/tp1
+
+{
+  "entriesAddedCounter" : 48834,
+  "numberOfEntries" : 0,
+  "totalSize" : 0,
+  "cursors" : {
+    "s1" : {
+      "waitingReadOp" : false,
+      "pendingReadOps" : 0,
+      "messagesConsumedCounter" : 0,
+      "cursorLedger" : 0,
+      "cursorLedgerLastEntry" : 0
+    }
+  }
+}
+
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/non-persistent/:tenant/:namespace/:topic/internalStats|operation/getInternalStats}
+
+#### Java
+
+```java
+String topic = "non-persistent://my-tenant/my-namespace/my-topic";
+admin.nonPersistentTopics().getInternalStats(topic);
+```
+
+### Create partitioned topic
+
+Partitioned topics in Pulsar must be explicitly created. When creating a new partitioned topic, you need to provide a name for the topic as well as the desired number of partitions.
+
+> #### Note
+>
+> By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to avoid generating trash data.
+>
+> To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
+>
+> To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
+>
+> For more information about these two parameters, see [here](reference-configuration.md#broker).
+
+#### pulsar-admin
+
+```shell
+$ bin/pulsar-admin non-persistent create-partitioned-topic \
+  non-persistent://my-tenant/my-namespace/my-topic \
+  --partitions 4
+```
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/non-persistent/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic}
+
+#### Java
+
+```java
+String topicName = "non-persistent://my-tenant/my-namespace/my-topic";
+int numPartitions = 4;
+admin.nonPersistentTopics().createPartitionedTopic(topicName, numPartitions);
+```
+
+### Get metadata
+
+Partitioned topics have metadata associated with them that you can fetch as a JSON object. The following metadata fields are currently available:
+
+Field | Meaning
+:-----|:-------
+`partitions` | The number of partitions into which the topic is divided
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin non-persistent get-partitioned-topic-metadata \
+  non-persistent://my-tenant/my-namespace/my-topic
+{
+  "partitions": 4
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/non-persistent/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata}
+
+
+#### Java
+
+```java
+String topicName = "non-persistent://my-tenant/my-namespace/my-topic";
+admin.nonPersistentTopics().getPartitionedTopicMetadata(topicName);
+```
+
+### Unload topic
+
+It unloads a topic.
+
+#### pulsar-admin
+
+Topic can be unloaded using [`unload`](reference-pulsar-admin.md#unload) command.
+
+```shell
+$ pulsar-admin non-persistent unload \
+  non-persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/non-persistent/:tenant/:namespace/:topic/unload|operation/unloadTopic}
+
+#### Java
+
+```java
+String topic = "non-persistent://my-tenant/my-namespace/my-topic";
+admin.nonPersistentTopics().unload(topic);
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-overview.md b/site2/website/versioned_docs/version-2.6.0/admin-api-overview.md
new file mode 100644
index 0000000..a961bf9
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-overview.md
@@ -0,0 +1,89 @@
+---
+id: version-2.6.0-admin-api-overview
+title: The Pulsar admin interface
+sidebar_label: Overview
+original_id: admin-api-overview
+---
+
+The Pulsar admin interface enables you to manage all of the important entities in a Pulsar [instance](reference-terminology.md#instance), such as [tenants](reference-terminology.md#tenant), [topics](reference-terminology.md#topic), and [namespaces](reference-terminology.md#namespace).
+
+You can currently interact with the admin interface via:
+
+- Making HTTP calls against the admin {@inject: rest:REST:/} API provided by Pulsar [brokers](reference-terminology.md#broker). Some REST API calls may be redirected to the broker that owns the topic and served
+   with a [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), so HTTP callers should handle `307 Temporary Redirect` responses. If you are using `curl`, specify `-L`
+   to follow redirections.
+- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your [Pulsar installation](getting-started-standalone.md):
+
+```shell
+$ bin/pulsar-admin
+```
+
+Full documentation for this tool can be found in the [Pulsar command-line tools](reference-pulsar-admin.md) doc.
+
+- A Java client interface.
+
+> #### The REST API is the admin interface
+> Under the hood, both the `pulsar-admin` CLI tool and the Java client use the REST API. If you'd like to implement your own admin interface client, you should use the REST API as well. Full documentation can be found in this reference {@inject: rest:document:/}.
+
+In this document, examples from each of the three available interfaces will be shown.
+
+## Admin setup
+
+Each of Pulsar's three admin interfaces---the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool, the [Java admin API](/api/admin), and the {@inject: rest:REST:/} API ---requires some special setup if you have [authentication](security-overview.md#authentication-providers) enabled in your Pulsar [instance](reference-terminology.md#instance).
+
+### pulsar-admin
+
+If you have [authentication](security-overview.md#authentication-providers) enabled, you will need to provide an auth configuration to use the [`pulsar-admin`](reference-pulsar-admin.md) tool. By default, the configuration for the `pulsar-admin` tool is found in the [`conf/client.conf`](reference-configuration.md#client) file. Here are the available parameters:
+
+|Name|Description|Default|
+|----|-----------|-------|
+|webServiceUrl|The web URL for the cluster.|http://localhost:8080/|
+|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/|
+|authPlugin|The authentication plugin.| |
+|authParams|The authentication parameters for the cluster, as a comma-separated string.| |
+|useTls|Whether or not TLS authentication will be enforced in the cluster.|false|
+|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false|
+|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| |
+
+### REST API
+
+You can find documentation for the REST API exposed by Pulsar [brokers](reference-terminology.md#broker) in this reference {@inject: rest:document:/}.
+
+### Java admin client
+
+To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, specifying a URL for a Pulsar [broker](reference-terminology.md#broker) and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. Here's a minimal example using `localhost`:
+
+```java
+String url = "http://localhost:8080";
+// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
+String authPluginClassName = "com.org.MyAuthPluginClass";
+// Pass auth-param if auth-plugin class requires it
+String authParams = "param1=value1";
+boolean useTls = false;
+boolean tlsAllowInsecureConnection = false;
+String tlsTrustCertsFilePath = null;
+PulsarAdmin admin = PulsarAdmin.builder()
+        .authentication(authPluginClassName, authParams)
+        .serviceHttpUrl(url)
+        .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
+        .allowTlsInsecureConnection(tlsAllowInsecureConnection)
+        .build();
+```
+
+If you have multiple brokers, you can specify multiple hosts in the service URL, much like a Pulsar service URL. For example,
+```java
+String url = "http://localhost:8080,localhost:8081,localhost:8082";
+// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
+String authPluginClassName = "com.org.MyAuthPluginClass";
+// Pass auth-param if auth-plugin class requires it
+String authParams = "param1=value1";
+boolean useTls = false;
+boolean tlsAllowInsecureConnection = false;
+String tlsTrustCertsFilePath = null;
+PulsarAdmin admin = PulsarAdmin.builder()
+        .authentication(authPluginClassName, authParams)
+        .serviceHttpUrl(url)
+        .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
+        .allowTlsInsecureConnection(tlsAllowInsecureConnection)
+        .build();
+```
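+
+Once built, the admin client should be closed to release its resources. A minimal end-to-end sketch (assuming a broker reachable at `localhost:8080`; `PulsarAdmin` is `Closeable`, so try-with-resources works):
+
+```java
+import org.apache.pulsar.client.admin.PulsarAdmin;
+
+try (PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build()) {
+    // list the namespaces of the "public" tenant
+    admin.namespaces().getNamespaces("public").forEach(System.out::println);
+}
+```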
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.6.0/admin-api-partitioned-topics.md
new file mode 100644
index 0000000..00ee62f
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-partitioned-topics.md
@@ -0,0 +1,377 @@
+---
+id: version-2.6.0-admin-api-partitioned-topics
+title: Managing partitioned topics
+sidebar_label: Partitioned topics
+original_id: admin-api-partitioned-topics
+---
+
+
+You can use Pulsar's [admin API](admin-api-overview.md) to create and manage partitioned topics.
+
+In all of the instructions and commands below, the topic name structure is:
+
+```shell
+persistent://tenant/namespace/topic
+```
+
+## Partitioned topics resources
+
+### Create
+
+Partitioned topics in Pulsar must be explicitly created. When creating a new partitioned topic, you
+need to provide a name for the topic as well as the desired number of partitions.
+
+> #### Note
+>
+> By default, after 60 seconds of creation, topics are considered inactive and deleted automatically to avoid generating trash data.
+>
+> To disable this feature, set `brokerDeleteInactiveTopicsEnabled`  to `false`.
+>
+> To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
+>
+> For more information about these two parameters, see [here](reference-configuration.md#broker).
+
+#### pulsar-admin
+
+You can create partitioned topics using the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic)
+command and specifying the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag.
+
+Here's an example:
+
+```shell
+$ bin/pulsar-admin topics create-partitioned-topic \
+  persistent://my-tenant/my-namespace/my-topic \
+  --partitions 4
+```
+
+> #### Note
+>
+> If a non-partitioned topic with a name containing the suffix '-partition-' followed by a numeric value, such as
+> 'xyz-topic-partition-10', already exists, then you cannot create a partitioned topic with the name 'xyz-topic', as the partitions
+> of the partitioned topic could override the existing non-partitioned topic. You have to delete that non-partitioned
+> topic first, then create the partitioned topic.
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic}
+
+#### Java
+
+```java
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+int numPartitions = 4;
+admin.persistentTopics().createPartitionedTopic(topicName, numPartitions);
+```
+
+### Create missed partitions
+
+Tries to create the partitions for a partitioned topic. The partitions of a partitioned topic have to be created explicitly;
+this command can be used to repair a partitioned topic with missing partitions when topic auto-creation is disabled.
+
+#### pulsar-admin
+
+You can create missed partitions using the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions)
+command and specifying the topic name as an argument.
+
+Here's an example:
+
+```shell
+$ bin/pulsar-admin topics create-missed-partitions \
+  persistent://my-tenant/my-namespace/my-topic
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic|operation/createMissedPartitions}
+
+#### Java
+
+```java
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().createMissedPartitions(topicName);
+```
+
+### Get metadata
+
+Partitioned topics have metadata associated with them that you can fetch as a JSON object.
+The following metadata fields are currently available:
+
+Field | Meaning
+:-----|:-------
+`partitions` | The number of partitions into which the topic is divided
+
+#### pulsar-admin
+
+You can see the number of partitions in a partitioned topic using the
+[`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata)
+subcommand. Here's an example:
+
+```shell
+$ pulsar-admin topics get-partitioned-topic-metadata \
+  persistent://my-tenant/my-namespace/my-topic
+{
+  "partitions": 4
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata}
+
+#### Java
+
+```java
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getPartitionedTopicMetadata(topicName);
+```
+
+### Update
+
+You can update the number of partitions on an existing partitioned topic
+*if* the topic is non-global. To update, the new number of partitions must be greater
+than the existing number.
+
+Decrementing the number of partitions would effectively delete the topic, which is not supported in Pulsar.
+
+Producers and consumers that are already attached to the partitioned topic will automatically discover the newly created partitions.
+
+#### pulsar-admin
+
+Partitioned topics can be updated using the
+[`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command.
+
+```shell
+$ pulsar-admin topics update-partitioned-topic \
+  persistent://my-tenant/my-namespace/my-topic \
+  --partitions 8
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic/partitions|operation/updatePartitionedTopic}
+
+#### Java
+
+```java
+admin.persistentTopics().updatePartitionedTopic(persistentTopic, numPartitions);
+```
+
+### Delete
+
+#### pulsar-admin
+
+Partitioned topics can be deleted using the
+[`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, specifying the topic by name:
+
+```shell
+$ bin/pulsar-admin topics delete-partitioned-topic \
+  persistent://my-tenant/my-namespace/my-topic
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/persistent/:tenant/:namespace/:topic/partitions|operation/deletePartitionedTopic}
+
+#### Java
+
+```java
+admin.persistentTopics().delete(persistentTopic);
+```
+
+### List
+
+It provides a list of the persistent topics that exist under a given namespace.
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin topics list tenant/namespace
+persistent://tenant/namespace/topic1
+persistent://tenant/namespace/topic2
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace|operation/getPartitionedTopicList}
+
+#### Java
+
+```java
+admin.persistentTopics().getList(namespace);
+```
+
+### Stats
+
+It shows current statistics of a given partitioned topic. Here's an example payload:
+
+```json
+{
+  "msgRateIn": 4641.528542257553,
+  "msgThroughputIn": 44663039.74947473,
+  "msgRateOut": 0,
+  "msgThroughputOut": 0,
+  "averageMsgSize": 1232439.816728665,
+  "storageSize": 135532389160,
+  "publishers": [
+    {
+      "msgRateIn": 57.855383881403576,
+      "msgThroughputIn": 558994.7078932219,
+      "averageMsgSize": 613135,
+      "producerId": 0,
+      "producerName": null,
+      "address": null,
+      "connectedSince": null
+    }
+  ],
+  "subscriptions": {
+    "my-topic_subscription": {
+      "msgRateOut": 0,
+      "msgThroughputOut": 0,
+      "msgBacklog": 116632,
+      "type": null,
+      "msgRateExpired": 36.98245516804671,
+      "consumers": []
+    }
+  },
+  "replication": {}
+}
+```
+
+The following stats are available:
+
+|Stat|Description|
+|----|-----------|
+|msgRateIn|The sum of all local and replication publishers’ publish rates in messages per second|
+|msgThroughputIn|Same as msgRateIn but in bytes per second instead of messages per second|
+|msgRateOut|The sum of all local and replication consumers’ dispatch rates in messages per second|
+|msgThroughputOut|Same as msgRateOut but in bytes per second instead of messages per second|
+|averageMsgSize|Average message size, in bytes, from this publisher within the last interval|
+|storageSize|The sum of the ledgers’ storage size for this topic|
+|publishers|The list of all local publishers into the topic. There can be anywhere from zero to thousands.|
+|producerId|Internal identifier for this producer on this topic|
+|producerName|Internal identifier for this producer, generated by the client library|
+|address|IP address and source port for the connection of this producer|
+|connectedSince|Timestamp this producer was created or last reconnected|
+|subscriptions|The list of all local subscriptions to the topic|
+|my-subscription|The name of this subscription (client defined)|
+|msgBacklog|The count of messages in backlog for this subscription|
+|msgBacklogNoDelayed|The count of messages in backlog without delayed messages for this subscription|
+|type|This subscription type|
+|msgRateExpired|The rate at which messages were discarded instead of dispatched from this subscription due to TTL|
+|consumers|The list of connected consumers for this subscription|
+|consumerName|Internal identifier for this consumer, generated by the client library|
+|availablePermits|The number of messages this consumer has space for in the client library’s listen queue. A value of 0 means the client library’s queue is full and receive() isn’t being called. A nonzero value means this consumer is ready to be dispatched messages.|
+|replication|This section gives the stats for cross-colo replication of this topic|
+|replicationBacklog|The outbound replication backlog in messages|
+|connected|Whether the outbound replicator is connected|
+|replicationDelayInSeconds|How long the oldest message has been waiting to be sent through the connection, if connected is true|
+|inboundConnection|The IP and port of the broker in the remote cluster’s publisher connection to this broker|
+|inboundConnectedSince|The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.|
+
+#### pulsar-admin
+
+The stats for the partitioned topic and its connected producers and consumers can be fetched by using the
+[`partitioned-stats`](reference-pulsar-admin.md#partitioned-stats) command, specifying the topic by name:
+
+```shell
+$ pulsar-admin topics partitioned-stats \
+  persistent://test-tenant/namespace/topic \
+  --per-partition
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats}
+
+#### Java
+
+```java
+admin.topics().getPartitionedStats(persistentTopic, true /* per partition */, false /* is precise backlog */);
+```
+
+### Internal stats
+
+It shows detailed statistics of a topic.
+
+|Stat|Description|
+|----|-----------|
+|entriesAddedCounter|Messages published since this broker loaded this topic|
+|numberOfEntries|Total number of messages being tracked|
+|totalSize|Total storage size in bytes of all messages|
+|currentLedgerEntries|Count of messages written to the ledger currently open for writing|
+|currentLedgerSize|Size in bytes of messages written to ledger currently open for writing|
+|lastLedgerCreatedTimestamp|Time when last ledger was created|
+|lastLedgerCreationFailureTimestamp|Time when the last ledger creation failed|
+|waitingCursorsCount|How many cursors are caught up and waiting for a new message to be published|
+|pendingAddEntriesCount|How many messages have (asynchronous) write requests whose completion we are waiting on|
+|lastConfirmedEntry|The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger has been opened or is currently being opened but has no entries written yet.|
+|state|The state of the cursor ledger. Open means we have a cursor ledger for saving updates of the markDeletePosition.|
+|ledgers|The ordered list of all ledgers for this topic holding its messages|
+|cursors|The list of all cursors on this topic. There will be one for every subscription you saw in the topic stats.|
+|markDeletePosition|The ack position: the last message the subscriber acknowledged receiving|
+|readPosition|The latest position of subscriber for reading message|
+|waitingReadOp|This is true when the subscription has read the latest message published to the topic and is waiting on new messages to be published.|
+|pendingReadOps|The counter for how many outstanding read requests to the BookKeepers we have in progress|
+|messagesConsumedCounter|Number of messages this cursor has acked since this broker loaded this topic|
+|cursorLedger|The ledger being used to persistently store the current markDeletePosition|
+|cursorLedgerLastEntry|The last entryid used to persistently store the current markDeletePosition|
+|individuallyDeletedMessages|If Acks are being done out of order, shows the ranges of messages Acked between the markDeletePosition and the read-position|
+|lastLedgerSwitchTimestamp|The last time the cursor ledger was rolled over|
+
+
+```json
+{
+  "entriesAddedCounter": 20449518,
+  "numberOfEntries": 3233,
+  "totalSize": 331482,
+  "currentLedgerEntries": 3233,
+  "currentLedgerSize": 331482,
+  "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825",
+  "lastLedgerCreationFailureTimestamp": null,
+  "waitingCursorsCount": 1,
+  "pendingAddEntriesCount": 0,
+  "lastConfirmedEntry": "324711539:3232",
+  "state": "LedgerOpened",
+  "ledgers": [
+    {
+      "ledgerId": 324711539,
+      "entries": 0,
+      "size": 0
+    }
+  ],
+  "cursors": {
+    "my-subscription": {
+      "markDeletePosition": "324711539:3133",
+      "readPosition": "324711539:3233",
+      "waitingReadOp": true,
+      "pendingReadOps": 0,
+      "messagesConsumedCounter": 20449501,
+      "cursorLedger": 324702104,
+      "cursorLedgerLastEntry": 21,
+      "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]",
+      "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313",
+      "state": "Open"
+    }
+  }
+}
+```
+
+#### pulsar-admin
+
+The internal stats for the partitioned topic can be fetched by using the
+[`stats-internal`](reference-pulsar-admin.md#stats-internal) command, specifying the topic by name:
+
+```shell
+$ pulsar-admin topics stats-internal \
+  persistent://test-tenant/namespace/topic
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/internalStats|operation/getInternalStats}
+
+#### Java
+
+```java
+admin.persistentTopics().getInternalStats(persistentTopic);
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-permissions.md b/site2/website/versioned_docs/version-2.6.0/admin-api-permissions.md
new file mode 100644
index 0000000..a901bd1
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-permissions.md
@@ -0,0 +1,115 @@
+---
+id: version-2.6.0-admin-api-permissions
+title: Managing permissions
+sidebar_label: Permissions
+original_id: admin-api-permissions
+---
+
+Permissions in Pulsar are managed at the [namespace](reference-terminology.md#namespace) level
+(that is, within [tenants](reference-terminology.md#tenant) and [clusters](reference-terminology.md#cluster)).
+
+## Grant permissions
+
+You can grant permissions to specific roles for lists of operations such as `produce` and `consume`.
+
+### pulsar-admin
+
+Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag:
+
+```shell
+$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
+  --actions produce,consume \
+  --role admin10
+```
+
+Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`.
+
+e.g.
+```shell
+$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
+                        --actions produce,consume \
+                        --role 'my.role.*'
+```
+
+Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume.  
+
+```shell
+$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
+                        --actions produce,consume \
+                        --role '*.role.my'
+```
+
+Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume.
+
+**Note**: Wildcard matching works **only at the beginning or end of the role name**.
+
+e.g.
+```shell
+$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
+                        --actions produce,consume \
+                        --role 'my.*.role'
+```
+
+In this case, only the role `my.*.role` has permissions.  
+Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume.
+
+### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace}
+
+### Java
+
+```java
+admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions));
+```
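+
+`getAuthActions` above stands in for a helper that converts the action names into a `Set<AuthAction>`. A self-contained sketch (assuming an initialized `admin` client; `AuthAction` lives in `org.apache.pulsar.common.policies.data`):
+
+```java
+import java.util.EnumSet;
+import org.apache.pulsar.common.policies.data.AuthAction;
+
+// grant produce and consume on the namespace to the role "admin10"
+admin.namespaces().grantPermissionOnNamespace("test-tenant/ns1", "admin10",
+        EnumSet.of(AuthAction.produce, AuthAction.consume));
+```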
+
+## Get permissions
+
+You can see which permissions have been granted to which roles in a namespace.
+
+### pulsar-admin
+
+Use the [`permissions`](reference-pulsar-admin.md#permissions) subcommand and specify a namespace:
+
+```shell
+$ pulsar-admin namespaces permissions test-tenant/ns1
+{
+  "admin10": [
+    "produce",
+    "consume"
+  ]
+}   
+```
+
+### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions}
+
+### Java
+
+```java
+admin.namespaces().getPermissions(namespace);
+```
+
+## Revoke permissions
+
+You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace.
+
+### pulsar-admin
+
+Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag:
+
+```shell
+$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \
+  --role admin10
+```
+
+### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace}
+
+### Java
+
+```java
+admin.namespaces().revokePermissionsOnNamespace(namespace, role);
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-schemas.md b/site2/website/versioned_docs/version-2.6.0/admin-api-schemas.md
new file mode 100644
index 0000000..3b32b79
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-schemas.md
@@ -0,0 +1,7 @@
+---
+id: version-2.6.0-admin-api-schemas
+title: Managing Schemas
+sidebar_label: Schemas
+original_id: admin-api-schemas
+---
+
diff --git a/site2/website/versioned_docs/version-2.6.0/admin-api-tenants.md b/site2/website/versioned_docs/version-2.6.0/admin-api-tenants.md
new file mode 100644
index 0000000..fea5bec
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/admin-api-tenants.md
@@ -0,0 +1,86 @@
+---
+id: version-2.6.0-admin-api-tenants
+title: Managing Tenants
+sidebar_label: Tenants
+original_id: admin-api-tenants
+---
+
+Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants:
+
+* Admin roles
+* Allowed clusters
+
+## Tenant resources
+
+### List
+
+#### pulsar-admin
+
+You can list all of the tenants associated with an [instance](reference-terminology.md#instance) using the [`list`](reference-pulsar-admin.md#tenants-list) subcommand:
+
+```shell
+$ pulsar-admin tenants list
+```
+
+That will return a simple list, like this:
+
+```
+my-tenant-1
+my-tenant-2
+```
+
+### Create
+
+#### pulsar-admin
+
+You can create a new tenant using the [`create`](reference-pulsar-admin.md#tenants-create) subcommand:
+
+```shell
+$ pulsar-admin tenants create my-tenant
+```
+
+When creating a tenant, you can assign admin roles using the `-r`/`--admin-roles` flag. You can specify multiple roles as a comma-separated list. Here are some examples:
+
+```shell
+$ pulsar-admin tenants create my-tenant \
+  --admin-roles role1,role2,role3
+
+$ pulsar-admin tenants create my-tenant \
+  -r role1
+```
+
+### Get configuration
+
+#### pulsar-admin
+
+You can see a tenant's configuration as a JSON object using the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specifying the name of the tenant:
+
+```shell
+$ pulsar-admin tenants get my-tenant
+{
+  "adminRoles": [
+    "admin1",
+    "admin2"
+  ],
+  "allowedClusters": [
+    "cl1",
+    "cl2"
+  ]
+}
+```
+
+### Delete
+
+#### pulsar-admin
+
+You can delete a tenant using the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specifying the tenant name:
+
+```shell
+$ pulsar-admin tenants delete my-tenant
+```
+
+### Update
+
+#### pulsar-admin
+
+You can update a tenant's configuration using the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.
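+
+For example, the following sketch replaces the tenant's admin roles and allowed clusters; the flags mirror those of `create` (verify them against your `pulsar-admin` version):
+
+```shell
+$ pulsar-admin tenants update my-tenant \
+  --admin-roles role1,role2 \
+  --allowed-clusters us-west,us-east
+```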
diff --git a/site2/website/versioned_docs/version-2.6.0/administration-dashboard.md b/site2/website/versioned_docs/version-2.6.0/administration-dashboard.md
new file mode 100644
index 0000000..1db9d7b
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/administration-dashboard.md
@@ -0,0 +1,63 @@
+---
+id: version-2.6.0-administration-dashboard
+title: Pulsar dashboard
+sidebar_label: Dashboard
+original_id: administration-dashboard
+---
+
+> Note   
+> Pulsar dashboard is deprecated. If you want to manage and monitor the stats of your topics, use [Pulsar Manager](administration-pulsar-manager.md). 
+
+Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form.
+
+The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database.
+
+You can use the [Django](https://www.djangoproject.com) web app to render the collected data.
+
+## Install
+
+The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container.
+
+```shell
+$ SERVICE_URL=http://broker.example.com:8080/
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  apachepulsar/pulsar-dashboard:{{pulsar:version}}
+```
+
+You can find the {@inject: github:`Dockerfile`:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well:
+
+```shell
+$ docker build -t apachepulsar/pulsar-dashboard dashboard
+```
+
+If token authentication is enabled:
+> The provided token should have super-user access.
+```shell
+$ SERVICE_URL=http://broker.example.com:8080/
+$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  -e JWT_TOKEN=$JWT_TOKEN \
+  apachepulsar/pulsar-dashboard
+```
+ 
+You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from which it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://<broker-ip>:8080` by default, where `<broker-ip>` is the IP address or hostname of the machine running Pulsar standalone. The IP address or hostname should be accessible from the Docker instance running the dashboard.
+
+Once the Docker container is running, the web dashboard is accessible via `localhost` or whichever host Docker uses.
+
+> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container.
+
+If the Pulsar service runs in standalone mode on `localhost`, the `SERVICE_URL` has to
+be the IP of the machine.
+
+Similarly, since Pulsar standalone advertises itself as `localhost` by default, you need to
+explicitly set the advertised address to the host IP. For example:
+
+```shell
+$ bin/pulsar standalone --advertised-address 1.2.3.4
+```
+
+### Known issues
+
+Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.6.0/administration-geo.md b/site2/website/versioned_docs/version-2.6.0/administration-geo.md
new file mode 100644
index 0000000..e3080ea
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/administration-geo.md
@@ -0,0 +1,157 @@
+---
+id: version-2.6.0-administration-geo
+title: Pulsar geo-replication
+sidebar_label: Geo-replication
+original_id: administration-geo
+---
+
+*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+
+## How geo-replication works
+
+The diagram below illustrates the process of geo-replication across Pulsar clusters:
+
+![Replication Diagram](assets/geo-replication.png)
+
+In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
+
+Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
+
+## Geo-replication and Pulsar properties
+
+You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
+
+Although geo-replication must be enabled between two clusters, it is actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
+
+* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
+* Configure that namespace to replicate across two or more provisioned clusters
+
+Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
+
+## Local persistence and forwarding
+
+When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines the end-to-end delivery latency.
+
+Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
+
+Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created but can also be transferred between clusters after replicated subscriptions are enabled. Once replicated subscriptions are enabled, you can keep subscription state in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover, a consumer can restart consuming from the failure point in a different cluster.
+
+In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
+
+All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
+
+## Configure replication
+
+As stated in the [Geo-replication and Pulsar properties](#geo-replication-and-pulsar-properties) section, geo-replication in Pulsar is managed at the [tenant](reference-terminology.md#tenant) level.
+
+### Grant permissions to properties
+
+To replicate to a cluster, the tenant needs permission to use that cluster. You can grant this permission when you create the tenant or grant it later.
+
+Specify all the intended clusters when you create a tenant:
+
+```shell
+$ bin/pulsar-admin tenants create my-tenant \
+  --admin-roles my-admin-role \
+  --allowed-clusters us-west,us-east,us-cent
+```
+
+To update permissions of an existing tenant, use `update` instead of `create`.
+
+### Enable geo-replication namespaces
+
+You can create a namespace with the following command:
+
+```shell
+$ bin/pulsar-admin namespaces create my-tenant/my-namespace
+```
+
+Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand:
+
+```shell
+$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \
+  --clusters us-west,us-east,us-cent
+```
+
+You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes.
+
+### Use topics with geo-replication
+
+Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace are replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster.
+
+#### Selective replication
+
+By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list.
+
+The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object:
+
+```java
+List<String> restrictReplicationTo = Arrays.asList(
+        "us-west",
+        "us-east"
+);
+
+Producer producer = client.newProducer()
+        .topic("some-topic")
+        .create();
+
+producer.newMessage()
+        .value("my-payload".getBytes())
+        .setReplicationClusters(restrictReplicationTo)
+        .send();
+```
+
+#### Topic stats
+
+Topic-specific statistics for geo-replication topics are available via the [`pulsar-admin`](reference-pulsar-admin.md) tool and {@inject: rest:REST:/} API:
+
+```shell
+$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic
+```
+
+Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs.
+
+#### Delete a geo-replication topic
+
+Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection.
+
+In Pulsar, a topic is automatically deleted when the topic meets the following three conditions:
+
+- no producers or consumers are connected to it;
+- no subscriptions to it;
+- no more messages are kept for retention.
+
+For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe.
+
+You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker).
+
+To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic.
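+
+For example, you can drop a remaining local subscription in each replication cluster with the `unsubscribe` subcommand; a sketch, where `my-subscription` is a placeholder name:
+
+```shell
+$ bin/pulsar-admin topics unsubscribe \
+  --subscription my-subscription \
+  persistent://my-tenant/my-namespace/my-topic
+```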
+
+## Replicated subscriptions
+
+Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions.
+
+In case of failover, a consumer can restart consuming from the failure point in a different cluster. 
+
+### Enable replicated subscription
+
+Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer. 
+
+```java
+Consumer<String> consumer = client.newConsumer(Schema.STRING)
+            .topic("my-topic")
+            .subscriptionName("my-subscription")
+            .replicateSubscriptionState(true)
+            .subscribe();
+```
+
+### Advantages
+
+ * It is easy to implement the logic. 
+ * You can choose to enable or disable replicated subscription.
+ * When you enable it, the overhead is low, and it is easy to configure. 
+ * When you disable it, the overhead is zero.
+
+### Limitations
+
+When you enable replicated subscriptions, you create a consistent distributed snapshot to establish an association between message IDs from different clusters. The snapshots are taken periodically, with a default interval of `1 second`, which means that a consumer failing over to a different cluster can potentially receive up to 1 second of duplicates. You can also configure the snapshot frequency in the `broker.conf` file.
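+
+For example, assuming the relevant broker setting is `replicatedSubscriptionsSnapshotFrequencyMillis` (verify the exact name in your `broker.conf`), a sketch:
+
+```properties
+# How often to take snapshots for replicated subscriptions, in milliseconds
+replicatedSubscriptionsSnapshotFrequencyMillis=1000
+```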
diff --git a/site2/website/versioned_docs/version-2.6.0/administration-load-balance.md b/site2/website/versioned_docs/version-2.6.0/administration-load-balance.md
new file mode 100644
index 0000000..95d16b6
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/administration-load-balance.md
@@ -0,0 +1,182 @@
+---
+id: version-2.6.0-administration-load-balance
+title: Pulsar load balance
+sidebar_label: Load balance
+original_id: administration-load-balance
+---
+
+## Load balance across Pulsar brokers
+
+Pulsar is a horizontally scalable messaging system, so a core requirement is that the traffic
+in a logical cluster is spread across all the available Pulsar brokers as evenly as possible.
+
+You can use multiple settings and tools to control the traffic distribution, but understanding how traffic is managed in Pulsar requires a bit of context. In most cases, however, the core requirement mentioned above is met out of the box and you do not need to worry about it.
+
+## Pulsar load manager architecture
+
+The following part introduces the basic architecture of the Pulsar load manager.
+
+### Assign topics to brokers dynamically
+
+Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster.
+
+When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best-suited broker to acquire ownership of these topics according to the load conditions.
+
+In the case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic.
+
+The assignment is "dynamic" because it changes quickly. For example, if the broker owning a topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning a topic becomes overloaded. In this case, the topic is reassigned to a less-loaded broker.
+
+The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage.
+
+#### Assignment granularity
+
+The assignment of topics or partitions to brokers is not done at the topic or partition level, but at the bundle level (a higher level). The reason is to amortize the amount of information that you need to keep track of. Based on CPU, memory, traffic load, and other indicators, topics are assigned to a particular broker dynamically.
+
+Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively this subset is a sharding mechanism.
+
+The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level.
+
+For assignment, a namespace is sharded into a list of "bundles", with each bundle comprising
+a portion of the overall hash range of the namespace.
+
+Topics are assigned to a particular bundle by taking the hash of the topic name and checking
+which bundle the hash falls into.
+
+Each bundle is independent of the others and thus is independently assigned to different brokers.
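+
+The following is an illustrative sketch of this kind of hash-range assignment, not Pulsar's actual implementation; the hash function and bundle count are placeholders:
+
+```java
+public class BundleSketch {
+    // Map a topic name to one of `numBundles` equal slices of the 32-bit hash range.
+    static int bundleFor(String topicName, int numBundles) {
+        long hash = Integer.toUnsignedLong(topicName.hashCode()); // placeholder hash function
+        long sliceWidth = (1L << 32) / numBundles;                // width of each bundle's slice
+        return (int) (hash / sliceWidth);
+    }
+
+    public static void main(String[] args) {
+        // The same topic always maps to the same bundle
+        System.out.println(bundleFor("persistent://my-tenant/my-namespace/my-topic", 16));
+    }
+}
+```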
+
+### Create namespaces and bundles
+
+When you create a new namespace, the namespace is set to use the default number of bundles. You can set this in `conf/broker.conf`:
+
+```properties
+# When a namespace is created without specifying the number of bundle, this
+# value will be used as the default
+defaultNumberOfNamespaceBundles=4
+```
+
+You can either change the system default, or override it when you create a new namespace:
+
+```shell
+$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16
+```
+
+With this command, you create a namespace with 16 initial bundles. Therefore the topics for this namespace can immediately be spread across up to 16 brokers.
+
+In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution.
+
+On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers.
+
+### Unload topics and bundles
+
+You can "unload" a topic in Pulsar with admin operation. Unloading means to close the topics,
+release ownership and reassign the topics to a new broker, based on current load.
+
+When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.
+
+Unloading is the mechanism that the load-manager uses to perform the load shedding, but you can also trigger the unloading manually, for example to correct the assignments and redistribute traffic even before having any broker overloaded.
+
+Unloading a single topic has no effect on the bundle assignment; it just closes and reopens that topic:
+
+```shell
+pulsar-admin topics unload persistent://tenant/namespace/topic
+```
+
+To unload all topics for a namespace and trigger reassignments:
+
+```shell
+pulsar-admin namespaces unload tenant/namespace
+```
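+
+The same operations are available through the Java admin client; a minimal sketch, assuming an existing `PulsarAdmin` instance named `admin`:
+
+```java
+// Unload a single topic
+admin.topics().unload("persistent://tenant/namespace/topic");
+
+// Unload all topics in a namespace and trigger reassignment
+admin.namespaces().unload("tenant/namespace");
+```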
+
+### Split namespace bundles 
+
+Since the load for the topics in a bundle might change over time, or might be hard to predict upfront, brokers can split bundles into two. The new, smaller bundles can be reassigned to different brokers.
+
+The splitting happens based on some tunable thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default, the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution.
+
+```properties
+# enable/disable namespace bundle auto split
+loadBalancerAutoBundleSplitEnabled=true
+
+# enable/disable automatic unloading of split bundles
+loadBalancerAutoUnloadSplitBundlesEnabled=true
+
+# maximum topics in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxTopics=1000
+
+# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxSessions=1000
+
+# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxMsgRate=30000
+
+# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxBandwidthMbytes=100
+
+# maximum number of bundles in a namespace (for auto-split)
+loadBalancerNamespaceMaximumBundles=128
+```
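+
+You can also trigger a split manually; a sketch using the `split-bundle` subcommand, where the bundle range is a placeholder (list the actual ranges with `pulsar-admin namespaces bundles`):
+
+```shell
+$ pulsar-admin namespaces split-bundle my-tenant/my-namespace \
+  --bundle 0x00000000_0xffffffff
+```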
+
+### Shed load automatically
+
+Support for automatic load shedding is available in the Pulsar load manager. This means that whenever the system recognizes that a particular broker is overloaded, the system forces some traffic to be reassigned to less-loaded brokers.
+
+When a broker is identified as overloaded, the broker is forced to "unload" a subset of the bundles, the
+ones with the highest traffic, that account for the overload percentage.
+
+For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`.
+
+Given that the selection of bundles to offload is based on traffic (as a proxy measure for CPU, network,
+and memory usage), the broker unloads bundles accounting for at least 15% of its traffic.
+
+The automatic load shedding is enabled by default and you can disable the automatic load shedding with this setting:
+
+```properties
+# Enable/disable automatic bundle unloading for load-shedding
+loadBalancerSheddingEnabled=true
+```
+
+Additional settings that apply to shedding:
+
+```properties
+# Load shedding interval. Broker periodically checks whether some traffic should be offload from
+# some over-loaded broker to other under-loaded brokers
+loadBalancerSheddingIntervalMinutes=1
+
+# Prevent the same topics from being shed and moved to other brokers more than once within this timeframe
+loadBalancerSheddingGracePeriodMinutes=30
+```
+
+#### Broker overload thresholds
+
+Whether a broker is overloaded is determined by thresholds on CPU, network, and memory usage. Whenever any of those metrics reaches the threshold, the system triggers the shedding (if enabled).
+
+By default, overload threshold is set at 85%:
+
+```properties
+# Usage threshold to determine a broker as over-loaded
+loadBalancerBrokerOverloadedThresholdPercentage=85
+```
+
+Pulsar gathers the usage stats from the system metrics.
+
+In the case of network utilization, the network interface speed that Linux reports is sometimes
+incorrect and needs to be manually overridden. This is the case for AWS EC2 instances with 1Gbps
+NIC speed, for which the OS reports 10Gbps.
+
+Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker is already using all the bandwidth and the traffic is being slowed down.
+
+You can use the following setting to correct the max NIC speed:
+
+```properties
+# Override the auto-detection of the network interfaces max speed.
+# This option is useful in some environments (eg: EC2 VMs) where the max speed
+# reported by Linux is not reflecting the real bandwidth available to the broker.
+# Since the network usage is employed by the load manager to decide when a broker
+# is overloaded, it is important to make sure the info is correct or override it
+# with the right value here. The configured value can be a double (eg: 0.8) and that
+# can be used to trigger load-shedding even before hitting on NIC limits.
+loadBalancerOverrideBrokerNicSpeedGbps=
+```
+
+When the value is empty, Pulsar uses the value that the OS reports.
+
diff --git a/site2/website/versioned_docs/version-2.6.0/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.6.0/administration-pulsar-manager.md
new file mode 100644
index 0000000..e5b7421
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/administration-pulsar-manager.md
@@ -0,0 +1,136 @@
+---
+id: version-2.6.0-administration-pulsar-manager
+title: Pulsar Manager
+sidebar_label: Pulsar Manager
+original_id: administration-pulsar-manager
+---
+
+Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments.
+
+> Note   
+> If you monitor your current stats with [Pulsar dashboard](administration-dashboard.md), you can try to use Pulsar Manager instead. Pulsar dashboard is deprecated.
+
+## Install
+
+The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.
+
+
+```
+docker pull apachepulsar/pulsar-manager:v0.1.0
+docker run -it -p 9527:9527 -e REDIRECT_HOST=http://192.168.0.104 -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -v $PWD:/data apachepulsar/pulsar-manager:v0.1.0 /bin/sh
+```
+
+* REDIRECT_HOST: the IP address of the front-end server.
+
+* REDIRECT_PORT: the port of the front-end server.
+
+* DRIVER_CLASS_NAME: the driver class name of PostgreSQL.
+
+* URL: the JDBC URL of the PostgreSQL database, for example, `jdbc:postgresql://127.0.0.1:5432/pulsar_manager`.
+
+* USERNAME: the username of PostgreSQL.
+
+* PASSWORD: the password of PostgreSQL.
+
+* LOG_LEVEL: the log level.
+
+You can also find the Dockerfile in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from scratch:
+
+```
+git clone https://github.com/apache/pulsar-manager
+cd pulsar-manager
+./gradlew build -x test
+cd front-end
+npm install --save
+npm run build:prod
+cd ..
+docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager .
+```
+
+### Use custom databases
+
+If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL.   
+
+1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
+
+2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add PostgreSQL configuration.
+
+```
+spring.datasource.driver-class-name=org.postgresql.Driver
+spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
+spring.datasource.username=postgres
+spring.datasource.password=postgres
+```
+
+3. Compile to generate a new executable jar package.
+
+```
+./gradlew build -x test
+```
+
+### Enable JWT authentication
+
+If you want to turn on JWT authentication, configure the following parameters:
+
+* `backend.jwt.token`:  token for the superuser. You need to configure this parameter during cluster initialization.
+* `jwt.broker.token.mode`: the mode used to generate tokens, either SECRET or PRIVATE.
+* `jwt.broker.public.key`: configure this option if you are using the PRIVATE mode.
+* `jwt.broker.private.key`: configure this option if you are using the PRIVATE mode.
+* `jwt.broker.secret.key`: configure this option if you are using the SECRET mode.
+
+For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/).
+
+
+If you want to enable JWT authentication, use one of the following methods.
+
+
+* Method 1: use command-line tool
+
+```
+./build/distributions/pulsar-manager/bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 --insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key
+```
+
+* Method 2: configure the application.properties file
+
+```
+backend.jwt.token=token
+
+jwt.broker.token.mode=PRIVATE
+jwt.broker.public.key=file:///path/broker-public.key
+jwt.broker.private.key=file:///path/broker-private.key
+
+# or, for the SECRET mode:
+jwt.broker.token.mode=SECRET
+jwt.broker.secret.key=file:///path/broker-secret.key
+```
+
+* Method 3: use Docker and turn on token authentication.
+
+```
+export JWT_TOKEN="your-token"
+docker run -it -p 9527:9527 -e REDIRECT_HOST=http://192.168.55.182 -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.1.0 /bin/sh
+```
+
+* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key.
+
+```
+export JWT_TOKEN="your-token"
+export PRIVATE_KEY="file:///private-key-path"
+export PUBLIC_KEY="file:///public-key-path"
+docker run -it -p 9527:9527 -e REDIRECT_HOST=http://192.168.55.182 -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/private-key-path:/pulsar-manager/private-key-path -v $PWD/public-key-path:/pulsar-manager/public-key-path apachepulsar/pulsar-manager:v0.1.0 /bin/sh
+```
+
+* Method 5: use Docker and turn on **token authentication** and **token management** by secret key.
+
+```
+export JWT_TOKEN="your-token"
+export SECRET_KEY="file:///secret-key-path"
+docker run -it -p 9527:9527 -e REDIRECT_HOST=http://192.168.55.182 -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret-key-path:/pulsar-manager/secret-key-path apachepulsar/pulsar-manager:v0.1.0 /bin/sh
+```
+
+* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/8b1f26f7d7c725e6d056c41b98235fbc5deb9f49/src/README.md).
+* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/front-end/README.md).
+
+## Log in
+
+Visit http://localhost:9527 to log in.
diff --git a/site2/website/versioned_docs/version-2.6.0/administration-stats.md b/site2/website/versioned_docs/version-2.6.0/administration-stats.md
new file mode 100644
index 0000000..42a638c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/administration-stats.md
@@ -0,0 +1,64 @@
+---
+id: version-2.6.0-administration-stats
+title: Pulsar stats
+sidebar_label: Pulsar statistics
+original_id: administration-stats
+---
+
+## Partitioned topics
+
+|Stat|Description|
+|---|---|
+|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
+|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
+|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
+|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
+|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.|
+|storageSize| The sum of storage size of the ledgers for this topic.|
+|publishers| The list of all local publishers into the topic. Publishers can be anywhere from zero to thousands.|
+|producerId| Internal identifier for this producer on this topic.|
+|producerName|  Internal identifier for this producer, generated by the client library.|
+|address| IP address and source port for the connection of this producer.|
+|connectedSince| The timestamp when this producer was created or last reconnected.|
+|subscriptions| The list of all local subscriptions to the topic.|
+|my-subscription| The name of this subscription (client defined).|
+|msgBacklog| The count of messages in backlog for this subscription.|
+|type| The type of this subscription.|
+|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
+|consumers| The list of connected consumers for this subscription.|
+|consumerName| Internal identifier for this consumer, generated by the client library.|
+|availablePermits| The number of messages this consumer has space for in the listen queue of client library. A value of 0 means the queue of client library is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
+|replication| This section gives the stats for cross-colo replication of this topic.|
+|replicationBacklog| The outbound replication backlog in messages.|
+|connected| Whether the outbound replicator is connected.|
+|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
+|inboundConnection| The IP and port of the broker in the publisher connection of remote cluster to this broker. |
+|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|
+
+
+## Topics
+
+|Stat|Description|
+|---|---|
+|entriesAddedCounter| Messages published since this broker loaded this topic.|
+|numberOfEntries| Total number of messages being tracked.|
+|totalSize| Total storage size in bytes of all messages.|
+|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
+|currentLedgerSize| Size in bytes of messages written to ledger currently open for writing.|
+|lastLedgerCreatedTimestamp| Time when the last ledger was created.|
+|lastLedgerCreationFailureTimestamp| Time when the last ledger creation failed.|
+|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
+|pendingAddEntriesCount| How many messages have (asynchronous) write requests pending completion.|
+|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger is opened or is being currently opened but has no entries written yet.|
+|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
+|ledgers| The ordered list of all ledgers for this topic holding its messages.|
+|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
+|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
+|readPosition| The latest position of subscriber for reading message.|
+|waitingReadOp| This is true when the subscription reads the latest message that is published to the topic and waits on new messages to be published.|
+|pendingReadOps| The counter for how many outstanding read requests to the BookKeepers you have in progress.|
+|messagesConsumedCounter| Number of messages this cursor has acknowledged since this broker loaded this topic.|
+|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
+|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
+|individuallyDeletedMessages| If Acks are done out of order, shows the ranges of messages Acked between the markDeletePosition and the read-position.|
+|lastLedgerSwitchTimestamp| The last time the cursor ledger is rolled over.|
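+
+You can fetch these stats with the `pulsar-admin` tool; for example (for partitioned topics, the `partitioned-stats` subcommand applies):
+
+```shell
+$ pulsar-admin topics stats persistent://my-tenant/my-namespace/my-topic
+$ pulsar-admin topics stats-internal persistent://my-tenant/my-namespace/my-topic
+```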
diff --git a/site2/website/versioned_docs/version-2.6.0/administration-upgrade.md b/site2/website/versioned_docs/version-2.6.0/administration-upgrade.md
new file mode 100644
index 0000000..4ec739d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/administration-upgrade.md
@@ -0,0 +1,151 @@
+---
+id: version-2.6.0-administration-upgrade
+title: Upgrade Guide
+sidebar_label: Upgrade
+original_id: administration-upgrade
+---
+
+## Upgrade guidelines
+
+Apache Pulsar consists of multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful), and brokers and proxies (stateless).
+
+The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.
+
+- Backup all your configuration files before upgrading.
+- Read the guide entirely, make a plan, and then execute the plan. When you make the upgrade plan, you need to take your specific requirements and environment into consideration.
+- Pay attention to the upgrading order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients. 
+- If `autorecovery` is enabled, you need to disable `autorecovery` in the upgrade process, and re-enable it after completing the process.
+- Read the release notes carefully for each release. Release notes contain features and configuration changes that might impact your upgrade.
+- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, run them for a while to ensure that they work correctly.
+- Upgrade one data center to verify new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.
+
+> Note: Currently, Apache Pulsar releases are compatible between versions.
+
+## Upgrade sequence
+
+To upgrade an Apache Pulsar cluster, follow the upgrade sequence.
+
+1. Upgrade ZooKeeper (optional)  
+- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.  
+- Rolling upgrade: rollout the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
+2. Upgrade bookies  
+- Canary test: test an upgraded version in one or a small set of bookies.
+- Rolling upgrade:  
+    - a. Disable `autorecovery` with the following command.
+       ```shell
+       bin/bookkeeper shell autorecovery -disable
+       ```  
+    - b. Rollout the upgraded version to all bookies in the cluster after you determine that a version is safe after canary.  
+    - c. After you upgrade all bookies, re-enable `autorecovery` with the following command.
+       ```shell
+       bin/bookkeeper shell autorecovery -enable
+       ```
+3. Upgrade brokers
+- Canary test: test an upgraded version in one or a small set of brokers.
+- Rolling upgrade: rollout the upgraded version to all brokers in the cluster after you determine that a version is safe after canary.
+4. Upgrade proxies
+- Canary test: test an upgraded version in one or a small set of proxies.
+- Rolling upgrade: rollout the upgraded version to all proxies in the cluster after you determine that a version is safe after canary.
+
+## Upgrade ZooKeeper (optional)
+While you upgrade ZooKeeper servers, you can do canary test first, and then upgrade all ZooKeeper servers in the cluster.
+
+### Canary test
+
+You can test an upgraded version in one of ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.
+
+To upgrade ZooKeeper server to a new version, complete the following steps:
+
+1. Stop a ZooKeeper server.
+2. Upgrade the binary and configuration files.
+3. Start the ZooKeeper server with the new binary files.
+4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify that it works as expected (see the sketch after this list).
+5. Run the ZooKeeper server for a few days, observe and make sure the ZooKeeper cluster runs well.
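+
+A minimal verification sketch; the znode paths assume a default Pulsar metadata layout and may differ in your deployment:
+
+```shell
+$ bin/pulsar zookeeper-shell -server zk1.example.com:2181
+ls /admin/clusters
+get /admin/clusters/my-cluster
+```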
+
+#### Canary rollback
+
+If issues occur during canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart the ZooKeeper with the reverted binary.
+
+### Upgrade all ZooKeeper servers
+
+After canary test to upgrade one ZooKeeper in your cluster, you can upgrade all ZooKeeper servers in your cluster. 
+
+You can upgrade all ZooKeeper servers one by one by following steps in canary test.
+
+## Upgrade bookies
+
+While you upgrade bookies, you can do canary test first, and then upgrade all bookies in the cluster.
+For more details, you can read Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).
+
+### Canary test
+
+You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.
+
+To upgrade bookie to a new version, complete the following steps:
+
+1. Stop a bookie.
+2. Upgrade the binary and configuration files.
+3. Start the bookie in `ReadOnly` mode to verify if the bookie of this new version runs well for read workload.
+   ```shell
+   bin/pulsar bookie --readOnly
+   ```
+4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.
+   ```shell
+   bin/pulsar bookie
+   ```
+5. Observe and make sure the cluster serves both write and read traffic.
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace the problematic bookie node via autorecovery.
+
+### Upgrade all bookies
+
+After canary test to upgrade some bookies in your cluster, you can upgrade all bookies in your cluster. 
+
+Before upgrading, you have to decide between a downtime upgrade (taking the whole cluster down at once) and a rolling upgrade.
+
+In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.
+
+In both scenarios, the upgrade procedure is the same for each bookie.
+
+1. Stop the bookie. 
+2. Upgrade the software (either new binary or new configuration files).
+3. Start the bookie.
+
+> **Advanced operations**   
+> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.
+
+## Upgrade brokers and proxies
+
+The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.
+
+### Canary test
+
+You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.
+
+To upgrade to a new version, complete the following steps:
+
+1. Stop a broker (or proxy).
+2. Upgrade the binary and configuration file.
+3. Start a broker (or proxy).
+
+#### Canary rollback
+
+If issues occur during canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy).
+
+### Upgrade all brokers or proxies
+
+After canary test to upgrade some brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster. 
+
+Before upgrading, you have to decide between a downtime upgrade (taking the whole cluster down at once) and a rolling upgrade.
+
+In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during upgrade.
+
+In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.
+
+In both scenarios, the upgrade procedure is the same for each broker or proxy.
+
+1. Stop the broker or proxy. 
+2. Upgrade the software (either new binary or new configuration files).
+3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.6.0/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.6.0/client-libraries-cgo.md
new file mode 100644
index 0000000..aa088c5
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/client-libraries-cgo.md
@@ -0,0 +1,545 @@
+---
+id: version-2.6.0-client-libraries-cgo
+title: Pulsar CGo client
+sidebar_label: CGo(deprecated)
+original_id: client-libraries-cgo
+---
+
+You can use the Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
+
+Currently, the following Go clients are maintained in two repositories.
+
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library |
+| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+
+> #### API docs available as well
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
+
+## Installation
+
+### Requirements
+
+Pulsar Go client library is based on the C++ client library. Follow
+the instructions for [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos).
+
+### Install go package
+
+> #### Compatibility Warning
+> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
+
+You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag; it always pulls in the master version of the Go client, so you need a C++ client library that matches master.
+
+```bash
+$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
+```
+
+Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
+
+```bash
+$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v{{pulsar:version}}
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+import "github.com/apache/pulsar/pulsar-client-go/pulsar"
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+pulsar://localhost:6650
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
+
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Create a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+
+```go
+import (
+    "log"
+    "runtime"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+        OperationTimeoutSeconds: 5,
+        MessageListenerThreads: runtime.NumCPU(),
+    })
+
+    if err != nil {
+        log.Fatalf("Could not instantiate Pulsar client: %v", err)
+    }
+}
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`URL` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info |
+`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
+`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
+`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
+`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
+`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
+`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
+`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
+`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
+`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic: "my-topic",
+})
+
+if err != nil {
+    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
+}
+
+defer producer.Close()
+
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Hello, Pulsar"),
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Producer could not send message: %v", err)
+}
+```
+
+> #### Blocking operation
+> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
+
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
+`SendAndGetMsgID(context.Context, ProducerMessage)`| Publishes a message. This call blocks until the message is successfully acknowledged by the Pulsar broker. | (MessageID, error)
+`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
+`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the message ID and any error that occurred while publishing. |
+`LastSequenceID()` | Gets the last sequence ID published by this producer. This represents either the automatically assigned or custom sequence ID (set on the `ProducerMessage`) that was published and acknowledged by the broker. | int64
+`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
+`Schema()` | | Schema
+
+Here's a more involved example usage of a producer:
+
+```go
+import (
+    "context"
+    "fmt"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client to instantiate a producer
+    producer, err := client.CreateProducer(pulsar.ProducerOptions{
+        Topic: "my-topic",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    ctx := context.Background()
+
+    // Send 10 messages synchronously and 10 messages asynchronously
+    for i := 0; i < 10; i++ {
+        // Create a message
+        msg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("message-%d", i)),
+        }
+
+        // Attempt to send the message
+        if err := producer.Send(ctx, msg); err != nil {
+            log.Fatal(err)
+        }
+
+        // Create a different message to send asynchronously
+        asyncMsg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
+        }
+
+        // Attempt to send the message asynchronously and handle the response
+        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
+            if err != nil { log.Fatal(err) }
+
+            fmt.Printf("the %s successfully published", string(msg.Payload))
+        })
+    }
+}
+```
+
+### Producer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
+`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
+`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats. |
+`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
+`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
+`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
+`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
+`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
+`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `puls [...]
+`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
+`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as an argument and returns an integer (the index of the partition to publish to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
+`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
+`BatchingMaxPublishDelay` | Set the time period within which the messages sent will be batched (default: 10ms) if batch messages are enabled. If set to a non-zero value, messages will be queued until this time interval elapses or the batch is filled to `BatchingMaxMessages`, whichever happens first. | 10ms
+`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch (default: 1000). If set to a value greater than 1, messages will be queued until this threshold is reached or the batch interval has elapsed. | 1000
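+
+To tie these options together, the following is a sketch of a producer configured for batching. The field names come from the table above; the exact field types (for example, whether `BatchingMaxPublishDelay` is a `time.Duration`) follow the CGo client and may vary between releases:
+
+```go
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic:                   "my-topic",
+    Name:                    "my-producer", // must be globally unique if set explicitly
+    MaxPendingMessages:      2000,
+    BlockIfQueueFull:        true, // block Send/SendAsync instead of failing when the queue is full
+    Batching:                true,
+    BatchingMaxPublishDelay: 10 * time.Millisecond, // flush a batch after this delay...
+    BatchingMaxMessages:     1000,                  // ...or once it holds this many messages
+})
+if err != nil {
+    log.Fatal(err)
+}
+defer producer.Close()
+```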
+
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+msgChannel := make(chan pulsar.ConsumerMessage)
+
+consumerOpts := pulsar.ConsumerOptions{
+    Topic:            "my-topic",
+    SubscriptionName: "my-subscription-1",
+    Type:             pulsar.Exclusive,
+    MessageChannel:   msgChannel,
+}
+
+consumer, err := client.Subscribe(consumerOpts)
+
+if err != nil {
+    log.Fatalf("Could not establish subscription: %v", err)
+}
+
+defer consumer.Close()
+
+for cm := range msgChannel {
+    msg := cm.Message
+
+    fmt.Printf("Message ID: %s", msg.ID())
+    fmt.Printf("Message value: %s", string(msg.Payload()))
+
+    consumer.Ack(msg)
+}
+```
+
+> #### Blocking operation
+> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
+
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
+`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
+`AckCumulativeID(MessageID)` | Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledgment has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | `error`
+`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
+`NackID(MessageID)` | Acknowledge the failure to process a single message. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
+`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
+`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error`
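+
+For instance, here is a sketch of resetting the subscription with `Seek`, assuming (as the description above suggests) that a sentinel such as `pulsar.EarliestMessage` is accepted as a seek target:
+
+```go
+// Rewind the subscription to the first message in the topic.
+if err := consumer.Seek(pulsar.EarliestMessage); err != nil {
+    log.Fatal(err)
+}
+```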
+
+#### Receive example
+
+Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
+
+```go
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client object to instantiate a consumer
+    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+        Topic:            "my-golang-topic",
+        SubscriptionName: "sub-1",
+        Type: pulsar.Exclusive,
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    defer consumer.Close()
+
+    ctx := context.Background()
+
+    // Listen indefinitely on the topic
+    for {
+        msg, err := consumer.Receive(ctx)
+        if err != nil { log.Fatal(err) }
+
+        // Do something with the message
+        err = processMessage(msg)
+
+        if err == nil {
+            // Message processed successfully
+            consumer.Ack(msg)
+        } else {
+            // Failed to process messages
+            consumer.Nack(msg)
+        }
+    }
+}
+```
+
+### Consumer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
+`Topics` | Specify a list of topics this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
+`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
+`SubscriptionName` | The subscription name for this consumer |
+`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats|
+`Name` | The name of the consumer |
+`AckTimeout` | Set the timeout for unacked messages | 0
+`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
+`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
+`SubscriptionInitPos` | The initial position at which the cursor will be set when subscribing | Latest
+`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
+`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
+`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
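+
+As an illustration, here is a sketch of a consumer that combines several of these options. The field names come from the table above; the exact types (for example, whether `AckTimeout` and `NackRedeliveryDelay` are `time.Duration` values) follow the CGo client and may vary between releases:
+
+```go
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+    Topic:               "my-topic",
+    SubscriptionName:    "my-subscription-1",
+    Type:                pulsar.Shared,
+    AckTimeout:          60 * time.Second, // unacked messages become eligible for redelivery after this timeout
+    NackRedeliveryDelay: 10 * time.Second, // wait this long before redelivering nacked messages
+    ReceiverQueueSize:   500,
+})
+if err != nil {
+    log.Fatal(err)
+}
+defer consumer.Close()
+```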
+
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic: "my-golang-topic",
+    StartMessageId: pulsar.LatestMessage,
+})
+```
+
+> #### Blocking operation
+> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
+
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`HasNext()` | Check if there is any message available to read from the current position| (bool, error)
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+
+#### "Next" example
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatalf("Could not create client: %v", err) }
+
+    // Use the client to instantiate a reader
+    reader, err := client.CreateReader(pulsar.ReaderOptions{
+        Topic:          "my-golang-topic",
+        StartMessageID: pulsar.EarliestMessage,
+    })
+
+    if err != nil { log.Fatalf("Could not create reader: %v", err) }
+
+    defer reader.Close()
+
+    ctx := context.Background()
+
+    // Listen on the topic for incoming messages
+    for {
+        msg, err := reader.Next(ctx)
+        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
+
+        // Process the message
+    }
+}
+```
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+var lastSavedId []byte // Read the last saved message id from an external store
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:          "my-golang-topic",
+    StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
+})
+```
+
+### Reader configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
+`Name` | The name of the reader |
+`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
+`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
+`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
+`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
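+
+Putting a few of these options together, here is a sketch of a reader that starts from the earliest message and reads the compacted view of the topic (field names are taken from the table above):
+
+```go
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:             "my-golang-topic",
+    StartMessageID:    pulsar.EarliestMessage,
+    ReceiverQueueSize: 500,
+    ReadCompacted:     true, // only see the latest value per key where the topic has been compacted
+})
+if err != nil {
+    log.Fatal(err)
+}
+defer reader.Close()
+```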
+
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` type that you can use to construct messages to produce on Pulsar topics. Here's an example message:
+
+```go
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Here is some message data"),
+    Key: "message-key",
+    Properties: map[string]string{
+        "foo": "bar",
+    },
+    EventTime: time.Now(),
+    ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Could not publish message due to: %v", err)
+}
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Value` | `Value` and `Payload` are mutually exclusive; use `Value interface{}` for schema-based messages.
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+`SequenceID` | Set the sequence id to assign to the current message
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+opts := pulsar.ClientOptions{
+    URL: "pulsar+ssl://my-cluster.com:6651",
+    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
+    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+```
+
+## Schema
+
+This example shows how to create a producer and consumer with schema.
+
+```go
+// testJson mirrors the fields declared in exampleSchemaDef.
+type testJson struct {
+	ID   int
+	Name string
+}
+
+var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
+	"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
+jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
+// create producer
+producer, err := client.CreateProducerWithSchema(ProducerOptions{
+	Topic: "jsonTopic",
+}, jsonSchema)
+err = producer.Send(context.Background(), ProducerMessage{
+	Value: &testJson{
+		ID:   100,
+		Name: "pulsar",
+	},
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+//create consumer
+var s testJson
+consumerJS := NewJsonSchema(exampleSchemaDef, nil)
+consumer, err := client.SubscribeWithSchema(ConsumerOptions{
+	Topic:            "jsonTopic",
+	SubscriptionName: "sub-2",
+}, consumerJS)
+if err != nil {
+	log.Fatal(err)
+}
+msg, err := consumer.Receive(context.Background())
+if err != nil {
+	log.Fatal(err)
+}
+err = msg.GetValue(&s)
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(s.ID) // output: 100
+fmt.Println(s.Name) // output: pulsar
+defer consumer.Close()
+```
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.6.0/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.6.0/client-libraries-cpp.md
new file mode 100644
index 0000000..2682f84
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/client-libraries-cpp.md
@@ -0,0 +1,253 @@
+---
+id: version-2.6.0-client-libraries-cpp
+title: Pulsar C++ client
+sidebar_label: C++
+original_id: client-libraries-cpp
+---
+
+You can use Pulsar C++ client to create Pulsar producers and consumers in C++.
+
+All the methods in producer, consumer, and reader of a C++ client are thread-safe.
+
+## Supported platforms
+
+Pulsar C++ client is supported on **Linux** and **MacOS** platforms.
+
+[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp).
+
+## Linux
+
+> Since 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. You can download and install those packages directly.
+
+Four kinds of libraries, `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a`, are installed under `/usr/lib` after you download and install the RPM or DEB package.
+By default, they are built under the code path `${PULSAR_HOME}/pulsar-client-cpp`, using the command
+ `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`
+These libraries rely on some other libraries. For the detailed versions of the dependency libraries, refer to [these](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) [files](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile).
+
+1. `libpulsar.so` is the shared library. It contains statically linked `boost` and `openssl`, and dynamically links all other needed libraries.
+An example command for compiling against this library:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include
+```
+
+2. `libpulsarnossl.so` is a shared library similar to `libpulsar.so`, except that the `openssl` and `crypto` libraries are dynamically linked.
+An example command for compiling against this library:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib
+```
+
+3. `libpulsar.a` is the static library; you need to link its dependency libraries yourself when using it.
+An example command for compiling against this library:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread  -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz
+```
+
+4. `libpulsarwithdeps.a` is a static library based on `libpulsar.a` that also archives the dependency libraries `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz`.
+An example command for compiling against this library:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread  -I/usr/local/ssl/include -L/usr/local/ssl/lib
+```
+`libpulsarwithdeps.a` does not include the OpenSSL-related libraries `libssl` and `libcrypto`. Because these two libraries are security-related, it is more reasonable to use the versions provided by the local system, which makes it easier for users to handle security issues and library upgrades.
+
+### Install RPM
+
+1. Download an RPM package from the links in the table.
+
+| Link | Crypto files |
+|------|--------------|
+| [client]({{pulsar:dist_rpm:client}}) | [asc]({{pulsar:dist_rpm:client}}.asc), [sha512]({{pulsar:dist_rpm:client}}.sha512) |
+| [client-debuginfo]({{pulsar:dist_rpm:client-debuginfo}}) | [asc]({{pulsar:dist_rpm:client-debuginfo}}.asc),  [sha512]({{pulsar:dist_rpm:client-debuginfo}}.sha512) |
+| [client-devel]({{pulsar:dist_rpm:client-devel}}) | [asc]({{pulsar:dist_rpm:client-devel}}.asc),  [sha512]({{pulsar:dist_rpm:client-devel}}.sha512) |
+
+2. Install the package using the following command.
+
+```bash
+$ rpm -ivh apache-pulsar-client*.rpm
+```
+
+After installation, the Pulsar libraries are placed under `/usr/lib`.
+
+### Install Debian
+
+1. Download a Debian package from the links in the table. 
+
+| Link | Crypto files |
+|------|--------------|
+| [client]({{pulsar:deb:client}}) | [asc]({{pulsar:dist_deb:client}}.asc), [sha512]({{pulsar:dist_deb:client}}.sha512) |
+| [client-devel]({{pulsar:deb:client-devel}}) | [asc]({{pulsar:dist_deb:client-devel}}.asc),  [sha512]({{pulsar:dist_deb:client-devel}}.sha512) |
+
+2. Install the package using the following command:
+
+```bash
+$ apt install ./apache-pulsar-client*.deb
+```
+
+After installation, the Pulsar libraries are placed under `/usr/lib`.
+
+### Build
+
+> If you want to build RPM and Debian packages from the latest master, follow the instructions below. All the instructions are run at the root directory of your cloned Pulsar repository.
+
+There are recipes that build RPM and Debian packages containing a
+statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all the required
+dependencies.
+
+To build the C++ library packages, build the Java packages first.
+
+```shell
+mvn install -DskipTests
+```
+
+#### RPM
+
+```shell
+pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh
+```
+
+This builds the RPM inside a Docker container and it leaves the RPMs in `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/`.
+
+| Package name | Content |
+|-----|-----|
+| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
+| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a`and C++ and C headers |
+| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` |
+
+#### Debian
+
+To build Debian packages, enter the following command.
+
+```shell
+pulsar-client-cpp/pkg/deb/docker-build-deb.sh
+```
+
+Debian packages are created at `pulsar-client-cpp/pkg/deb/BUILD/DEB/`.
+
+| Package name | Content |
+|-----|-----|
+| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
+| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers |
+
+## MacOS
+
+Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers.
+
+```shell
+brew install libpulsar
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
+
+Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.
+
+```http
+pulsar://localhost:6650
+```
+
+In a Pulsar cluster in production, the URL looks as follows: 
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you use TLS authentication, you need to use the `pulsar+ssl` scheme, and the default port is `6651`. The following is an example.
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Create a consumer
+To connect to Pulsar as a consumer, you need to create a consumer on the C++ client. The following is an example. 
+
+```c++
+Client client("pulsar://localhost:6650");
+
+Consumer consumer;
+Result result = client.subscribe("my-topic", "my-subscription-name", consumer);
+if (result != ResultOk) {
+    LOG_ERROR("Failed to subscribe: " << result);
+    return -1;
+}
+
+Message msg;
+
+while (true) {
+    consumer.receive(msg);
+    LOG_INFO("Received: " << msg
+            << "  with payload '" << msg.getDataAsString() << "'");
+
+    consumer.acknowledge(msg);
+}
+
+client.close();
+```
+
+## Create a producer
+To connect to Pulsar as a producer, you need to create a producer on the C++ client. The following is an example. 
+
+```c++
+Client client("pulsar://localhost:6650");
+
+Producer producer;
+Result result = client.createProducer("my-topic", producer);
+if (result != ResultOk) {
+    LOG_ERROR("Error creating producer: " << result);
+    return -1;
+}
+
+// Publish 10 messages to the topic
+for (int i = 0; i < 10; i++){
+    Message msg = MessageBuilder().setContent("my-message").build();
+    Result res = producer.send(msg);
+    LOG_INFO("Message sent: " << res);
+}
+client.close();
+```
+
+## Enable authentication in connection URLs
+If you use TLS authentication when connecting to Pulsar, you need to use the `pulsar+ssl` scheme in the connection URLs, and the default port is `6651`. The following is an example.
+
+```cpp
+ClientConfiguration config = ClientConfiguration();
+config.setUseTls(true);
+config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
+config.setTlsAllowInsecureConnection(false);
+config.setAuth(pulsar::AuthTls::create(
+            "/path/to/client-cert.pem", "/path/to/client-key.pem"));
+
+Client client("pulsar+ssl://my-broker.com:6651", config);
+```
+
+For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples).
+
+## Schema
+
+This section describes some examples about schema. For more information about schema, see [Pulsar schema](schema-get-started.md).
+
+### Create producer with Avro schema
+
+The following example shows how to create a producer with an Avro schema.
+
+```cpp
+static const std::string exampleSchema =
+    "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
+    "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
+Producer producer;
+ProducerConfiguration producerConf;
+producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
+client.createProducer("topic-avro", producerConf, producer);
+```
+
+### Create consumer with Avro schema
+
+The following example shows how to create a consumer with an Avro schema.
+
+```cpp
+static const std::string exampleSchema =
+    "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
+    "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
+ConsumerConfiguration consumerConf;
+Consumer consumer;
+consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
+client.subscribe("topic-avro", "sub-2", consumerConf, consumer);
+```
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.6.0/client-libraries-go.md b/site2/website/versioned_docs/version-2.6.0/client-libraries-go.md
new file mode 100644
index 0000000..5694255
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/client-libraries-go.md
@@ -0,0 +1,661 @@
+---
+id: version-2.6.0-client-libraries-go
+title: Pulsar Go client
+sidebar_label: Go
+original_id: client-libraries-go
+---
+
+> Tip: The CGo client is being deprecated. To learn more about the CGo client, refer to the [CGo client docs](client-libraries-cgo.md).
+
+You can use Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+> #### API docs available as well
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar).
+
+
+## Installation
+
+### Install go package
+
+You can install the `pulsar` library locally using `go get`.  
+
+```bash
+$ go get -u "github.com/apache/pulsar-client-go/pulsar"
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+import "github.com/apache/pulsar-client-go/pulsar"
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+pulsar://localhost:6650
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
+
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Create a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+
+```go
+import (
+	"log"
+	"time"
+
+	"github.com/apache/pulsar-client-go/pulsar"
+)
+
+func main() {
+	client, err := pulsar.NewClient(pulsar.ClientOptions{
+		URL:               "pulsar://localhost:6650",
+		OperationTimeout:  30 * time.Second,
+		ConnectionTimeout: 30 * time.Second,
+	})
+	if err != nil {
+		log.Fatalf("Could not instantiate Pulsar client: %v", err)
+	}
+
+	defer client.Close()
+}
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+ Name | Description | Default
+| :-------- | :---------- |:---------- |
+| URL | Configure the service URL for the Pulsar service. This parameter is required | |
+| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
+| OperationTimeout| Set the operation timeout. Producer-create, subscribe, and unsubscribe operations are retried until this interval elapses, after which the operation is marked as failed| 30s|
+| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
+| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
+| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts untrusted TLS certificates from the broker | false |
+| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the hostname from the broker | false |
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+	Topic: "my-topic",
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+
+if _, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
+	Payload: []byte("hello"),
+}); err != nil {
+	fmt.Println("Failed to publish message", err)
+} else {
+	fmt.Println("Published message")
+}
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `(MessageID, error)`
+`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))`| Send a message in asynchronous mode. The callback reports back the message ID, the original message, and any error in publishing. | 
+`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the `ProducerMessage`) that was published and acknowledged by the broker. | `int64`
+`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | `error`
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | 
+
+### Producer Example
+
+#### How to use message router in producer
+
+```go
+client, err := NewClient(ClientOptions{
+	URL: serviceURL,
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer client.Close()
+
+// Only subscribe on the specific partition
+consumer, err := client.Subscribe(ConsumerOptions{
+	Topic:            "my-partitioned-topic-partition-2",
+	SubscriptionName: "my-sub",
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+
+producer, err := client.CreateProducer(ProducerOptions{
+	Topic: "my-partitioned-topic",
+	MessageRouter: func(msg *ProducerMessage, tm TopicMetadata) int {
+		fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions())
+		return 2
+	},
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+```
+
+#### How to use relative delayed delivery in producer
+
+```go
+client, err := NewClient(ClientOptions{
+	URL: "pulsar://localhost:6650",
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer client.Close()
+
+topicName := newTopicName() // newTopicName generates a unique topic name (defined in the multi-topics consumer example below)
+producer, err := client.CreateProducer(ProducerOptions{
+	Topic: topicName,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+
+consumer, err := client.Subscribe(ConsumerOptions{
+	Topic:            topicName,
+	SubscriptionName: "subName",
+	Type:             Shared,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+
+ID, err := producer.Send(context.Background(), &ProducerMessage{
+	Payload:      []byte("test"),
+	DeliverAfter: 3 * time.Second,
+})
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(ID)
+
+// The message is delivered only after 3 seconds, so the first attempt,
+// with a 1-second timeout, is expected to fail.
+ctx, canc := context.WithTimeout(context.Background(), 1*time.Second)
+msg, err := consumer.Receive(ctx)
+if err != nil {
+	fmt.Println("No message within 1 second, as expected:", err)
+} else {
+	fmt.Println(msg.Payload())
+}
+canc()
+
+// A 5-second timeout leaves enough time for the delayed message to arrive.
+ctx, canc = context.WithTimeout(context.Background(), 5*time.Second)
+msg, err = consumer.Receive(ctx)
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(msg.Payload())
+canc()
+```
+
+
+### Producer configuration
+
+ Name | Description | Default
+| :-------- | :---------- |:---------- |
+| Topic | Topic specifies the topic this producer will publish to. This argument is required when constructing the producer. | |
+| Name | Name specifies a name for the producer. If not assigned, the system will generate a globally unique name which can be accessed with `Producer.Name()`. | | 
+| Properties | Properties attaches a set of application-defined properties to the producer. These properties will be visible in the topic stats. | |
+| MaxPendingMessages| MaxPendingMessages sets the max size of the queue holding the messages pending to receive an acknowledgment from the broker. | |
+| HashingScheme | HashingScheme changes the `HashingScheme` used to choose the partition on which to publish a particular message. | JavaStringHash |
+| CompressionType | CompressionType sets the compression type for the producer. | not compressed | 
+| MessageRouter | MessageRouter sets a custom message routing policy by passing an implementation of MessageRouter. | |
+| DisableBatching | DisableBatching controls whether automatic batching of messages is enabled for the producer. | false |
+| BatchingMaxPublishDelay | BatchingMaxPublishDelay sets the time period within which the messages sent will be batched. | 10ms |
+| BatchingMaxMessages | BatchingMaxMessages sets the maximum number of messages permitted in a batch. | 1000 | 
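+
+As an illustration, here is a sketch of a producer that combines several of these options (field names are taken from the table above and from the `pulsar-client-go` `ProducerOptions` type):
+
+```go
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+	Topic:                   "my-topic",
+	Name:                    "my-producer", // must be unique if set explicitly
+	MaxPendingMessages:      2000,
+	DisableBatching:         false,                 // leave automatic batching enabled
+	BatchingMaxPublishDelay: 20 * time.Millisecond, // flush a batch after this delay...
+	BatchingMaxMessages:     500,                   // ...or once it holds this many messages
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+```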
+
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+	Topic:            "topic-1",
+	SubscriptionName: "my-sub",
+	Type:             pulsar.Shared,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+
+for i := 0; i < 10; i++ {
+	msg, err := consumer.Receive(context.Background())
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
+		msg.ID(), string(msg.Payload()))
+
+	consumer.Ack(msg)
+}
+
+if err := consumer.Unsubscribe(); err != nil {
+	log.Fatal(err)
+}
+```
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | 
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | 
+`Nack(Message)` | Acknowledge the failure to process a single message. | 
+`NackID(MessageID)` | Acknowledge the failure to process a single message. | 
+`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error`
+`SeekByTime(time time.Time)` | Reset the subscription associated with this consumer to a specific message publish time. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | 
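+
+For example, here is a sketch of resetting a subscription with `Seek` and `SeekByTime`; `savedMsgID` is a hypothetical `MessageID` that the application persisted earlier:
+
+```go
+// Rewind the subscription to the position it had one hour ago.
+if err := consumer.SeekByTime(time.Now().Add(-1 * time.Hour)); err != nil {
+	log.Fatal(err)
+}
+
+// Or jump back to a specific, previously saved message id.
+if err := consumer.Seek(savedMsgID); err != nil {
+	log.Fatal(err)
+}
+```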
+
+### Receive example
+
+#### How to use regex consumer
+
+```go
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+    URL: "pulsar://localhost:6650",
+})
+if err != nil {
+    log.Fatal(err)
+}
+defer client.Close()
+
+p, err := client.CreateProducer(ProducerOptions{
+	Topic:           topicInRegex,
+	DisableBatching: true,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer p.Close()
+
+topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace)
+opts := ConsumerOptions{
+	TopicsPattern:    topicsPattern,
+	SubscriptionName: "regex-sub",
+}
+consumer, err := client.Subscribe(opts)
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+```
+
+#### How to use multi topics Consumer
+
+```go
+func newTopicName() string {
+	return fmt.Sprintf("my-topic-%v", time.Now().Nanosecond())
+}
+
+
+topic1 := "topic-1"
+topic2 := "topic-2"
+
+client, err := NewClient(ClientOptions{
+	URL: "pulsar://localhost:6650",
+})
+if err != nil {
+	log.Fatal(err)
+}
+topics := []string{topic1, topic2}
+consumer, err := client.Subscribe(ConsumerOptions{
+	Topics:           topics,
+	SubscriptionName: "multi-topic-sub",
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+```
+
+#### How to use consumer listener
+
+```go
+import (
+	"fmt"
+	"log"
+
+	"github.com/apache/pulsar-client-go/pulsar"
+)
+
+func main() {
+	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	defer client.Close()
+
+	channel := make(chan pulsar.ConsumerMessage, 100)
+
+	options := pulsar.ConsumerOptions{
+		Topic:            "topic-1",
+		SubscriptionName: "my-subscription",
+		Type:             pulsar.Shared,
+	}
+
+	options.MessageChannel = channel
+
+	consumer, err := client.Subscribe(options)
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	defer consumer.Close()
+
+	// Receive messages from channel. The channel returns a struct which contains message and the consumer from where
+	// the message was received. It's not necessary here since we have 1 single consumer, but the channel could be
+	// shared across multiple consumers as well
+	for cm := range channel {
+		msg := cm.Message
+		fmt.Printf("Received message  msgId: %v -- content: '%s'\n",
+			msg.ID(), string(msg.Payload()))
+
+		consumer.Ack(msg)
+	}
+}
+```
+
+#### How to use consumer receive timeout
+
+```go
+client, err := NewClient(ClientOptions{
+	URL: "pulsar://localhost:6650",
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer client.Close()
+
+topic := "test-topic-with-no-messages"
+ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
+defer cancel()
+
+// create consumer
+consumer, err := client.Subscribe(ConsumerOptions{
+	Topic:            topic,
+	SubscriptionName: "my-sub1",
+	Type:             Shared,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+
+msg, err := consumer.Receive(ctx)
+if err != nil {
+	// The topic has no messages, so Receive fails once the 500ms context times out.
+	log.Fatal(err)
+}
+fmt.Println(msg.Payload())
+```
+
+
+### Consumer configuration
+
+ Name | Description | Default
+| :-------- | :---------- |:---------- |
+| Topic | Topic specifies the topic this consumer will subscribe to. This argument is required when subscribing. | |
+| Topics | Specify a list of topics this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
+| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
+| AutoDiscoveryPeriod | Specify the interval in which to poll for new partitions or new topics if using a TopicsPattern. | |
+| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing. | |
+| Name | Set the consumer name. | | 
+| Properties | Properties attaches a set of application-defined properties to the consumer. These properties will be visible in the topic stats. | |
+| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive |
+| SubscriptionInitialPosition | The initial position at which the cursor will be set when subscribing. | Latest |
+| DLQ | Configuration for Dead Letter Queue consumer policy. | no DLQ | 
+| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption | | 
+| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000| 
+| NackRedeliveryDelay | The delay after which to redeliver the messages that failed to be processed | 1min |
+| ReadCompacted | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic | false |
+| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters | false |
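+
+As an illustration, here is a sketch of a consumer that combines several of these options (field names are taken from the table above and from the `pulsar-client-go` `ConsumerOptions` type):
+
+```go
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+	Topic:               "topic-1",
+	SubscriptionName:    "my-sub",
+	Type:                pulsar.Shared,
+	ReceiverQueueSize:   500,
+	NackRedeliveryDelay: 10 * time.Second, // wait this long before redelivering nacked messages
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+```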
+
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+	Topic:          "topic-1",
+	StartMessageID: pulsar.EarliestMessageID(),
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer reader.Close()
+```
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`HasNext()` | Check if there is any message available to read from the current position| (bool, error)
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+
+### Reader example
+
+#### How to use reader to read 'next' message
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+import (
+	"context"
+	"fmt"
+	"log"
+
+	"github.com/apache/pulsar-client-go/pulsar"
+)
+
+func main() {
+	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	defer client.Close()
+
+	reader, err := client.CreateReader(pulsar.ReaderOptions{
+		Topic:          "topic-1",
+		StartMessageID: pulsar.EarliestMessageID(),
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer reader.Close()
+
+	for reader.HasNext() {
+		msg, err := reader.Next(context.Background())
+		if err != nil {
+			log.Fatal(err)
+		}
+
+		fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
+			msg.ID(), string(msg.Payload()))
+	}
+}
+```
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessageID()`). The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or some other message ID specified by bytes using the `pulsar.DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+var lastSavedId []byte // Read the last saved message id from an external store
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:          "my-golang-topic",
+    StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
+})
+```
+
+#### How to use reader to read specific message
+
+```go
+client, err := NewClient(ClientOptions{
+	URL: "pulsar://localhost:6650",
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer client.Close()
+
+topic := "topic-1"
+ctx := context.Background()
+
+// create producer
+producer, err := client.CreateProducer(ProducerOptions{
+	Topic:           topic,
+	DisableBatching: true,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+
+// send 10 messages
+msgIDs := [10]MessageID{}
+for i := 0; i < 10; i++ {
+	msgID, err := producer.Send(ctx, &ProducerMessage{
+		Payload: []byte(fmt.Sprintf("hello-%d", i)),
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+	msgIDs[i] = msgID
+}
+
+// create reader on 5th message (not included)
+reader, err := client.CreateReader(ReaderOptions{
+	Topic:          topic,
+	StartMessageID: msgIDs[4],
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer reader.Close()
+
+// receive the remaining 5 messages
+for i := 5; i < 10; i++ {
+	msg, err := reader.Next(context.Background())
+	if err != nil {
+		log.Fatal(err)
+	}
+	fmt.Printf("Read message: '%s'\n", string(msg.Payload()))
+}
+
+// create reader on 5th message (included)
+readerInclusive, err := client.CreateReader(ReaderOptions{
+	Topic:                   topic,
+	StartMessageID:          msgIDs[4],
+	StartMessageIDInclusive: true,
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer readerInclusive.Close()
+```
+
+### Reader configuration
+
+ Name | Description | Default
+| :-------- | :---------- |:---------- |
+| Topic | Topic specifies the topic this reader will read from. This argument is required when constructing the reader. | |
+| Name | Name sets the reader name. | | 
+| Properties | Attach a set of application-defined properties to the reader. These properties will be visible in the topic stats. | |
+| StartMessageID | StartMessageID sets the initial reader position by specifying a message id. | |
+| StartMessageIDInclusive | If true, the reader will start at the `StartMessageID`, included. Default is `false` and the reader will start from the "next" message. | false |
+| MessageChannel | MessageChannel sets a `MessageChannel` for the reader. When a message is received, it will be pushed to the channel for consumption. | |
+| ReceiverQueueSize | ReceiverQueueSize sets the size of the reader receive queue. | 1000 |
+| SubscriptionRolePrefix| SubscriptionRolePrefix sets the subscription role prefix. | `reader` | 
+| ReadCompacted | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false |
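+
+For example, here is a sketch of a reader that starts from the earliest message, also delivers the start message itself, and uses a smaller receive queue (field names are taken from the table above):
+
+```go
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+	Topic:                   "topic-1",
+	StartMessageID:          pulsar.EarliestMessageID(),
+	StartMessageIDInclusive: true, // also deliver the message at StartMessageID itself
+	ReceiverQueueSize:       500,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer reader.Close()
+```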
+
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` type that you can use to construct messages to produce on Pulsar topics. Here's an example message:
+
+```go
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Here is some message data"),
+    Key: "message-key",
+    Properties: map[string]string{
+        "foo": "bar",
+    },
+    EventTime: time.Now(),
+    ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if _, err := producer.Send(context.Background(), &msg); err != nil {
+    log.Fatalf("Could not publish message due to: %v", err)
+}
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+`SequenceID` | Set the sequence id to assign to the current message
+`DeliverAfter` | Request to deliver the message only after the specified relative delay
+`DeliverAt` | Deliver the message only at or after the specified absolute timestamp
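+
+Complementing the relative-delay example earlier, here is a sketch of the absolute variant using `DeliverAt`. Note that delayed delivery takes effect only on shared subscriptions:
+
+```go
+// Deliver the message no earlier than one hour from now.
+msgID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
+	Payload:   []byte("scheduled"),
+	DeliverAt: time.Now().Add(1 * time.Hour),
+})
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(msgID)
+```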
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+opts := pulsar.ClientOptions{
+    URL: "pulsar+ssl://my-cluster.com:6651",
+    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
+    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/client-libraries-node.md b/site2/website/versioned_docs/version-2.6.0/client-libraries-node.md
new file mode 100644
index 0000000..cd4ea9a
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/client-libraries-node.md
@@ -0,0 +1,404 @@
+---
+id: version-2.6.0-client-libraries-node
+title: The Pulsar Node.js client
+sidebar_label: Node.js
+original_id: client-libraries-node
+---
+
+The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js.
+
+All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe.
+
+## Installation
+
+You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/).
+
+### Requirements
+Pulsar Node.js client library is based on the C++ client library.
+Follow [these instructions](client-libraries-cpp.md#compilation) and install the Pulsar C++ client library.
+
+### Compatibility
+
+Compatibility between each version of the Node.js client and the C++ client is as follows:
+
+| Node.js client | C++ client     |
+| :------------- | :------------- |
+| 1.0.0          | 2.3.0 or later |
+
+If an incompatible version of the C++ client is installed, you may fail to build or run this library.
+
+### Installation using npm
+
+Install the `pulsar-client` library via [npm](https://www.npmjs.com/):
+
+```shell
+$ npm install pulsar-client
+```
+
+> #### Note
+> 
+> This library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library.
+
+## Connection URLs
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here is an example for `localhost`:
+
+```http
+pulsar://localhost:6650
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL will look something like this:
+
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Create a client
+
+In order to interact with Pulsar, you will first need a client object. You can create a client instance using a `new` operator and the `Client` method, passing in a client options object (more on configuration [below](#client-configuration)).
+
+Here is an example:
+
+```JavaScript
+const Pulsar = require('pulsar-client');
+
+(async () => {
+  const client = new Pulsar.Client({
+    serviceUrl: 'pulsar://localhost:6650',
+  });
+  
+  await client.close();
+})();
+```
+
+### Client configuration
+
+The following configurable parameters are available for Pulsar clients:
+
+| Parameter | Description | Default |
+| :-------- | :---------- | :------ |
+| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. |  |
+| `authentication` | Configure the authentication provider. (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | |
+| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30 |
+| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 |
+| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 |
+| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 |
+| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | |
+| `tlsValidateHostname` | The boolean value of setup whether to enable TLS hostname verification. | `false` |
+| `tlsAllowInsecureConnection` | The boolean value of setup whether the Pulsar client accepts untrusted TLS certificate from broker. | `false` |
+| `statsIntervalInSeconds` | The interval between stats updates. Stats are activated when this value is positive. The minimum value is 1 second. | 600 |
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object.
+
+Here is an example:
+
+```JavaScript
+const producer = await client.createProducer({
+  topic: 'my-topic',
+});
+
+await producer.send({
+  data: Buffer.from("Hello, Pulsar"),
+});
+
+await producer.close();
+```
+
+> #### Promise operation
+> When you create a new Pulsar producer, the operation returns a `Promise` object; the producer instance (or an error) is delivered through its executor function.  
+> This example uses the `await` operator instead of an executor function.
+
+### Producer operations
+
+Pulsar Node.js producers have the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `send(Object)` | Publishes a [message](#messages) to the producer's topic. The returned Promise resolves when the message is successfully acknowledged by the Pulsar broker, or rejects if an error occurs. | `Promise<null>` |
+| `flush()` | Sends the messages in the send queue to the Pulsar broker. The returned Promise resolves when all queued messages are successfully acknowledged by the Pulsar broker, or rejects if an error occurs. | `Promise<null>` |
+| `close()` | Closes the producer and releases all resources allocated to it. Once `close()` is called, no more messages are accepted from the publisher. The returned Promise resolves when all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes are retried. | `Promise<null>` |
+
+### Producer configuration
+
+| Parameter | Description | Default |
+| :-------- | :---------- | :------ |
+| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages. | |
+| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar will automatically generate a globally unique name.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. | |
+| `sendTimeoutMs` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `sendTimeoutMs` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 |
+| `initialSequenceId` | The initial sequence ID of the message. The producer tags each message with a sequence ID that is incremented on every send. | |
+| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `send` method will fail *unless* `blockIfQueueFull` is set to `true`. | 1000 |
+| `maxPendingMessagesAcrossPartitions` | The maximum total size of the pending-message queues across all partitions. | 50000 |
+| `blockIfQueueFull` | If set to `true`, the producer's `send` method will wait when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations will fail and throw an error when the queue is full. | `false` |
+| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` |
+| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` |
+| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4) and [`Zlib`](https://zlib.net/). | No compression |
+| `batchingEnabled` | If set to `true`, the producer sends messages in batches. | `true` |
+| `batchingMaxPublishDelayMs` | The maximum delay, in milliseconds, before a batch of messages is sent. | 10 |
+| `batchingMaxMessages` | The maximum number of messages in a batch. | 1000 |
+| `properties` | The metadata of the producer. | |
+
+### Producer example
+
+This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic:
+
+```JavaScript
+const Pulsar = require('pulsar-client');
+
+(async () => {
+  // Create a client
+  const client = new Pulsar.Client({
+    serviceUrl: 'pulsar://localhost:6650',
+  });
+
+  // Create a producer
+  const producer = await client.createProducer({
+    topic: 'my-topic',
+  });
+
+  // Send messages
+  for (let i = 0; i < 10; i += 1) {
+    const msg = `my-message-${i}`;
+    producer.send({
+      data: Buffer.from(msg),
+    });
+    console.log(`Sent message: ${msg}`);
+  }
+  await producer.flush();
+
+  await producer.close();
+  await client.close();
+})();
+```
+
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object.
+
+Here is an example:
+
+```JavaScript
+const consumer = await client.subscribe({
+  topic: 'my-topic',
+  subscription: 'my-subscription',
+});
+
+const msg = await consumer.receive();
+console.log(msg.getData().toString());
+consumer.acknowledge(msg);
+
+await consumer.close();
+```
+
+> #### Promise operation
+> When you create a new Pulsar consumer, the operation returns a `Promise` object; the consumer instance (or an error) is delivered through its executor function.  
+> This example uses the `await` operator instead of an executor function.
+
+### Consumer operations
+
+Pulsar Node.js consumers have the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `receive()` | Receives a single message from the topic. The returned Promise resolves with the message object once a message is available. | `Promise<Object>` |
+| `receive(Number)` | Receives a single message from the topic, with a timeout in milliseconds. | `Promise<Object>` |
+| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` |
+| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` |
+| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void and sends the ack to the broker asynchronously. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` |
+| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` |
+| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise<null>` |
+
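+For example, the following sketch receives a message with a 3-second timeout and then acknowledges cumulatively (it assumes a `consumer` created as shown above with an `Exclusive` or `Failover` subscription, since cumulative acking is not allowed on `Shared`):
+
+```JavaScript
+// Wait up to 3 seconds for the next message.
+const msg = await consumer.receive(3000);
+
+// Acknowledge this message and every earlier message in the stream.
+consumer.acknowledgeCumulative(msg);
+```
+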
+### Consumer configuration
+
+| Parameter | Description | Default |
+| :-------- | :---------- | :------ |
+| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages. | |
+| `subscription` | The subscription name for this consumer. | |
+| `subscriptionType` | Available options are `Exclusive`, `Shared`, and `Failover`. | `Exclusive` |
+| `ackTimeoutMs` | Acknowledge timeout in milliseconds. | 0 |
+| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 |
+| `receiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 |
+| `consumerName` | The name of the consumer. Currently (v2.4.1), [failover](concepts-messaging.md#failover) mode uses consumer names for ordering. | |
+| `properties` | The metadata of the consumer. | |
+
+### Consumer example
+
+This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives 10 messages, prints the content of each message as it arrives, and acknowledges each message to the Pulsar broker:
+
+```JavaScript
+const Pulsar = require('pulsar-client');
+
+(async () => {
+  // Create a client
+  const client = new Pulsar.Client({
+    serviceUrl: 'pulsar://localhost:6650',
+  });
+
+  // Create a consumer
+  const consumer = await client.subscribe({
+    topic: 'my-topic',
+    subscription: 'my-subscription',
+    subscriptionType: 'Exclusive',
+  });
+
+  // Receive messages
+  for (let i = 0; i < 10; i += 1) {
+    const msg = await consumer.receive();
+    console.log(msg.getData().toString());
+    consumer.acknowledge(msg);
+  }
+
+  await consumer.close();
+  await client.close();
+})();
+```
+
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object.
+
+Here is an example:
+
+```JavaScript
+const reader = await client.createReader({
+  topic: 'my-topic',
+  startMessageId: Pulsar.MessageId.earliest(),
+});
+
+const msg = await reader.readNext();
+console.log(msg.getData().toString());
+
+await reader.close();
+```
+
+### Reader operations
+
+Pulsar Node.js readers have the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). The returned Promise resolves with the message object once a message is available. | `Promise<Object>` |
+| `readNext(Number)` | Receives a single message from the topic, with a timeout in milliseconds. | `Promise<Object>` |
+| `hasNext()` | Returns whether the broker has a next message in the target topic. | `Boolean` |
+| `close()` | Closes the reader, disabling its ability to receive messages from the broker. | `Promise<null>` |
+
+### Reader configuration
+
+| Parameter | Description | Default |
+| :-------- | :---------- | :------ |
+| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages. | |
+| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest` (the earliest available message on the topic), `Pulsar.MessageId.latest` (the latest available message on the topic), or a message ID object for a position that is not earliest or latest. | |
+| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 |
+| `readerName` | The name of the reader. |  |
+| `subscriptionRolePrefix` | The subscription role prefix. | |
+
+### Reader example
+
+This example creates a Node.js reader on the `my-topic` topic, reads 10 messages, and prints the content of each:
+
+```JavaScript
+const Pulsar = require('pulsar-client');
+
+(async () => {
+  // Create a client
+  const client = new Pulsar.Client({
+    serviceUrl: 'pulsar://localhost:6650',
+    operationTimeoutSeconds: 30,
+  });
+
+  // Create a reader
+  const reader = await client.createReader({
+    topic: 'my-topic',
+    startMessageId: Pulsar.MessageId.earliest(),
+  });
+
+  // Read messages
+  for (let i = 0; i < 10; i += 1) {
+    const msg = await reader.readNext();
+    console.log(msg.getData().toString());
+  }
+
+  await reader.close();
+  await client.close();
+})();
+```
+
+## Messages
+
+In the Pulsar Node.js client, you construct a message as a plain object and pass it to the producer.
+
+Here is an example message:
+
+```JavaScript
+const msg = {
+  data: Buffer.from('Hello, Pulsar'),
+  partitionKey: 'key1',
+  properties: {
+    'foo': 'bar',
+  },
+  eventTimestamp: Date.now(),
+  replicationClusters: [
+    'cluster1',
+    'cluster2',
+  ],
+}
+
+await producer.send(msg);
+```
+
+The following keys are available for producer message objects:
+
+| Parameter | Description |
+| :-------- | :---------- |
+| `data` | The actual data payload of the message. |
+| `properties` | An object containing any application-specific metadata attached to the message. |
+| `eventTimestamp` | The timestamp associated with the message. |
+| `sequenceId` | The sequence ID of the message. |
+| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). |
+| `replicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. |
+
+### Message object operations
+
+In the Pulsar Node.js client, consumers and readers receive (or read) message objects.
+
+The message object has the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `getTopicName()` | Getter method of topic name. | `String` |
+| `getProperties()` | Getter method of properties. | `Array<Object>` |
+| `getData()` | Getter method of message data. | `Buffer` |
+| `getMessageId()` | Getter method of [message id object](#message-id-object-operations). | `Object` |
+| `getPublishTimestamp()` | Getter method of publish timestamp. | `Number` |
+| `getEventTimestamp()` | Getter method of event timestamp. | `Number` |
+| `getPartitionKey()` | Getter method of partition key. | `String` |
+
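+For example, a consumer might inspect a received message as follows (a minimal sketch; it assumes a `consumer` created as shown above):
+
+```JavaScript
+const msg = await consumer.receive();
+
+// Print selected metadata alongside the payload.
+console.log(msg.getTopicName());
+console.log(msg.getMessageId().toString());
+console.log(msg.getData().toString());
+
+consumer.acknowledge(msg);
+```
+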
+### Message ID object operations
+
+In the Pulsar Node.js client, you can get a message ID object from a message object.
+
+The message ID object has the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `serialize()` | Serialize the message id into a Buffer for storing. | `Buffer` |
+| `toString()` | Get message id as String. | `String` |
+
+The `MessageId` class also exposes static methods, which you can access as `Pulsar.MessageId.someStaticMethod`.
+
+The following static methods are available for the message id object:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` |
+| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` |
+| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` |
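+
+For example, you might persist a reader's position by serializing the last processed message ID and later resuming from it (a minimal sketch; it assumes a `client` and a received message `msg` from the examples above):
+
+```JavaScript
+// Serialize the ID of the last processed message for storage.
+const idBuffer = msg.getMessageId().serialize();
+
+// Later, resume from that position by deserializing the buffer.
+const reader = await client.createReader({
+  topic: 'my-topic',
+  startMessageId: Pulsar.MessageId.deserialize(idBuffer),
+});
+```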
+
diff --git a/site2/website/versioned_docs/version-2.6.0/client-libraries-python.md b/site2/website/versioned_docs/version-2.6.0/client-libraries-python.md
new file mode 100644
index 0000000..75f259c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/client-libraries-python.md
@@ -0,0 +1,291 @@
+---
+id: version-2.6.0-client-libraries-python
+title: Pulsar Python client
+sidebar_label: Python
+original_id: client-libraries-python
+---
+
+The Pulsar Python client library is a wrapper around the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [`python` subdirectory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code.
+
+All the methods in producer, consumer, and reader of a Python client are thread-safe.
+
+[pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client are available [here](/api/python).
+
+## Install
+
+You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPi](https://pypi.python.org/pypi), using [pip](#install-using-pip), or by building the library from source.
+
+### Install using pip
+
+To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager:
+
+```shell
+$ pip install pulsar-client=={{pulsar:version_number}}
+```
+
+Installation via PyPi is available for the following Python versions:
+
+Platform | Supported Python versions
+:--------|:-------------------------
+MacOS 10.13 (High Sierra), 10.14 (Mojave) | 2.7, 3.7
+Linux | 2.7, 3.4, 3.5, 3.6, 3.7
+
+### Install from source
+
+To install the `pulsar-client` library by building from source, follow [instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That builds the Python binding for the library.
+
+To install the built Python bindings:
+
+```shell
+$ git clone https://github.com/apache/pulsar
+$ cd pulsar/pulsar-client-cpp/python
+$ sudo python setup.py install
+```
+
+## API Reference
+
+The complete Python API reference is available at [api/python](/api/python).
+
+## Examples
+
+You can find a variety of Python code examples for the `pulsar-client` library.
+
+### Producer example
+
+The following example creates a Python producer for the `my-topic` topic and sends 10 messages on that topic:
+
+```python
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+
+producer = client.create_producer('my-topic')
+
+for i in range(10):
+    producer.send(('Hello-%d' % i).encode('utf-8'))
+
+client.close()
+```
+
+### Consumer example
+
+The following example creates a consumer with the `my-subscription` subscription name on the `my-topic` topic, receives incoming messages, prints the content and ID of messages that arrive, and acknowledges each message to the Pulsar broker.
+
+```python
+consumer = client.subscribe('my-topic', 'my-subscription')
+
+while True:
+    msg = consumer.receive()
+    try:
+        print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
+        # Acknowledge successful processing of the message
+        consumer.acknowledge(msg)
+    except:
+        # Message failed to be processed
+        consumer.negative_acknowledge(msg)
+
+client.close()
+```
+
+This example shows how to configure negative acknowledgement.
+
+```python
+from pulsar import Client, schema
+client = Client('pulsar://localhost:6650')
+consumer = client.subscribe('negative_acks', 'test', schema=schema.StringSchema())
+producer = client.create_producer('negative_acks', schema=schema.StringSchema())
+for i in range(10):
+    print('send msg "hello-%d"' % i)
+    producer.send_async('hello-%d' % i, callback=None)
+producer.flush()
+for i in range(10):
+    msg = consumer.receive()
+    consumer.negative_acknowledge(msg)
+    print('receive and nack msg "%s"' % msg.data())
+for i in range(10):
+    msg = consumer.receive()
+    consumer.acknowledge(msg)
+    print('receive and ack msg "%s"' % msg.data())
+try:
+    # No more messages expected
+    msg = consumer.receive(100)
+except:
+    print("no more msg")
+    pass
+```
+
+### Reader interface example
+
+You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example:
+
+```python
+# MessageId taken from a previously fetched message
+msg_id = msg.message_id()
+
+reader = client.create_reader('my-topic', msg_id)
+
+while True:
+    msg = reader.read_next()
+    print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
+    # No acknowledgment
+```
+### Multi-topic subscriptions
+
+In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously. To use multi-topic subscriptions, you can supply a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.
+
+The following is an example. 
+
+```python
+import re
+consumer = client.subscribe(re.compile('persistent://public/default/topic-*'), 'my-subscription')
+while True:
+    msg = consumer.receive()
+    try:
+        print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
+        # Acknowledge successful processing of the message
+        consumer.acknowledge(msg)
+    except:
+        # Message failed to be processed
+        consumer.negative_acknowledge(msg)
+client.close()
+```
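+
+To subscribe to an explicit list of topics instead, pass the list as the first argument (a minimal sketch; the topic names are placeholders):
+
+```python
+consumer = client.subscribe(
+    ['persistent://public/default/topic-1',
+     'persistent://public/default/topic-2'],
+    'my-subscription')
+```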
+
+## Schema
+
+### Declare and validate schema
+
+You can declare a schema by passing a class that inherits
+from `pulsar.schema.Record` and defines the fields as
+class variables. For example:
+
+```python
+from pulsar.schema import *
+
+class Example(Record):
+    a = String()
+    b = Integer()
+    c = Boolean()
+```
+
+With this simple schema definition, you can create producer, consumer, and reader instances that refer to it.
+
+```python
+producer = client.create_producer(
+                    topic='my-topic',
+                    schema=AvroSchema(Example) )
+
+producer.send(Example(a='Hello', b=1))
+```
+
+After creating the producer, the Pulsar broker validates that the existing topic schema is indeed of "Avro" type and that the format is compatible with the schema definition of the `Example` class.
+
+If there is a mismatch, an exception occurs in the producer creation.
+
+Once a producer is created with a certain schema definition,
+it will only accept objects that are instances of the declared
+schema class.
+
+Similarly, a consumer or reader returns an object (an instance
+of the schema record class) rather than the raw bytes:
+
+```python
+consumer = client.subscribe(
+                  topic='my-topic',
+                  subscription_name='my-subscription',
+                  schema=AvroSchema(Example) )
+
+while True:
+    msg = consumer.receive()
+    ex = msg.value()
+    try:
+        print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c))
+        # Acknowledge successful processing of the message
+        consumer.acknowledge(msg)
+    except:
+        # Message failed to be processed
+        consumer.negative_acknowledge(msg)
+```
+
+### Supported schema types
+
+You can use different builtin schema types in Pulsar. All the definitions are in the `pulsar.schema` package.
+
+| Schema | Notes |
+| ------ | ----- |
+| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization is performed. This is the default schema mode. |
+| `StringSchema` | Encode/decode the payload as a UTF-8 string. Uses `str` objects. |
+| `JsonSchema` | Requires a record definition. Serializes the record into a standard JSON payload. |
+| `AvroSchema` | Requires a record definition. Serializes in Avro format. |
+
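+For example, a producer and consumer exchanging plain strings could use `StringSchema` (a minimal sketch, assuming a broker running at `localhost`):
+
+```python
+import pulsar
+from pulsar.schema import StringSchema
+
+client = pulsar.Client('pulsar://localhost:6650')
+
+# The payload is encoded/decoded as a UTF-8 string.
+producer = client.create_producer('my-topic', schema=StringSchema())
+producer.send('Hello')
+
+client.close()
+```
+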
+### Schema definition reference
+
+The schema definition is done through a class that inherits from `pulsar.schema.Record`.
+
+This class has a number of fields, which can be of
+`pulsar.schema.Field` type or another nested `Record`. All the
+field types are specified in the `pulsar.schema` package and
+match the Avro field types.
+
+| Field Type | Python Type | Notes |
+| ---------- | ----------- | ----- |
+| `Boolean`  | `bool`      |       |
+| `Integer`  | `int`       |       |
+| `Long`     | `int`       |       |
+| `Float`    | `float`     |       |
+| `Double`   | `float`     |       |
+| `Bytes`    | `bytes`     |       |
+| `String`   | `str`       |       |
+| `Array`    | `list`      | Need to specify record type for items. |
+| `Map`      | `dict`      | Key is always `String`. Need to specify value type. |
+
+Additionally, any Python `Enum` type can be used as a valid field type.
+
+#### Fields parameters
+
+When adding a field, you can use these parameters in the constructor.
+
+| Argument   | Default | Notes |
+| ---------- | --------| ----- |
+| `default`  | `None`  | Set a default value for the field. Eg: `a = Integer(default=5)` |
+| `required` | `False` | Mark the field as "required". It is set in the schema accordingly. |
+
+#### Schema definition examples
+
+##### Simple definition
+
+```python
+class Example(Record):
+    a = String()
+    b = Integer()
+    c = Array(String())
+    i = Map(String())
+```
+
+##### Using enums
+
+```python
+from enum import Enum
+
+class Color(Enum):
+    red = 1
+    green = 2
+    blue = 3
+
+class Example(Record):
+    name = String()
+    color = Color
+```
+
+##### Complex types
+
+```python
+class MySubRecord(Record):
+    x = Integer()
+    y = Long()
+    z = String()
+
+class Example(Record):
+    a = String()
+    sub = MySubRecord()
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.6.0/client-libraries-websocket.md
new file mode 100644
index 0000000..7c39667
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/client-libraries-websocket.md
@@ -0,0 +1,444 @@
+---
+id: version-2.6.0-client-libraries-websocket
+title: Pulsar's WebSocket API
+sidebar_label: WebSocket
+original_id: client-libraries-websocket
+---
+
+Pulsar's [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API is meant to provide a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSockets you can publish and consume messages and use all the features available in the [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md) client libraries.
+
+
+> You can use Pulsar's WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples).
+
+## Running the WebSocket service
+
+The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled.
+
+In non-standalone mode, there are two ways to deploy the WebSocket service:
+
+* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker
+* as a [separate component](#as-a-separate-component)
+
+### Embedded with a Pulsar broker
+
+In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation.
+
+```properties
+webSocketServiceEnabled=true
+```
+
+### As a separate component
+
+In this mode, the WebSocket service runs as a separate service rather than within a Pulsar [broker](reference-terminology.md#broker). Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters:
+
+* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers)
+* [`webServicePort`](reference-configuration.md#websocket-webServicePort)
+* [`clusterName`](reference-configuration.md#websocket-clusterName)
+
+Here's an example:
+
+```properties
+configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
+webServicePort=8080
+clusterName=my-cluster
+```
+
+### Starting the WebSocket service
+
+When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool:
+
+```shell
+$ bin/pulsar-daemon start websocket
+```
+
+## API Reference
+
+Pulsar's WebSocket API offers three endpoints for [producing](#producer-endpoint) messages, [consuming](#consumer-endpoint) messages and [reading](#reader-endpoint) messages.
+
+All exchanges via the WebSocket API use JSON.
+
+### Producer endpoint
+
+The producer endpoint requires you to specify a tenant, namespace, and topic in the URL:
+
+```http
+ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic 
+```
+
+##### Query param
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs)
+`batchingEnabled` | boolean | no | Enable batching of messages (default: false)
+`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000)
+`maxPendingMessages` | int | no | Set the max size of the internal-queue holding the messages (default: 1000)
+`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms)
+`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition`
+`compressionType` | string | no | Compression [type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB`
+`producerName` | string | no | Specify the name for the producer. Pulsar enforces that only one producer with the same name can publish on a topic
+`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer.
+`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash`
+
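+For example, to create a batching producer with a 10-second send timeout on a hypothetical `my-topic` topic, append query parameters to the endpoint URL:
+
+```http
+ws://broker-service-url:8080/ws/v2/producer/persistent/public/default/my-topic?batchingEnabled=true&sendTimeoutMillis=10000
+```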
+
+#### Publishing a message
+
+```json
+{
+  "payload": "SGVsbG8gV29ybGQ=",
+  "properties": {"key1": "value1", "key2": "value2"},
+  "context": "1"
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`payload` | string | yes | Base-64 encoded payload
+`properties` | key-value pairs | no | Application-defined properties
+`context` | string | no | Application-defined request identifier
+`key` | string | no | For partitioned topics, decides which partition to use
+`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name
+
+
+##### Example success response
+
+```json
+{
+   "result": "ok",
+   "messageId": "CAAQAw==",
+   "context": "1"
+ }
+```
+##### Example failure response
+
+```json
+ {
+   "result": "send-error:3",
+   "errorMsg": "Failed to de-serialize from JSON",
+   "context": "1"
+ }
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`result` | string | yes | `ok` if successful or an error message if unsuccessful
+`messageId` | string | yes | Message ID assigned to the published message
+`context` | string | no | Application-defined request identifier
+
+
+### Consumer endpoint
+
+The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL:
+
+```http
+ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription
+```
+
+##### Query param
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0)
+`subscriptionType` | string | no | [Subscription type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`
+`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
+`consumerName` | string | no | Consumer name
+`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer
+`maxRedeliverCount` | int | no | Define a [maxRedeliverCount](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: 0). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
+`deadLetterTopic` | string | no | Define a [deadLetterTopic](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#deadLetterPolicy-org.apache.pulsar.client.api.DeadLetterPolicy-) for the consumer (default: {topic}-{subscription}-DLQ). Activates [Dead Letter Topic](https://github.com/apache/pulsar/wiki/PIP-22%3A-Pulsar-Dead-Letter-Topic) feature.
+`pullMode` | boolean | no | Enable pull mode (default: false). See "Flow Control" below.
+
+NB: these parameters (except `pullMode`) apply to the internal consumer of the WebSocket service.
+Messages are therefore subject to the redelivery settings as soon as they enter the receive queue,
+even if the client doesn't consume them on the WebSocket.
+
+##### Receiving messages
+
+The server pushes messages on the WebSocket session:
+
+```json
+{
+  "messageId": "CAAQAw==",
+  "payload": "SGVsbG8gV29ybGQ=",
+  "properties": {"key1": "value1", "key2": "value2"},
+  "publishTime": "2016-08-30 16:45:57.785"
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`messageId` | string | yes | Message ID
+`payload` | string | yes | Base-64 encoded payload
+`publishTime` | string | yes | Publish timestamp
+`properties` | key-value pairs | no | Application-defined properties
+`key` | string | no |  Original routing key set by producer
+
+#### Acknowledging the message
+
+The consumer needs to acknowledge successful processing of a message so that
+the Pulsar broker can delete it.
+
+```json
+{
+  "messageId": "CAAQAw=="
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`messageId`| string | yes | Message ID of the processed message
+
+#### Flow control
+
+##### Push Mode
+
+By default (`pullMode=false`), the consumer endpoint will use the `receiverQueueSize` parameter both to size its
+internal receive queue and to limit the number of unacknowledged messages that are passed to the WebSocket client.
+In this mode, if you don't send acknowledgements, the Pulsar WebSocket service will stop sending messages after reaching
+`receiverQueueSize` unacked messages sent to the WebSocket client.
+
+##### Pull Mode
+
+If you set `pullMode` to `true`, the WebSocket client will need to send `permit` commands to permit the
+Pulsar WebSocket service to send more messages.
+
+```json
+{
+  "type": "permit",
+  "permitMessages": 100
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`type`| string | yes | Type of command. Must be `permit`
+`permitMessages`| int | yes | Number of messages to permit
+
+NB: in this mode it's possible to acknowledge messages in a different connection.
+
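+A minimal sketch of pull mode using Python's [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package (the broker address and topic are placeholders):
+
+```python
+import json, websocket
+
+ws = websocket.create_connection(
+    'ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub?pullMode=true')
+
+# Permit the service to push up to 10 messages, then receive and ack each one.
+ws.send(json.dumps({'type': 'permit', 'permitMessages': 10}))
+for _ in range(10):
+    msg = json.loads(ws.recv())
+    ws.send(json.dumps({'messageId': msg['messageId']}))
+
+ws.close()
+```
+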
+### Reader endpoint
+
+The reader endpoint requires you to specify a tenant, namespace, and topic in the URL:
+
+```http
+ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic
+```
+
+##### Query param
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`readerName` | string | no | Reader name
+`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
+`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`)
+
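+For example, to read a hypothetical `my-topic` topic from the beginning:
+
+```http
+ws://broker-service-url:8080/ws/v2/reader/persistent/public/default/my-topic?messageId=earliest
+```
+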
+##### Receiving messages
+
+The server pushes messages on the WebSocket session:
+
+```json
+{
+  "messageId": "CAAQAw==",
+  "payload": "SGVsbG8gV29ybGQ=",
+  "properties": {"key1": "value1", "key2": "value2"},
+  "publishTime": "2016-08-30 16:45:57.785"
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`messageId` | string | yes | Message ID
+`payload` | string | yes | Base-64 encoded payload
+`publishTime` | string | yes | Publish timestamp
+`properties` | key-value pairs | no | Application-defined properties
+`key` | string | no |  Original routing key set by producer
+
+#### Acknowledging the message
+
+**In WebSocket**, the reader needs to acknowledge successful processing of a message so that
+the Pulsar WebSocket service can update the number of pending messages.
+If you don't send acknowledgements, the Pulsar WebSocket service stops sending messages after reaching the `pendingMessages` limit.
+
+```json
+{
+  "messageId": "CAAQAw=="
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`messageId`| string | yes | Message ID of the processed message
+
+
+### Error codes
+
+In case of an error, the server closes the WebSocket session using one of the
+following error codes:
+
+Error Code | Error Message
+:----------|:-------------
+1 | Failed to create producer
+2 | Failed to subscribe
+3 | Failed to deserialize from JSON
+4 | Failed to serialize to JSON
+5 | Failed to authenticate client
+6 | Client is not authorized
+7 | Invalid payload encoding
+8 | Unknown error
+
+> The application is responsible for re-establishing a new WebSocket session after a backoff period.
+
+## Client examples
+
+Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs).
+
+### Python
+
+This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip):
+
+```shell
+$ pip install websocket-client
+```
+
+You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client).
+
+#### Python producer
+
+Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic):
+
+```python
+import websocket, base64, json
+
+TOPIC = 'ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic'
+
+ws = websocket.create_connection(TOPIC)
+
+# Send one message as JSON
+ws.send(json.dumps({
+    'payload' : base64.b64encode(b'Hello World').decode('utf-8'),
+    'properties': {
+        'key1' : 'value1',
+        'key2' : 'value2'
+    },
+    'context' : 5
+}))
+
+response = json.loads(ws.recv())
+if response['result'] == 'ok':
+    print('Message published successfully')
+else:
+    print('Failed to publish message:', response)
+ws.close()
+```
+
+#### Python consumer
+
+Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives:
+
+```python
+import websocket, base64, json
+
+TOPIC = 'ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
+
+ws = websocket.create_connection(TOPIC)
+
+while True:
+    msg = json.loads(ws.recv())
+    if not msg: break
+
+    print "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))
+
+    # Acknowledge successful processing
+    ws.send(json.dumps({'messageId' : msg['messageId']}))
+
+ws.close()
+```
+
+#### Python reader
+
+Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives:
+
+```python
+import websocket, base64, json
+
+TOPIC = 'ws://localhost:8080/ws/v2/reader/persistent/public/default/my-topic'
+
+ws = websocket.create_connection(TOPIC)
+
+while True:
+    msg = json.loads(ws.recv())
+    if not msg: break
+
+    print "Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']))
+
+    # Acknowledge successful processing
+    ws.send(json.dumps({'messageId' : msg['messageId']}))
+
+ws.close()
+```
+
+### Node.js
+
+This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/):
+
+```shell
+$ npm install ws
+```
+
+#### Node.js producer
+
+Here's an example Node.js producer that sends a simple message to a Pulsar topic:
+
+```javascript
+var WebSocket = require('ws'),
+    topic = "ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic",
+    ws = new WebSocket(topic);
+
+var message = {
+  "payload" : new Buffer("Hello World").toString('base64'),
+  "properties": {
+    "key1" : "value1",
+    "key2" : "value2"
+  },
+  "context" : "1"
+};
+
+ws.on('open', function() {
+  // Send one message
+  ws.send(JSON.stringify(message));
+});
+
+ws.on('message', function(message) {
+  console.log('received ack: %s', message);
+});
+```
+
+#### Node.js consumer
+
+Here's an example Node.js consumer that listens on the same topic used by the producer above:
+
+```javascript
+var WebSocket = require('ws'),
+    topic = "ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub",
+    ws = new WebSocket(topic);
+
+ws.on('message', function(message) {
+    var receiveMsg = JSON.parse(message);
+    console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
+    var ackMsg = {"messageId" : receiveMsg.messageId};
+    ws.send(JSON.stringify(ackMsg));
+});
+```
+
+#### Node.js reader
+
+Here's an example Node.js reader that listens on the same topic used above:
+
+```javascript
+var WebSocket = require('ws'),
+    topic = "ws://localhost:8080/ws/v2/reader/persistent/public/default/my-topic",
+    ws = new WebSocket(topic);
+
+ws.on('message', function(message) {
+    var receiveMsg = JSON.parse(message);
+    console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
+    var ackMsg = {"messageId" : receiveMsg.messageId};
+    ws.send(JSON.stringify(ackMsg));
+});
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/concepts-authentication.md b/site2/website/versioned_docs/version-2.6.0/concepts-authentication.md
new file mode 100644
index 0000000..de896b8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/concepts-authentication.md
@@ -0,0 +1,9 @@
+---
+id: version-2.6.0-concepts-authentication
+title: Authentication and Authorization
+sidebar_label: Authentication and Authorization
+original_id: concepts-authentication
+---
+
+Pulsar supports a pluggable [authentication](security-overview.md) mechanism that can be configured at the broker. Pulsar also supports authorization, to identify clients and their access rights on topics and tenants.
+
diff --git a/site2/website/versioned_docs/version-2.6.0/concepts-multi-tenancy.md b/site2/website/versioned_docs/version-2.6.0/concepts-multi-tenancy.md
new file mode 100644
index 0000000..dafab62
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/concepts-multi-tenancy.md
@@ -0,0 +1,40 @@
+---
+id: version-2.6.0-concepts-multi-tenancy
+title: Multi Tenancy
+sidebar_label: Multi Tenancy
+original_id: concepts-multi-tenancy
+---
+
+Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview.md) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.
+
+The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:
+
+```http
+persistent://tenant/namespace/topic
+```
+
+As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).
+
+## Tenants
+
+To each tenant in a Pulsar instance you can assign:
+
+* An [authorization](security-authorization.md) scheme
+* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies
+
+## Namespaces
+
+Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.
+
+* Pulsar is provisioned for specified tenants with appropriate capacity allocated to the tenant.
+* A namespace is the administrative unit within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool, as shown below. For instance, a tenant with different applications can create a separate namespace for each application.
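+
+For example, a tenant administrator could create a namespace for an application with the `pulsar-admin` CLI (a minimal sketch; the tenant and namespace names are placeholders):
+
+```shell
+$ bin/pulsar-admin namespaces create tenant/app1
+```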
+
+Names for topics in the same namespace will look like this:
+
+```http
+persistent://tenant/app1/topic-1
+
+persistent://tenant/app1/topic-2
+
+persistent://tenant/app1/topic-3
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/concepts-overview.md b/site2/website/versioned_docs/version-2.6.0/concepts-overview.md
new file mode 100644
index 0000000..3e0d9cc
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/concepts-overview.md
@@ -0,0 +1,31 @@
+---
+id: version-2.6.0-concepts-overview
+title: Pulsar Overview
+sidebar_label: Overview
+original_id: concepts-overview
+---
+
+Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is now under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
+
+Key features of Pulsar are listed below:
+
+* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
+* Very low publish and end-to-end latency.
+* Seamless scalability to over a million topics.
+* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
+* Multiple [subscription modes](concepts-messaging.md#subscription-modes) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
+* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
+* A serverless, lightweight computing framework, [Pulsar Functions](functions-overview.md), offers the capability for stream-native data processing.
+* A serverless connector framework, [Pulsar IO](io-overview.md), which is built on Pulsar Functions, makes it easier to move data in and out of Apache Pulsar.
+* [Tiered Storage](concepts-tiered-storage.md) offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) as the data ages out.
+
+## Contents
+
+- [Messaging Concepts](concepts-messaging.md)
+- [Architecture Overview](concepts-architecture-overview.md)
+- [Pulsar Clients](concepts-clients.md)
+- [Geo Replication](concepts-replication.md)
+- [Multi Tenancy](concepts-multi-tenancy.md)
+- [Authentication and Authorization](concepts-authentication.md)
+- [Topic Compaction](concepts-topic-compaction.md)
+- [Tiered Storage](concepts-tiered-storage.md)
diff --git a/site2/website/versioned_docs/version-2.6.0/concepts-replication.md b/site2/website/versioned_docs/version-2.6.0/concepts-replication.md
new file mode 100644
index 0000000..5a7a4d8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/concepts-replication.md
@@ -0,0 +1,9 @@
+---
+id: version-2.6.0-concepts-replication
+title: Geo Replication
+sidebar_label: Geo Replication
+original_id: concepts-replication
+---
+
+Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.
+
diff --git a/site2/website/versioned_docs/version-2.6.0/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.6.0/concepts-tiered-storage.md
new file mode 100644
index 0000000..0e68087
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/concepts-tiered-storage.md
@@ -0,0 +1,18 @@
+---
+id: version-2.6.0-concepts-tiered-storage
+title: Tiered Storage
+sidebar_label: Tiered Storage
+original_id: concepts-tiered-storage
+---
+
+Pulsar's segment-oriented architecture allows for topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.
+
+One way to alleviate this cost is to use Tiered Storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.
+
+![Tiered Storage](assets/pulsar-tiered-storage.png)
+
+> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
+
+Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long-term storage](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long-term storage is triggered via a REST API or command-line interface, as shown below. The user passes in the amount of topic data they wish to retain in BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).
+
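+For example, offloading can be triggered with the `pulsar-admin` CLI (a minimal sketch; `--size-threshold` specifies how much topic data to keep in BookKeeper, and the topic name is a placeholder):
+
+```shell
+$ bin/pulsar-admin topics offload --size-threshold 10M \
+  persistent://my-tenant/my-namespace/my-topic
+```
+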
+> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.6.0/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.6.0/concepts-topic-compaction.md
new file mode 100644
index 0000000..4c34270
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/concepts-topic-compaction.md
@@ -0,0 +1,37 @@
+---
+id: version-2.6.0-concepts-topic-compaction
+title: Topic Compaction
+sidebar_label: Topic Compaction
+original_id: concepts-topic-compaction
+---
+
+Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases but it can also be very time int [...]
+
+> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).
+
+For some use cases consumers don't need a complete "image" of the topic log. They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message assoc [...]
+
+Pulsar's topic compaction feature:
+
+* Allows for faster "rewind" through topic logs
+* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
+* Triggered automatically when the backlog reaches a certain size or can be triggered manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction.md)
+* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.
+
+> #### Topic compaction example: the stock ticker
+> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be high [...]
+
+
+## How topic compaction works
+
+When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar will iterate over the entire topic from beginning to end. For each key that it encounters the compaction routine will keep a record of the latest occurrence of that key.
+
+After that, the broker will create a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and make a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata will be written to the newly created ledger. If the key doesn't match the latest then the message will be skipped and left alone. If any given message has an empty payload, it will be skipped and con [...]
+
+After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog. When such changes occur:
+
+* Clients (consumers and readers) that have `readCompacted` enabled will attempt to read messages from the topic and either:
+  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon) or
+  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)
+
+
diff --git a/site2/website/versioned_docs/version-2.6.0/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.6.0/cookbooks-bookkeepermetadata.md
new file mode 100644
index 0000000..be3a6a4
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/cookbooks-bookkeepermetadata.md
@@ -0,0 +1,21 @@
+---
+id: version-2.6.0-cookbooks-bookkeepermetadata
+title: BookKeeper Ledger Metadata
+original_id: cookbooks-bookkeepermetadata
+---
+
+Pulsar stores data in BookKeeper ledgers. You can understand the contents of a ledger by inspecting the metadata attached to it.
+Such metadata is stored in ZooKeeper and is readable using the BookKeeper APIs.
+
+Description of current metadata:
+
+| Scope  | Metadata name | Metadata value |
+| ------------- | ------------- | ------------- |
+| All ledgers  | application  | 'pulsar' |
+| All ledgers  | component  | 'managed-ledger', 'schema', 'compacted-topic' |
+| Managed ledgers | pulsar/managed-ledger | name of the ledger |
+| Cursor | pulsar/cursor | name of the cursor |
+| Compacted topic | pulsar/compactedTopic | name of the original topic |
+| Compacted topic | pulsar/compactedTo | id of the last compacted message |
+
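+For example, you can inspect a ledger's metadata, including these keys, with the BookKeeper shell (a minimal sketch; the ledger ID is a placeholder):
+
+```shell
+$ bin/bookkeeper shell ledgermetadata -ledgerid 123
+```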
+
diff --git a/site2/website/versioned_docs/version-2.6.0/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.6.0/cookbooks-compaction.md
new file mode 100644
index 0000000..b5f8b8d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/cookbooks-compaction.md
@@ -0,0 +1,127 @@
+---
+id: version-2.6.0-cookbooks-compaction
+title: Topic compaction
+sidebar_label: Topic compaction
+original_id: cookbooks-compaction
+---
+
+Pulsar's [topic compaction](concepts-topic-compaction.md#compaction) feature enables you to create **compacted** topics in which older, "obscured" entries are pruned from the topic, allowing for faster reads through the topic's history (which messages are deemed obscured/outdated/irrelevant will depend on your use case).
+
+To use compaction:
+
+* You need to give messages keys, as topic compaction in Pulsar takes place on a *per-key basis* (i.e. messages are compacted based on their key). For a stock ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the key (more on this [below](#when-should-i-use-compacted-topics)). Messages without keys will be left alone by the compaction process.
+* Compaction can be configured to run [automatically](#configuring-compaction-to-run-automatically), or you can manually [trigger](#trigger) compaction using the Pulsar administrative API.
+* Your consumers must be [configured](#consumer-configuration) to read from compacted topics ([Java consumers](#java), for example, have a `readCompacted` setting that must be set to `true`). If this configuration is not set, consumers will still be able to read from the non-compacted topic.
+
+
+> Compaction only works on messages that have keys (as in the stock ticker example the stock symbol serves as the key for each message). Keys can thus be thought of as the axis along which compaction is applied. Messages that don't have keys are simply ignored by compaction.
+
+## When should I use compacted topics?
+
+The classic example of a topic that could benefit from compaction would be a stock ticker topic through which consumers can access up-to-date values for specific stocks. Imagine a scenario in which messages carrying stock value data use the stock symbol as the key (`GOOG`, `AAPL`, `TWTR`, etc.). Compacting this topic would give consumers on the topic two options:
+
+* They can read from the "original," non-compacted topic in case they need access to "historical" values, i.e. the entirety of the topic's messages.
+* They can read from the compacted topic if they only want to see the most up-to-date messages.
+
+Thus, if you're using a Pulsar topic called `stock-values`, some consumers could have access to all messages in the topic (perhaps because they're performing some kind of number crunching of all values in the last hour) while the consumers used to power the real-time stock ticker only see the compacted topic (and thus aren't forced to process outdated messages). Which variant of the topic any given consumer pulls messages from is determined by the consumer's [configuration](#consumer-con [...]
+
+> One of the benefits of compaction in Pulsar is that you aren't forced to choose between compacted and non-compacted topics, as the compaction process leaves the original topic as-is and essentially adds an alternate topic. In other words, you can run compaction on a topic and consumers that need access to the non-compacted version of the topic will not be adversely affected.
+
+
+## Configuring compaction to run automatically
+
+Tenant administrators can configure a policy for compaction at the namespace level. The policy specifies how large the topic backlog can grow before compaction is triggered.
+
+For example, to trigger compaction when the backlog reaches 100MB:
+
+```bash
+$ bin/pulsar-admin namespaces set-compaction-threshold \
+  --threshold 100M my-tenant/my-namespace
+```
+
+Configuring the compaction threshold on a namespace will apply to all topics within that namespace.
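+
+If you manage namespaces programmatically, you can set the same policy with the Java admin API. The following is a minimal sketch; the admin service URL and namespace are assumptions:
+
+```java
+import org.apache.pulsar.client.admin.PulsarAdmin;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+
+// Trigger compaction once a topic's backlog reaches 100 MB
+admin.namespaces().setCompactionThreshold("my-tenant/my-namespace", 100 * 1024 * 1024);
+```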
+
+## Triggering compaction manually
+
+In order to run compaction on a topic, you need to use the [`topics compact`](reference-pulsar-admin.md#topics-compact) command for the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:
+
+```bash
+$ bin/pulsar-admin topics compact \
+  persistent://my-tenant/my-namespace/my-topic
+```
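+
+The same operation is available through the Java admin API. A minimal sketch, with the admin service URL and topic name as assumptions:
+
+```java
+import org.apache.pulsar.client.admin.PulsarAdmin;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().triggerCompaction(topic);
+
+// Optionally poll the status of the compaction run
+System.out.println(admin.topics().compactionStatus(topic).status);
+```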
+
+The `pulsar-admin` tool runs compaction via the Pulsar {@inject: rest:REST:/} API. To run compaction in its own dedicated process, i.e. *not* through the REST API, you can use the [`pulsar compact-topic`](reference-cli-tools.md#pulsar-compact-topic) command. Here's an example:
+
+```bash
+$ bin/pulsar compact-topic \
+  --topic persistent://my-tenant/my-namespace/my-topic
+```
+
+> Running compaction in its own process is recommended when you want to avoid interfering with the broker's performance. Broker performance should only be affected, however, when running compaction on topics with a large keyspace (i.e. when there are many keys on the topic). The first phase of the compaction process keeps a copy of each key in the topic, which can create memory pressure as the number of keys grows. Using the `pulsar-admin topics compact` command to run compaction through  [...]
+
+The `pulsar compact-topic` command communicates with [ZooKeeper](https://zookeeper.apache.org) directly. In order to establish communication with ZooKeeper, though, the `pulsar` CLI tool will need to have a valid [broker configuration](reference-configuration.md#broker). You can either supply a proper configuration in `conf/broker.conf` or specify a non-default location for the configuration:
+
+```bash
+$ bin/pulsar compact-topic \
+  --broker-conf /path/to/broker.conf \
+  --topic persistent://my-tenant/my-namespace/my-topic
+
+# If the configuration is in conf/broker.conf
+$ bin/pulsar compact-topic \
+  --topic persistent://my-tenant/my-namespace/my-topic
+```
+
+### When should I trigger compaction?
+
+How often you [trigger compaction](#triggering-compaction-manually) will vary widely based on the use case. If you want reads from a compacted topic to remain as fast as possible, run compaction fairly frequently.
+
+## Consumer configuration
+
+Pulsar consumers and readers need to be configured to read from compacted topics. The sections below show you how to enable compacted topic reads for Pulsar's language clients. If compacted reads are not enabled, clients simply read from the full, non-compacted topic.
+
+### Java
+
+In order to read from a compacted topic using a Java consumer, the `readCompacted` parameter must be set to `true`. Here's an example consumer for a compacted topic:
+
+```java
+Consumer<byte[]> compactedTopicConsumer = client.newConsumer()
+        .topic("some-compacted-topic")
+        .subscriptionName("compacted-topic-sub") // a subscription name is required
+        .readCompacted(true)
+        .subscribe();
+```
+
+As mentioned above, topic compaction in Pulsar works on a *per-key basis*. That means that messages that you produce on compacted topics need to have keys (the content of the key will depend on your use case). Messages that don't have keys are simply ignored by the compaction process. Here's how to set a key on a message:
+
+```java
+// Keys are set per message via the producer's TypedMessageBuilder
+producer.newMessage()
+        .key("some-key")
+        .value(someByteArray)
+        .send();
+```
+
+The example below shows a message with a key being produced on a compacted Pulsar topic:
+
+```java
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+Producer<byte[]> compactedTopicProducer = client.newProducer()
+        .topic("some-compacted-topic")
+        .create();
+
+compactedTopicProducer.newMessage()
+        .key("some-key")
+        .value(someByteArray)
+        .send();
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.6.0/cookbooks-deduplication.md
new file mode 100644
index 0000000..4d55569
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/cookbooks-deduplication.md
@@ -0,0 +1,121 @@
+---
+id: version-2.6.0-cookbooks-deduplication
+title: Message deduplication
+sidebar_label: Message deduplication
+original_id: cookbooks-deduplication
+---
+
+When **message deduplication** is enabled, each message produced on a Pulsar topic is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side.
+
+To use message deduplication in Pulsar, you have to [configure](#configure-message-deduplication) your Pulsar brokers and [clients](#pulsar-clients).
+
+> For more details on message deduplication, refer to [Concepts and Architecture](concepts-messaging.md#message-deduplication).
+
+## How it works
+
+You can enable or disable message deduplication on a per-namespace basis. By default, it is *disabled* on all namespaces. You can enable it in the following ways:
+
+* Enable it for all namespaces at the broker level
+* Enable it for specific namespaces with the `pulsar-admin namespaces` interface
+
+## Configure message deduplication
+
+You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available.
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar [broker](reference-terminology.md#broker). If it is set to `true`, message deduplication is enabled by default on all namespaces; if it is set to `false` (the default), you have to enable or disable deduplication on a per-namespace basis. | `false`
+`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
+`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
+`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)
+
+### Set default value at the broker-level
+
+By default, message deduplication is *disabled* on all Pulsar namespaces. To enable it by default on all namespaces, set the `brokerDeduplicationEnabled` parameter to `true` and restart the broker.
+
+Regardless of the value of `brokerDeduplicationEnabled`, enabling or disabling deduplication via the Pulsar admin CLI overrides the broker-level default for the namespaces concerned.
+
+### Enable message deduplication
+
+Though message deduplication is disabled by default at the broker level, you can enable it for specific namespaces using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) command. Use the `--enable`/`-e` flag and specify the namespace. The following is an example with `<tenant>/<namespace>`:
+
+```bash
+$ bin/pulsar-admin namespaces set-deduplication \
+  public/default \
+  --enable # or just -e
+```
+
+### Disable message deduplication
+
+Even if you enable message deduplication at the broker level, you can disable it for a specific namespace using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace. The following is an example with `<tenant>/<namespace>`:
+
+```bash
+$ bin/pulsar-admin namespaces set-deduplication \
+  public/default \
+  --disable # or just -d
+```
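+
+Both operations are also available through the Java admin API. A minimal sketch, assuming a local admin service URL:
+
+```java
+import org.apache.pulsar.client.admin.PulsarAdmin;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+
+// Pass true to enable, or false to disable, deduplication for the namespace
+admin.namespaces().setDeduplicationStatus("public/default", true);
+```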
+
+## Pulsar clients
+
+If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:
+
+1. Specify a name for the producer.
+1. Set the message timeout to `0` (that is, no timeout).
+
+The instructions for Java, Python, and C++ clients are different.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java clients-->
+
+To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter. 
+
+```java
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+import java.util.concurrent.TimeUnit;
+
+PulsarClient pulsarClient = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+Producer<byte[]> producer = pulsarClient.newProducer()
+        .producerName("producer-1")
+        .topic("persistent://public/default/topic-1")
+        .sendTimeout(0, TimeUnit.SECONDS)
+        .create();
+```
+
+<!--Python clients-->
+
+To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`. 
+
+```python
+import pulsar
+
+client = pulsar.Client("pulsar://localhost:6650")
+producer = client.create_producer(
+    "persistent://public/default/topic-1",
+    producer_name="producer-1",
+    send_timeout_millis=0)
+```
+<!--C++ clients-->
+
+To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName`, and set the timeout to `0` using `setSendTimeout`.
+
+```cpp
+#include <pulsar/Client.h>
+
+using namespace pulsar;
+
+std::string serviceUrl = "pulsar://localhost:6650";
+std::string topic = "persistent://some-tenant/ns1/topic-1";
+std::string producerName = "producer-1";
+
+Client client(serviceUrl);
+
+ProducerConfiguration producerConfig;
+producerConfig.setSendTimeout(0);
+producerConfig.setProducerName(producerName);
+
+Producer producer;
+
+Result result = client.createProducer(topic, producerConfig, producer);
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.6.0/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.6.0/cookbooks-encryption.md
new file mode 100644
index 0000000..d840537
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/cookbooks-encryption.md
@@ -0,0 +1,170 @@
+---
+id: version-2.6.0-cookbooks-encryption
+title: Pulsar Encryption
+sidebar_label: Encryption
+original_id: cookbooks-encryption
+---
+
+Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using the public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.
+
+## Asymmetric and symmetric encryption
+
+Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). That AES key (the data key) is encrypted using an application-provided ECDSA/RSA key pair, so there is no need to share the secret with everyone.
+
+The key is a public/private key pair used for encryption/decryption. The producer key is the public key, and the consumer key is the private key of the pair.
+
+The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header. Only entities with the private key (in this case the consumer) are able to decrypt the data key, which is used to decrypt the message.
+
+A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt it.
+
+Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your messages are irretrievably lost.
+
+## Producer
+![alt text](assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")
+
+## Consumer
+![alt text](assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")
+
+## Getting started
+
+1. Create your ECDSA or RSA public/private key pair.
+
+```shell
+openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
+openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem
+```
+2. Add the public and private keys to your key management system, and configure your producer clients to retrieve public keys and consumer clients to retrieve private keys.
+3. Implement the `CryptoKeyReader::getPublicKey()` interface on the producer side and `CryptoKeyReader::getPrivateKey()` on the consumer side; the Pulsar client invokes these to load the keys.
+4. Add the encryption key to the producer builder: `.addEncryptionKey("myapp.key")`
+5. Add the `CryptoKeyReader` implementation to the producer/consumer builder: `.cryptoKeyReader(keyReader)`
+6. Sample producer application:
+```java
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.util.Map;
+
+import org.apache.pulsar.client.api.CryptoKeyReader;
+import org.apache.pulsar.client.api.EncryptionKeyInfo;
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+class RawFileKeyReader implements CryptoKeyReader {
+
+    String publicKeyFile = "";
+    String privateKeyFile = "";
+
+    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
+        publicKeyFile = pubKeyFile;
+        privateKeyFile = privKeyFile;
+    }
+
+    @Override
+    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
+        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
+        try {
+            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
+        } catch (IOException e) {
+            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
+            e.printStackTrace();
+        }
+        return keyInfo;
+    }
+
+    @Override
+    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
+        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
+        try {
+            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
+        } catch (IOException e) {
+            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
+            e.printStackTrace();
+        }
+        return keyInfo;
+    }
+}
+
+PulsarClient pulsarClient = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+Producer<byte[]> producer = pulsarClient.newProducer()
+        .topic("persistent://my-tenant/my-ns/my-topic")
+        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
+        .addEncryptionKey("myappkey")
+        .create();
+
+for (int i = 0; i < 10; i++) {
+    producer.send("my-message".getBytes());
+}
+
+pulsarClient.close();
+```
+7. Sample consumer application:
+```java
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClient;
+
+// Reuse the RawFileKeyReader implementation from the producer sample above
+
+PulsarClient pulsarClient = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+Consumer<byte[]> consumer = pulsarClient.newConsumer()
+        .topic("persistent://my-tenant/my-ns/my-topic")
+        .subscriptionName("my-subscriber-name")
+        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
+        .subscribe();
+
+Message<byte[]> msg = null;
+
+for (int i = 0; i < 10; i++) {
+    msg = consumer.receive();
+    // do something
+    System.out.println("Received: " + new String(msg.getData()));
+}
+
+// Acknowledge the consumption of all messages at once
+consumer.acknowledgeCumulative(msg);
+pulsarClient.close();
+```
+
+## Key rotation
+Pulsar generates a new AES data key every 4 hours or after a certain number of messages is published. The producer fetches the asymmetric public key every 4 hours by calling `CryptoKeyReader::getPublicKey()` to retrieve the latest version.
+
+## Enabling encryption at the producer application
+If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
+1. The consumer application provides you access to its public key, which you add to your producer keys.
+1. You grant access to one of the private keys from the pairs used by the producer.
+
+In some cases, the producer may want to encrypt the messages with multiple keys. For this, add all such keys to the producer configuration. The consumer is able to decrypt the message as long as it has access to at least one of the keys.
+
+For example, if messages need to be encrypted using two keys, `myapp.messagekey1` and `myapp.messagekey2`, add both to the producer builder:
+```java
+producerBuilder.addEncryptionKey("myapp.messagekey1");
+producerBuilder.addEncryptionKey("myapp.messagekey2");
+```
+## Decrypting encrypted messages at the consumer application
+Consumers require access to one of the private keys to decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give your public key to the producer application, which will use it to encrypt the messages.
+
+## Handling failures
+* Producer/consumer loses access to the key
+  * The producer action fails, indicating the cause of the failure. The application has the option to proceed with sending an unencrypted message in such cases. Use the builder's `.cryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior. The default behavior is to fail the request.
+  * If consumption fails due to a decryption failure or missing keys at the consumer, the application has the option to consume the encrypted message or discard it. Use the builder's `.cryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior (see the sketch below). The default behavior is to fail the request. The application will never be able to decrypt the messages if the private key is permanently lost.
+* Batch messaging
+  * If decryption fails and the message contains batched messages, the client is not able to retrieve the individual messages in the batch, so message consumption fails even if `cryptoFailureAction` is set to `ConsumerCryptoFailureAction.CONSUME`.
+* If decryption fails, message consumption stops, and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key, the only option is to skip or discard the backlogged messages.
+
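+The following is a minimal sketch of a consumer that prefers to receive the still-encrypted payload instead of failing, reusing the `RawFileKeyReader` from the samples above:
+
+```java
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.ConsumerCryptoFailureAction;
+import org.apache.pulsar.client.api.PulsarClient;
+
+PulsarClient pulsarClient = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+// CONSUME delivers the encrypted payload to the application when decryption fails;
+// DISCARD acknowledges and skips the message; FAIL (the default) fails the receive
+Consumer<byte[]> consumer = pulsarClient.newConsumer()
+        .topic("persistent://my-tenant/my-ns/my-topic")
+        .subscriptionName("my-subscriber-name")
+        .cryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"))
+        .cryptoFailureAction(ConsumerCryptoFailureAction.CONSUME)
+        .subscribe();
+```
+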
diff --git a/site2/website/versioned_docs/version-2.6.0/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.6.0/cookbooks-message-queue.md
new file mode 100644
index 0000000..3bc696c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/cookbooks-message-queue.md
@@ -0,0 +1,95 @@
+---
+id: version-2.6.0-cookbooks-message-queue
+title: Using Pulsar as a message queue
+sidebar_label: Message queue
+original_id: cookbooks-message-queue
+---
+
+Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed despite the slowness or outright failure of individual system components, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.
+
+Pulsar is a great choice for a message queue because:
+
+* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
+* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)
+
+> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).
+
+
+## Client configuration changes
+
+To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:
+
+* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
+* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer will attempt to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer will attempt to process 1000 messages from the topic's backlog upon connection. Setti [...]
+
+   The downside to restricting the receiver queue size is that it limits the potential throughput of those consumers, and it cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.
+
+### Java clients
+
+Here's an example Java consumer configuration that uses a shared subscription:
+
+```java
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.SubscriptionType;
+
+String SERVICE_URL = "pulsar://localhost:6650";
+String TOPIC = "persistent://public/default/mq-topic-1";
+String subscription = "sub-1";
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl(SERVICE_URL)
+        .build();
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic(TOPIC)
+        .subscriptionName(subscription)
+        .subscriptionType(SubscriptionType.Shared)
+        // If you'd like to restrict the receiver queue size
+        .receiverQueueSize(10)
+        .subscribe();
+```
+
+### Python clients
+
+Here's an example Python consumer configuration that uses a shared subscription:
+
+```python
+from pulsar import Client, ConsumerType
+
+SERVICE_URL = "pulsar://localhost:6650"
+TOPIC = "persistent://public/default/mq-topic-1"
+SUBSCRIPTION = "sub-1"
+
+client = Client(SERVICE_URL)
+consumer = client.subscribe(
+    TOPIC,
+    SUBSCRIPTION,
+    # If you'd like to restrict the receiver queue size
+    receiver_queue_size=10,
+    consumer_type=ConsumerType.Shared)
+```
+
+### C++ clients
+
+Here's an example C++ consumer configuration that uses a shared subscription:
+
+```cpp
+#include <pulsar/Client.h>
+
+using namespace pulsar;
+
+std::string serviceUrl = "pulsar://localhost:6650";
+std::string topic = "persistent://public/default/mq-topic-1";
+std::string subscription = "sub-1";
+
+Client client(serviceUrl);
+
+ConsumerConfiguration consumerConfig;
+consumerConfig.setConsumerType(ConsumerShared);
+// If you'd like to restrict the receiver queue size
+consumerConfig.setReceiverQueueSize(10);
+
+Consumer consumer;
+
+Result result = client.subscribe(topic, subscription, consumerConfig, consumer);
+```
+
diff --git a/site2/website/versioned_docs/version-2.6.0/cookbooks-non-persistent.md b/site2/website/versioned_docs/version-2.6.0/cookbooks-non-persistent.md
new file mode 100644
index 0000000..3f73736
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/cookbooks-non-persistent.md
@@ -0,0 +1,59 @@
+---
+id: version-2.6.0-cookbooks-non-persistent
+title: Non-persistent messaging
+sidebar_label: Non-persistent messaging
+original_id: cookbooks-non-persistent
+---
+
+**Non-persistent topics** are Pulsar topics in which message data is *never* [persistently stored](concepts-architecture-overview.md#persistent-storage) and kept only in memory. This cookbook provides:
+
+* A basic [conceptual overview](#overview) of non-persistent topics
+* Information about [configurable parameters](#configuration) related to non-persistent topics
+* A guide to the [CLI interface](#cli) for managing non-persistent topics
+
+## Overview
+
+By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
+
+Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar [broker](reference-terminology.md#broker) or disconnecting a subscriber means that all in-transit messages on that (non-persistent) topic are lost, so clients may see message loss.
+
+Non-persistent topics have names of this form (note the `non-persistent` in the name):
+
+```http
+non-persistent://tenant/namespace/topic
+```
+
+> For more high-level information about non-persistent topics, see the [Concepts and Architecture](concepts-messaging.md#non-persistent-topics) documentation.
+
+## Using
+
+> In order to use non-persistent topics, they must be [enabled](#enabling) in your Pulsar broker configuration.
+
+In order to use non-persistent topics, you only need to differentiate them by name when interacting with them. This [`pulsar-client produce`](reference-cli-tools.md#pulsar-client-produce) command, for example, would produce one message on a non-persistent topic in a standalone cluster:
+
+```bash
+$ bin/pulsar-client produce non-persistent://public/default/example-np-topic \
+  --num-produce 1 \
+  --messages "This message will be stored only in memory"
+```
+
+> For a more thorough guide to non-persistent topics from an administrative perspective, see the [Non-persistent topics](admin-api-non-persistent-topics.md) guide.
+
+## Enabling
+
+In order to enable non-persistent topics in a Pulsar broker, the [`enableNonPersistentTopics`](reference-configuration.md#broker-enableNonPersistentTopics) parameter must be set to `true`. This is the default, so you won't need to take any action to enable non-persistent messaging.
+
+
+> #### Configuration for standalone mode
+> If you're running Pulsar in standalone mode, the same configurable parameters are available but in the [`standalone.conf`](reference-configuration.md#standalone) configuration file. 
+
+If you'd like to enable *only* non-persistent topics in a broker, you can set the [`enablePersistentTopics`](reference-configuration.md#broker-enablePersistentTopics) parameter to `false` and the `enableNonPersistentTopics` parameter to `true`.
+
+## Managing with the CLI
+
+Non-persistent topics can be managed using the [`pulsar-admin non-persistent`](reference-pulsar-admin.md#non-persistent) command-line interface. With that interface you can perform actions like [create a partitioned non-persistent topic](reference-pulsar-admin.md#non-persistent-create-partitioned-topic), get [stats](reference-pulsar-admin.md#non-persistent-stats) for a non-persistent topic, [list](reference-pulsar-admin.md) non-persistent topics under a namespace, and more.
+
+## Using with Pulsar clients
+
+You shouldn't need to make any changes to your Pulsar clients to use non-persistent messaging beyond making sure that you use proper [topic names](#using) with `non-persistent` as the topic type.
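+
+For instance, a Java producer publishes to a non-persistent topic simply by naming it. A minimal sketch:
+
+```java
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+// Note the non-persistent:// scheme in the topic name
+Producer<byte[]> producer = client.newProducer()
+        .topic("non-persistent://public/default/example-np-topic")
+        .create();
+
+producer.send("This message will be stored only in memory".getBytes());
+```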
+
diff --git a/site2/website/versioned_docs/version-2.6.0/cookbooks-partitioned.md b/site2/website/versioned_docs/version-2.6.0/cookbooks-partitioned.md
new file mode 100644
index 0000000..6956539
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/cookbooks-partitioned.md
@@ -0,0 +1,93 @@
+---
+id: version-2.6.0-cookbooks-partitioned
+title: Partitioned topics
+sidebar_label: Partitioned Topics
+original_id: cookbooks-partitioned
+---
+
+By default, Pulsar topics are served by a single broker. Using only a single broker limits a topic's maximum throughput. *Partitioned topics* are a special type of topic that can span multiple brokers and thus allow for much higher throughput. For an explanation of how partitioned topics work, see the [Partitioned Topics](concepts-messaging.md#partitioned-topics) concepts.
+
+You can publish to partitioned topics using Pulsar client libraries and you can [create and manage](#managing-partitioned-topics) partitioned topics using Pulsar [admin API](admin-api-overview.md).
+
+## Publish to partitioned topics
+
+When publishing to partitioned topics, you do not need to explicitly specify a [routing mode](concepts-messaging.md#routing-modes) when you create a new producer. If you do not specify a routing mode, the round-robin mode is used. The examples below use Java.
+
+Publishing messages to partitioned topics with the Java client works much like [publishing to normal topics](client-libraries-java.md#using-producers). The difference is that you can specify either one of the currently available message routers or a custom router.
+
+### Routing mode
+
+You can specify the routing mode when you build the producer. Three options are available:
+
+* `SinglePartition`
+* `RoundRobinPartition`
+* `CustomPartition`
+
+The following is an example:
+
+```java
+import org.apache.pulsar.client.api.MessageRoutingMode;
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+String pulsarBrokerRootUrl = "pulsar://localhost:6650";
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+
+PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
+Producer<byte[]> producer = pulsarClient.newProducer()
+        .topic(topic)
+        .messageRoutingMode(MessageRoutingMode.SinglePartition)
+        .create();
+producer.send("Partitioned topic message".getBytes());
+```
+
+### Custom message router
+
+To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method:
+
+```java
+public interface MessageRouter extends Serializable {
+    int choosePartition(Message msg);
+}
+```
+
+The following router routes every message to partition 10:
+
+```java
+public class AlwaysTenRouter implements MessageRouter {
+    public int choosePartition(Message msg) {
+        return 10;
+    }
+}
+```
+
+With that implementation in hand, you can send messages to partitioned topics as follows:
+
+```java
+String pulsarBrokerRootUrl = "pulsar://localhost:6650";
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+
+PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
+Producer<byte[]> producer = pulsarClient.newProducer()
+        .topic(topic)
+        .messageRouter(new AlwaysTenRouter())
+        .create();
+producer.send("Partitioned topic message".getBytes());
+```
+
+### How to choose partitions when using a key
+If a message has a key, it supersedes the round-robin routing policy. The following excerpt from Pulsar's default router illustrates how a partition is chosen when a message has a key:
+
+```java
+// If the message has a key, it supersedes the round-robin routing policy
+if (msg.hasKey()) {
+    return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions());
+}
+
+if (isBatchingEnabled) { // if batching is enabled, choose the partition on a `partitionSwitchMs` boundary
+    long currentMs = clock.millis();
+    return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions());
+} else {
+    return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions());
+}
+```
+
+## Manage partitioned topics
+
+You can use Pulsar [admin API](admin-api-overview.md) to create and manage [partitioned topics](admin-api-partitioned-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.6.0/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.6.0/cookbooks-retention-expiry.md
new file mode 100644
index 0000000..093838c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/cookbooks-retention-expiry.md
@@ -0,0 +1,291 @@
+---
+id: version-2.6.0-cookbooks-retention-expiry
+title: Message retention and expiry
+sidebar_label: Message retention and expiry
+original_id: cookbooks-retention-expiry
+---
+
+Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs.
+
+As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it.
+
+(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.)
+
+In Pulsar, you can modify this behavior, with namespace granularity, in two ways:
+
+* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies).
+* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged by specifying a [time to live](#time-to-live-ttl) (TTL).
+
+Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).
+
+
+> #### Retention and TTL solve two different problems
+> * Message retention: Keep the data for at least X hours (even if acknowledged)
+> * Time-to-live: Discard data after some time (by automatically acknowledging)
+>
+> Most applications will want to use at most one of these.
+
+
+## Retention policies
+
+By default, when a Pulsar message arrives at a broker it will be stored until it has been acknowledged on all subscriptions, at which point it will be marked for deletion. You can override this behavior and retain even messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace. Retention policies are either a *size limit* or a *time limit*.
+
+Retention policies are particularly useful if you intend to exclusively use the Reader interface. Because the Reader interface does not use acknowledgements, messages will never exist within backlogs. Most realistic Reader-only use cases require that retention be configured.
+
+When you set a size limit of, say, 10 gigabytes, then acknowledged messages in all topics in the namespace will be retained until the size limit for the topic is reached; if you set a time limit of, say, 1 day, then acknowledged messages for all topics in the namespace will be retained for 24 hours. The retention settings apply to all messages on topics that do not have any subscriptions, or if there are subscriptions, to messages that have been acked by all subscriptions. The retention  [...]
+
+When a retention limit is exceeded, the oldest message is marked for deletion until the set of retained messages falls within the specified limits again.
+
+It is also possible to set *unlimited* retention time or size by setting `-1` for either time or size retention.
+
+### Defaults
+
+There are two configuration parameters that you can use to set [instance](reference-terminology.md#instance)-wide defaults for message retention: [`defaultRetentionTimeInMinutes=0`](reference-configuration.md#broker-defaultRetentionTimeInMinutes) and [`defaultRetentionSizeInMB=0`](reference-configuration.md#broker-defaultRetentionSizeInMB).
+
+Both of these parameters are in the [`broker.conf`](reference-configuration.md#broker) configuration file.
+
+### Set retention policy
+
+You can set a retention policy for a namespace by specifying the namespace as well as both a size limit *and* a time limit.
+
+#### pulsar-admin
+
+Use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag.
+
+##### Examples
+
+To set a size limit of 10 gigabytes and a time limit of 3 hours for the `my-tenant/my-ns` namespace:
+
+```shell
+$ pulsar-admin namespaces set-retention my-tenant/my-ns \
+  --size 10G \
+  --time 3h
+```
+
+To set retention with a size limit but without a time limit:
+
+```shell
+$ pulsar-admin namespaces set-retention my-tenant/my-ns \
+  --size 1T \
+  --time -1
+```
+
+Retention can be configured to be unlimited both in size and time:
+
+```shell
+$ pulsar-admin namespaces set-retention my-tenant/my-ns \
+  --size -1 \
+  --time -1
+```
+
+
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention}
+
+#### Java
+
+```java
+int retentionTime = 10; // 10 minutes
+int retentionSize = 500; // 500 megabytes
+RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize);
+admin.namespaces().setRetention(namespace, policies);
+```
+
+### Get retention policy
+
+You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`.
+
+#### pulsar-admin
+
+Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces get-retention my-tenant/my-ns
+{
+  "retentionTimeInMinutes": 10,
+  "retentionSizeInMB": 0
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention}
+
+#### Java
+
+```java
+admin.namespaces().getRetention(namespace);
+```
+
+## Backlog quotas
+
+*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged.
+
+You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting:
+
+* an allowable *size threshold* for each topic in the namespace
+* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded.
+
+The following retention policies are available:
+
+Policy | Action
+:------|:------
+`producer_request_hold` | The broker will hold and not persist produce request payload
+`producer_exception` | The broker will disconnect from the client by throwing an exception
+`consumer_backlog_eviction` | The broker will begin discarding backlog messages
+
+
+> #### Beware the distinction between retention policy types
+> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar, one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs.
+
+
+Backlog quotas are handled at the namespace level and can be managed via the `pulsar-admin` tool, the REST API, or the Java admin API, as shown in the following sections.
+
+### Set size thresholds and backlog retention policies
+
+You can set a size threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit, and a policy by name.
+
+#### pulsar-admin
+
+Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag, and a retention policy using the `-p`/`--policy` flag.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
+  --limit 2G \
+  --policy producer_request_hold
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap}
+
+#### Java
+
+```java
+long sizeLimit = 2147483648L;
+BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold;
+BacklogQuota quota = new BacklogQuota(sizeLimit, policy);
+admin.namespaces().setBacklogQuota(namespace, quota);
+```
+
+### Get backlog threshold and backlog retention policy
+
+You can see which size threshold and backlog retention policy has been applied to a namespace.
+
+#### pulsar-admin
+
+Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example:
+
+```shell
+$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns
+{
+  "destination_storage": {
+    "limit" : 2147483648,
+    "policy" : "producer_request_hold"
+  }
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap}
+
+#### Java
+
+```java
+Map<BacklogQuota.BacklogQuotaType,BacklogQuota> quotas =
+  admin.namespaces().getBacklogQuotas(namespace);
+```
+
+### Remove backlog quotas
+
+#### pulsar-admin
+
+Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace. Here's an example:
+
+```shell
+$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota}
+
+#### Java
+
+```java
+admin.namespaces().removeBacklogQuota(namespace);
+```
+
+### Clear backlog
+
+#### pulsar-admin
+
+Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces clear-backlog my-tenant/my-ns
+```
+
+By default, you will be prompted to ensure that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag.
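+
+#### Java
+
+The backlog can also be cleared with the Java admin API; a minimal sketch:
+
+```java
+// Clears the backlog of every topic and subscription in the namespace
+admin.namespaces().clearNamespaceBacklog(namespace);
+```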
+
+## Time to live (TTL)
+
+By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained.
+
+### Set the TTL for a namespace
+
+#### pulsar-admin
+
+Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
+  --messageTTL 120 # TTL of 2 minutes
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL}
+
+#### Java
+
+```java
+admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);
+```
+
+### Get the TTL configuration for a namespace
+
+#### pulsar-admin
+
+Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
+60
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL}
+
+#### Java
+
+```java
+admin.namespaces().getNamespaceMessageTTL(namespace);
+```
+
diff --git a/site2/website/versioned_docs/version-2.6.0/deploy-aws.md b/site2/website/versioned_docs/version-2.6.0/deploy-aws.md
new file mode 100644
index 0000000..bcef265
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/deploy-aws.md
@@ -0,0 +1,224 @@
+---
+id: version-2.6.0-deploy-aws
+title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
+sidebar_label: Amazon Web Services
+original_id: deploy-aws
+---
+
+> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).
+
+One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install [...]
+
+## Requirements and setup
+
+In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following things:
+
+* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
+* Python and [pip](https://pip.pypa.io/en/stable/)
+* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts
+
+You also need to make sure that you are currently logged into your AWS account via the `aws` tool:
+
+```bash
+$ aws configure
+```
+
+## Installation
+
+You can install Ansible on Linux or macOS using pip.
+
+```bash
+$ pip install ansible
+```
+
+You can install Terraform using the instructions [here](https://www.terraform.io/intro/getting-started/install.html).
+
+You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine. You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands:
+
+```bash
+$ git clone https://github.com/apache/pulsar
+$ cd pulsar/deployment/terraform-ansible/aws
+```
+
+## SSH setup
+
+> If you already have an SSH key and want to use it, you can skip generating an SSH key and instead update the `private_key_file` setting
+> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
+>
+> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
+> follow the steps below:
+>
+> 1. Update `ansible.cfg` with the following values:
+>
+> ```shell
+> private_key_file=~/.ssh/pulsar_aws
+> ```
+>
+> 2. Update `terraform.tfvars` with the following values:
+>
+> ```shell
+> public_key_path=~/.ssh/pulsar_aws.pub
+> ```
+
+In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:
+
+```bash
+$ ssh-keygen -t rsa
+```
+
+Do *not* enter a passphrase (hit **Enter** instead when the prompt appears). Enter the following command to verify that a key has been created:
+
+```bash
+$ ls ~/.ssh
+id_rsa               id_rsa.pub
+```
+
+## Create AWS resources using Terraform
+
+To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command:
+
+```bash
+$ terraform init
+# This will create a .terraform folder
+```
+
+After that, you can apply the default Terraform configuration by entering this command:
+
+```bash
+$ terraform apply
+```
+
+Then you see the following prompt:
+
+```bash
+Do you want to perform these actions?
+  Terraform will perform the actions described above.
+  Only 'yes' will be accepted to approve.
+
+  Enter a value:
+```
+
+Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When Terraform finishes applying the configuration, you can see `Apply complete!` along with some other information, including the number of resources created.
+
+### Apply a non-default configuration
+
+You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available:
+
+Variable name | Description | Default
+:-------------|:------------|:-------
+`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub`
+`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
+`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a`
+`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses  | `ami-9fa343e7`
+`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
+`num_bookie_nodes` | The number of bookies that run in the cluster | 3
+`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
+`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
+`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
+`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)
+
+### What is installed
+
+When you run the Ansible playbook, the following AWS resources are used:
+
+* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
+  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
+  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
+  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
+  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) (a [c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
+* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
+* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
+* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
+* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
+* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC
+
+All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.
+
+### Fetch your Pulsar connection URL
+
+When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:
+
+```
+pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650
+```
+
+You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):
+
+```bash
+$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value
+```
+
+### Destroy your cluster
+
+At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:
+
+```bash
+$ terraform destroy
+```
+
+## Set up disks
+
+Before you run the Pulsar playbook, you need to mount the disks to the correct directories on the bookie nodes. Since different types of machines have different disk layouts, you need to update the tasks defined in the `setup-disk.yaml` file if you change the `instance_types` in your Terraform config.
+
+To set up disks on the bookie nodes, enter this command:
+
+```bash
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  setup-disk.yaml
+```
+
+After that, the journal disk is mounted under `/mnt/journal` and the ledger disk under `/mnt/storage`.
+Run this command only once. If you run it again after you have run the Pulsar playbook, your disks might be erased, causing the bookies to fail to start up.
+
+## Run the Pulsar playbook
+
+Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible. To do so, enter this command:
+
+```bash
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  ../deploy-pulsar.yaml
+```
+
+If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command:
+
+```bash
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  --private-key="~/.ssh/some-non-default-key" \
+  ../deploy-pulsar.yaml
+```
+
+## Access the cluster
+
+You can now access your running Pulsar cluster using the unique Pulsar connection URL for your cluster, which you can obtain by following the instructions [above](#fetch-your-pulsar-connection-url).
+
+For a quick demonstration of accessing the cluster, you can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:
+
+```bash
+$ pip install pulsar-client
+```
+
+Now, open up the Python shell using the `python` command:
+
+```bash
+$ python
+```
+
+Once you are in the shell, enter the following command:
+
+```python
+>>> import pulsar
+>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
+# Make sure to use your connection URL
+>>> producer = client.create_producer('persistent://public/default/test-topic')
+>>> producer.send('Hello world')
+>>> client.close()
+```
+
+If all of these commands are successful, Pulsar clients can now use your cluster!
+
diff --git a/site2/website/versioned_docs/version-2.6.0/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.6.0/deploy-bare-metal-multi-cluster.md
new file mode 100644
index 0000000..a409bbb
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/deploy-bare-metal-multi-cluster.md
@@ -0,0 +1,426 @@
+---
+id: version-2.6.0-deploy-bare-metal-multi-cluster
+title: Deploying a multi-cluster on bare metal
+sidebar_label: Bare metal multi-cluster
+original_id: deploy-bare-metal-multi-cluster
+---
+
+> ### Tips
+>
+> 1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
+> Pulsar or using it in a startup or on a single team, it is simplest to opt for a single cluster. For instructions on deploying a single cluster,
+> see the guide [here](deploy-bare-metal.md).
+>
+> 2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
+> package and install it under the `connectors` directory in the pulsar directory on every broker node, or on every function-worker node if you
+> run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
+>
+> 3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
+> package and install it under the `offloaders` directory in the pulsar directory on every broker node. For details on how to configure
+> this feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).
+
+A Pulsar *instance* consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md). Deploying a multi-cluster Pulsar instance involves the following basic steps:
+
+* Deploying two separate [ZooKeeper](#deploy-zookeeper) quorums: a [local](#deploy-local-zookeeper) quorum for each cluster in the instance and a [configuration store](#deploy-the-configuration-store) quorum for instance-wide tasks
+* Initializing [cluster metadata](#cluster-metadata-initialization) for each cluster
+* Deploying a [BookKeeper cluster](#deploy-bookkeeper) of bookies in each Pulsar cluster
+* Deploying [brokers](#deploy-brokers) in each Pulsar cluster
+
+If you want to deploy a single Pulsar cluster, see [Clusters and Brokers](getting-started-standalone.md#start-the-cluster).
+
+> #### Run Pulsar locally or on Kubernetes?
+> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes on [Google Kubernetes Engine](deploy-kubernetes#pul [...]
+
+## System requirements
+
+Pulsar is currently available for **MacOS** and **Linux**. To use Pulsar, you need to install Java 8 from the [Oracle download center](http://www.oracle.com/).
+
+## Install Pulsar
+
+To get started running Pulsar, download a binary tarball release in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-{{pulsar:version}}/apache-pulsar-{{pulsar:version}}-bin.tar.gz' -O apache-pulsar-{{pulsar:version}}-bin.tar.gz
+  ```
+
+Once you download the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+$ tar xvfz apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+## What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md)
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses 
+`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase
+
+The following directories are created once you begin running Pulsar:
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory that ZooKeeper and BookKeeper use
+`instances` | Artifacts created for [Pulsar Functions](functions-overview.md)
+`logs` | Logs that the installation creates
+
+
+## Deploy ZooKeeper
+
+Each Pulsar instance relies on two separate ZooKeeper quorums.
+
+* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
+* [Configuration Store](#deploy-the-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum.
+
+
+### Deploy local ZooKeeper
+
+ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.
+
+You need to stand up one local ZooKeeper cluster *per Pulsar cluster* for deploying a Pulsar instance. 
+
+To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
+
+```shell
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+```
+
+On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.
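+
+If you manage the hosts over SSH, a small loop can set all three IDs in one pass. This is a sketch that assumes passwordless SSH access and that the Pulsar directory is the login directory on each host:
+
+```shell
+$ for i in 1 2 3; do
+    ssh "zk${i}.us-west.example.com" "mkdir -p data/zookeeper && echo ${i} > data/zookeeper/myid"
+  done
+```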
+
+Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+$ bin/pulsar-daemon start zookeeper
+```
+
+### Deploy the configuration store 
+
+The ZooKeeper cluster that is configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
+
+If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you should stand up a separate ZooKeeper cluster for configuration tasks.
+
+#### Single-cluster Pulsar instance
+
+If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.
+
+To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#deploy-local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+```
+
+As before, create the `myid` files for each server in `data/global-zookeeper/myid`.
+
+#### Multi-cluster Pulsar instance
+
+When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
+
+The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.
+
+Again, given the very low expected load on the configuration store servers, you can
+share the same hosts used for the local ZooKeeper quorum.
+
+For example, assume a Pulsar instance with the following clusters: `us-west`,
+`us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:
+
+```
+zk[1-3].${CLUSTER}.example.com
+```
+
+In this scenario, you want to pick the quorum participants from a few clusters and
+let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
+
+This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
+
+The ZK configuration on all the servers looks like this:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+server.4=zk1.us-central.example.com:2185:2186
+server.5=zk2.us-central.example.com:2185:2186
+server.6=zk3.us-central.example.com:2185:2186:observer
+server.7=zk1.us-east.example.com:2185:2186
+server.8=zk2.us-east.example.com:2185:2186
+server.9=zk3.us-east.example.com:2185:2186:observer
+server.10=zk1.eu-central.example.com:2185:2186:observer
+server.11=zk2.eu-central.example.com:2185:2186:observer
+server.12=zk3.eu-central.example.com:2185:2186:observer
+server.13=zk1.ap-south.example.com:2185:2186:observer
+server.14=zk2.ap-south.example.com:2185:2186:observer
+server.15=zk3.ap-south.example.com:2185:2186:observer
+```
+
+Additionally, ZK observers need to have the following parameters:
+
+```properties
+peerType=observer
+```
+
+##### Start the service
+
+Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
+
+```shell
+$ bin/pulsar-daemon start configuration-store
+```
+
+## Cluster metadata initialization
+
+Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**
+
+You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The following is an example:
+
+```shell
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster us-west \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2184 \
+  --web-service-url http://pulsar.us-west.example.com:8080/ \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
+```
+
+As you can see from the example above, you need to specify the following:
+
+* The name of the cluster
+* The local ZooKeeper connection string for the cluster
+* The configuration store connection string for the entire instance
+* The web service URL for the cluster
+* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
+
+If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.
+
+Make sure to run `initialize-cluster-metadata` for each cluster in your instance.
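+
+For example, a second cluster `us-east` in the same instance would be initialized against its own local ZooKeeper quorum but the same instance-wide configuration store (the host names below are illustrative):
+
+```shell
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster us-east \
+  --zookeeper zk1.us-east.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2184 \
+  --web-service-url http://pulsar.us-east.example.com:8080/ \
+  --broker-service-url pulsar://pulsar.us-east.example.com:6650/
+```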
+
+## Deploy BookKeeper
+
+BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
+
+Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
+
+### Configure bookies
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper quorum of the Pulsar cluster.
+
+### Start bookies
+
+You can start a bookie in two ways: in the foreground or as a background daemon.
+
+To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start bookie
+```
+
+You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
+
+```shell
+$ bin/bookkeeper shell bookiesanity
+```
+
+This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger.
+
+After you have started all bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all bookies in the cluster are running.
+
+```bash
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+```
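+
+For example, on a three-bookie cluster you might run:
+
+```bash
+$ bin/bookkeeper shell simpletest --ensemble 3 --writeQuorum 3 --ackQuorum 3 --numEntries 100
+```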
+
+Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, a suitable hardware configuration is essential. The following are the key dimensions of bookie hardware capacity:
+
+* Disk I/O capacity read/write
+* Storage capacity
+
+Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is
+designed to use multiple devices:
+
+* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
+* A **ledger storage device**, where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the disks are read only when a consumer drains a backlog. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
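+
+As a sketch, you can point BookKeeper at the separately mounted devices in `conf/bookkeeper.conf`; the mount points below are illustrative:
+
+```properties
+# Journal on the small, fast SSD
+journalDirectory=/mnt/journal/bookkeeper/journal
+
+# Ledger storage on the larger HDD array
+ledgerDirectories=/mnt/storage/bookkeeper/ledgers
+```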
+
+
+
+## Deploy brokers
+
+Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.
+
+### Broker configuration
+
+You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
+
+The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those  [...]
+
+You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata (especially when you use a different port from default) of the cluster.
+
+The following is an example configuration:
+
+```properties
+# Local ZooKeeper servers
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Configuration store quorum connection string.
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+
+clusterName=us-west
+
+# Broker data port
+brokerServicePort=6650
+
+# Broker data port for TLS
+brokerServicePortTls=6651
+
+# Port to use to serve HTTP requests
+webServicePort=8080
+
+# Port to use to serve HTTPS requests
+webServicePortTls=8443
+```
+
+### Broker hardware
+
+Pulsar brokers do not require any special hardware since they do not use the local disk. Fast CPUs and a 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) are recommended so that the software can take full advantage of them.
+
+### Start the broker service
+
+You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+$ bin/pulsar-daemon start broker
+```
+
+You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):
+
+```shell
+$ bin/pulsar broker
+```
+
+## Service discovery
+
+[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions [immediately below](#service-discovery-setup).
+
+You can also use your own service discovery system if you want. If you use your own system, you need to satisfy just one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
+
+> #### Service discovery already provided by many scheduling systems
+> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.
+
+
+### Service discovery setup
+
+The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookup over HTTP as well as the Pulsar [binary protocol](developing-binary-protocol.md).
+
+To get started setting up the built-in service discovery of Pulsar, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery) configuration file. Set the [`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers) parameter to the ZooKeeper quorum connection string of the cluster and the [`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers) setting to the [configuration
+store](reference-terminology.md#configuration-store) quorum connection string.
+
+```properties
+# Zookeeper quorum connection string
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Global configuration store connection string
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+```
+
+To start the discovery service:
+
+```shell
+$ bin/pulsar-daemon start discovery
+```
+
+## Admin client and verification
+
+At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.
+
+The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:
+
+```properties
+serviceUrl=http://pulsar.us-west.example.com:8080/
+```
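+
+With `serviceUrl` in place, you can verify connectivity with any read-only admin call, for example:
+
+```shell
+$ bin/pulsar-admin clusters list
+```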
+
+## Provision new tenants
+
+Pulsar is built as a fundamentally multi-tenant system.
+
+
+If a new tenant wants to use the system, you need to create a new tenant. You can create one by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:
+
+
+```shell
+$ bin/pulsar-admin tenants create test-tenant \
+  --allowed-clusters us-west \
+  --admin-roles test-admin-role
+```
+
+In this command, users who identify with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.
+
+Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.
+
+
+The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.
+
+```shell
+$ bin/pulsar-admin namespaces create test-tenant/ns1
+```
+
+### Test producer and consumer
+
+
+Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool.
+
+
+You can use a topic in the namespace that you have just created. Topics are automatically created the first time a producer or a consumer tries to use them.
+
+The topic name in this case could be:
+
+```http
+persistent://test-tenant/ns1/my-topic
+```
+
+Start a consumer that creates a subscription on the topic and waits for messages:
+
+```shell
+$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic
+```
+
+Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds:
+
+```shell
+$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic
+```
+
+To report the topic stats:
+
+```shell
+$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.6.0/deploy-bare-metal.md
new file mode 100644
index 0000000..ea62361
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/deploy-bare-metal.md
@@ -0,0 +1,459 @@
+---
+id: version-2.6.0-deploy-bare-metal
+title: Deploy a cluster on bare metal
+sidebar_label: Bare metal
+original_id: deploy-bare-metal
+---
+
+
+> ### Tips
+>
+> 1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
+> Pulsar or using Pulsar in a startup or on a single team, it is simplest to opt for a single cluster. If you do need to run a multi-cluster Pulsar instance,
+> see the guide [here](deploy-bare-metal-multi-cluster.md).
+>
+> 2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download `apache-pulsar-io-connectors`
+> package and install `apache-pulsar-io-connectors` under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you
+> have run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
+>
+> 3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download `apache-pulsar-offloaders`
+> package and install `apache-pulsar-offloaders` under `offloaders` directory in the pulsar directory on every broker node. For more details of how to configure
+> this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).
+
+Deploying a Pulsar cluster involves doing the following (in order):
+
+* Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional)
+* Initialize [cluster metadata](#initialize-cluster-metadata)
+* Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster
+* Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers)
+
+## Preparation
+
+### Requirements
+
+> If you already have an existing ZooKeeper cluster and want to reuse it, you do not need to prepare the machines
+> for running ZooKeeper.
+
+To run Pulsar on bare metal, the following configuration is recommended:
+
+* At least 6 Linux machines or VMs
+  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
+  * 3 for running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie
+* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts
+
+> If you do not have enough machines, or to try out Pulsar in cluster mode (and expand the cluster later),
+> you can deploy a full Pulsar configuration on one node, where ZooKeeper, the bookie, and the broker run on the same machine.
+
+> If you do not have a DNS server, you can use the multi-host format in the service URL instead.
+
+Each machine in your cluster needs to have [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or a more recent version of Java installed.
+
+The following is a diagram showing the basic setup:
+
+![alt-text](assets/pulsar-basic-setup.png)
+
+In this diagram, connecting clients need to be able to communicate with the Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.
+
+### Hardware considerations
+
+When you deploy a Pulsar cluster, keep in mind the following basic guidelines when you do capacity planning.
+
+#### ZooKeeper
+
+For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, *not* for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance will likely suffice.
+
+#### Bookies and Brokers
+
+For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines you can use the following:
+
+* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
+* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)
+
+## Install the Pulsar binary package
+
+> You need to install the Pulsar binary package on *each machine in the cluster*, including machines running [ZooKeeper](#deploy-a-zookeeper-cluster) and [BookKeeper](#deploy-a-bookkeeper-cluster).
+
+To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways:
+
+* By clicking on the link below directly, which automatically triggers a download:
+  * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>
+* From the Pulsar [downloads page](pulsar:download_page_url)
+* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on [GitHub](https://github.com)
+* Using [wget](https://www.gnu.org/software/wget):
+
+```bash
+$ wget pulsar:binary_release_url
+```
+
+Once you download the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+$ tar xvzf apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+The extracted directory contains the following subdirectories:
+
+Directory | Contains
+:---------|:--------
+`bin` |[command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`data` | The data storage directory that ZooKeeper and BookKeeper use
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
+`logs` | Logs that the installation creates
+
+## [Install Builtin Connectors (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional)
+
+> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution, containing all the `builtin` connectors.
+> If you want to enable those `builtin` connectors, you can follow the instructions as below; otherwise you can
+> skip this section for now.
+
+To get started using builtin connectors, you need to download the connectors tarball release on every broker node in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:connector_release_url" download>Pulsar IO Connectors {{pulsar:version}} release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:connector_release_url/{connector}-{{pulsar:version}}.nar
+  ```
+
+Once you download the .nar file, copy the file to the `connectors` directory in the pulsar directory.
+For example, if you download the connector file `pulsar-io-aerospike-{{pulsar:version}}.nar`:
+
+```bash
+$ mkdir connectors
+$ mv pulsar-io-aerospike-{{pulsar:version}}.nar connectors
+
+$ ls connectors
+pulsar-io-aerospike-{{pulsar:version}}.nar
+...
+```
+
+## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional)
+
+> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution, containing the tiered storage offloaders.
+> If you want to enable tiered storage feature, you can follow the instructions as below; otherwise you can
+> skip this section for now.
+
+To get started using tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders {{pulsar:version}} release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:offloader_release_url
+  ```
+
+Once you download the tarball, untar the offloaders package in the pulsar directory, and move the `offloaders` directory from the extracted package into the pulsar directory:
+
+```bash
+$ tar xvfz apache-pulsar-offloaders-{{pulsar:version}}-bin.tar.gz
+
+# you can find a directory named `apache-pulsar-offloaders-{{pulsar:version}}` in the pulsar directory
+# then move the offloaders
+
+$ mv apache-pulsar-offloaders-{{pulsar:version}}/offloaders offloaders
+
+$ ls offloaders
+tiered-storage-jcloud-{{pulsar:version}}.nar
+```
+
+For more details on how to configure the tiered storage feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).
+
+
+## Deploy a ZooKeeper cluster
+
+> If you already have an existing ZooKeeper cluster and want to use it, you can skip this section.
+
+[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first (before all other components). A 3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make heavy use of ZooKeeper, so more lightweight machines or VMs should suffice for running ZooKeeper.
+
+To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you create [above](#install-the-pulsar-binary-package)). The following is an example:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+> If you only have one machine on which to deploy Pulsar, you only need to add one server entry in the configuration file.
+
+On each host, you need to specify the ID of the node in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set the `myid` value as follows:
+
+```bash
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+```
+
+On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` and so on.
+
+Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start zookeeper
+```
+
+> If you plan to deploy ZooKeeper and a bookie on the same node, you
+> need to start ZooKeeper with a different stats port.
+
+Start ZooKeeper with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool like this:
+
+```bash
+$ PULSAR_EXTRA_OPTS="-Dstats_server_port=8001" bin/pulsar-daemon start zookeeper
+```
+
+## Initialize cluster metadata
+
+Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper for each cluster in your instance. You only need to write this data **once**.
+
+You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your ZooKeeper cluster. The following is an example:
+
+```shell
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster pulsar-cluster-1 \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2181 \
+  --web-service-url http://pulsar.us-west.example.com:8080 \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+As you can see from the example above, you will need to specify the following:
+
+Flag | Description
+:----|:-----------
+`--cluster` | A name for the cluster
+`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (you had better not use a different port).
+`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (you had better not use a different port).
+`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should not use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (you had better not use a different port).
+`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster. The default port is 6651 (you had better not use a different port).
+
+
+> If you do not have a DNS server, you can use multi-host format in the service URL with the following settings:
+>
+> ```properties
+> --web-service-url http://host1:8080,host2:8080,host3:8080 \
+> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
+> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
+> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
+> ```
+
+## Deploy a BookKeeper cluster
+
+[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. You can choose to run a **3-bookie BookKeeper cluster**.
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. The following is an example:
+
+```properties
+zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+```
+
+Once you appropriately modify the `zkServers` parameter, you can make any other configuration changes that you require. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper). However, consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice.
+
+> ##### NOTES
+>
+> Since Pulsar 2.1.0 releases, Pulsar introduces [stateful function](functions-develop.md#state-storage) for Pulsar Functions. If you want to enable that feature,
+> you need to enable table service on BookKeeper by doing the following setting in `conf/bookkeeper.conf` file.
+>
+> ```conf
+> extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
+> ```
+
+Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
+
+To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start bookie
+```
+
+To start the bookie in the foreground:
+
+```bash
+$ bin/bookkeeper bookie
+```
+
+You can verify that a bookie works properly by running the `bookiesanity` command on the [BookKeeper shell](reference-cli-tools.md#shell):
+
+```bash
+$ bin/bookkeeper shell bookiesanity
+```
+
+This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
+
+After you start all the bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all the bookies in the cluster are up and running.
+
+```bash
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+```
+
+This command creates a `num-bookies` sized ledger on the cluster, writes a few entries, and finally deletes the ledger.
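+
+For the 3-bookie cluster recommended above, for example:
+
+```bash
+$ bin/bookkeeper shell simpletest --ensemble 3 --writeQuorum 3 --ackQuorum 3 --numEntries 100
+```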
+
+
+## Deploy Pulsar brokers
+
+Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie.
+
+### Configure Brokers
+
+The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have one cluster and no separate configuration store, `configurationStoreServers` points to the same servers as `zookeeperServers`.
+
+```properties
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+```
+
+You also need to specify the cluster name (matching the name that you provided when you [initialize the metadata of the cluster](#initialize-cluster-metadata)):
+
+```properties
+clusterName=pulsar-cluster-1
+```
+
+In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use a different port than the default):
+
+```properties
+brokerServicePort=6650
+brokerServicePortTls=6651
+webServicePort=8080
+webServicePortTls=8443
+```
+
+> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`.
+>
+> ```properties
+> # Number of bookies to use when creating a ledger
+> managedLedgerDefaultEnsembleSize=1
+>
+> # Number of copies to store for each message
+> managedLedgerDefaultWriteQuorum=1
+> 
+> # Number of guaranteed copies (acks to wait before write is complete)
+> managedLedgerDefaultAckQuorum=1
+> ```
+
+### Enable Pulsar Functions (optional)
+
+If you want to enable [Pulsar Functions](functions-overview.md), you can follow the instructions as below:
+
+1. Edit `conf/broker.conf` to enable functions worker, by setting `functionsWorkerEnabled` to `true`.
+
+    ```conf
+    functionsWorkerEnabled=true
+    ```
+
+2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provide when you [initialize the metadata of the cluster](#initialize-cluster-metadata). 
+
+    ```conf
+    pulsarFunctionsCluster: pulsar-cluster-1
+    ```
+
+If you want to learn more options about deploying the functions worker, check out [Deploy and manage functions worker](functions-worker.md).
+
+### Start Brokers
+
+You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup.
+
+You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:
+
+```bash
+$ bin/pulsar broker
+```
+
+You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start broker
+```
+
+Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!
+
+## Connect to the running cluster
+
+Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provide a simple way to make sure that your cluster runs properly.
+
+To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting the DNS name that you assigned to your broker/bookie hosts for `localhost` (the default). The following is an example:
+
+```properties
+webServiceUrl=http://us-west.example.com:8080
+brokerServiceUrl=pulsar://us-west.example.com:6650
+```
+
+> If you do not have a DNS server, you can specify multi-host in service URL as follows:
+>
+> ```properties
+> webServiceUrl=http://host1:8080,host2:8080,host3:8080
+> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
+> ```
+
+Once that is complete, you can publish a message to the Pulsar topic:
+
+```bash
+$ bin/pulsar-client produce \
+  persistent://public/default/test \
+  -n 1 \
+  -m "Hello Pulsar"
+```
+
+> You may need to use a different cluster name in the topic if you specify a cluster name other than `pulsar-cluster-1`.
+
+This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the Pulsar topic in a different terminal before publishing messages as below:
+
+```bash
+$ bin/pulsar-client consume \
+  persistent://public/default/test \
+  -n 100 \
+  -s "consumer-test" \
+  -t "Exclusive"
+```
+
+Once you successfully publish the above message to the topic, you should see it in the standard output:
+
+```bash
+----- got message -----
+Hello Pulsar
+```
+
+## Run Functions
+
+> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try out the Pulsar Functions now.
+
+Create an ExclamationFunction `exclamation`.
+
+```bash
+bin/pulsar-admin functions create \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+```
+
+Check whether the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.
+
+```bash
+bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
+```
+
+You should see the following output:
+
+```shell
+hello world!
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/deploy-dcos.md b/site2/website/versioned_docs/version-2.6.0/deploy-dcos.md
new file mode 100644
index 0000000..0d60363
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/deploy-dcos.md
@@ -0,0 +1,183 @@
+---
+id: version-2.6.0-deploy-dcos
+title: Deploy Pulsar on DC/OS
+sidebar_label: DC/OS
+original_id: deploy-dcos
+---
+
+> ### Tips
+>
+> If you want to enable all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you can choose to use `apachepulsar/pulsar-all` image instead of
+> `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+[DC/OS](https://dcos.io/) (the <strong>D</strong>ata<strong>C</strong>enter <strong>O</strong>perating <strong>S</strong>ystem) is a distributed operating system for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool that [Mesosphere](https://mesosphere.com/) creates and maintains.
+
+Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.
+
+## Prerequisites
+
+In order to run Pulsar on DC/OS, you need the following:
+
+* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
+* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
+* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
+* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.
+
+  ```bash
+  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
+  ```
+
+Each node in the DC/OS-managed Mesos cluster must have at least:
+
+* 4 CPU
+* 4 GB of memory
+* 60 GB of total persistent disk
+
+Alternatively, you can change the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.
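+
+For example, a quick way to inspect the current resource settings of every application in the group, assuming [jq](https://stedolan.github.io/jq/) is installed:
+
+```bash
+# List the id, cpus, and mem fields of each Marathon app definition
+$ jq '.. | objects | select(has("cpus")) | {id, cpus, mem}' PulsarGroups.json
+```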
+
+## Deploy Pulsar using the DC/OS command interface
+
+You can deploy Pulsar on DC/OS using this command:
+
+```bash
+$ dcos marathon group add PulsarGroups.json
+```
+
+This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster:
+
+* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
+* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
+* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance
+
+
+> When you run DC/OS, a ZooKeeper cluster already runs at `master.mesos:2181`, thus you do not have to install or start up ZooKeeper separately.
+
+After executing the `dcos` command above, click on the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying.
+
+![DC/OS command executed](assets/dcos_command_execute.png)
+
+![DC/OS command executed2](assets/dcos_command_execute2.png)
+
+## The BookKeeper group
+
+To monitor the status of the BookKeeper cluster deployment, click on the **bookkeeper** group in the parent **pulsar** group.
+
+![DC/OS bookkeeper status](assets/dcos_bookkeeper_status.png)
+
+At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as green, which means that the bookies have been deployed successfully and are now running.
+ 
+![DC/OS bookkeeper running](assets/dcos_bookkeeper_run.png)
+ 
+You can also click into each bookie instance to get more detailed information, such as the bookie running log.
+
+![DC/OS bookie log](assets/dcos_bookie_log.png)
+
+To display information about BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.
+
+![DC/OS bookkeeper in zk](assets/dcos_bookkeeper_in_zookeeper.png)
+
+## The Pulsar broker group
+
+Similar to the BookKeeper group above, click into the **brokers** to check the status of the Pulsar brokers.
+
+![DC/OS broker status](assets/dcos_broker_status.png)
+
+![DC/OS broker running](assets/dcos_broker_run.png)
+
+You can also click into each broker instance to get more detailed information, such as the broker running log.
+
+![DC/OS broker log](assets/dcos_broker_log.png)
+
+Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.
+
+![DC/OS broker in zk](assets/dcos_broker_in_zookeeper.png)
+
+## The monitor group
+
+The **monitor** group consists of Prometheus and Grafana.
+
+![DC/OS monitor status](assets/dcos_monitor_status.png)
+
+### Prometheus
+
+Click into the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.
+
+![DC/OS prom endpoint](assets/dcos_prom_endpoint.png)
+
+If you click that endpoint, you can see the Prometheus dashboard. The [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL displays all the bookies and brokers.
+
+![DC/OS prom targets](assets/dcos_prom_targets.png)
+
+### Grafana
+
+Click into `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
+ 
+![DC/OS grafana endpoint](assets/dcos_grafana_endpoint.png)
+
+If you click that endpoint, you can access the Grafana dashboard.
+
+![DC/OS grafana targets](assets/dcos_grafana_dashboard.png)
+
+## Run a simple Pulsar consumer and producer on DC/OS
+
+Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.
+
+### Download and prepare the Pulsar Java tutorial
+
+You can clone a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file of the repo).
+
+```bash
+$ git clone https://github.com/streamlio/pulsar-java-tutorial
+```
+
+Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java).
+The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker. You can also use the client agent's IP address instead.
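+
+For example, a quick way to make the substitution in both files (a sketch using GNU sed; on macOS, use `sed -i ''`):
+
+```bash
+$ sed -i 's|pulsar://localhost:6650|pulsar://a1.dcos:6650|g' \
+  src/main/java/tutorial/ConsumerTutorial.java \
+  src/main/java/tutorial/ProducerTutorial.java
+```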
+
+Now, change the message number from 10 to 10000000 in the main method of [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) so that it produces more messages.
+
+Now compile the project code using the command below:
+
+```bash
+$ mvn clean package
+```
+
+### Run the consumer and producer
+
+Execute this command to run the consumer:
+
+```bash
+$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
+```
+
+Execute this command to run the producer:
+
+```bash
+$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
+```
+
+You can see the producer producing messages and the consumer consuming messages through the DC/OS GUI.
+
+![DC/OS pulsar producer](assets/dcos_producer.png)
+
+![DC/OS pulsar consumer](assets/dcos_consumer.png)
+
+### View Grafana metric output
+
+While the producer and consumer run, you can access running metrics information from Grafana.
+
+![DC/OS pulsar dashboard](assets/dcos_metrics.png)
+
+
+## Uninstall Pulsar
+
+You can shut down and uninstall the `pulsar` application from DC/OS at any time in the following two ways:
+
+1. Using the DC/OS GUI, you can choose **Delete** at the right end of the Pulsar group.
+
+    ![DC/OS pulsar uninstall](assets/dcos_uninstall.png)
+
+2. You can use the following command:
+
+    ```bash
+    $ dcos marathon group remove /pulsar
+    ```
diff --git a/site2/website/versioned_docs/version-2.6.0/deploy-monitoring.md b/site2/website/versioned_docs/version-2.6.0/deploy-monitoring.md
new file mode 100644
index 0000000..aee871c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/deploy-monitoring.md
@@ -0,0 +1,90 @@
+---
+id: version-2.6.0-deploy-monitoring
+title: Monitoring
+sidebar_label: Monitoring
+original_id: deploy-monitoring
+---
+
+You can monitor a Pulsar cluster in different ways, by exposing both metrics that relate to the usage of topics and metrics that reflect the overall health of the individual components of the cluster.
+
+## Collect metrics
+
+You can collect broker stats, ZooKeeper stats, and BookKeeper stats. 
+
+### Broker stats
+
+You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. Pulsar broker metrics are mainly of two types:
+
+* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below:
+
+  ```shell
+  bin/pulsar-admin broker-stats destinations
+  ```
+
+* *Broker metrics*, which contain the broker information and topic stats aggregated at the namespace level. You can fetch the broker metrics using the command below:
+
+  ```shell
+  bin/pulsar-admin broker-stats monitoring-metrics
+  ```
+
+All message rates are updated every minute.
+
+The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at:
+
+```shell
+http://$BROKER_ADDRESS:8080/metrics
+```
+
+### ZooKeeper stats
+
+The local ZooKeeper and configuration store servers and clients that are shipped with Pulsar are instrumented to expose detailed stats through Prometheus as well.
+
+```shell
+http://$LOCAL_ZK_SERVER:8000/metrics
+http://$GLOBAL_ZK_SERVER:8001/metrics
+```
+
+The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can change the default ports of local ZooKeeper and the configuration store by specifying the `stats_server_port` system property.
+
+### BookKeeper stats
+
+For BookKeeper, you can configure the stats framework by changing the `statsProviderClass` in
+`conf/bookkeeper.conf`.
+
+The default BookKeeper configuration, which is included with the Pulsar distribution, enables the Prometheus exporter.
+
+```shell
+http://$BOOKIE_ADDRESS:8000/metrics
+```
+
+The default port for bookies is `8000` (instead of `8080`). You can change the port by configuring `prometheusStatsHttpPort` in `conf/bookkeeper.conf`.
+
+## Configure Prometheus
+
+You can use Prometheus to collect and store the metrics data. For details, refer to [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).
+
+When you run Pulsar on bare metal, you can provide the list of nodes that need to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is automatically set up with the [provided](deploy-kubernetes.md) instructions.
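+
+For a bare-metal deployment, a minimal `prometheus.yml` scrape configuration might look like the sketch below. The hostnames are placeholders to be replaced with your own nodes; the ports follow the defaults described above.
+
+```yaml
+scrape_configs:
+  - job_name: 'pulsar-brokers'
+    metrics_path: /metrics
+    static_configs:
+      - targets: ['broker-1.example.com:8080', 'broker-2.example.com:8080']
+  - job_name: 'bookies'
+    metrics_path: /metrics
+    static_configs:
+      - targets: ['bookie-1.example.com:8000']
+  - job_name: 'zookeeper'
+    metrics_path: /metrics
+    static_configs:
+      - targets: ['zk-1.example.com:8000']
+```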
+
+## Dashboards
+
+When you collect time series statistics, the major challenge is to make sure the number of dimensions attached to the data does not explode.
+
+For that reason, you only need to collect time series of metrics aggregated at the namespace level.
+
+### Pulsar per-topic dashboard
+
+The per-topic dashboard instructions are available at [Dashboard](administration-dashboard.md).
+
+### Grafana
+
+You can use Grafana to easily create dashboards driven by the data stored in Prometheus.
+
+When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. The image ships with the principal Pulsar dashboards.
+
+Enter the command below to use the dashboard manually:
+
+```shell
+docker run -p3000:3000 \
+        -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \
+        apachepulsar/pulsar-grafana:latest
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.6.0/developing-binary-protocol.md
new file mode 100644
index 0000000..1ddb4f7
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/developing-binary-protocol.md
@@ -0,0 +1,556 @@
+---
+id: version-2.6.0-develop-binary-protocol
+title: Pulsar binary protocol specification
+sidebar_label: Binary protocol
+original_id: develop-binary-protocol
+---
+
+Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency.
+
+Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below.
+
+> ### Connection sharing
+> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction.
+
+All commands associated with Pulsar's protocol are contained in a
+[`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand.
+
+## Framing
+
+Since protobuf doesn't provide any sort of message framing, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB.
+
+The Pulsar protocol allows for two types of commands:
+
+1. **Simple commands** that do not carry a message payload.
+2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers.
+
+> Message payloads are passed in raw format rather than protobuf format for efficiency reasons.
+
+### Simple commands
+
+Simple (payload-free) commands have this basic structure:
+
+| Component   | Description                                                                             | Size (in bytes) |
+|:------------|:----------------------------------------------------------------------------------------|:----------------|
+| totalSize   | The size of the frame, counting everything that comes after it (in bytes)               | 4               |
+| commandSize | The size of the protobuf-serialized command                                             | 4               |
+| message     | The protobuf message serialized in a raw binary format (rather than in protobuf format) |                 |
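+
+As an illustration, a simple command frame can be assembled as in the following sketch. This is not the client implementation, just the layout made concrete; `serializedCommand` stands for the protobuf-serialized `BaseCommand` bytes.
+
+```java
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class SimpleFrame {
+    // Frame layout: [totalSize][commandSize][command]
+    static byte[] frame(byte[] serializedCommand) throws IOException {
+        ByteArrayOutputStream buf = new ByteArrayOutputStream();
+        DataOutputStream out = new DataOutputStream(buf);
+        // totalSize counts everything after itself: the 4-byte commandSize
+        // field plus the command bytes; DataOutputStream writes big-endian.
+        out.writeInt(4 + serializedCommand.length);
+        out.writeInt(serializedCommand.length);
+        out.write(serializedCommand);
+        return buf.toByteArray();
+    }
+}
+```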
+
+### Payload commands
+
+Payload commands have this basic structure:
+
+| Component    | Description                                                                                 | Size (in bytes) |
+|:-------------|:--------------------------------------------------------------------------------------------|:----------------|
+| totalSize    | The size of the frame, counting everything that comes after it (in bytes)                   | 4               |
+| commandSize  | The size of the protobuf-serialized command                                                 | 4               |
+| message      | The protobuf message serialized in a raw binary format (rather than in protobuf format)     |                 |
+| magicNumber  | A 2-byte byte array (`0x0e01`) identifying the current format                               | 2               |
+| checksum     | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4               |
+| metadataSize | The size of the message [metadata](#message-metadata)                                       | 4               |
+| metadata     | The message [metadata](#message-metadata) stored as a binary protobuf message               |                 |
+| payload      | Anything left in the frame is considered the payload and can include any sequence of bytes  |                 |
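+
+The checksummed region and the size bookkeeping can be sketched similarly. The sketch below assumes Java 9+ for `java.util.zip.CRC32C`; `command`, `metadata`, and `payload` stand for the already-serialized byte arrays.
+
+```java
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.zip.CRC32C;
+
+public class PayloadFrame {
+    // Frame layout:
+    // [totalSize][commandSize][command][magicNumber][checksum][metadataSize][metadata][payload]
+    static byte[] frame(byte[] command, byte[] metadata, byte[] payload) throws IOException {
+        // The CRC32-C checksum covers everything after the checksum field:
+        // metadataSize, metadata, and payload.
+        ByteArrayOutputStream tailBuf = new ByteArrayOutputStream();
+        DataOutputStream tailOut = new DataOutputStream(tailBuf);
+        tailOut.writeInt(metadata.length);
+        tailOut.write(metadata);
+        tailOut.write(payload);
+        byte[] tail = tailBuf.toByteArray();
+
+        CRC32C crc = new CRC32C();
+        crc.update(tail, 0, tail.length);
+
+        ByteArrayOutputStream buf = new ByteArrayOutputStream();
+        DataOutputStream out = new DataOutputStream(buf);
+        out.writeInt(4 + command.length + 2 + 4 + tail.length); // totalSize
+        out.writeInt(command.length);
+        out.write(command);
+        out.writeShort(0x0e01);             // magic number identifying the format
+        out.writeInt((int) crc.getValue()); // CRC32-C of the tail
+        out.write(tail);
+        return buf.toByteArray();
+    }
+}
+```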
+
+## Message metadata
+
+Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer.
+
+| Field                                | Description                                                                                                                                                                                                                                               |
+|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `producer_name`                      | The name of the producer that published the message                                                                                                                                                                                         |
+| `sequence_id`                        | The sequence ID of the message, assigned by producer                                                                                                                                                                                        |
+| `publish_time`                       | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC)                                                                                                                                                    |
+| `properties`                         | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
+| `replicated_from` *(optional)*       | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published                                                                                                             |
+| `partition_key` *(optional)*         | While publishing on a partition topic, if the key is present, the hash of the key is used to determine which partition to choose                                                                                                                          |
+| `compression` *(optional)*           | Signals that payload has been compressed and with which compression library                                                                                                                                                                               |
+| `uncompressed_size` *(optional)*     | If compression is used, the producer must fill the uncompressed size field with the original payload size                                                                                                                                                 |
+| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch                                                                                                                   |
+
+### Batch messages
+
+When using batch messages, the payload contains a list of entries, each with its own individual metadata, defined by the `SingleMessageMetadata` object.
+
+
+For a single batch, the payload format will look like this:
+
+
+| Field         | Description                                                 |
+|:--------------|:------------------------------------------------------------|
+| metadataSizeN | The size of the single message metadata serialized Protobuf |
+| metadataN     | Single message metadata                                     |
+| payloadN      | Message payload passed by application                       |
+
+Each metadata entry looks like this:
+
+| Field                      | Description                                             |
+|:---------------------------|:--------------------------------------------------------|
+| properties                 | Application-defined properties                          |
+| partition key *(optional)* | Key to indicate the hashing to a particular partition   |
+| payload_size               | Size of the payload for the single message in the batch |
+
+When compression is enabled, the whole batch will be compressed at once.
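+
+Conceptually, a batch payload is assembled by concatenating the entries, as in this sketch. It is illustrative only; `metadatas.get(i)` stands for the serialized `SingleMessageMetadata` of entry i.
+
+```java
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.List;
+
+public class BatchPayload {
+    // Concatenate [metadataSizeN][metadataN][payloadN] for every entry.
+    static byte[] build(List<byte[]> metadatas, List<byte[]> payloads) throws IOException {
+        ByteArrayOutputStream buf = new ByteArrayOutputStream();
+        DataOutputStream out = new DataOutputStream(buf);
+        for (int i = 0; i < payloads.size(); i++) {
+            byte[] metadata = metadatas.get(i); // serialized SingleMessageMetadata
+            out.writeInt(metadata.length);
+            out.write(metadata);
+            out.write(payloads.get(i));
+        }
+        // This whole buffer becomes the frame payload; with compression
+        // enabled, it is compressed at once as described above.
+        return buf.toByteArray();
+    }
+}
+```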
+
+## Interactions
+
+### Connection establishment
+
+After opening a TCP connection to a broker, typically on port 6650, the client
+is responsible for initiating the session.
+
+![Connect interaction](assets/binary-protocol-connect.png)
+
+After receiving a `Connected` response from the broker, the client can
+consider the connection ready to use. Alternatively, if the broker cannot
+validate the client's authentication, it replies with an `Error` command and
+closes the TCP connection.
+
+Example:
+
+```protobuf
+message CommandConnect {
+  "client_version" : "Pulsar-Client-Java-v1.15.2",
+  "auth_method_name" : "my-authentication-plugin",
+  "auth_data" : "my-auth-data",
+  "protocol_version" : 6
+}
+```
+
+Fields:
+ * `client_version` → String based identifier. Format is not enforced
+ * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
+   enabled
+ * `auth_data` → *(optional)* Plugin specific authentication data
+ * `protocol_version` → Indicates the protocol version supported by the
+   client. Broker will not send commands introduced in newer revisions of the
+   protocol. Broker might be enforcing a minimum version
+
+```protobuf
+message CommandConnected {
+  "server_version" : "Pulsar-Broker-v1.15.2",
+  "protocol_version" : 6
+}
+```
+
+Fields:
+ * `server_version` → String identifier of broker version
+ * `protocol_version` → Protocol version supported by the broker. Client
+   must not attempt to send commands introduced in newer revisions of the
+   protocol
+
+### Keep Alive
+
+To identify prolonged network partitions between clients and brokers, or cases
+in which a machine crashes without interrupting the TCP connection on the remote
+end (e.g. power outage, kernel panic, hard reboot), we have introduced a
+mechanism to probe for the availability status of the remote peer.
+
+Both clients and brokers send `Ping` commands periodically, and they close the
+socket if a `Pong` response is not received within a timeout (the default used
+by the broker is 60 seconds).
+
+A valid implementation of a Pulsar client is not required to send the `Ping`
+probe, though it is required to promptly reply after receiving one from the
+broker in order to prevent the remote side from forcibly closing the TCP connection.
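+
+The probing logic can be sketched generically as follows. This is an illustration of the mechanism only, not Pulsar's implementation; `sendPing` and `closeSocket` are hypothetical callbacks wired to the underlying connection.
+
+```java
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+public class KeepAlive {
+    private final ScheduledExecutorService timer =
+            Executors.newSingleThreadScheduledExecutor();
+    private volatile boolean pongReceived = true;
+
+    // Call this when a Pong command arrives on the connection.
+    void onPong() { pongReceived = true; }
+
+    // Probe the peer every `intervalSeconds`; close the socket if no Pong
+    // arrived since the previous Ping.
+    void start(Runnable sendPing, Runnable closeSocket, long intervalSeconds) {
+        timer.scheduleAtFixedRate(() -> {
+            if (!pongReceived) {
+                closeSocket.run(); // peer unresponsive within the timeout
+                timer.shutdown();
+                return;
+            }
+            pongReceived = false;
+            sendPing.run();
+        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
+    }
+}
+```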
+
+
+### Producer
+
+In order to send messages, a client needs to establish a producer. When creating
+a producer, the broker will first verify that this particular client is
+authorized to publish on the topic.
+
+Once the client gets confirmation of the producer creation, it can publish
+messages to the broker, referring to the producer id negotiated before.
+
+![Producer interaction](assets/binary-protocol-producer.png)
+
+##### Command Producer
+
+```protobuf
+message CommandProducer {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "producer_id" : 1,
+  "request_id" : 1
+}
+```
+
+Parameters:
+ * `topic` → Complete name of the topic on which to create the producer
+ * `producer_id` → Client generated producer identifier. Needs to be unique
+    within the same connection
+ * `request_id` → Identifier for this request. Used to match the response with
+    the originating request. Needs to be unique within the same connection
+ * `producer_name` → *(optional)* If a producer name is specified, the name will
+    be used, otherwise the broker will generate a unique name. Generated
+    producer name is guaranteed to be globally unique. Implementations are
+    expected to let the broker generate a new producer name when the producer
+    is initially created, then reuse it when recreating the producer after
+    reconnections.
+
+The broker will reply with either `ProducerSuccess` or `Error` commands.
+
+##### Command ProducerSuccess
+
+```protobuf
+message CommandProducerSuccess {
+  "request_id" :  1,
+  "producer_name" : "generated-unique-producer-name"
+}
+```
+
+Parameters:
+ * `request_id` → Original id of the `CreateProducer` request
+ * `producer_name` → Generated globally unique producer name or the name
+    specified by the client, if any.
+
+##### Command Send
+
+Command `Send` is used to publish a new message within the context of an
+already existing producer. This command is used in a frame that includes the
+command as well as the message payload, whose complete format is specified in
+the [payload commands](#payload-commands) section.
+
+```protobuf
+message CommandSend {
+  "producer_id" : 1,
+  "sequence_id" : 0,
+  "num_messages" : 1
+}
+```
+
+Parameters:
+ * `producer_id` → id of an existing producer
+ * `sequence_id` → each message has an associated sequence id which is expected
+   to be implemented with a counter starting at 0. The `SendReceipt` that
+   acknowledges the effective publishing of a message will refer to it by
+   its sequence id.
+ * `num_messages` → *(optional)* Used when publishing a batch of messages at
+   once.
+
+##### Command SendReceipt
+
+After a message has been persisted on the configured number of replicas, the
+broker will send the acknowledgment receipt to the producer.
+
+
+```protobuf
+message CommandSendReceipt {
+  "producer_id" : 1,
+  "sequence_id" : 0,
+  "message_id" : {
+    "ledgerId" : 123,
+    "entryId" : 456
+  }
+}
+```
+
+Parameters:
+ * `producer_id` → id of producer originating the send request
+ * `sequence_id` → sequence id of the published message
+ * `message_id` → message id assigned by the system to the published message,
+   unique within a single cluster. The message id is composed of 2 longs,
+   `ledgerId` and `entryId`, reflecting that this unique id is assigned when
+   the message is appended to a BookKeeper ledger
+
+
+##### Command CloseProducer
+
+**Note**: *This command can be sent by either producer or broker*.
+
+When receiving a `CloseProducer` command, the broker will stop accepting any
+more messages for the producer, wait until all pending messages are persisted
+and then reply `Success` to the client.
+
+The broker can send a `CloseProducer` command to the client when it's performing
+a graceful failover (e.g. the broker is being restarted, or the topic is being
+unloaded by the load balancer to be transferred to a different broker).
+
+When receiving the `CloseProducer`, the client is expected to go through the
+service discovery lookup again and recreate the producer. The TCP
+connection is not affected.
+
+### Consumer
+
+A consumer is used to attach to a subscription and consume messages from it.
+After every reconnection, a client needs to subscribe to the topic. If the
+subscription does not already exist, a new one is created.
+
+![Consumer](assets/binary-protocol-consumer.png)
+
+#### Flow control
+
+After the consumer is ready, the client needs to *give permission* to the
+broker to push messages. This is done with the `Flow` command.
+
+A `Flow` command gives additional *permits* to send messages to the consumer.
+A typical consumer implementation will use a queue to accumulate these messages
+before the application is ready to consume them.
+
+After the application has dequeued half of the messages in the queue, the
+consumer sends permits to the broker to ask for more messages (equal to half
+of the queue size).
+
+For example, if the queue size is 1000 and the consumer has consumed 500 messages
+from the queue, then the consumer sends the broker permits for 500 more messages,
+as in the sketch below.
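+
+The following is a minimal sketch of this permit accounting. It is illustrative only; `sendFlow` stands in for issuing a `Flow` command with the given number of permits.
+
+```java
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.function.IntConsumer;
+
+public class FlowControl {
+    private final BlockingQueue<byte[]> incoming;
+    private final int queueSize;
+    private final IntConsumer sendFlow; // issues a Flow command with N permits
+    private int dequeuedSinceLastFlow = 0;
+
+    FlowControl(int queueSize, IntConsumer sendFlow) {
+        this.queueSize = queueSize;
+        this.incoming = new LinkedBlockingQueue<>(queueSize);
+        this.sendFlow = sendFlow;
+        sendFlow.accept(queueSize); // initial permits fill the whole queue
+    }
+
+    // The connection handler offers broker-pushed messages into the queue.
+    void onMessage(byte[] msg) throws InterruptedException {
+        incoming.put(msg);
+    }
+
+    // Called by the application to consume the next message.
+    byte[] receive() throws InterruptedException {
+        byte[] msg = incoming.take();
+        // Once half of the queue has been drained, ask the broker for
+        // that many more messages.
+        if (++dequeuedSinceLastFlow >= queueSize / 2) {
+            sendFlow.accept(dequeuedSinceLastFlow);
+            dequeuedSinceLastFlow = 0;
+        }
+        return msg;
+    }
+}
+```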
+
+##### Command Subscribe
+
+```protobuf
+message CommandSubscribe {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "subscription" : "my-subscription-name",
+  "subType" : "Exclusive",
+  "consumer_id" : 1,
+  "request_id" : 1
+}
+```
+
+Parameters:
+ * `topic` → Complete name of the topic on which to create the consumer
+ * `subscription` → Subscription name
+ * `subType` → Subscription type: Exclusive, Shared, Failover
+ * `consumer_id` → Client generated consumer identifier. Needs to be unique
+    within the same connection
+ * `request_id` → Identifier for this request. Used to match the response with
+    the originating request. Needs to be unique within the same connection
+ * `consumer_name` → *(optional)* Clients can specify a consumer name. This
+    name can be used to track a particular consumer in the stats. Also, in
+    Failover subscription type, the name is used to decide which consumer is
+    elected as *master* (the one receiving messages): consumers are sorted by
+    their consumer name and the first one is elected master.
+
+##### Command Flow
+
+```protobuf
+message CommandFlow {
+  "consumer_id" : 1,
+  "messagePermits" : 1000
+}
+```
+
+Parameters:
+* `consumer_id` → Id of an already established consumer
+* `messagePermits` → Number of additional permits to grant to the broker for
+    pushing more messages
+
+##### Command Message
+
+Command `Message` is used by the broker to push messages to an existing consumer,
+within the limits of the given permits.
+
+
+This command is used in a frame that includes the message payload as well, for
+which the complete format is specified in the [payload commands](#payload-commands)
+section.
+
+```protobuf
+message CommandMessage {
+  "consumer_id" : 1,
+  "message_id" : {
+    "ledgerId" : 123,
+    "entryId" : 456
+  }
+}
+```
+
+
+##### Command Ack
+
+An `Ack` is used to signal to the broker that a given message has been
+successfully processed by the application and can be discarded by the broker.
+
+In addition, the broker will also maintain the consumer position based on the
+acknowledged messages.
+
+```protobuf
+message CommandAck {
+  "consumer_id" : 1,
+  "ack_type" : "Individual",
+  "message_id" : {
+    "ledgerId" : 123,
+    "entryId" : 456
+  }
+}
+```
+
+Parameters:
+ * `consumer_id` → Id of an already established consumer
+ * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
+ * `message_id` → Id of the message to acknowledge
+ * `validation_error` → *(optional)* Indicates that the consumer has discarded
+   the messages due to: `UncompressedSizeCorruption`,
+   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
+
+##### Command CloseConsumer
+
+***Note***: *This command can be sent by either the consumer or the broker*.
+
+This command behaves the same as [`CloseProducer`](#command-closeproducer).
+
+##### Command RedeliverUnacknowledgedMessages
+
+A consumer can ask the broker to redeliver some or all of the pending messages
+that were pushed to that particular consumer and not yet acknowledged.
+
+The protobuf object accepts a list of message ids that the consumer wants to
+be redelivered. If the list is empty, the broker will redeliver all the
+pending messages.
+
+On redelivery, messages can be sent to the same consumer or, in the case of a
+shared subscription, spread across all available consumers.
+
+
+##### Command ReachedEndOfTopic
+
+This is sent by a broker to a particular consumer whenever the topic
+has been "terminated" and all the messages on the subscription have been
+acknowledged.
+
+The client should use this command to notify the application that no more
+messages are coming from the consumer.
+
+##### Command ConsumerStats
+
+This command is sent by the client to retrieve Subscriber and Consumer level
+stats from the broker.
+Parameters:
+ * `request_id` → Id of the request, used to correlate the request 
+      and the response.
+ * `consumer_id` → Id of an already established consumer.
+
+##### Command ConsumerStatsResponse
+
+This is the broker's response to a `ConsumerStats` request from the client.
+It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
+If the `error_code` or the `error_message` field is set, it indicates that the request has failed.
+
+##### Command Unsubscribe
+
+This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
+Parameters:
+ * `request_id` → Id of the request.
+ * `consumer_id` → Id of an already established consumer which needs to unsubscribe.
+
+
+## Service discovery
+
+### Topic lookup
+
+Topic lookup needs to be performed each time a client needs to create or
+reconnect a producer or a consumer. Lookup is used to discover which particular
+broker is serving the topic we are about to use.
+
+Lookup can be done with a REST call as described in the
+[admin API](admin-api-persistent-topics.md#lookup-of-topic)
+docs.
+
+Since Pulsar 1.16, it is also possible to perform the lookup within the binary
+protocol.
+
+For the sake of example, let's assume we have a service discovery component
+running at `pulsar://broker.example.com:6650`
+
+Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
+`pulsar://broker-2.example.com:6650`, ...
+
+A client can use a connection to the discovery service host to issue a
+`LookupTopic` command. The response can either be a broker hostname to
+connect to, or a broker hostname against which to retry the lookup.
+
+The `LookupTopic` command has to be used in a connection that has already
+gone through the `Connect` / `Connected` initial handshake.
+
+![Topic lookup](assets/binary-protocol-topic-lookup.png)
+
+```protobuf
+message CommandLookupTopic {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "request_id" : 1,
+  "authoritative" : false
+}
+```
+
+Fields:
+ * `topic` → Topic name to lookup
+ * `request_id` → Id of the request that will be passed with its response
+ * `authoritative` → The initial lookup request should use false. When following a
+   redirect response, the client should pass the same value contained in the
+   response
+
+##### LookupTopicResponse
+
+Example of response with successful lookup:
+
+```protobuf
+message CommandLookupTopicResponse {
+  "request_id" : 1,
+  "response" : "Connect",
+  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
+  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
+  "authoritative" : true
+}
+```
+
+Example of lookup response with redirection:
+
+```protobuf
+message CommandLookupTopicResponse {
+  "request_id" : 1,
+  "response" : "Redirect",
+  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
+  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
+  "authoritative" : true
+}
+```
+
+In this second case, we need to reissue the `LookupTopic` command request
+to `broker-2.example.com` and this broker will be able to give a definitive
+answer to the lookup request.
+
+### Partitioned topics discovery
+
+Partitioned topics metadata discovery is used to find out if a topic is a
+"partitioned topic" and how many partitions were set up.
+
+If the topic is marked as "partitioned", the client is expected to create
+multiple producers or consumers, one for each partition, using the `partition-X`
+suffix.
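+
+For illustration, a sketch of deriving the per-partition topic names follows, assuming the `-partition-N` naming convention on the base topic name that this suffix refers to:
+
+```java
+import java.util.ArrayList;
+import java.util.List;
+
+public class PartitionNames {
+    // Derive the individual partition topic names for a partitioned topic.
+    static List<String> partitionTopics(String topic, int partitions) {
+        List<String> names = new ArrayList<>(partitions);
+        for (int i = 0; i < partitions; i++) {
+            names.add(topic + "-partition-" + i);
+        }
+        return names;
+    }
+}
+```
+
+For example, `partitionTopics("persistent://my-property/my-cluster/my-namespace/my-topic", 32)` yields the names `...-partition-0` through `...-partition-31`, one per producer or consumer.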
+
+This information only needs to be retrieved the first time a producer or
+consumer is created. There is no need to do this after reconnections.
+
+The discovery of partitioned topic metadata works very similarly to topic
+lookup. The client sends a request to the service discovery address, and the
+response contains the actual metadata.
+
+##### Command PartitionedTopicMetadata
+
+```protobuf
+message CommandPartitionedTopicMetadata {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "request_id" : 1
+}
+```
+
+Fields:
+ * `topic` → the topic for which to check the partitions metadata
+ * `request_id` → Id of the request that will be passed with its response
+
+
+##### Command PartitionedTopicMetadataResponse
+
+Example of response with metadata:
+
+```protobuf
+message CommandPartitionedTopicMetadataResponse {
+  "request_id" : 1,
+  "response" : "Success",
+  "partitions" : 32
+}
+```
+
+## Protobuf interface
+
+All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website/versioned_docs/version-2.6.0/developing-cpp.md b/site2/website/versioned_docs/version-2.6.0/developing-cpp.md
new file mode 100644
index 0000000..bec31d5
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/developing-cpp.md
@@ -0,0 +1,101 @@
+---
+id: version-2.6.0-develop-cpp
+title: Building Pulsar C++ client
+sidebar_label: Building Pulsar C++ client
+original_id: develop-cpp
+---
+
+## Supported platforms
+
+The Pulsar C++ client has been successfully tested on **MacOS** and **Linux**.
+
+## System requirements
+
+You need to have the following installed to use the C++ client:
+
+* [CMake](https://cmake.org/)
+* [Boost](http://www.boost.org/)
+* [Protocol Buffers](https://developers.google.com/protocol-buffers/) 2.6
+* [Log4CXX](https://logging.apache.org/log4cxx)
+* [libcurl](https://curl.haxx.se/libcurl/)
+* [Google Test](https://github.com/google/googletest)
+* [JsonCpp](https://github.com/open-source-parsers/jsoncpp)
+
+## Compilation
+
+There are separate compilation instructions for [MacOS](#macos) and [Linux](#linux). For both systems, start by cloning the Pulsar repository:
+
+```shell
+$ git clone https://github.com/apache/pulsar
+```
+
+### Linux
+
+First, install all of the necessary dependencies:
+
+```shell
+$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \
+  libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev
+```
+
+Then compile and install [Google Test](https://github.com/google/googletest):
+
+```shell
+# libgtest-dev version is 1.8.0 or above
+$ cd /usr/src/googletest
+$ sudo cmake .
+$ sudo make
+$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/
+
+# less than 1.18.0
+$ cd /usr/src/gtest
+$ sudo cmake .
+$ sudo make
+$ sudo cp libgtest.a /usr/lib
+
+$ cd /usr/src/gmock
+$ sudo cmake .
+$ sudo make
+$ sudo cp libgmock.a /usr/lib
+```
+
+Finally, compile the Pulsar client library for C++ inside the Pulsar repo:
+
+```shell
+$ cd pulsar-client-cpp
+$ cmake .
+$ make
+```
+
+The resulting files, `libpulsar.so` and `libpulsar.a`, will be placed in the `lib` folder of the repo while two tools, `perfProducer` and `perfConsumer`, will be placed in the `perf` directory.
+
+### MacOS
+
+First, install all of the necessary dependencies:
+
+```shell
+# OpenSSL installation
+$ brew install openssl
+$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/
+$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/
+
+# Protocol Buffers installation
+$ brew tap homebrew/versions
+$ brew install protobuf260
+$ brew install boost
+$ brew install log4cxx
+
+# Google Test installation
+$ git clone https://github.com/google/googletest.git
+$ cd googletest
+$ cmake .
+$ make install
+```
+
+Then compile the Pulsar client library in the repo that you cloned:
+
+```shell
+$ cd pulsar-client-cpp
+$ cmake .
+$ make
+```
diff --git a/site2/website/versioned_docs/version-2.6.0/developing-tools.md b/site2/website/versioned_docs/version-2.6.0/developing-tools.md
new file mode 100644
index 0000000..82a49f1
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.6.0/developing-tools.md
@@ -0,0 +1,106 @@
+---
+id: version-2.6.0-develop-tools
+title: Simulation tools
+sidebar_label: Simulation tools
+original_id: develop-tools
+---
+
+It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
+handle the load. The load simulation controller, the load simulation client, and the broker monitor were created to
+make it easier to create this load and observe its effects on the managers.
+
+## Simulation Client
+The simulation client is a machine that creates and subscribes to topics with configurable message rates and sizes.
+Because simulating a large load sometimes requires multiple client machines, the user does not interact
+with the simulation client directly, but instead delegates requests to the simulation controller, which then
+sends signals to clients to start incurring load. The client implementation is in the class
+`org.apache.pulsar.testclient.LoadSimulationClient`.
+
+### Usage
+To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
+
+```
+pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
+```
+
+The client will then be ready to receive controller commands.
+
+## Simulation Controller
+The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
+topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class
+`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface for sending
+commands.
+
+### Usage
... 10298 lines suppressed ...