Posted to commits@pulsar.apache.org by zh...@apache.org on 2020/05/19 11:06:04 UTC

[pulsar] branch master updated: [release][website] Update 2.5.2 website (#6986)

This is an automated email from the ASF dual-hosted git repository.

zhaijia pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new 25f726d  [release][website] Update 2.5.2 website (#6986)
25f726d is described below

commit 25f726d817ebce2287791a8bb3d23e1b5da0e82a
Author: Jia Zhai <zh...@apache.org>
AuthorDate: Tue May 19 19:05:54 2020 +0800

    [release][website] Update 2.5.2 website (#6986)
    
    * add versioned docs
    
    * add new version 2.5.2
    
    * change following guangning's comments
---
 site2/tools/build-site.sh                          |    5 +
 site2/website/releases.json                        |    1 +
 .../versioned_docs/version-2.5.2/adaptors-kafka.md |  264 +++
 .../admin-api-non-partitioned-topics.md            |  160 ++
 .../version-2.5.2/admin-api-partitioned-topics.md  |  377 ++++
 .../version-2.5.2/admin-api-persistent-topics.md   |  673 ++++++
 .../version-2.5.2/admin-api-schemas.md             |    7 +
 .../version-2.5.2/administration-dashboard.md      |   60 +
 .../version-2.5.2/administration-geo.md            |  158 ++
 .../version-2.5.2/administration-load-balance.md   |  182 ++
 .../version-2.5.2/administration-proxy.md          |  105 +
 .../version-2.5.2/administration-pulsar-manager.md |  133 ++
 .../version-2.5.2/administration-stats.md          |   64 +
 .../version-2.5.2/administration-upgrade.md        |  151 ++
 .../version-2.5.2/administration-zk-bk.md          |  322 +++
 .../version-2.5.2/client-libraries-cpp.md          |  215 ++
 .../version-2.5.2/client-libraries-go.md           |  493 +++++
 .../version-2.5.2/client-libraries-java.md         |  809 +++++++
 .../version-2.5.2/client-libraries-node.md         |  402 ++++
 .../version-2.5.2/client-libraries-python.md       |  249 +++
 .../version-2.5.2/concepts-clients.md              |   82 +
 .../version-2.5.2/concepts-messaging.md            |  445 ++++
 .../version-2.5.2/concepts-overview.md             |   31 +
 .../version-2.5.2/concepts-tiered-storage.md       |   18 +
 .../version-2.5.2/cookbooks-deduplication.md       |  121 ++
 .../version-2.5.2/cookbooks-retention-expiry.md    |  291 +++
 .../version-2.5.2/cookbooks-tiered-storage.md      |  296 +++
 .../versioned_docs/version-2.5.2/deploy-aws.md     |  224 ++
 .../deploy-bare-metal-multi-cluster.md             |  426 ++++
 .../version-2.5.2/deploy-bare-metal.md             |  461 ++++
 .../versioned_docs/version-2.5.2/deploy-dcos.md    |  183 ++
 .../version-2.5.2/deploy-kubernetes.md             |  394 ++++
 .../version-2.5.2/deploy-monitoring.md             |   90 +
 .../versioned_docs/version-2.5.2/functions-cli.md  |  198 ++
 .../version-2.5.2/functions-debug.md               |  455 ++++
 .../version-2.5.2/functions-develop.md             |  983 +++++++++
 .../version-2.5.2/functions-metrics.md             |    7 +
 .../version-2.5.2/functions-overview.md            |  200 ++
 .../version-2.5.2/functions-runtime.md             |  173 ++
 .../version-2.5.2/functions-worker.md              |  242 +++
 .../version-2.5.2/getting-started-clients.md       |   58 +
 .../version-2.5.2/getting-started-docker.md        |  161 ++
 .../version-2.5.2/getting-started-standalone.md    |  226 ++
 .../version-2.5.2/io-aerospike-sink.md             |   26 +
 .../version-2.5.2/io-canal-source.md               |  203 ++
 .../version-2.5.2/io-cassandra-sink.md             |   54 +
 .../version-2.5.2/io-cdc-debezium.md               |  475 ++++
 .../website/versioned_docs/version-2.5.2/io-cdc.md |   26 +
 .../website/versioned_docs/version-2.5.2/io-cli.md |  601 ++++++
 .../versioned_docs/version-2.5.2/io-connectors.md  |  189 ++
 .../version-2.5.2/io-debezium-source.md            |  350 +++
 .../versioned_docs/version-2.5.2/io-develop.md     |  230 ++
 .../version-2.5.2/io-elasticsearch-sink.md         |   50 +
 .../versioned_docs/version-2.5.2/io-file-source.md |  138 ++
 .../versioned_docs/version-2.5.2/io-flume-sink.md  |   52 +
 .../version-2.5.2/io-flume-source.md               |   52 +
 .../versioned_docs/version-2.5.2/io-hbase-sink.md  |   64 +
 .../versioned_docs/version-2.5.2/io-hdfs2-sink.md  |   54 +
 .../versioned_docs/version-2.5.2/io-hdfs3-sink.md  |   54 +
 .../version-2.5.2/io-influxdb-sink.md              |   62 +
 .../versioned_docs/version-2.5.2/io-jdbc-sink.md   |   57 +
 .../versioned_docs/version-2.5.2/io-kafka-sink.md  |   69 +
 .../version-2.5.2/io-kafka-source.md               |  171 ++
 .../version-2.5.2/io-kinesis-sink.md               |   73 +
 .../version-2.5.2/io-kinesis-source.md             |   77 +
 .../versioned_docs/version-2.5.2/io-mongo-sink.md  |   52 +
 .../version-2.5.2/io-netty-source.md               |  205 ++
 .../versioned_docs/version-2.5.2/io-overview.md    |  136 ++
 .../versioned_docs/version-2.5.2/io-quickstart.md  |  824 +++++++
 .../version-2.5.2/io-rabbitmq-sink.md              |   81 +
 .../version-2.5.2/io-rabbitmq-source.md            |   78 +
 .../versioned_docs/version-2.5.2/io-redis-sink.md  |   70 +
 .../versioned_docs/version-2.5.2/io-solr-sink.md   |   61 +
 .../version-2.5.2/io-twitter-source.md             |   28 +
 .../versioned_docs/version-2.5.2/io-twitter.md     |    7 +
 .../version-2.5.2/reference-configuration.md       |  515 +++++
 .../version-2.5.2/reference-connector-admin.md     |    7 +
 .../version-2.5.2/reference-metrics.md             |  248 +++
 .../version-2.5.2/reference-pulsar-admin.md        | 2276 ++++++++++++++++++++
 .../schema-evolution-compatibility.md              |  953 ++++++++
 .../version-2.5.2/schema-get-started.md            |   91 +
 .../versioned_docs/version-2.5.2/schema-manage.md  |  809 +++++++
 .../version-2.5.2/schema-understand.md             |  592 +++++
 .../version-2.5.2/security-bouncy-castle.md        |  122 ++
 .../version-2.5.2/security-encryption.md           |  176 ++
 .../version-2.5.2/security-extending.md            |  194 ++
 .../versioned_docs/version-2.5.2/security-jwt.md   |  241 +++
 .../version-2.5.2/security-overview.md             |   31 +
 .../version-2.5.2/security-tls-authentication.md   |  177 ++
 .../version-2.5.2/security-tls-keystore.md         |  287 +++
 .../version-2.5.2/security-tls-transport.md        |  245 +++
 .../version-2.5.2/security-token-admin.md          |  159 ++
 .../version-2.5.2/sql-deployment-configurations.md |  159 ++
 .../version-2.5.2/sql-getting-started.md           |  144 ++
 .../versioned_docs/version-2.5.2/sql-overview.md   |   18 +
 .../versioned_docs/version-2.5.2/sql-rest-api.md   |  186 ++
 .../versioned_sidebars/version-2.5.2-sidebars.json |  147 ++
 site2/website/versions.json                        |    1 +
 98 files changed, 23042 insertions(+)

diff --git a/site2/tools/build-site.sh b/site2/tools/build-site.sh
index df995eb..de6c6ab 100755
--- a/site2/tools/build-site.sh
+++ b/site2/tools/build-site.sh
@@ -36,26 +36,31 @@ cp versioned_docs/version-2.4.2/functions-develop.md translated_docs/zh-CN/versi
 cp versioned_docs/version-2.5.0/functions-develop.md translated_docs/zh-CN/version-2.5.0/functions-develop.md
 cp versioned_docs/version-2.5.0/io-overview.md translated_docs/zh-CN/version-2.5.0/io-overview.md
 cp versioned_docs/version-2.5.1/functions-develop.md translated_docs/zh-CN/version-2.5.1/functions-develop.md
+cp versioned_docs/version-2.5.2/functions-develop.md translated_docs/zh-CN/version-2.5.2/functions-develop.md
 
 cp versioned_docs/version-2.4.2/functions-develop.md translated_docs/ja/version-2.4.2/functions-develop.md
 cp versioned_docs/version-2.5.0/functions-develop.md translated_docs/ja/version-2.5.0/functions-develop.md
 cp versioned_docs/version-2.5.0/io-overview.md translated_docs/ja/version-2.5.0/io-overview.md
 cp versioned_docs/version-2.5.1/functions-develop.md translated_docs/ja/version-2.5.1/functions-develop.md
+cp versioned_docs/version-2.5.2/functions-develop.md translated_docs/ja/version-2.5.2/functions-develop.md
 
 cp versioned_docs/version-2.4.2/functions-develop.md translated_docs/fr/version-2.4.2/functions-develop.md
 cp versioned_docs/version-2.5.0/functions-develop.md translated_docs/fr/version-2.5.0/functions-develop.md
 cp versioned_docs/version-2.5.0/io-overview.md translated_docs/fr/version-2.5.0/io-overview.md
 cp versioned_docs/version-2.5.1/functions-develop.md translated_docs/fr/version-2.5.1/functions-develop.md
+cp versioned_docs/version-2.5.2/functions-develop.md translated_docs/fr/version-2.5.2/functions-develop.md
 
 cp versioned_docs/version-2.4.2/functions-develop.md translated_docs/zh-TW/version-2.4.2/functions-develop.md
 cp versioned_docs/version-2.5.0/functions-develop.md translated_docs/zh-TW/version-2.5.0/functions-develop.md
 cp versioned_docs/version-2.5.0/io-overview.md translated_docs/zh-TW/version-2.5.0/io-overview.md
 cp versioned_docs/version-2.5.1/functions-develop.md translated_docs/zh-TW/version-2.5.1/functions-develop.md
+cp versioned_docs/version-2.5.2/functions-develop.md translated_docs/zh-TW/version-2.5.2/functions-develop.md
 
 cp versioned_docs/version-2.4.2/functions-develop.md translated_docs/ko/version-2.4.2/functions-develop.md
 cp versioned_docs/version-2.5.0/functions-develop.md translated_docs/ko/version-2.5.0/functions-develop.md
 cp versioned_docs/version-2.5.0/io-overview.md translated_docs/ko/version-2.5.0/io-overview.md
 cp versioned_docs/version-2.5.1/functions-develop.md translated_docs/ko/version-2.5.1/functions-develop.md
+cp versioned_docs/version-2.5.2/functions-develop.md translated_docs/ko/version-2.5.2/functions-develop.md
 
 yarn build
 
diff --git a/site2/website/releases.json b/site2/website/releases.json
index cce9fb5..49c49d3 100644
--- a/site2/website/releases.json
+++ b/site2/website/releases.json
@@ -1,4 +1,5 @@
 [
+  "2.5.2",
   "2.5.1",
   "2.5.0",
   "2.4.2",
diff --git a/site2/website/versioned_docs/version-2.5.2/adaptors-kafka.md b/site2/website/versioned_docs/version-2.5.2/adaptors-kafka.md
new file mode 100644
index 0000000..542ecf8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/adaptors-kafka.md
@@ -0,0 +1,264 @@
+---
+id: version-2.5.2-adaptors-kafka
+title: Pulsar adaptor for Apache Kafka
+sidebar_label: Kafka client wrapper
+original_id: adaptors-kafka
+---
+
+
+Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
+
+## Using the Pulsar Kafka compatibility wrapper
+
+In an existing application, replace the regular Kafka client dependency with the Pulsar Kafka wrapper. First, remove the following dependency from `pom.xml`:
+
+```xml
+<dependency>
+  <groupId>org.apache.kafka</groupId>
+  <artifactId>kafka-clients</artifactId>
+  <version>0.10.2.1</version>
+</dependency>
+```
+
+Then include this dependency for the Pulsar Kafka wrapper:
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client-kafka</artifactId>
+  <version>{{pulsar:version}}</version>
+</dependency>
+```
+
+With the new dependency, the existing code works without any changes. You only need to adjust the
+configuration so that producers and consumers point to a Pulsar service rather than to a Kafka
+cluster and use a particular Pulsar topic, as shown in the examples below.
+
+## Using the Pulsar Kafka compatibility wrapper together with the existing Kafka client
+
+When migrating from Kafka to Pulsar, the application might use the original Kafka client
+and the Pulsar Kafka wrapper together during the migration. In that case, consider using the
+unshaded Pulsar Kafka client wrapper.
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client-kafka-original</artifactId>
+  <version>{{pulsar:version}}</version>
+</dependency>
+```
+
+When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
+instead of `org.apache.kafka.clients.producer.KafkaProducer`, and consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer` instead of `org.apache.kafka.clients.consumer.KafkaConsumer`.
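+
+For example, a minimal sketch, assuming the Pulsar-prefixed classes expose the same `Properties`-based constructors as their Kafka counterparts:
+
+```java
+Properties props = new Properties();
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("key.serializer", IntegerSerializer.class.getName());
+props.put("value.serializer", StringSerializer.class.getName());
+
+// With the unshaded wrapper, the Pulsar-backed classes are referenced explicitly
+Producer<Integer, String> producer = new PulsarKafkaProducer<>(props);
+```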
+
+## Producer example
+
+```java
+import java.util.Properties;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.common.serialization.IntegerSerializer;
+import org.apache.kafka.common.serialization.StringSerializer;
+
+// assumes an slf4j Logger named "log" is in scope
+// Topic needs to be a regular Pulsar topic
+String topic = "persistent://public/default/my-topic";
+
+Properties props = new Properties();
+// Point to a Pulsar service
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+
+props.put("key.serializer", IntegerSerializer.class.getName());
+props.put("value.serializer", StringSerializer.class.getName());
+
+Producer<Integer, String> producer = new KafkaProducer<>(props);
+
+for (int i = 0; i < 10; i++) {
+    producer.send(new ProducerRecord<Integer, String>(topic, i, "hello-" + i));
+    log.info("Message {} sent successfully", i);
+}
+
+producer.close();
+```
+
+## Consumer example
+
+```java
+import java.util.Arrays;
+import java.util.Properties;
+
+import org.apache.kafka.clients.consumer.Consumer;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+import org.apache.kafka.common.serialization.IntegerDeserializer;
+import org.apache.kafka.common.serialization.StringDeserializer;
+
+// assumes an slf4j Logger named "log" is in scope
+String topic = "persistent://public/default/my-topic";
+
+Properties props = new Properties();
+// Point to a Pulsar service
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("group.id", "my-subscription-name");
+props.put("enable.auto.commit", "false");
+props.put("key.deserializer", IntegerDeserializer.class.getName());
+props.put("value.deserializer", StringDeserializer.class.getName());
+
+Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
+consumer.subscribe(Arrays.asList(topic));
+
+while (true) {
+    ConsumerRecords<Integer, String> records = consumer.poll(100);
+    records.forEach(record -> {
+        log.info("Received record: {}", record);
+    });
+
+    // Commit last offset
+    consumer.commitSync();
+}
+```
+
+## Complete Examples
+
+You can find the complete producer and consumer examples
+[here](https://github.com/apache/pulsar/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
+
+## Compatibility matrix
+
+Currently the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
+
+#### Producer
+
+APIs:
+
+| Producer Method                                                               | Supported | Notes                                                                    |
+|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
+| `Future<RecordMetadata> send(ProducerRecord<K, V> record)`                    | Yes       |                                                                          |
+| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes       |                                                                          |
+| `void flush()`                                                                | Yes       |                                                                          |
+| `List<PartitionInfo> partitionsFor(String topic)`                             | No        |                                                                          |
+| `Map<MetricName, ? extends Metric> metrics()`                                 | No        |                                                                          |
+| `void close()`                                                                | Yes       |                                                                          |
+| `void close(long timeout, TimeUnit unit)`                                     | Yes       |                                                                          |
+
+Properties:
+
+| Config property                         | Supported | Notes                                                                         |
+|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
+| `acks`                                  | Ignored   | Durability and quorum writes are configured at the namespace level            |
+| `auto.offset.reset`                     | Yes       | Defaults to `latest` if the user does not give a specific setting.            |
+| `batch.size`                            | Ignored   |                                                                               |
+| `bootstrap.servers`                     | Yes       |                                 |
+| `buffer.memory`                         | Ignored   |                                                                               |
+| `client.id`                             | Ignored   |                                                                               |
+| `compression.type`                      | Yes       | Allows `gzip` and `lz4`. No `snappy`.                                         |
+| `connections.max.idle.ms`               | Yes       | Supports up to 2,147,483,647,000 (Integer.MAX_VALUE * 1000) ms of idle time  |
+| `interceptor.classes`                   | Yes       |                                                                               |
+| `key.serializer`                        | Yes       |                                                                               |
+| `linger.ms`                             | Yes       | Controls the group commit time when batching messages                         |
+| `max.block.ms`                          | Ignored   |                                                                               |
+| `max.in.flight.requests.per.connection` | Ignored   | In Pulsar ordering is maintained even with multiple requests in flight        |
+| `max.request.size`                      | Ignored   |                                                                               |
+| `metric.reporters`                      | Ignored   |                                                                               |
+| `metrics.num.samples`                   | Ignored   |                                                                               |
+| `metrics.sample.window.ms`              | Ignored   |                                                                               |
+| `partitioner.class`                     | Yes       |                                                                               |
+| `receive.buffer.bytes`                  | Ignored   |                                                                               |
+| `reconnect.backoff.ms`                  | Ignored   |                                                                               |
+| `request.timeout.ms`                    | Ignored   |                                                                               |
+| `retries`                               | Ignored   | Pulsar client retries with exponential backoff until the send timeout expires. |
+| `send.buffer.bytes`                     | Ignored   |                                                                               |
+| `timeout.ms`                            | Yes       |                                                                               |
+| `value.serializer`                      | Yes       |                                                                               |
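+
+For example, the supported properties are set just as with the regular Kafka client; a sketch (values are illustrative):
+
+```java
+Properties props = new Properties();
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("key.serializer", IntegerSerializer.class.getName());
+props.put("value.serializer", StringSerializer.class.getName());
+props.put("linger.ms", "10");          // group commit time when batching messages
+props.put("compression.type", "lz4");  // gzip and lz4 are supported; snappy is not
+
+Producer<Integer, String> producer = new KafkaProducer<>(props);
+```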
+
+
+#### Consumer
+
+The following table lists consumer APIs.
+
+| Consumer Method                                                                                         | Supported | Notes |
+|:--------------------------------------------------------------------------------------------------------|:----------|:------|
+| `Set<TopicPartition> assignment()`                                                                      | No        |       |
+| `Set<String> subscription()`                                                                            | Yes       |       |
+| `void subscribe(Collection<String> topics)`                                                             | Yes       |       |
+| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)`                         | No        |       |
+| `void assign(Collection<TopicPartition> partitions)`                                                    | No        |       |
+| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)`                                   | No        |       |
+| `void unsubscribe()`                                                                                    | Yes       |       |
+| `ConsumerRecords<K, V> poll(long timeoutMillis)`                                                        | Yes       |       |
+| `void commitSync()`                                                                                     | Yes       |       |
+| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)`                                       | Yes       |       |
+| `void commitAsync()`                                                                                    | Yes       |       |
+| `void commitAsync(OffsetCommitCallback callback)`                                                       | Yes       |       |
+| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)`       | Yes       |       |
+| `void seek(TopicPartition partition, long offset)`                                                      | Yes       |       |
+| `void seekToBeginning(Collection<TopicPartition> partitions)`                                           | Yes       |       |
+| `void seekToEnd(Collection<TopicPartition> partitions)`                                                 | Yes       |       |
+| `long position(TopicPartition partition)`                                                               | Yes       |       |
+| `OffsetAndMetadata committed(TopicPartition partition)`                                                 | Yes       |       |
+| `Map<MetricName, ? extends Metric> metrics()`                                                           | No        |       |
+| `List<PartitionInfo> partitionsFor(String topic)`                                                       | No        |       |
+| `Map<String, List<PartitionInfo>> listTopics()`                                                         | No        |       |
+| `Set<TopicPartition> paused()`                                                                          | No        |       |
+| `void pause(Collection<TopicPartition> partitions)`                                                     | No        |       |
+| `void resume(Collection<TopicPartition> partitions)`                                                    | No        |       |
+| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No        |       |
+| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)`                     | No        |       |
+| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)`                           | No        |       |
+| `void close()`                                                                                          | Yes       |       |
+| `void close(long timeout, TimeUnit unit)`                                                               | Yes       |       |
+| `void wakeup()`                                                                                         | No        |       |
+
+Properties:
+
+| Config property                 | Supported | Notes                                                 |
+|:--------------------------------|:----------|:------------------------------------------------------|
+| `group.id`                      | Yes       | Maps to a Pulsar subscription name                    |
+| `max.poll.records`              | Yes       |                                                       |
+| `max.poll.interval.ms`          | Ignored   | Messages are "pushed" from the broker                 |
+| `session.timeout.ms`            | Ignored   |                                                       |
+| `heartbeat.interval.ms`         | Ignored   |                                                       |
+| `bootstrap.servers`             | Yes       | Needs to point to a single Pulsar service URL         |
+| `enable.auto.commit`            | Yes       |                                                       |
+| `auto.commit.interval.ms`       | Ignored   | With auto-commit, acks are sent immediately to broker |
+| `partition.assignment.strategy` | Ignored   |                                                       |
+| `auto.offset.reset`             | Yes       | Only `earliest` and `latest` are supported.           |
+| `fetch.min.bytes`               | Ignored   |                                                       |
+| `fetch.max.bytes`               | Ignored   |                                                       |
+| `fetch.max.wait.ms`             | Ignored   |                                                       |
+| `interceptor.classes`           | Yes       |                                                       |
+| `metadata.max.age.ms`           | Ignored   |                                                       |
+| `max.partition.fetch.bytes`     | Ignored   |                                                       |
+| `send.buffer.bytes`             | Ignored   |                                                       |
+| `receive.buffer.bytes`          | Ignored   |                                                       |
+| `client.id`                     | Ignored   |                                                       |
+
+
+## Customize Pulsar configurations
+
+You can configure the Pulsar authentication provider directly from the Kafka properties.
+
+### Pulsar client properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-)          |         | The authentication provider class to use. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.|
+| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-)          |         | Map which represents parameters for the Authentication-Plugin. |
+| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-)          |         | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. |
+| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-)                       | `false` | Enable TLS transport encryption.                                                        |
+| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-)   |         | Path for the TLS trust certificate store.                                               |
+| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers.                                           |
+| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. |
+| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. |
+| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. |
+| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connections to each broker. |
+| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. |
+| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. |
+| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. |
+| [`pulsar.keepalive.interval.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep-alive interval for each client-broker connection.  |
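+
+For example, TLS transport and TLS authentication can be enabled through these properties. A sketch, in which the file paths are hypothetical placeholders:
+
+```java
+Properties props = new Properties();
+props.put("bootstrap.servers", "pulsar+ssl://localhost:6651");
+props.put("pulsar.use.tls", "true");
+props.put("pulsar.tls.trust.certs.file.path", "/path/to/ca.cert.pem");
+props.put("pulsar.authentication.class", "org.apache.pulsar.client.impl.auth.AuthenticationTls");
+props.put("pulsar.authentication.params.string", "tlsCertFile:/path/to/client.cert.pem,tlsKeyFile:/path/to/client.key-pk8.pem");
+```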
+
+
+### Pulsar producer properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. |
+| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) |  | Specify baseline for sequence ID of this producer. |
+| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the queue of messages pending an acknowledgment from the broker. |
+| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions.  |
+| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. |
+| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. |
+
+
+### Pulsar consumer properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. |
+| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. |
+| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum time that consumers group acknowledgments before sending them to the broker. |
+| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. |
+| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. |
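+
+For example, a consumer sketch that sets these Pulsar-specific properties alongside the regular Kafka ones (values are illustrative):
+
+```java
+Properties props = new Properties();
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("group.id", "my-subscription-name");
+props.put("key.deserializer", IntegerDeserializer.class.getName());
+props.put("value.deserializer", StringDeserializer.class.getName());
+props.put("pulsar.consumer.name", "my-consumer");
+props.put("pulsar.consumer.receiver.queue.size", "2000");
+
+Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
+```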
diff --git a/site2/website/versioned_docs/version-2.5.2/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.5.2/admin-api-non-partitioned-topics.md
new file mode 100644
index 0000000..1fa58be
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/admin-api-non-partitioned-topics.md
@@ -0,0 +1,160 @@
+---
+id: version-2.5.2-admin-api-non-partitioned-topics
+title: Managing non-partitioned topics
+sidebar_label: Non-Partitioned topics
+original_id: admin-api-non-partitioned-topics
+---
+
+
+You can use Pulsar's [admin API](admin-api-overview.md) to create and manage non-partitioned topics.
+
+In all of the instructions and commands below, the topic name structure is:
+
+```shell
+persistent://tenant/namespace/topic
+```
+
+## Non-Partitioned topics resources
+
+### Create
+
+Non-partitioned topics in Pulsar must be explicitly created. When creating a new non-partitioned topic you
+need to provide a name for the topic.
+
+> #### Note
+>
+> By default, topics are considered inactive 60 seconds after creation and are deleted automatically to prevent generating trash data.
+>
+> To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
+>
+> To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
+>
+> For more information about these two parameters, see [here](reference-configuration.md#broker).
+
+#### pulsar-admin
+
+You can create non-partitioned topics using the [`create`](reference-pulsar-admin.md#create-3)
+command and specifying the topic name as an argument.
+Here's an example:
+
+```shell
+$ bin/pulsar-admin topics create \
+  persistent://my-tenant/my-namespace/my-topic
+```
+
+> #### Note
+>
+> You can create a non-partitioned topic whose name contains the suffix '-partition-' followed by a numeric value, such as
+> 'xyz-topic-partition-10', only if a partitioned topic with the same base name (in this case, 'xyz-topic') already exists and
+> has more partitions than that numeric value (in this case at least 11, since partition indexes start from 0). Otherwise, creation of such a topic fails.
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic|operation/createNonPartitionedTopic}
+
+#### Java
+
+```java
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().createNonPartitionedTopic(topicName);
+```
+
+### Delete
+
+#### pulsar-admin
+
+Non-partitioned topics can be deleted using the
+[`delete`](reference-pulsar-admin.md#delete-4) command, specifying the topic by name:
+
+```shell
+$ bin/pulsar-admin topics delete \
+  persistent://my-tenant/my-namespace/my-topic
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/persistent/:tenant/:namespace/:topic|operation/deleteTopic}
+
+#### Java
+
+```java
+admin.topics().delete(persistentTopic);
+```
+
+### List
+
+It provides a list of the topics that exist under a given namespace.
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin topics list tenant/namespace
+persistent://tenant/namespace/topic1
+persistent://tenant/namespace/topic2
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace|operation/getList}
+
+#### Java
+
+```java
+admin.topics().getList(namespace);
+```
+
+### Stats
+
+It shows the current statistics of a given topic.
+
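+A representative stats payload looks as follows; the values are illustrative, and the structure matches the partitioned-topic stats payload shown in the partitioned-topics guide:
+
+```json
+{
+  "msgRateIn": 4641.528542257553,
+  "msgThroughputIn": 44663039.74947473,
+  "msgRateOut": 0,
+  "msgThroughputOut": 0,
+  "averageMsgSize": 1232439.816728665,
+  "storageSize": 135532389160,
+  "publishers": [
+    {
+      "msgRateIn": 57.855383881403576,
+      "msgThroughputIn": 558994.7078932219,
+      "averageMsgSize": 613135,
+      "producerId": 0,
+      "producerName": null,
+      "address": null,
+      "connectedSince": null
+    }
+  ],
+  "subscriptions": {
+    "my-subscription": {
+      "msgRateOut": 0,
+      "msgThroughputOut": 0,
+      "msgBacklog": 116632,
+      "type": null,
+      "msgRateExpired": 36.98245516804671,
+      "consumers": []
+    }
+  },
+  "replication": {}
+}
+```
+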
+The following stats are available:
+
+|Stat|Description|
+|----|-----------|
+|msgRateIn|The sum of all local and replication publishers’ publish rates in messages per second|
+|msgThroughputIn|Same as msgRateIn but in bytes per second instead of messages per second|
+|msgRateOut|The sum of all local and replication consumers’ dispatch rates in messages per second|
+|msgThroughputOut|Same as msgRateOut but in bytes per second instead of messages per second|
+|averageMsgSize|Average message size, in bytes, from this publisher within the last interval|
+|storageSize|The sum of the ledgers’ storage size for this topic|
+|publishers|The list of all local publishers into the topic. There can be anywhere from zero to thousands.|
+|producerId|Internal identifier for this producer on this topic|
+|producerName|Internal identifier for this producer, generated by the client library|
+|address|IP address and source port for the connection of this producer|
+|connectedSince|Timestamp this producer was created or last reconnected|
+|subscriptions|The list of all local subscriptions to the topic|
+|my-subscription|The name of this subscription (client defined)|
+|msgBacklog|The count of messages in backlog for this subscription|
+|msgBacklogNoDelayed|The count of messages in backlog without delayed messages for this subscription|
+|type|This subscription type|
+|msgRateExpired|The rate at which messages were discarded instead of dispatched from this subscription due to TTL|
+|consumers|The list of connected consumers for this subscription|
+|consumerName|Internal identifier for this consumer, generated by the client library|
+|availablePermits|The number of messages this consumer has space for in the client library’s listen queue. A value of 0 means the client library’s queue is full and receive() isn’t being called. A nonzero value means this consumer is ready to be dispatched messages.|
+|replication|This section gives the stats for cross-colo replication of this topic|
+|replicationBacklog|The outbound replication backlog in messages|
+|connected|Whether the outbound replicator is connected|
+|replicationDelayInSeconds|How long the oldest message has been waiting to be sent through the connection, if connected is true|
+|inboundConnection|The IP and port of the broker in the remote cluster’s publisher connection to this broker|
+|inboundConnectedSince|The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.|
+
+#### pulsar-admin
+
+The stats for the topic and its connected producers and consumers can be fetched by using the
+[`stats`](reference-pulsar-admin.md#stats) command, specifying the topic by name:
+
+```shell
+$ pulsar-admin topics stats \
+  persistent://test-tenant/namespace/topic \
+  --get-precise-backlog
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/stats|operation/getStats}
+
+#### Java
+
+```java
+admin.topics().getStats(persistentTopic, false /* is precise backlog */);
+```
diff --git a/site2/website/versioned_docs/version-2.5.2/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.5.2/admin-api-partitioned-topics.md
new file mode 100644
index 0000000..1955645
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/admin-api-partitioned-topics.md
@@ -0,0 +1,377 @@
+---
+id: version-2.5.2-admin-api-partitioned-topics
+title: Managing partitioned topics
+sidebar_label: Partitioned topics
+original_id: admin-api-partitioned-topics
+---
+
+
+You can use Pulsar's [admin API](admin-api-overview.md) to create and manage partitioned topics.
+
+In all of the instructions and commands below, the topic name structure is:
+
+```shell
+persistent://tenant/namespace/topic
+```
+
+## Partitioned topics resources
+
+### Create
+
+Partitioned topics in Pulsar must be explicitly created. When creating a new partitioned topic you
+need to provide a name for the topic as well as the desired number of partitions.
+
+> #### Note
+>
+> By default, topics are considered inactive 60 seconds after creation and are deleted automatically to prevent generating trash data.
+>
+> To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`.
+>
+> To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to your desired value.
+>
+> For more information about these two parameters, see [here](reference-configuration.md#broker).
+
+#### pulsar-admin
+
+You can create partitioned topics using the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic)
+command and specifying the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag.
+
+Here's an example:
+
+```shell
+$ bin/pulsar-admin topics create-partitioned-topic \
+  persistent://my-tenant/my-namespace/my-topic \
+  --partitions 4
+```
+
+> #### Note
+>
+> If a non-partitioned topic with the suffix '-partition-' followed by a numeric value, such as
+> 'xyz-topic-partition-10', already exists, then you cannot create a partitioned topic with the name 'xyz-topic', because the partitions
+> of the partitioned topic could override the existing non-partitioned topic. You have to delete that non-partitioned
+> topic first, and then create the partitioned topic.
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic}
+
+#### Java
+
+```java
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+int numPartitions = 4;
+admin.persistentTopics().createPartitionedTopic(topicName, numPartitions);
+```
+
+### Create missed partitions
+
+This command tries to create the partitions for a partitioned topic. Partitions of a partitioned topic have to be created explicitly;
+this command can be used to repair missing partitions when topic auto-creation is disabled.
+
+#### pulsar-admin
+
+You can create missed partitions using the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions)
+command and specifying the topic name as an argument.
+
+Here's an example:
+
+```shell
+$ bin/pulsar-admin topics create-missed-partitions \
+  persistent://my-tenant/my-namespace/my-topic
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic|operation/createMissedPartitions}
+
+#### Java
+
+```java
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().createMissedPartitions(topicName);
+```
+
+### Get metadata
+
+Partitioned topics have metadata associated with them that you can fetch as a JSON object.
+The following metadata fields are currently available:
+
+Field | Meaning
+:-----|:-------
+`partitions` | The number of partitions into which the topic is divided
+
+#### pulsar-admin
+
+You can see the number of partitions in a partitioned topic using the
+[`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata)
+subcommand. Here's an example:
+
+```shell
+$ pulsar-admin topics get-partitioned-topic-metadata \
+  persistent://my-tenant/my-namespace/my-topic
+{
+  "partitions": 4
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata}
+
+#### Java
+
+```java
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getPartitionedTopicMetadata(topicName);
+```
+
+### Update
+
+You can update the number of partitions on an existing partitioned topic
+*if* the topic is non-global. To update, the new number of partitions must be greater
+than the existing number.
+
+Decrementing the number of partitions would require deleting the topic, which is not supported in Pulsar.
+
+Producers and consumers that were already created for the partitioned topic will automatically find the newly created partitions.
+
+#### pulsar-admin
+
+Partitioned topics can be updated using the
+[`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command.
+
+```shell
+$ pulsar-admin topics update-partitioned-topic \
+  persistent://my-tenant/my-namespace/my-topic \
+  --partitions 8
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:cluster/:namespace/:destination/partitions|operation/updatePartitionedTopic}
+
+#### Java
+
+```java
+admin.persistentTopics().updatePartitionedTopic(persistentTopic, numPartitions);
+```
+
+### Delete
+
+#### pulsar-admin
+
+Partitioned topics can be deleted using the
+[`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, specifying the topic by name:
+
+```shell
+$ bin/pulsar-admin topics delete-partitioned-topic \
+  persistent://my-tenant/my-namespace/my-topic
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/persistent/:topic/:namespace/:destination/partitions|operation/deletePartitionedTopic}
+
+#### Java
+
+```java
+admin.persistentTopics().delete(persistentTopic);
+```
+
+### List
+
+It provides a list of the persistent topics that exist under a given namespace.
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin topics list tenant/namespace
+persistent://tenant/namespace/topic1
+persistent://tenant/namespace/topic2
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace|operation/getPartitionedTopicList}
+
+#### Java
+
+```java
+admin.persistentTopics().getList(namespace);
+```
+
+### Stats
+
+It shows the current statistics of a given partitioned topic. Here's an example payload:
+
+```json
+{
+  "msgRateIn": 4641.528542257553,
+  "msgThroughputIn": 44663039.74947473,
+  "msgRateOut": 0,
+  "msgThroughputOut": 0,
+  "averageMsgSize": 1232439.816728665,
+  "storageSize": 135532389160,
+  "publishers": [
+    {
+      "msgRateIn": 57.855383881403576,
+      "msgThroughputIn": 558994.7078932219,
+      "averageMsgSize": 613135,
+      "producerId": 0,
+      "producerName": null,
+      "address": null,
+      "connectedSince": null
+    }
+  ],
+  "subscriptions": {
+    "my-topic_subscription": {
+      "msgRateOut": 0,
+      "msgThroughputOut": 0,
+      "msgBacklog": 116632,
+      "type": null,
+      "msgRateExpired": 36.98245516804671,
+      "consumers": []
+    }
+  },
+  "replication": {}
+}
+```
+
+The following stats are available:
+
+|Stat|Description|
+|----|-----------|
+|msgRateIn|The sum of all local and replication publishers’ publish rates in messages per second|
+|msgThroughputIn|Same as msgRateIn but in bytes per second instead of messages per second|
+|msgRateOut|The sum of all local and replication consumers’ dispatch rates in messages per second|
+|msgThroughputOut|Same as msgRateOut but in bytes per second instead of messages per second|
+|averageMsgSize|Average message size, in bytes, from this publisher within the last interval|
+|storageSize|The sum of the ledgers’ storage size for this topic|
+|publishers|The list of all local publishers into the topic. There can be anywhere from zero to thousands.|
+|producerId|Internal identifier for this producer on this topic|
+|producerName|Internal identifier for this producer, generated by the client library|
+|address|IP address and source port for the connection of this producer|
+|connectedSince|Timestamp this producer was created or last reconnected|
+|subscriptions|The list of all local subscriptions to the topic|
+|my-subscription|The name of this subscription (client defined)|
+|msgBacklog|The count of messages in backlog for this subscription|
+|msgBacklogNoDelayed|The count of messages in backlog without delayed messages for this subscription|
+|type|This subscription type|
+|msgRateExpired|The rate at which messages were discarded instead of dispatched from this subscription due to TTL|
+|consumers|The list of connected consumers for this subscription|
+|consumerName|Internal identifier for this consumer, generated by the client library|
+|availablePermits|The number of messages this consumer has space for in the client library’s listen queue. A value of 0 means the client library’s queue is full and receive() isn’t being called. A nonzero value means this consumer is ready to be dispatched messages.|
+|replication|This section gives the stats for cross-colo replication of this topic|
+|replicationBacklog|The outbound replication backlog in messages|
+|connected|Whether the outbound replicator is connected|
+|replicationDelayInSeconds|How long the oldest message has been waiting to be sent through the connection, if connected is true|
+|inboundConnection|The IP and port of the broker in the remote cluster’s publisher connection to this broker|
+|inboundConnectedSince|The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.|
+
+#### pulsar-admin
+
+The stats for the partitioned topic and its connected producers and consumers can be fetched by using the
+[`partitioned-stats`](reference-pulsar-admin.md#partitioned-stats) command, specifying the topic by name:
+
+```shell
+$ pulsar-admin topics partitioned-stats \
+  persistent://test-tenant/namespace/topic \
+  --per-partition
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats}
+
+#### Java
+
+```java
+admin.topics().getPartitionedStats(persistentTopic, true /* per partition */, false /* is precise backlog */);
+```
+
+### Internal stats
+
+It shows detailed statistics of a topic.
+
+|Stat|Description|
+|----|-----------|
+|entriesAddedCounter|Messages published since this broker loaded this topic|
+|numberOfEntries|Total number of messages being tracked|
+|totalSize|Total storage size in bytes of all messages|
+|currentLedgerEntries|Count of messages written to the ledger currently open for writing|
+|currentLedgerSize|Size in bytes of messages written to ledger currently open for writing|
+|lastLedgerCreatedTimestamp|Time when last ledger was created|
+|lastLedgerCreationFailureTimestamp|Time when the last ledger creation failed|
+|waitingCursorsCount|How many cursors are caught up and waiting for a new message to be published|
+|pendingAddEntriesCount|How many messages have outstanding (asynchronous) write requests awaiting completion|
+|lastConfirmedEntry|The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger has been opened or is currently being opened but has no entries written yet.|
+|state|The state of the cursor ledger. Open means we have a cursor ledger for saving updates of the markDeletePosition.|
+|ledgers|The ordered list of all ledgers for this topic holding its messages|
+|cursors|The list of all cursors on this topic. There will be one for every subscription you saw in the topic stats.|
+|markDeletePosition|The ack position: the last message the subscriber acknowledged receiving|
+|readPosition|The latest position of subscriber for reading message|
+|waitingReadOp|This is true when the subscription has read the latest message published to the topic and is waiting on new messages to be published.|
+|pendingReadOps|The counter for how many outstanding read requests to the BookKeepers we have in progress|
+|messagesConsumedCounter|Number of messages this cursor has acked since this broker loaded this topic|
+|cursorLedger|The ledger being used to persistently store the current markDeletePosition|
+|cursorLedgerLastEntry|The last entryid used to persistently store the current markDeletePosition|
+|individuallyDeletedMessages|If Acks are being done out of order, shows the ranges of messages Acked between the markDeletePosition and the read-position|
+|lastLedgerSwitchTimestamp|The last time the cursor ledger was rolled over|
+
+
+```json
+{
+  "entriesAddedCounter": 20449518,
+  "numberOfEntries": 3233,
+  "totalSize": 331482,
+  "currentLedgerEntries": 3233,
+  "currentLedgerSize": 331482,
+  "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825",
+  "lastLedgerCreationFailureTimestamp": null,
+  "waitingCursorsCount": 1,
+  "pendingAddEntriesCount": 0,
+  "lastConfirmedEntry": "324711539:3232",
+  "state": "LedgerOpened",
+  "ledgers": [
+    {
+      "ledgerId": 324711539,
+      "entries": 0,
+      "size": 0
+    }
+  ],
+  "cursors": {
+    "my-subscription": {
+      "markDeletePosition": "324711539:3133",
+      "readPosition": "324711539:3233",
+      "waitingReadOp": true,
+      "pendingReadOps": 0,
+      "messagesConsumedCounter": 20449501,
+      "cursorLedger": 324702104,
+      "cursorLedgerLastEntry": 21,
+      "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]",
+      "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313",
+      "state": "Open"
+    }
+  }
+}
+```
+
+#### pulsar-admin
+
+The internal stats for the partitioned topic can be fetched by using the
+[`stats-internal`](reference-pulsar-admin.md#stats-internal) command, specifying the topic by name:
+
+```shell
+$ pulsar-admin topics stats-internal \
+  persistent://test-tenant/namespace/topic
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/internalStats|operation/getInternalStats}
+
+#### Java
+
+```java
+admin.persistentTopics().getInternalStats(persistentTopic);
+```
diff --git a/site2/website/versioned_docs/version-2.5.2/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.5.2/admin-api-persistent-topics.md
new file mode 100644
index 0000000..fea98fd
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/admin-api-persistent-topics.md
@@ -0,0 +1,673 @@
+---
+id: version-2.5.2-admin-api-persistent-topics
+title: Managing persistent topics
+sidebar_label: Persistent topics
+original_id: admin-api-persistent-topics
+---
+
+A persistent topic is a logical endpoint for publishing and consuming messages. Producers publish messages to the topic, and consumers subscribe to the topic to consume the messages published to it.
+
+In all of the instructions and commands below, the topic name structure is:
+
+
+```shell
+persistent://tenant/namespace/topic
+```
+
+## Persistent topics resources
+
+### List of topics
+
+It provides a list of the persistent topics that exist under a given namespace.
+
+#### pulsar-admin
+
+The list of topics can be fetched using the [`list`](../../reference/CliTools#list) command.
+
+```shell
+$ pulsar-admin persistent list \
+  my-tenant/my-namespace
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace|operation/getList}
+
+#### Java
+
+```java
+String namespace = "my-tenant/my-namespace";
+admin.persistentTopics().getList(namespace);
+```
+
+### Grant permission
+
+It grants a client role permissions to perform specific actions on a given topic.
+
+#### pulsar-admin
+
+Permissions can be granted using the [`grant-permission`](../../reference/CliTools#grant-permission) command.
+
+```shell
+$ pulsar-admin persistent grant-permission \
+  --actions produce,consume --role application1 \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String role = "test-role";
+Set<AuthAction> actions  = Sets.newHashSet(AuthAction.produce, AuthAction.consume);
+admin.persistentTopics().grantPermission(topic, role, actions);
+```
+
+### Get permission
+
+Permissions can be fetched using the [`permissions`](../../reference/CliTools#permissions) command.
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin persistent permissions \
+  persistent://test-tenant/ns1/tp1
+
+{
+    "application1": [
+        "consume",
+        "produce"
+    ]
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getPermissions(topic);
+```
+
+### Revoke permission
+
+It revokes a permission that was granted to a client role.
+
+#### pulsar-admin
+
+Permissions can be revoked using the [`revoke-permission`](../../reference/CliTools#revoke-permission) command.
+
+```shell
+$ pulsar-admin persistent revoke-permission \
+  --role application1 \
+  persistent://test-tenant/ns1/tp1
+
+{
+  "application1": [
+    "consume",
+    "produce"
+  ]
+}
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/persistent/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String role = "test-role";
+admin.persistentTopics().revokePermissions(topic, role);
+```
+
+### Delete topic
+
+It deletes a topic. A topic cannot be deleted if there are any active subscriptions or producers connected to it.
+
+#### pulsar-admin
+
+A topic can be deleted using the [`delete`](../../reference/CliTools#delete) command.
+
+```shell
+$ pulsar-admin persistent delete \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/persistent/:tenant/:namespace/:topic|operation/deleteTopic}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().delete(topic);
+```
+
+### Unload topic
+
+It unloads a topic.
+
+#### pulsar-admin
+
+A topic can be unloaded using the [`unload`](../../reference/CliTools#unload) command.
+
+```shell
+$ pulsar-admin persistent unload \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/unload|operation/unloadTopic}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().unload(topic);
+```
+
+### Get stats
+
+It shows current statistics of a given non-partitioned topic.
+
+  -   **msgRateIn**: The sum of all local and replication publishers' publish rates in messages per second
+
+  -   **msgThroughputIn**: Same as above, but in bytes per second instead of messages per second
+
+  -   **msgRateOut**: The sum of all local and replication consumers' dispatch rates in messages per second
+
+  -   **msgThroughputOut**: Same as above, but in bytes per second instead of messages per second
+
+  -   **averageMsgSize**: The average size in bytes of messages published within the last interval
+
+  -   **storageSize**: The sum of the ledgers' storage size for this topic. Space used to store the messages for the topic
+
+  -   **publishers**: The list of all local publishers into the topic. There can be zero or thousands
+
+      -   **msgRateIn**: Total rate of messages published by this publisher in messages per second 
+
+      -   **msgThroughputIn**: Total throughput of the messages published by this publisher in bytes per second
+
+      -   **averageMsgSize**: Average message size in bytes from this publisher within the last interval
+
+      -   **producerId**: Internal identifier for this producer on this topic
+
+      -   **producerName**: Internal identifier for this producer, generated by the client library
+
+      -   **address**: IP address and source port for the connection of this producer
+
+      -   **connectedSince**: Timestamp this producer was created or last reconnected
+
+  -   **subscriptions**: The list of all local subscriptions to the topic
+
+      -   **my-subscription**: The name of this subscription (client defined)
+
+          -   **msgRateOut**: Total rate of messages delivered on this subscription (msg/s)
+
+          -   **msgThroughputOut**: Total throughput delivered on this subscription (bytes/s)
+
+          -   **msgBacklog**: Number of messages in the subscription backlog
+
+          -   **type**: This subscription type
+
+          -   **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL
+          
+          -   **lastExpireTimestamp**: The timestamp of the last message expiration execution
+
+          -   **lastConsumedFlowTimestamp**: The timestamp when the last flow command was received
+
+          -   **lastConsumedTimestamp**: The latest consume timestamp among all consumers
+
+          -   **lastAckedTimestamp**: The latest acknowledgment timestamp among all consumers
+
+          -   **consumers**: The list of connected consumers for this subscription
+
+                -   **msgRateOut**: Total rate of messages delivered to the consumer (msg/s)
+
+                -   **msgThroughputOut**: Total throughput delivered to the consumer (bytes/s)
+
+                -   **consumerName**: Internal identifier for this consumer, generated by the client library
+
+                -   **availablePermits**: The number of messages this consumer has space for in the client library's listen queue. A value of 0 means the client library's queue is full and receive() isn't being called. A nonzero value means this consumer is ready to be dispatched messages.
+
+                -   **unackedMessages**: Number of unacknowledged messages for the consumer
+
+                -   **blockedConsumerOnUnackedMsgs**: Flag indicating whether the consumer is blocked because it reached the threshold of unacknowledged messages
+                
+                -   **lastConsumedTimestamp**: The timestamp when the consumer last consumed a message
+
+                -   **lastAckedTimestamp**: The timestamp when the consumer last acknowledged a message
+
+  -   **replication**: This section gives the stats for cross-colo replication of this topic
+
+      -   **msgRateIn**: Total rate of messages received from the remote cluster (msg/s)
+
+      -   **msgThroughputIn**: Total throughput received from the remote cluster (bytes/s)
+
+      -   **msgRateOut**: Total rate of messages delivered to the replication-subscriber (msg/s)
+
+      -   **msgThroughputOut**: Total throughput delivered to the replication-subscriber (bytes/s)
+
+      -   **msgRateExpired**: Total rate of messages expired (msg/s)
+
+      -   **replicationBacklog**: Number of messages pending to be replicated to remote cluster
+
+      -   **connected**: Whether the outbound replicator is connected
+
+      -   **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is true
+
+      -   **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker
+
+      -   **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.
+
+      -   **outboundConnection**: Address of outbound replication connection
+
+      -   **outboundConnectedSince**: Timestamp of establishing outbound connection
+
+```json
+{
+  "msgRateIn": 4641.528542257553,
+  "msgThroughputIn": 44663039.74947473,
+  "msgRateOut": 0,
+  "msgThroughputOut": 0,
+  "averageMsgSize": 1232439.816728665,
+  "storageSize": 135532389160,
+  "publishers": [
+    {
+      "msgRateIn": 57.855383881403576,
+      "msgThroughputIn": 558994.7078932219,
+      "averageMsgSize": 613135,
+      "producerId": 0,
+      "producerName": null,
+      "address": null,
+      "connectedSince": null
+    }
+  ],
+  "subscriptions": {
+    "my-topic_subscription": {
+      "msgRateOut": 0,
+      "msgThroughputOut": 0,
+      "msgBacklog": 116632,
+      "type": null,
+      "msgRateExpired": 36.98245516804671,
+      "consumers": []
+    }
+  },
+  "replication": {}
+}
+```
+
+#### pulsar-admin
+
+Topic stats can be fetched using the [`stats`](../../reference/CliTools#stats) command.
+
+```shell
+$ pulsar-admin persistent stats \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/stats|operation/getStats}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getStats(topic);
+```
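+
+The call returns a stats object whose fields match the descriptions above. As a minimal sketch (assuming the 2.5 Java admin API, where stats are exposed as public fields):
+
+```java
+import org.apache.pulsar.common.policies.data.TopicStats;
+
+TopicStats stats = admin.persistentTopics().getStats(topic);
+
+// For example, read the backlog of one subscription.
+long backlog = stats.subscriptions.get("my-subscription").msgBacklog;
+```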
+
+### Get internal stats
+
+It shows detailed statistics of a topic.
+
+  -   **entriesAddedCounter**: Messages published since this broker loaded this topic
+
+  -   **numberOfEntries**: Total number of messages being tracked
+
+  -   **totalSize**: Total storage size in bytes of all messages
+
+  -   **currentLedgerEntries**: Count of messages written to the ledger currently open for writing
+
+  -   **currentLedgerSize**: Size in bytes of messages written to ledger currently open for writing
+
+  -   **lastLedgerCreatedTimestamp**: Time when the last ledger was created
+
+  -   **lastLedgerCreationFailureTimestamp**: Time when the last ledger creation failed
+
+  -   **waitingCursorsCount**: How many cursors are "caught up" and waiting for a new message to be published
+
+  -   **pendingAddEntriesCount**: How many messages have (asynchronous) write requests whose completion we are waiting on
+
+  -   **lastConfirmedEntry**: The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger has been opened or is currently being opened but has no entries written yet.
+
+  -   **state**: The state of this ledger for writing. LedgerOpened means we have a ledger open for saving published messages.
+
+  -   **ledgers**: The ordered list of all ledgers for this topic holding its messages
+
+      -   **ledgerId**: Id of this ledger
+
+      -   **entries**: Total number of entries belonging to this ledger
+
+      -   **size**: Size of messages written to this ledger (in bytes)
+
+      -   **offloaded**: Whether this ledger is offloaded
+
+  -   **cursors**: The list of all cursors on this topic. There will be one for every subscription you saw in the topic stats.
+
+      -   **markDeletePosition**: All messages before the markDeletePosition have been acknowledged by the subscriber.
+
+      -   **readPosition**: The latest position of subscriber for reading message
+
+      -   **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting on new messages to be published.
+
+      -   **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers we have in progress
+
+      -   **messagesConsumedCounter**: Number of messages this cursor has acked since this broker loaded this topic
+
+      -   **cursorLedger**: The ledger being used to persistently store the current markDeletePosition
+
+      -   **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition
+
+      -   **individuallyDeletedMessages**: If Acks are being done out of order, shows the ranges of messages Acked between the markDeletePosition and the read-position
+
+      -   **lastLedgerSwitchTimestamp**: The last time the cursor ledger was rolled over
+
+      -   **state**: The state of the cursor ledger: Open means we have a cursor ledger for saving updates of the markDeletePosition.
+
+```json
+{
+    "entriesAddedCounter": 20449518,
+    "numberOfEntries": 3233,
+    "totalSize": 331482,
+    "currentLedgerEntries": 3233,
+    "currentLedgerSize": 331482,
+    "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825",
+    "lastLedgerCreationFailureTimestamp": null,
+    "waitingCursorsCount": 1,
+    "pendingAddEntriesCount": 0,
+    "lastConfirmedEntry": "324711539:3232",
+    "state": "LedgerOpened",
+    "ledgers": [
+        {
+            "ledgerId": 324711539,
+            "entries": 0,
+            "size": 0
+        }
+    ],
+    "cursors": {
+        "my-subscription": {
+            "markDeletePosition": "324711539:3133",
+            "readPosition": "324711539:3233",
+            "waitingReadOp": true,
+            "pendingReadOps": 0,
+            "messagesConsumedCounter": 20449501,
+            "cursorLedger": 324702104,
+            "cursorLedgerLastEntry": 21,
+            "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]",
+            "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313",
+            "state": "Open"
+        }
+    }
+}
+```
+
+
+#### pulsar-admin
+
+Topic internal stats can be fetched using the [`stats-internal`](../../reference/CliTools#stats-internal) command.
+
+```shell
+$ pulsar-admin persistent stats-internal \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/internalStats|operation/getInternalStats}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getInternalStats(topic);
+```
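+
+The call returns an internal-stats object whose fields match the descriptions above. As a minimal sketch (assuming the 2.5 Java admin API, where stats are exposed as public fields):
+
+```java
+import org.apache.pulsar.common.policies.data.PersistentTopicInternalStats;
+
+PersistentTopicInternalStats internalStats = admin.persistentTopics().getInternalStats(topic);
+
+// For example, read the mark-delete (ack) position of one cursor.
+String markDeletePosition = internalStats.cursors.get("my-subscription").markDeletePosition;
+```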
+
+### Peek messages
+
+It peeks N messages for a specific subscription of a given topic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent peek-messages \
+  --count 10 --subscription my-subscription \
+  persistent://test-tenant/ns1/tp1
+
+Message ID: 315674752:0
+Properties:  {  "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451"  }
+msg-payload
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+int numMessages = 1;
+admin.persistentTopics().peekMessages(topic, subName, numMessages);
+```
+
+### Skip messages
+
+It skips N messages for a specific subscription of a given topic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent skip \
+  --count 10 --subscription my-subscription \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+int numMessages = 1;
+admin.persistentTopics().skipMessages(topic, subName, numMessages);
+```
+
+### Skip all messages
+
+It skips all old messages for a specific subscription of a given topic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent skip-all \
+  --subscription my-subscription \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+admin.persistentTopics().skipAllMessages(topic, subName);
+```
+
+### Reset cursor
+
+It resets the cursor position of a subscription back to the position that was recorded X minutes ago. It essentially calculates the time and cursor position X minutes ago and resets the cursor to that position.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent reset-cursor \
+  --subscription my-subscription --time 10 \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+long timestamp = 2342343L;
+admin.persistentTopics().resetCursor(topic, subName, timestamp);
+```
+
+### Lookup of topic
+
+It locates the broker URL that is serving the given topic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent lookup \
+  persistent://test-tenant/ns1/tp1
+
+ "pulsar://broker1.org.com:4480"
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/lookup/v2/topic/persistent/:tenant/:namespace/:topic|/}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.lookups().lookupTopic(topic);
+```
+
+### Get bundle
+
+It gives the range of the bundle that contains the given topic.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent bundle-range \
+  persistent://test-tenant/ns1/tp1
+
+ "0x00000000_0xffffffff"
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|/}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.lookups().getBundleRange(topic);
+```
+
+
+### Get subscriptions
+
+It shows all subscription names for a given topic.
+
+#### pulsar-admin
+
+```shell
+$ pulsar-admin persistent subscriptions \
+  persistent://test-tenant/ns1/tp1
+
+ my-subscription
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getSubscriptions(topic);
+```
+
+### Unsubscribe
+
+It unsubscribes a subscription that is no longer processing messages.
+
+#### pulsar-admin
+
+
+```shell
+$ pulsar-admin persistent unsubscribe \
+  --subscription my-subscription \
+  persistent://test-tenant/ns1/tp1
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subscriptionName = "my-subscription";
+admin.persistentTopics().deleteSubscription(topic, subscriptionName);
+```
+
+### Last Message Id
+
+It gives the last committed message ID for a persistent topic. This feature is available in Pulsar 2.3.0 and later.
+
+```shell
+pulsar-admin topics last-message-id topic-name
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/persistent/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId}
+
+#### Java
+
+```java
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.persistentTopics().getLastMessageId(topic);
+```
diff --git a/site2/website/versioned_docs/version-2.5.2/admin-api-schemas.md b/site2/website/versioned_docs/version-2.5.2/admin-api-schemas.md
new file mode 100644
index 0000000..84c2da8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/admin-api-schemas.md
@@ -0,0 +1,7 @@
+---
+id: version-2.5.2-admin-api-schemas
+title: Managing Schemas
+sidebar_label: Schemas
+original_id: admin-api-schemas
+---
+
diff --git a/site2/website/versioned_docs/version-2.5.2/administration-dashboard.md b/site2/website/versioned_docs/version-2.5.2/administration-dashboard.md
new file mode 100644
index 0000000..e7cea9e
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/administration-dashboard.md
@@ -0,0 +1,60 @@
+---
+id: version-2.5.2-administration-dashboard
+title: The Pulsar dashboard
+sidebar_label: Dashboard
+original_id: administration-dashboard
+---
+
+The Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form.
+
+The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database.
+
+You can use the [Django](https://www.djangoproject.com) web app to render the collected data.
+
+## Install
+
+The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container.
+
+```shell
+$ SERVICE_URL=http://broker.example.com:8080/
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  apachepulsar/pulsar-dashboard:{{pulsar:version}}
+```
+
+You can find the {@inject: github:`Dockerfile`:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well:
+
+```shell
+$ docker build -t apachepulsar/pulsar-dashboard dashboard
+```
+
+If token authentication is enabled:
+> The provided token should have super-user access.
+```shell
+$ SERVICE_URL=http://broker.example.com:8080/
+$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  -e JWT_TOKEN=$JWT_TOKEN \
+  apachepulsar/pulsar-dashboard
+```
+ 
+You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from which it needs to pull the metrics. If you connect the dashboard to Pulsar running in standalone mode, the URL is `http://<broker-ip>:8080` by default, where `<broker-ip>` is the IP address or hostname of the machine running Pulsar standalone. The IP address or hostname should be accessible from the Docker instance running the dashboard.
+
+Once the Docker container runs, the web dashboard is accessible via `localhost` or whichever host Docker uses.
+
+> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container.
+
+If the Pulsar service runs in standalone mode on `localhost`, the `SERVICE_URL` has to
+be the IP of the machine.
+
+Similarly, because Pulsar standalone advertises itself as `localhost` by default, you need to
+explicitly set the advertised address to the host IP. For example:
+
+```shell
+$ bin/pulsar standalone --advertised-address 1.2.3.4
+```
+
+### Known issues
+
+Only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported as of now.
diff --git a/site2/website/versioned_docs/version-2.5.2/administration-geo.md b/site2/website/versioned_docs/version-2.5.2/administration-geo.md
new file mode 100644
index 0000000..e0c01a7
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/administration-geo.md
@@ -0,0 +1,158 @@
+---
+id: version-2.5.2-administration-geo
+title: Pulsar geo-replication
+sidebar_label: Geo-replication
+original_id: administration-geo
+---
+
+*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+
+## How geo-replication works
+
+The diagram below illustrates the process of geo-replication across Pulsar clusters:
+
+![Replication Diagram](assets/geo-replication.png)
+
+In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
+
+Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
+
+## Geo-replication and Pulsar properties
+
+You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
+
+Although geo-replication must be enabled between two clusters, it is actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
+
+* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
+* Configure that namespace to replicate across two or more provisioned clusters
+
+Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
+
+## Local persistence and forwarding
+
+When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the end-to-end delivery latency is defined by the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions.
+
+Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
+
+> #### Subscriptions are local to a cluster
+> While producers and consumers can publish to and consume from any cluster in a Pulsar instance, subscriptions are local to the clusters in which the subscriptions are created and cannot be transferred between clusters. If you do need to transfer a subscription, you need to create a new subscription in the desired cluster.
+
+In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
+
+All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
+
+## Configure replication
+
+As stated in the [Geo-replication and Pulsar properties](#geo-replication-and-pulsar-properties) section, geo-replication in Pulsar is enabled on a per-[tenant](reference-terminology.md#tenant) basis and managed at the namespace level.
+
+### Grant permissions to properties
+
+To replicate to a cluster, the tenant needs permission to use that cluster. You can grant this permission when you create the tenant, or grant it later.
+
+Specify all the intended clusters when you create a tenant:
+
+```shell
+$ bin/pulsar-admin tenants create my-tenant \
+  --admin-roles my-admin-role \
+  --allowed-clusters us-west,us-east,us-cent
+```
+
+To update permissions of an existing tenant, use `update` instead of `create`.
+
+### Enable geo-replication namespaces
+
+You can create a namespace with the following command sample.
+
+```shell
+$ bin/pulsar-admin namespaces create my-tenant/my-namespace
+```
+
+Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand:
+
+```shell
+$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \
+  --clusters us-west,us-east,us-cent
+```
+
+You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes.
+
+### Use topics with geo-replication
+
+Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace are replicated across clusters. Typically, each application uses the `serviceUrl` of the local cluster.
+
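+As a minimal sketch, a producer in one region simply connects to its local cluster and publishes to the replicated topic; the service URL below is a placeholder for your local cluster:
+
+```java
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+// Connect to the *local* cluster; the brokers replicate messages to the remote clusters.
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://us-west-broker.example.com:6650") // placeholder local service URL
+        .build();
+
+Producer<byte[]> producer = client.newProducer()
+        .topic("persistent://my-tenant/my-namespace/my-topic") // topic in the geo-replicated namespace
+        .create();
+
+producer.send("my-message".getBytes());
+```
+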
+#### Selective replication
+
+By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list.
+
+The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object:
+
+```java
+List<String> restrictReplicationTo = Arrays.asList(
+        "us-west",
+        "us-east"
+);
+
+Producer producer = client.newProducer()
+        .topic("some-topic")
+        .create();
+
+producer.newMessage()
+        .value("my-payload".getBytes())
+        .setReplicationClusters(restrictReplicationTo)
+        .send();
+```
+
+#### Topic stats
+
+Topic-specific statistics for geo-replication topics are available via the [`pulsar-admin`](reference-pulsar-admin.md) tool and {@inject: rest:REST:/} API:
+
+```shell
+$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic
+```
+
+Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs.
+
+#### Delete a geo-replication topic
+
+Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection.
+
+In Pulsar, a topic is automatically deleted when the topic meets the following three conditions:
+- no producers or consumers are connected to it;
+- it has no subscriptions;
+- no more messages are kept for retention.
+
+For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe.
+
+You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker).
+
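+For example, in `broker.conf`:
+
+```properties
+# Disable automatic deletion of inactive topics (it is enabled by default).
+brokerDeleteInactiveTopicsEnabled=false
+```
+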
+To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic.
+
+## Replicated subscriptions
+
+Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions.
+
+In case of failover, a consumer can restart consuming from the failure point in a different cluster. 
+
+### Enable replicated subscription
+
+Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer. 
+
+```java
+Consumer<String> consumer = client.newConsumer(Schema.STRING)
+            .topic("my-topic")
+            .subscriptionName("my-subscription")
+            .replicateSubscriptionState(true)
+            .subscribe();
+```
+
+### Advantages
+
+ * It is easy to implement the logic. 
+ * You can choose to enable or disable replicated subscription.
+ * When you enable it, the overhead is low, and it is easy to configure. 
+ * When you disable it, the overhead is zero.
+
+### Limitations
+
+When you enable replicated subscription, you are creating a consistent distributed snapshot to establish an association between message IDs from different clusters. The snapshots are taken periodically; the default interval is 1 second, which means that a consumer failing over to a different cluster can potentially receive up to 1 second of duplicate messages. You can configure the snapshot frequency in the `broker.conf` file.
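+
+As a minimal sketch, assuming your broker version exposes the replicated-subscription snapshot setting under this name (check your `broker.conf`):
+
+```properties
+# Interval between consistent snapshots for replicated subscriptions, in milliseconds.
+replicatedSubscriptionsSnapshotFrequencyMillis=1000
+```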
diff --git a/site2/website/versioned_docs/version-2.5.2/administration-load-balance.md b/site2/website/versioned_docs/version-2.5.2/administration-load-balance.md
new file mode 100644
index 0000000..7a24283
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/administration-load-balance.md
@@ -0,0 +1,182 @@
+---
+id: version-2.5.2-administration-load-balance
+title: Pulsar load balance
+sidebar_label: Load balance
+original_id: administration-load-balance
+---
+
+## Load balance across Pulsar brokers
+
+Pulsar is a horizontally scalable messaging system, so a core requirement is that the traffic
+in a logical cluster be spread across all the available Pulsar brokers as evenly as possible.
+
+You can use multiple settings and tools to control the traffic distribution, though understanding them requires a bit of context about how traffic is managed in Pulsar. In most cases, however, the core requirement mentioned above is met out of the box and you do not need to worry about it.
+
+## Pulsar load manager architecture
+
+The following part introduces the basic architecture of the Pulsar load manager.
+
+### Assign topics to brokers dynamically
+
+Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster.
+
+When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best suited broker to acquire ownership of these topics according to the load conditions. 
+
+In case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic.
+
+The assignment is "dynamic" because the assignment changes quickly. For example, if the broker owning the topic crashes, the topic is reassigned immediately to another broker. Another scenario is that the broker owning the topic becomes overloaded. In this case, the topic is reassigned to a less loaded broker.
+
+The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage.
+
+#### Assignment granularity
+
+The assignment of topics or partitions to brokers is not done at the topic or partition level, but at the bundle level (a higher level). The reason is to amortize the amount of information that you need to keep track of. Based on CPU, memory, traffic load, and other indexes, topics are assigned to a particular broker dynamically.
+
+Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively this subset is a sharding mechanism.
+
+The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level.
+
+For assignment, a namespace is sharded into a list of "bundles", with each bundle comprising
+a portion of the overall hash range of the namespace.
+
+Topics are assigned to a particular bundle by taking the hash of the topic name and checking in which
+bundle the hash falls (see the sketch below).
+
+Each bundle is independent of the others and thus is independently assigned to different brokers.
+
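+As an illustrative sketch (not Pulsar's exact implementation), selecting a bundle amounts to hashing the topic name into the namespace's 32-bit hash range and picking the bundle whose boundaries contain that hash:
+
+```java
+/**
+ * Illustrative sketch of hash-based bundle selection, assuming equal-sized
+ * bundles over the range [0x00000000, 0xffffffff]. Pulsar's real implementation
+ * may use a different hash function and bundle boundaries.
+ */
+static int selectBundle(String topicName, int numBundles) {
+    long hash = Integer.toUnsignedLong(topicName.hashCode()); // assume a 32-bit hash of the name
+    long rangeSize = 0x100000000L / numBundles;               // width of each equal-sized bundle
+    return (int) Math.min(hash / rangeSize, numBundles - 1);  // bundle whose range contains the hash
+}
+```
+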
+### Create namespaces and bundles
+
+When you create a new namespace, it is set to use the default number of bundles. You can set this in `conf/broker.conf`:
+
+```properties
+# When a namespace is created without specifying the number of bundle, this
+# value will be used as the default
+defaultNumberOfNamespaceBundles=4
+```
+
+You can either change the system default, or override it when you create a new namespace:
+
+```shell
+$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16
+```
+
+With this command, you create a namespace with 16 initial bundles. Therefore the topics for this namespace can immediately be spread across up to 16 brokers.
+
+In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution.
+
+On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers.
+
+### Unload topics and bundles
+
+You can "unload" a topic in Pulsar with admin operation. Unloading means to close the topics,
+release ownership and reassign the topics to a new broker, based on current load.
+
+When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.
+
+Unloading is the mechanism that the load-manager uses to perform the load shedding, but you can also trigger the unloading manually, for example to correct the assignments and redistribute traffic even before having any broker overloaded.
+
+Unloading a topic has no effect on the assignment; it just closes and reopens the particular topic:
+
+```shell
+pulsar-admin topics unload persistent://tenant/namespace/topic
+```
+
+To unload all topics for a namespace and trigger reassignments:
+
+```shell
+pulsar-admin namespaces unload tenant/namespace
+```
+
+### Split namespace bundles 
+
+Since the load for the topics in a bundle might change over time, and predicting the load upfront might be hard, brokers can split a bundle into two. The new, smaller bundles can be reassigned to different brokers.
+
+The splitting is based on several tunable thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default the newly split bundles are also immediately offloaded to other brokers, to facilitate the traffic distribution.
+
+```properties
+# enable/disable namespace bundle auto split
+loadBalancerAutoBundleSplitEnabled=true
+
+# enable/disable automatic unloading of split bundles
+loadBalancerAutoUnloadSplitBundlesEnabled=true
+
+# maximum topics in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxTopics=1000
+
+# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxSessions=1000
+
+# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxMsgRate=30000
+
+# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxBandwidthMbytes=100
+
+# maximum number of bundles in a namespace (for auto-split)
+loadBalancerNamespaceMaximumBundles=128
+```
+
+### Shed load automatically
+
+Support for automatic load shedding is available in the load manager of Pulsar. This means that whenever the system recognizes that a particular broker is overloaded, the system forces some traffic to be reassigned to less-loaded brokers.
+
+When a broker is identified as overloaded, the broker "unloads" a subset of its bundles, the
+ones with the highest traffic, that account for the overload percentage.
+
+For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`.
+
+Since the selection of bundles to offload is based on traffic (as a proxy measure for CPU, network,
+and memory), the broker unloads bundles carrying at least 15% of the traffic.
+
+Automatic load shedding is enabled by default, and you can disable it with this setting:
+
+```properties
+# Enable/disable automatic bundle unloading for load-shedding
+loadBalancerSheddingEnabled=true
+```
+
+Additional settings that apply to shedding:
+
+```properties
+# Load shedding interval. Broker periodically checks whether some traffic should be offload from
+# some over-loaded broker to other under-loaded brokers
+loadBalancerSheddingIntervalMinutes=1
+
+# Prevent the same topics from being shed and moved to other brokers more than once within this timeframe
+loadBalancerSheddingGracePeriodMinutes=30
+```
+
+#### Broker overload thresholds
+
+The determination of when a broker is overloaded is based on thresholds of CPU, network, and memory usage. Whenever any of those metrics reaches the threshold, the system triggers the shedding (if enabled).
+
+By default, the overload threshold is set at 85%:
+
+```properties
+# Usage threshold to determine a broker as over-loaded
+loadBalancerBrokerOverloadedThresholdPercentage=85
+```
+
+Pulsar gathers the usage stats from the system metrics.
+
+In the case of network utilization, in some environments the network interface speed that Linux reports is
+not correct and needs to be manually overridden. This is the case in AWS EC2 instances with 1Gbps
+NIC speed, for which the OS reports a 10Gbps speed.
+
+Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down.
+
+You can use the following setting to correct the max NIC speed:
+
+```properties
+# Override the auto-detection of the network interfaces max speed.
+# This option is useful in some environments (eg: EC2 VMs) where the max speed
+# reported by Linux is not reflecting the real bandwidth available to the broker.
+# Since the network usage is employed by the load manager to decide when a broker
+# is overloaded, it is important to make sure the info is correct or override it
+# with the right value here. The configured value can be a double (eg: 0.8) and that
+# can be used to trigger load-shedding even before hitting on NIC limits.
+loadBalancerOverrideBrokerNicSpeedGbps=
+```
+
+When the value is empty, Pulsar uses the value that the OS reports.
+
diff --git a/site2/website/versioned_docs/version-2.5.2/administration-proxy.md b/site2/website/versioned_docs/version-2.5.2/administration-proxy.md
new file mode 100644
index 0000000..5353554
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/administration-proxy.md
@@ -0,0 +1,105 @@
+---
+id: version-2.5.2-administration-proxy
+title: The Pulsar proxy
+sidebar_label: Pulsar proxy
+original_id: administration-proxy
+---
+
+The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) is an optional gateway that you can run in front of the brokers in a Pulsar cluster. You can run a Pulsar proxy in cases where direct connections between clients and Pulsar brokers are either infeasible or undesirable, for example when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform.
+
+## Configure the proxy
+
+The proxy must have some way to find the addresses of the brokers of the cluster. You can do this by either configuring the proxy to connect directly to service discovery or by specifying a broker URL in the configuration. 
+
+### Option 1: Use service discovery
+
+Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
+
+```properties
+zookeeperServers=zk-0,zk-1,zk-2
+configurationStoreServers=zk-0:2184,zk-remote:2184
+```
+
+> If you use service discovery, the network ACL must allow the proxy to talk to the ZooKeeper nodes on the zookeeper client port, which is usually 2181, and on the configuration store client port, which is 2184 by default. Opening the network ACLs means that if someone compromises a proxy, they have full access to ZooKeeper. For this reason, using broker URLs to configure the proxy is more secure.
+
+### Option 2: Use broker URLs
+
+The more secure method of configuring the proxy is to specify a URL to connect to the brokers.
+
+> [Authorization](security-authorization#enable-authorization-and-assign-superusers) at the proxy requires access to ZooKeeper, so if you use these broker URLs to connect to the brokers, you should disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.
+
+You can configure the broker URLs in `conf/proxy.conf` as follows.
+
+```properties
+brokerServiceURL=pulsar://brokers.example.com:6650
+brokerWebServiceURL=http://brokers.example.com:8080
+functionWorkerWebServiceURL=http://function-workers.example.com:8080
+```
+
+Or if you use TLS:
+
+```properties
+brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
+brokerWebServiceURLTLS=https://brokers.example.com:8443
+functionWorkerWebServiceURL=https://function-workers.example.com:8443
+```
+
+The hostname in the URLs provided should be a DNS entry that points to multiple brokers, or a virtual IP address backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.
+
+The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.
+
+Note that if you do not use functions, then you do not need to configure `functionWorkerWebServiceURL`.
+
+## Start the proxy
+
+To start the proxy:
+
+```bash
+$ cd /path/to/pulsar/directory
+$ bin/pulsar proxy
+```
+
+> You can run as many instances of the Pulsar proxy in a cluster as you want.
+
+
+## Stop the proxy
+
+The Pulsar proxy runs by default in the foreground. To stop the proxy, simply stop the process in which the proxy is running.
+
+## Proxy frontends
+
+You can run the Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
+
+## Use Pulsar clients with the proxy
+
+Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, then the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.
+
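+For example, a Java client is then configured with the proxy (or frontend) address instead of a broker address; the DNS name below is the example address from above:
+
+```java
+import org.apache.pulsar.client.api.PulsarClient;
+
+// All broker traffic flows through the proxy; clients never need direct broker access.
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://pulsar.cluster.default:6650")
+        .build();
+```
+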
+## Proxy configuration
+
+You can configure the Pulsar proxy using the [`proxy.conf`](reference-configuration.md#proxy) configuration file. The following parameters are available in that file:
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  The ZooKeeper quorum connection string (as a comma-separated list)  ||
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
+|servicePort| The port to use for server binary Protobuf requests |6650|
+|servicePortTls|  The port to use to server binary Protobuf TLS requests  |6651|
+|statusFilePath | Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks ||
+|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy  |false|
+|authenticateMetricsEndpoint| Whether the '/metrics' endpoint requires authentication. Defaults to true. 'authenticationEnabled' must also be set for this to take effect. |true|
+|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
+|authorizationEnabled|  Whether authorization is enforced by the Pulsar proxy |false|
+|authorizationProvider| Authorization provider as a fully qualified class name  |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
+|brokerClientAuthenticationPlugin|  The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientAuthenticationParameters|  The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientTrustCertsFilePath|  The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
+|superUserRoles|  Role names that are treated as “super-users,” meaning that they are able to perform all admin operations ||
+|forwardAuthorizationCredentials| Whether client authorization credentials are forwarded to the broker for re-authorization. Authentication must be enabled via authenticationEnabled=true for this to take effect.  |false|
+|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy rejects requests beyond that. |10000|
+|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy errors out requests beyond that. |50000|
+|tlsEnabledInProxy| Whether TLS is enabled for the proxy  |false|
+|tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar brokers |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file ||
+|tlsHostnameVerificationEnabled|  Whether the hostname is validated when the proxy creates a TLS connection with brokers  |false|
+|tlsRequireTrustedClientCertOnConnect|  Whether client certificates are required for TLS. Connections are rejected if the client certificate is not trusted. |false|
diff --git a/site2/website/versioned_docs/version-2.5.2/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.5.2/administration-pulsar-manager.md
new file mode 100644
index 0000000..017882c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/administration-pulsar-manager.md
@@ -0,0 +1,133 @@
+---
+id: version-2.5.2-administration-pulsar-manager
+title: Pulsar Manager
+sidebar_label: Pulsar Manager
+original_id: administration-pulsar-manager
+---
+
+Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments.
+
+## Install
+
+The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.
+
+
+```
+docker pull apachepulsar/pulsar-manager:v0.1.0
+docker run -it -p 9527:9527 -e REDIRECT_HOST=http://192.168.0.104 -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -v $PWD:/data apachepulsar/pulsar-manager:v0.1.0 /bin/sh
+```
+
+* REDIRECT_HOST: the IP address of the front-end server.
+
+* REDIRECT_PORT: the port of the front-end server.
+
+* DRIVER_CLASS_NAME: the driver class name of PostgreSQL.
+
+* URL: the JDBC URL of PostgreSQL, for example, `jdbc:postgresql://127.0.0.1:5432/pulsar_manager`.
+
+* USERNAME: the username of PostgreSQL.
+
+* PASSWORD: the password of PostgreSQL.
+
+* LOG_LEVEL: the log level.
+
+You can find the Dockerfile in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from scratch as well:
+
+```
+git clone https://github.com/apache/pulsar-manager
+cd pulsar-manager
+./gradlew build -x test
+cd front-end
+npm install --save
+npm run build:prod
+cd ..
+docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager .
+```
+
+### Use custom databases
+
+If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL.   
+
+1. Initialize the database and table structures using the [schema file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
+
+2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add PostgreSQL configuration.
+
+```
+spring.datasource.driver-class-name=org.postgresql.Driver
+spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
+spring.datasource.username=postgres
+spring.datasource.password=postgres
+```
+
+3. Compile to generate a new executable jar package.
+
+```
+./gradlew build -x test
+```
+
+### Enable JWT authentication
+
+If you want to turn on JWT authentication, configure the following parameters:
+
+* `backend.jwt.token`:  token for the superuser. You need to configure this parameter during cluster initialization.
+* `jwt.broker.token.mode`: the mode of generating the token, either SECRET or PRIVATE.
+* `jwt.broker.public.key`: configure this option if you are using the PRIVATE mode.
+* `jwt.broker.private.key`: configure this option if you are using the PRIVATE mode.
+* `jwt.broker.secret.key`: configure this option if you are using the SECRET mode.
+
+For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/).
+
+
+If you want to enable JWT authentication, use one of the following methods.
+
+
+* Method 1: use command-line tool
+
+```
+./build/distributions/pulsar-manager/bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 --insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key
+```
+
+* Method 2: configure the application.properties file
+
+```
+backend.jwt.token=token
+
+jwt.broker.token.mode=PRIVATE
+jwt.broker.public.key=file:///path/broker-public.key
+jwt.broker.private.key=file:///path/broker-private.key
+
+# Or, for SECRET mode:
+jwt.broker.token.mode=SECRET
+jwt.broker.secret.key=file:///path/broker-secret.key
+```
+
+* Method 3: use Docker and turn on token authentication.
+
+```
+export JWT_TOKEN="your-token"
+docker run -it -p 9527:9527 -e REDIRECT_HOST=http://192.168.55.182 -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.1.0 /bin/sh
+```
+
+* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key.
+
+```
+export JWT_TOKEN="your-token"
+export PRIVATE_KEY="file:///private-key-path"
+export PUBLIC_KEY="file:///public-key-path"
+docker run -it -p 9527:9527 -e REDIRECT_HOST=http://192.168.55.182 -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/private-key-path:/pulsar-manager/private-key-path -v $PWD/public-key-path:/pulsar-manager/public-key-path apachepulsar/pulsar-manager:v0.1.0 /bin/sh
+```
+
+* Method 5: use Docker and turn on **token authentication** and **token management** by secret key.
+
+```
+export JWT_TOKEN="your-token"
+export SECRET_KEY="file:///secret-key-path"
+docker run -it -p 9527:9527 -e REDIRECT_HOST=http://192.168.55.182 -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret-key-path:/pulsar-manager/secret-key-path apachepulsar/pulsar-manager:v0.1.0 /bin/sh
+```
+
+* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/8b1f26f7d7c725e6d056c41b98235fbc5deb9f49/src/README.md).
+* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/front-end/README.md).
+
+## Log in
+
+Visit http://localhost:9527 to log in.
diff --git a/site2/website/versioned_docs/version-2.5.2/administration-stats.md b/site2/website/versioned_docs/version-2.5.2/administration-stats.md
new file mode 100644
index 0000000..49f6b22
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/administration-stats.md
@@ -0,0 +1,64 @@
+---
+id: version-2.5.2-administration-stats
+title: Pulsar stats
+sidebar_label: Pulsar statistics
+original_id: administration-stats
+---
+
+## Partitioned topics
+
+|Stat|Description|
+|---|---|
+|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
+|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
+|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
+|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
+|averageMsgSize| Average size, in bytes, of messages published within the last interval.|
+|storageSize| The sum of storage size of the ledgers for this topic.|
+|publishers| The list of all local publishers into the topic. Publishers can be anywhere from zero to thousands.|
+|producerId| Internal identifier for this producer on this topic.|
+|producerName|  Internal identifier for this producer, generated by the client library.|
+|address| IP address and source port for the connection of this producer.|
+|connectedSince| Timestamp when this producer was created or last reconnected.|
+|subscriptions| The list of all local subscriptions to the topic.|
+|my-subscription| The name of this subscription (client defined).|
+|msgBacklog| The count of messages in backlog for this subscription.|
+|type| This subscription type.|
+|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
+|consumers| The list of connected consumers for this subscription.|
+|consumerName| Internal identifier for this consumer, generated by the client library.|
+|availablePermits| The number of messages this consumer has space for in the listen queue of client library. A value of 0 means the queue of client library is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
+|replication| This section gives the stats for cross-colo replication of this topic.|
+|replicationBacklog| The outbound replication backlog in messages.|
+|connected| Whether the outbound replicator is connected.|
+|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
+|inboundConnection| The IP and port of the broker in the remote cluster's publisher connection to this broker. |
+|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|
+
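+For example, these stats can be pulled for a partitioned topic with `pulsar-admin` (the topic name below is a placeholder):
+
+```shell
+$ bin/pulsar-admin topics partitioned-stats \
+  persistent://my-tenant/my-namespace/my-topic
+```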
+
+## Topics
+
+|Stat|Description|
+|---|---|
+|entriesAddedCounter| Messages published since this broker loads this topic.|
+|numberOfEntries| Total number of messages being tracked.|
+|totalSize| Total storage size in bytes of all messages.|
+|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
+|currentLedgerSize| Size in bytes of messages written to ledger currently open for writing.|
+|lastLedgerCreatedTimestamp| The time when the last ledger was created.|
+|lastLedgerCreationFailureTimestamp| The time when the last ledger creation failed.|
+|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
+|pendingAddEntriesCount| The number of messages with (asynchronous) write requests that are awaiting completion.|
+|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger has been opened or is currently being opened, but has no entries written yet.|
+|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
+|ledgers| The ordered list of all ledgers for this topic holding its messages.|
+|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
+|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
+|readPosition| The latest position of subscriber for reading message.|
+|waitingReadOp| This is true when the subscription reads the latest message that is published to the topic and waits on new messages to be published.|
+|pendingReadOps| The counter for how many outstanding read requests to the BookKeepers you have in progress.|
+|messagesConsumedCounter| The number of messages this cursor has acknowledged since this broker loaded this topic.|
+|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
+|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
+|individuallyDeletedMessages| If acks are done out of order, the ranges of messages acked between the markDeletePosition and the read position.|
+|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
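+
+You can retrieve these internal stats with the `pulsar-admin` tool. The following is a minimal sketch, assuming a topic named `persistent://my-tenant/my-ns/my-topic`:
+
+```shell
+$ bin/pulsar-admin topics stats-internal persistent://my-tenant/my-ns/my-topic
+```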
diff --git a/site2/website/versioned_docs/version-2.5.2/administration-upgrade.md b/site2/website/versioned_docs/version-2.5.2/administration-upgrade.md
new file mode 100644
index 0000000..e2b0e5a
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/administration-upgrade.md
@@ -0,0 +1,151 @@
+---
+id: version-2.5.2-administration-upgrade
+title: Upgrade Guide
+sidebar_label: Upgrade
+original_id: administration-upgrade
+---
+
+## Upgrade guidelines
+
+Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not need to upgrade ZooKeeper nodes unless you have special requirements. When you upgrade, pay attention to bookies (stateful) and to brokers and proxies (stateless).
+
+The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.
+
+- Backup all your configuration files before upgrading.
+- Read the guide entirely, make a plan, and then execute the plan. When you make your upgrade plan, take your specific requirements and environment into consideration.
+- Pay attention to the upgrading order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients. 
+- If `autorecovery` is enabled, you need to disable `autorecovery` in the upgrade process, and re-enable it after completing the process.
+- Read the release notes carefully for each release. Release notes contain features and configuration changes that might impact your upgrade.
+- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. When you have upgraded the canary nodes, run them for a while to ensure that they work correctly.
+- Upgrade one data center to verify new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.
+
+> Note: Currently, Apache Pulsar releases are compatible between versions, so components running different versions can coexist while you upgrade.
+
+## Upgrade sequence
+
+To upgrade an Apache Pulsar cluster, follow the upgrade sequence.
+
+1. Upgrade ZooKeeper (optional)  
+- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.  
+- Rolling upgrade: rollout the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
+2. Upgrade bookies  
+- Canary test: test an upgraded version in one or a small set of bookies.
+- Rolling upgrade:  
+    - a. Disable `autorecovery` with the following command.
+       ```shell
+       bin/bookkeeper shell autorecovery -disable
+       ```  
+    - b. Rollout the upgraded version to all bookies in the cluster after you determine that a version is safe after canary.  
+    - c. After you upgrade all bookies, re-enable `autorecovery` with the following command.
+       ```shell
+       bin/bookkeeper shell autorecovery -enable
+       ```
+3. Upgrade brokers
+- Canary test: test an upgraded version in one or a small set of brokers.
+- Rolling upgrade: rollout the upgraded version to all brokers in the cluster after you determine that a version is safe after canary.
+4. Upgrade proxies
+- Canary test: test an upgraded version in one or a small set of proxies.
+- Rolling upgrade: rollout the upgraded version to all proxies in the cluster after you determine that a version is safe after canary.
+
+## Upgrade ZooKeeper (optional)
+While you upgrade ZooKeeper servers, you can do canary test first, and then upgrade all ZooKeeper servers in the cluster.
+
+### Canary test
+
+You can test an upgraded version in one of ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.
+
+To upgrade ZooKeeper server to a new version, complete the following steps:
+
+1. Stop a ZooKeeper server.
+2. Upgrade the binary and configuration files.
+3. Start the ZooKeeper server with the new binary files.
+4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify that it works as expected (see the example after this list).
+5. Run the ZooKeeper server for a few days, observe and make sure the ZooKeeper cluster runs well.
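+
+For example, a minimal check, assuming the upgraded server listens on the default port 2181, might look like this:
+
+```shell
+$ bin/pulsar zookeeper-shell -server localhost:2181
+# inside the shell, list the root znodes to confirm the server responds
+ls /
+```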
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.
+
+### Upgrade all ZooKeeper servers
+
+After you canary test the upgrade on one ZooKeeper server, you can upgrade all ZooKeeper servers in your cluster.
+
+You can upgrade all ZooKeeper servers one by one, following the steps in the canary test.
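+
+A minimal per-server cycle, assuming you manage ZooKeeper with the `pulsar-daemon` tool, looks like this:
+
+```shell
+# stop the ZooKeeper server, then upgrade its binary and configuration files
+$ bin/pulsar-daemon stop zookeeper
+
+# start the upgraded ZooKeeper server and verify that it rejoins the quorum
+$ bin/pulsar-daemon start zookeeper
+```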
+
+## Upgrade bookies
+
+While you upgrade bookies, you can do canary test first, and then upgrade all bookies in the cluster.
+For more details, you can read Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).
+
+### Canary test
+
+You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.
+
+To upgrade bookie to a new version, complete the following steps:
+
+1. Stop a bookie.
+2. Upgrade the binary and configuration files.
+3. Start the bookie in `ReadOnly` mode to verify if the bookie of this new version runs well for read workload.
+   ```shell
+   bin/pulsar bookie --readOnly
+   ```
+4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.
+   ```shell
+   bin/pulsar bookie
+   ```
+5. Observe and make sure the cluster serves both write and read traffic.
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie automatically through autorecovery.
+
+### Upgrade all bookies
+
+After you canary test the upgrade on some bookies, you can upgrade all bookies in your cluster.
+
+Before upgrading, decide whether to upgrade the whole cluster at once with downtime, or to upgrade one bookie at a time in a rolling fashion.
+
+In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.
+
+In both scenarios, the procedure for each bookie is the same.
+
+1. Stop the bookie.
+2. Upgrade the software (either new binary or new configuration files).
+3. Start the bookie.
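+
+A minimal sketch of this cycle, assuming you manage bookies with the `pulsar-daemon` tool, looks like this:
+
+```shell
+# stop the bookie, then replace its binary or configuration files
+$ bin/pulsar-daemon stop bookie
+
+# start the upgraded bookie
+$ bin/pulsar-daemon start bookie
+```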
+
+> **Advanced operations**   
+> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.
+
+## Upgrade brokers and proxies
+
+The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.
+
+### Canary test
+
+You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.
+
+To upgrade to a new version, complete the following steps:
+
+1. Stop a broker (or proxy).
+2. Upgrade the binary and configuration file.
+3. Start a broker (or proxy).
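+
+For example, assuming you manage the service with the `pulsar-daemon` tool, the cycle for a broker looks like this (substitute `proxy` for a proxy node):
+
+```shell
+# stop the broker, then upgrade its binary and configuration files
+$ bin/pulsar-daemon stop broker
+
+# start the upgraded broker
+$ bin/pulsar-daemon start broker
+```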
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic broker (or proxy) node, revert to the old version, and restart the broker (or proxy).
+
+### Upgrade all brokers or proxies
+
+After you canary test the upgrade on some brokers or proxies, you can upgrade all brokers or proxies in your cluster.
+
+Before upgrading, decide whether to upgrade the whole cluster at once with downtime, or to upgrade it in a rolling fashion.
+
+In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during upgrade.
+
+In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.
+
+In both scenarios, the procedure for each broker or proxy is the same.
+
+1. Stop the broker or proxy. 
+2. Upgrade the software (either new binary or new configuration files).
+3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.5.2/administration-zk-bk.md b/site2/website/versioned_docs/version-2.5.2/administration-zk-bk.md
new file mode 100644
index 0000000..b979a6a
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/administration-zk-bk.md
@@ -0,0 +1,322 @@
+---
+id: version-2.5.2-administration-zk-bk
+title: ZooKeeper and BookKeeper administration
+sidebar_label: ZooKeeper and BookKeeper
+original_id: administration-zk-bk
+---
+
+Pulsar relies on two external systems for essential tasks:
+
+* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
+* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.
+
+ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.
+
+> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar.
+
+
+## ZooKeeper
+
+Each Pulsar instance relies on two separate ZooKeeper quorums.
+
+* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
+* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum. 
+
+### Deploy local ZooKeeper
+
+ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.
+
+To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*.
+
+To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+On each host, you need to specify the node ID in the `myid` file of each node, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this:
+
+```shell
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+```
+
+On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on.
+
+Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+$ bin/pulsar-daemon start zookeeper
+```
+
+### Deploy configuration store
+
+The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
+
+If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks.
+
+#### Single-cluster Pulsar instance
+
+If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.
+
+To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+```
+
+As before, create the `myid` files for each server in `data/global-zookeeper/myid`.
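+
+For example, on the first server:
+
+```shell
+$ mkdir -p data/global-zookeeper
+$ echo 1 > data/global-zookeeper/myid
+```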
+
+#### Multi-cluster Pulsar instance
+
+When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
+
+The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.
+
+Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
+
+For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named like this:
+
+```
+zk[1-3].${CLUSTER}.example.com
+```
+
+In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
+
+This guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
+
+The ZK configuration on all the servers looks like this:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+server.4=zk1.us-central.example.com:2185:2186
+server.5=zk2.us-central.example.com:2185:2186
+server.6=zk3.us-central.example.com:2185:2186:observer
+server.7=zk1.us-east.example.com:2185:2186
+server.8=zk2.us-east.example.com:2185:2186
+server.9=zk3.us-east.example.com:2185:2186:observer
+server.10=zk1.eu-central.example.com:2185:2186:observer
+server.11=zk2.eu-central.example.com:2185:2186:observer
+server.12=zk3.eu-central.example.com:2185:2186:observer
+server.13=zk1.ap-south.example.com:2185:2186:observer
+server.14=zk2.ap-south.example.com:2185:2186:observer
+server.15=zk3.ap-south.example.com:2185:2186:observer
+```
+
+Additionally, ZK observers need to have:
+
+```properties
+peerType=observer
+```
+
+##### Start the service
+
+Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon)
+
+```shell
+$ bin/pulsar-daemon start configuration-store
+```
+
+
+
+### ZooKeeper configuration
+
+In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
+
+#### Local ZooKeeper
+
+The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
+
+|Name|Description|Default|
+|---|---|---|
+|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
+|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
+|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
+|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
+|clientPort|  The port on which the ZooKeeper server listens for connections. |2181|
+|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
+|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables. Read this guide before enabling auto purge. |1|
+|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
+
+
+#### Configuration Store
+
+The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for the configuration store. Since the configuration store is itself a ZooKeeper ensemble, it accepts the same parameters as `conf/zookeeper.conf` above.
+
+
+## BookKeeper
+
+BookKeeper is responsible for all durable message storage in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs, called *ledgers*. Individual BookKeeper servers are also called *bookies*.
+
+> For a guide to managing message persistence, retention, and expiry in Pulsar, see [this cookbook](cookbooks-retention-expiry.md).
+
+### Deploy BookKeeper
+
+BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
+
+Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
+
+### Configure bookies
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for local ZooKeeper of the Pulsar cluster.
+
+### Start up bookies
+
+You can start up a bookie in two ways: in the foreground or as a background daemon.
+
+To start up a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:
+
+```bash
+$ bin/bookkeeper bookie
+```
+
+To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start bookie
+```
+
+You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
+
+```shell
+$ bin/bookkeeper shell bookiesanity
+```
+
+This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger.
+
+### Hardware considerations
+
+Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, it is essential that they have a suitable hardware configuration. Bookie hardware capacity has two key dimensions:
+
+* Disk I/O capacity read/write
+* Storage capacity
+
+Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:
+
+* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
+* A **ledger storage device** is where data is stored until all consumers have acknowledged the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read from disk only when a consumer drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
+
+
+
+### Configure BookKeeper
+
+You can find configurable parameters for BookKeeper bookies in the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) file.
+
+Minimum configuration changes required in `conf/bookkeeper.conf` are:
+
+```properties
+# Change to point to journal disk mount point
+journalDirectory=data/bookkeeper/journal
+
+# Point to ledger storage disk mount point
+ledgerDirectories=data/bookkeeper/ledgers
+
+# Point to local ZK quorum
+zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+
+# Change the ledger manager type
+ledgerManagerType=hierarchical
+```
+
+To change the ZooKeeper root path that BookKeeper uses, set `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of appending a chroot to the connection string (as in `zkServers=localhost:2181/MY-PREFIX`).
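+
+A minimal sketch of this in `conf/bookkeeper.conf` (hostnames assumed) looks like this:
+
+```properties
+# keep zkServers as a plain list of quorum members, without a chroot suffix
+zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+
+# put the prefix in the ledgers root path instead
+zkLedgersRootPath=/MY-PREFIX/ledgers
+```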
+
+> Consult the official [BookKeeper docs](http://bookkeeper.apache.org) for more information about BookKeeper.
+
+## BookKeeper persistence policies
+
+In Pulsar, you can set *persistence policies*, at the namespace level, that determine how BookKeeper handles persistent storage of messages. Policies determine four things:
+
+* The number of acks (guaranteed copies) to wait for each ledger entry.
+* The number of bookies to use for a topic.
+* The number of writes to make for each ledger entry.
+* The throttling rate for mark-delete operations.
+
+### Set persistence policies
+
+You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.
+
+#### Pulsar-admin
+
+Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:
+
+Flag | Description | Default
+:----|:------------|:-------
+`-a`, `--bookkeeper-ack-quorom` | The number of acks (guaranteed copies) to wait on for each entry | 0
+`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
+`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
+`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
+
+The following is an example:
+
+```shell
+$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
+  --bookkeeper-ack-quorom 3 \
+  --bookkeeper-ensemble 2
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence}
+
+#### Java
+
+```java
+int bkEnsemble = 2;
+int bkQuorum = 3;
+int bkAckQuorum = 2;
+double markDeleteRate = 0.7;
+PersistencePolicies policies =
+  new PersistencePolicies(bkEnsemble, bkQuorum, bkAckQuorum, markDeleteRate);
+admin.namespaces().setPersistence(namespace, policies);
+```
+
+### List persistence policies
+
+You can see which persistence policy currently applies to a namespace.
+
+#### Pulsar-admin
+
+Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.
+
+The following is an example:
+
+```shell
+$ pulsar-admin namespaces get-persistence my-tenant/my-ns
+{
+  "bookkeeperEnsemble": 1,
+  "bookkeeperWriteQuorum": 1,
+  "bookkeeperAckQuorum", 1,
+  "managedLedgerMaxMarkDeleteRate": 0
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence}
+
+#### Java
+
+```java
+PersistencePolicies policies = admin.namespaces().getPersistence(namespace);
+```
+
+## How Pulsar uses ZooKeeper and BookKeeper
+
+This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:
+
+![ZooKeeper and BookKeeper](assets/pulsar-system-architecture.png)
+
+Each Pulsar cluster consists of one or more message brokers. Each broker relies on an ensemble of bookies.
diff --git a/site2/website/versioned_docs/version-2.5.2/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.5.2/client-libraries-cpp.md
new file mode 100644
index 0000000..317a191
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/client-libraries-cpp.md
@@ -0,0 +1,215 @@
+---
+id: version-2.5.2-client-libraries-cpp
+title: Pulsar C++ client
+sidebar_label: C++
+original_id: client-libraries-cpp
+---
+
+## Supported platforms
+
+The Pulsar C++ client is supported on **Linux** and **MacOS** platforms.
+
+## Linux
+
+> Since the 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. You can download and install those packages directly.
+
+Four kinds of libraries, `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a`, are installed under `/usr/lib` after you download and install the RPM or Debian package.
+By default, they are built under the code path `${PULSAR_HOME}/pulsar-client-cpp`, using the command
+ `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`.
+These libraries rely on some other libraries. For the detailed versions of the dependency libraries, refer to [these](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) [files](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile).
+
+1. `libpulsar.so` is the shared library. It contains statically linked `boost` and `openssl`, and dynamically links all other needed libraries.
+You can compile against this library like this:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include
+```
+
+2. `libpulsarnossl.so` is a shared library similar to `libpulsar.so`, except that the `openssl` and `crypto` libraries are dynamically linked.
+You can compile against this library like this:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib
+```
+
+3. `libpulsar.a` is the static library. You need to link its dependency libraries yourself when using it.
+You can compile against this library like this:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread  -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz
+```
+
+4. `libpulsarwithdeps.a` is a static library based on `libpulsar.a` that also archives the dependency libraries `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd`, and `libz`.
+You can compile against this library like this:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread  -I/usr/local/ssl/include -L/usr/local/ssl/lib
+```
+`libpulsarwithdeps.a` does not include the OpenSSL-related libraries `libssl` and `libcrypto`, because these two libraries are security-related.
+Using the versions provided by your local system makes it easier to handle security issues and library upgrades.
+
+### Install RPM
+
+1. Download an RPM package from the links in the table.
+
+| Link | Crypto files |
+|------|--------------|
+| [client]({{pulsar:dist_rpm:client}}) | [asc]({{pulsar:dist_rpm:client}}.asc), [sha512]({{pulsar:dist_rpm:client}}.sha512) |
+| [client-debuginfo]({{pulsar:dist_rpm:client-debuginfo}}) | [asc]({{pulsar:dist_rpm:client-debuginfo}}.asc),  [sha512]({{pulsar:dist_rpm:client-debuginfo}}.sha512) |
+| [client-devel]({{pulsar:dist_rpm:client-devel}}) | [asc]({{pulsar:dist_rpm:client-devel}}.asc),  [sha512]({{pulsar:dist_rpm:client-devel}}.sha512) |
+
+2. Install the package using the following command.
+
+```bash
+$ rpm -ivh apache-pulsar-client*.rpm
+```
+
+After installation, the Pulsar libraries are placed under `/usr/lib`.
+
+### Install Debian
+
+1. Download a Debian package from the links in the table. 
+
+| Link | Crypto files |
+|------|--------------|
+| [client]({{pulsar:deb:client}}) | [asc]({{pulsar:dist_deb:client}}.asc), [sha512]({{pulsar:dist_deb:client}}.sha512) |
+| [client-devel]({{pulsar:deb:client-devel}}) | [asc]({{pulsar:dist_deb:client-devel}}.asc),  [sha512]({{pulsar:dist_deb:client-devel}}.sha512) |
+
+2. Install the package using the following command:
+
+```bash
+$ apt install ./apache-pulsar-client*.deb
+```
+
+After installation, the Pulsar libraries are placed under `/usr/lib`.
+
+### Build
+
+> If you want to build RPM and Debian packages from the latest master, follow the instructions below. All the instructions are run at the root directory of your cloned Pulsar repository.
+
+There are recipes that build RPM and Debian packages containing a
+statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all the required
+dependencies.
+
+To build the C++ library packages, build the Java packages first.
+
+```shell
+mvn install -DskipTests
+```
+
+#### RPM
+
+```shell
+pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh
+```
+
+This builds the RPM inside a Docker container and it leaves the RPMs in `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/`.
+
+| Package name | Content |
+|-----|-----|
+| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
+| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers |
+| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` |
+
+#### Debian
+
+To build Debian packages, enter the following command.
+
+```shell
+pulsar-client-cpp/pkg/deb/docker-build-deb.sh
+```
+
+Debian packages are created at `pulsar-client-cpp/pkg/deb/BUILD/DEB/`.
+
+| Package name | Content |
+|-----|-----|
+| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
+| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers |
+
+## MacOS
+
+Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers.
+
+```shell
+brew install libpulsar
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
+
+Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.
+
+```http
+pulsar://localhost:6650
+```
+
+In a production Pulsar cluster, the URL looks like this:
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you use TLS authentication, the URL uses the `pulsar+ssl` scheme and the default port is `6651`. The following is an example.
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Create a consumer
+To connect to Pulsar as a consumer, you need to create a consumer on the C++ client. The following is an example. 
+
+```c++
+Client client("pulsar://localhost:6650");
+
+Consumer consumer;
+Result result = client.subscribe("my-topic", "my-subscription-name", consumer);
+if (result != ResultOk) {
+    LOG_ERROR("Failed to subscribe: " << result);
+    return -1;
+}
+
+Message msg;
+
+while (true) {
+    consumer.receive(msg);
+    LOG_INFO("Received: " << msg
+            << "  with payload '" << msg.getDataAsString() << "'");
+
+    consumer.acknowledge(msg);
+}
+
+client.close();
+```
+
+## Create a producer
+To connect to Pulsar as a producer, you need to create a producer on the C++ client. The following is an example. 
+
+```c++
+Client client("pulsar://localhost:6650");
+
+Producer producer;
+Result result = client.createProducer("my-topic", producer);
+if (result != ResultOk) {
+    LOG_ERROR("Error creating producer: " << result);
+    return -1;
+}
+
+// Publish 10 messages to the topic
+for (int i = 0; i < 10; i++){
+    Message msg = MessageBuilder().setContent("my-message").build();
+    Result res = producer.send(msg);
+    LOG_INFO("Message sent: " << res);
+}
+client.close();
+```
+
+## Enable authentication in connection URLs
+If you use TLS authentication when connecting to Pulsar, you need to use the `pulsar+ssl` scheme in the connection URL, and the default port is `6651`. The following is an example.
+
+```cpp
+ClientConfiguration config = ClientConfiguration();
+config.setUseTls(true);
+config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
+config.setTlsAllowInsecureConnection(false);
+config.setAuth(pulsar::AuthTls::create(
+            "/path/to/client-cert.pem", "/path/to/client-key.pem"));
+
+Client client("pulsar+ssl://my-broker.com:6651", config);
+```
+
+For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples).
diff --git a/site2/website/versioned_docs/version-2.5.2/client-libraries-go.md b/site2/website/versioned_docs/version-2.5.2/client-libraries-go.md
new file mode 100644
index 0000000..c8f17ea
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/client-libraries-go.md
@@ -0,0 +1,493 @@
+---
+id: version-2.5.2-client-libraries-go
+title: The Pulsar Go client
+sidebar_label: Go
+original_id: client-libraries-go
+---
+
+The Pulsar Go client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+> #### API docs available as well
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
+
+
+## Installation
+
+### Requirements
+
+The Pulsar Go client library is based on the C++ client library. Follow
+the instructions for the [C++ library](client-libraries-cpp.md) to install the binaries
+through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb), or [Homebrew packages](client-libraries-cpp.md#macos).
+
+### Installing go package
+
+> #### Compatibility Warning
+> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
+
+You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag: it always pulls in the master version of the Go client, so you need a C++ client library that matches master.
+
+```bash
+$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
+```
+
+Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
+
+```bash
+$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v{{pulsar:version}}
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+import "github.com/apache/pulsar/pulsar-client-go/pulsar"
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+pulsar://localhost:6650
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
+
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Creating a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+
+```go
+import (
+    "log"
+    "runtime"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+        OperationTimeoutSeconds: 5,
+        MessageListenerThreads: runtime.NumCPU(),
+    })
+
+    if err != nil {
+        log.Fatalf("Could not instantiate Pulsar client: %v", err)
+    }
+}
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info |
+`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
+`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
+`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
+`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
+`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
+`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
+`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
+`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
+`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic: "my-topic",
+})
+
+if err != nil {
+    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
+}
+
+defer producer.Close()
+
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Hello, Pulsar"),
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Producer could not send message: %v", err)
+}
+```
+
+> #### Blocking operation
+> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
+
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
+`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call blocks until the message is successfully acknowledged by the Pulsar broker. | `(MessageID, error)`
+`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
+`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the published message and the eventual error in publishing. |
+`LastSequenceID()` | Gets the last sequence ID published by this producer. This represents either the automatically assigned or custom sequence ID (set on the `ProducerMessage`) that was published and acknowledged by the broker. | `int64`
+`Flush()`| Flushes all the messages buffered in the client and waits until all messages have been successfully persisted. | `error`
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
+`Schema()` | Returns the schema of the producer. | `Schema`
+
+Here's a more involved example usage of a producer:
+
+```go
+import (
+    "context"
+    "fmt"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client to instantiate a producer
+    producer, err := client.CreateProducer(pulsar.ProducerOptions{
+        Topic: "my-topic",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    ctx := context.Background()
+
+    // Send 10 messages synchronously and 10 messages asynchronously
+    for i := 0; i < 10; i++ {
+        // Create a message
+        msg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("message-%d", i)),
+        }
+
+        // Attempt to send the message
+        if err := producer.Send(ctx, msg); err != nil {
+            log.Fatal(err)
+        }
+
+        // Create a different message to send asynchronously
+        asyncMsg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
+        }
+
+        // Attempt to send the message asynchronously and handle the response
+        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
+            if err != nil { log.Fatal(err) }
+
+            fmt.Printf("the %s successfully published", string(msg.Payload))
+        })
+    }
+}
+```
+
+### Producer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
+`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters; otherwise the creation operation will throw an error. |
+`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats |
+`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
+`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
+`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
+`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
+`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
+`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
+`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
+`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as arguments and returns the partition index as an integer, i.e. a function signature of `func(Message, TopicMetadata) int`. |
+`Batching` | Controls whether automatic batching of messages is enabled for the producer. | false
+`BatchingMaxPublishDelay` | Sets the time period within which sent messages are batched, if batching is enabled. If set to a non-zero value, messages are queued until this time interval elapses or until the `BatchingMaxMessages` threshold is reached, whichever comes first. | 10ms
+`BatchingMaxMessages` | Sets the maximum number of messages permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached or the batch interval has elapsed. | 1000
+
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+msgChannel := make(chan pulsar.ConsumerMessage)
+
+consumerOpts := pulsar.ConsumerOptions{
+    Topic:            "my-topic",
+    SubscriptionName: "my-subscription-1",
+    Type:             pulsar.Exclusive,
+    MessageChannel:   msgChannel,
+}
+
+consumer, err := client.Subscribe(consumerOpts)
+
+if err != nil {
+    log.Fatalf("Could not establish subscription: %v", err)
+}
+
+defer consumer.Close()
+
+for cm := range msgChannel {
+    msg := cm.Message
+
+    fmt.Printf("Message ID: %s", msg.ID())
+    fmt.Printf("Message value: %s", string(msg.Payload()))
+
+    consumer.Ack(msg)
+}
+```
+
+> #### Blocking operation
+> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
+
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
+`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
+`AckCumulativeID(MessageID)` |Acks the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledgment has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
+`Nack(Message)` | Acknowledges the failure to process a single message. | `error`
+`NackID(MessageID)` | Acknowledges the failure to process a single message. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
+`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
+`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error
+
+#### Receive example
+
+Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
+
+```go
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client object to instantiate a consumer
+    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+        Topic:            "my-golang-topic",
+        SubscriptionName: "sub-1",
+        Type: pulsar.Exclusive,
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    defer consumer.Close()
+
+    ctx := context.Background()
+
+    // Listen indefinitely on the topic
+    for {
+        msg, err := consumer.Receive(ctx)
+        if err != nil { log.Fatal(err) }
+
+        // Do something with the message
+        err = processMessage(msg)
+
+        if err == nil {
+            // Message processed successfully
+            consumer.Ack(msg)
+        } else {
+            // Failed to process messages
+            consumer.Nack(msg)
+        }
+    }
+}
+```
+
+### Consumer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
+`Topics` | Specify a list of topics this consumer subscribes to. A topic, a list of topics, or a topics pattern is required when subscribing |
+`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. A topic, a list of topics, or a topics pattern is required when subscribing |
+`SubscriptionName` | The subscription name for this consumer |
+`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats|
+`Name` | The name of the consumer |
+`AckTimeout` | Set the timeout for unacked messages | 0
+`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
+`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
+`SubscriptionInitPos` | The initial position at which the cursor is set when subscribing | Latest
+`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
+`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
+`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
+
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic: "my-golang-topic",
+    StartMessageID: pulsar.LatestMessage,
+})
+```
+
+> #### Blocking operation
+> When you create a new Pulsar reader, the operation will block (on a Go channel) until either a reader is successfully created or an error is thrown.
+
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`HasNext()` | Check if there is any message available to read from the current position | `(bool, error)`
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+
+#### "Next" example
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatalf("Could not create client: %v", err) }
+
+    // Use the client to instantiate a reader
+    reader, err := client.CreateReader(pulsar.ReaderOptions{
+        Topic:          "my-golang-topic",
+        StartMessageID: pulsar.EarliestMessage,
+    })
+
+    if err != nil { log.Fatalf("Could not create reader: %v", err) }
+
+    defer reader.Close()
+
+    ctx := context.Background()
+
+    // Listen on the topic for incoming messages
+    for {
+        msg, err := reader.Next(ctx)
+        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
+
+        // Process the message
+    }
+}
+```
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+lastSavedId := // Read the last saved message ID from an external store as []byte
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:          "my-golang-topic",
+    StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
+})
+```
+
+### Reader configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
+`Name` | The name of the reader |
+`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
+`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
+`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
+`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
+
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
+
+```go
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Here is some message data"),
+    Key: "message-key",
+    Properties: map[string]string{
+        "foo": "bar",
+    },
+    EventTime: time.Now(),
+    ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Could not publish message due to: %v", err)
+}
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Value` | `Value` and `Payload` are mutually exclusive; use `Value interface{}` for schema-based messages.
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+`SequenceID` | The sequence ID to assign to the current message
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+opts := pulsar.ClientOptions{
+    URL: "pulsar+ssl://my-cluster.com:6651",
+    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
+    Authentication: pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+```
diff --git a/site2/website/versioned_docs/version-2.5.2/client-libraries-java.md b/site2/website/versioned_docs/version-2.5.2/client-libraries-java.md
new file mode 100644
index 0000000..27e0326
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/client-libraries-java.md
@@ -0,0 +1,809 @@
+---
+id: version-2.5.2-client-libraries-java
+title: Pulsar Java client
+sidebar_label: Java
+original_id: client-libraries-java
+---
+
+You can use the Pulsar Java client to create Java producers, consumers, and [readers](#reader-interface) of messages and to perform [administrative tasks](admin-api-overview.md). The current version of the Java client is **{{pulsar:version}}**.
+
+Javadoc for the Pulsar client is divided into two domains by package as follows.
+
+Package | Description | Maven Artifact
+:-------|:------------|:--------------
+[`org.apache.pulsar.client.api`](/api/client) | The producer and consumer API | [org.apache.pulsar:pulsar-client:{{pulsar:version}}](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C{{pulsar:version}}%7Cjar)
+[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:{{pulsar:version}}](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C{{pulsar:version}}%7Cjar)
+
+This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).
+
+## Installation
+
+The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C{{pulsar:version}}%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.
+
+### Maven
+
+If you use Maven, add the following information to the `pom.xml` file.
+
+```xml
+<!-- in your <properties> block -->
+<pulsar.version>{{pulsar:version}}</pulsar.version>
+
+<!-- in your <dependencies> block -->
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+```
+
+### Gradle
+
+If you use Gradle, add the following information to the `build.gradle` file.
+
+```groovy
+def pulsarVersion = '{{pulsar:version}}'
+
+dependencies {
+    compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
+}
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.
+
+```http
+pulsar://localhost:6650
+```
+
+If you have multiple brokers, the URL is as follows.
+
+```http
+pulsar://localhost:6650,localhost:6651,localhost:6652
+```
+
+A URL for a production Pulsar cluster is as follows.
+
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows. 
+
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Client 
+
+You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:
+
+```java
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+```
+
+If you have multiple brokers, you can instantiate a PulsarClient like this:
+```java
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
+        .build();
+```
+
+> ### Default broker URLs for standalone clusters
+> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.
+
+When you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`; a short sketch follows the table.
+
+| Type | Name | <div style="width:260px">Description</div> | Default
+|---|---|---|---
+String | `serviceUrl` |Service URL provider for Pulsar service | None
+String | `authPluginClassName` | Name of the authentication plugin | None
+String | `authParams` | String represents parameters for the authentication plugin <br/><br/>**Example**<br/> key1:val1,key2:val2|None
+long|`operationTimeoutMs`|Operation timeout |30000
+long|`statsIntervalSeconds`|Interval between each stats update<br/><br/>Stats are activated when `statsIntervalSeconds` is positive<br/><br/>Set `statsIntervalSeconds` to at least 1 second |60
+int|`numIoThreads`| The number of threads used for handling connections to brokers | 1 
+int|`numListenerThreads`|The number of threads used for handling message listeners | 1 
+boolean|`useTcpNoDelay`|Whether to use the TCP no-delay flag on the connection to disable the Nagle algorithm |true
+boolean |`useTls` |Whether to use TLS encryption on the connection| false
+string | `tlsTrustCertsFilePath` |Path to the trusted TLS certificate file|None
+boolean|`tlsAllowInsecureConnection`|Whether the Pulsar client accepts untrusted TLS certificate from broker | false
+boolean | `tlsHostnameVerificationEnable` | Whether to enable TLS hostname verification|false
+int|`concurrentLookupRequest`|The number of concurrent lookup requests allowed to send on each broker connection to prevent overload on broker|5000
+int|`maxLookupRequest`|The maximum number of lookup requests allowed on each broker connection to prevent overload on broker | 50000
+int|`maxNumberOfRejectedRequestPerConnection`|The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after the current connection is closed and the client creates a new connection to connect to a different broker|50
+int|`keepAliveIntervalSeconds`|Seconds of keeping alive interval for each client broker connection|30
+int|`connectionTimeoutMs`|Duration of waiting for a connection to a broker to be established <br/><br/>If the duration passes without a response from a broker, the connection attempt is dropped|10000
+int|`requestTimeoutMs`|Maximum duration for completing a request |60000
+int|`defaultBackoffIntervalNanos`| Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100)
+long|`maxBackoffIntervalNanos`|Maximum duration for a backoff interval|TimeUnit.SECONDS.toNanos(30)
+
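+For example, here is a minimal sketch of passing some of these parameters through `loadConf` (the values shown are illustrative):
+
+```java
+Map<String, Object> config = new HashMap<>();
+config.put("serviceUrl", "pulsar://localhost:6650");
+config.put("numIoThreads", 4);
+config.put("operationTimeoutMs", 15000);
+
+PulsarClient client = PulsarClient.builder()
+        .loadConf(config)
+        .build();
+```
+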
+Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters.
+
+> In addition to client-level configuration, you can also apply [producer](#configure-producer) and [consumer](#configure-consumer) specific configuration as described in sections below.
+
+## Producer
+
+In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic")
+        .create();
+
+// You can then send messages to the broker and topic you specified:
+producer.send("My message".getBytes());
+```
+
+By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schemas).
+
+```java
+Producer<String> stringProducer = client.newProducer(Schema.STRING)
+        .topic("my-topic")
+        .create();
+stringProducer.send("My message");
+```
+
+> Make sure that you close your producers, consumers, and clients when you do not need them.
+> ```java
+> producer.close();
+> consumer.close();
+> client.close();
+> ```
+>
+> Close operations can also be asynchronous:
+> ```java
+> producer.closeAsync()
+>    .thenRun(() -> System.out.println("Producer closed"))
+>    .exceptionally((ex) -> {
+>        System.err.println("Failed to close producer: " + ex);
+>        return null;
+>    });
+> ```
+
+### Configure producer
+
+If you instantiate a `Producer` object by specifying only a topic name, as in the example above, the producer uses the default configuration. 
+
+When you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
+
+Type | Name| <div style="width:300px">Description</div>|  Default
+|---|---|---|---
+String|	`topicName`|	Topic name| null|
+String|`producerName`|Producer name| null
+long|`sendTimeoutMs`|Message send timeout in ms.<br/><br/>If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs.|30000
+boolean|`blockIfQueueFull`|If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer block, rather than failing and throwing errors. <br/><br>If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of producer fail and `ProducerQueueIsFullError` exceptions occur.<br/><br/>The `MaxPendingMessages` parameter determines the size of the outgoing message queue.|false
+int|`maxPendingMessages`|The maximum size of a queue holding pending messages.<br/><br/>For example, a message waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker). <br/><br/>By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`.|1000
+int|`maxPendingMessagesAcrossPartitions`|The maximum number of pending messages across partitions. <br/><br/>Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value.|50000
+MessageRoutingMode|`messageRoutingMode`|Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).<br/><br/> Apply the logic only when setting no key on messages. <br/><br/>Available options are as follows: <br/><br/><li>`pulsar.RoundRobinDistribution`: round robin<br/><br/> <li>`pulsar.UseSinglePartition`: publish all messages to a single partition<br/><br/><li>`pulsar.CustomPartition`: a custom partitioning scheme|`pulsar.RoundRob [...]
+HashingScheme|`hashingScheme`|Hashing function determining the partition where you publish a particular message (**partitioned topics only**).<br/><br/>Available options are as follows:<br/><br/><li> `pulsar.JavaStringHash`: the equivalent of `String.hashCode()` in Java<br/><br/><li> `pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function<br/><br/><li>`pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.or [...]
+ProducerCryptoFailureAction|`cryptoFailureAction`|Producer should take action when encryption fails.<br/><br/><li>**FAIL**: if encryption fails, unencrypted messages fail to send.</li><br/><li> **SEND**: if encryption fails, unencrypted messages are sent. |`ProducerCryptoFailureAction.FAIL`
+long|`batchingMaxPublishDelayMicros`|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
+int|`batchingMaxMessages`|The maximum number of messages permitted in a batch.|1000
+boolean|`batchingEnabled`|Enable batching of messages. |true
+CompressionType|`compressionType`|Message data compression type used by a producer. <br/><br/>Available options:<li>[`LZ4`](https://github.com/lz4/lz4)<br/><li>[`ZLIB`](https://zlib.net/)<br/><li>[`ZSTD`](https://facebook.github.io/zstd/)<br/><li>[`SNAPPY`](https://google.github.io/snappy/)| No compression
+
+You can configure parameters if you do not want to use the default configuration.
+
+For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.
+
+```java
+Producer<byte[]> producer = client.newProducer()
+    .topic("my-topic")
+    .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
+    .sendTimeout(10, TimeUnit.SECONDS)
+    .blockIfQueueFull(true)
+    .create();
+```
+
+### Message routing
+
+When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics](cookbooks-partitioned.md) cookbook.
+
+### Async send
+
+You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. Then the client library sends the message to the broker in the background. If the queue is full (max size configurable), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer.
+
+The following is an example.
+
+```java
+producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
+    System.out.printf("Message with ID %s successfully sent", msgId);
+});
+```
+
+As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
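+
+Because the result is a standard [`CompletableFuture`](http://www.baeldung.com/java-completablefuture), you can also attach an error handler; a minimal sketch:
+
+```java
+producer.sendAsync("my-async-message".getBytes())
+    .thenAccept(msgId -> System.out.printf("Message with ID %s successfully sent", msgId))
+    .exceptionally(ex -> {
+        System.err.println("Failed to send message: " + ex);
+        return null;
+    });
+```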
+
+### Configure messages
+
+In addition to a value, you can set additional items on a given message:
+
+```java
+producer.newMessage()
+    .key("my-message-key")
+    .value("my-async-message".getBytes())
+    .property("my-key", "my-value")
+    .property("my-other-key", "my-other-value")
+    .send();
+```
+
+You can terminate the builder chain with `sendAsync()` instead and get a future in return.
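+
+For instance, a minimal sketch of the same message sent asynchronously:
+
+```java
+producer.newMessage()
+    .key("my-message-key")
+    .value("my-async-message".getBytes())
+    .property("my-key", "my-value")
+    .sendAsync()
+    .thenAccept(msgId -> System.out.printf("Message with ID %s successfully sent", msgId));
+```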
+
+## Consumer
+
+In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client)).
+
+Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-modes).
+
+```java
+Consumer consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscribe();
+```
+
+The `subscribe` method will auto subscribe the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment [...]
+
+```java
+while (true) {
+  // Wait for a message
+  Message msg = consumer.receive();
+
+  try {
+      // Do something with the message
+      System.out.printf("Message received: %s", new String(msg.getData()));
+
+      // Acknowledge the message so that it can be deleted by the message broker
+      consumer.acknowledge(msg);
+  } catch (Exception e) {
+      // Message failed to process, redeliver later
+      consumer.negativeAcknowledge(msg);
+  }
+}
+```
+
+### Configure consumer
+
+If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration. 
+
+When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
+
+Type | Name| <div style="width:300px">Description</div>|  Default
+|---|---|---|---
+Set&lt;String&gt;|	`topicNames`|	Topic name|	Sets.newTreeSet()
+Pattern|   `topicsPattern`|	Topic pattern	|None
+String|	`subscriptionName`|	Subscription name|	None
+SubscriptionType| `subscriptionType`|	Subscription type <br/><br/>Three subscription types are available:<li>Exclusive</li><li>Failover</li><li>Shared</li>|SubscriptionType.Exclusive
+int | `receiverQueueSize` | Size of a consumer's receiver queue. <br/><br/>For example, the number of messages accumulated by a consumer before an application calls `Receive`. <br/><br/>A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.| 1000
+long|`acknowledgementsGroupTimeMicros`|Group a consumer acknowledgment for a specified time.<br/><br/>By default, a consumer uses 100ms grouping time to send out acknowledgments to a broker.<br/><br/>Setting a group time of 0 sends out acknowledgments immediately. <br/><br/>A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure.|TimeUnit.MILLISECONDS.toMicros(100)
+long|`negativeAckRedeliveryDelayMicros`|Delay to wait before redelivering messages that failed to be processed.<br/><br/> When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout. |TimeUnit.MINUTES.toMicros(1)
+int |`maxTotalReceiverQueueSizeAcrossPartitions`|The max total receiver queue size across partitions.<br/><br/>This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value.|50000
+String|`consumerName`|Consumer name|null
+long|`ackTimeoutMillis`|Timeout of unacked messages (0 means the ack timeout is disabled)|0
+long|`tickDurationMillis`|Granularity of the ack-timeout redelivery.<br/><br/>Using a higher `tickDurationMillis` reduces the memory overhead to track messages when setting ack-timeout to a bigger value (for example, 1 hour).|1000
+int|`priorityLevel`|Priority level for a consumer to which a broker gives more priority while dispatching messages in the shared subscription mode. <br/><br/>The broker follows descending priorities. For example, 0=max-priority, 1, 2,...<br/><br/>In shared subscription mode, the broker **first dispatches messages to the max priority level consumers if they have permits**. Otherwise, the broker considers next priority level consumers.<br/><br/> **Example 1**<br/><br/>If a subscription has [...]
+ConsumerCryptoFailureAction|`cryptoFailureAction`|Consumer should take action when it receives a message that can not be decrypted.<br/><br/><li>**FAIL**: this is the default option to fail messages until crypto succeeds.</li><br/><li> **DISCARD**:silently acknowledge and not deliver message to an application.</li><br/><li>**CONSUME**: deliver encrypted messages to applications. It is the application's responsibility to decrypt the message.<br/><br/>The decompression of message fails. <b [...]
+SortedMap&lt;String, String&gt;|`properties`|Name/value properties of this consumer.<br/><br/>`properties` is application-defined metadata attached to a consumer. <br/><br/>When getting topic stats, this metadata is associated with the consumer stats for easier identification.|new TreeMap<>()
+boolean|`readCompacted`|If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading a full message backlog of a topic.<br/><br/> A consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message when compacting backlog. Beyond that point, send messages as normal.<br/><br/>Only enabling `readCompacted` on subscriptions to persistent topics, which have a single active consumer (like failure or e [...]
+SubscriptionInitialPosition|`subscriptionInitialPosition`|Initial position at which to set cursor when subscribing to a topic at first time.|SubscriptionInitialPosition.Latest
+int|`patternAutoDiscoveryPeriod`|Topic auto discovery period, in minutes, when subscribing to topics with a pattern.<br/><br/>The default and minimum value is 1 minute.|1
+RegexSubscriptionMode|`regexSubscriptionMode`|When subscribing to a topic using a regular expression, you can pick a certain type of topics.<br/><br/><li>**PersistentOnly**: only subscribe to persistent topics.</li><br/><li>**NonPersistentOnly**: only subscribe to non-persistent topics.</li><br/><li>**AllTopics**: subscribe to both persistent and non-persistent topics.</li>|RegexSubscriptionMode.PersistentOnly
+DeadLetterPolicy|`deadLetterPolicy`|Dead letter policy for consumers.<br/><br/>By default, some messages are probably redelivered many times, even to the extent that it never stops.<br/><br/>By using the dead letter mechanism, messages have the max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.<br/><br/>You can enable the dead letter mechanism by setting `deadLetterPolicy`.<br/><br/>**Exa [...]
+boolean|`autoUpdatePartitions`|If `autoUpdatePartitions` is enabled, a consumer automatically subscribes to newly created partitions.<br/><br/>**Note**: this is only for partitioned consumers.|true
+boolean|`replicateSubscriptionState`|If `replicateSubscriptionState` is enabled, a subscription state is replicated to geo-replicated clusters.|false
+
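+As an example, here is a hedged sketch of enabling the dead letter mechanism described in the `deadLetterPolicy` row above (the topic, subscription, and redelivery count are illustrative):
+
+```java
+Consumer consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Shared)
+        .ackTimeout(10, TimeUnit.SECONDS)
+        .deadLetterPolicy(DeadLetterPolicy.builder()
+            .maxRedeliverCount(10)
+            .build())
+        .subscribe();
+```
+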
+You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class. 
+
+The following is an example.
+
+```java
+Consumer consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .ackTimeout(10, TimeUnit.SECONDS)
+        .subscriptionType(SubscriptionType.Exclusive)
+        .subscribe();
+```
+
+### Async receive
+
+The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available.
+
+The following is an example.
+
+```java
+CompletableFuture<Message> asyncMessage = consumer.receiveAsync();
+```
+
+Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
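+
+For example, a minimal sketch of handling that future (using `acknowledgeAsync` so the callback does not block):
+
+```java
+consumer.receiveAsync().thenAccept(msg -> {
+    System.out.printf("Message received: %s", new String(msg.getData()));
+    consumer.acknowledgeAsync(msg);
+});
+```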
+
+### Batch receive
+
+Use `batchReceive` to receive multiple messages for each call. 
+
+The following is an example.
+
+```java
+Messages messages = consumer.batchReceive();
+for (Message message : messages) {
+  // do something
+}
+consumer.acknowledge(messages);
+```
+
+> Note:
+>
+> Batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
+>
+> A batch receive completes when any of the following conditions is met: the maximum number of messages is reached, the maximum number of bytes is reached, or the wait timeout expires.
+>
+> ```java
+> Consumer consumer = client.newConsumer()
+>         .topic("my-topic")
+>         .subscriptionName("my-subscription")
+>         .batchReceivePolicy(BatchReceivePolicy.builder()
+>              .maxNumMessages(100)
+>              .maxNumBytes(1024 * 1024)
+>              .timeout(200, TimeUnit.MILLISECONDS)
+>              .build())
+>         .subscribe();
+> ```
+> The default batch receive policy is:
+> ```java
+> BatchReceivePolicy.builder()
+>     .maxNumMessages(-1)
+>     .maxNumBytes(10 * 1024 * 1024)
+>     .timeout(100, TimeUnit.MILLISECONDS)
+>     .build();
+> ```
+
+### Multi-topic subscriptions
+
+In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.
+
+The following are some examples.
+
+```java
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.regex.Pattern;
+
+ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
+        .subscriptionName(subscription);
+
+// Subscribe to all topics in a namespace
+Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
+Consumer allTopicsConsumer = consumerBuilder
+        .topicsPattern(allTopicsInNamespace)
+        .subscribe();
+
+// Subscribe to a subset of topics in a namespace, based on regex
+Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
+Consumer someTopicsConsumer = consumerBuilder
+        .topicsPattern(someTopicsInNamespace)
+        .subscribe();
+```
+
+You can also subscribe to an explicit list of topics (across namespaces if you wish):
+
+```java
+List<String> topics = Arrays.asList(
+        "topic-1",
+        "topic-2",
+        "topic-3"
+);
+
+Consumer multiTopicConsumer = consumerBuilder
+        .topics(topics)
+        .subscribe();
+
+// Alternatively:
+Consumer multiTopicConsumer = consumerBuilder
+        .topics(
+            "topic-1",
+            "topic-2",
+            "topic-3"
+        )
+        .subscribe();
+```
+
+You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.
+
+```java
+Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
+consumerBuilder
+        .topicsPattern(allTopicsInNamespace)
+        .subscribeAsync()
+        .thenAccept(this::receiveMessageFromConsumer);
+
+private void receiveMessageFromConsumer(Consumer consumer) {
+    consumer.receiveAsync().thenAccept(message -> {
+                // Do something with the received message
+                receiveMessageFromConsumer(consumer);
+            });
+}
+```
+
+### Subscription modes
+
+Pulsar has various [subscription modes](concepts-messaging.md#subscription-modes) to match different scenarios. A topic can have multiple subscriptions with different subscription modes. However, a subscription can only have one subscription mode at a time.
+
+A subscription is identified by its subscription name and can have only one subscription mode at a time. You cannot change the subscription mode unless all existing consumers of this subscription are offline.
+
+Different subscription modes have different message distribution modes. This section describes the differences of subscription modes and how to use them.
+
+To better describe their differences, assume you have a topic named "my-topic" and a producer that has published 10 messages.
+
+```java
+Producer<String> producer = client.newProducer(Schema.STRING)
+        .topic("my-topic")
+        .enableBatching(false)
+        .create();
+// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
+producer.newMessage().key("key-1").value("message-1-1").send();
+producer.newMessage().key("key-1").value("message-1-2").send();
+producer.newMessage().key("key-1").value("message-1-3").send();
+producer.newMessage().key("key-2").value("message-2-1").send();
+producer.newMessage().key("key-2").value("message-2-2").send();
+producer.newMessage().key("key-2").value("message-2-3").send();
+producer.newMessage().key("key-3").value("message-3-1").send();
+producer.newMessage().key("key-3").value("message-3-2").send();
+producer.newMessage().key("key-4").value("message-4-1").send();
+producer.newMessage().key("key-4").value("message-4-2").send();
+```
+
+#### Exclusive
+
+Create a new consumer and subscribe with the `Exclusive` subscription mode.
+
+```java
+Consumer consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Exclusive)
+        .subscribe();
+```
+
+Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.
+
+> Note:
+>
+> If the topic is a partitioned topic, the first consumer subscribes to all of its partitions; other consumers are not assigned any partitions and receive an error. 
+
+#### Failover
+
+Create new consumers and subscribe with the `Failover` subscription mode.
+
+```java
+Consumer consumer1 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Failover)
+        .subscribe();
+Consumer consumer2 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Failover)
+        .subscribe();
+// consumer1 is the active consumer; consumer2 is the standby consumer.
+// consumer1 receives 5 messages and then crashes; consumer2 takes over as the active consumer.
+```
+
+Multiple consumers can attach to the same subscription, yet only the first consumer is active while the others are standby. When the active consumer is disconnected, messages are dispatched to one of the standby consumers, which then becomes the active consumer. 
+
+If the first active consumer is disconnected after receiving 5 messages, the standby consumer becomes the active consumer. consumer1 receives:
+
+```
+("key-1", "message-1-1")
+("key-1", "message-1-2")
+("key-1", "message-1-3")
+("key-2", "message-2-1")
+("key-2", "message-2-2")
+```
+
+consumer2 receives:
+
+```
+("key-2", "message-2-3")
+("key-3", "message-3-1")
+("key-3", "message-3-2")
+("key-4", "message-4-1")
+("key-4", "message-4-2")
+```
+
+> Note:
+>
+> If a topic is a partitioned topic, each partition has only one active consumer: messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers. 
+
+#### Shared
+
+Create new consumers and subscribe with `Shared` subscription mode:
+
+```java
+Consumer consumer1 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Shared)
+        .subscribe();
+  
+Consumer consumer2 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Shared)
+        .subscribe();
+// Both consumer1 and consumer2 are active consumers.
+```
+
+In shared subscription mode, multiple consumers can attach to the same subscription and messages are delivered in a round robin distribution across consumers.
+
+If a broker dispatches only one message at a time, consumer1 receives the following messages.
+
+```
+("key-1", "message-1-1")
+("key-1", "message-1-3")
+("key-2", "message-2-2")
+("key-3", "message-3-1")
+("key-4", "message-4-1")
+```
+
+consumer2 receives the following messages.
+
+```
+("key-1", "message-1-2")
+("key-2", "message-2-1")
+("key-2", "message-2-3")
+("key-3", "message-3-2")
+("key-4", "message-4-2")
+```
+
+`Shared` subscription is different from the `Exclusive` and `Failover` subscription modes. `Shared` subscription provides better flexibility, but cannot provide an ordering guarantee.
+
+#### Key_shared
+
+`Key_Shared` is a new subscription mode since the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription mode.
+
+```java
+Consumer consumer1 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Key_Shared)
+        .subscribe();
+  
+Consumer consumer2 = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .subscriptionType(SubscriptionType.Key_Shared)
+        .subscribe();
+// Both consumer1 and consumer2 are active consumers.
+```
+
+`Key_Shared` subscription is like `Shared` subscription in that all consumers can attach to the same subscription. But it is different from `Shared` subscription: messages with the same key are delivered to only one consumer, in order. The following shows one possible distribution of messages between consumers (by default, you do not know in advance which keys will be assigned to which consumer, but a given key is assigned to only one consumer at a time).
+
+consumer1 receives the following messages.
+
+```
+("key-1", "message-1-1")
+("key-1", "message-1-2")
+("key-1", "message-1-3")
+("key-3", "message-3-1")
+("key-3", "message-3-2")
+```
+
+consumer2 receives the following messages.
+
+```
+("key-2", "message-2-1")
+("key-2", "message-2-2")
+("key-2", "message-2-3")
+("key-4", "message-4-1")
+("key-4", "message-4-2")
+```
+
+> Note:
+>
+> If a message key is not specified, messages without keys are dispatched to one consumer in order by default.
+
+## Reader 
+
+With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and read all messages from a specified message onward. The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic, a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}, and {@inject: javadoc:ReaderConfiguration:/client/org/apache/pulsar/client/a [...]
+
+The following is an example.
+
+```java
+byte[] msgIdBytes = // Some message ID byte array
+MessageId id = MessageId.fromByteArray(msgIdBytes);
+Reader reader = pulsarClient.newReader()
+        .topic(topic)
+        .startMessageId(id)
+        .create();
+
+while (true) {
+    Message message = reader.readNext();
+    // Process message
+}
+```
+
+In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message is identified by `msgIdBytes` (how that value is obtained depends on the application).
+
+The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.
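+
+For example, a minimal sketch of a reader that starts from the earliest available message:
+
+```java
+Reader reader = pulsarClient.newReader()
+        .topic("my-topic")
+        .startMessageId(MessageId.earliest)
+        .create();
+```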
+
+When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`; a short sketch follows the table.
+
+| Type | Name | <div style="width:300px">Description</div> | Default
+|---|---|---|---
+String|`topicName`|Topic name. |None
+int|`receiverQueueSize`|Size of a consumer's receiver queue.<br/><br/>For example, the number of messages that can be accumulated by a consumer before an application calls `Receive`.<br/><br/>A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.|1000
+ReaderListener&lt;T&gt;|`readerListener`|A listener that is called for each message received.|None
+String|`readerName`|Reader name.|null
+String|`subscriptionRolePrefix`|Prefix of subscription role. |null
+CryptoKeyReader|`cryptoKeyReader`|Interface that abstracts the access to a key store.|null
+ConsumerCryptoFailureAction|`cryptoFailureAction`|Consumer should take action when it receives a message that can not be decrypted.<br/><br/><li>**FAIL**: this is the default option to fail messages until crypto succeeds.</li><br/><li> **DISCARD**: silently acknowledge and not deliver message to an application.</li><br/><li>**CONSUME**: deliver encrypted messages to applications. It is the application's responsibility to decrypt the message.<br/><br/>The message decompression fails. <br/ [...]
+boolean|`readCompacted`|If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than a full message backlog of a topic.<br/><br/> A consumer only sees the latest value for each key in the compacted topic, up until reaching the point in the topic message when compacting backlog. Beyond that point, send messages as normal.<br/><br/>`readCompacted` can only be enabled on subscriptions to persistent topics, which have a single active consumer (for example, failur [...]
+boolean|`resetIncludeHead`|If set to true, the first message to be returned is the one specified by `messageId`.<br/><br/>If set to false, the first message to be returned is the one next to the message specified by `messageId`.|false
+
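+These options can equivalently be set through `ReaderBuilder` methods; a minimal sketch (the values are illustrative):
+
+```java
+Reader reader = pulsarClient.newReader()
+        .topic("my-topic")
+        .startMessageId(MessageId.earliest)
+        .readerName("my-reader")
+        .receiverQueueSize(2000)
+        .readCompacted(true)
+        .create();
+```
+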
+## Schema
+
+In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic(topic)
+        .create();
+```
+
+The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).
+
+### Schema example
+
+Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:
+
+```java
+public class SensorReading {
+    public float temperature;
+
+    public SensorReading(float temperature) {
+        this.temperature = temperature;
+    }
+
+    // A no-arg constructor is required
+    public SensorReading() {
+    }
+
+    public float getTemperature() {
+        return temperature;
+    }
+
+    public void setTemperature(float temperature) {
+        this.temperature = temperature;
+    }
+}
+```
+
+You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like this:
+
+```java
+Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
+        .topic("sensor-readings")
+        .create();
+```
+
+The following schema formats are currently available for Java:
+
+* No schema or the byte array schema (which can be applied using `Schema.BYTES`):
+
+  ```java
+  Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
+        .topic("some-raw-bytes-topic")
+        .create();
+  ```
+
+  Or, equivalently:
+
+  ```java
+  Producer<byte[]> bytesProducer = client.newProducer()
+        .topic("some-raw-bytes-topic")
+        .create();
+  ```
+
+* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:
+
+  ```java
+  Producer<String> stringProducer = client.newProducer(Schema.STRING)
+        .topic("some-string-topic")
+        .create();
+  ```
+
+* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.
+
+  ```java
+  Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
+        .topic("some-pojo-topic")
+        .create();
+  ```
+
+* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:
+
+  ```java
+  Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
+        .topic("some-protobuf-topic")
+        .create();
+  ```
+
+* Define Avro schemas with `Schema.AVRO`. The following code snippet demonstrates how to create and use Avro schema.
+  
+  ```java
+  Producer<MyAvro> avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
+        .topic("some-avro-topic")
+        .create();
+  ```
+
+## Authentication
+
+Pulsar currently supports two authentication schemes: [TLS](security-tls-authentication.md) and [Athenz](security-athenz.md). You can use the Pulsar Java client with both.
+
+### TLS Authentication
+
+To use [TLS](security-tls-authentication.md), you need to enable TLS with `enableTls(true)`, point your Pulsar client to a TLS cert path, and provide paths to cert and key files.
+
+The following is an example.
+
+```java
+Map<String, String> authParams = new HashMap<>();
+authParams.put("tlsCertFile", "/path/to/client-cert.pem");
+authParams.put("tlsKeyFile", "/path/to/client-key.pem");
+
+Authentication tlsAuth = AuthenticationFactory
+        .create(AuthenticationTls.class.getName(), authParams);
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar+ssl://my-broker.com:6651")
+        .enableTls(true)
+        .tlsTrustCertsFilePath("/path/to/cacert.pem")
+        .authentication(tlsAuth)
+        .build();
+```
+
+### Athenz
+
+To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:
+
+* `tenantDomain`
+* `tenantService`
+* `providerDomain`
+* `privateKey`
+
+You can also set an optional `keyId`. The following is an example.
+
+```java
+Map<String, String> authParams = new HashMap<>();
+authParams.put("tenantDomain", "shopping"); // Tenant domain name
+authParams.put("tenantService", "some_app"); // Tenant service name
+authParams.put("providerDomain", "pulsar"); // Provider domain name
+authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
+authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")
+
+Authentication athenzAuth = AuthenticationFactory
+        .create(AuthenticationAthenz.class.getName(), authParams);
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar+ssl://my-broker.com:6651")
+        .enableTls(true)
+        .tlsTrustCertsFilePath("/path/to/cacert.pem")
+        .authentication(athenzAuth)
+        .build();
+```
+
+> #### Supported pattern formats
+> The `privateKey` parameter supports the following three pattern formats:
+> * `file:///path/to/file`
+> * `file:/path/to/file`
+> * `data:application/x-pem-file;base64,<base64-encoded value>`
diff --git a/site2/website/versioned_docs/version-2.5.2/client-libraries-node.md b/site2/website/versioned_docs/version-2.5.2/client-libraries-node.md
new file mode 100644
index 0000000..389a2d2
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/client-libraries-node.md
@@ -0,0 +1,402 @@
+---
+id: version-2.5.2-client-libraries-node
+title: The Pulsar Node.js client
+sidebar_label: Node.js
+original_id: client-libraries-node
+---
+
+The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js.
+
+## Installation
+
+You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/).
+
+### Requirements
+The Pulsar Node.js client library is based on the C++ client library.
+Follow [these instructions](client-libraries-cpp.md#compilation) and install the Pulsar C++ client library.
+
+### Compatibility
+
+Compatibility between each version of the Node.js client and the C++ client is as follows:
+
+| Node.js client | C++ client     |
+| :------------- | :------------- |
+| 1.0.0          | 2.3.0 or later |
+
+If an incompatible version of the C++ client is installed, you may fail to build or run this library.
+
+### Installation using npm
+
+Install the `pulsar-client` library via [npm](https://www.npmjs.com/):
+
+```shell
+$ npm install pulsar-client
+```
+
+> #### Note
+> 
+> This library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library.
+
+## Connection URLs
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here is an example for `localhost`:
+
+```http
+pulsar://localhost:6650
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL will look something like this:
+
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Create a client
+
+In order to interact with Pulsar, you will first need a client object. You can create a client instance with the `new` operator and the `Client` constructor, passing in a client options object (more on configuration [below](#client-configuration)).
+
+Here is an example:
+
+```JavaScript
+const Pulsar = require('pulsar-client');
+
+(async () => {
+  const client = new Pulsar.Client({
+    serviceUrl: 'pulsar://localhost:6650',
+  });
+  
+  await client.close();
+})();
+```
+
+### Client configuration
+
+The following configurable parameters are available for Pulsar clients:
+
+| Parameter | Description | Default |
+| :-------- | :---------- | :------ |
+| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. |  |
+| `authentication` | Configure the authentication provider. (default: no authentication). See [TLS Authentication](security-tls-authentication.md) for more info. | |
+| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30 |
+| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 |
+| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 |
+| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 |
+| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | |
+| `tlsValidateHostname` | Whether to enable TLS hostname verification. | `false` |
+| `tlsAllowInsecureConnection` | Whether the Pulsar client accepts an untrusted TLS certificate from the broker. | `false` |
+| `statsIntervalInSeconds` | Interval between stats updates. Stats are activated when the value is positive. Set to at least 1 second. | 600 |
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object.
+
+Here is an example:
+
+```JavaScript
+const producer = await client.createProducer({
+  topic: 'my-topic',
+});
+
+await producer.send({
+  data: Buffer.from("Hello, Pulsar"),
+});
+
+await producer.close();
+```
+
+> #### Promise operation
+> When you create a new Pulsar producer, the operation returns a `Promise` object that resolves to a producer instance or rejects with an error.  
+> This example uses the `await` operator instead of handler functions.
+
+### Producer operations
+
+Pulsar Node.js producers have the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `send(Object)` | Publishes a [message](#messages) to the producer's topic. The returned Promise resolves when the message is successfully acknowledged by the Pulsar broker, or rejects with an error. | `Promise<null>` |
+| `flush()` | Sends messages from the send queue to the Pulsar broker. The returned Promise resolves when all messages have been acknowledged by the Pulsar broker, or rejects with an error. | `Promise<null>` |
+| `close()` | Closes the producer and releases all resources allocated to it. Once `close()` is called, no more messages are accepted from the publisher. The returned Promise resolves once all pending publish requests have been persisted by Pulsar, or rejects with an error; if an error is thrown, no pending writes are retried. | `Promise<null>` |
+
+### Producer configuration
+
+| Parameter | Description | Default |
+| :-------- | :---------- | :------ |
+| `topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages. | |
+| `producerName` | A name for the producer. If you do not explicitly assign a name, Pulsar will automatically generate a globally unique name.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. | |
+| `sendTimeoutMs` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `sendTimeoutMs` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30000 |
+| `initialSequenceId` | The initial sequence ID for messages. The producer assigns a sequence ID to each message, starting from this value and incrementing it for each message sent. | |
+| `maxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `send` method will fail *unless* `blockIfQueueFull` is set to `true`. | 1000 |
+| `maxPendingMessagesAcrossPartitions` | The maximum size of the pending message queue summed across all partitions. | 50000 |
+| `blockIfQueueFull` | If set to `true`, the producer's `send` method will wait when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `maxPendingMessages` parameter); if set to `false` (the default), `send` operations will fail and throw an error when the queue is full. | `false` |
+| `messageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-messaging.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`RoundRobinDistribution`), or publishing all messages to a single partition (`UseSinglePartition`, the default). | `UseSinglePartition` |
+| `hashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `JavaStringHash` (the equivalent of `String.hashCode()` in Java), `Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library). | `BoostHash` |
+| `compressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4) and [`Zlib`](https://zlib.net/). | No compression |
+| `batchingEnabled` | If set to `true`, the producer sends messages in batches. | `true` |
+| `batchingMaxPublishDelayMs` | The maximum delay, in milliseconds, before a batch of messages is sent. | 10 |
+| `batchingMaxMessages` | The maximum number of messages in a batch. | 1000 |
+| `properties` | The metadata attached to the producer. | |
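+
+For illustration, a producer created with several of these options might look like the following sketch (the option values here are arbitrary):
+
+```JavaScript
+const producer = await client.createProducer({
+  topic: 'my-topic',
+  producerName: 'my-producer',
+  sendTimeoutMs: 30000,
+  blockIfQueueFull: true,
+  batchingEnabled: true,
+  batchingMaxPublishDelayMs: 10,
+  batchingMaxMessages: 1000,
+});
+```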
+
+### Producer example
+
+This example creates a Node.js producer for the `my-topic` topic and sends 10 messages to that topic:
+
+```JavaScript
+const Pulsar = require('pulsar-client');
+
+(async () => {
+  // Create a client
+  const client = new Pulsar.Client({
+    serviceUrl: 'pulsar://localhost:6650',
+  });
+
+  // Create a producer
+  const producer = await client.createProducer({
+    topic: 'my-topic',
+  });
+
+  // Send messages
+  for (let i = 0; i < 10; i += 1) {
+    const msg = `my-message-${i}`;
+    producer.send({
+      data: Buffer.from(msg),
+    });
+    console.log(`Sent message: ${msg}`);
+  }
+  await producer.flush();
+
+  await producer.close();
+  await client.close();
+})();
+```
+
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Node.js consumers using a consumer configuration object.
+
+Here is an example:
+
+```JavaScript
+const consumer = await client.subscribe({
+  topic: 'my-topic',
+  subscription: 'my-subscription',
+});
+
+const msg = await consumer.receive();
+console.log(msg.getData().toString());
+consumer.acknowledge(msg);
+
+await consumer.close();
+```
+
+> #### Promise operation
+> When you create a new Pulsar consumer, the operation returns a `Promise` object that resolves to a consumer instance or rejects with an error.  
+> This example uses the `await` operator instead of an executor function.
+
+### Consumer operations
+
+Pulsar Node.js consumers have the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `receive()` | Receives a single message from the topic. The returned `Promise` resolves with the message object when a message is available. | `Promise<Object>` |
+| `receive(Number)` | Receives a single message from the topic, with a timeout specified in milliseconds. | `Promise<Object>` |
+| `acknowledge(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message object. | `void` |
+| `acknowledgeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID object. | `void` |
+| `acknowledgeCumulative(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `acknowledgeCumulative` method returns void and sends the ack to the broker asynchronously. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `void` |
+| `acknowledgeCumulativeId(Object)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message ID. | `void` |
+| `close()` | Closes the consumer, disabling its ability to receive messages from the broker. | `Promise<null>` |
+
+### Consumer configuration
+
+| Parameter | Description | Default |
+| :-------- | :---------- | :------ |
+| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages. | |
+| `subscription` | The subscription name for this consumer. | |
+| `subscriptionType` | Available options are `Exclusive`, `Shared`, and `Failover`. | `Exclusive` |
+| `ackTimeoutMs` | Acknowledgement timeout in milliseconds. | 0 |
+| `receiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000 |
+| `receiverQueueSizeAcrossPartitions` | Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value. | 50000 |
+| `consumerName` | The name of the consumer. Currently (as of v2.4.1), [failover](concepts-messaging.md#failover) mode uses the consumer name for ordering. | |
+| `properties` | The metadata of consumer. | |
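+
+As a sketch, a consumer configured with a few of these options might be created like this (the values are illustrative):
+
+```JavaScript
+const consumer = await client.subscribe({
+  topic: 'my-topic',
+  subscription: 'my-subscription',
+  subscriptionType: 'Shared',
+  ackTimeoutMs: 10000,
+  receiverQueueSize: 1000,
+});
+```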
+
+### Consumer example
+
+This example creates a Node.js consumer with the `my-subscription` subscription on the `my-topic` topic, receives 10 messages, prints the content of each message that arrives, and acknowledges each message to the Pulsar broker:
+
+```JavaScript
+const Pulsar = require('pulsar-client');
+
+(async () => {
+  // Create a client
+  const client = new Pulsar.Client({
+    serviceUrl: 'pulsar://localhost:6650',
+  });
+
+  // Create a consumer
+  const consumer = await client.subscribe({
+    topic: 'my-topic',
+    subscription: 'my-subscription',
+    subscriptionType: 'Exclusive',
+  });
+
+  // Receive messages
+  for (let i = 0; i < 10; i += 1) {
+    const msg = await consumer.receive();
+    console.log(msg.getData().toString());
+    consumer.acknowledge(msg);
+  }
+
+  await consumer.close();
+  await client.close();
+})();
+```
+
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recently unacked message). You can [configure](#reader-configuration) Node.js readers using a reader configuration object.
+
+Here is an example:
+
+```JavaScript
+const reader = await client.createReader({
+  topic: 'my-topic',
+  startMessageId: Pulsar.MessageId.earliest(),
+});
+
+const msg = await reader.readNext();
+console.log(msg.getData().toString());
+
+await reader.close();
+```
+
+### Reader operations
+
+Pulsar Node.js readers have the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `readNext()` | Receives the next message on the topic (analogous to the `receive` method for [consumers](#consumer-operations)). The returned `Promise` resolves with the message object when a message is available. | `Promise<Object>` |
+| `readNext(Number)` | Receives a single message from the topic, with a timeout specified in milliseconds. | `Promise<Object>` |
+| `hasNext()` | Returns whether the broker has a next message in the target topic. | `Boolean` |
+| `close()` | Closes the reader, disabling its ability to receive messages from the broker. | `Promise<null>` |
+
+### Reader configuration
+
+| Parameter | Description | Default |
+| :-------- | :---------- | :------ |
+| `topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages. | |
+| `startMessageId` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `Pulsar.MessageId.earliest()` (the earliest available message on the topic), `Pulsar.MessageId.latest()` (the latest available message on the topic), or a message ID object for a position that is neither earliest nor latest. | |
+| `receiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `readNext`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000 |
+| `readerName` | The name of the reader. |  |
+| `subscriptionRolePrefix` | The subscription role prefix. | |
+
+### Reader example
+
+This example creates a Node.js reader on the `my-topic` topic, reads 10 messages, and prints the content of each message:
+
+```JavaScript
+const Pulsar = require('pulsar-client');
+
+(async () => {
+  // Create a client
+  const client = new Pulsar.Client({
+    serviceUrl: 'pulsar://localhost:6650',
+    operationTimeoutSeconds: 30,
+  });
+
+  // Create a reader
+  const reader = await client.createReader({
+    topic: 'my-topic',
+    startMessageId: Pulsar.MessageId.earliest(),
+  });
+
+  // read messages
+  for (let i = 0; i < 10; i += 1) {
+    const msg = await reader.readNext();
+    console.log(msg.getData().toString());
+  }
+
+  await reader.close();
+  await client.close();
+})();
+```
+
+## Messages
+
+In the Pulsar Node.js client, you construct a message object to pass to the producer's `send` method.
+
+Here is an example message:
+
+```JavaScript
+const msg = {
+  data: Buffer.from('Hello, Pulsar'),
+  partitionKey: 'key1',
+  properties: {
+    'foo': 'bar',
+  },
+  eventTimestamp: Date.now(),
+  replicationClusters: [
+    'cluster1',
+    'cluster2',
+  ],
+}
+
+await producer.send(msg);
+```
+
+The following keys are available for producer message objects:
+
+| Parameter | Description |
+| :-------- | :---------- |
+| `data` | The actual data payload of the message. |
+| `properties` | An object carrying any application-specific metadata attached to the message. |
+| `eventTimestamp` | The timestamp associated with the message. |
+| `sequenceId` | The sequence ID of the message. |
+| `partitionKey` | The optional key associated with the message (particularly useful for things like topic compaction). |
+| `replicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default. |
+
+### Message object operations
+
+In the Pulsar Node.js client, consumers and readers receive (or read) message objects.
+
+The message object has the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `getTopicName()` | Getter method of topic name. | `String` |
+| `getProperties()` | Getter method of properties. | `Array<Object>` |
+| `getData()` | Getter method of message data. | `Buffer` |
+| `getMessageId()` | Getter method of [message id object](#message-id-object-operations). | `Object` |
+| `getPublishTimestamp()` | Getter method of publish timestamp. | `Number` |
+| `getEventTimestamp()` | Getter method of event timestamp. | `Number` |
+| `getPartitionKey()` | Getter method of partition key. | `String` |
+
+### Message ID object operations
+
+In the Pulsar Node.js client, you can obtain a message ID object from a message object.
+
+The message ID object has the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `serialize()` | Serialize the message id into a Buffer for storing. | `Buffer` |
+| `toString()` | Get message id as String. | `String` |
+
+The message ID class also exposes static methods, which you can access as `Pulsar.MessageId.someStaticMethod`.
+
+The following static methods are available for the message id object:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `earliest()` | MessageId representing the earliest, or oldest available message stored in the topic. | `Object` |
+| `latest()` | MessageId representing the latest, or last published message in the topic. | `Object` |
+| `deserialize(Buffer)` | Deserialize a message id object from a Buffer. | `Object` |
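+
+For example, a reader's position could be persisted by serializing a message ID and restoring it later (a sketch based on the methods above, assuming `msg` is a previously received message):
+
+```JavaScript
+// Serialize the ID of a received message so it can be stored
+const idBuffer = msg.getMessageId().serialize();
+
+// Later, restore the ID and resume reading from that position
+const restoredId = Pulsar.MessageId.deserialize(idBuffer);
+const reader = await client.createReader({
+  topic: 'my-topic',
+  startMessageId: restoredId,
+});
+```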
+
diff --git a/site2/website/versioned_docs/version-2.5.2/client-libraries-python.md b/site2/website/versioned_docs/version-2.5.2/client-libraries-python.md
new file mode 100644
index 0000000..a782898
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/client-libraries-python.md
@@ -0,0 +1,249 @@
+---
+id: version-2.5.2-client-libraries-python
+title: The Pulsar Python client
+sidebar_label: Python
+original_id: client-libraries-python
+---
+
+The Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [`python` subdirectory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code.
+
+## Installation
+
+You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPI](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from source.
+
+### Installation using pip
+
+To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager:
+
+```shell
+$ pip install pulsar-client=={{pulsar:version_number}}
+```
+
+Installation via PyPI is available for the following Python versions:
+
+Platform | Supported Python versions
+:--------|:-------------------------
+MacOS <br />  10.13 (High Sierra), 10.14 (Mojave) <br /> | 2.7, 3.7
+Linux | 2.7, 3.4, 3.5, 3.6, 3.7
+
+### Installing from source
+
+To install the `pulsar-client` library by building from source, follow [these instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That will also build the Python binding for the library.
+
+To install the built Python bindings:
+
+```shell
+$ git clone https://github.com/apache/pulsar
+$ cd pulsar/pulsar-client-cpp/python
+$ sudo python setup.py install
+```
+
+## API Reference
+
+The complete Python API reference is available at [api/python](/api/python).
+
+## Examples
+
+Below you'll find a variety of Python code examples for the `pulsar-client` library.
+
+### Producer example
+
+This creates a Python producer for the `my-topic` topic and sends 10 messages on that topic:
+
+```python
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+
+producer = client.create_producer('my-topic')
+
+for i in range(10):
+    producer.send(('Hello-%d' % i).encode('utf-8'))
+
+client.close()
+```
+
+### Consumer example
+
+This creates a consumer with the `my-subscription` subscription on the `my-topic` topic, listens for incoming messages, prints the content and ID of each message that arrives, and acknowledges each message to the Pulsar broker:
+
+```python
+consumer = client.subscribe('my-topic', 'my-subscription')
+
+while True:
+    msg = consumer.receive()
+    try:
+        print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
+        # Acknowledge successful processing of the message
+        consumer.acknowledge(msg)
+    except:
+        # Message failed to be processed
+        consumer.negative_acknowledge(msg)
+
+client.close()
+```
+
+### Reader interface example
+
+You can use the Pulsar Python API with the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example:
+
+```python
+# MessageId taken from a previously fetched message
+msg_id = msg.message_id()
+
+reader = client.create_reader('my-topic', msg_id)
+
+while True:
+    msg = reader.read_next()
+    print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
+    # No acknowledgment
+```
+
+
+## Schema
+
+### Declaring and validating schema
+
+A schema can be declared by passing a class that inherits
+from `pulsar.schema.Record` and defines the fields as
+class variables. For example:
+
+```python
+from pulsar.schema import *
+
+class Example(Record):
+    a = String()
+    b = Integer()
+    c = Boolean()
+```
+
+With this simple schema definition, we can then create producers,
+consumers, and readers that refer to it.
+
+```python
+producer = client.create_producer(
+                    topic='my-topic',
+                    schema=AvroSchema(Example) )
+
+producer.send(Example(a='Hello', b=1))
+```
+
+When the producer is created, the Pulsar broker will validate that
+the existing topic schema is indeed of "Avro" type and that the
+format is compatible with the schema definition of the `Example`
+class.
+
+If there is a mismatch, the producer creation will raise an
+exception.
+
+Once a producer is created with a certain schema definition,
+it will only accept objects that are instances of the declared
+schema class.
+
+Similarly, for a consumer or reader, the client returns an
+instance of the schema record class rather than the raw
+bytes:
+
+```python
+consumer = client.subscribe(
+                  topic='my-topic',
+                  subscription_name='my-subscription',
+                  schema=AvroSchema(Example) )
+
+while True:
+    msg = consumer.receive()
+    ex = msg.value()
+    try:
+        print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c))
+        # Acknowledge successful processing of the message
+        consumer.acknowledge(msg)
+    except:
+        # Message failed to be processed
+        consumer.negative_acknowledge(msg)
+```
+
+### Supported schema types
+
+There are several built-in schema types that can be used in Pulsar.
+All the definitions are in the `pulsar.schema` package.
+
+| Schema | Notes |
+| ------ | ----- |
+| `BytesSchema` | Get the raw payload as a `bytes` object. No serialization/deserialization is performed. This is the default schema mode |
+| `StringSchema` | Encode/decode payload as a UTF-8 string. Uses `str` objects |
+| `JsonSchema` | Requires a record definition. Serializes the record into a standard JSON payload |
+| `AvroSchema` | Requires a record definition. Serializes in Avro format |
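+
+For instance, the `Example` record defined earlier could also be used with `JsonSchema` instead of `AvroSchema` (a sketch; only the schema argument and the topic name change):
+
+```python
+producer = client.create_producer(
+                    topic='my-json-topic',
+                    schema=JsonSchema(Example) )
+
+producer.send(Example(a='Hello', b=1, c=True))
+```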
+
+### Schema definition reference
+
+The schema definition is done through a class that inherits from
+`pulsar.schema.Record`.
+
+This class can have a number of fields, which can be of either
+`pulsar.schema.Field` type or even another nested `Record`. All the
+field types are also defined in the `pulsar.schema` package and
+match the Avro field types.
+
+| Field Type | Python Type | Notes |
+| ---------- | ----------- | ----- |
+| `Boolean`  | `bool`      |       |
+| `Integer`  | `int`       |       |
+| `Long`     | `int`       |       |
+| `Float`    | `float`     |       |
+| `Double`   | `float`     |       |
+| `Bytes`    | `bytes`     |       |
+| `String`   | `str`       |       |
+| `Array`    | `list`      | Need to specify record type for items |
+| `Map`      | `dict`      | Key is always `String`. Need to specify value type |
+
+Additionally, any Python `Enum` type can be used as a valid field
+type.
+
+#### Fields parameters
+
+When adding a field these parameters can be used in the constructor:
+
+| Argument   | Default | Notes |
+| ---------- | --------| ----- |
+| `default`  | `None`  | Set a default value for the field, e.g. `a = Integer(default=5)` |
+| `required` | `False` | Mark the field as "required". This will set it in the schema accordingly. |
+
+#### Schema definition examples
+
+##### Simple definition
+
+```python
+class Example(Record):
+    a = String()
+    b = Integer()
+    c = Array(String())
+    i = Map(String())
+```
+
+##### Using enums
+
+```python
+from enum import Enum
+
+class Color(Enum):
+    red = 1
+    green = 2
+    blue = 3
+
+class Example(Record):
+    name = String()
+    color = Color
+```
+
+##### Complex types
+
+```python
+class MySubRecord(Record):
+    x = Integer()
+    y = Long()
+    z = String()
+
+class Example(Record):
+    a = String()
+    sub = MySubRecord()
+```
diff --git a/site2/website/versioned_docs/version-2.5.2/concepts-clients.md b/site2/website/versioned_docs/version-2.5.2/concepts-clients.md
new file mode 100644
index 0000000..8c494b6
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/concepts-clients.md
@@ -0,0 +1,82 @@
+---
+id: version-2.5.2-concepts-clients
+title: Pulsar Clients
+sidebar_label: Clients
+original_id: concepts-clients
+---
+
+Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md),  [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications.
+
+Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff.
+
+> #### Custom client libraries
+> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md)
+
+
+## Client setup phase
+
+When an application wants to create a producer/consumer, the Pulsar client library will initiate a setup phase that is composed of two steps:
+
+1. The client will attempt to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) ZooKeeper metadata, will know who is serving the topic or, in case nobody is serving it, will try to assign it to the least loaded broker.
+1. Once the client library has the broker address, it will create a TCP connection (or reuse an existing connection from the pool) and authenticate it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client will send a command to create producer/consumer to the broker, which will comply after having validated the authorization policy.
+
+Whenever the TCP connection breaks, the client will immediately re-initiate this setup phase and will keep trying with exponential backoff to re-establish the producer or consumer until the operation succeeds.
+
+## Reader interface
+
+In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they've been processed.  Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription will begin reading with the first message created afterwards.  Whenever a consumer  [...]
+
+The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:
+
+* The **earliest** available message in the topic
+* The **latest** available message in the topic
+* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.
+
+The reader interface is helpful for use cases like using Pulsar to provide [effectively-once](https://streaml.io/blog/exactly-once/) processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.
+
+Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name.
+
+![The Pulsar consumer and reader interfaces](assets/pulsar-reader-consumer-interfaces.png)
+
+> ### Non-partitioned topics only
+> The reader interface for Pulsar cannot currently be used with [partitioned topics](concepts-messaging.md#partitioned-topics).
+
+Here's a Java example that begins reading from the earliest available message on a topic:
+
+```java
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.MessageId;
+import org.apache.pulsar.client.api.Reader;
+
+// Create a reader on a topic and for a specific message (and onward)
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic("reader-api-test")
+    .startMessageId(MessageId.earliest)
+    .create();
+
+while (true) {
+    Message message = reader.readNext();
+
+    // Process the message
+}
+```
+
+To create a reader that will read from the latest available message:
+
+```java
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic(topic)
+    .startMessageId(MessageId.latest)
+    .create();
+```
+
+To create a reader that will read from some message between earliest and latest:
+
+```java
+byte[] msgIdBytes = // Some byte array
+MessageId id = MessageId.fromByteArray(msgIdBytes);
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic(topic)
+    .startMessageId(id)
+    .create();
+```
diff --git a/site2/website/versioned_docs/version-2.5.2/concepts-messaging.md b/site2/website/versioned_docs/version-2.5.2/concepts-messaging.md
new file mode 100644
index 0000000..9a3c379
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/concepts-messaging.md
@@ -0,0 +1,445 @@
+---
+id: version-2.5.2-concepts-messaging
+title: Messaging Concepts
+sidebar_label: Messaging
+original_id: concepts-messaging
+---
+
+Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern, aka pub-sub. In this pattern, [producers](#producers) publish messages to [topics](#topics). [Consumers](#consumers) can then [subscribe](#subscription-modes) to those topics, process incoming messages, and send an acknowledgement when processing is complete.
+
+Once a subscription has been created, all messages will be [retained](concepts-architecture-overview.md#persistent-storage) by Pulsar, even if the consumer gets disconnected. Retained messages will be discarded only when a consumer acknowledges that they've been successfully processed.
+
+## Messages
+
+Messages are the basic "unit" of Pulsar. They're what producers publish to topics and what consumers then consume from topics (and acknowledge when the message has been processed). Messages are the analogue of letters in a postal service system.
+
+Component | Purpose
+:---------|:-------
+Value / data payload | The data carried by the message. All Pulsar messages carry raw bytes, although message data can also conform to data [schemas](schema-get-started.md)
+Key | Messages can optionally be tagged with keys, which can be useful for things like [topic compaction](concepts-topic-compaction.md)
+Properties | An optional key/value map of user-defined properties
+Producer name | The name of the producer that produced the message (producers are automatically given default names, but you can apply your own explicitly as well)
+Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. A message's sequence ID is its ordering in that sequence.
+Publish time | The timestamp of when the message was published (automatically applied by the producer)
+Event time | An optional timestamp that applications can attach to the message representing when something happened, e.g. when the message was processed. The event time of a message is 0 if none is explicitly set.
+
+
+> For a more in-depth breakdown of Pulsar message contents, see the documentation on Pulsar's [binary protocol](developing-binary-protocol.md).
+
+## Producers
+
+A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker) for processing.
+
+### Send modes
+
+Producers can send messages to brokers either synchronously (sync) or asynchronously (async).
+
+| Mode       | Description                                                                                                                                                                                                                                                                                                                                                              |
+|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Sync send  | The producer will wait for acknowledgement from the broker after sending each message. If acknowledgment isn't received then the producer will consider the send operation a failure.                                                                                                                                                                                    |
+| Async send | The producer will put the message in a blocking queue and return immediately. The client library will then send the message to the broker in the background. If the queue is full (max size [configurable](reference-configuration.md#broker)), the producer could be blocked or fail immediately when calling the API, depending on arguments passed to the producer. |
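+
+As a sketch in Java (using the Java client's `send` and `sendAsync` methods, and assuming an existing `producer`), the two modes look like this:
+
+```java
+// Sync send: blocks until the broker acknowledges the message
+producer.send("my-sync-message".getBytes());
+
+// Async send: returns a CompletableFuture that completes with the message ID
+producer.sendAsync("my-async-message".getBytes())
+        .thenAccept(msgId -> System.out.println("Published: " + msgId));
+```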
+
+### Compression
+
+Messages published by producers can be compressed during transportation in order to save bandwidth. Pulsar currently supports the following types of compression:
+
+* [LZ4](https://github.com/lz4/lz4)
+* [ZLIB](https://zlib.net/)
+* [ZSTD](https://facebook.github.io/zstd/)
+* [SNAPPY](https://google.github.io/snappy/)
+
+### Batching
+
+If batching is enabled, the producer will accumulate and send a batch of messages in a single request. Batching size is defined by the maximum number of messages and maximum publish latency.
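+
+For example, batching might be configured on a Java producer like this (a sketch using the Java client's producer builder, assuming an existing `client`):
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic")
+        .enableBatching(true)
+        .batchingMaxMessages(1000)                          // batch size limit
+        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS) // publish latency limit
+        .create();
+```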
+
+## Consumers
+
+A consumer is a process that attaches to a topic via a subscription and then receives messages.
+
+### Receive modes
+
+Messages can be received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).
+
+| Mode          | Description                                                                                                                                                                                                   |
+|:--------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Sync receive  | A sync receive will be blocked until a message is available.                                                                                                                                                  |
+| Async receive | An async receive will return immediately with a future value---a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java, for example---that completes once a new message is available. |
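+
+In Java, for example, the two receive modes look like this (a sketch, assuming an existing `consumer`):
+
+```java
+// Sync receive: blocks until a message is available
+Message<byte[]> msg = consumer.receive();
+
+// Async receive: returns a CompletableFuture that completes with the next message
+consumer.receiveAsync()
+        .thenAccept(m -> System.out.println(new String(m.getData())));
+```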
+
+### Listeners
+
+Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java.md) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.
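+
+A listener-based consumer might be set up like this in Java (a sketch; the lambda implements the `received` method, and `client` is assumed to exist):
+
+```java
+MessageListener<byte[]> listener = (c, msg) -> {
+    try {
+        System.out.println("Received: " + new String(msg.getData()));
+        c.acknowledge(msg);
+    } catch (PulsarClientException e) {
+        c.negativeAcknowledge(msg);
+    }
+};
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .messageListener(listener)
+        .subscribe();
+```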
+
+### Acknowledgement
+
+When a consumer has consumed a message successfully, the consumer sends an acknowledgement request to the broker, so that the broker will discard the message. Otherwise, it [stores](concepts-architecture-overview.md#persistent-storage) the message.
+
+Messages can be acknowledged either one by one or cumulatively. With cumulative acknowledgement, the consumer only needs to acknowledge the last message it received. All messages in the stream up to (and including) the provided message will not be re-delivered to that consumer.
+
+
+> Cumulative acknowledgement cannot be used with [shared subscription mode](#subscription-modes), because shared mode involves multiple consumers having access to the same subscription.
+
+In the shared subscription mode, messages can be acknowledged individually.
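+
+In Java, for instance, the two acknowledgement styles look like this (a sketch, assuming an existing `consumer` and a received `msg`):
+
+```java
+// Individual acknowledgement: acks only this message
+consumer.acknowledge(msg);
+
+// Cumulative acknowledgement: acks everything up to and including this message
+consumer.acknowledgeCumulative(msg);
+```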
+
+### Negative acknowledgement
+
+When a consumer fails to consume a message and wants to consume it again, the consumer can send a negative acknowledgement to the broker, and the broker will then redeliver the message.
+
+Messages can be negatively acknowledged one by one or cumulatively, depending on the subscription mode.
+
+In the exclusive and failover subscription modes, consumers negatively acknowledge only the last message they have received.
+
+In the shared and Key_Shared subscription modes, you can negatively acknowledge messages individually.
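+
+For example, a Java consumer could negatively acknowledge a message that fails processing (a sketch; `process` is a hypothetical application-level method):
+
+```java
+Message<byte[]> msg = consumer.receive();
+try {
+    process(msg); // hypothetical application-level processing
+    consumer.acknowledge(msg);
+} catch (Exception e) {
+    // Ask the broker to redeliver this message
+    consumer.negativeAcknowledge(msg);
+}
+```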
+
+### Acknowledgement timeout
+
+When a message is not consumed successfully and you want the broker to redeliver it automatically, you can adopt the unacknowledged-message automatic re-delivery mechanism. The client tracks unacknowledged messages within the configured `ackTimeout` window and automatically sends a `redeliver unacknowledged messages` request to the broker when the acknowledgement timeout expires.
+
+> Note    
+> Prefer negative acknowledgement over acknowledgement timeout. Negative acknowledgement controls the re-delivery of individual messages more precisely, and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout.
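+
+An acknowledgement timeout might be configured on a Java consumer like this (a sketch using the Java client builder, assuming an existing `client`):
+
+```java
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .ackTimeout(10, TimeUnit.SECONDS) // redeliver messages not acked within 10s
+        .subscribe();
+```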
+
+### Dead letter topic
+
+Dead letter topic enables you to continue consuming new messages even when some messages cannot be consumed successfully by a consumer. In this mechanism, messages that fail to be consumed are stored in a separate topic, called the dead letter topic. You can decide how to handle the messages in the dead letter topic.
+
+The following example shows how to enable dead letter topic in a Java client using the default dead letter topic:
+
+```java
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+              .topic(topic)
+              .subscriptionName("my-subscription")
+              .subscriptionType(SubscriptionType.Shared)
+              .deadLetterPolicy(DeadLetterPolicy.builder()
+                    .maxRedeliverCount(maxRedeliveryCount)
+                    .build())
+              .subscribe();
+                
+```
+The default dead letter topic uses this format: 
+```
+<topicname>-<subscriptionname>-DLQ
+```
+  
+If you want to specify the name of the dead letter topic, use this Java client example:
+
+```java
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+              .topic(topic)
+              .subscriptionName("my-subscription")
+              .subscriptionType(SubscriptionType.Shared)
+              .deadLetterPolicy(DeadLetterPolicy.builder()
+                    .maxRedeliverCount(maxRedeliveryCount)
+                    .deadLetterTopic("your-topic-name")
+                    .build())
+              .subscribe();
+                
+```
+  
+
+Dead letter topic depends on message re-delivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout. 
+
+> Note    
+> Currently, dead letter topic is enabled only in the shared subscription mode.
+
+## Topics
+
+As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from [producers](reference-terminology.md#producer) to [consumers](reference-terminology.md#consumer). Topic names are URLs that have a well-defined structure:
+
+```http
+{persistent|non-persistent}://tenant/namespace/topic
+```
+
+Topic name component | Description
+:--------------------|:-----------
+`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kinds of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics) (persistent is the default, so if you don't specify a type the topic will be persistent). With persistent topics, all messages are durably [persisted](concepts-architecture-overview.md#persistent-storage) on disk (that means on multiple disks unless the broker is standalone), whereas data for non-persistent topics isn't persisted to storage disks.
+`tenant`             | The topic's tenant within the instance. Tenants are essential to multi-tenancy in Pulsar and can be spread across clusters.
+`namespace`          | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant can have multiple namespaces.
+`topic`              | The final part of the name. Topic names are freeform and have no special meaning in a Pulsar instance.
+
+
+> #### No need to explicitly create new topics
+> You don't need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar will automatically create that topic under the [namespace](#namespaces) provided in the [topic name](#topics).
+
+
+## Namespaces
+
+A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. For example, `my-tenant/app1` is the namespace for the application `app1` of the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace.
+
+## Subscription modes
+
+A subscription is a named configuration rule that determines how messages are delivered to consumers. There are four available subscription modes in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [Key_Shared](#key_shared). These modes are illustrated in the figure below.
+
+![Subscription modes](assets/pulsar-subscription-modes.png)
+
+### Exclusive
+
+In *exclusive* mode, only a single consumer is allowed to attach to the subscription. If more than one consumer attempts to subscribe to a topic using the same subscription, the consumer receives an error.
+
+In the diagram below, only **Consumer A-0** is allowed to consume messages.
+
+> Exclusive mode is the default subscription mode.
+
+![Exclusive subscriptions](assets/pulsar-exclusive-subscriptions.png)
+
+### Failover
+
+In *failover* mode, multiple consumers can attach to the same subscription. The consumers will be lexically sorted by the consumer's name and the first consumer will initially be the only one receiving messages. This consumer is called the *master consumer*.
+
+When the master consumer disconnects, all (non-acked and subsequent) messages will be delivered to the next consumer in line.
+
+In the diagram below, **Consumer-B-0** is the master consumer while **Consumer-B-1** would be the next in line to receive messages if **Consumer-B-0** disconnected.
+
+![Failover subscriptions](assets/pulsar-failover-subscriptions.png)
+
+### Shared
+
+In *shared* or *round robin* mode, multiple consumers can attach to the same subscription. Messages are delivered in a round robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged will be rescheduled for sending to the remaining consumers.
+
+In the diagram below, **Consumer-C-1** and **Consumer-C-2** are able to subscribe to the topic, but **Consumer-C-3** and others could as well.
+
+> #### Limitations of shared mode
+> There are two important things to be aware of when using shared mode:
+> * Message ordering is not guaranteed.
+> * You cannot use cumulative acknowledgment with shared mode.
+
+![Shared subscriptions](assets/pulsar-shared-subscriptions.png)
+
+### Key_Shared
+
+In *Key_Shared* mode, multiple consumers can attach to the same subscription. Messages are distributed across consumers, and messages with the same key or same ordering key are delivered to only one consumer. No matter how many times a message is redelivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer serving some keys of messages changes.
+
+> #### Limitations of Key_Shared mode
+> There are two important things to be aware of when using Key_Shared mode:
+> * You need to specify a key or orderingKey for messages
+> * You cannot use cumulative acknowledgment with Key_Shared mode.
+
+![Key_Shared subscriptions](assets/pulsar-key-shared-subscriptions.png)
+
+**Key_Shared subscription is a beta feature. You can disable it in the broker configuration.**
+
+## Multi-topic subscriptions
+
+When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:
+
+* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
+* By explicitly defining a list of topics
+
+> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces)
+
+When subscribing to multiple topics, the Pulsar client will automatically make a call to the Pulsar API to discover the topics that match the regex pattern/list and then subscribe to all of them. If any of the topics don't currently exist, the consumer will auto-subscribe to them once the topics are created.
+
+> #### No ordering guarantees
+> When a consumer subscribes to multiple topics, all ordering guarantees normally provided by Pulsar on single topics do not hold. If your use case for Pulsar involves any strict ordering requirements, we would strongly recommend against using this feature.
+
+Here are some multi-topic subscription examples for Java:
+
+```java
+import java.util.regex.Pattern;
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+PulsarClient pulsarClient = // Instantiate Pulsar client object
+
+// Subscribe to all topics in a namespace
+Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
+Consumer<byte[]> allTopicsConsumer = pulsarClient.newConsumer()
+                .topicsPattern(allTopicsInNamespace)
+                .subscriptionName("subscription-1")
+                .subscribe();
+
+// Subscribe to a subset of topics in a namespace, based on regex
+Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
+Consumer<byte[]> someTopicsConsumer = pulsarClient.newConsumer()
+                .topicsPattern(someTopicsInNamespace)
+                .subscriptionName("subscription-1")
+                .subscribe();
+```
+
+For code examples, see:
+
+* [Java](client-libraries-java.md#multi-topic-subscriptions)
+
+## Partitioned topics
+
+Normal topics can be served only by a single broker, which limits the topic's maximum throughput. *Partitioned topics* are a special type of topic that can be handled by multiple brokers, which allows for much higher throughput.
+
+Behind the scenes, a partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.
+
+The diagram below illustrates this:
+
+![](assets/partitioning.png)
+
+Here, the topic **Topic1** has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).
+
+Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which partition each message is published to, while the [subscription mode](#subscription-modes) determines which messages go to which consumers.
+
+Decisions about routing and subscription modes can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions while subscription decisions should be guided by application semantics.
+
+There is no difference between partitioned topics and normal topics in terms of how subscription modes work, as partitioning only determines what happens between when a message is published by a producer and processed and acknowledged by a consumer.
+
+Partitioned topics need to be explicitly created via the [admin API](admin-api-overview.md). The number of partitions can be specified when creating the topic.
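+
+For example, a partitioned topic might be created with the `pulsar-admin` CLI like this (a sketch; the topic name and partition count are illustrative):
+
+```shell
+$ bin/pulsar-admin topics create-partitioned-topic \
+  persistent://public/default/my-partitioned-topic \
+  --partitions 4
+```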
+
+### Routing modes
+
+When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.
+
+There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} available:
+
+Mode     | Description 
+:--------|:------------
+`RoundRobinPartition` | If no key is provided, the producer will publish messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not done per individual message; rather, the same partition is used for the duration of a batching delay boundary, to ensure batching is effective. If a key is specified on the message, the partitioned producer will hash the key and assign the message to a particular partition. This is the default mode. 
+`SinglePartition`     | If no key is provided, the producer will randomly pick one single partition and publish all the messages into that partition. If a key is specified on the message, the partitioned producer will hash the key and assign the message to a particular partition.
+`CustomPartition`     | Use a custom message router implementation that will be called to determine the partition for a particular message. Users can create a custom routing mode by using the [Java client](client-libraries-java.md) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
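+
+A Java producer could select a routing mode explicitly like this (a sketch using the Java client builder, assuming an existing `client`):
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("persistent://public/default/my-partitioned-topic")
+        .messageRoutingMode(MessageRoutingMode.RoundRobinPartition)
+        .create();
+```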
+
+### Ordering guarantee
+
+The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee.
+
+If there is a key attached to message, the messages will be routed to corresponding partitions based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either `SinglePartition` or `RoundRobinPartition` mode.
+
+Ordering guarantee | Description | Routing Mode and Key
+:------------------|:------------|:------------
+Per-key-partition  | All the messages with the same key will be in order and be placed in same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and Key is provided by each message.
+Per-producer       | All the messages from the same producer will be in order. | Use `SinglePartition` mode, and no Key is provided for each message.
+
+### Hashing scheme
+
+{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message.
+
+There are two standard hashing functions available: `JavaStringHash` and `Murmur3_32Hash`. 
+The default hashing function for producers is `JavaStringHash`.
+Note that `JavaStringHash` is not useful when producers can come from multiple language clients; in that case, it is recommended to use `Murmur3_32Hash`.
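+
+For instance, a producer might opt into `Murmur3_32Hash` like this (a sketch using the Java client builder, assuming an existing `client`):
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic("persistent://public/default/my-partitioned-topic")
+        .hashingScheme(HashingScheme.Murmur3_32Hash) // portable across language clients
+        .create();
+```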
+
+
+
+## Non-persistent topics
+
+
+By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
+
+Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.
+
+Non-persistent topics have names of this form (note the `non-persistent` in the name):
+
+```http
+non-persistent://tenant/namespace/topic
+```
+
+> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent.md).
+
+In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.
+
+> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.
+
+By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the [`pulsar-admin topics`](reference-pulsar-admin.md#topics) interface.
+
+### Performance
+
+Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as that message is delivered to connected brokers. Producers thus see comparatively low publish latency with non-persistent topics.
+
+### Client API
+
+Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription modes---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.
+
+Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:
+
+```java
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+String npTopic = "non-persistent://public/default/my-topic";
+String subscriptionName = "my-subscription-name";
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic(npTopic)
+        .subscriptionName(subscriptionName)
+        .subscribe();
+```
+
+Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:
+
+```java
+Producer<byte[]> producer = client.newProducer()
+                .topic(npTopic)
+                .create();
+```
+
+## Message retention and expiry
+
+By default, Pulsar message brokers:
+
+* immediately delete *all* messages that have been acknowledged by a consumer, and
+* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.
+
+Pulsar has two features, however, that enable you to override this default behavior:
+
+* Message **retention** enables you to store messages that have been acknowledged by a consumer
+* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged
+
+> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry.md) cookbook.
+
+The diagram below illustrates both concepts:
+
+![Message retention and expiry](assets/retention-expiry.png)
+
+With message retention, shown at the top, a <span style="color: #89b557;">retention policy</span> applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are <span style="color: #bb3b3e;">deleted</span>. Without a retention policy, *all* of the <span style="color: #19967d;">acknowledged messages</span> would be deleted.
+
+With message expiry, shown at the bottom, some messages are <span style="color: #bb3b3e;">deleted</span>, even though they <span style="color: #337db6;">haven't been acknowledged</span>, because they've expired according to the <span style="color: #e39441;">TTL applied to the namespace</span> (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).
+
+## Message deduplication
+
+Message **duplication** occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message ***de*duplication** is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, *even if the message is received more than once*.
+
+The following diagram illustrates what happens when message deduplication is disabled vs. enabled:
+
+![Pulsar message deduplication](assets/message-deduplication.png)
+
+
+Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.
+
+In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.
+
+> Message deduplication is handled at the namespace level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication.md).
+
+
+### Producer idempotency
+
+The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, this is handled at the [broker](reference-terminology.md#broker) level, which means that you don't need to modify your Pulsar client code. Instead, you only need to make administrative changes (see the [Managing message deduplication](cookbooks-deduplication.md) cookbook for a guide).
+
+### Deduplication and effectively-once semantics
+
+Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide [effectively-once](https://streaml.io/blog/exactly-once) processing semantics. Messaging systems that don't offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost.
+
+> More in-depth information can be found in [this post](https://streaml.io/blog/pulsar-effectively-once/) on the [Streamlio blog](https://streaml.io/blog)
+
+## Delayed message delivery
+Delayed message delivery enables you to consume a message later rather than immediately. In this mechanism, a message is stored in BookKeeper after it is published to a broker, the `DelayedDeliveryTracker` maintains a time index (time -> messageId) in memory, and the message is delivered to a consumer once the specified delay has passed.  
+
+Delayed message delivery works only in the Shared subscription mode. In the Exclusive and Failover subscription modes, delayed messages are dispatched immediately.
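+
+For example, a consumer that should receive messages only after their delay elapses must use a Shared subscription. The following is a minimal Java sketch; it assumes an existing `PulsarClient` named `client`, and the topic and subscription names are illustrative:
+
+```java
+Consumer<String> consumer = client.newConsumer(Schema.STRING)
+        .topic("my-topic")
+        .subscriptionName("delayed-sub")
+        // Shared is required; Exclusive/Failover would dispatch delayed messages immediately
+        .subscriptionType(SubscriptionType.Shared)
+        .subscribe();
+```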
+
+The diagram below illustrates the concept of delayed message delivery:
+
+![Delayed Message Delivery](assets/message_delay.png)
+
+The broker stores every message without checking whether it is delayed. When the broker dispatches messages to a consumer, if a message is marked as delayed, it is added to the `DelayedDeliveryTracker` instead of being delivered. The subscription then checks the `DelayedDeliveryTracker` and fetches the messages whose delay has expired.
+
+### Broker 
+Delayed message delivery is enabled by default. You can change it in the broker configuration file as below:
+
+```
+# Whether to enable the delayed delivery for messages.
+# If disabled, messages are immediately delivered and there is no tracking overhead.
+delayedDeliveryEnabled=true
+
+# Control the ticking time for the retry of delayed message delivery,
+# affecting the accuracy of the delivery time compared to the scheduled time.
+# Default is 1 second.
+delayedDeliveryTickTimeMillis=1000
+```
+
+### Producer 
+The following is an example of delayed message delivery for a producer in Java:
+```java
+// message to be delivered at the configured delay interval
+producer.newMessage().deliverAfter(3L, TimeUnit.MINUTES).value("Hello Pulsar!").send();
+```
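+
+The Java client also supports scheduling delivery at an absolute timestamp via `deliverAt`. A minimal sketch (the five-minute offset is illustrative):
+
+```java
+// deliver the message at an absolute timestamp, here 5 minutes from now
+long deliveryTime = System.currentTimeMillis() + TimeUnit.MINUTES.toMillis(5);
+producer.newMessage().deliverAt(deliveryTime).value("Hello Pulsar!").send();
+```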
diff --git a/site2/website/versioned_docs/version-2.5.2/concepts-overview.md b/site2/website/versioned_docs/version-2.5.2/concepts-overview.md
new file mode 100644
index 0000000..df250c2
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/concepts-overview.md
@@ -0,0 +1,31 @@
+---
+id: version-2.5.2-concepts-overview
+title: Pulsar Overview
+sidebar_label: Overview
+original_id: concepts-overview
+---
+
+Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
+
+Key features of Pulsar are listed below:
+
+* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
+* Very low publish and end-to-end latency.
+* Seamless scalability to over a million topics.
+* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
+* Multiple [subscription modes](concepts-messaging.md#subscription-modes) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
+* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
+* A serverless, lightweight computing framework, [Pulsar Functions](functions-overview.md), for stream-native data processing.
+* A serverless connector framework, [Pulsar IO](io-overview.md), built on Pulsar Functions, which makes it easier to move data in and out of Apache Pulsar.
+* [Tiered Storage](concepts-tiered-storage.md), which offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) as the data ages out.
+
+## Contents
+
+- [Messaging Concepts](concepts-messaging.md)
+- [Architecture Overview](concepts-architecture-overview.md)
+- [Pulsar Clients](concepts-clients.md)
+- [Geo Replication](concepts-replication.md)
+- [Multi Tenancy](concepts-multi-tenancy.md)
+- [Authentication and Authorization](concepts-authentication.md)
+- [Topic Compaction](concepts-topic-compaction.md)
+- [Tiered Storage](concepts-tiered-storage.md)
diff --git a/site2/website/versioned_docs/version-2.5.2/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.5.2/concepts-tiered-storage.md
new file mode 100644
index 0000000..ffb1241
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/concepts-tiered-storage.md
@@ -0,0 +1,18 @@
+---
+id: version-2.5.2-concepts-tiered-storage
+title: Tiered Storage
+sidebar_label: Tiered Storage
+original_id: concepts-tiered-storage
+---
+
+Pulsar's segment-oriented architecture allows topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.
+
+One way to alleviate this cost is to use Tiered Storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.
+
+![Tiered Storage](assets/pulsar-tiered-storage.png)
+
+> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
+
+Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long-term storage](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long-term storage is triggered via a REST API or command-line interface. The user passes in the amount of topic data they wish to retain in BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).
+
+> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.5.2/cookbooks-deduplication.md b/site2/website/versioned_docs/version-2.5.2/cookbooks-deduplication.md
new file mode 100644
index 0000000..3afd95e
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/cookbooks-deduplication.md
@@ -0,0 +1,121 @@
+---
+id: version-2.5.2-cookbooks-deduplication
+title: Message deduplication
+sidebar_label: Message deduplication
+original_id: cookbooks-deduplication
+---
+
+**Message deduplication** ensures that each message produced on a Pulsar topic is persisted to disk *only once*, even if the message is produced more than once. Message deduplication is handled automatically on the server side. 
+
+To use message deduplication in Pulsar, you have to [configure](#configure-message-deduplication) your Pulsar brokers and [clients](#pulsar-clients).
+
+> For more details on message deduplication, refer to [Concepts and Architecture](concepts-messaging.md#message-deduplication).
+
+## How it works
+
+You can enable or disable message deduplication on a per-namespace basis. By default, it is *disabled* on all namespaces. You can enable it in the following ways:
+
+* Enable for all namespaces at the broker-level
+* Enable for specific namespaces with the `pulsar-admin namespaces` interface
+
+## Configure message deduplication
+
+You can configure message deduplication in Pulsar using the [`broker.conf`](reference-configuration.md#broker) configuration file. The following deduplication-related parameters are available.
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar [broker](reference-terminology.md#broker). If it is set to `true`, message deduplication is enabled by default on all namespaces; if it is set to `false` (the default), you have to enable or disable deduplication on a per-namespace basis. | `false`
+`brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
+`brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
+`brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)
+
+### Set default value at the broker-level
+
+By default, message deduplication is *disabled* on all Pulsar namespaces. To enable it by default on all namespaces, set the `brokerDeduplicationEnabled` parameter to `true` and re-start the broker.
+
+Regardless of the value of `brokerDeduplicationEnabled`, enabling or disabling deduplication via the Pulsar admin CLI overrides the broker-level default for the specified namespace.
+
+### Enable message deduplication
+
+Though message deduplication is disabled by default at the broker level, you can enable message deduplication for specific namespaces using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) command. You can use the `--enable`/`-e` flag and specify the namespace. The following is an example with `<tenant>/<namespace>`:
+
+```bash
+$ bin/pulsar-admin namespaces set-deduplication \
+  public/default \
+  --enable # or just -e
+```
+
+### Disable message deduplication
+
+Even if you enable message deduplication at the broker level, you can disable message deduplication for a specific namespace using the [`pulsar-admin namespaces set-deduplication`](reference-pulsar-admin.md#namespace-set-deduplication) command. Use the `--disable`/`-d` flag and specify the namespace. The following is an example with `<tenant>/<namespace>`:
+
+```bash
+$ bin/pulsar-admin namespaces set-deduplication \
+  public/default \
+  --disable # or just -d
+```
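+
+If you manage namespaces programmatically, the Java admin client exposes the same switch. A minimal sketch, assuming an existing `PulsarAdmin` instance named `admin`:
+
+```java
+// true enables deduplication for the namespace; false disables it
+admin.namespaces().setDeduplicationStatus("public/default", true);
+```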
+
+## Pulsar clients
+
+If you enable message deduplication in Pulsar brokers, you need to complete the following tasks for your client producers:
+
+1. Specify a name for the producer.
+1. Set the send timeout to `0` (namely, no timeout).
+
+The instructions for Java, Python, and C++ clients are different.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java clients-->
+
+To enable message deduplication on a [Java producer](client-libraries-java.md#producers), set the producer name using the `producerName` setter, and set the timeout to `0` using the `sendTimeout` setter. 
+
+```java
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+import java.util.concurrent.TimeUnit;
+
+PulsarClient pulsarClient = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+Producer producer = pulsarClient.newProducer()
+        .producerName("producer-1")
+        .topic("persistent://public/default/topic-1")
+        .sendTimeout(0, TimeUnit.SECONDS)
+        .create();
+```
+
+<!--Python clients-->
+
+To enable message deduplication on a [Python producer](client-libraries-python.md#producers), set the producer name using `producer_name`, and set the timeout to `0` using `send_timeout_millis`. 
+
+```python
+import pulsar
+
+client = pulsar.Client("pulsar://localhost:6650")
+producer = client.create_producer(
+    "persistent://public/default/topic-1",
+    producer_name="producer-1",
+    send_timeout_millis=0)
+```
+<!--C++ clients-->
+
+To enable message deduplication on a [C++ producer](client-libraries-cpp.md#producer), set the producer name using `setProducerName`, and set the send timeout to `0` using `setSendTimeout`. 
+
+```cpp
+#include <pulsar/Client.h>
+
+using namespace pulsar;
+
+std::string serviceUrl = "pulsar://localhost:6650";
+std::string topic = "persistent://some-tenant/ns1/topic-1";
+std::string producerName = "producer-1";
+
+Client client(serviceUrl);
+
+ProducerConfiguration producerConfig;
+producerConfig.setSendTimeout(0);
+producerConfig.setProducerName(producerName);
+
+Producer producer;
+
+Result result = client.createProducer(topic, producerConfig, producer);
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.5.2/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.5.2/cookbooks-retention-expiry.md
new file mode 100644
index 0000000..0f610df
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/cookbooks-retention-expiry.md
@@ -0,0 +1,291 @@
+---
+id: version-2.5.2-cookbooks-retention-expiry
+title: Message retention and expiry
+sidebar_label: Message retention and expiry
+original_id: cookbooks-retention-expiry
+---
+
+Pulsar brokers are responsible for handling messages that pass through Pulsar, including [persistent storage](concepts-architecture-overview.md#persistent-storage) of messages. By default, for each topic, brokers only retain messages that are in at least one backlog. A backlog is the set of unacknowledged messages for a particular subscription. As a topic can have multiple subscriptions, a topic can have multiple backlogs.
+
+As a consequence, no messages are retained (by default) on a topic that has not had any subscriptions created for it.
+
+(Note that messages that are no longer being stored are not necessarily immediately deleted, and may in fact still be accessible until the next ledger rollover. Because clients cannot predict when rollovers may happen, it is not wise to rely on a rollover not happening at an inconvenient point in time.)
+
+In Pulsar, you can modify this behavior, with namespace granularity, in two ways:
+
+* You can persistently store messages that are not within a backlog (because they've been acknowledged on every existing subscription, or because there are no subscriptions) by setting [retention policies](#retention-policies).
+* Messages that are not acknowledged within a specified timeframe can be automatically acknowledged by specifying the [time to live](#time-to-live-ttl) (TTL).
+
+Pulsar's [admin interface](admin-api-overview.md) enables you to manage both retention policies and TTL with namespace granularity (and thus within a specific tenant and either on a specific cluster or in the [`global`](concepts-architecture-overview.md#global-cluster) cluster).
+
+
+> #### Retention and TTL solve two different problems
+> * Message retention: Keep the data for at least X hours (even if acknowledged)
+> * Time-to-live: Discard data after some time (by automatically acknowledging)
+>
+> Most applications will want to use at most one of these.
+
+
+## Retention policies
+
+By default, when a Pulsar message arrives at a broker it will be stored until it has been acknowledged on all subscriptions, at which point it will be marked for deletion. You can override this behavior and retain even messages that have already been acknowledged on all subscriptions by setting a *retention policy* for all topics in a given namespace. Retention policies are either a *size limit* or a *time limit*.
+
+Retention policies are particularly useful if you intend to exclusively use the Reader interface. Because the Reader interface does not use acknowledgements, messages will never exist within backlogs. Most realistic Reader-only use cases require that retention be configured.
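+
+For example, once retention is configured, a Reader can replay the retained messages from the beginning of a topic. The following is a minimal Java sketch; the service URL and topic name are illustrative:
+
+```java
+import org.apache.pulsar.client.api.*;
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+Reader<byte[]> reader = client.newReader()
+        .topic("persistent://my-tenant/my-ns/my-topic")
+        .startMessageId(MessageId.earliest) // start from the oldest retained message
+        .create();
+
+while (reader.hasMessageAvailable()) {
+    Message<byte[]> msg = reader.readNext();
+    // process msg here
+}
+```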
+
+When you set a size limit of, say, 10 gigabytes, then acknowledged messages in all topics in the namespace will be retained until the size limit for the topic is reached; if you set a time limit of, say, 1 day, then acknowledged messages for all topics in the namespace will be retained for 24 hours. The retention settings apply to all messages on topics that have no subscriptions, and to messages that have been acknowledged on all existing subscriptions.
+
+When a retention limit is exceeded, the oldest messages are marked for deletion until the set of retained messages falls back within the specified limits.
+
+It is also possible to set *unlimited* retention time or size by setting `-1` for either time or size retention.
+
+### Defaults
+
+There are two configuration parameters that you can use to set [instance](reference-terminology.md#instance)-wide defaults for message retention: [`defaultRetentionTimeInMinutes`](reference-configuration.md#broker-defaultRetentionTimeInMinutes) and [`defaultRetentionSizeInMB`](reference-configuration.md#broker-defaultRetentionSizeInMB), both of which default to `0`.
+
+Both of these parameters are in the [`broker.conf`](reference-configuration.md#broker) configuration file.
+
+### Set retention policy
+
+You can set a retention policy for a namespace by specifying the namespace as well as both a size limit *and* a time limit.
+
+#### pulsar-admin
+
+Use the [`set-retention`](reference-pulsar-admin.md#namespaces-set-retention) subcommand and specify a namespace, a size limit using the `-s`/`--size` flag, and a time limit using the `-t`/`--time` flag.
+
+##### Examples
+
+To set a size limit of 10 gigabytes and a time limit of 3 hours for the `my-tenant/my-ns` namespace:
+
+```shell
+$ pulsar-admin namespaces set-retention my-tenant/my-ns \
+  --size 10G \
+  --time 3h
+```
+
+To set retention with a size limit but without a time limit:
+
+```shell
+$ pulsar-admin namespaces set-retention my-tenant/my-ns \
+  --size 1T \
+  --time -1
+```
+
+Retention can be configured to be unlimited both in size and time:
+
+```shell
+$ pulsar-admin namespaces set-retention my-tenant/my-ns \
+  --size -1 \
+  --time -1
+```
+
+
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention}
+
+#### Java
+
+```java
+int retentionTime = 10; // 10 minutes
+int retentionSize = 500; // 500 megabytes
+RetentionPolicies policies = new RetentionPolicies(retentionTime, retentionSize);
+admin.namespaces().setRetention(namespace, policies);
+```
+
+### Get retention policy
+
+You can fetch the retention policy for a namespace by specifying the namespace. The output will be a JSON object with two keys: `retentionTimeInMinutes` and `retentionSizeInMB`.
+
+#### pulsar-admin
+
+Use the [`get-retention`](reference-pulsar-admin.md#namespaces) subcommand and specify the namespace.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces get-retention my-tenant/my-ns
+{
+  "retentionTimeInMinutes": 10,
+  "retentionSizeInMB": 0
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention}
+
+#### Java
+
+```java
+admin.namespaces().getRetention(namespace);
+```
+
+## Backlog quotas
+
+*Backlogs* are sets of unacknowledged messages for a topic that have been stored by bookies. Pulsar stores all unacknowledged messages in backlogs until they are processed and acknowledged.
+
+You can control the allowable size of backlogs, at the namespace level, using *backlog quotas*. Setting a backlog quota involves setting:
+
+* an allowable *size threshold* for each topic in the namespace
+* a *retention policy* that determines which action the [broker](reference-terminology.md#broker) takes if the threshold is exceeded.
+
+The following retention policies are available:
+
+Policy | Action
+:------|:------
+`producer_request_hold` | The broker will hold, and not persist, produce request payloads
+`producer_exception` | The broker will disconnect from the client by throwing an exception
+`consumer_backlog_eviction` | The broker will begin discarding backlog messages
+
+
+> #### Beware the distinction between retention policy types
+> As you may have noticed, there are two definitions of the term "retention policy" in Pulsar, one that applies to persistent storage of messages not in backlogs, and one that applies to messages within backlogs.
+
+
+Backlog quotas are handled at the namespace level and can be managed as shown below.
+
+### Set size thresholds and backlog retention policies
+
+You can set a size threshold and backlog retention policy for all of the topics in a [namespace](reference-terminology.md#namespace) by specifying the namespace, a size limit, and a policy by name.
+
+#### pulsar-admin
+
+Use the [`set-backlog-quota`](reference-pulsar-admin.md#namespaces) subcommand and specify a namespace, a size limit using the `-l`/`--limit` flag, and a retention policy using the `-p`/`--policy` flag.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
+  --limit 2G \
+  --policy producer_request_hold
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/getBacklogQuotaMap}
+
+#### Java
+
+```java
+long sizeLimit = 2147483648L;
+BacklogQuota.RetentionPolicy policy = BacklogQuota.RetentionPolicy.producer_request_hold;
+BacklogQuota quota = new BacklogQuota(sizeLimit, policy);
+admin.namespaces().setBacklogQuota(namespace, quota);
+```
+
+### Get backlog threshold and backlog retention policy
+
+You can check which size threshold and backlog retention policy have been applied to a namespace.
+
+#### pulsar-admin
+
+Use the [`get-backlog-quotas`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-backlog-quotas) subcommand and specify a namespace. Here's an example:
+
+```shell
+$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns
+{
+  "destination_storage": {
+    "limit" : 2147483648,
+    "policy" : "producer_request_hold"
+  }
+}
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap}
+
+#### Java
+
+```java
+Map<BacklogQuota.BacklogQuotaType,BacklogQuota> quotas =
+  admin.namespaces().getBacklogQuotas(namespace);
+```
+
+### Remove backlog quotas
+
+#### pulsar-admin
+
+Use the [`remove-backlog-quota`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-backlog-quota) subcommand and specify a namespace. Here's an example:
+
+```shell
+$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota}
+
+#### Java
+
+```java
+admin.namespaces().removeBacklogQuota(namespace);
+```
+
+### Clear backlog
+
+#### pulsar-admin
+
+Use the [`clear-backlog`](reference-pulsar-admin.md#pulsar-admin-namespaces-clear-backlog) subcommand.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces clear-backlog my-tenant/my-ns
+```
+
+By default, you will be prompted to ensure that you really want to clear the backlog for the namespace. You can override the prompt using the `-f`/`--force` flag.
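+
+#### Java
+
+The Java admin client exposes an equivalent operation. A minimal sketch, assuming an existing `PulsarAdmin` instance named `admin`:
+
+```java
+// Clears the backlog on all subscriptions of all topics in the namespace
+admin.namespaces().clearNamespaceBacklog(namespace);
+```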
+
+## Time to live (TTL)
+
+By default, Pulsar stores all unacknowledged messages forever. This can lead to heavy disk space usage in cases where a lot of messages are going unacknowledged. If disk space is a concern, you can set a time to live (TTL) that determines how long unacknowledged messages will be retained.
+
+### Set the TTL for a namespace
+
+#### pulsar-admin
+
+Use the [`set-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-set-message-ttl) subcommand and specify a namespace and a TTL (in seconds) using the `-ttl`/`--messageTTL` flag.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces set-message-ttl my-tenant/my-ns \
+  --messageTTL 120 # TTL of 2 minutes
+```
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL}
+
+#### Java
+
+```java
+admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);
+```
+
+### Get the TTL configuration for a namespace
+
+#### pulsar-admin
+
+Use the [`get-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-get-message-ttl) subcommand and specify a namespace.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
+60
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL}
+
+#### Java
+
+```java
+admin.namespaces().getNamespaceMessageTTL(namespace);
+```
+
diff --git a/site2/website/versioned_docs/version-2.5.2/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.5.2/cookbooks-tiered-storage.md
new file mode 100644
index 0000000..cd9add0
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/cookbooks-tiered-storage.md
@@ -0,0 +1,296 @@
+---
+id: version-2.5.2-cookbooks-tiered-storage
+title: Tiered Storage
+sidebar_label: Tiered Storage
+original_id: cookbooks-tiered-storage
+---
+
+Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long-term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.
+
+* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support
+[Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS for short)
+for long-term storage. With jclouds, it is easy to add support for more
+[cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.
+
+* Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support filesystem offload for long-term storage.
+With Hadoop, it is easy to add support for more filesystems in the future.
+
+## When should I use Tiered Storage?
+
+Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.
+
+## The offloading mechanism
+
+A topic in Pulsar is backed by a log, known as a managed ledger. This log is composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log. All previous segments are sealed. The data within a segment is immutable. This is known as a segment-oriented architecture.
+
+![Tiered storage](assets/pulsar-tiered-storage.png "Tiered Storage")
+
+The Tiered Storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied, one-by-one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.
+
+On the broker, the administrator must configure the bucket and credentials for the cloud storage service.
+The configured bucket must exist before attempting to offload. If it does not exist, the offload operation will fail.
+
+Pulsar uses multi-part objects to upload the segment data. It is possible that a broker could crash while uploading the data.
+We recommend you add a lifecycle rule to your bucket to expire incomplete multi-part uploads after a day or two to avoid
+being charged for incomplete uploads.
+
+## Configuring the offload driver
+
+Offloading is configured in ```broker.conf```.
+
+At a minimum, the administrator must configure the driver, the bucket, and the authenticating credentials.
+There are also other knobs to configure, such as the bucket region, the maximum block size in backing storage, etc.
+
+Currently, the following driver types are supported:
+
+- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/)
+- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/)
+- `filesystem`: [Filesystem Storage](http://hadoop.apache.org/)
+
+> Driver names are case-insensitive. There is a third driver type, `s3`, which is identical to `aws-s3`,
+> though it requires that you specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful
+> when using an S3-compatible data store other than AWS.
+
+```conf
+managedLedgerOffloadDriver=aws-s3
+```
+
+### "aws-s3" Driver configuration
+
+#### Bucket and Region
+
+Buckets are the basic containers that hold your data.
+Everything that you store in Cloud Storage must be contained in a bucket.
+You can use buckets to organize your data and control access to your data,
+but unlike directories and folders, you cannot nest buckets.
+
+```conf
+s3ManagedLedgerOffloadBucket=pulsar-topic-offload
+```
+
+The bucket region is the region where the bucket is located. It is not a required
+but a recommended configuration. If it is not configured, the default region is used.
+
+With AWS S3, the default region is `US East (N. Virginia)`. The
+[AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) page contains more information.
+
+```conf
+s3ManagedLedgerOffloadRegion=eu-west-3
+```
+
+#### Authentication with AWS
+
+To be able to access AWS S3, you need to authenticate with AWS S3.
+Pulsar does not provide any direct means of configuring authentication for AWS S3,
+but relies on the mechanisms supported by the
+[DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).
+
+Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.
+
+1. Using EC2 instance metadata credentials
+
+If you are on an AWS instance with an instance profile that provides credentials, Pulsar uses these credentials
+if no other mechanism is provided.
+
+2. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.
+
+```bash
+export AWS_ACCESS_KEY_ID=ABC123456789
+export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
+```
+
+> \"export\" is important so that the variables are made available in the environment of spawned processes.
+
+
+3. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.
+
+```bash
+PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024"
+```
+
+4. Set the access credentials in ```~/.aws/credentials```.
+
+```conf
+[default]
+aws_access_key_id=ABC123456789
+aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
+```
+
+5. Assuming an IAM role
+
+If you want to assume an IAM role, you can do so by specifying the following:
+
+```conf
+s3ManagedLedgerOffloadRole=<aws role arn>
+s3ManagedLedgerOffloadRoleSessionName=pulsar-s3-offload
+```
+
+This will use the `DefaultAWSCredentialsProviderChain` for assuming this role.
+
+> The broker must be restarted for credentials specified in `conf/pulsar_env.sh` to take effect.
+
+#### Configuring the size of block read/write
+
+Pulsar also provides some knobs to configure the size of requests sent to AWS S3.
+
+- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes```  configures the maximum size of
+  a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
+- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for
+  each individual read when reading back data from AWS S3. Default is 1MB.
+
+In both cases, these should not be touched unless you know what you are doing.
+
+### "google-cloud-storage" Driver configuration
+
+Buckets are the basic containers that hold your data. Everything that you store in
+Cloud Storage must be contained in a bucket. You can use buckets to organize your data and
+control access to your data, but unlike directories and folders, you cannot nest buckets.
+
+```conf
+gcsManagedLedgerOffloadBucket=pulsar-topic-offload
+```
+
+The bucket region is the region where the bucket is located. It is not a required but
+a recommended configuration. If it is not configured, the default region is used.
+
+For GCS, buckets are created in the `us` multi-regional location by default. The
+[Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) page contains more information.
+
+```conf
+gcsManagedLedgerOffloadRegion=europe-west3
+```
+
+#### Authentication with GCS
+
+The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf`
+for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is
+a JSON file containing the GCS credentials of a service account.
+The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) contains
+more information about how to create this key file for authentication. More information about Google Cloud IAM
+is available [here](https://cloud.google.com/storage/docs/access-control/iam).
+
+These are the usual steps to create the authentication file:
+1. Open the API Console Credentials page.
+2. If it's not already selected, select the project that you're creating credentials for.
+3. To set up a new service account, click New credentials and then select Service account key.
+4. Choose the service account to use for the key.
+5. Download the service account's public/private key as a JSON file that can be loaded by a Google API client library.
+
+```conf
+gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"
+```
+
+#### Configuring the size of block read/write
+
+Pulsar also provides some knobs to configure the size of requests sent to GCS.
+
+- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent
+  during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
+- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual
+  read when reading back data from GCS. Default is 1MB.
+
+In both cases, these should not be touched unless you know what you are doing.
+
+### "filesystem" Driver configuration
+
+
+#### Configure connection address
+
+You can configure the connection address in the `broker.conf` file.
+
+```conf
+fileSystemURI="hdfs://127.0.0.1:9000"
+```
+#### Configure Hadoop profile path
+
+The configuration file is stored in the Hadoop profile path. It contains various settings, such as base path, authentication, and so on.
+
+```conf
+fileSystemProfilePath="../conf/filesystem_offload_core_site.xml"
+```
+
+The model for storing topic data uses `org.apache.hadoop.io.MapFile`. You can use all of the configurations in `org.apache.hadoop.io.MapFile` for Hadoop.
+
+**Example**
+
+```conf
+
+    <property>
+        <name>fs.defaultFS</name>
+        <value></value>
+    </property>
+    
+    <property>
+        <name>hadoop.tmp.dir</name>
+        <value>pulsar</value>
+    </property>
+    
+    <property>
+        <name>io.file.buffer.size</name>
+        <value>4096</value>
+    </property>
+    
+    <property>
+        <name>io.seqfile.compress.blocksize</name>
+        <value>1000000</value>
+    </property>
+    <property>
+        <name>io.seqfile.compression.type</name>
+        <value>BLOCK</value>
+    </property>
+    
+    <property>
+        <name>io.map.index.interval</name>
+        <value>128</value>
+    </property>
+    
+```
+
+For more information about the configurations in `org.apache.hadoop.io.MapFile`, see [Filesystem Storage](http://hadoop.apache.org/).
+
+## Configuring offload to run automatically
+
+Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered. Setting a negative value for the threshold disables automatic offloading. Setting the threshold to 0 causes the broker to offload data as soon as it possibly can.
+
+```bash
+$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
+```
+
+> Automatic offload runs when a new segment is added to a topic log. If you set the threshold on a namespace, but few messages are being produced to the topic, offload will not run until the current segment is full.
+
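+The same threshold can be set from the Java admin client. A minimal sketch, assuming an existing `PulsarAdmin` instance named `admin`; the threshold is in bytes, and a negative value disables automatic offload:
+
+```java
+// 10 MiB threshold; once the topic stores more than this, offload is triggered
+admin.namespaces().setOffloadThreshold("my-tenant/my-namespace", 10L * 1024 * 1024);
+```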
+
+## Triggering offload manually
+
+Offloading can be manually triggered through a REST endpoint on the Pulsar broker. A CLI command is provided that calls this REST endpoint for you.
+
+When triggering offload, you must specify the maximum size, in bytes, of the backlog to be retained locally in BookKeeper. The offload mechanism offloads segments from the start of the topic backlog until this condition is met.
+
+```bash
+$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
+Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
+```
+
+The command that triggers an offload does not wait until the offload operation has completed. To check the status of the offload, use `offload-status`.
+
+```bash
+$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
+Offload is currently running
+```
+
+To wait for the offload to complete, add the `-w` flag.
+
+```bash
+$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
+Offload was a success
+```
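+
+Offload can also be triggered and monitored from the Java admin client. The following is a minimal sketch; the message ID marks how far into the backlog to offload (for example, a `MessageId` previously returned by `producer.send(...)`):
+
+```java
+import org.apache.pulsar.client.admin.LongRunningProcessStatus;
+import org.apache.pulsar.client.admin.PulsarAdmin;
+import org.apache.pulsar.client.api.MessageId;
+
+public class OffloadExample {
+    // Offloads the backlog of `topic` up to `messageId`, then reports the status.
+    static void offload(PulsarAdmin admin, String topic, MessageId messageId) throws Exception {
+        admin.topics().triggerOffload(topic, messageId);
+        LongRunningProcessStatus status = admin.topics().offloadStatus(topic);
+        System.out.println("Offload status: " + status.status);
+    }
+}
+```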
+
+If there is an error offloading, the error is propagated to the `offload-status` command.
+
+```bash
+$ bin/pulsar-admin topics offload-status persistent://public/default/topic1
+Error in offload
+null
+
+Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads.  Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhr [...]
+```
+
diff --git a/site2/website/versioned_docs/version-2.5.2/deploy-aws.md b/site2/website/versioned_docs/version-2.5.2/deploy-aws.md
new file mode 100644
index 0000000..0a4b3aa
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/deploy-aws.md
@@ -0,0 +1,224 @@
+---
+id: version-2.5.2-deploy-aws
+title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
+sidebar_label: Amazon Web Services
+original_id: deploy-aws
+---
+
+> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).
+
+One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.
+
+## Requirements and setup
+
+In order to install a Pulsar cluster on AWS using Terraform and Ansible, you need to prepare the following things:
+
+* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
+* Python and [pip](https://pip.pypa.io/en/stable/)
+* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts
+
+You also need to make sure that you are currently logged into your AWS account via the `aws` tool:
+
+```bash
+$ aws configure
+```
+
+## Installation
+
+You can install Ansible on Linux or macOS using pip.
+
+```bash
+$ pip install ansible
+```
+
+You can install Terraform using the instructions [here](https://www.terraform.io/intro/getting-started/install.html).
+
+You also need to have the Terraform and Ansible configuration for Pulsar locally on your machine. You can find them in the [GitHub repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch using Git commands:
+
+```bash
+$ git clone https://github.com/apache/pulsar
+$ cd pulsar/deployment/terraform-ansible/aws
+```
+
+## SSH setup
+
+> If you already have an SSH key and want to use it, you can skip the step of generating an SSH key and update the `private_key_file` setting
+> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
+>
+> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
+> follow the steps below:
+>
+> 1. Update `ansible.cfg` with the following values:
+>
+> ```shell
+> private_key_file=~/.ssh/pulsar_aws
+> ```
+>
+> 2. Update `terraform.tfvars` with the following values:
+>
+> ```shell
+> public_key_path=~/.ssh/pulsar_aws.pub
+> ```
+
+In order to create the necessary AWS resources using Terraform, you need to create an SSH key. Enter the following commands to create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:
+
+```bash
+$ ssh-keygen -t rsa
+```
+
+Do *not* enter a passphrase (hit **Enter** instead when the prompt appears). Enter the following command to verify that a key has been created:
+
+```bash
+$ ls ~/.ssh
+id_rsa               id_rsa.pub
+```
+
+## Create AWS resources using Terraform
+
+To start building AWS resources with Terraform, you need to install all Terraform dependencies. Enter the following command:
+
+```bash
+$ terraform init
+# This will create a .terraform folder
+```
+
+After that, you can apply the default Terraform configuration by entering this command:
+
+```bash
+$ terraform apply
+```
+
+Then you see the following prompt:
+
+```bash
+Do you want to perform these actions?
+  Terraform will perform the actions described above.
+  Only 'yes' will be accepted to approve.
+
+  Enter a value:
+```
+
+Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the configuration has been applied, you can see `Apply complete!` along with some other information, including the number of resources created.
+
+### Apply a non-default configuration
+
+You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available:
+
+Variable name | Description | Default
+:-------------|:------------|:-------
+`public_key_path` | The path of the public key that you have generated. | `~/.ssh/id_rsa.pub`
+`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
+`availability_zone` | The AWS availability zone in which the Pulsar cluster runs | `us-west-2a`
+`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that the cluster uses  | `ami-9fa343e7`
+`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
+`num_bookie_nodes` | The number of bookies that run in the cluster | 3
+`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
+`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
+`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
+`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, `broker` for the Pulsar brokers, and `proxy` for the Pulsar proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)
+
+### What is installed
+
+When you run the Ansible playbook, the following AWS resources are used:
+
+* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
+  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
+  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
+  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
+  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
+* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
+* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
+* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
+* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
+* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC
+
+All EC2 instances for the cluster run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.
+
+### Fetch your Pulsar connection URL
+
+When you apply the Terraform configuration by entering the command `terraform apply`, Terraform outputs a value for the `pulsar_service_url`. The value should look something like this:
+
+```
+pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650
+```
+
+You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):
+
+```bash
+$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value
+```
+
+### Destroy your cluster
+
+At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:
+
+```bash
+$ terraform destroy
+```
+
+## Setup Disks
+
+Before you run the Pulsar playbook, you need to mount the disks to the correct directories on the bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file after changing the `instance_types` in your Terraform config.
+
+To set up disks on bookie nodes, enter this command:
+
+```bash
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  setup-disk.yaml
+```
+
+After that, the disks are mounted under `/mnt/journal` as the journal disk and `/mnt/storage` as the ledger disk.
+Remember to enter this command only once. If you enter it again after you have run the Pulsar playbook, your disks might be erased, causing the bookies to fail to start up.
+
+## Run the Pulsar playbook
+
+Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible. To do so, enter this command:
+
+```bash
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  ../deploy-pulsar.yaml
+```
+
+If you have created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag in the following command:
+
+```bash
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  --private-key="~/.ssh/some-non-default-key" \
+  ../deploy-pulsar.yaml
+```
+
+## Access the cluster
+
+You can now access your running Pulsar cluster using the unique Pulsar connection URL for your cluster, which you can obtain following the instructions [above](#fetch-your-pulsar-connection-url).
+
+For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:
+
+```bash
+$ pip install pulsar-client
+```
+
+Now, open up the Python shell using the `python` command:
+
+```bash
+$ python
+```
+
+Once you are in the shell, enter the following command:
+
+```python
+>>> import pulsar
+>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
+# Make sure to use your connection URL
+>>> producer = client.create_producer('persistent://public/default/test-topic')
+>>> producer.send(('Hello world').encode('utf-8'))
+>>> client.close()
+```
+
+If all of these commands are successful, Pulsar clients can now use your cluster!
+
diff --git a/site2/website/versioned_docs/version-2.5.2/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.5.2/deploy-bare-metal-multi-cluster.md
new file mode 100644
index 0000000..d510038
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/deploy-bare-metal-multi-cluster.md
@@ -0,0 +1,426 @@
+---
+id: version-2.5.2-deploy-bare-metal-multi-cluster
+title: Deploying a multi-cluster on bare metal
+sidebar_label: Bare metal multi-cluster
+original_id: deploy-bare-metal-multi-cluster
+---
+
+> ### Tips
+>
+> 1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
+> Pulsar or using it in a startup or on a single team, a single cluster is most likely the best option. For instructions on deploying a single cluster,
+> see the guide [here](deploy-bare-metal.md).
+>
+> 2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download `apache-pulsar-io-connectors`
+> package and install `apache-pulsar-io-connectors` under `connectors` directory in the pulsar directory on every broker node or on every function-worker node if you
+> run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
+>
+> 3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download `apache-pulsar-offloaders`
+> package and install `apache-pulsar-offloaders` under `offloaders` directory in the pulsar directory on every broker node. For more details of how to configure
+> this feature, you can refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).
+
+A Pulsar *instance* consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md). Deploying a multi-cluster Pulsar instance involves the following basic steps:
+
+* Deploying two separate [ZooKeeper](#deploy-zookeeper) quorums: a [local](#deploy-local-zookeeper) quorum for each cluster in the instance and a [configuration store](#configuration-store) quorum for instance-wide tasks
+* Initializing [cluster metadata](#cluster-metadata-initialization) for each cluster
+* Deploying a [BookKeeper cluster](#deploy-bookkeeper) of bookies in each Pulsar cluster
+* Deploying [brokers](#deploy-brokers) in each Pulsar cluster
+
+If you want to deploy a single Pulsar cluster, see [Clusters and Brokers](getting-started-standalone.md#start-the-cluster).
+
+> #### Run Pulsar locally or on Kubernetes?
+> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes on [Google Kubernetes Engine](deploy-kubernetes#pulsar-on-google-kubernetes-engine) and on [Amazon Web Services](deploy-kubernetes#pulsar-on-amazon-web-services).
+
+## System requirement
+Pulsar is currently available for **MacOS** and **Linux**. In order to use Pulsar, you need to install Java 8 from [Oracle download center](http://www.oracle.com/).
+
+## Install Pulsar
+
+To get started running Pulsar, download a binary tarball release in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-{{pulsar:version}}/apache-pulsar-{{pulsar:version}}-bin.tar.gz' -O apache-pulsar-{{pulsar:version}}-bin.tar.gz
+  ```
+
+Once you download the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+$ tar xvfz apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+## What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md)
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses 
+`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase
+
+The following directories are created once you begin running Pulsar:
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory that ZooKeeper and BookKeeper use
+`instances` | Artifacts created for [Pulsar Functions](functions-overview.md)
+`logs` | Logs that the installation creates
+
+
+## Deploy ZooKeeper
+
+Each Pulsar instance relies on two separate ZooKeeper quorums.
+
+* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
+* [Configuration Store](#deploy-the-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum.
+
+
+### Deploy local ZooKeeper
+
+ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.
+
+You need to stand up one local ZooKeeper cluster *per Pulsar cluster* for deploying a Pulsar instance. 
+
+To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+On each host, you need to specify the ID of the node in its `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
+
+```shell
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+```
+
+On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.
+
+Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+$ bin/pulsar-daemon start zookeeper
+```
+
+### Deploy the configuration store 
+
+The ZooKeeper cluster that is configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
+
+If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you should stand up a separate ZooKeeper cluster for configuration tasks.
+
+#### Single-cluster Pulsar instance
+
+If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.
+
+To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#deploy-local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+```
+
+As before, create the `myid` files for each server in `data/global-zookeeper/myid`.
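+
+For example, a minimal sketch for the first server (assuming the default data directory layout) looks like this:
+
+```shell
+# On zk1.us-west.example.com: create the configuration store data
+# directory and assign this node ID 1
+$ mkdir -p data/global-zookeeper
+$ echo 1 > data/global-zookeeper/myid
+```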
+
+#### Multi-cluster Pulsar instance
+
+When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
+
+The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.
+
+Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
+
+For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZK servers named as follows:
+
+```
+zk[1-3].${CLUSTER}.example.com
+```
+
+In this scenario, you pick the quorum participants from a few clusters and let all the others run as ZK observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
+
+This method guarantees that writes to the configuration store remain possible even if one of these regions is unreachable.
+
+The ZK configuration on all the servers looks like this:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+server.4=zk1.us-central.example.com:2185:2186
+server.5=zk2.us-central.example.com:2185:2186
+server.6=zk3.us-central.example.com:2185:2186:observer
+server.7=zk1.us-east.example.com:2185:2186
+server.8=zk2.us-east.example.com:2185:2186
+server.9=zk3.us-east.example.com:2185:2186:observer
+server.10=zk1.eu-central.example.com:2185:2186:observer
+server.11=zk2.eu-central.example.com:2185:2186:observer
+server.12=zk3.eu-central.example.com:2185:2186:observer
+server.13=zk1.ap-south.example.com:2185:2186:observer
+server.14=zk2.ap-south.example.com:2185:2186:observer
+server.15=zk3.ap-south.example.com:2185:2186:observer
+```
+
+Additionally, ZK observers need to have the following parameters:
+
+```properties
+peerType=observer
+```
+
+##### Start the service
+
+Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon)
+
+```shell
+$ bin/pulsar-daemon start configuration-store
+```
+
+## Cluster metadata initialization
+
+Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**
+
+You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The following is an example:
+
+```shell
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster us-west \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2184 \
+  --web-service-url http://pulsar.us-west.example.com:8080/ \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
+```
+
+As you can see from the example above, you need to specify the following:
+
+* The name of the cluster
+* The local ZooKeeper connection string for the cluster
+* The configuration store connection string for the entire instance
+* The web service URL for the cluster
+* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
+
+If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.
+
+Make sure to run `initialize-cluster-metadata` for each cluster in your instance.
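+
+For example, for a hypothetical `us-east` cluster in the same instance (hostnames are illustrative), you run the same command against that cluster's local ZooKeeper while pointing at the shared configuration store:
+
+```shell
+# The --configuration-store value stays the same for every cluster in the instance
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster us-east \
+  --zookeeper zk1.us-east.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2184 \
+  --web-service-url http://pulsar.us-east.example.com:8080/ \
+  --broker-service-url pulsar://pulsar.us-east.example.com:6650/
+```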
+
+## Deploy BookKeeper
+
+BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
+
+Each Pulsar cluster needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
+
+### Configure bookies
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the local ZooKeeper quorum of the Pulsar cluster.
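+
+For example, continuing with the `us-west` hostnames used above (adjust them to your own deployment), you can append the setting from the command line:
+
+```shell
+# Point each bookie at the local ZooKeeper quorum of the Pulsar cluster
+# (or edit the existing zkServers line in conf/bookkeeper.conf directly)
+$ echo 'zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181' >> conf/bookkeeper.conf
+```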
+
+### Start bookies
+
+You can start a bookie in two ways: in the foreground or as a background daemon.
+
+To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start bookie
+```
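+
+To start a bookie in the foreground, you can use the BookKeeper CLI directly:
+
+```shell
+$ bin/bookkeeper bookie
+```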
+
+You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
+```shell
+$ bin/bookkeeper shell bookiesanity
+```
+
+This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger.
+
+After you have started all bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all bookies in the cluster are running.
+
+```bash
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+```
+
+Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, a suitable hardware configuration is essential. The following are key dimensions of bookie hardware capacity:
+
+* Disk I/O capacity read/write
+* Storage capacity
+
+Message entries written to bookies are always synced to disk before an acknowledgement returns to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:
+
+* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
+* A **ledger storage device**, where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back from disk only when consumers fall behind. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller (see the sketch below).
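+
+To point the journal and ledger storage at separate devices, you can set the corresponding parameters in `conf/bookkeeper.conf`. A minimal sketch, assuming the devices are mounted at the illustrative paths below:
+
+```shell
+# Journal on a small, fast SSD; ledger storage on larger HDDs behind a RAID controller
+# (or edit the existing journalDirectory/ledgerDirectories lines directly)
+$ echo 'journalDirectory=/mnt/journal/bookkeeper/journal' >> conf/bookkeeper.conf
+$ echo 'ledgerDirectories=/mnt/ledgers/bookkeeper/ledgers' >> conf/bookkeeper.conf
+```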
+
+
+
+## Deploy brokers
+
+Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.
+
+### Broker configuration
+
+You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
+
+The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster).
+
+You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use non-default ports).
+
+The following is an example configuration:
+
+```properties
+# Local ZooKeeper servers
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Configuration store quorum connection string.
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+
+clusterName=us-west
+
+# Broker data port
+brokerServicePort=6650
+
+# Broker data port for TLS
+brokerServicePortTls=6651
+
+# Port to use to serve HTTP requests
+webServicePort=8080
+
+# Port to use to serve HTTPS requests
+webServicePortTls=8443
+```
+
+### Broker hardware
+
+Pulsar brokers do not require any special hardware, since they do not use the local disk. Choose fast CPUs and a 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) so that the software can take full advantage of the hardware.
+
+### Start the broker service
+
+You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+$ bin/pulsar-daemon start broker
+```
+
+You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):
+
+```shell
+$ bin/pulsar broker
+```
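+
+Once the brokers are up, you can check that they have registered themselves with the cluster (this assumes `conf/client.conf` on the machine points at the cluster, as described in the admin client section below):
+
+```shell
+# List the active brokers in the us-west cluster
+$ bin/pulsar-admin brokers list us-west
+```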
+
+## Service discovery
+
+[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions [immediately below](#service-discovery-setup).
+
+You can also use your own service discovery system if you want. If you use your own system, you need to satisfy just one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
+
+> #### Service discovery already provided by many scheduling systems
+> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.
+
+
+### Service discovery setup
+
+The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookup via HTTP as well as the [binary protocol](developing-binary-protocol.md) of Pulsar.
+
+To set up the built-in service discovery of Pulsar, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery) configuration file. Set the [`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers) parameter to the ZooKeeper quorum connection string of the cluster and the [`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers) setting to the [configuration store](reference-terminology.md#configuration-store) quorum connection string.
+
+```properties
+# Zookeeper quorum connection string
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Global configuration store connection string
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+```
+
+To start the discovery service:
+
+```shell
+$ bin/pulsar-daemon start discovery
+```
+
+## Admin client and verification
+
+At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.
+
+The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:
+
+```properties
+serviceUrl=http://pulsar.us-west.example.com:8080/
+```
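+
+With that in place, a quick sanity check is to list the clusters registered in the instance; you should see output similar to the following:
+
+```shell
+$ bin/pulsar-admin clusters list
+us-west
+```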
+
+## Provision new tenants
+
+Pulsar is built as a fundamentally multi-tenant system.
+
+
+Before a new tenant can use the system, you need to create it. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:
+
+
+```shell
+$ bin/pulsar-admin tenants create test-tenant \
+  --allowed-clusters us-west \
+  --admin-roles test-admin-role
+```
+
+In this command, users who identify with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.
+
+Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.
+
+
+The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.
+
+```shell
+$ bin/pulsar-admin namespaces create test-tenant/ns1
+```
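+
+If you later want the namespace to be available in (and replicated across) several clusters of the instance, you can assign the clusters explicitly. A sketch, where `us-east` is an illustrative second cluster:
+
+```shell
+$ bin/pulsar-admin namespaces set-clusters test-tenant/ns1 --clusters us-west,us-east
+```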
+
+### Test producer and consumer
+
+
+Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool.
+
+
+You can use a topic in the namespace that you have just created. Topics are created automatically the first time a producer or a consumer tries to use them.
+
+The topic name in this case could be:
+
+```http
+persistent://test-tenant/ns1/my-topic
+```
+
+Start a consumer that creates a subscription on the topic and waits for messages:
+
+```shell
+$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic
+```
+
+Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds:
+
+```shell
+$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic
+```
+
+To report the topic stats:
+
+```shell
+$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic
+```
diff --git a/site2/website/versioned_docs/version-2.5.2/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.5.2/deploy-bare-metal.md
new file mode 100644
index 0000000..3db040a
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/deploy-bare-metal.md
@@ -0,0 +1,461 @@
+---
+id: version-2.5.2-deploy-bare-metal
+title: Deploy a cluster on bare metal
+sidebar_label: Bare metal
+original_id: deploy-bare-metal
+---
+
+
+> ### Tips
+>
+> 1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
+> Pulsar or using Pulsar in a startup or on a single team, opt for a single cluster. If you do need to run a multi-cluster Pulsar instance,
+> see the guide [here](deploy-bare-metal-multi-cluster.md).
+>
+> 2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
+> package and install it under the `connectors` directory in the Pulsar directory on every broker node, or on every function-worker node if you
+> run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
+>
+> 3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
+> package and install it under the `offloaders` directory in the Pulsar directory on every broker node. For details on how to configure
+> this feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).
+
+Deploying a Pulsar cluster involves doing the following (in order):
+
+* Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional)
+* Initialize [cluster metadata](#initialize-cluster-metadata)
+* Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster
+* Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers)
+
+## Preparation
+
+### Requirements
+
+> If you already have an existing ZooKeeper cluster and want to reuse it, you do not need to prepare the machines
+> for running ZooKeeper.
+
+To run Pulsar on bare metal, we recommend the following:
+
+* At least 6 Linux machines or VMs
+  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
+  * 3 for running a Pulsar broker, and a [BookKeeper](https://bookkeeper.apache.org) bookie
+* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts
+
+> If you do not have enough machines, or want to try out Pulsar in cluster mode (and expand the cluster later),
+> you can deploy Pulsar on a single node, where ZooKeeper, the bookie, and the broker all run on the same machine.
+
+> If you do not have a DNS server, you can use a multi-host service URL instead (see the examples later in this guide).
+
+Each machine in your cluster needs to have [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or a higher Java version installed.
+
+The following is a diagram showing the basic setup:
+
+![alt-text](assets/pulsar-basic-setup.png)
+
+In this diagram, connecting clients need to be able to communicate with the Pulsar cluster using a single URL, in this case `pulsar-cluster.acme.com`, which abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.
+
+### Hardware considerations
+
+When you deploy a Pulsar cluster, keep in mind the following recommendations as you plan capacity.
+
+#### ZooKeeper
+
+For machines running ZooKeeper, lightweight machines or VMs are sufficient. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, *not* for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance is likely to suffice.
+
+#### Bookies and Brokers
+
+For machines running a bookie and a Pulsar broker, more powerful machines are required. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines you can use the following:
+
+* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
+* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)
+
+## Install the Pulsar binary package
+
+> You need to install the Pulsar binary package on *each machine in the cluster*, including machines running [ZooKeeper](#deploy-a-zookeeper-cluster) and [BookKeeper](#deploy-a-bookkeeper-cluster).
+
+To get started deploying a Pulsar cluster on bare metal, you need to download a binary tarball release in one of the following ways:
+
+* By clicking on the link below directly, which automatically triggers a download:
+  * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>
+* From the Pulsar [downloads page](pulsar:download_page_url)
+* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on [GitHub](https://github.com)
+* Using [wget](https://www.gnu.org/software/wget):
+
+```bash
+$ wget pulsar:binary_release_url
+```
+
+Once you download the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+$ tar xvzf apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+The untarred directory contains the following subdirectories:
+
+Directory | Contains
+:---------|:--------
+`bin` |[command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`data` | The data storage directory that ZooKeeper and BookKeeper use
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses
+`logs` | Logs that the installation creates
+
+## [Install Builtin Connectors (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional)
+
+> Since release `2.1.0-incubating`, Pulsar provides a separate binary distribution that contains all the `builtin` connectors.
+> If you want to enable those `builtin` connectors, follow the instructions below; otherwise you can
+> skip this section for now.
+
+To get started using builtin connectors, you need to download the connectors tarball release on every broker node in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:connector_release_url" download>Pulsar IO Connectors {{pulsar:version}} release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:connector_release_url/{connector}-{{pulsar:version}}.nar
+  ```
+
+Once you download the nar file, copy it to the `connectors` directory in the Pulsar directory.
+For example, if you download the connector file `pulsar-io-aerospike-{{pulsar:version}}.nar`:
+
+```bash
+$ mkdir connectors
+$ mv pulsar-io-aerospike-{{pulsar:version}}.nar connectors
+
+$ ls connectors
+pulsar-io-aerospike-{{pulsar:version}}.nar
+...
+```
+
+## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional)
+
+> Since release `2.2.0`, Pulsar provides a separate binary distribution that contains the tiered storage offloaders.
+> If you want to enable the tiered storage feature, follow the instructions below; otherwise you can
+> skip this section for now.
+
+To get started using tiered storage offloaders, you need to download the offloaders tarball release on every broker node in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders {{pulsar:version}} release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:offloader_release_url
+  ```
+
+Once you download the tarball, untar the offloaders package in the Pulsar directory and move the offloaders into a directory named `offloaders`:
+
+```bash
+$ tar xvfz apache-pulsar-offloaders-{{pulsar:version}}-bin.tar.gz
+
+// you can find a directory named `apache-pulsar-offloaders-{{pulsar:version}}` in the pulsar directory
+// then copy the offloaders
+
+$ mv apache-pulsar-offloaders-{{pulsar:version}}/offloaders offloaders
+
+$ ls offloaders
+tiered-storage-jcloud-{{pulsar:version}}.nar
+```
+
+For more details on how to configure the tiered storage feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).
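+
+As a rough sketch, enabling an offloader typically comes down to a few settings in `conf/broker.conf`; the driver, bucket, and region below are illustrative, and the cookbook covers the full set of options:
+
+```shell
+# Illustrative S3 tiered-storage settings appended to the broker configuration
+$ echo 'managedLedgerOffloadDriver=aws-s3' >> conf/broker.conf
+$ echo 's3ManagedLedgerOffloadBucket=my-pulsar-offload-bucket' >> conf/broker.conf
+$ echo 's3ManagedLedgerOffloadRegion=us-west-2' >> conf/broker.conf
+```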
+
+
+## Deploy a ZooKeeper cluster
+
+> If you already have an existing ZooKeeper cluster and want to use it, you can skip this section.
+
+[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar cluster, you need to deploy ZooKeeper first (before all other components). We recommend deploying a 3-node ZooKeeper cluster. Pulsar does not make heavy use of ZooKeeper, so lightweight machines or VMs should suffice for running it.
+
+To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory that you create [above](#install-the-pulsar-binary-package)). The following is an example:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+> If you have only one machine to deploy Pulsar, you just need to add one server entry in the configuration file.
+
+On each host, you need to specify the node's ID in that node's `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this:
+
+```bash
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+```
+
+On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on.
+
+Once you add each server to the `zookeeper.conf` configuration and have the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start zookeeper
+```
+
+> If you plan to deploy ZooKeeper and a bookie on the same node, you
+> need to start ZooKeeper with a different stats port.
+
+Start ZooKeeper with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool like this:
+
+```bash
+$ PULSAR_EXTRA_OPTS="-Dstats_server_port=8001" bin/pulsar-daemon start zookeeper
+```
+
+## Initialize cluster metadata
+
+Once you deploy ZooKeeper for your cluster, you need to write some metadata to ZooKeeper for each cluster in your instance. You only need to write this metadata **once**.
+
+You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your ZooKeeper cluster. The following is an example:
+
+```shell
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster pulsar-cluster-1 \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2181 \
+  --web-service-url http://pulsar.us-west.example.com:8080 \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+As you can see from the example above, you need to specify the following:
+
+Flag | Description
+:----|:-----------
+`--cluster` | A name for the cluster
+`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (we do not recommend using a different port).
+`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (we do not recommend using a different port).
+`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should not use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (we do not recommend using a different port).
+`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you also need to specify a TLS broker service URL for the brokers in the cluster. The default port is 6651 (we do not recommend using a different port).
+
+
+> If you do not have a DNS server, you can use a multi-host service URL with the following settings:
+>
+> ```properties
+> --web-service-url http://host1:8080,host2:8080,host3:8080 \
+> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
+> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
+> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
+> ```
+
+## Deploy a BookKeeper cluster
+
+[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use Pulsar. We recommend running a **3-bookie BookKeeper cluster**.
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the ZooKeeper cluster. The following is an example:
+
+```properties
+zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+```
+
+Once you appropriately modify the `zkServers` parameter, you can provide any other configuration modifications you need. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper), although consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide might be a better choice.
+
+> ##### NOTES
+>
+> Since release 2.1.0, Pulsar supports [stateful functions](functions-develop.md#state-storage) for Pulsar Functions. If you want to enable that feature,
+> you need to enable the table service on BookKeeper by adding the following setting to the `conf/bookkeeper.conf` file.
+>
+> ```conf
+> extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
+> ```
+
+Once you apply the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
+
+To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start bookie
+```
+
+To start the bookie in the foreground:
+
+```bash
+$ bin/bookkeeper bookie
+```
+
+You can verify that a bookie works properly by running the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#shell) on it:
+
+```bash
+$ bin/bookkeeper shell bookiesanity
+```
+
+This command creates an ephemeral BookKeeper ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
+
+After you start all the bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all the bookies in the cluster are up and running.
+
+```bash
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+```
+
+This command creates a `num-bookies` sized ledger on the cluster, writes a few entries, and finally deletes the ledger.
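+
+For example, for the 3-bookie cluster recommended in this guide, you might run:
+
+```shell
+$ bin/bookkeeper shell simpletest --ensemble 3 --writeQuorum 3 --ackQuorum 3 --numEntries 100
+```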
+
+
+## Deploy Pulsar brokers
+
+Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide the administrative interface of Pulsar. A good choice is to run **3 brokers**, one for each machine that already runs a BookKeeper bookie.
+
+### Configure Brokers
+
+The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters. In this case, since you only have one cluster and no separate configuration store, the `configurationStoreServers` value points to the same ZooKeeper quorum as `zookeeperServers`.
+
+```properties
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+```
+
+You also need to specify the cluster name (matching the name that you provide when you [initialize the metadata of the cluster](#initialize-cluster-metadata)):
+
+```properties
+clusterName=pulsar-cluster-1
+```
+
+In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use a different port from default):
+
+```properties
+brokerServicePort=6650
+brokerServicePortTls=6651
+webServicePort=8080
+webServicePortTls=8443
+```
+
+> If you deploy Pulsar in a one-node cluster, you should update the replication settings in `conf/broker.conf` to `1`
+>
+> ```properties
+> # Number of bookies to use when creating a ledger
+> managedLedgerDefaultEnsembleSize=1
+>
+> # Number of copies to store for each message
+> managedLedgerDefaultWriteQuorum=1
+> 
+> # Number of guaranteed copies (acks to wait before write is complete)
+> managedLedgerDefaultAckQuorum=1
+> ```
+
+### Enable Pulsar Functions (optional)
+
+If you want to enable [Pulsar Functions](functions-overview.md), follow these instructions:
+
+1. Edit `conf/broker.conf` to enable the functions worker by setting `functionsWorkerEnabled` to `true`.
+
+    ```conf
+    functionsWorkerEnabled=true
+    ```
+
+2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provide when you [initialize the metadata of the cluster](#initialize-cluster-metadata). 
+
+    ```conf
+    pulsarFunctionsCluster: pulsar-cluster-1
+    ```
+
+To learn more about options for deploying a functions worker, check out [Deploy and manage functions worker](functions-worker.md).
+
+### Start Brokers
+
+You can then provide any other configuration changes that you want in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, you can start brokers either in the foreground or in the background, using nohup.
+
+You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:
+
+```bash
+$ bin/pulsar broker
+```
+
+You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start broker
+```
+
+Once you successfully start up all the brokers that you intend to use, your Pulsar cluster should be ready to go!
+
+## Connect to the running cluster
+
+Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provide a simple way to make sure that your cluster runs properly.
+
+To use the `pulsar-client` tool, first modify the client configuration file [`conf/client.conf`](reference-configuration.md#client) in your binary package. You need to change the values of `webServiceUrl` and `brokerServiceUrl`, replacing `localhost` (the default) with the DNS name that you assigned to your broker/bookie hosts. The following is an example:
+
+```properties
+webServiceUrl=http://us-west.example.com:8080
+brokerServiceUrl=pulsar://us-west.example.com:6650
+```
+
+> If you do not have a DNS server, you can specify multiple hosts in the service URLs, as below:
+>
+> ```properties
+> webServiceUrl=http://host1:8080,host2:8080,host3:8080
+> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
+> ```
+
+Once you do that, you can publish a message to a Pulsar topic:
+
+```bash
+$ bin/pulsar-client produce \
+  persistent://public/default/test \
+  -n 1 \
+  -m "Hello Pulsar"
+```
+
+> You may need to use a different cluster name in the topic if you specify a cluster name different from `pulsar-cluster-1`.
+
+This command publishes a single message to the Pulsar topic. In addition, you can subscribe to the topic in a different terminal before publishing messages, as below:
+
+```bash
+$ bin/pulsar-client consume \
+  persistent://public/default/test \
+  -n 100 \
+  -s "consumer-test" \
+  -t "Exclusive"
+```
+
+Once you successfully publish the message above to the topic, you should see it in the standard output:
+
+```bash
+----- got message -----
+Hello Pulsar
+```
+
+## Run Functions
+
+> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can also try them out now.
+
+Create an `ExclamationFunction` named `exclamation`:
+
+```bash
+bin/pulsar-admin functions create \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+```
+
+Check if the function runs as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.
+
+```bash
+bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
+```
+
+You can see the output as below:
+
+```shell
+hello world!
+```
diff --git a/site2/website/versioned_docs/version-2.5.2/deploy-dcos.md b/site2/website/versioned_docs/version-2.5.2/deploy-dcos.md
new file mode 100644
index 0000000..f23b62d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/deploy-dcos.md
@@ -0,0 +1,183 @@
+---
+id: version-2.5.2-deploy-dcos
+title: Deploy Pulsar on DC/OS
+sidebar_label: DC/OS
+original_id: deploy-dcos
+---
+
+> ### Tips
+>
+> If you want to enable all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you can choose to use `apachepulsar/pulsar-all` image instead of
+> `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+[DC/OS](https://dcos.io/) (the <strong>D</strong>ata<strong>C</strong>enter <strong>O</strong>perating <strong>S</strong>ystem) is a distributed operating system for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).
+
+Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.
+
+## Prerequisites
+
+In order to run Pulsar on DC/OS, you need the following:
+
+* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
+* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
+* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
+* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.
+
+  ```bash
+  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
+  ```
+
+Each node in the DC/OS-managed Mesos cluster must have at least:
+
+* 4 CPU
+* 4 GB of memory
+* 60 GB of total persistent disk
+
+Alternatively, you can change the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.
+
+## Deploy Pulsar using the DC/OS command interface
+
+You can deploy Pulsar on DC/OS using this command:
+
+```bash
+$ dcos marathon group add PulsarGroups.json
+```
+
+This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster:
+
+* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
+* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
+* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance
+
+
+> When you run DC/OS, a ZooKeeper cluster already runs at `master.mesos:2181`, thus you do not have to install or start up ZooKeeper separately.
+
+After executing the `dcos` command above, click on the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying.
+
+![DC/OS command executed](assets/dcos_command_execute.png)
+
+![DC/OS command executed2](assets/dcos_command_execute2.png)
+
+## The BookKeeper group
+
+To monitor the status of the BookKeeper cluster deployment, click on the **bookkeeper** group in the parent **pulsar** group.
+
+![DC/OS bookkeeper status](assets/dcos_bookkeeper_status.png)
+
+At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as green, which means that the bookies have been deployed successfully and are now running.
+ 
+![DC/OS bookkeeper running](assets/dcos_bookkeeper_run.png)
+ 
+You can also click into each bookie instance to get more detailed information, such as the bookie running log.
+
+![DC/OS bookie log](assets/dcos_bookie_log.png)
+
+To display information about BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.
+
+![DC/OS bookkeeper in zk](assets/dcos_bookkeeper_in_zookeeper.png)
+
+## The Pulsar broker Group
+
+Similar to the BookKeeper group above, click into the **brokers** group to check the status of the Pulsar brokers.
+
+![DC/OS broker status](assets/dcos_broker_status.png)
+
+![DC/OS broker running](assets/dcos_broker_run.png)
+
+You can also click into each broker instance to get more detailed information, such as the broker running log.
+
+![DC/OS broker log](assets/dcos_broker_log.png)
+
+Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.
+
+![DC/OS broker in zk](assets/dcos_broker_in_zookeeper.png)
+
+## Monitor Group
+
+The **monitor** group consists of Prometheus and Grafana.
+
+![DC/OS monitor status](assets/dcos_monitor_status.png)
+
+### Prometheus
+
+Click into the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.
+
+![DC/OS prom endpoint](assets/dcos_prom_endpoint.png)
+
+If you click that endpoint, you can see the Prometheus dashboard. The [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL displays all the bookies and brokers.
+
+![DC/OS prom targets](assets/dcos_prom_targets.png)
+
+### Grafana
+
+Click into `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
+ 
+![DC/OS grafana endpoint](assets/dcos_grafana_endpoint.png)
+
+If you click that endpoint, you can access the Grafana dashboard.
+
+![DC/OS grafana targets](assets/dcos_grafana_dashboard.png)
+
+## Run a simple Pulsar consumer and producer on DC/OS
+
+Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.
+
+### Download and prepare the Pulsar Java tutorial
+
+You can clone a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file of the repo).
+
+```bash
+$ git clone https://github.com/streamlio/pulsar-java-tutorial
+```
+
+Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java).
+The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker; you can also use the client agent's IP address instead.
+
+Now, change the message number from 10 to 10000000 in the main method of [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) so that it produces more messages.
+
+Now compile the project code using the command below:
+
+```bash
+$ mvn clean package
+```
+
+### Run the consumer and producer
+
+Execute this command to run the consumer:
+
+```bash
+$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
+```
+
+Execute this command to run the producer:
+
+```bash
+$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
+```
+
+You can see the producer producing messages and the consumer consuming messages through the DC/OS GUI.
+
+![DC/OS pulsar producer](assets/dcos_producer.png)
+
+![DC/OS pulsar consumer](assets/dcos_consumer.png)
+
+### View Grafana metric output
+
+While the producer and consumer run, you can access running metrics information from Grafana.
+
+![DC/OS pulsar dashboard](assets/dcos_metrics.png)
+
+
+## Uninstall Pulsar
+
+You can shut down and uninstall the `pulsar` application from DC/OS at any time in the following two ways:
+
+1. Using the DC/OS GUI, you can choose **Delete** at the right end of the Pulsar group.
+
+    ![DC/OS pulsar uninstall](assets/dcos_uninstall.png)
+
+2. You can use the following command:
+
+    ```bash
+    $ dcos marathon group remove /pulsar
+    ```
diff --git a/site2/website/versioned_docs/version-2.5.2/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.5.2/deploy-kubernetes.md
new file mode 100644
index 0000000..12112d1
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/deploy-kubernetes.md
@@ -0,0 +1,394 @@
+---
+id: version-2.5.2-deploy-kubernetes
+title: Deploying Pulsar on Kubernetes
+sidebar_label: Kubernetes
+original_id: deploy-kubernetes
+---
+
+> ### Tips
+>
+> If you want to enable all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you can choose to use `apachepulsar/pulsar-all` image instead of
+> `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+You can easily deploy Pulsar in [Kubernetes](https://kubernetes.io/) clusters, either in managed clusters on [Google Kubernetes Engine](#pulsar-on-google-kubernetes-engine) or [Amazon Web Services](https://aws.amazon.com/) or in [custom clusters](#pulsar-on-a-custom-kubernetes-cluster).
+
+The deployment method shown in this guide relies on [YAML](http://yaml.org/) definitions for Kubernetes [resources](https://kubernetes.io/docs/reference/). The {@inject: github:`deployment/kubernetes`:/deployment/kubernetes} subdirectory of the [Pulsar package](pulsar:download_page_url) holds resource definitions for:
+
+* A two-bookie BookKeeper cluster
+* A three-node ZooKeeper cluster
+* A three-broker Pulsar cluster
+* A monitoring stack consisting of [Prometheus](https://prometheus.io/), [Grafana](https://grafana.com), and the [Pulsar dashboard](administration-dashboard.md)
+* A [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/) from which you can run administrative commands using the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool
+
+## Setup
+
+To get started, install a source package from the [downloads page](pulsar:download_page_url).
+
+> Note that the Pulsar binary package does *not* contain the necessary YAML resources to deploy Pulsar on Kubernetes.
+
+If you want to change the number of bookies, brokers, or ZooKeeper nodes in your Pulsar cluster, modify the `replicas` parameter in the `spec` section of the appropriate [`Deployment`](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) or [`StatefulSet`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) resource.
+
+## Pulsar on Google Kubernetes Engine
+
+[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine) (GKE) automates the creation and management of Kubernetes clusters in [Google Compute Engine](https://cloud.google.com/compute/) (GCE).
+
+### Prerequisites
+
+To get started, you need:
+
+* A Google Cloud Platform account, which you can sign up for at [cloud.google.com](https://cloud.google.com)
+* An existing Cloud Platform project
+* The [Google Cloud SDK](https://cloud.google.com/sdk/downloads) (in particular the [`gcloud`](https://cloud.google.com/sdk/gcloud/) and [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/#download-as-part-of-the-google-cloud-sdk) tools).
+
+### Create a new Kubernetes cluster
+
+You can create a new GKE cluster by entering the [`container clusters create`](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create) command for `gcloud`. This command enables you to specify the number of nodes in the cluster, the machine types of those nodes, and so on.
+
+The following example creates a new GKE cluster for Kubernetes version [1.6.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v164) in the [us-central1-a](https://cloud.google.com/compute/docs/regions-zones/regions-zones#available) zone. The cluster is named `pulsar-gke-cluster` and consists of three VMs, each using two locally attached SSDs and running on [n1-standard-8](https://cloud.google.com/compute/docs/machine-types) machines. [Bookie](reference-terminology.md#bookie) instances use these SSDs, one for the BookKeeper journal and the other for storing message data.
+
+```bash
+$ gcloud container clusters create pulsar-gke-cluster \
+  --zone=us-central1-a \
+  --machine-type=n1-standard-8 \
+  --num-nodes=3 \
+  --local-ssd-count=2
+```
+
+By default, bookies run on all the machines that have locally attached SSD disks. In this example, all of those machines have two SSDs, but you can add different types of machines to the cluster later. You can control which machines host bookie servers using [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels).
+
+### Dashboard
+
+You can observe your cluster in the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) by downloading the credentials for your Kubernetes cluster and opening up a proxy to the cluster:
+
+```bash
+$ gcloud container clusters get-credentials pulsar-gke-cluster \
+  --zone=us-central1-a \
+  --project=your-project-name
+$ kubectl proxy
+```
+
+By default, the proxy is opened on port 8001. Now you can navigate to [localhost:8001/ui](http://localhost:8001/ui) in your browser to access the dashboard. At first your GKE cluster is empty, but that changes as you begin deploying Pulsar components using `kubectl` [component by component](#deploying-pulsar-components),
+or using [`helm`](#deploying-pulsar-components-helm).
+
+## Pulsar on Amazon Web Services
+
+You can run Kubernetes on [Amazon Web Services](https://aws.amazon.com/) (AWS) in a variety of ways. A very simple way, [introduced recently](https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-kops/), involves using the [Kubernetes Operations](https://github.com/kubernetes/kops) (kops) tool.
+
+You can find detailed instructions for setting up a Kubernetes cluster on AWS from [here](https://github.com/kubernetes/kops/blob/master/docs/aws.md).
+
+When you create a cluster using those instructions, your `kubectl` config in `~/.kube/config` (on macOS and Linux) is updated for you, so you probably do not need to change your configuration. Nonetheless, you can ensure that `kubectl` can interact with your cluster by listing the nodes in the cluster:
+
+```bash
+$ kubectl get nodes
+```
+
+If `kubectl` works with your cluster, you can proceed to deploy Pulsar components using `kubectl` [component by component](#deploying-pulsar-components),
+or using [`helm`](#deploying-pulsar-components-helm).
+
+## Pulsar on a custom Kubernetes cluster
+
+You can deploy Pulsar on a custom, non-GKE Kubernetes cluster as well. You can find detailed documentation on how to choose a Kubernetes installation method that suits your needs in the [Picking the Right Solution](https://kubernetes.io/docs/setup/pick-right-solution) guide in the Kubernetes docs.
+
+The easiest way to run a Kubernetes cluster is to do so locally. To install a mini local cluster for testing purposes and running in local VMs, you can either:
+
+1. Use [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/) to run a single-node Kubernetes cluster.
+1. Create a local cluster running on multiple VMs on the same machine.
+
+### Minikube
+
+1. [Install and configure minikube](https://github.com/kubernetes/minikube#installation) with
+   a [VM driver](https://github.com/kubernetes/minikube#requirements), for example, `kvm2` on Linux or `hyperkit` or `VirtualBox` on macOS.
+1. Create a Kubernetes cluster on Minikube.
+    ```shell
+    minikube start --memory=8192 --cpus=4 \
+        --kubernetes-version=<version>
+    ```
+    `<version>` can be any [Kubernetes version supported by your minikube installation](https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/). Example: `v1.16.1`
+1. Set `kubectl` to use Minikube.
+    ```shell
+    kubectl config use-context minikube
+    ```
+
+In order to use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/)
+with a local Kubernetes cluster on Minikube, enter the command below:
+
+```bash
+$ minikube dashboard
+```
+
+The command automatically triggers opening a webpage in your browser. At first your local cluster is empty, but that changes as you begin deploying Pulsar components using `kubectl` [component by component](#deploying-pulsar-components),
+or using [`helm`](#deploying-pulsar-components-helm).
+
+### Multiple VMs
+
+For the second option, follow the [instructions](https://github.com/pires/kubernetes-vagrant-coreos-cluster) for running Kubernetes using [CoreOS](https://coreos.com/) on [Vagrant](https://www.vagrantup.com/). You can follow an abridged version of those instructions from here.
+
+
+First, make sure you have [Vagrant](https://www.vagrantup.com/downloads.html) and [VirtualBox](https://www.virtualbox.org/wiki/Downloads) installed. Then clone the repo and start up the cluster:
+
+```bash
+$ git clone https://github.com/pires/kubernetes-vagrant-coreos-cluster
+$ cd kubernetes-vagrant-coreos-cluster
+
+# Start a three-VM cluster
+$ NODES=3 USE_KUBE_UI=true vagrant up
+```
+
+Create SSD disk mount points on the VMs using this script:
+
+```bash
+$ for vm in node-01 node-02 node-03; do
+    NODES=3 vagrant ssh $vm -c "sudo mkdir -p /mnt/disks/ssd0"
+    NODES=3 vagrant ssh $vm -c "sudo mkdir -p /mnt/disks/ssd1"
+  done
+```
+
+Bookies expect two logical devices to be available for mounting: one for the [journal](concepts-architecture-overview.md#journal-storage) and one for persistent message storage. In this VM exercise, you create two directories on each VM to stand in for those devices.
+
+Once the cluster is up, you can verify that `kubectl` can access it:
+
+```bash
+$ kubectl get nodes
+NAME           STATUS                     AGE       VERSION
+172.17.8.101   Ready,SchedulingDisabled   10m       v1.6.4
+172.17.8.102   Ready                      8m        v1.6.4
+172.17.8.103   Ready                      6m        v1.6.4
+172.17.8.104   Ready                      4m        v1.6.4
+```
+
+In order to use the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) with your local Kubernetes cluster, first, you need to use `kubectl` to create a proxy to the cluster:
+
+```bash
+$ kubectl proxy
+```
+
+Now you can access the web interface at [localhost:8001/ui](http://localhost:8001/ui). At first your local cluster is empty, but that changes as you begin deploying Pulsar components using `kubectl` [component by component](#deploying-pulsar-components), or using [`helm`](#deploying-pulsar-components-helm).
+
+## Deploy Pulsar components
+
+Now that you have set up a Kubernetes cluster, either on [Google Kubernetes Engine](#pulsar-on-google-kubernetes-engine) or on a [custom cluster](#pulsar-on-a-custom-kubernetes-cluster), you can begin deploying the components that make up Pulsar. You can find the YAML resource definitions for Pulsar components in the `kubernetes` folder of the [Pulsar source package](pulsar:download_page_url).
+
+In that package, you can find different sets of resource definitions for different environments.
+
+- `deployment/kubernetes/google-kubernetes-engine`: for Google Kubernetes Engine (GKE)
+- `deployment/kubernetes/aws`: for AWS
+- `deployment/kubernetes/generic`: for a custom Kubernetes cluster
+
+To begin, `cd` into the appropriate folder.
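+
+For example, if you have unpacked the source package into a directory called `pulsar` (a hypothetical path), targeting a custom cluster might look like this:
+
+```bash
+# Hypothetical path; adjust to wherever you unpacked the Pulsar source package
+cd pulsar/deployment/kubernetes/generic
+```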
+
+### Deploy ZooKeeper
+
+You *must* deploy ZooKeeper as the first Pulsar component, as ZooKeeper is a dependency for the others.
+
+```bash
+$ kubectl apply -f zookeeper.yaml
+```
+
+Wait until all three ZooKeeper server pods are up and have the status `Running`. You can check on the status of the ZooKeeper pods at any time:
+
+```bash
+$ kubectl get pods -l component=zookeeper
+NAME      READY     STATUS             RESTARTS   AGE
+zk-0      1/1       Running            0          18m
+zk-1      1/1       Running            0          17m
+zk-2      0/1       Running            6          15m
+```
+
+This step may take several minutes, as Kubernetes needs to download the Docker image on the VMs.
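+
+If your `kubectl` version supports it, you can also block until the pods report `Ready` instead of polling manually. A minimal sketch:
+
+```bash
+# Wait up to 10 minutes for all ZooKeeper pods to become Ready
+kubectl wait --for=condition=Ready pod -l component=zookeeper --timeout=600s
+```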
+
+### Initialize cluster metadata
+
+Once ZooKeeper runs, you need to [initialize the metadata](#cluster-metadata-initialization) for the Pulsar cluster in ZooKeeper. This includes system metadata for [BookKeeper](reference-terminology.md#bookkeeper) and Pulsar more broadly. You only need to run the Kubernetes [job](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) in the `cluster-metadata.yaml` file once:
+
+```bash
+$ kubectl apply -f cluster-metadata.yaml
+```
+
+For the sake of reference, that job runs the following command on an ephemeral pod:
+
+```bash
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster local \
+  --zookeeper zookeeper \
+  --configuration-store zookeeper \
+  --web-service-url http://broker.default.svc.cluster.local:8080/ \
+  --broker-service-url pulsar://broker.default.svc.cluster.local:6650/
+```
+
+### Deploy the rest of the components
+
+Once you have successfully initialized cluster metadata, you can then deploy the bookies, brokers, monitoring stack ([Prometheus](https://prometheus.io), [Grafana](https://grafana.com), and the [Pulsar dashboard](administration-dashboard.md)), and Pulsar cluster proxy:
+
+```bash
+$ kubectl apply -f bookie.yaml
+$ kubectl apply -f broker.yaml
+$ kubectl apply -f proxy.yaml
+$ kubectl apply -f monitoring.yaml
+$ kubectl apply -f admin.yaml
+```
+
+You can check on the status of the pods for these components either in the Kubernetes Dashboard or using `kubectl`:
+
+```bash
+$ kubectl get pods -w -l app=pulsar
+```
+
+### Set up properties and namespaces
+
+Once all of the components are up and running, you need to create at least one Pulsar tenant and at least one namespace.
+
+> If Pulsar [authentication and authorization](security-overview.md) is turned on, you do not strictly have to perform this step, though you can change [policies](admin-api-namespaces.md) for each of the namespaces later.
+
+You can create properties and namespaces (and perform any other administrative tasks) using the `pulsar-admin` pod that is already configured to act as an admin client for your newly created Pulsar cluster. One easy way to perform administrative tasks is to create an alias for the [`pulsar-admin`](reference-pulsar-admin.md) tool installed on the admin pod.
+
+```bash
+$ alias pulsar-admin='kubectl exec pulsar-admin -it -- bin/pulsar-admin'
+```
+
+Now, any time you run `pulsar-admin`, you can run commands from that pod. This command creates a tenant called `ten`:
+
+```bash
+$ pulsar-admin tenants create ten \
+  --admin-roles admin \
+  --allowed-clusters local
+```
+
+This command creates a `ns` namespace under the `ten` tenant:
+
+```bash
+$ pulsar-admin namespaces create ten/ns
+```
+
+To verify that everything has gone as planned:
+
+```bash
+$ pulsar-admin tenants list
+public
+ten
+
+$ pulsar-admin namespaces list ten
+ten/ns
+```
+
+Now that you have a namespace and tenant set up, you can move on to [experimenting with your Pulsar cluster](#experimenting-with-your-cluster) from within the cluster or [connecting to the cluster](#client-connections) using a Pulsar client.
+
+### Experiment with your cluster
+
+Now that you have successfully created a tenant and namespace, you can begin experimenting with your running Pulsar cluster. Using the same `pulsar-admin` pod via an alias, as in the section above, you can use [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) to create a test [producer](reference-terminology.md#producer) to publish 10,000 messages a second on a topic in the [tenant](reference-terminology.md#tenant) and [namespace](reference-terminology.md#namespace) you have created.
+
+First, create an alias to use the `pulsar-perf` tool via the admin pod:
+
+```bash
+$ alias pulsar-perf='kubectl exec pulsar-admin -it -- bin/pulsar-perf'
+```
+
+Now, produce messages:
+
+```bash
+$ pulsar-perf produce persistent://public/default/my-topic \
+  --rate 10000
+```
+
+Similarly, you can start a [consumer](reference-terminology.md#consumer) to subscribe to and receive all the messages on that topic:
+
+```bash
+$ pulsar-perf consume persistent://public/default/my-topic \
+  --subscriber-name my-subscription-name
+```
+
+You can also view [stats](administration-stats.md) for the topic using the [`pulsar-admin`](reference-pulsar-admin.md#persistent-stats) tool:
+
+```bash
+$ pulsar-admin persistent stats persistent://public/default/my-topic
+```
+
+### Monitor
+
+The default monitoring stack for Pulsar on Kubernetes consists of [Prometheus](#prometheus), [Grafana](#grafana), and the [Pulsar dashboard](administration-dashboard.md).
+
+> If you deploy the cluster to Minikube, the following monitoring ports are mapped on the minikube VM:
+>
+> - Prometheus port: 30003
+> - Grafana port: 30004
+> - Dashboard port: 30005
+>
+> You can use `minikube ip` to find the IP address of the minikube VM, and then use their mapped ports
+> to access corresponding services. For example, you can access Pulsar dashboard at `http://$(minikube ip):30005`.
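+
+For example, assuming the port mappings above, you can verify from your host that Prometheus is reachable on the minikube VM:
+
+```bash
+# Query the Prometheus health endpoint exposed on the minikube VM
+curl "http://$(minikube ip):30003/-/healthy"
+```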
+
+#### Prometheus
+
+A [Prometheus](https://prometheus.io) instance running inside the cluster can collect all Pulsar metrics in Kubernetes. Typically, you do not have to access Prometheus directly. Instead, you can use the [Grafana interface](#grafana) that displays the data stored in Prometheus.
+
+#### Grafana
+
+In your Kubernetes cluster, you can use [Grafana](https://grafana.com) to view dashboards for Pulsar [namespaces](reference-terminology.md#namespace) (message rates, latency, and storage), JVM stats, [ZooKeeper](https://zookeeper.apache.org), and [BookKeeper](reference-terminology.md#bookkeeper). You can get access to the pod serving Grafana using the [`port-forward`](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster) command of `kubectl`:
+
+```bash
+$ kubectl port-forward \
+  $(kubectl get pods -l component=grafana -o jsonpath='{.items[*].metadata.name}') 3000
+```
+
+You can then access the dashboard in your web browser at [localhost:3000](http://localhost:3000).
+
+#### Pulsar dashboard
+
+While Grafana and Prometheus are used to provide graphs with historical data, [Pulsar dashboard](administration-dashboard.md) reports more detailed current data for individual [topics](reference-terminology.md#topic).
+
+For example, you can have sortable tables showing all namespaces, topics, and broker stats, with details on the IP address for consumers, how long they have been connected, and much more.
+
+You can get access to the pod serving the Pulsar dashboard using the [`port-forward`](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster) command of `kubectl`:
+
+```bash
+$ kubectl port-forward \
+  $(kubectl get pods -l component=dashboard -o jsonpath='{.items[*].metadata.name}') 8080:80
+```
+
+You can then access the dashboard in your web browser at [localhost:8080](http://localhost:8080).
+
+### Client connections
+
+> If you deploy the cluster to Minikube, the proxy ports are mapped on the minikube VM:
+>
+> - HTTP port: 30001
+> - Pulsar binary protocol port: 30002
+>
+> You can use `minikube ip` to find the IP address of the minikube VM, and then use their mapped ports
+> to access corresponding services. For example, the Pulsar web service URL is `http://$(minikube ip):30001`.
+
+Once your Pulsar cluster is running on Kubernetes, you can connect to it using a Pulsar client. You can fetch the IP address for the Pulsar proxy running in your Kubernetes cluster using `kubectl`:
+
+```bash
+$ kubectl get service broker-proxy \
+  --output=jsonpath='{.status.loadBalancer.ingress[*].ip}'
+```
+
+If the IP address for the proxy is, for example, 35.12.13.198, you can connect to Pulsar using `pulsar://35.12.13.198:6650`.
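+
+For a quick end-to-end check, you can also use the `pulsar-client` CLI shipped with the Pulsar distribution; the IP address and topic below are examples:
+
+```bash
+# Publish a test message through the proxy from outside the cluster
+bin/pulsar-client --url pulsar://35.12.13.198:6650 \
+  produce persistent://ten/ns/test-topic \
+  --messages "hello from outside the cluster"
+```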
+
+You can find client documentation for:
+
+* [Java](client-libraries-java.md)
+* [Python](client-libraries-python.md)
+* [C++](client-libraries-cpp.md)
+
+
+## Deploy Pulsar components (helm)
+
+Pulsar also provides a [Helm](https://docs.helm.sh/) chart for deploying a Pulsar cluster to Kubernetes. Before you start, make sure you follow the [Helm documentation](https://docs.helm.sh/using_helm) to install Helm.
+
+> The following assumes that you have cloned the Pulsar repo under a `PULSAR_HOME` directory.
+
+### Minikube
+
+1. Go to the Pulsar Helm chart directory.
+    ```shell
+    cd ${PULSAR_HOME}/deployment/kubernetes/helm
+    ```
+1. Install the Helm chart to a Kubernetes cluster on Minikube.
+    ```shell
+    helm install --values pulsar/values-mini.yaml ./pulsar
+    ```
+
+Once the Helm chart installation completes, you can access the cluster via:
+
+- Web service url: `http://$(minikube ip):30001/`
+- Pulsar service url: `pulsar://$(minikube ip):30002/`
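+
+As a quick sanity check, you can hit the admin REST API through the web service port; for example, listing tenants:
+
+```bash
+# List tenants via the admin REST API exposed by the web service
+curl "http://$(minikube ip):30001/admin/v2/tenants"
+```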
diff --git a/site2/website/versioned_docs/version-2.5.2/deploy-monitoring.md b/site2/website/versioned_docs/version-2.5.2/deploy-monitoring.md
new file mode 100644
index 0000000..2533065
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/deploy-monitoring.md
@@ -0,0 +1,90 @@
+---
+id: version-2.5.2-deploy-monitoring
+title: Monitoring
+sidebar_label: Monitoring
+original_id: deploy-monitoring
+---
+
+You can monitor a Pulsar cluster in different ways, exposing both metrics that relate to the usage of topics and metrics on the overall health of the individual components of the cluster.
+
+## Collect metrics
+
+You can collect broker stats, ZooKeeper stats, and BookKeeper stats. 
+
+### Broker stats
+
+You can collect Pulsar broker metrics from brokers and export the metrics in JSON format. Pulsar broker metrics mainly come in two types:
+
+* *Destination dumps*, which contain stats for each individual topic. You can fetch the destination dumps using the command below:
+
+  ```shell
+  bin/pulsar-admin broker-stats destinations
+  ```
+
+* *Broker metrics*, which contain broker information and topic stats aggregated at the namespace level. You can fetch the broker metrics using the command below:
+
+  ```shell
+  bin/pulsar-admin broker-stats monitoring-metrics
+  ```
+
+All the message rates are updated every minute.
+
+The aggregated broker metrics are also exposed in the [Prometheus](https://prometheus.io) format at:
+
+```shell
+http://$BROKER_ADDRESS:8080/metrics
+```
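+
+For example, you can scrape the endpoint directly to inspect the exposed metrics (`$BROKER_ADDRESS` is a placeholder for your broker host):
+
+```bash
+# Fetch the broker's Prometheus metrics and show the first few lines
+curl -s "http://${BROKER_ADDRESS}:8080/metrics" | head -n 20
+```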
+
+### ZooKeeper stats
+
+The local ZooKeeper and configuration store servers and clients that are shipped with Pulsar are instrumented to expose detailed stats through Prometheus as well.
+
+```shell
+http://$LOCAL_ZK_SERVER:8000/metrics
+http://$GLOBAL_ZK_SERVER:8001/metrics
+```
+
+The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can change the default port of local ZooKeeper and configuration store by specifying the `stats_server_port` system property.
+
+### BookKeeper stats
+
+For BookKeeper, you can configure the stats framework by changing the `statsProviderClass` in
+`conf/bookkeeper.conf`.
+
+The default BookKeeper configuration, which is included with the Pulsar distribution, enables the Prometheus exporter.
+
+```shell
+http://$BOOKIE_ADDRESS:8000/metrics
+```
+
+The default port for a bookie is `8000` (instead of `8080`). You can change the port by configuring `prometheusStatsHttpPort` in `conf/bookkeeper.conf`.
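+
+For example, you can confirm the stats provider and port that your bookie configuration uses (file location assumes a standard Pulsar distribution layout):
+
+```bash
+# Show the stats provider and Prometheus port settings in the bookie config
+grep -E 'statsProviderClass|prometheusStatsHttpPort' conf/bookkeeper.conf
+```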
+
+## Configure Prometheus
+
+You can use Prometheus to collect and store the metrics data. For details, refer to [Prometheus guide](https://prometheus.io/docs/introduction/getting_started/).
+
+When you run Pulsar on bare metal, you can provide the list of nodes that need to be probed. When you deploy Pulsar in a Kubernetes cluster, the monitoring is automatically set up with the [provided](deploy-kubernetes.md) instructions.
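+
+On bare metal, that node list typically ends up in a Prometheus scrape configuration. A minimal sketch, with hypothetical broker hostnames:
+
+```bash
+# Write a minimal Prometheus scrape config for two example brokers
+cat > prometheus.yml <<'EOF'
+scrape_configs:
+  - job_name: 'pulsar-brokers'
+    static_configs:
+      - targets: ['broker-1:8080', 'broker-2:8080']
+EOF
+```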
+
+## Dashboards
+
+When you collect time series statistics, the major problem is to make sure the number of dimensions attached to the data does not explode.
+
+For that reason, you only need to collect time series of metrics aggregated at the namespace level.
+
+### Pulsar per-topic dashboard
+
+The per-topic dashboard instructions are available at [Dashboard](administration-dashboard.md).
+
+### Grafana
+
+You can use Grafana to easily create dashboards driven by the data stored in Prometheus.
+
+When you deploy Pulsar on Kubernetes, a `pulsar-grafana` Docker image is enabled by default. The image ships with the principal dashboards.
+
+Enter the command below to run the Grafana dashboards manually:
+
+```shell
+docker run -p3000:3000 \
+        -e PROMETHEUS_URL=http://$PROMETHEUS_HOST:9090/ \
+        apachepulsar/pulsar-grafana:latest
+```
diff --git a/site2/website/versioned_docs/version-2.5.2/functions-cli.md b/site2/website/versioned_docs/version-2.5.2/functions-cli.md
new file mode 100644
index 0000000..bb9edc5
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/functions-cli.md
@@ -0,0 +1,198 @@
+---
+id: version-2.5.2-functions-cli
+title: Pulsar Functions command line tool
+sidebar_label: Reference: CLI
+original_id: functions-cli
+---
+
+The following tables list the Pulsar Functions command-line subcommands, along with their parameters and default values.
+
+## localrun
+
+Run a Pulsar Function locally, rather than deploying it to a Pulsar cluster.
+
+Name | Description | Default
+---|---|---
+auto-ack | Whether or not the framework acknowledges messages automatically. | false |
+broker-service-url | The URL for the Pulsar broker. | |
+classname | The class name of a Pulsar Function.| |
+client-auth-params | Client authentication parameter. | |
+client-auth-plugin | Client authentication plugin using which function-process can connect to broker. |  |
+CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| |
+custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
+custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
+dead-letter-topic | The topic where all messages that were not processed successfully are sent. | |
+disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
+fqfn | The Fully Qualified Function Name (FQFN) for the function. |  |
+function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. |  |
+go | Path to the main Go executable binary for the function (if the function is written in Go). |  |
+hostname-verification-enabled | Enable hostname verification. | false
+inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
+jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. |  |
+instance-id-offset | Start the instanceIds from this offset. | 0
+log-topic | The topic to which the logs of a Pulsar Function are produced. |  |
+max-message-retries | How many times should we try to process a message before giving up. |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+output | The output topic of a Pulsar Function (If none is specified, no output is written). |  |
+output-serde-classname | The SerDe class to be used for messages output by the function. |  |
+parallelism | The parallelism factor of  a Pulsar Function (i.e. the number of function instances to run). |  |
+processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
+py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). |  |
+ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). |  |
+retain-ordering | Function consumes and processes messages in order. | |
+schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | <empty string>
+sliding-interval-count | The number of messages after which the window slides. |  |
+sliding-interval-duration-ms | The time duration after which the window slides. |  |
+subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. |  |
+tenant | The tenant of a Pulsar Function. |  |
+timeout-ms | The message timeout in milliseconds. |  |
+tls-allow-insecure | Allow insecure tls connection. | false
+tls-trust-cert-path | tls trust cert file path. |  |
+topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). |  |
+use-tls | Use tls connection. | false
+user-config | User-defined config key/values. |  |
+window-length-count | The number of messages per window. |  |
+window-length-duration-ms | The time duration of the window in milliseconds. | |
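+
+For example, a minimal `localrun` invocation for a Java function might look like the following (file names, class names, and topics are hypothetical):
+
+```bash
+bin/pulsar-admin functions localrun \
+  --jar my-functions.jar \
+  --classname org.example.MyFunction \
+  --inputs persistent://public/default/in \
+  --output persistent://public/default/out
+```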
+
+
+## create
+
+Create and deploy a Pulsar Function in cluster mode.
+
+Name | Description | Default
+---|---|---
+auto-ack | Whether or not the framework acknowledges messages automatically. | false |
+classname | The class name of a Pulsar Function. |  |
+CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| |
+custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | |
+custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
+custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
+dead-letter-topic | The topic where all messages that were not processed successfully are sent. | |
+disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
+fqfn | The Fully Qualified Function Name (FQFN) for the function. |  |
+function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. |  |
+go | Path to the main Go executable binary for the function (if the function is written in Go). |  |
+inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
+jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. |  |
+log-topic | The topic to which the logs of a Pulsar Function are produced. |  |
+max-message-retries | How many times should we try to process a message before giving up. |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+output | The output topic of a Pulsar Function (If none is specified, no output is written). |  |
+output-serde-classname | The SerDe class to be used for messages output by the function. |  |
+parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). |  |
+processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
+py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). |  |
+ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). |  |
+retain-ordering | Function consumes and processes messages in order. |  |
+schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | <empty string>
+sliding-interval-count | The number of messages after which the window slides. |  |
+sliding-interval-duration-ms | The time duration after which the window slides. |  |
+subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. |  |
+tenant | The tenant of a Pulsar Function. |  |
+timeout-ms | The message timeout in milliseconds. |  |
+topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). |  |
+user-config | User-defined config key/values. |  |
+window-length-count | The number of messages per window. |  |
+window-length-duration-ms | The time duration of the window in milliseconds. | |
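+
+A minimal `create` invocation for a Java function might look like this (names and paths are hypothetical):
+
+```bash
+bin/pulsar-admin functions create \
+  --tenant public \
+  --namespace default \
+  --name my-function \
+  --jar my-functions.jar \
+  --classname org.example.MyFunction \
+  --inputs persistent://public/default/in \
+  --output persistent://public/default/out
+```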
+
+## delete
+
+Delete a Pulsar Function that is running on a Pulsar cluster.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
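+
+For example, to delete a function identified by tenant, namespace, and name (hypothetical values):
+
+```bash
+bin/pulsar-admin functions delete \
+  --tenant public \
+  --namespace default \
+  --name my-function
+```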
+
+## update
+
+Update a Pulsar Function that has been deployed to a Pulsar cluster.
+
+Name | Description | Default
+---|---|---
+auto-ack | Whether or not the framework acknowledges messages automatically. | false
+classname | The class name of a Pulsar Function. | |
+CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | |
+custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | |
+custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
+custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
+dead-letter-topic | The topic where all messages that were not processed successfully are sent. | |
+disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
+fqfn | The Fully Qualified Function Name (FQFN) for the function. |  |
+function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. |  |
+go | Path to the main Go executable binary for the function (if the function is written in Go). |  |
+inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
+jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. |  |
+log-topic | The topic to which the logs of a Pulsar Function are produced. |  |
+max-message-retries | How many times should we try to process a message before giving up. |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+output | The output topic of a Pulsar Function (If none is specified, no output is written). |  |
+output-serde-classname | The SerDe class to be used for messages output by the function. |  |
+parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). |  |
+processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
+py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). |  |
+ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). |  |
+retain-ordering | Function consumes and processes messages in order. |  |
+schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | <empty string>
+sliding-interval-count | The number of messages after which the window slides. |  |
+sliding-interval-duration-ms | The time duration after which the window slides. |  |
+subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. |  |
+tenant | The tenant of a Pulsar Function. |  |
+timeout-ms | The message timeout in milliseconds. |  |
+topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). |  |
+update-auth-data | Whether or not to update the auth data. | false
+user-config | User-defined config key/values. |  |
+window-length-count | The number of messages per window. |  |
+window-length-duration-ms | The time duration of the window in milliseconds. | |
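+
+For example, you might scale a deployed function out to three instances by updating its parallelism (names are hypothetical):
+
+```bash
+bin/pulsar-admin functions update \
+  --tenant public \
+  --namespace default \
+  --name my-function \
+  --parallelism 3
+```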
+
+## get
+
+Fetch information about a Pulsar Function.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## restart
+
+Restart a function instance.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+instance-id | The function instanceId (restart all instances if instance-id is not provided). |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## stop
+
+Stop a function instance.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+instance-id | The function instanceId (stop all instances if instance-id is not provided). |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## start
+
+Start a stopped function instance.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+instance-id | The function instanceId (start all instances if instance-id is not provided). |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
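+
+For example, to stop and then start a single instance of a function (names and instance ID are hypothetical):
+
+```bash
+bin/pulsar-admin functions stop \
+  --tenant public --namespace default --name my-function --instance-id 0
+
+bin/pulsar-admin functions start \
+  --tenant public --namespace default --name my-function --instance-id 0
+```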
diff --git a/site2/website/versioned_docs/version-2.5.2/functions-debug.md b/site2/website/versioned_docs/version-2.5.2/functions-debug.md
new file mode 100644
index 0000000..1a84d68
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/functions-debug.md
@@ -0,0 +1,455 @@
+---
+id: version-2.5.2-functions-debug
+title: Debug Pulsar Functions
+sidebar_label: How-to: Debug
+original_id: functions-debug
+---
+
+You can use the following methods to debug Pulsar Functions:
+
+* [Captured stderr](functions-debug.md#captured-stderr)
+* [Use unit test](functions-debug.md#use-unit-test)
+* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
+* [Use log topic](functions-debug.md#use-log-topic)
+* [Use Functions CLI](functions-debug.md#use-functions-cli)
+
+## Captured stderr
+
+Function startup information and captured stderr output are written to `logs/functions/<tenant>/<namespace>/<function>/<function>-<instance>.log`.
+
+This is useful for debugging why a function fails to start.
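+
+For example, you can tail the log of a specific function instance while it starts (tenant, namespace, and function name below are hypothetical):
+
+```bash
+# Follow the captured log for instance 0 of a function
+tail -f logs/functions/public/default/my-function/my-function-0.log
+```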
+
+## Use unit test
+
+A Pulsar Function is a function with inputs and outputs, so you can test a Pulsar Function in a similar way as you test any function.
+
+For example, if you have the following Pulsar Function:
+
+```java
+import java.util.function.Function;
+
+public class JavaNativeExclamationFunction implements Function<String, String> {
+   @Override
+   public String apply(String input) {
+       return String.format("%s!", input);
+   }
+}
+```
+
+You can write a simple unit test to test this Pulsar Function.
+
+```java
+@Test
+public void testJavaNativeExclamationFunction() {
+   JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
+   String output = exclamation.apply("foo");
+   Assert.assertEquals(output, "foo!");
+}
+```
+
+The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+   @Override
+   public String process(String input, Context context) {
+       return String.format("%s!", input);
+   }
+}
+```
+
+In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.
+
+```java
+@Test
+public void testExclamationFunction() {
+   ExclamationFunction exclamation = new ExclamationFunction();
+   String output = exclamation.process("foo", mock(Context.class));
+   Assert.assertEquals(output, "foo!");
+}
+```
+
+## Debug with localrun mode
+When you run a Pulsar Function in localrun mode, it launches an instance of the Function on your local machine as a thread.
+
+In this mode, a Pulsar Function consumes and produces actual data to a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster.
+
+> Note  
+> Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads.
+
+You can launch your function in the following manner.
+
+```java
+FunctionConfig functionConfig = new FunctionConfig();
+functionConfig.setName(functionName);
+functionConfig.setInputs(Collections.singleton(sourceTopic));
+functionConfig.setClassName(ExclamationFunction.class.getName());
+functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
+functionConfig.setOutput(sinkTopic);
+
+LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
+localRunner.start(true);
+```
+
+This way, you can easily debug functions using an IDE: set breakpoints and manually step through a function to debug it with real data.
+
+The following example illustrates how to programmatically launch a function in localrun mode.
+
+```java
+import java.util.Collections;
+
+import org.apache.pulsar.common.functions.FunctionConfig;
+import org.apache.pulsar.functions.LocalRunner;
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+
+    @Override
+    public String process(String s, Context context) throws Exception {
+        return s + "!";
+    }
+
+    // Launch this function in localrun mode from a plain main() method
+    public static void main(String[] args) throws Exception {
+        FunctionConfig functionConfig = new FunctionConfig();
+        functionConfig.setName("exclamation");
+        functionConfig.setInputs(Collections.singleton("input"));
+        functionConfig.setClassName(ExclamationFunction.class.getName());
+        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
+        functionConfig.setOutput("output");
+
+        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
+        localRunner.start(false);
+    }
+}
+```
+
+To use localrun mode programmatically, add the following dependency.
+
+```xml
+<dependency>
+   <groupId>org.apache.pulsar</groupId>
+   <artifactId>pulsar-functions-local-runner</artifactId>
+   <version>${pulsar.version}</version>
+</dependency>
+
+```
+
+For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
+
+> Note   
+> Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
+
+## Use log topic
+
+In Pulsar Functions, you can generate log information defined in functions to a specified log topic. You can configure consumers to consume messages from a specified log topic to check the log information.
+
+![Pulsar Functions core programming model](assets/pulsar-functions-overview.png)
+
+**Example** 
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class LoggingFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        String messageId = new String(context.getMessageId());
+
+        if (input.contains("danger")) {
+            LOG.warn("A warning was received in message {}", messageId);
+        } else {
+            LOG.info("Message {} received\nContent: {}", messageId, input);
+        }
+
+        return null;
+    }
+}
+```
+
+As shown in the example above, you can get the logger via `context.getLogger()` and assign it to a `LOG` variable of `slf4j`, so you can define your desired log information in a function using the `LOG` variable. Meanwhile, you need to specify the topic to which the log information is produced.
+
+**Example** 
+
+```bash
+$ bin/pulsar-admin functions create \
+  --log-topic persistent://public/default/logging-function-logs \
+  # Other function configs
+```
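+
+To inspect the log messages, you can attach any consumer to the log topic; for example, with the `pulsar-client` CLI (subscription name is hypothetical):
+
+```bash
+# Consume log messages from the log topic indefinitely
+bin/pulsar-client consume persistent://public/default/logging-function-logs \
+  -s log-reader -n 0
+```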
+
+## Use Functions CLI
+
+With [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
+
+* `get`
+* `status`
+* `stats`
+* `list`
+* `trigger`
+
+> **Tip**
+> 
+> For complete commands of **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
+
+### `get`
+
+Get information about a Pulsar Function.
+
+**Usage**
+
+```text
+$ pulsar-admin functions get options
+```
+
+**Options**
+
+|Flag|Description
+|---|---
+|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
+|`--name`|The name of a Pulsar Function.
+|`--namespace`|The namespace of a Pulsar Function.
+|`--tenant`|The tenant of a Pulsar Function.
+
+> **Tip**
+> 
+> `--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or `--name`, `--namespace` and `--tenant`.
+
+**Example** 
+
+You can specify `--fqfn` to get information about a Pulsar Function.
+
+```text
+$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6
+```
+Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function.
+
+```text
+$ ./bin/pulsar-admin functions get \
+    --tenant public \
+    --namespace default \
+    --name ExclamationFunctio6
+```
+
+As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function.
+
+```text
+{
+  "tenant": "public",
+  "namespace": "default",
+  "name": "ExclamationFunctio6",
+  "className": "org.example.test.ExclamationFunction",
+  "inputSpecs": {
+    "persistent://public/default/my-topic-1": {
+      "isRegexPattern": false
+    }
+  },
+  "output": "persistent://public/default/test-1",
+  "processingGuarantees": "ATLEAST_ONCE",
+  "retainOrdering": false,
+  "userConfig": {},
+  "runtime": "JAVA",
+  "autoAck": true,
+  "parallelism": 1
+}
+```
+
+### `status`
+
+Check the current status of a Pulsar Function.
+
+**Usage**
+
+```text
+$ pulsar-admin functions status options
+```
+
+**Options**
+
+|Flag|Description
+|---|---
+|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
+|`--instance-id`|The instance ID of a Pulsar Function. <br>If the `--instance-id` is not specified, it gets the status of all instances.<br>
+|`--name`|The name of a Pulsar Function. 
+|`--namespace`|The namespace of a Pulsar Function.
+|`--tenant`|The tenant of a Pulsar Function.
+
+**Example** 
+
+```text
+$ ./bin/pulsar-admin functions status \
+    --tenant public \
+    --namespace default \
+    --name ExclamationFunctio6 \
+```
+
+As shown below, the `status` command shows the number of instances, running instances, the instance running under the _ExclamationFunctio6_ function, received messages, successfully processed messages, system exceptions, the average latency and so on.
+
+```text
+{
+  "numInstances" : 1,
+  "numRunning" : 1,
+  "instances" : [ {
+    "instanceId" : 0,
+    "status" : {
+      "running" : true,
+      "error" : "",
+      "numRestarts" : 0,
+      "numReceived" : 1,
+      "numSuccessfullyProcessed" : 1,
+      "numUserExceptions" : 0,
+      "latestUserExceptions" : [ ],
+      "numSystemExceptions" : 0,
+      "latestSystemExceptions" : [ ],
+      "averageLatency" : 0.8385,
+      "lastInvocationTime" : 1557734137987,
+      "workerId" : "c-standalone-fw-23ccc88ef29b-8080"
+    }
+  } ]
+}
+```
+
+### `stats`
+
+Get the current stats of a Pulsar Function.
+
+**Usage**
+
+```text
+$ pulsar-admin functions stats options
+```
+
+**Options**
+
+|Flag|Description
+|---|---
+|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
+|`--instance-id`|The instance ID of a Pulsar Function. <br>If the `--instance-id` is not specified, it gets the stats of all instances.<br>
+|`--name`|The name of a Pulsar Function. 
+|`--namespace`|The namespace of a Pulsar Function.
+|`--tenant`|The tenant of a Pulsar Function.
+
+**Example**
+
+```text
+$ ./bin/pulsar-admin functions stats \
+    --tenant public \
+    --namespace default \
+    --name ExclamationFunctio6
+```
+
+The output is shown as follows:
+
+```text
+{
+  "receivedTotal" : 1,
+  "processedSuccessfullyTotal" : 1,
+  "systemExceptionsTotal" : 0,
+  "userExceptionsTotal" : 0,
+  "avgProcessLatency" : 0.8385,
+  "1min" : {
+    "receivedTotal" : 0,
+    "processedSuccessfullyTotal" : 0,
+    "systemExceptionsTotal" : 0,
+    "userExceptionsTotal" : 0,
+    "avgProcessLatency" : null
+  },
+  "lastInvocation" : 1557734137987,
+  "instances" : [ {
+    "instanceId" : 0,
+    "metrics" : {
+      "receivedTotal" : 1,
+      "processedSuccessfullyTotal" : 1,
+      "systemExceptionsTotal" : 0,
+      "userExceptionsTotal" : 0,
+      "avgProcessLatency" : 0.8385,
+      "1min" : {
+        "receivedTotal" : 0,
+        "processedSuccessfullyTotal" : 0,
+        "systemExceptionsTotal" : 0,
+        "userExceptionsTotal" : 0,
+        "avgProcessLatency" : null
+      },
+      "lastInvocation" : 1557734137987,
+      "userMetrics" : { }
+    }
+  } ]
+}
+```
+
+### `list`
+
+List all Pulsar Functions running under a specific tenant and namespace.
+
+**Usage**
+
+```text
+$ pulsar-admin functions list options
+```
+
+**Options**
+
+|Flag|Description
+|---|---
+|`--namespace`|The namespace of a Pulsar Function.
+|`--tenant`|The tenant of a Pulsar Function.
+
+**Example** 
+
+```text
+$ ./bin/pulsar-admin functions list \
+    --tenant public \
+    --namespace default
+```
+As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace.
+
+```text
+ExclamationFunctio1
+ExclamationFunctio2
+ExclamationFunctio3
+```
+
+### `trigger`
+
+Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it.
+
+**Usage**
+
+```text
+$ pulsar-admin functions trigger options
+```
+
+**Options**
+
+|Flag|Description
+|---|---
+|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
+|`--name`|The name of a Pulsar Function.
+|`--namespace`|The namespace of a Pulsar Function.
+|`--tenant`|The tenant of a Pulsar Function.
+|`--topic`|The topic name that a Pulsar Function consumes from.
+|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function.
+|`--trigger-value`|The value to trigger a Pulsar Function.
+
+**Example** 
+
+```text
+$ ./bin/pulsar-admin functions trigger \
+    --tenant public \
+    --namespace default \
+    --name ExclamationFunctio6 \
+    --topic persistent://public/default/my-topic-1 \
+    --trigger-value "hello pulsar functions"
+```
+
+As shown below, the `trigger` command returns the following result:
+
+```text
+This is my function!
+```
+
+> #### **Note**
+> You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs.
+>
+>```text
+>Function in trigger function has unidentified topic
+>
+>Reason: Function in trigger function has unidentified topic
+>```
diff --git a/site2/website/versioned_docs/version-2.5.2/functions-develop.md b/site2/website/versioned_docs/version-2.5.2/functions-develop.md
new file mode 100644
index 0000000..8b84f43
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/functions-develop.md
@@ -0,0 +1,983 @@
+---
+id: version-2.5.2-functions-develop
+title: Develop Pulsar Functions
+sidebar_label: How-to: Develop
+original_id: functions-develop
+---
+
+This tutorial walks you through how to develop Pulsar Functions.
+
+## Available APIs
+In Java and Python, you have two options to write Pulsar Functions. In Go, you can use the Pulsar Functions SDK for Go.
+
+Interface | Description | Use cases
+:---------|:------------|:---------
+Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context).
+Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context).
+
+The following example is a language-native function, which adds an exclamation point to each incoming string and publishes the resulting string to a topic; it has no external dependencies.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+```Java
+import java.util.function.Function;
+
+public class JavaNativeExclamationFunction implements Function<String, String> {
+    @Override
+    public String apply(String input) {
+        return String.format("%s!", input);
+    }
+}
+```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java).
+
+<!--Python-->
+```python
+def process(input):
+    return "{}!".format(input)
+```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/native_exclamation_function.py).
+
+> Note
+> You can write Pulsar Functions in python2 or python3. However, Pulsar only looks for `python` as the interpreter.
+> 
+> If you're running Pulsar Functions on an Ubuntu system that only supports python3, you might fail to
+> start the functions. In this case, you can create a symlink as shown below. Note that your system may
+> break if you subsequently install any other package that depends on Python 2.x. A solution is under development in [Issue 5518](https://github.com/apache/pulsar/issues/5518).
+> 
+> ```bash
+> sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
+> ```
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+The following example uses Pulsar Functions SDK.
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+```Java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+    @Override
+    public String process(String input, Context context) {
+        return String.format("%s!", input);
+    }
+}
+```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java).
+
+<!--Python-->
+```python
+from pulsar import Function
+
+class ExclamationFunction(Function):
+  def __init__(self):
+    pass
+
+  def process(self, input, context):
+    return input + '!'
+```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py).
+
+<!--Go-->
+```Go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/apache/pulsar/pulsar-function-go/pf"
+)
+
+func HandleRequest(ctx context.Context, in []byte) error{
+	fmt.Println(string(in) + "!")
+	return nil
+}
+
+func main() {
+	pf.Start(HandleRequest)
+}
+```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-function-go/examples/inputFunc.go#L20-L36).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## Schema registry
+Pulsar has a built-in schema registry and comes bundled with a variety of popular schema types (Avro, JSON, and Protobuf). Pulsar Functions can leverage existing schema information from input topics to derive the input type. The schema registry applies to the output topic as well.
+
+## SerDe
+SerDe stands for **Ser**ialization and **De**serialization. Pulsar Functions uses SerDe when publishing data to and consuming data from Pulsar topics. How SerDe works by default depends on the language you use for a particular function.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+When you write Pulsar Functions in Java, the following basic Java types are built in and supported by default:
+
+* `String`
+* `Double`
+* `Integer`
+* `Float`
+* `Long`
+* `Short`
+* `Byte`
+
+To customize Java types, you need to implement the following interface.
+
+```java
+public interface SerDe<T> {
+    T deserialize(byte[] input);
+    byte[] serialize(T input);
+}
+```
+
+<!--Python-->
+In Python, the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns.
+
+You can specify the SerDe when [creating](functions-deploy.md#cluster-mode) or [running](functions-deploy.md#local-run-mode) functions. 
+
+```bash
+$ bin/pulsar-admin functions create \
+  --tenant public \
+  --namespace default \
+  --name my_function \
+  --py my_function.py \
+  --classname my_function.MyFunction \
+  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
+  --output-serde-classname Serde3 \
+  --output output-topic-1
+```
+
+This case contains two input topics: `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Functions logic, including the processing function and SerDe classes, must be contained within a single Python file.
+
+When using Pulsar Functions for Python, you have three SerDe options:
+
+1. You can use the [`IdentitySerde`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying SerDe means that this option is used.
+2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
+3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes back into the object.
+
+The table below shows when you should use each SerDe.
+
+SerDe option | When to use
+:------------|:-----------
+`IdentitySerde` | When you work with simple types like strings, Booleans, integers.
+`PickleSerDe` | When you work with complex, application-specific types and are comfortable with the "best effort" approach of `pickle`.
+Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes.
+
+<!--Go-->
+Currently, the feature is not available in Go.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### Example
+Imagine that you're writing Pulsar Functions to process tweet objects. You can refer to the following example of a `Tweet` class.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+
+```java
+public class Tweet {
+    private String username;
+    private String tweetContent;
+
+    public Tweet(String username, String tweetContent) {
+        this.username = username;
+        this.tweetContent = tweetContent;
+    }
+
+    // Standard setters and getters
+}
+```
+
+To pass `Tweet` objects directly between Pulsar Functions, you need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`.
+
+```java
+package com.example.serde;
+
+import org.apache.pulsar.functions.api.SerDe;
+
+import java.util.regex.Pattern;
+
+public class TweetSerde implements SerDe<Tweet> {
+    public Tweet deserialize(byte[] input) {
+        String s = new String(input);
+        String[] fields = s.split(Pattern.quote("|"));
+        return new Tweet(fields[0], fields[1]);
+    }
+
+    public byte[] serialize(Tweet input) {
+        return "%s|%s".format(input.getUsername(), input.getTweetContent()).getBytes();
+    }
+}
+```
+
+To apply this customized SerDe to a particular Pulsar Function, you need to:
+
+* Package the `Tweet` and `TweetSerde` classes into a JAR.
+* Specify a path to the JAR and SerDe class name when deploying the function.
+
+The following is an example of [`create`](reference-pulsar-admin.md#create-1) operation.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar /path/to/your.jar \
+  --output-serde-classname com.example.serde.TweetSerde \
+  # Other function attributes
+```
+
+> #### Custom SerDe classes must be packaged with your function JARs
+> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. So you need to include your SerDe classes in your function JARs. If not, Pulsar returns an error.
+
+<!--Python-->
+
+```python
+class Tweet(object):
+    def __init__(self, username, tweet_content):
+        self.username = username
+        self.tweet_content = tweet_content
+```
+
+In order to use this class in Pulsar Functions, you have two options:
+
+1. You can specify `PickleSerDe`, which applies the [`pickle`](https://docs.python.org/3/library/pickle.html) library SerDe.
+2. You can create your own SerDe class. The following is an example.
+
+  ```python
+  from pulsar import SerDe
+
+  class TweetSerDe(SerDe):
+
+      def serialize(self, input):
+          return bytes("{0}|{1}".format(input.username, input.tweet_content))
+
+      def deserialize(self, input_bytes):
+          tweet_components = str(input_bytes).split('|')
+          return Tweet(tweet_components[0], tweet_components[1])
+  ```
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/custom_object_function.py).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+In both languages, however, you can write custom SerDe logic for more complex, application-specific types.
+
+## Context
+Java, Python and Go SDKs provide access to a **context object** that can be used by a function. This context object provides a wide variety of information and functionality to the function.
+
+* The name and ID of a Pulsar Function.
+* The message ID of each message. Each Pulsar message is automatically assigned with an ID.
+* The key, event time, properties and partition key of each message.
+* The name of the topic to which the message is sent.
+* The names of all input topics as well as the output topic associated with the function.
+* The name of the class used for [SerDe](#serde).
+* The [tenant](reference-terminology.md#tenant) and namespace associated with the function.
+* The ID of the Pulsar Functions instance running the function.
+* The version of the function.
+* The [logger object](functions-develop.md#logger) used by the function, which can be used to create function log messages.
+* Access to arbitrary [user configuration](#user-config) values supplied via the CLI.
+* An interface for recording [metrics](#metrics).
+* An interface for storing and retrieving state in [state storage](#state-storage).
+* A function to publish new messages onto arbitrary topics.
+* A function to ack the message being processed (if auto-ack is disabled).
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+The [Context](https://github.com/apache/pulsar/blob/master/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Context.java) interface provides a number of methods that you can use to access the function [context](#context). The various method signatures for the `Context` interface are listed as follows.
+
+```java
+public interface Context {
+    Record<?> getCurrentRecord();
+    Collection<String> getInputTopics();
+    String getOutputTopic();
+    String getOutputSchemaType();
+    String getTenant();
+    String getNamespace();
+    String getFunctionName();
+    String getFunctionId();
+    String getInstanceId();
+    String getFunctionVersion();
+    Logger getLogger();
+    void incrCounter(String key, long amount);
+    long getCounter(String key);
+    void putState(String key, ByteBuffer value);
+    void deleteState(String key);
+    ByteBuffer getState(String key);
+    Map<String, Object> getUserConfigMap();
+    Optional<Object> getUserConfigValue(String key);
+    Object getUserConfigValueOrDefault(String key, Object defaultValue);
+    void recordMetric(String metricName, double value);
+    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
+    <O> CompletableFuture<Void> publish(String topicName, O object);
+}
+```
+
+The following example uses several methods available via the `Context` object.
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+import java.util.stream.Collectors;
+
+public class ContextFunction implements Function<String, Void> {
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
+        String functionName = context.getFunctionName();
+
+        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
+                input,
+                inputTopics);
+
+        LOG.info(logMessage);
+
+        String metricName = String.format("function-%s-messages-received", functionName);
+        context.recordMetric(metricName, 1);
+
+        return null;
+    }
+}
+```
+
+<!--Python-->
+```python
+class ContextImpl(pulsar.Context):
+  def get_message_id(self):
+    ...
+  def get_message_key(self):
+    ...
+  def get_message_eventtime(self):
+    ...
+  def get_message_properties(self):
+    ...
+  def get_current_message_topic_name(self):
+    ...
+  def get_partition_key(self):
+    ...
+  def get_function_name(self):
+    ...
+  def get_function_tenant(self):
+    ...
+  def get_function_namespace(self):
+    ...
+  def get_function_id(self):
+    ...
+  def get_instance_id(self):
+    ...
+  def get_function_version(self):
+    ...
+  def get_logger(self):
+    ...
+  def get_user_config_value(self, key):
+    ...
+  def get_user_config_map(self):
+    ...
+  def record_metric(self, metric_name, metric_value):
+    ...
+  def get_input_topics(self):
+    ...
+  def get_output_topic(self):
+    ...
+  def get_output_serde_class_name(self):
+    ...
+  def publish(self, topic_name, message, serde_class_name="serde.IdentitySerDe",
+              properties=None, compression_type=None, callback=None, message_conf=None):
+    ...
+  def ack(self, msgid, topic):
+    ...
+  def get_and_reset_metrics(self):
+    ...
+  def reset_metrics(self):
+    ...
+  def get_metrics(self):
+    ...
+  def incr_counter(self, key, amount):
+    ...
+  def get_counter(self, key):
+    ...
+  def del_counter(self, key):
+    ...
+  def put_state(self, key, value):
+    ...
+  def get_state(self, key):
+    ...
+```
+
+<!--Go-->
+```go
+func (c *FunctionContext) GetInstanceID() int {
+	return c.instanceConf.instanceID
+}
+
+func (c *FunctionContext) GetInputTopics() []string {
+	return c.inputTopics
+}
+
+func (c *FunctionContext) GetOutputTopic() string {
+	return c.instanceConf.funcDetails.GetSink().Topic
+}
+
+func (c *FunctionContext) GetFuncTenant() string {
+	return c.instanceConf.funcDetails.Tenant
+}
+
+func (c *FunctionContext) GetFuncName() string {
+	return c.instanceConf.funcDetails.Name
+}
+
+func (c *FunctionContext) GetFuncNamespace() string {
+	return c.instanceConf.funcDetails.Namespace
+}
+
+func (c *FunctionContext) GetFuncID() string {
+	return c.instanceConf.funcID
+}
+
+func (c *FunctionContext) GetFuncVersion() string {
+	return c.instanceConf.funcVersion
+}
+
+func (c *FunctionContext) GetUserConfValue(key string) interface{} {
+	return c.userConfigs[key]
+}
+
+func (c *FunctionContext) GetUserConfMap() map[string]interface{} {
+	return c.userConfigs
+}
+```
+
+The following example uses several methods available via the `Context` object.
+
+```go
+import (
+    "context"
+    "fmt"
+
+    "github.com/apache/pulsar/pulsar-function-go/pf"
+)
+
+func contextFunc(ctx context.Context) {
+    if fc, ok := pf.FromContext(ctx); ok {
+        fmt.Printf("function ID is:%s, ", fc.GetFuncID())
+        fmt.Printf("function version is:%s\n", fc.GetFuncVersion())
+    }
+}
+```
+
+For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsar-function-go/examples/contextFunc.go#L29-L34).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### User config
+When you run or update a Pulsar Function created using the SDK, you can pass arbitrary key/value pairs to it on the command line with the `--user-config` flag. Key/value pairs must be specified as JSON. The following function creation command passes a user-configured key/value pair to a function.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --name word-filter \
+  # Other function configs
+  --user-config '{"forbidden-word":"rosebud"}'
+```
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java--> 
+The Java SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
+
+```bash
+$ bin/pulsar-admin functions create \
+  # Other function configs
+  --user-config '{"word-of-the-day":"verdure"}'
+```
+
+To access that value in a Java function:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+import java.util.Optional;
+
+public class UserConfigFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
+        if (wotd.isPresent()) {
+            LOG.info("The word of the day is {}", wotd.get());
+        } else {
+            LOG.warn("No word of the day provided");
+        }
+        return null;
+    }
+}
+```
+
+The `UserConfigFunction` function logs the string `"The word of the day is verdure"` every time the function is invoked (that is, every time a message arrives). The `word-of-the-day` user config changes only when the function is updated with a new config value via the command line, as sketched below.
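+
+A minimal sketch of such an update, reusing the `word-of-the-day` key from above:
+
+```bash
+$ bin/pulsar-admin functions update \
+  # Other function configs
+  --user-config '{"word-of-the-day":"perspicacious"}'
+```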
+
+You can also access the entire user config map or set a default value in case no value is present:
+
+```java
+// Get the whole config map
+Map<String, Object> allConfigs = context.getUserConfigMap();
+
+// Get value or resort to default
+String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
+```
+
+> For all key/value pairs passed to Java functions, both the key *and* the value are `String`. To set the value to be a different type, you need to deserialize from the `String` type.
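+
+For instance, a numeric value arrives as a `String` and must be parsed in the function body (a minimal sketch; the `threshold` key is hypothetical):
+
+```java
+// "threshold" is a hypothetical user-config key; its value is delivered as a String
+int threshold = Integer.parseInt(
+        (String) context.getUserConfigValueOrDefault("threshold", "10"));
+```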
+
+<!--Python-->
+In a Python function, you can access the configuration value as follows.
+
+```python
+from pulsar import Function
+
+class WordFilter(Function):
+    def process(self, input, context):
+        forbidden_word = context.get_user_config_value("forbidden-word")
+
+        # Don't publish the message if it contains the user-supplied
+        # forbidden word
+        if forbidden_word in input:
+            pass
+        # Otherwise publish the message
+        else:
+            return input
+```
+
+The Python SDK [`Context`](#context) object enables you to access key/value pairs provided to Pulsar Functions via the command line (as JSON). The following example passes a key/value pair.
+
+```bash
+$ bin/pulsar-admin functions create \
+  # Other function configs \
+  --user-config '{"word-of-the-day":"verdure"}'
+```
+
+To access that value in a Python function:
+
+```python
+from pulsar import Function
+
+class UserConfigFunction(Function):
+    def process(self, input, context):
+        logger = context.get_logger()
+        wotd = context.get_user_config_value('word-of-the-day')
+        if wotd is None:
+            logger.warn('No word of the day provided')
+        else:
+            logger.info("The word of the day is {0}".format(wotd))
+```
+<!--Go--> 
+Currently, the feature is not available in Go.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### Logger
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+Pulsar Functions that use the Java SDK have access to an [SLF4J](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/slf4j/Logger.html) object that can be used to produce logs at the chosen log level. The following example logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class LoggingFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        // The Context exposes the current record rather than a raw message ID;
+        // use the record sequence as an identifier here
+        String messageId = String.valueOf(
+                context.getCurrentRecord().getRecordSequence().orElse(-1L));
+
+        if (input.contains("danger")) {
+            LOG.warn("A warning was received in message {}", messageId);
+        } else {
+            LOG.info("Message {} received\nContent: {}", messageId, input);
+        }
+
+        return null;
+    }
+}
+```
+
+If you want your function to produce logs, you need to specify a log topic when creating or running the function. The following is an example.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar my-functions.jar \
+  --classname my.package.LoggingFunction \
+  --log-topic persistent://public/default/logging-function-logs \
+  # Other function configs
+```
+
+All logs produced by `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic.
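+
+You can then inspect those logs with any Pulsar consumer, for example the `pulsar-client` CLI (a sketch; the subscription name is arbitrary, and `-n 0` consumes indefinitely):
+
+```bash
+$ bin/pulsar-client consume \
+  persistent://public/default/logging-function-logs \
+  -s log-inspection \
+  -n 0
+```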
+
+<!--Python-->
+Pulsar Functions that use the Python SDK have access to a logging object that can be used to produce logs at the chosen log level. The following example function logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`.
+
+```python
+from pulsar import Function
+
+class LoggingFunction(Function):
+    def process(self, input, context):
+        logger = context.get_logger()
+        msg_id = context.get_message_id()
+        if 'danger' in input:
+            logger.warn("A warning was received in message {0}".format(msg_id))
+        else:
+            logger.info("Message {0} received\nContent: {1}".format(msg_id, input))
+```
+
+If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. The following is an example.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --py logging_function.py \
+  --classname logging_function.LoggingFunction \
+  --log-topic logging-function-logs \
+  # Other function configs
+```
+
+All logs produced by `LoggingFunction` above can be accessed via the `logging-function-logs` topic.
+
+<!--Go-->
+The following Go Function example shows different log levels based on the function input.
+
+```go
+import (
+	"context"
+
+	"github.com/apache/pulsar/pulsar-function-go/log"
+	"github.com/apache/pulsar/pulsar-function-go/pf"
+)
+
+func loggerFunc(ctx context.Context, input []byte) {
+	if len(input) <= 100 {
+		log.Infof("This input has a length of: %d", len(input))
+	} else {
+		log.Warnf("This input is getting too long! It has {%d} characters", len(input))
+	}
+}
+
+func main() {
+	pf.Start(loggerFunc)
+}
+```
+
+When you use `logTopic`-related functionality in a Go function, import `github.com/apache/pulsar/pulsar-function-go/log`; you do not need a `getLogger()` call on the context object.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## Metrics
+Pulsar Functions can publish arbitrary metrics to the metrics interface, which can then be queried.
+
+> If a Pulsar Function uses the language-native interface for Java or Python, that function is not able to publish metrics and stats to Pulsar.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+You can record metrics on a per-key basis using the [`Context`](#context) object. For example, in the function below, a metric is recorded for the `hit-count` key every time a message is processed, and for the `elevens-count` key whenever the input equals 11.
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class MetricRecorderFunction implements Function<Integer, Void> {
+    @Override
+    public Void process(Integer input, Context context) {
+        // Record a value of 1 for the hit-count metric every time a message arrives
+        context.recordMetric("hit-count", 1);
+
+        // Records the metric only if the arriving number equals 11
+        if (input == 11) {
+            context.recordMetric("elevens-count", 1);
+        }
+
+        return null;
+    }
+}
+```
+
+> For instructions on reading and using metrics, see the [Monitoring](deploy-monitoring.md) guide.
+
+<!--Python-->
+You can record metrics on a per-key basis using the [`Context`](#context) object. For example, a metric is recorded for the `hit-count` key every time a message is processed, and for the `elevens-count` key whenever the input equals 11, as the following example shows.
+
+```python
+from pulsar import Function
+
+class MetricRecorderFunction(Function):
+    def process(self, input, context):
+        context.record_metric('hit-count', 1)
+
+        if input == 11:
+            context.record_metric('elevens-count', 1)
+```
+<!--Go-->
+Currently, the feature is not available in Go.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### Access metrics
+To access metrics created by Pulsar Functions, refer to [Monitoring](deploy-monitoring.md) in Pulsar. 
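+
+You can also inspect a function's metrics from the command line (a sketch, assuming a function named `word-filter` in the `public/default` namespace):
+
+```bash
+$ bin/pulsar-admin functions stats \
+  --tenant public \
+  --namespace default \
+  --name word-filter
+```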
+
+## Security
+
+If you want to enable security on Pulsar Functions, first you should enable security on [Functions Workers](functions-worker.md). For more details, refer to [Security settings](functions-worker.md#security-settings).
+
+Pulsar Functions support the following secrets providers:
+
+- `ClearTextSecretsProvider`
+- `EnvironmentBasedSecretsProvider`
+
+> Pulsar Functions use `ClearTextSecretsProvider` by default.
+
+At the same time, Pulsar Functions provide two interfaces, **SecretsProvider** and **SecretsProviderConfigurator**, that allow users to customize the secrets provider.
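+
+As a sketch, you can attach secrets to a function at creation time with the `--secrets` flag (here the mapped value is assumed to be interpreted as the clear-text secret by the default `ClearTextSecretsProvider`):
+
+```bash
+$ bin/pulsar-admin functions create \
+  # Other function configs
+  --secrets '{"api-key":"my-api-key-value"}'
+```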
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+You can retrieve a secret using the [`Context`](#context) object. The following is an example:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class GetSecretProviderFunction implements Function<String, Void> {
+
+    @Override
+    public Void process(String input, Context context) throws Exception {
+        Logger LOG = context.getLogger();
+        String secret = context.getSecret(input);
+
+        if (secret != null && !secret.isEmpty()) {
+            LOG.info("The secret is {}", secret);
+        } else {
+            LOG.warn("No secret found for key {}", input);
+        }
+
+        return null;
+    }
+}
+```
+
+<!--Python-->
+You can retrieve a secret using the [`Context`](#context) object. The following is an example:
+
+```python
+from pulsar import Function
+
+class GetSecretProviderFunction(Function):
+    def process(self, input, context):
+        logger = context.get_logger()
+        secret = context.get_secret(input)
+        if secret is None:
+            logger.warn('No secret found for key {0}'.format(input))
+        else:
+            logger.info("The secret is {0}".format(secret))
+```
+
+
+<!--Go-->
+Currently, the feature is not available in Go.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+## State storage
+Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. A Pulsar installation, including the local standalone installation, includes the deployment of BookKeeper bookies.
+
+Since the Pulsar 2.1.0 release, Pulsar has integrated with the Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f) to store the `State` of functions. For example, a `WordCount` function can store its `counters` state in the BookKeeper table service via the Pulsar Functions State API.
+
+States are key-value pairs, where the key is a string and the value is arbitrary binary data - counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function, and shared between instances of that function.
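+
+Because counters share the keyspace with binary values, a counter written with `incrCounter` can in principle also be read back as raw bytes (a sketch; the `my-counter` key is hypothetical, and `ByteBuffer` decodes big-endian by default):
+
+```java
+ByteBuffer raw = context.getState("my-counter"); // the counter's 64-bit big-endian encoding
+long counterValue = raw.getLong();               // decode as a big-endian long
+```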
+
+You can access states within Pulsar Functions using the `putState`, `getState`, `incrCounter`, `getCounter` and `deleteState` calls on the context object. You can also manage states using the [querystate](#query-state) and [putstate](#putstate) options to `pulsar-admin functions`.
+
+> Note  
+> State storage is not available in Go.
+
+### API
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+Currently, Pulsar Functions expose the following APIs for mutating and accessing state. These APIs are available in the [Context](functions-develop.md#context) object when you are using Java SDK functions.
+
+#### incrCounter
+
+```java
+    /**
+     * Increment the built-in distributed counter referred to by key.
+     * @param key The name of the key
+     * @param amount The amount to be incremented
+     */
+    void incrCounter(String key, long amount);
+```
+
+Applications can use `incrCounter` to change the counter of a given `key` by the given `amount`.
+
+#### getCounter
+
+```java
+    /**
+     * Retrieve the counter value for the key.
+     *
+     * @param key name of the key
+     * @return the amount of the counter value for this key
+     */
+    long getCounter(String key);
+```
+
+Applications can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`.
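+
+A minimal sketch of the two counter calls used together inside `process` (the `invocations` key is hypothetical):
+
+```java
+// Count every invocation, then read back the running total
+context.incrCounter("invocations", 1);
+long total = context.getCounter("invocations");
+```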
+
+Besides the `counter` API, Pulsar also exposes a general key/value API for functions to store arbitrary key/value state.
+
+#### putState
+
+```java
+    /**
+     * Update the state value for the key.
+     *
+     * @param key name of the key
+     * @param value state value of the key
+     */
+    void putState(String key, ByteBuffer value);
+```
+
+#### getState
+
+```java
+    /**
+     * Retrieve the state value for the key.
+     *
+     * @param key name of the key
+     * @return the state value for the key.
+     */
+    ByteBuffer getState(String key);
+```
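+
+A minimal sketch of storing and reading back a value inside `process`, where `input` is the function's input (the `last-input` key is hypothetical; values are raw bytes, so strings must be encoded explicitly):
+
+```java
+// Store the current input as UTF-8 bytes, then read it back
+context.putState("last-input", ByteBuffer.wrap(input.getBytes(StandardCharsets.UTF_8)));
+ByteBuffer lastInput = context.getState("last-input");
+```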
+
+#### deleteState
+
+```java
+    /**
+     * Delete the state value for the key.
+     *
+     * @param key   name of the key
+     */
+    void deleteState(String key);
+```
+
+Counters and binary values share the same keyspace, so this deletes either type.
+
+<!--Python-->
+Currently, Pulsar Functions expose the following APIs for mutating and accessing state. These APIs are available in the [Context](#context) object when you are using Python SDK functions.
+
+#### incr_counter
+
+```python
+  def incr_counter(self, key, amount):
+    """incr the counter of a given key in the managed state"""
+```
+
+Applications can use `incr_counter` to change the counter of a given `key` by the given `amount`.
+If the `key` does not exist, a new key is created.
+
+#### get_counter
+
+```python
+  def get_counter(self, key):
+    """get the counter of a given key in the managed state"""
+```
+
+Applications can use `get_counter` to retrieve the counter of a given `key` mutated by `incr_counter`.
+
+Besides the `counter` API, Pulsar also exposes a general key/value API for functions to store arbitrary key/value state.
+
+#### put_state
+
+```python
+  def put_state(self, key, value):
+    """update the value of a given key in the managed state"""
+```
+
+The key is a string, and the value is arbitrary binary data.
+
+#### get_state
+
+```python
+  def get_state(self, key):
+    """get the value of a given key in the managed state"""
+```
+
+#### del_counter
+
+```python
+  def del_counter(self, key):
+    """delete the counter of a given key in the managed state"""
+```
+
+Counters and binary values share the same keyspace, so this deletes either type.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### Query State
+
+A Pulsar Function can use the [State API](#api) to store state in Pulsar's state storage
+and retrieve it later. Additionally, Pulsar provides CLI commands for querying the state of a function.
+
+```shell
+$ bin/pulsar-admin functions querystate \
+    --tenant <tenant> \
+    --namespace <namespace> \
+    --name <function-name> \
+    --state-storage-url <bookkeeper-service-url> \
+    --key <state-key> \
+    [--watch]
+```
+
+If `--watch` is specified, the CLI will watch the value of the provided `state-key`.
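+
+Similarly, you can write a key/value pair into a function's state with the `putstate` command (a sketch; the key and value shown are placeholders):
+
+```shell
+$ bin/pulsar-admin functions putstate \
+    --tenant <tenant> \
+    --namespace <namespace> \
+    --name <function-name> \
+    --state '{"key": "<state-key>", "stringValue": "<value>"}'
+```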
+
+### Example
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+
+{@inject: github:`WordCountFunction`:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a good example
+demonstrating how an application can easily store `state` in Pulsar Functions.
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+import java.util.Arrays;
+
+public class WordCountFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) throws Exception {
+        Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1));
+        return null;
+    }
+}
+```
+
+The logic of this `WordCount` function is simple and straightforward:
+
+1. The function first splits the received `String` on the regex `\\.` (that is, on the `.` character).
+2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`).
+
+<!--Python-->
+
+```python
+from pulsar import Function
+
+class WordCount(Function):
+    def process(self, item, context):
+        for word in item.split():
+            context.incr_counter(word, 1)
+```
+
+The logic of this `WordCount` function is simple and straightforward:
+
+1. The function first splits the received string into words on whitespace.
+2. For each `word`, the function increments the corresponding `counter` by 1 (via `incr_counter(key, amount)`).
+
+<!--END_DOCUSAURUS_CODE_TABS-->
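+
+Once either version of the function is running, you can read a word's counter back with the `querystate` command described [above](#query-state) (a sketch, assuming the function is deployed as `word-count` in the `public/default` namespace):
+
+```shell
+$ bin/pulsar-admin functions querystate \
+    --tenant public \
+    --namespace default \
+    --name word-count \
+    --state-storage-url <bookkeeper-service-url> \
+    --key hello
+```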
diff --git a/site2/website/versioned_docs/version-2.5.2/functions-metrics.md b/site2/website/versioned_docs/version-2.5.2/functions-metrics.md
new file mode 100644
index 0000000..f4eddb4
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/functions-metrics.md
@@ -0,0 +1,7 @@
+---
+id: version-2.5.2-functions-metrics
+title: Metrics for Pulsar Functions
+sidebar_label: Metrics
+original_id: functions-metrics
+---
+
diff --git a/site2/website/versioned_docs/version-2.5.2/functions-overview.md b/site2/website/versioned_docs/version-2.5.2/functions-overview.md
new file mode 100644
index 0000000..99f82bc
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.5.2/functions-overview.md
@@ -0,0 +1,200 @@
+---
+id: version-2.5.2-functions-overview
+title: Pulsar Functions overview
+sidebar_label: Overview
+original_id: functions-overview
+---
+
+**Pulsar Functions** are lightweight compute processes that
+
+* consume messages from one or more Pulsar topics,
+* apply user-supplied processing logic to each message, and
+* publish the results of the computation to another topic.
+
+
+## Goals
+With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), or [Apache Flink](https://flink.apache.org/)). Pulsar Functions are the computing infrastructure of the Pulsar messaging system. This core goal is tied to a series of other goals:
+
+* Developer productivity (language-native vs Pulsar Functions SDK functions)
+* Easy troubleshooting
+* Operational simplicity (no need for an external processing system)
+
+## Inspirations
+Pulsar Functions are inspired by (and take cues from) several systems and paradigms:
+
+* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org)
+* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/)
+
+Pulsar Functions can be described as
+
+* [Lambda](https://aws.amazon.com/lambda/)-style functions that are
+* specifically designed to use Pulsar as a message bus.
+
+## Programming model
+Pulsar Functions provide a wide range of functionality, but the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function completes the following tasks:
+
+  * Apply some processing logic to the input and write output to:
+    * An **output topic** in Pulsar
+    * [Apache BookKeeper](#state-storage)
+  * Write logs to a **log topic** (potentially for debugging purposes)
+  * Increment a [counter](#word-count-example)
+
+![Pulsar Functions core programming model](assets/pulsar-functions-overview.png)
+
+You can use Pulsar Functions to set up the following processing chain:
+
+* A Python function listens on the `raw-sentences` topic, "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase), and then publishes the results to a `sanitized-sentences` topic.
+* A Java function listens on the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic.
+* Finally, a Python function listens on the `results` topic and writes the results to a MySQL table.
+
+
+### Word count example
+
+If you implement the classic word count example using Pulsar Functions, it looks something like this:
+
+![Pulsar Functions word count example](assets/pulsar-functions-word-count.png)
+
+You can write the function in Java with the [Pulsar Functions SDK for Java](functions-develop.md#available-apis) as follows.
+
+```java
+package org.example.functions;
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+import java.util.Arrays;
+
+public class WordCountFunction implements Function<String, Void> {
+    // This function is invoked every time a message is published to the input topic
+    @Override
+    public Void process(String input, Context context) throws Exception {
+        Arrays.asList(input.split(" ")).forEach(word -> {
+            String counterKey = word.toLowerCase();
+            context.incrCounter(counterKey, 1);
+        });
+        return null;
+    }
+}
+```
+
+Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar target/my-jar-with-dependencies.jar \
+  --classname org.example.functions.WordCountFunction \
+  --tenant public \
+  --namespace default \
+  --name word-count \
+  --inputs persistent://public/default/sentences \
+  --output persistent://public/default/count
+```
+
... 13787 lines suppressed ...