Posted to commits@pulsar.apache.org by ur...@apache.org on 2022/03/03 05:34:34 UTC

[pulsar-site] branch main updated: sync docs from pulsar repo

This is an automated email from the ASF dual-hosted git repository.

urfree pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/pulsar-site.git


The following commit(s) were added to refs/heads/main by this push:
     new ae9f7a1  sync docs from pulsar repo
ae9f7a1 is described below

commit ae9f7a1284e3f336b5ebc6dca75af6a37705cb4d
Author: LiLi <ur...@apache.org>
AuthorDate: Thu Mar 3 13:34:19 2022 +0800

    sync docs from pulsar repo
    
    Signed-off-by: LiLi <ur...@apache.org>
---
 site2/docs/admin-api-clusters.md                   |   4 +-
 site2/docs/admin-api-topics.md                     | 234 +++++++++++
 site2/docs/administration-isolation.md             |   8 +-
 site2/docs/administration-load-balance.md          |  10 +-
 site2/docs/administration-proxy.md                 |  30 +-
 site2/docs/administration-pulsar-manager.md        | 236 +++++------
 site2/docs/administration-zk-bk.md                 |  32 +-
 site2/docs/assets/OverloadShedder.png              | Bin 0 -> 44951 bytes
 site2/docs/assets/ThresholdShedder.png             | Bin 0 -> 56518 bytes
 site2/docs/assets/UniformLoadShedder.png           | Bin 0 -> 50894 bytes
 site2/docs/assets/cluster-level-failover-1.png     | Bin 0 -> 50187 bytes
 site2/docs/assets/cluster-level-failover-2.png     | Bin 0 -> 62053 bytes
 site2/docs/assets/cluster-level-failover-3.png     | Bin 0 -> 134614 bytes
 site2/docs/assets/cluster-level-failover-4.png     | Bin 0 -> 151813 bytes
 site2/docs/assets/cluster-level-failover-5.png     | Bin 0 -> 110855 bytes
 site2/docs/assets/tableview.png                    | Bin 0 -> 53207 bytes
 site2/docs/assets/zookeeper-batching.png           | Bin 0 -> 159664 bytes
 site2/docs/client-libraries-cpp.md                 | 252 ++++++-----
 site2/docs/client-libraries-dotnet.md              |   5 +-
 site2/docs/client-libraries-java.md                | 414 +++++++++++++++++-
 site2/docs/client-libraries-python.md              |  10 +-
 site2/docs/client-libraries-websocket.md           |   4 +-
 site2/docs/concepts-architecture-overview.md       |   8 +-
 site2/docs/concepts-messaging.md                   |  86 +++-
 site2/docs/cookbooks-deduplication.md              |   1 +
 site2/docs/deploy-bare-metal-multi-cluster.md      |  10 +-
 site2/docs/deploy-bare-metal.md                    |  12 +-
 site2/docs/deploy-monitoring.md                    |   2 +-
 site2/docs/developing-binary-protocol.md           |   7 +-
 site2/docs/functions-develop.md                    |  65 ++-
 site2/docs/functions-runtime.md                    |   2 +-
 site2/docs/functions-worker.md                     |   4 +-
 site2/docs/getting-started-clients.md              |  24 +-
 site2/docs/getting-started-docker.md               |   1 +
 site2/docs/getting-started-standalone.md           |   6 +-
 site2/docs/io-elasticsearch-sink.md                |   4 +-
 site2/docs/io-file-source.md                       |   5 +-
 site2/docs/io-mongo-sink.md                        |   3 +-
 site2/docs/reference-cli-tools.md                  |  24 +-
 site2/docs/reference-configuration.md              | 140 +++++--
 site2/docs/reference-metrics.md                    |  17 +-
 site2/docs/schema-evolution-compatibility.md       |   2 +-
 site2/docs/schema-manage.md                        | 119 +++++-
 site2/docs/security-tls-keystore.md                |  28 +-
 site2/docs/security-versioning-policy.md           |  67 +++
 site2/docs/sql-deployment-configurations.md        |  23 +-
 site2/docs/tiered-storage-azure.md                 |   1 -
 site2/docs/tiered-storage-filesystem.md            |   1 -
 site2/docs/txn-why.md                              |   2 +-
 site2/website-next/docs/admin-api-clusters.md      |   4 +-
 site2/website-next/docs/admin-api-topics.md        | 371 ++++++++++++++++
 .../website-next/docs/administration-isolation.md  |   5 +-
 .../docs/administration-load-balance.md            |  10 +-
 site2/website-next/docs/administration-proxy.md    |  36 +-
 .../docs/administration-pulsar-manager.md          | 247 +++++------
 site2/website-next/docs/administration-zk-bk.md    |  35 +-
 site2/website-next/docs/client-libraries-cpp.md    | 273 +++++++-----
 site2/website-next/docs/client-libraries-dotnet.md |   5 +-
 site2/website-next/docs/client-libraries-java.md   | 466 ++++++++++++++++++++-
 site2/website-next/docs/client-libraries-python.md |  10 +-
 .../docs/client-libraries-websocket.md             |   4 +-
 site2/website-next/docs/client-libraries.md        |  25 +-
 .../docs/concepts-architecture-overview.md         |   8 +-
 site2/website-next/docs/concepts-messaging.md      |  96 ++++-
 site2/website-next/docs/cookbooks-deduplication.md |   1 +
 .../docs/deploy-bare-metal-multi-cluster.md        |  10 +-
 site2/website-next/docs/deploy-bare-metal.md       |  12 +-
 site2/website-next/docs/deploy-monitoring.md       |   2 +-
 site2/website-next/docs/develop-binary-protocol.md |   6 +-
 site2/website-next/docs/functions-develop.md       |  80 +++-
 site2/website-next/docs/functions-runtime.md       |   2 +-
 site2/website-next/docs/functions-worker.md        |   4 +-
 site2/website-next/docs/io-elasticsearch-sink.md   |   4 +-
 site2/website-next/docs/io-file-source.md          |   5 +-
 site2/website-next/docs/io-mongo-sink.md           |   3 +-
 site2/website-next/docs/reference-cli-tools.md     |  24 +-
 site2/website-next/docs/reference-configuration.md | 126 ++++--
 site2/website-next/docs/reference-metrics.md       |  17 +-
 .../docs/schema-evolution-compatibility.md         |   2 +-
 site2/website-next/docs/schema-manage.md           | 168 +++++++-
 site2/website-next/docs/security-tls-keystore.md   |  30 +-
 .../docs/security-versioning-policy.md             |  67 +++
 .../docs/sql-deployment-configurations.md          |  23 +-
 site2/website-next/docs/standalone-docker.md       |   1 +
 site2/website-next/docs/standalone.md              |   6 +-
 .../website-next/docs/tiered-storage-filesystem.md |   1 -
 site2/website-next/docs/txn-why.md                 |   2 +-
 site2/website-next/static/assets/DDLC.png          | Bin 0 -> 194151 bytes
 .../website-next/static/assets/OverloadShedder.png | Bin 0 -> 44951 bytes
 .../static/assets/ThresholdShedder.png             | Bin 0 -> 56518 bytes
 .../static/assets/UniformLoadShedder.png           | Bin 0 -> 50894 bytes
 .../static/assets/cluster-level-failover-1.png     | Bin 0 -> 50187 bytes
 .../static/assets/cluster-level-failover-2.png     | Bin 0 -> 62053 bytes
 .../static/assets/cluster-level-failover-3.png     | Bin 0 -> 134614 bytes
 .../static/assets/cluster-level-failover-4.png     | Bin 0 -> 151813 bytes
 .../static/assets/cluster-level-failover-5.png     | Bin 0 -> 110855 bytes
 site2/website-next/static/assets/tableview.png     | Bin 0 -> 53207 bytes
 .../static/assets/zookeeper-batching.png           | Bin 0 -> 159664 bytes
 .../version-2.2.0/admin-api-clusters.md            |   4 +-
 .../version-2.2.0/administration-proxy.md          |  36 +-
 .../version-2.2.0/administration-zk-bk.md          |  35 +-
 .../version-2.2.0/client-libraries-java.md         | 466 ++++++++++++++++++++-
 .../concepts-architecture-overview.md              |   8 +-
 .../version-2.2.0/concepts-messaging.md            |  96 ++++-
 .../version-2.2.0/cookbooks-deduplication.md       |   1 +
 .../version-2.2.0/deploy-monitoring.md             |   2 +-
 .../version-2.2.0/reference-cli-tools.md           |  24 +-
 .../version-2.2.0/standalone-docker.md             |   1 +
 .../version-2.2.1/admin-api-clusters.md            |   4 +-
 .../version-2.2.1/administration-proxy.md          |  36 +-
 .../version-2.2.1/administration-zk-bk.md          |  35 +-
 .../version-2.2.1/client-libraries-cpp.md          | 273 +++++++-----
 .../version-2.2.1/client-libraries-python.md       |  10 +-
 .../version-2.2.1/client-libraries.md              |  25 +-
 .../concepts-architecture-overview.md              |   8 +-
 .../version-2.2.1/concepts-messaging.md            |  96 ++++-
 .../version-2.2.1/cookbooks-deduplication.md       |   1 +
 .../deploy-bare-metal-multi-cluster.md             |  10 +-
 .../version-2.2.1/deploy-monitoring.md             |   2 +-
 .../version-2.2.1/develop-binary-protocol.md       |   6 +-
 .../version-2.2.1/reference-cli-tools.md           |  24 +-
 .../version-2.2.1/sql-deployment-configurations.md |  23 +-
 .../version-2.3.0/admin-api-clusters.md            |   4 +-
 .../version-2.3.0/administration-zk-bk.md          |   3 +
 .../version-2.3.0/client-libraries-java.md         | 466 ++++++++++++++++++++-
 .../version-2.3.0/client-libraries-websocket.md    |   4 +-
 .../version-2.3.0/client-libraries.md              |  25 +-
 .../concepts-architecture-overview.md              |   8 +-
 .../version-2.3.0/cookbooks-deduplication.md       |   1 +
 .../deploy-bare-metal-multi-cluster.md             |  10 +-
 .../version-2.3.0/develop-binary-protocol.md       |   6 +-
 .../version-2.3.0/sql-deployment-configurations.md |  23 +-
 .../version-2.3.0/standalone-docker.md             |   1 +
 .../version-2.3.1/admin-api-clusters.md            |   4 +-
 .../version-2.3.1/administration-proxy.md          |  36 +-
 .../version-2.3.1/administration-zk-bk.md          |  35 +-
 .../version-2.3.1/client-libraries-websocket.md    |   4 +-
 .../version-2.3.1/client-libraries.md              |  25 +-
 .../concepts-architecture-overview.md              |   8 +-
 .../version-2.3.1/cookbooks-deduplication.md       |   1 +
 .../version-2.3.1/deploy-monitoring.md             |   2 +-
 .../version-2.3.1/develop-binary-protocol.md       |   6 +-
 .../version-2.3.1/sql-deployment-configurations.md |  23 +-
 .../version-2.3.2/admin-api-clusters.md            |   4 +-
 .../version-2.3.2/administration-load-balance.md   |  10 +-
 .../version-2.3.2/administration-proxy.md          |  36 +-
 .../version-2.3.2/administration-zk-bk.md          |  35 +-
 .../version-2.3.2/client-libraries-cpp.md          | 273 +++++++-----
 .../version-2.3.2/client-libraries-java.md         | 466 ++++++++++++++++++++-
 .../version-2.3.2/client-libraries-python.md       |  10 +-
 .../version-2.3.2/client-libraries-websocket.md    |   4 +-
 .../concepts-architecture-overview.md              |   8 +-
 .../version-2.3.2/cookbooks-deduplication.md       |   1 +
 .../deploy-bare-metal-multi-cluster.md             |  10 +-
 .../version-2.3.2/deploy-monitoring.md             |   2 +-
 .../version-2.3.2/develop-binary-protocol.md       |   6 +-
 .../version-2.3.2/sql-deployment-configurations.md |  23 +-
 .../version-2.4.0/admin-api-clusters.md            |   4 +-
 .../version-2.4.0/administration-load-balance.md   |  10 +-
 .../version-2.4.0/administration-proxy.md          |  36 +-
 .../version-2.4.0/administration-zk-bk.md          |   3 +
 .../version-2.4.0/client-libraries-cpp.md          | 273 +++++++-----
 .../version-2.4.0/client-libraries-websocket.md    |   4 +-
 .../version-2.4.0/client-libraries.md              |  25 +-
 .../concepts-architecture-overview.md              |   8 +-
 .../version-2.4.0/cookbooks-deduplication.md       |   1 +
 .../deploy-bare-metal-multi-cluster.md             |  10 +-
 .../version-2.4.0/deploy-monitoring.md             |   2 +-
 .../version-2.4.0/sql-deployment-configurations.md |  23 +-
 .../version-2.4.0/standalone-docker.md             |   1 +
 .../version-2.4.1/admin-api-clusters.md            |   4 +-
 .../version-2.4.1/administration-load-balance.md   |  10 +-
 .../version-2.4.1/administration-proxy.md          |  36 +-
 .../version-2.4.1/administration-zk-bk.md          |  35 +-
 .../version-2.4.1/client-libraries-cpp.md          | 273 +++++++-----
 .../version-2.4.1/client-libraries-websocket.md    |   4 +-
 .../version-2.4.1/client-libraries.md              |  25 +-
 .../concepts-architecture-overview.md              |   8 +-
 .../version-2.4.1/cookbooks-deduplication.md       |   1 +
 .../deploy-bare-metal-multi-cluster.md             |  10 +-
 .../version-2.4.1/deploy-monitoring.md             |   2 +-
 .../version-2.4.1/develop-binary-protocol.md       |   6 +-
 .../versioned_docs/version-2.4.1/io-debug.md       |  10 +-
 .../version-2.4.1/reference-cli-tools.md           |  24 +-
 .../version-2.4.1/sql-deployment-configurations.md |  23 +-
 .../version-2.4.1/standalone-docker.md             |   1 +
 .../version-2.4.2/admin-api-clusters.md            |   4 +-
 .../version-2.4.2/administration-load-balance.md   |  10 +-
 .../version-2.4.2/administration-proxy.md          |  36 +-
 .../version-2.4.2/administration-zk-bk.md          |  35 +-
 .../version-2.4.2/client-libraries-cpp.md          | 273 +++++++-----
 .../version-2.4.2/client-libraries-websocket.md    |   4 +-
 .../version-2.4.2/client-libraries.md              |  25 +-
 .../concepts-architecture-overview.md              |   8 +-
 .../version-2.4.2/cookbooks-deduplication.md       |   1 +
 .../deploy-bare-metal-multi-cluster.md             |  10 +-
 .../version-2.4.2/deploy-monitoring.md             |   2 +-
 .../version-2.4.2/develop-binary-protocol.md       |   6 +-
 .../versioned_docs/version-2.4.2/io-debug.md       |  10 +-
 .../version-2.4.2/reference-cli-tools.md           |  24 +-
 .../version-2.4.2/sql-deployment-configurations.md |  23 +-
 .../version-2.4.2/standalone-docker.md             |   1 +
 .../version-2.5.0/admin-api-clusters.md            |   4 +-
 .../version-2.5.0/administration-zk-bk.md          |   3 +
 .../version-2.5.0/client-libraries-websocket.md    |   4 +-
 .../concepts-architecture-overview.md              |   8 +-
 .../version-2.5.0/develop-binary-protocol.md       |   6 +-
 .../version-2.5.0/io-influxdb-sink.md              |   3 +-
 .../versioned_docs/version-2.5.0/io-mongo-sink.md  |   3 +-
 .../version-2.5.1/admin-api-clusters.md            |   4 +-
 .../version-2.5.1/administration-load-balance.md   |  10 +-
 .../version-2.5.1/administration-proxy.md          |  36 +-
 .../version-2.5.1/administration-pulsar-manager.md | 247 +++++------
 .../version-2.5.1/administration-zk-bk.md          |  35 +-
 .../version-2.5.1/client-libraries-websocket.md    |   4 +-
 .../version-2.5.1/cookbooks-deduplication.md       |   1 +
 .../deploy-bare-metal-multi-cluster.md             |  10 +-
 .../version-2.5.1/functions-worker.md              |   4 +-
 .../versioned_docs/version-2.5.1/io-debug.md       |  10 +-
 .../version-2.5.1/sql-deployment-configurations.md |  23 +-
 .../version-2.5.2/admin-api-clusters.md            |   4 +-
 .../version-2.5.2/administration-load-balance.md   |  10 +-
 .../version-2.5.2/administration-proxy.md          |  36 +-
 .../version-2.5.2/administration-pulsar-manager.md | 247 +++++------
 .../version-2.5.2/administration-zk-bk.md          |  35 +-
 .../version-2.5.2/client-libraries-websocket.md    |   4 +-
 .../concepts-architecture-overview.md              |   8 +-
 .../version-2.5.2/cookbooks-deduplication.md       |   1 +
 .../deploy-bare-metal-multi-cluster.md             |  10 +-
 .../version-2.5.2/functions-worker.md              |   4 +-
 .../versioned_docs/version-2.5.2/io-debug.md       |  10 +-
 .../schema-evolution-compatibility.md              |   2 +-
 .../versioned_docs/version-2.5.2/schema-manage.md  | 168 +++++++-
 .../version-2.5.2/sql-deployment-configurations.md |  23 +-
 .../version-2.5.2/standalone-docker.md             |   1 +
 .../versioned_docs/version-2.5.2/standalone.md     |   6 +-
 .../version-2.6.0/administration-zk-bk.md          |   9 +
 .../versioned_docs/version-2.6.0/io-debug.md       |  10 +-
 .../versioned_docs/version-2.6.0/io-mongo-sink.md  |   3 +-
 .../version-2.6.1/administration-zk-bk.md          |   9 +
 .../versioned_docs/version-2.6.1/io-debug.md       |  10 +-
 .../versioned_docs/version-2.6.1/io-mongo-sink.md  |   3 +-
 .../version-2.6.2/administration-zk-bk.md          |   9 +
 .../versioned_docs/version-2.6.2/io-debug.md       |  10 +-
 .../versioned_docs/version-2.6.2/io-mongo-sink.md  |   3 +-
 .../version-2.6.3/administration-zk-bk.md          |  10 +
 .../versioned_docs/version-2.6.3/io-debug.md       |  10 +-
 .../versioned_docs/version-2.6.3/io-mongo-sink.md  |   3 +-
 .../version-2.6.4/administration-zk-bk.md          |   9 +
 .../versioned_docs/version-2.6.4/io-debug.md       |  10 +-
 .../versioned_docs/version-2.6.4/io-mongo-sink.md  |   3 +-
 .../version-2.7.0/admin-api-topics.md              | 371 ++++++++++++++++
 .../version-2.7.0/administration-zk-bk.md          |   9 +
 .../version-2.7.0/cookbooks-deduplication.md       |   1 +
 .../versioned_docs/version-2.7.0/io-debug.md       |  10 +-
 .../versioned_docs/version-2.7.0/io-mongo-sink.md  |   3 +-
 .../version-2.7.0/reference-configuration.md       |  15 +-
 .../version-2.7.0/reference-metrics.md             |  14 +-
 .../version-2.7.0/sql-deployment-configurations.md |  10 +
 .../version-2.7.0/tiered-storage-filesystem.md     |   1 -
 .../version-2.7.1/admin-api-topics.md              | 371 ++++++++++++++++
 .../version-2.7.1/administration-zk-bk.md          |   9 +
 .../version-2.7.1/cookbooks-deduplication.md       |   1 +
 .../versioned_docs/version-2.7.1/io-debug.md       |  10 +-
 .../versioned_docs/version-2.7.1/io-mongo-sink.md  |   3 +-
 .../version-2.7.1/reference-configuration.md       |  15 +-
 .../version-2.7.1/reference-metrics.md             |  14 +-
 .../version-2.7.1/sql-deployment-configurations.md |  10 +
 .../version-2.7.1/tiered-storage-filesystem.md     |   1 -
 .../version-2.7.2/admin-api-topics.md              | 371 ++++++++++++++++
 .../version-2.7.2/administration-zk-bk.md          |   9 +
 .../version-2.7.2/cookbooks-deduplication.md       |   1 +
 .../versioned_docs/version-2.7.2/io-debug.md       |  10 +-
 .../versioned_docs/version-2.7.2/io-mongo-sink.md  |   3 +-
 .../version-2.7.2/reference-configuration.md       |  15 +-
 .../version-2.7.2/reference-metrics.md             |  14 +-
 .../version-2.7.2/sql-deployment-configurations.md |  10 +
 .../version-2.7.2/tiered-storage-filesystem.md     |   1 -
 .../version-2.7.3/admin-api-topics.md              | 371 ++++++++++++++++
 .../version-2.7.3/administration-zk-bk.md          |   9 +
 .../version-2.7.3/cookbooks-deduplication.md       |   1 +
 .../versioned_docs/version-2.7.3/io-debug.md       |  10 +-
 .../versioned_docs/version-2.7.3/io-mongo-sink.md  |   3 +-
 .../version-2.7.3/reference-configuration.md       |  15 +-
 .../version-2.7.3/reference-metrics.md             |  14 +-
 .../version-2.7.3/sql-deployment-configurations.md |  10 +
 .../version-2.7.3/tiered-storage-filesystem.md     |   1 -
 .../version-2.7.4/admin-api-topics.md              | 371 ++++++++++++++++
 .../version-2.7.4/administration-zk-bk.md          |   9 +
 .../version-2.7.4/cookbooks-deduplication.md       |   1 +
 .../versioned_docs/version-2.7.4/io-debug.md       |  10 +-
 .../versioned_docs/version-2.7.4/io-mongo-sink.md  |   3 +-
 .../version-2.7.4/reference-configuration.md       |  15 +-
 .../version-2.7.4/reference-metrics.md             |  14 +-
 .../version-2.7.4/sql-deployment-configurations.md |  10 +
 .../version-2.7.4/tiered-storage-filesystem.md     |   1 -
 .../version-2.8.0/admin-api-topics.md              | 371 ++++++++++++++++
 .../version-2.8.0/administration-zk-bk.md          |   9 +
 .../version-2.8.0/cookbooks-deduplication.md       |   1 +
 .../version-2.8.0/deploy-monitoring.md             |   2 +-
 .../versioned_docs/version-2.8.0/io-debug.md       |  10 +-
 .../versioned_docs/version-2.8.0/io-mongo-sink.md  |   3 +-
 .../version-2.8.0/reference-configuration.md       |  15 +-
 .../version-2.8.0/reference-metrics.md             |  14 +-
 .../version-2.8.0/sql-deployment-configurations.md |  10 +
 .../version-2.8.0/tiered-storage-filesystem.md     |   1 -
 .../version-2.8.1/admin-api-topics.md              | 372 ++++++++++++++++
 .../version-2.8.1/administration-zk-bk.md          |   9 +
 .../version-2.8.1/cookbooks-deduplication.md       |   1 +
 .../version-2.8.1/deploy-monitoring.md             |   2 +-
 .../versioned_docs/version-2.8.1/io-debug.md       |  10 +-
 .../versioned_docs/version-2.8.1/io-mongo-sink.md  |   3 +-
 .../version-2.8.1/reference-configuration.md       |  15 +-
 .../version-2.8.1/reference-metrics.md             |  14 +-
 .../version-2.8.1/sql-deployment-configurations.md |  10 +
 .../version-2.8.1/tiered-storage-filesystem.md     |   1 -
 .../version-2.8.2/admin-api-topics.md              | 371 ++++++++++++++++
 .../version-2.8.2/administration-zk-bk.md          |   9 +
 .../version-2.8.2/cookbooks-deduplication.md       |   1 +
 .../version-2.8.2/deploy-monitoring.md             |   2 +-
 .../versioned_docs/version-2.8.2/io-debug.md       |  10 +-
 .../versioned_docs/version-2.8.2/io-mongo-sink.md  |   3 +-
 .../version-2.8.2/reference-configuration.md       |  15 +-
 .../version-2.8.2/reference-metrics.md             |  14 +-
 .../version-2.8.2/sql-deployment-configurations.md |  10 +
 .../version-2.8.2/tiered-storage-filesystem.md     |   1 -
 .../version-2.9.0/admin-api-topics.md              | 371 ++++++++++++++++
 .../version-2.9.0/administration-zk-bk.md          |   9 +
 .../version-2.9.0/cookbooks-deduplication.md       |   1 +
 .../version-2.9.0/deploy-monitoring.md             |   2 +-
 .../versioned_docs/version-2.9.0/io-debug.md       |  10 +-
 .../version-2.9.0/io-influxdb-sink.md              |   6 +-
 .../versioned_docs/version-2.9.0/io-mongo-sink.md  |   3 +-
 .../version-2.9.0/reference-configuration.md       |  15 +-
 .../version-2.9.0/reference-metrics.md             |  14 +-
 .../version-2.9.0/sql-deployment-configurations.md |  10 +
 .../version-2.9.1/admin-api-topics.md              | 371 ++++++++++++++++
 .../version-2.9.1/administration-zk-bk.md          |   9 +
 .../version-2.9.1/cookbooks-deduplication.md       |   1 +
 .../version-2.9.1/deploy-monitoring.md             |   2 +-
 .../versioned_docs/version-2.9.1/io-debug.md       |  10 +-
 .../version-2.9.1/io-influxdb-sink.md              |   6 +-
 .../versioned_docs/version-2.9.1/io-mongo-sink.md  |   3 +-
 .../version-2.9.1/reference-configuration.md       |  15 +-
 .../version-2.9.1/reference-metrics.md             |  14 +-
 .../version-2.9.1/sql-deployment-configurations.md |  10 +
 .../administration-zk-bk.md                        |   3 +
 .../version-2.3.0/administration-zk-bk.md          |   3 +
 .../version-2.4.0/administration-zk-bk.md          |   3 +
 .../versioned_docs/version-2.4.1/io-debug.md       |  10 +-
 .../versioned_docs/version-2.4.2/io-debug.md       |  10 +-
 .../version-2.5.0/administration-zk-bk.md          |   3 +
 .../version-2.5.0/io-influxdb-sink.md              |   3 +-
 .../versioned_docs/version-2.5.0/io-mongo-sink.md  |   3 +-
 .../versioned_docs/version-2.5.1/io-debug.md       |  10 +-
 .../versioned_docs/version-2.5.2/io-debug.md       |  10 +-
 .../version-2.6.0/administration-zk-bk.md          |   6 +
 .../versioned_docs/version-2.6.0/io-debug.md       |  10 +-
 .../versioned_docs/version-2.6.0/io-mongo-sink.md  |   3 +-
 .../version-2.6.1/administration-zk-bk.md          |   6 +
 .../versioned_docs/version-2.6.1/io-debug.md       |  10 +-
 .../versioned_docs/version-2.6.1/io-mongo-sink.md  |   3 +-
 .../version-2.6.2/administration-zk-bk.md          |   6 +
 .../versioned_docs/version-2.6.2/io-debug.md       |  10 +-
 .../versioned_docs/version-2.6.2/io-mongo-sink.md  |   3 +-
 .../version-2.6.3/administration-zk-bk.md          |  40 +-
 .../versioned_docs/version-2.6.3/io-debug.md       |  10 +-
 .../versioned_docs/version-2.6.3/io-mongo-sink.md  |   3 +-
 .../version-2.6.4/administration-zk-bk.md          |   6 +
 .../versioned_docs/version-2.6.4/io-debug.md       |  10 +-
 .../versioned_docs/version-2.6.4/io-mongo-sink.md  |   3 +-
 .../version-2.7.0/admin-api-topics.md              | 234 +++++++++++
 .../version-2.7.0/administration-zk-bk.md          |   6 +
 .../version-2.7.0/cookbooks-deduplication.md       |   1 +
 .../versioned_docs/version-2.7.0/io-debug.md       |  10 +-
 .../versioned_docs/version-2.7.0/io-mongo-sink.md  |   3 +-
 .../version-2.7.0/reference-configuration.md       |  15 +-
 .../version-2.7.0/reference-metrics.md             |  14 +-
 .../version-2.7.0/sql-deployment-configurations.md |  10 +
 .../version-2.7.0/tiered-storage-filesystem.md     |   1 -
 .../version-2.7.1/admin-api-topics.md              | 234 +++++++++++
 .../version-2.7.1/administration-zk-bk.md          |   6 +
 .../version-2.7.1/cookbooks-deduplication.md       |   1 +
 .../versioned_docs/version-2.7.1/io-debug.md       |  10 +-
 .../versioned_docs/version-2.7.1/io-mongo-sink.md  |   3 +-
 .../version-2.7.1/reference-configuration.md       |  15 +-
 .../version-2.7.1/reference-metrics.md             |  14 +-
 .../version-2.7.1/sql-deployment-configurations.md |  10 +
 .../version-2.7.1/tiered-storage-filesystem.md     |   1 -
 .../version-2.7.2/admin-api-topics.md              | 234 +++++++++++
 .../version-2.7.2/administration-zk-bk.md          |   6 +
 .../version-2.7.2/cookbooks-deduplication.md       |   1 +
 .../versioned_docs/version-2.7.2/io-debug.md       |  10 +-
 .../versioned_docs/version-2.7.2/io-mongo-sink.md  |   3 +-
 .../version-2.7.2/reference-configuration.md       |  15 +-
 .../version-2.7.2/reference-metrics.md             |  14 +-
 .../version-2.7.2/sql-deployment-configurations.md |  10 +
 .../version-2.7.2/tiered-storage-filesystem.md     |   1 -
 .../version-2.7.3/admin-api-topics.md              | 234 +++++++++++
 .../version-2.7.3/administration-zk-bk.md          |   6 +
 .../version-2.7.3/cookbooks-deduplication.md       |   1 +
 .../versioned_docs/version-2.7.3/io-debug.md       |  10 +-
 .../versioned_docs/version-2.7.3/io-mongo-sink.md  |   3 +-
 .../version-2.7.3/reference-configuration.md       |  15 +-
 .../version-2.7.3/reference-metrics.md             |  14 +-
 .../version-2.7.3/sql-deployment-configurations.md |  11 +-
 .../version-2.7.3/tiered-storage-filesystem.md     |   1 -
 .../version-2.7.4/admin-api-topics.md              | 234 +++++++++++
 .../version-2.7.4/administration-zk-bk.md          |   6 +
 .../version-2.7.4/cookbooks-deduplication.md       |   1 +
 .../versioned_docs/version-2.7.4/io-debug.md       |  10 +-
 .../versioned_docs/version-2.7.4/io-mongo-sink.md  |   3 +-
 .../version-2.7.4/reference-configuration.md       |  15 +-
 .../version-2.7.4/reference-metrics.md             |  14 +-
 .../version-2.7.4/sql-deployment-configurations.md |  10 +
 .../version-2.7.4/tiered-storage-filesystem.md     |   1 -
 .../version-2.8.0/admin-api-topics.md              | 234 +++++++++++
 .../version-2.8.0/administration-zk-bk.md          |   6 +
 .../version-2.8.0/cookbooks-deduplication.md       |   1 +
 .../version-2.8.0/deploy-monitoring.md             |   2 +-
 .../versioned_docs/version-2.8.0/io-debug.md       |  10 +-
 .../versioned_docs/version-2.8.0/io-mongo-sink.md  |   3 +-
 .../version-2.8.0/reference-configuration.md       |  15 +-
 .../version-2.8.0/reference-metrics.md             |  14 +-
 .../version-2.8.0/sql-deployment-configurations.md |  10 +
 .../version-2.8.0/tiered-storage-filesystem.md     |   1 -
 .../version-2.8.1/admin-api-topics.md              | 235 +++++++++++
 .../version-2.8.1/administration-zk-bk.md          |   6 +
 .../version-2.8.1/cookbooks-deduplication.md       |   1 +
 .../version-2.8.1/deploy-monitoring.md             |   2 +-
 .../versioned_docs/version-2.8.1/io-debug.md       |  10 +-
 .../versioned_docs/version-2.8.1/io-mongo-sink.md  |   3 +-
 .../version-2.8.1/reference-configuration.md       |  15 +-
 .../version-2.8.1/reference-metrics.md             |  14 +-
 .../version-2.8.1/sql-deployment-configurations.md |  10 +
 .../version-2.8.1/tiered-storage-filesystem.md     |   1 -
 .../version-2.8.2/admin-api-topics.md              | 234 +++++++++++
 .../version-2.8.2/administration-zk-bk.md          |   6 +
 .../version-2.8.2/cookbooks-deduplication.md       |   1 +
 .../version-2.8.2/deploy-monitoring.md             |   2 +-
 .../versioned_docs/version-2.8.2/io-debug.md       |  10 +-
 .../versioned_docs/version-2.8.2/io-mongo-sink.md  |   3 +-
 .../version-2.8.2/reference-configuration.md       |  15 +-
 .../version-2.8.2/reference-metrics.md             |  14 +-
 .../version-2.8.2/sql-deployment-configurations.md |  11 +-
 .../version-2.8.2/tiered-storage-filesystem.md     |   1 -
 .../version-2.9.0/admin-api-topics.md              | 234 +++++++++++
 .../version-2.9.0/administration-zk-bk.md          |   6 +
 .../version-2.9.0/cookbooks-deduplication.md       |   1 +
 .../version-2.9.0/deploy-monitoring.md             |   2 +-
 .../versioned_docs/version-2.9.0/io-debug.md       |  10 +-
 .../version-2.9.0/io-influxdb-sink.md              |   6 +-
 .../versioned_docs/version-2.9.0/io-mongo-sink.md  |   3 +-
 .../version-2.9.0/reference-configuration.md       |  15 +-
 .../version-2.9.0/reference-metrics.md             |  14 +-
 .../version-2.9.0/sql-deployment-configurations.md |  10 +
 .../version-2.9.1/admin-api-topics.md              | 234 +++++++++++
 .../version-2.9.1/administration-zk-bk.md          |   6 +
 .../version-2.9.1/cookbooks-deduplication.md       |   1 +
 .../version-2.9.1/deploy-monitoring.md             |   2 +-
 .../versioned_docs/version-2.9.1/io-debug.md       |  10 +-
 .../version-2.9.1/io-influxdb-sink.md              |   6 +-
 .../versioned_docs/version-2.9.1/io-mongo-sink.md  |   3 +-
 .../version-2.9.1/reference-configuration.md       |  15 +-
 .../version-2.9.1/reference-metrics.md             |  14 +-
 .../version-2.9.1/sql-deployment-configurations.md |  10 +
 466 files changed, 14008 insertions(+), 2845 deletions(-)

diff --git a/site2/docs/admin-api-clusters.md b/site2/docs/admin-api-clusters.md
index 5f88af7..dd09f58 100644
--- a/site2/docs/admin-api-clusters.md
+++ b/site2/docs/admin-api-clusters.md
@@ -83,8 +83,8 @@ Here's an example cluster metadata initialization command:
 ```shell
 bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
diff --git a/site2/docs/admin-api-topics.md b/site2/docs/admin-api-topics.md
index 73bf850..60a8090 100644
--- a/site2/docs/admin-api-topics.md
+++ b/site2/docs/admin-api-topics.md
@@ -973,6 +973,240 @@ admin.topics().getBacklogSizeByMessageId(topic, messageId);
 
 <!--END_DOCUSAURUS_CODE_TABS-->
 
+
+### Configure deduplication snapshot interval
+
+#### Get deduplication snapshot interval
+
+To get the topic-level deduplication snapshot interval, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Pulsar-admin API-->
+
+```
+pulsar-admin topics get-deduplication-snapshot-interval options
+```
+
+<!--REST API-->
+
+```
+{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval?version=[[pulsar:version_number]]}
+```
+
+<!--Java API-->
+
+```java
+admin.topics().getDeduplicationSnapshotInterval(topic)
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Set deduplication snapshot interval
+
+To set the topic-level deduplication snapshot interval, use one of the following methods.
+
+> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Pulsar-admin API-->
+
+```
+pulsar-admin topics set-deduplication-snapshot-interval options 
+```
+
+<!--REST API-->
+
+```
+{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval?version=[[pulsar:version_number]]}
+```
+
+```json
+{
+  "interval": 1000
+}
+```
+
+<!--Java API-->
+
+```java
+admin.topics().setDeduplicationSnapshotInterval(topic, 1000)
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
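+
+For example, the following command sketch sets the snapshot interval of a topic to 1000 seconds. The `--interval` flag and the topic name are illustrative; check `pulsar-admin topics set-deduplication-snapshot-interval --help` for the exact options of your version.
+
+```
+pulsar-admin topics set-deduplication-snapshot-interval persistent://public/default/my-topic \
+  --interval 1000
+```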
+
+#### Remove deduplication snapshot interval
+
+To remove the topic-level deduplication snapshot interval, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Pulsar-admin API-->
+
+```
+pulsar-admin topics remove-deduplication-snapshot-interval options
+```
+
+<!--REST API-->
+
+```
+{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval?version=[[pulsar:version_number]]}
+```
+
+<!--Java API-->
+
+```java
+admin.topics().removeDeduplicationSnapshotInterval(topic)
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+
+### Configure inactive topic policies
+
+#### Get inactive topic policies
+
+To get the topic-level inactive topic policies, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Pulsar-admin API-->
+
+```
+pulsar-admin topics get-inactive-topic-policies options
+```
+
+<!--REST API-->
+
+```
+{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies?version=[[pulsar:version_number]]}
+```
+
+<!--Java API-->
+
+```java
+admin.topics().getInactiveTopicPolicies(topic)
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Set inactive topic policies
+
+To set the topic-level inactive topic policies, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Pulsar-admin API-->
+
+```
+pulsar-admin topics set-inactive-topic-policies options 
+```
+
+<!--REST API-->
+
+```
+{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies?version=[[pulsar:version_number]]}
+```
+
+<!--Java API-->
+
+```java
+admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies)
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
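+
+For example, the following sketch configures a topic to be deleted after it has no subscriptions and stays inactive for one hour. The flag names are illustrative; verify them with `pulsar-admin topics set-inactive-topic-policies --help`.
+
+```
+pulsar-admin topics set-inactive-topic-policies persistent://public/default/my-topic \
+  --enable-delete-while-inactive \
+  --max-inactive-duration 1h \
+  --delete-mode delete_when_no_subscriptions
+```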
+
+#### Remove inactive topic policies
+
+To remove the topic-level inactive topic policies, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Pulsar-admin API-->
+
+```
+pulsar-admin topics remove-inactive-topic-policies options
+```
+
+<!--REST API-->
+
+```
+{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies?version=[[pulsar:version_number]]}
+```
+
+<!--Java API-->
+
+```java
+admin.topics().removeInactiveTopicPolicies(topic)
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+
+### Configure offload policies
+
+#### Get offload policies
+
+To get the topic-level offload policies, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Pulsar-admin API-->
+
+```
+pulsar-admin topics get-offload-policies options
+```
+
+<!--REST API-->
+
+```
+{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies?version=[[pulsar:version_number]]}
+```
+
+<!--Java API-->
+
+```java
+admin.topics().getOffloadPolicies(topic)
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+#### Set offload policies
+
+To set the topic-level offload policies, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Pulsar-admin API-->
+
+```
+pulsar-admin topics set-offload-policies options 
+```
+
+<!--REST API-->
+
+```
+{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies?version=[[pulsar:version_number]]}
+```
+
+<!--Java API-->
+
+```java
+admin.topics().setOffloadPolicies(topic, offloadPolicies)
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
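+
+For example, a sketch of configuring offload to AWS S3 for a single topic. The driver name, region, bucket, and flags are illustrative; verify them with `pulsar-admin topics set-offload-policies --help`.
+
+```
+pulsar-admin topics set-offload-policies persistent://public/default/my-topic \
+  --driver aws-s3 \
+  --region us-west-2 \
+  --bucket my-offload-bucket
+```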
+
+#### Remove offload policies
+
+To remove the topic-level offload policies, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Pulsar-admin API-->
+
+```
+pulsar-admin topics remove-offload-policies options
+```
+
+<!--REST API-->
+
+```
+{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies?version=[[pulsar:version_number]]}
+```
+
+<!--Java API-->
+
+```java
+admin.topics().removeOffloadPolicies(topic)
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+
 ## Manage non-partitioned topics
 You can use Pulsar [admin API](admin-api-overview.md) to create, delete and check status of non-partitioned topics.
 
diff --git a/site2/docs/administration-isolation.md b/site2/docs/administration-isolation.md
index dac9d20..37d692a 100644
--- a/site2/docs/administration-isolation.md
+++ b/site2/docs/administration-isolation.md
@@ -73,9 +73,13 @@ bin/pulsar-admin namespaces set-bookie-affinity-group public/default \
 --primary-group group-bookie1
 ```
 
-> **Note**
+> **Notes**
 > 
-> Do not set a bookie rack name to slash (`/`) or an empty string (`""`) if you use Pulsar earlier than 2.7.5, 2.8.3, and 2.9.2. For the bookie rack name restrictions, see [pulsar-admin bookies set-bookie-rack](https://pulsar.apache.org/tools/pulsar-admin/).
+> - Do not set a bookie rack name to slash (`/`) or an empty string (`""`) if you use Pulsar earlier than 2.7.5, 2.8.3, or 2.9.2. In Pulsar 2.7.5, 2.8.3, 2.9.2, and later versions, such a rack name falls back to `/default-rack` or `/default-region/default-rack`.
+> - When `RackawareEnsemblePlacementPolicy` is enabled, the rack name is not allowed to contain slashes (`/`) except at the beginning and end of the rack name string. For example, a rack name like `/rack0` is okay, but `/rack/0` is not allowed.
+> - When `RegionawareEnsemblePlacementPolicy` is enabled, the rack name can only contain one slash (`/`) except at the beginning and end of the rack name string. For example, a rack name like `/region0/rack0` is okay, but `/region0rack0` and `/region0/rack/0` are not allowed.
+> 
+> For the bookie rack name restrictions, see [pulsar-admin bookies set-bookie-rack](https://pulsar.apache.org/tools/pulsar-admin/).
 
 <!--REST API-->
 
diff --git a/site2/docs/administration-load-balance.md b/site2/docs/administration-load-balance.md
index 9b02615..9035087 100644
--- a/site2/docs/administration-load-balance.md
+++ b/site2/docs/administration-load-balance.md
@@ -141,20 +141,26 @@ loadBalancerSheddingIntervalMinutes=1
 loadBalancerSheddingGracePeriodMinutes=30
 ```
 
-Pulsar supports three types of shedding strategies:
+Pulsar supports the following types of shedding strategies. From Pulsar 2.10, the **default** shedding strategy is `ThresholdShedder`.
 
 ##### ThresholdShedder
-This strategy tends to shed the bundles if any broker's usage is above the configured threshold. It does this by first computing the average resource usage per broker for the whole cluster. The resource usage for each broker is calculated using the following method: LocalBrokerData#getMaxResourceUsageWithWeight). The weights for each resource are configurable. Historical observations are included in the running average based on the broker's setting for loadBalancerHistoryResourcePercenta [...]
+This strategy tends to shed the bundles if any broker's usage is above the configured threshold. It does this by first computing the average resource usage per broker for the whole cluster. The resource usage for each broker is calculated using the following method: LocalBrokerData#getMaxResourceUsageWithWeight. The weights for each resource are configurable. Historical observations are included in the running average based on the broker's setting for loadBalancerHistoryResourcePercentag [...]
 `loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`
 
+![Shedding strategy - ThresholdShedder](assets/ThresholdShedder.png)
+
 ##### OverloadShedder
 This strategy will attempt to shed exactly one bundle on brokers which are overloaded, that is, whose maximum system resource usage exceeds loadBalancerBrokerOverloadedThresholdPercentage. To see which resources are considered when determining the maximum system resource. A bundle is recommended for unloading off that broker if and only if the following conditions hold: The broker has at least two bundles assigned and the broker has at least one bundle that has not been unloaded recently [...]
 `loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.OverloadShedder`
 
+![Shedding strategy - OverloadShedder](assets/OverloadShedder.png)
+
 ##### UniformLoadShedder
 This strategy tends to distribute load uniformly across all brokers. This strategy checks the load difference between the broker with the highest load and the broker with the lowest load. If the difference is higher than the configured thresholds `loadBalancerMsgRateDifferenceShedderThreshold` and `loadBalancerMsgThroughputMultiplierDifferenceShedderThreshold`, it finds the bundles that can be unloaded to distribute traffic evenly across all brokers. Configure the broker with the value below to use this strategy.
 `loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder`
 
+![Shedding strategy - UniformLoadShedder](assets/UniformLoadShedder.png)
+
 #### Broker overload thresholds
 
 The determinations of when a broker is overloaded is based on threshold of CPU, network and memory usage. Whenever either of those metrics reaches the threshold, the system triggers the shedding (if enabled).
diff --git a/site2/docs/administration-proxy.md b/site2/docs/administration-proxy.md
index 9d99c5f..4a1a1e8 100644
--- a/site2/docs/administration-proxy.md
+++ b/site2/docs/administration-proxy.md
@@ -8,19 +8,9 @@ Pulsar proxy is an optional gateway. Pulsar proxy is used when direct connection
 
 ## Configure the proxy
 
-Before using the proxy, you need to configure it with the brokers addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration. 
+Before using the proxy, you need to configure it with the broker addresses in the cluster. You can configure the broker URLs in the proxy configuration, or configure the proxy to connect directly using service discovery.
 
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-```properties
-zookeeperServers=zk-0,zk-1,zk-2
-configurationStoreServers=zk-0:2184,zk-remote:2184
-```
-
-> To use service discovery, you need to open the network ACLs, so the proxy can connects to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, it is not secure to use service discovery. Because if the network ACL is open, when someone compromises a proxy, they have full access to ZooKeeper. 
+> In a production environment, service discovery is not recommended.
 
 ### Use broker URLs
 
@@ -49,13 +39,27 @@ The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651
 
 Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
 
+### Use service discovery
+
+Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
+```properties
+metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
+configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
+```
+
+> To use service discovery, you need to open the network ACLs, so the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
+
+> However, using service discovery is not secure: if the network ACL is open and someone compromises a proxy, they have full access to ZooKeeper.
+
 ## Start the proxy
 
 To start the proxy:
 
 ```bash
 $ cd /path/to/pulsar/directory
-$ bin/pulsar proxy
+$ bin/pulsar proxy \
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 ```
 
 > You can run multiple instances of the Pulsar proxy in a cluster.
diff --git a/site2/docs/administration-pulsar-manager.md b/site2/docs/administration-pulsar-manager.md
index 888ad32..dfe0872 100644
--- a/site2/docs/administration-pulsar-manager.md
+++ b/site2/docs/administration-pulsar-manager.md
@@ -11,6 +11,7 @@ Pulsar Manager is a web-based GUI management and monitoring tool that helps admi
 
 ## Install
 
+### Quick Install
 The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.
 
 
@@ -21,89 +22,41 @@ docker run -it \
     -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
     apachepulsar/pulsar-manager:v0.2.0
 ```
-
+* Pulsar Manager consists of a front-end and a back-end; the front-end service port is `9527` and the back-end service port is `7750`.
 * `SPRING_CONFIGURATION_FILE`: Default configuration file for spring.
+* By default, Pulsar Manager uses the `herddb` database. HerdDB is a distributed SQL database implemented in Java; see [herddb.org](https://herddb.org/) for more information.
 
-### Set administrator account and password
-
- ```shell
-CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
-curl \
-    -H 'X-XSRF-TOKEN: $CSRF_TOKEN' \
-    -H 'Cookie: XSRF-TOKEN=$CSRF_TOKEN;' \
-    -H "Content-Type: application/json" \
-    -X PUT http://localhost:7750/pulsar-manager/users/superuser \
-    -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'
-```
+### Configure Database or JWT authentication
+#### Configure Database (optional)
 
-You can find the docker image in the [Docker Hub](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well:
+If you have a large amount of data, use a custom database; otherwise, display errors may occur. For example, topic information cannot be displayed when the number of topics exceeds 10000.
+The following is an example of PostgreSQL.
 
-```
-git clone https://github.com/apache/pulsar-manager
-cd pulsar-manager/front-end
-npm install --save
-npm run build:prod
-cd ..
-./gradlew build -x test
-cd ..
-docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager .
-```
-
-### Use custom databases
-
-If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL.   
-
-1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
-
-2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add PostgreSQL configuration.
-
-```
+1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
+2. Download and modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties), then add the PostgreSQL configuration.
+```properties
 spring.datasource.driver-class-name=org.postgresql.Driver
 spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
 spring.datasource.username=postgres
 spring.datasource.password=postgres
 ```
 
-3. Compile to generate a new executable jar package.
-
-```
-./gradlew build -x test
-```
-
-### Enable JWT authentication
-
-If you want to turn on JWT authentication, configure the following parameters:
-
-* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
-* `jwt.broker.token.mode`: multiple modes of generating token, including PUBLIC, PRIVATE, and SECRET.
-* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
-* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
-* `jwt.broker.secret.key`: configure this option if you use the SECRET mode.
-
-For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/).
-
-
-If you want to enable JWT authentication, use one of the following methods.
-
-
-* Method 1: use command-line tool
-
-```
-wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/apache-pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
-tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
-cd pulsar-manager
-tar -zxvf pulsar-manager.tar
-cd pulsar-manager
-cp -r ../dist ui
-./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key 
+3. Add a configuration mount and start with a docker image.
+```bash
+docker pull apachepulsar/pulsar-manager:v0.2.0
+docker run -it \
+    -p 9527:9527 -p 7750:7750 \
+    -v /your-path/application.properties:/pulsar-manager/pulsar-manager/application.properties \
+    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
+    apachepulsar/pulsar-manager:v0.2.0
 ```
-Firstly, [set the administrator account and password](#set-administrator-account-and-password)
 
-Secondly, log in to Pulsar manager through http://localhost:7750/ui/index.html.
+#### Enable JWT authentication (optional)
 
-* Method 2: configure the application.properties file
+If you want to turn on JWT authentication, configure the `application.properties` file.
 
-```
+```properties
 backend.jwt.token=token
 
 jwt.broker.token.mode=PRIVATE
@@ -114,69 +67,116 @@ or
 jwt.broker.token.mode=SECRET
 jwt.broker.secret.key=file:///path/broker-secret.key
 ```
+* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
+* `jwt.broker.token.mode`: multiple modes of generating the token, including PUBLIC, PRIVATE, and SECRET.
+* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
+* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
+* `jwt.broker.secret.key`: configure this option if you use the SECRET mode.
+
+For more information, see [Token Authentication Admin of Pulsar](https://pulsar.apache.org/docs/en/security-token-admin/).
 
-* Method 3: use Docker and enable token authentication.
+Use the following Docker command to mount the configuration file and key files.
 
-```
-export JWT_TOKEN="your-token"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh
+```bash
+docker pull apachepulsar/pulsar-manager:v0.2.0
+docker run -it \
+    -p 9527:9527 -p 7750:7750 \
+    -v /your-path/application.properties:/pulsar-manager/pulsar-manager/application.properties \
+    -v /your-path/private.key:/pulsar-manager/private.key \
+    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
+    apachepulsar/pulsar-manager:v0.2.0
 ```
 
-* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the  `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the level of log.
 
-* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key.
+### Set the administrator account and password
 
+```bash
+CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
+curl \
+   -H 'X-XSRF-TOKEN: $CSRF_TOKEN' \
+   -H 'Cookie: XSRF-TOKEN=$CSRF_TOKEN;' \
+   -H "Content-Type: application/json" \
+   -X PUT http://localhost:7750/pulsar-manager/users/superuser \
+   -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'
 ```
-export JWT_TOKEN="your-token"
-export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key"
-export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
+The request parameters in the curl command:
+```json
+{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}
 ```
+- `name` is the Pulsar Manager login username, currently `admin`.
+- `password` is the password of the current user of Pulsar Manager, currently `apachepulsar`. The password must be at least 6 characters.
 
-* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command.
-* `PRIVATE_KEY`: private key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command.
-* `PUBLIC_KEY`: public key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command.
-* `$PWD/secret`: the folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed locally
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the level of log.
 
-* Method 5: use Docker and turn on **token authentication** and **token management** by secret key.
 
+### Configure the environment
+1. Log in to the system. Visit http://localhost:9527 to log in. The default account is `admin/apachepulsar`.
 
-```
-export JWT_TOKEN="your-token"
-export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
-```
+2. Click the "New Environment" button to add an environment.
 
-* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command.
-* `SECRET_KEY`: secret key path mounted in container, generated by `bin/pulsar tokens create-secret-key` command.
-* `$PWD/secret`: the folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command are placed locally
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the level of log.
+3. Input the "Environment Name". The environment name is used for identifying an environment.
 
-* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README.md).
-* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
+4. Input the "Service URL". The Service URL is the admin service URL of your Pulsar cluster.
+
+
+## Other Installation
+### Bare-metal installation
+
+To deploy directly from the binary package, follow these steps.
+
+- Download and unzip the binary package, which is available on the [Pulsar Download](https://pulsar.apache.org/en/download/) page.
+
+  ```bash
+  wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
+  tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
+  ```
+- Extract the back-end service binary package and place the front-end resources in the back-end service directory.
 
-## Log in
+  ```bash
+  cd pulsar-manager
+  tar -zxvf pulsar-manager.tar
+  cd pulsar-manager
+  cp -r ../dist ui
+  ```
+- Modify the `application.properties` configuration as needed.
 
-[Set the administrator account and password](#set-administrator-account-and-password).
+  > If you don't want to modify the `application.properties` file, you can pass the configuration as startup parameters instead, for example `./bin/pulsar-manager --backend.jwt.token=token`. This is a capability of the Spring Boot framework.
+
+- Start Pulsar Manager.
+  ```bash
+  ./bin/pulsar-manager 
+  ```
+
+### Custom docker image installation
+
+You can find the Dockerfile in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory of the Pulsar Manager repository and build an image from the source code as well:
+
+  ```bash
+  git clone https://github.com/apache/pulsar-manager
+  cd pulsar-manager/front-end
+  npm install --save
+  npm run build:prod
+  cd ..
+  ./gradlew build -x test
+  cd ..
+  docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager .
+  ```
+
+## Configuration
+
+
+
+| application.properties              | System env on Docker Image | Description                                                  | Example                                           |
+| ----------------------------------- | -------------------------- | ------------------------------------------------------------ | ------------------------------------------------- |
+| backend.jwt.token                   | JWT_TOKEN                  | token for the superuser. You need to configure this parameter during cluster initialization. | `token`                                           |
+| jwt.broker.token.mode               | N/A                        | multiple modes of generating token, including PUBLIC, PRIVATE, and SECRET. | `PUBLIC` or `PRIVATE` or `SECRET`.                |
+| jwt.broker.public.key               | PUBLIC_KEY                 | configure this option if you use the PUBLIC mode.            | `file:///path/broker-public.key`                  |
+| jwt.broker.private.key              | PRIVATE_KEY                | configure this option if you use the PRIVATE mode.           | `file:///path/broker-private.key`                 |
+| jwt.broker.secret.key               | SECRET_KEY                 | configure this option if you use the SECRET mode.            | `file:///path/broker-secret.key`                  |
+| spring.datasource.driver-class-name | DRIVER_CLASS_NAME          | the driver class name of the database.                       | `org.postgresql.Driver`                           |
+| spring.datasource.url               | URL                        | the JDBC URL of your  database.                              | `jdbc:postgresql://127.0.0.1:5432/pulsar_manager` |
+| spring.datasource.username          | USERNAME                   | the username of database.                                    | `postgres`                                        |
+| spring.datasource.password          | PASSWORD                   | the password of database.                                    | `postgres`                                        |
+| N/A                                 | LOG_LEVEL                  | the level of log.                                            | DEBUG                                             |
+* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README.md).
+* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
 
-Visit http://localhost:9527 to log in.
diff --git a/site2/docs/administration-zk-bk.md b/site2/docs/administration-zk-bk.md
index 3965c32..eb38c16 100644
--- a/site2/docs/administration-zk-bk.md
+++ b/site2/docs/administration-zk-bk.md
@@ -133,27 +133,19 @@ $ bin/pulsar-daemon start configuration-store
 
 ### ZooKeeper configuration
 
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
+In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation:
+* The `conf/zookeeper.conf` file handles the configuration for local ZooKeeper.
+* The `conf/global-zookeeper.conf` file handles the configuration for configuration store.
+
+See [parameters](reference-configuration.md#zookeeper) for more details.
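+
+For example, a minimal `conf/zookeeper.conf` using the stock defaults looks like this:
+
+```properties
+# Basic time unit (ms) used to regulate heartbeats and timeouts
+tickTime=2000
+# Maximum ticks for followers to connect and sync with the leader
+initLimit=10
+# Maximum ticks for a follower to sync with other servers
+syncLimit=5
+# Where snapshots and the transaction log are stored
+dataDir=data/zookeeper
+# Port on which the server listens for client connections
+clientPort=2181
+```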
 
-#### Local ZooKeeper
+#### Configure batching operations
+Using batching operations reduces the remote procedure call (RPC) traffic between the ZooKeeper client and servers. It also reduces the number of write transactions, because each batching operation corresponds to a single ZooKeeper transaction containing multiple read and write operations.
 
-The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
+The following figure shows a basic benchmark of the number of batched read/write operations that can be issued to ZooKeeper in one second:
 
-|Name|Description|Default|
-|---|---|---|
-|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort|  The port on which the ZooKeeper server listens for connections. |2181|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
-|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables. Read this guide before enabling auto purge. |1|
-|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
+![Zookeeper batching benchmark](assets/zookeeper-batching.png)
 
-
-#### Configuration Store
-
-The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for configuration store. The table below shows the available parameters:
+To enable batching operations, set the [`metadataStoreBatchingEnabled`](reference-configuration.md#broker) parameter to `true` on the broker side.
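As a sketch, the broker-side settings look like the following. Only `metadataStoreBatchingEnabled` is named in this document; the three tuning parameters below are assumptions based on the broker configuration reference, so verify the names against your Pulsar version:

```properties
# conf/broker.conf
# Enable batching of ZooKeeper read/write operations
metadataStoreBatchingEnabled=true

# Assumed tuning knobs (check reference-configuration.md for your version):
# maximum delay before a batch is flushed, and limits on batch size
metadataStoreBatchingMaxDelayMillis=5
metadataStoreBatchingMaxOperations=1000
metadataStoreBatchingMaxSizeKb=128
```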
 
 
 ## BookKeeper
@@ -180,6 +172,9 @@ You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](referenc
 
 The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
 
+> **Note**
+> Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
+
 ```properties
 # Change to point to journal disk mount point
 journalDirectory=data/bookkeeper/journal
@@ -189,6 +184,9 @@ ledgerDirectories=data/bookkeeper/ledgers
 
 # Point to local ZK quorum
 zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+
+# It is recommended to set this parameter. Otherwise, BookKeeper cannot start normally in certain environments (for example, Huawei Cloud).
+advertisedAddress=
 ```
 
 To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
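For example, the relevant `conf/bookkeeper.conf` lines for a `/MY-PREFIX` root path are:

```properties
# Recommended: plain connection string plus an explicit ledger root path
zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
zkLedgersRootPath=/MY-PREFIX/ledgers

# Not recommended: chroot path embedded in the connection string
# zkServers=localhost:2181/MY-PREFIX
```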
diff --git a/site2/docs/assets/OverloadShedder.png b/site2/docs/assets/OverloadShedder.png
new file mode 100644
index 0000000..0419fa0
Binary files /dev/null and b/site2/docs/assets/OverloadShedder.png differ
diff --git a/site2/docs/assets/ThresholdShedder.png b/site2/docs/assets/ThresholdShedder.png
new file mode 100644
index 0000000..787ac82
Binary files /dev/null and b/site2/docs/assets/ThresholdShedder.png differ
diff --git a/site2/docs/assets/UniformLoadShedder.png b/site2/docs/assets/UniformLoadShedder.png
new file mode 100644
index 0000000..88e2e47
Binary files /dev/null and b/site2/docs/assets/UniformLoadShedder.png differ
diff --git a/site2/docs/assets/cluster-level-failover-1.png b/site2/docs/assets/cluster-level-failover-1.png
new file mode 100644
index 0000000..a01a722
Binary files /dev/null and b/site2/docs/assets/cluster-level-failover-1.png differ
diff --git a/site2/docs/assets/cluster-level-failover-2.png b/site2/docs/assets/cluster-level-failover-2.png
new file mode 100644
index 0000000..36cce4f
Binary files /dev/null and b/site2/docs/assets/cluster-level-failover-2.png differ
diff --git a/site2/docs/assets/cluster-level-failover-3.png b/site2/docs/assets/cluster-level-failover-3.png
new file mode 100644
index 0000000..b17cd65
Binary files /dev/null and b/site2/docs/assets/cluster-level-failover-3.png differ
diff --git a/site2/docs/assets/cluster-level-failover-4.png b/site2/docs/assets/cluster-level-failover-4.png
new file mode 100644
index 0000000..e2e29a6
Binary files /dev/null and b/site2/docs/assets/cluster-level-failover-4.png differ
diff --git a/site2/docs/assets/cluster-level-failover-5.png b/site2/docs/assets/cluster-level-failover-5.png
new file mode 100644
index 0000000..17cc70c
Binary files /dev/null and b/site2/docs/assets/cluster-level-failover-5.png differ
diff --git a/site2/docs/assets/tableview.png b/site2/docs/assets/tableview.png
new file mode 100644
index 0000000..4e5203f
Binary files /dev/null and b/site2/docs/assets/tableview.png differ
diff --git a/site2/docs/assets/zookeeper-batching.png b/site2/docs/assets/zookeeper-batching.png
new file mode 100644
index 0000000..4bd461e
Binary files /dev/null and b/site2/docs/assets/zookeeper-batching.png differ
diff --git a/site2/docs/client-libraries-cpp.md b/site2/docs/client-libraries-cpp.md
index e9f81fa..788453e 100644
--- a/site2/docs/client-libraries-cpp.md
+++ b/site2/docs/client-libraries-cpp.md
@@ -14,7 +14,15 @@ Pulsar C++ client is supported on **Linux** ,**MacOS** and **Windows** platforms
 
 [Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp).
 
-## System requirements
+
+## Linux
+
+> **Note**   
+> You can choose one of the following installation methods based on your needs: compiling from source, installing the RPM package, or installing the Debian package.
+
+### Compilation 
+
+#### System requirements
 
 You need to install the following components before using the C++ client:
 
@@ -24,10 +32,6 @@ You need to install the following components before using the C++ client:
 * [libcurl](https://curl.se/libcurl/)
 * [Google Test](https://github.com/google/googletest)
 
-## Linux
-
-### Compilation 
-
 1. Clone the Pulsar repository.
 
 ```shell
@@ -125,11 +129,22 @@ The `libpulsarwithdeps.a` does not include library openssl related libraries `li
 $ rpm -ivh apache-pulsar-client*.rpm
 ```
 
-After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory.
+After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory. For example:
+
+```bash
+lrwxrwxrwx 1 root root 18 Dec 30 22:21 libpulsar.so -> libpulsar.so.2.9.1
+lrwxrwxrwx 1 root root 23 Dec 30 22:21 libpulsarnossl.so -> libpulsarnossl.so.2.9.1
+```
 
 > **Note**  
+> If you get the error `libpulsar.so: cannot open shared object file: No such file or directory` when starting a Pulsar client, you may need to run `ldconfig` first.
 
+2. Install GCC and g++ using the following commands. Otherwise, errors occur when installing Node.js.
+
+```bash
+$ sudo yum -y install gcc automake autoconf libtool make
+$ sudo yum -y install gcc-c++
+```
+
 ### Install Debian
 
 1. Download a Debian package from the links in the table. 
@@ -290,104 +305,6 @@ If you use TLS authentication, you need to add `ssl`, and the default port is `6
 pulsar+ssl://pulsar.us-west.example.com:6651
 ```
 
-## Create a consumer
-
-To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
-- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
-- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.
-
-### Blocking example
-
-The benefit of this approach is that it is the simplest code. Simply keeps calling `receive(msg)` which blocks until a message is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-#include <pulsar/Client.h>
-
-using namespace pulsar;
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    Message msg;
-    int ctr = 0;
-    // consume 100 messages
-    while (ctr < 100) {
-        consumer.receive(msg);
-        std::cout << "Received: " << msg
-            << "  with payload '" << msg.getDataAsString() << "'" << std::endl;
-
-        consumer.acknowledge(msg);
-        ctr++;
-    }
-
-    std::cout << "Finished consuming synchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-```
-
-### Consumer with a message listener
-
-You can avoid  running a loop with blocking calls with an event based style by using a message listener which is invoked for each message that is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-#include <pulsar/Client.h>
-#include <atomic>
-#include <thread>
-
-using namespace pulsar;
-
-std::atomic<uint32_t> messagesReceived;
-
-void handleAckComplete(Result res) {
-    std::cout << "Ack res: " << res << std::endl;
-}
-
-void listener(Consumer consumer, const Message& msg) {
-    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
-    messagesReceived++;
-    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
-}
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setMessageListener(listener);
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    // wait for 100 messages to be consumed
-    while (messagesReceived < 100) {
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-    }
-
-    std::cout << "Finished consuming asynchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-```
-
 ## Create a producer
 
 To use Pulsar as a producer, you need to create a producer on the C++ client. There are two main ways of using a producer:
@@ -515,6 +432,133 @@ producerConf.setPartitionsRoutingMode(ProducerConfiguration::UseSinglePartition)
 producerConf.setLazyStartPartitionedProducers(true);
 ```
 
+### Enable chunking
+
+Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side. 
+
+The message chunking feature is OFF by default. The following example shows how to enable message chunking when creating a producer.
+
+```c++
+ProducerConfiguration conf;
+conf.setBatchingEnabled(false);
+conf.setChunkingEnabled(true);
+Producer producer;
+client.createProducer("my-topic", conf, producer);
+```
+> **Note:** To enable chunking, you must also disable batching (`setBatchingEnabled(false)`).
+
+## Create a consumer
+
+To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
+- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
+- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.
+
+### Blocking example
+
+The benefit of this approach is that it is the simplest code: the consumer simply keeps calling `receive(msg)`, which blocks until a message is received.
+
+This example starts a subscription at the earliest offset and consumes 100 messages.
+
+```c++
+#include <pulsar/Client.h>
+
+using namespace pulsar;
+
+int main() {
+    Client client("pulsar://localhost:6650");
+
+    Consumer consumer;
+    ConsumerConfiguration config;
+    config.setSubscriptionInitialPosition(InitialPositionEarliest);
+    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
+    if (result != ResultOk) {
+        std::cout << "Failed to subscribe: " << result << std::endl;
+        return -1;
+    }
+
+    Message msg;
+    int ctr = 0;
+    // consume 100 messages
+    while (ctr < 100) {
+        consumer.receive(msg);
+        std::cout << "Received: " << msg
+            << "  with payload '" << msg.getDataAsString() << "'" << std::endl;
+
+        consumer.acknowledge(msg);
+        ctr++;
+    }
+
+    std::cout << "Finished consuming synchronously!" << std::endl;
+
+    client.close();
+    return 0;
+}
+```
+
+### Consumer with a message listener
+
+You can avoid running a loop with blocking calls by using an event-based style: a message listener is invoked for each message that is received.
+
+This example starts a subscription at the earliest offset and consumes 100 messages.
+
+```c++
+#include <pulsar/Client.h>
+#include <atomic>
+#include <thread>
+
+using namespace pulsar;
+
+std::atomic<uint32_t> messagesReceived;
+
+void handleAckComplete(Result res) {
+    std::cout << "Ack res: " << res << std::endl;
+}
+
+void listener(Consumer consumer, const Message& msg) {
+    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
+    messagesReceived++;
+    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
+}
+
+int main() {
+    Client client("pulsar://localhost:6650");
+
+    Consumer consumer;
+    ConsumerConfiguration config;
+    config.setMessageListener(listener);
+    config.setSubscriptionInitialPosition(InitialPositionEarliest);
+    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
+    if (result != ResultOk) {
+        std::cout << "Failed to subscribe: " << result << std::endl;
+        return -1;
+    }
+
+    // wait for 100 messages to be consumed
+    while (messagesReceived < 100) {
+        std::this_thread::sleep_for(std::chrono::milliseconds(100));
+    }
+
+    std::cout << "Finished consuming asynchronously!" << std::endl;
+
+    client.close();
+    return 0;
+}
+```
+
+### Configure chunking
+
+You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `setMaxPendingChunkedMessage` and `setAutoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. 
+
+The following is an example of how to configure message chunking.
+
+```c++
+ConsumerConfiguration conf;
+conf.setAutoAckOldestChunkedMessageOnQueueFull(true);
+conf.setMaxPendingChunkedMessage(100);
+Consumer consumer;
+client.subscribe("my-topic", "my-sub", conf, consumer);
+```
+
 ## Enable authentication in connection URLs
 If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example.
 
diff --git a/site2/docs/client-libraries-dotnet.md b/site2/docs/client-libraries-dotnet.md
index 5d91d9a..5f19a9f 100644
--- a/site2/docs/client-libraries-dotnet.md
+++ b/site2/docs/client-libraries-dotnet.md
@@ -240,10 +240,7 @@ Messages can be acknowledged individually or cumulatively. For details about mes
 - Acknowledge messages individually.
 
     ```c#
-    await foreach (var message in consumer.Messages())
-    {
-        Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-    }
+    await consumer.Acknowledge(message);
     ```
 
 - Acknowledge messages cumulatively.
diff --git a/site2/docs/client-libraries-java.md b/site2/docs/client-libraries-java.md
index 3ef8915..59fc715 100644
--- a/site2/docs/client-libraries-java.md
+++ b/site2/docs/client-libraries-java.md
@@ -4,9 +4,9 @@ title: Pulsar Java client
 sidebar_label: Java
 ---
 
-You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), and [readers](#reader) of messages and to perform [administrative tasks](admin-api-overview.md). The current Java client version is **{{pulsar:version}}**.
+You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), [reader](#reader), and [TableView](#tableview) of messages and to perform [administrative tasks](admin-api-overview.md). The current Java client version is **{{pulsar:version}}**.
 
-All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.
+All the methods in the [producer](#producer), [consumer](#consumer), [reader](#reader), and [TableView](#tableview) of a Java client are thread-safe.
 
 Javadoc for the Pulsar client is divided into two domains by package as follows.
 
@@ -148,6 +148,295 @@ You can set the client memory allocator configurations through Java properties.<
 -Dpulsar.allocator.out_of_memory_policy=ThrowException
 ```
 
+### Cluster-level failover
+
+This chapter describes the concept, benefits, use cases, constraints, usage, and working principles of cluster-level failover. It contains the following sections:
+
+- [What is cluster-level failover?](#what-is-cluster-level-failover)
+
+  * [Concept of cluster-level failover](#concept-of-cluster-level-failover)
+   
+  * [Why use cluster-level failover?](#why-use-cluster-level-failover)
+
+  * [When to use cluster-level failover?](#when-to-use-cluster-level-failover)
+
+  * [When cluster-level failover is triggered?](#when-cluster-level-failover-is-triggered)
+
+  * [Why does cluster-level failover fail?](#why-does-cluster-level-failover-fail)
+
+  * [What are the limitations of cluster-level failover?](#what-are-the-limitations-of-cluster-level-failover)
+
+  * [What are the relationships between cluster-level failover and geo-replication?](#what-are-the-relationships-between-cluster-level-failover-and-geo-replication)
+  
+- [How to use cluster-level failover?](#how-to-use-cluster-level-failover)
+
+- [How does cluster-level failover work?](#how-does-cluster-level-failover-work)
+  
+> #### What is cluster-level failover?
+
+This chapter helps you better understand the concept of cluster-level failover.
+
+> ##### Concept of cluster-level failover
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Automatic cluster-level failover-->
+
+Automatic cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters automatically and seamlessly when a failover event is detected, based on a detection policy configured by **users**.
+
+![Automatic cluster-level failover](assets/cluster-level-failover-1.png)
+
+<!--Controlled cluster-level failover-->
+
+Controlled cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters. The switchover is triggered manually by **administrators**.
+
+![Controlled cluster-level failover](assets/cluster-level-failover-2.png)
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+Once the primary cluster functions again, Pulsar clients can switch back to it. Most of the time, users won't even notice; they can keep using applications and services without interruptions or timeouts.
+
+> ##### Why use cluster-level failover?
+
+The cluster-level failover provides fault tolerance, continuous availability, and high availability together. It brings a number of benefits, including but not limited to:
+
+* Reduced cost: services can be switched and recovered automatically with no data loss.
+
+* Simplified management: businesses can operate on an “always-on” basis since no immediate user intervention is required.
+
+* Improved stability and robustness: it ensures continuous performance and minimizes service downtime. 
+
+> ##### When to use cluster-level failover?
+
+The cluster-level failover protects your environment in a number of ways, including but not limited to:
+
+* Disaster recovery: cluster-level failover can automatically and seamlessly transfer the production workload on a primary cluster to one or several backup clusters, which ensures minimum data loss and reduced recovery time.
+
+* Planned migration: if you want to migrate production workloads from an old cluster to a new cluster, you can improve the migration efficiency with cluster-level failover. For example, you can test whether the data migration goes smoothly in case of a failover event, identify possible issues and risks before the migration.
+
+> ##### When cluster-level failover is triggered?
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Automatic cluster-level failover-->
+
+Automatic cluster-level failover is triggered when Pulsar clients cannot connect to the primary cluster for a prolonged period of time. This can be caused by any number of reasons including, but not limited to: 
+
+* Network failure: internet connection is lost.
+
+* Power failure: shutdown time of a primary cluster exceeds time limits.
+
+* Service error: errors occur on a primary cluster (for example, the primary cluster does not function because of time limits).
+
+* Crashed storage space: the primary cluster does not have enough storage space, but the corresponding storage space on the backup server functions normally. 
+
+<!--Controlled cluster-level failover-->
+
+Controlled cluster-level failover is triggered when administrators set the switchover manually. 
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+> ##### Why does cluster-level failover fail?
+
+The cluster-level failover does not succeed if the backup cluster is unreachable by active Pulsar clients. This can happen for many reasons, including but not limited to:
+
+* Power failure: the backup cluster is shut down or does not function normally. 
+
+* Crashed storage space: primary and backup clusters do not have enough storage space. 
+
+* If the failover is initiated, but no cluster can assume the role of an available cluster due to errors, and the primary cluster is not able to provide service normally.
+
+* If you manually initiate a switchover, but services cannot be switched to the backup cluster server, then the system will attempt to switch services back to the primary cluster.
+
+* Authentication or authorization fails between 1) the primary and backup clusters, or 2) two backup clusters.
+
+> ##### What are the limitations of cluster-level failover?
+
+Currently, cluster-level failover can perform probes to prevent data loss, but it cannot check the status of backup clusters. If backup clusters are not healthy, you cannot produce or consume data.
+
+> #### What are the relationships between cluster-level failover and geo-replication?
+
+The cluster-level failover is an extension of [geo-replication](concepts-replication.md) to improve stability and robustness. The cluster-level failover depends on geo-replication, and they have some **differences** as below.
+
+Influence |Cluster-level failover|Geo-replication
+|---|---|---
+Do administrators have heavy workloads?|No or maybe.<br /><br />- For the **automatic** cluster-level failover, the cluster switchover is triggered automatically based on the policies set by **users**.<br /><br />- For the **controlled** cluster-level failover, the switchover is triggered manually by **administrators**.|Yes.<br /><br />If a cluster fails, immediate administration intervention is required.|
+Result in data loss?|No.<br /><br />For both **automatic** and **controlled** cluster-level failover, if the failed primary cluster doesn't replicate messages immediately to the backup cluster, the Pulsar client can't consume the non-replicated messages. After the primary cluster is restored and the Pulsar client switches back, the non-replicated data can still be consumed by the Pulsar client. Consequently, the data is not lost.<br /><br />- For the **automatic** cluster-level failover, [...]
+Result in Pulsar client failure? |No or maybe.<br /><br />- For **automatic** cluster-level failover, services can be switched and recovered automatically and the Pulsar client does not fail. <br /><br />- For **controlled** cluster-level failover, services can be switched and recovered manually, but the Pulsar client fails before administrators can take action. |Same as above.
+
+> #### How to use cluster-level failover
+
+This section guides you through every step on how to configure cluster-level failover.
+
+**Tip**
+
+- You should configure cluster-level failover only when the cluster contains sufficient resources to handle all possible consequences. Workload intensity on the backup cluster may increase significantly.
+
+- Connect clusters to an uninterruptible power supply (UPS) unit to reduce the risk of unexpected power loss.
+
+**Requirements**
+
+* Pulsar client 2.10 or later versions.
+
+* For backup clusters:
+
+    * The number of BookKeeper nodes should be equal to or greater than the ensemble quorum.
+
+    * The number of ZooKeeper nodes should be equal to or greater than 3.
+
+* **Turn on geo-replication** between the primary cluster and any dependent cluster (primary to backup or backup to backup) to prevent data loss.
+
+* Set `replicateSubscriptionState` to `true` when creating consumers.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Automatic cluster-level failover-->
+
+This is an example of how to construct a Java Pulsar client to use automatic cluster-level failover. The switchover is triggered automatically.
+
+```java
+private PulsarClient getAutoFailoverClient() throws PulsarClientException {
+    ServiceUrlProvider failover = AutoClusterFailover.builder()
+            .primary("pulsar://localhost:6650")
+            .secondary(Arrays.asList("pulsar://other1:6650", "pulsar://other2:6650"))
+            .failoverDelay(30, TimeUnit.SECONDS)
+            .switchBackDelay(60, TimeUnit.SECONDS)
+            .checkInterval(1000, TimeUnit.MILLISECONDS)
+            .secondaryTlsTrustCertsFilePath("/path/to/ca.cert.pem")
+            .secondaryAuthentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
+                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
+            .build();
+
+    PulsarClient pulsarClient = PulsarClient.builder()
+            .build();
+
+    failover.initialize(pulsarClient);
+    return pulsarClient;
+}
+```
+
+Configure the following parameters:
+
+Parameter|Default value|Required?|Description
+|---|---|---|---
+`primary`|N/A|Yes|Service URL of the primary cluster.
+`secondary`|N/A|Yes|Service URL(s) of one or several backup clusters.<br /><br/>You can specify several backup clusters using a comma-separated list.<br /><br/> Note that:<br />- The backup cluster is chosen in the sequence shown in the list. <br />- If all backup clusters are available, the Pulsar client chooses the first backup cluster.
+`failoverDelay`|N/A|Yes|The delay before the Pulsar client switches from the primary cluster to the backup cluster.<br /><br/>Automatic failover is controlled by a probe task: <br />1) The probe task first checks the health status of the primary cluster. <br /> 2) If the probe task finds that the continuous failure time of the primary cluster exceeds `failoverDelay`, it switches the Pulsar client to the backup cluster. 
+`switchBackDelay`|N/A|Yes|The delay before the Pulsar client switches from the backup cluster to the primary cluster.<br /><br/>Automatic failover switchover is controlled by a probe task: <br /> 1) After the Pulsar client switches from the primary cluster to the backup cluster, the probe task continues to check the status of the primary cluster. <br /> 2) If the primary cluster functions well and continuously remains active longer than `switchBackDelay`, the Pulsar client switches back  [...]
+`checkInterval`|30s|No|Frequency of performing a probe task (in seconds).
+`secondaryTlsTrustCertsFilePath`|N/A|No|Path to the trusted TLS certificate file of the backup cluster.
+`secondaryAuthentication`|N/A|No|Authentication of the backup cluster.
+
+<!--Controlled cluster-level failover-->
+
+This is an example of how to construct a Java Pulsar client to use controlled cluster-level failover. The switchover is triggered by administrators manually.
+
+**Note**: you can have one or several backup clusters but can only specify one.
+
+```java
+public PulsarClient getControlledFailoverClient() throws IOException {
+    Map<String, String> header = new HashMap<>();
+    header.put("service_user_id", "my-user");
+    header.put("service_password", "tiger");
+    header.put("clusterA", "tokenA");
+    header.put("clusterB", "tokenB");
+
+    ServiceUrlProvider provider = ControlledClusterFailover.builder()
+            .defaultServiceUrl("pulsar://localhost:6650")
+            .checkInterval(1, TimeUnit.MINUTES)
+            .urlProvider("http://localhost:8080/test")
+            .urlProviderHeader(header)
+            .build();
+
+    PulsarClient pulsarClient = PulsarClient.builder()
+            .build();
+
+    provider.initialize(pulsarClient);
+    return pulsarClient;
+}
+```
+
+Parameter|Default value|Required?|Description
+|---|---|---|---
+`defaultServiceUrl`|N/A|Yes|Pulsar service URL.
+`checkInterval`|30s|No|Frequency of performing a probe task (in seconds).
+`urlProvider`|N/A|Yes|URL provider service.
+`urlProviderHeader`|N/A|No|`urlProviderHeader` is a map containing tokens and credentials. <br /><br />If you enable authentication or authorization between Pulsar clients and primary and backup clusters, you need to provide `urlProviderHeader`.
+
+Here is an example of how `urlProviderHeader` works.
+
+![How urlProviderHeader works](assets/cluster-level-failover-3.png)
+
+Assume that you want to connect Pulsar client 1 to cluster A.
+
+1. Pulsar client 1 sends the token *t1* to the URL provider service.
+
+2. The URL provider service returns the credential *c1* and the cluster A URL to the Pulsar client.
+   
+    The URL provider service manages all tokens and credentials. It returns different credentials based on different tokens and different target cluster URLs to different Pulsar clients.
+
+    **Note**: The credential must be in a JSON file and contain the parameters as shown.
+
+    ```json
+    {
+        "serviceUrl": "pulsar+ssl://target:6651",
+        "tlsTrustCertsFilePath": "/security/ca.cert.pem",
+        "authPluginClassName": "org.apache.pulsar.client.impl.auth.AuthenticationTls",
+        "authParamsString": "tlsCertFile:/security/client.cert.pem,tlsKeyFile:/security/client-pk8.pem"
+    }
+    ```
+
+3. Pulsar client 1 connects to cluster A using credential *c1*.
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+> #### How does cluster-level failover work?
+
+This chapter explains the working process of cluster-level failover. For more implementation details, see [PIP-121](https://github.com/apache/pulsar/issues/13315).
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Automatic cluster-level failover-->
+
+In automatic cluster-level failover, the Pulsar client monitors the availability of the primary and backup clusters and performs the following actions without administrator intervention:
+
+1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
+   
+2. If the probe task finds the failure time of the primary cluster exceeds the time set in the `failoverDelay` parameter, it searches backup clusters for an available healthy cluster.
+
+    2a) If there are healthy backup clusters, the Pulsar client switches to a backup cluster in the order defined in `secondary`.
+
+    2b) If there is no healthy backup cluster, the Pulsar client does not perform the switchover, and the probe task continues to look for an available backup cluster.
+
+3. The probe task checks whether the primary cluster functions well or not. 
+
+    3a) If the primary cluster comes back and the continuous healthy time exceeds the time set in `switchBackDelay`, the Pulsar client switches back to the primary cluster.
+
+    3b) If the primary cluster does not come back, the Pulsar client does not perform the switchover. 
+
+![Workflow of automatic failover cluster](assets/cluster-level-failover-4.png)
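The decision logic of steps 1–3 above can be sketched independently of the Pulsar client APIs. This is a simplified, hypothetical illustration, not the actual `AutoClusterFailover` implementation; the class and method names are invented for this sketch:

```java
import java.util.List;

// Hypothetical sketch of the automatic-failover decision logic described in
// steps 1-3 (not the actual AutoClusterFailover implementation).
class FailoverProbeSketch {
    private final String primary;
    private final List<String> secondaries;
    private final long failoverDelayMs;
    private final long switchBackDelayMs;

    private long primaryFailingSinceMs = -1;  // -1: primary currently healthy
    private long primaryHealthySinceMs = -1;  // tracked while on a backup cluster
    private String currentUrl;

    FailoverProbeSketch(String primary, List<String> secondaries,
                        long failoverDelayMs, long switchBackDelayMs) {
        this.primary = primary;
        this.secondaries = secondaries;
        this.failoverDelayMs = failoverDelayMs;
        this.switchBackDelayMs = switchBackDelayMs;
        this.currentUrl = primary;
    }

    // One probe-task iteration: decide which service URL the client should use.
    String probe(long nowMs, boolean primaryHealthy, List<String> healthySecondaries) {
        if (currentUrl.equals(primary)) {
            if (primaryHealthy) {
                primaryFailingSinceMs = -1;
            } else {
                if (primaryFailingSinceMs < 0) {
                    primaryFailingSinceMs = nowMs;
                }
                // Step 2: switch only after the failure lasts longer than failoverDelay.
                if (nowMs - primaryFailingSinceMs >= failoverDelayMs) {
                    // Step 2a: pick the first healthy backup, in the configured order.
                    for (String s : secondaries) {
                        if (healthySecondaries.contains(s)) {
                            currentUrl = s;
                            primaryHealthySinceMs = -1;
                            break;
                        }
                    }
                    // Step 2b: if no backup is healthy, stay and keep probing.
                }
            }
        } else {
            // Step 3: switch back once the primary stays healthy longer than switchBackDelay.
            if (primaryHealthy) {
                if (primaryHealthySinceMs < 0) {
                    primaryHealthySinceMs = nowMs;
                }
                if (nowMs - primaryHealthySinceMs >= switchBackDelayMs) {
                    currentUrl = primary;
                    primaryFailingSinceMs = -1;
                }
            } else {
                primaryHealthySinceMs = -1;
            }
        }
        return currentUrl;
    }
}
```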
+
+<!--Controlled cluster-level failover-->
+
+1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
+
+2. The probe task fetches the service URL configuration from the URL provider service, which is configured by `urlProvider`.
+
+    2a) If the service URL configuration is changed, the Pulsar client switches to the target cluster without checking its health status.
+
+    2b) If the service URL configuration is not changed, the Pulsar client does not perform the switchover.
+
+3. If the Pulsar client switches to the target cluster, the probe task continues to fetch service URL configuration from the URL provider service at intervals defined in `checkInterval`. 
+
+    3a) If the service URL configuration is changed, the Pulsar client switches to the target cluster without checking its health status.
+
+    3b) If the service URL configuration is not changed, it does not perform the switchover.
+
+![Workflow of controlled failover cluster](assets/cluster-level-failover-5.png)
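Again as a hypothetical sketch (not the actual `ControlledClusterFailover` implementation), the controlled-failover probe reduces to comparing the service URL fetched from the provider service with the current one:

```java
// Hypothetical sketch of the controlled-failover probe: the client switches
// whenever the URL provider service returns a different service URL, without
// checking the health of the target cluster.
class ControlledFailoverSketch {
    private String currentUrl;

    ControlledFailoverSketch(String defaultServiceUrl) {
        this.currentUrl = defaultServiceUrl;
    }

    // Called once per checkInterval with the URL fetched from the provider service.
    String probe(String providedUrl) {
        if (providedUrl != null && !providedUrl.equals(currentUrl)) {
            currentUrl = providedUrl;  // steps 2a/3a: switch on any configuration change
        }
        return currentUrl;            // steps 2b/3b: otherwise keep the current cluster
    }
}
```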
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
 ## Producer
 
 In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).
@@ -207,7 +496,9 @@ Name| Type |  <div style="width:300px">Description</div>|  Default
 `batchingMaxPublishDelayMicros`| long|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
 `batchingMaxMessages` |int|The maximum number of messages permitted in a batch.|1000
 `batchingEnabled`| boolean|Enable batching of messages. |true
+`chunkingEnabled` | boolean | Enable chunking of messages. |false
 `compressionType`|CompressionType|Message data compression type used by a producer. <br />Available options:<li>[`LZ4`](https://github.com/lz4/lz4)</li><li>[`ZLIB`](https://zlib.net/)<br /><li>[`ZSTD`](https://facebook.github.io/zstd/)</li><li>[`SNAPPY`](https://google.github.io/snappy/)</li>| No compression
+`initialSubscriptionName`|string|Use this configuration to automatically create an initial subscription when creating a topic. If this field is not set, the initial subscription is not created.|null
 
 You can configure parameters if you do not want to use the default configuration.
 
@@ -255,6 +546,21 @@ producer.newMessage()
 
 You can terminate the builder chain with `sendAsync()` and get a future return.
 
+### Enable chunking
+
+Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side. 
+
+The message chunking feature is OFF by default. The following is an example of how to enable message chunking when creating a producer.
+
+```java
+Producer<byte[]> producer = client.newProducer()
+        .topic(topic)
+        .enableChunking(true)
+        .enableBatching(false)
+        .create();
+```
+> **Note:** To enable chunking, you must also disable batching (`enableBatching`=`false`).
+
 ## Consumer
 
 In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
@@ -336,7 +642,11 @@ When you create a consumer, you can use the `loadConf` configuration. The follow
 `deadLetterPolicy`|DeadLetterPolicy|Dead letter policy for consumers.<br /><br />By default, some messages are probably redelivered many times, even to the extent that it never stops.<br /><br />By using the dead letter mechanism, messages have the max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.<br /><br />You can enable the dead letter mechanism by setting `deadLetterPolicy`.<br /><br [...]
 `autoUpdatePartitions`|boolean|If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increasement automatically.<br /><br />**Note**: this is only for partitioned consumers.|true
 `replicateSubscriptionState`|boolean|If `replicateSubscriptionState` is enabled, a subscription state is replicated to geo-replicated clusters.|false
-`negativeAckRedeliveryBackoff`|NegativeAckRedeliveryBackoff|Interface for custom message is negativeAcked policy. You can specify `NegativeAckRedeliveryBackoff` for a consumer.| `NegativeAckRedeliveryExponentialBackoff`
+`negativeAckRedeliveryBackoff`|RedeliveryBackoff|Interface for a custom backoff policy for messages that are negatively acknowledged. You can specify a `RedeliveryBackoff` for a consumer.| `MultiplierRedeliveryBackoff`
+`ackTimeoutRedeliveryBackoff`|RedeliveryBackoff|Interface for a custom backoff policy for messages that exceed the acknowledgement timeout. You can specify a `RedeliveryBackoff` for a consumer.| `MultiplierRedeliveryBackoff`
+`autoAckOldestChunkedMessageOnQueueFull`|boolean|Whether to automatically acknowledge pending chunked messages when the threshold of `maxPendingChunkedMessage` is reached. If set to `false`, these messages are redelivered by the broker. |true
+`maxPendingChunkedMessage`|int| The maximum size of a queue holding pending chunked messages. When the threshold is reached, the consumer drops pending messages to optimize memory utilization.|10
+`expireTimeOfIncompleteChunkedMessageMillis`|long|The time interval to expire incomplete chunks if a consumer fails to receive all the chunks in the specified time period. The default value is 1 minute. | 60000
 
 You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class. 
 
@@ -403,24 +713,69 @@ consumer.acknowledge(messages)
 >     .build();
 > ```
 
+### Configure chunking
+
+You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` and `autoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. The `expireTimeOfIncompleteChunkedMessage` parameter decides the time interval to expire incomplete chunks if the consumer fails to receive all chunks of a me [...]
+
+The following is an example of how to configure message chunking.
+
+```java
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic(topic)
+        .subscriptionName("test")
+        .autoAckOldestChunkedMessageOnQueueFull(true)
+        .maxPendingChunkedMessage(100)
+        .expireTimeOfIncompleteChunkedMessage(10, TimeUnit.MINUTES)
+        .subscribe();
+```
+
 ### Negative acknowledgment redelivery backoff
 
-The `NegativeAckRedeliveryBackoff` introduces a redelivery backoff mechanism. You can achieve redelivery with different delays by setting `redeliveryCount ` of messages. 
+The `RedeliveryBackoff` introduces a redelivery backoff mechanism. You can achieve redelivery with different delays based on the `redeliveryCount` of messages. 
+
+```java
+Consumer consumer =  client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+                .minDelayMs(1000)
+                .maxDelayMs(60 * 1000)
+                .build())
+        .subscribe();
+```
+
+### Acknowledgement timeout redelivery backoff
+
+The `RedeliveryBackoff` introduces a redelivery backoff mechanism. You can redeliver messages with different delays by setting the number
+of times the message is retried.
 
 ```java
 Consumer consumer =  client.newConsumer()
         .topic("my-topic")
         .subscriptionName("my-subscription")
-        .negativeAckRedeliveryBackoff(NegativeAckRedeliveryExponentialBackoff.builder()
-                .minNackTimeMs(1000)
-                .maxNackTimeMs(60 * 1000)
+        .ackTimeout(10, TimeUnit.SECONDS)
+        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+                .minDelayMs(1000)
+                .maxDelayMs(60000)
+                .multiplier(2)
                 .build())
         .subscribe();
 ```
+The message redelivery behavior should be as follows.
+
+Redelivery count | Redelivery delay
+:--------------------|:-----------
+1 | 10 + 1 seconds
+2 | 10 + 2 seconds
+3 | 10 + 4 seconds
+4 | 10 + 8 seconds
+5 | 10 + 16 seconds
+6 | 10 + 32 seconds
+7 | 10 + 60 seconds
+8 | 10 + 60 seconds
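
The delays in the table follow a multiplier pattern: the n-th redelivery adds min(`maxDelayMs`, `minDelayMs` × `multiplier`^(n−1)) on top of the 10-second acknowledgement timeout. The following is a minimal sketch of that arithmetic in plain Java (an illustration of the configured behavior above, not the Pulsar `MultiplierRedeliveryBackoff` implementation):

```java
class BackoffTable {
    // Delay (in ms) added after the ack timeout for the n-th redelivery,
    // assuming minDelayMs=1000, maxDelayMs=60000, multiplier=2 as configured above.
    static long delayMs(int redeliveryCount) {
        long minDelayMs = 1000L;
        long maxDelayMs = 60000L;
        double multiplier = 2.0;
        long delay = (long) (minDelayMs * Math.pow(multiplier, redeliveryCount - 1));
        return Math.min(delay, maxDelayMs);
    }

    public static void main(String[] args) {
        // Reproduces the redelivery delay table: 1, 2, 4, ... seconds, capped at 60.
        for (int n = 1; n <= 8; n++) {
            System.out.println(n + " | 10 + " + delayMs(n) / 1000 + " seconds");
        }
    }
}
```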
 
 > **Note** 
 >   - The `negativeAckRedeliveryBackoff` does not work with `consumer.negativeAcknowledge(MessageId messageId)` because you are not able to get the redelivery count from the message ID.
->   - If a consumer crashes, it triggers the redelivery of unacked messages. In this case, `NegativeAckRedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time from the backoff.
+>   - If a consumer crashes, it triggers the redelivery of unacked messages. In this case, `RedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time from the backoff.
 
 ### Multi-topic subscriptions
 
@@ -760,6 +1115,49 @@ pulsarClient.newReader()
 
 Total hash range size is 65536, so the max end of the range should be less than or equal to 65535.
 
+
+## TableView
+
+The TableView interface provides an encapsulated access pattern: a continuously updated key-value map view of the compacted topic data. Messages without keys are ignored.
+
+With TableView, Pulsar clients can fetch all the message updates from a topic and construct a map with the latest values of each key. These values can then be used to build a local cache of data. In addition, you can register consumers with the TableView by specifying a listener to perform a scan of the map and then receive notifications when new messages are received. Consequently, event handling can be triggered to serve use cases, such as event-driven applications and message monitoring.
+
+> **Note:** Each TableView uses one Reader instance per partition, and reads the topic starting from the compacted view by default. It is highly recommended to enable automatic compaction by [configuring the topic compaction policies](cookbooks-compaction.md#configuring-compaction-to-run-automatically) for the given topic or namespace. More frequent compaction results in shorter startup times because less data is replayed to reconstruct the TableView of the topic.
+
+The following figure illustrates the dynamic construction of a TableView updated with newer values of each key.
+![TableView](assets/tableview.png)
+
+### Configure TableView
+ 
+The following is an example of how to configure a TableView.
+
+```java
+TableView<String> tv = client.newTableViewBuilder(Schema.STRING)
+  .topic("my-tableview")
+  .create();
+```
+
+You can use the available parameters in the `loadConf` configuration or related [API](https://pulsar.apache.org/api/client/2.10.0-SNAPSHOT/org/apache/pulsar/client/api/TableViewBuilder.html) to customize your TableView.
+
+| Name | Type| Required? |  <div style="width:300px">Description</div> | Default
+|---|---|---|---|---
+| `topic` | string | yes | The topic name of the TableView. | N/A
+| `autoUpdatePartitionInterval` | int | no | The interval to check for newly added partitions. | 60 (seconds)
+
+### Register listeners
+ 
+You can register listeners for both existing messages on a topic and new messages coming into the topic by using `forEachAndListen`, or perform an action on all existing messages only by using `forEach`.
+
+The following is an example of how to register listeners with TableView.
+
+```java
+// Register listeners for all existing and incoming messages
+tv.forEachAndListen((key, value) -> /*operations on all existing and incoming messages*/)
+
+// Register action for all existing messages
+tv.forEach((key, value) -> /*operations on all existing messages*/)
+```
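
Conceptually, a TableView keeps only the latest value per key as messages arrive and notifies any registered listener of updates. The following is a broker-free sketch of that pattern in plain Java (a conceptual illustration only, not the Pulsar `TableView` implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiConsumer;

class TableViewSketch<V> {
    private final Map<String, V> latest = new LinkedHashMap<>();
    private BiConsumer<String, V> listener;

    // Apply a message: keep only the latest value per key (keyless messages are ignored).
    void onMessage(String key, V value) {
        if (key == null) {
            return;
        }
        latest.put(key, value);
        if (listener != null) {
            listener.accept(key, value);
        }
    }

    // Run the action over existing entries, then on every future update.
    void forEachAndListen(BiConsumer<String, V> action) {
        latest.forEach(action);
        this.listener = action;
    }

    V get(String key) {
        return latest.get(key);
    }
}
```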
+
 ## Schema
 
 In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
diff --git a/site2/docs/client-libraries-python.md b/site2/docs/client-libraries-python.md
index 4984de6..8bd0982 100644
--- a/site2/docs/client-libraries-python.md
+++ b/site2/docs/client-libraries-python.md
@@ -40,8 +40,8 @@ Installation via PyPi is available for the following Python versions:
 
 Platform | Supported Python versions
 :--------|:-------------------------
-MacOS <br />  10.13 (High Sierra), 10.14 (Mojave) <br /> | 2.7, 3.7
-Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8
+MacOS <br />  10.13 (High Sierra), 10.14 (Mojave) <br /> | 2.7, 3.7, 3.8, 3.9
+Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9
 
 ### Install from source
 
@@ -97,7 +97,7 @@ while True:
         print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
         # Acknowledge successful processing of the message
         consumer.acknowledge(msg)
-    except:
+    except Exception:
         # Message failed to be processed
         consumer.negative_acknowledge(msg)
 
@@ -161,7 +161,7 @@ while True:
         print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
         # Acknowledge successful processing of the message
         consumer.acknowledge(msg)
-    except:
+    except Exception:
         # Message failed to be processed
         consumer.negative_acknowledge(msg)
 client.close()
@@ -297,7 +297,7 @@ while True:
         print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c))
         # Acknowledge successful processing of the message
         consumer.acknowledge(msg)
-    except:
+    except Exception:
         # Message failed to be processed
         consumer.negative_acknowledge(msg)
 ```
diff --git a/site2/docs/client-libraries-websocket.md b/site2/docs/client-libraries-websocket.md
index 2cff73a..d9f83ad 100644
--- a/site2/docs/client-libraries-websocket.md
+++ b/site2/docs/client-libraries-websocket.md
@@ -30,14 +30,14 @@ webSocketServiceEnabled=true
 
 In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters:
 
-* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers)
+* [`configurationMetadataStoreUrl`](reference-configuration.md#websocket)
 * [`webServicePort`](reference-configuration.md#websocket-webServicePort)
 * [`clusterName`](reference-configuration.md#websocket-clusterName)
 
 Here's an example:
 
 ```properties
-configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
+configurationMetadataStoreUrl=zk1:2181,zk2:2181,zk3:2181
 webServicePort=8080
 clusterName=my-cluster
 ```
diff --git a/site2/docs/concepts-architecture-overview.md b/site2/docs/concepts-architecture-overview.md
index 74cf514..b33e75a 100644
--- a/site2/docs/concepts-architecture-overview.md
+++ b/site2/docs/concepts-architecture-overview.md
@@ -47,6 +47,9 @@ Clusters can replicate amongst themselves using [geo-replication](concepts-repli
 
 The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and [BookKeeper metadata store](https://bookkee [...]
 
+> Pulsar also supports other metadata backend services, including [etcd](https://etcd.io/) and [RocksDB](http://rocksdb.org/) (for standalone Pulsar only).
+
+
 In a Pulsar instance:
 
 * A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
@@ -125,9 +128,10 @@ The **Pulsar proxy** provides a solution to this problem by acting as a single g
 Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example:
 
 ```bash
+$ cd /path/to/pulsar/directory
 $ bin/pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk-2 \
-  --configuration-store-servers zk-0,zk-1,zk-2
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 ```
 
 > #### Pulsar proxy docs
diff --git a/site2/docs/concepts-messaging.md b/site2/docs/concepts-messaging.md
index f106dba..595000f 100644
--- a/site2/docs/concepts-messaging.md
+++ b/site2/docs/concepts-messaging.md
@@ -96,29 +96,44 @@ To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar i
 By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overheads. 
 
 ### Chunking
-Before you enable chunking, read the following instructions.
-- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance.
-- Chunking is only supported for persisted topics.
-- Chunking is only supported for Exclusive and Failover subscription types.
+Message chunking enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side.
 
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
+With message chunking enabled, when the size of a message exceeds the allowed maximum payload size (the `maxMessageSize` parameter of the broker), the workflow of messaging is as follows:
+1. The producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. 
+2. The broker stores the chunked messages in one managed-ledger in the same way as that of ordinary messages, and it uses the `chunkedMessageRate` parameter to record the chunked message rate on the topic.
+3. The consumer buffers the chunked messages and aggregates them into the receiver queue when it receives all the chunks of a message.
+4. The client consumes the aggregated message from the receiver queue. 
 
-The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. And then the consumer stitches chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated to that large message. You can set the `maxPendingChunkedMessage` param [...]
+**Limitations:** 
+- Chunking is only available for persisted topics.
+- Chunking is only available for the exclusive and failover subscription types.
+- Chunking cannot be enabled simultaneously with batching.
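
The producer-side split and consumer-side aggregation in the workflow above can be sketched independently of Pulsar: split a payload into fixed-size chunks, then concatenate them in order. This is a conceptual illustration only; the real client also attaches chunk metadata (such as a message UUID, chunk index, and total chunk count) so the consumer can match and order chunks.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class ChunkingSketch {
    // Producer side: split a large payload into chunks of at most maxChunkSize bytes.
    static List<byte[]> split(byte[] payload, int maxChunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < payload.length; offset += maxChunkSize) {
            int end = Math.min(offset + maxChunkSize, payload.length);
            chunks.add(Arrays.copyOfRange(payload, offset, end));
        }
        return chunks;
    }

    // Consumer side: aggregate the chunks, in order, back into the original payload.
    static byte[] aggregate(List<byte[]> chunks) {
        int total = chunks.stream().mapToInt(c -> c.length).sum();
        byte[] payload = new byte[total];
        int offset = 0;
        for (byte[] chunk : chunks) {
            System.arraycopy(chunk, 0, payload, offset, chunk.length);
            offset += chunk.length;
        }
        return payload;
    }
}
```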
 
-The broker does not require any changes to support chunking for non-shared subscription. The broker only uses `chunkedMessageRate` to record chunked message rate on the topic.
+#### Handle consecutive chunked messages with one ordered consumer
 
-#### Handle chunked messages with one producer and one ordered consumer
-
-As shown in the following figure, when a topic has one producer which publishes large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, combi [...]
+The following figure shows a topic with one producer which publishes a large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks labeled M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, a [...]
 
 ![](assets/chunking-01.png)
 
-#### Handle chunked messages with multiple producers and one ordered consumer
+#### Handle interwoven chunked messages with one ordered consumer
 
-When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of the specific message are still in order but might not be consecutive in the managed-ledger. This brings some memory pressure to the c [...]
+When multiple producers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different producers in the same managed-ledger. The chunked messages in the managed-ledger can be interwoven with each other. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of the specific message are still in order but might not be  [...]
 
 ![](assets/chunking-02.png)
 
+> **Note**  
+> In this case, interwoven chunked messages may bring some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all its chunks in one message. You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` parameter. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later, o [...]
+
+#### Enable message chunking
+
+**Prerequisite:** Disable batching by setting the `enableBatching` parameter to `false`.
+
+The message chunking feature is OFF by default. 
+To enable message chunking, set the `chunkingEnabled` parameter to `true` when creating a producer.
+
+> **Note**  
+> If the consumer fails to receive all chunks of a message within a specified time period, it expires incomplete chunks. The default value is 1 minute. For more information about the `expireTimeOfIncompleteChunkedMessage` parameter, refer to [org.apache.pulsar.client.api](https://pulsar.apache.org/api/client/).
+
 ## Consumers
 
 A consumer is a process that attaches to a topic via a subscription and then receives messages.
@@ -206,9 +221,9 @@ But this is not flexible enough. A better way is to use the **redelivery backoff
 Use the following API to enable `Negative Redelivery Backoff`.
 
 ```java
-consumer.negativeAckRedeliveryBackoff(NegativeAckRedeliveryExponentialBackoff.builder()
-        .minNackTimeMs(1000)
-        .maxNackTimeMs(60 * 1000)
+consumer.negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+        .minDelayMs(1000)
+        .maxDelayMs(60 * 1000)
         .build())
 ```
 
@@ -218,6 +233,31 @@ The acknowledgement timeout mechanism allows you to set a time range during whic
 
 You can configure the acknowledgement timeout mechanism to redeliver the message if it is not acknowledged after `ackTimeout` or to execute a timer task to check the acknowledgement timeout messages during every `ackTimeoutTickTime` period.
 
+You can also use the redelivery backoff mechanism to redeliver messages with different delays by setting the number 
+of times the message is retried.
+
+If you want to use redelivery backoff, you can use the following API.
+```java
+consumer.ackTimeout(10, TimeUnit.SECONDS)
+        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+        .minDelayMs(1000)
+        .maxDelayMs(60000)
+        .multiplier(2).build())
+```
+
+The message redelivery behavior should be as follows.
+
+Redelivery count | Redelivery delay
+:--------------------|:-----------
+1 | 10 + 1 seconds
+2 | 10 + 2 seconds
+3 | 10 + 4 seconds
+4 | 10 + 8 seconds
+5 | 10 + 16 seconds
+6 | 10 + 32 seconds
+7 | 10 + 60 seconds
+8 | 10 + 60 seconds
+
 > **Note**  
 > - If batching is enabled, all messages in one batch are redelivered to the consumer.  
 > - Compared with acknowledgement timeout, negative acknowledgement is preferred. First, it is difficult to set a timeout value. Second, a broker resends messages when the message processing time exceeds the acknowledgement timeout, but these messages might not need to be re-consumed.
@@ -277,6 +317,22 @@ Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
                 
 ```
 
+By default, there is no subscription during a DLQ topic creation. Without a just-in-time subscription to the DLQ topic, you may lose messages. To automatically create an initial subscription for the DLQ, you can specify the `initialSubscriptionName` parameter. If this parameter is set but the broker's `allowAutoSubscriptionCreation` is disabled, the DLQ producer will fail to be created.
+
+```java
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+                .topic(topic)
+                .subscriptionName("my-subscription")
+                .subscriptionType(SubscriptionType.Shared)
+                .deadLetterPolicy(DeadLetterPolicy.builder()
+                      .maxRedeliverCount(maxRedeliveryCount)
+                      .deadLetterTopic("your-topic-name")
+                      .initialSubscriptionName("init-sub")
+                      .build())
+                .subscribe();
+                
+```
+
 Dead letter topic depends on message redelivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout. 
 
 > **Note**    
diff --git a/site2/docs/cookbooks-deduplication.md b/site2/docs/cookbooks-deduplication.md
index 6140bde..0c067b6 100644
--- a/site2/docs/cookbooks-deduplication.md
+++ b/site2/docs/cookbooks-deduplication.md
@@ -25,6 +25,7 @@ Parameter | Description | Default
 `brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false`
 `brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
 `brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
+`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120`
 `brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)
 
 ### Set default value at the broker-level
diff --git a/site2/docs/deploy-bare-metal-multi-cluster.md b/site2/docs/deploy-bare-metal-multi-cluster.md
index 6923831..71eb174 100644
--- a/site2/docs/deploy-bare-metal-multi-cluster.md
+++ b/site2/docs/deploy-bare-metal-multi-cluster.md
@@ -201,8 +201,8 @@ You can initialize this metadata using the [`initialize-cluster-metadata`](refer
 ```shell
 $ bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
@@ -275,7 +275,7 @@ Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper b
 
 You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
 
-The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those  [...]
+The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`metadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the local quorum and the [`configurationMetadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same c [...]
 
 You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata (especially when you use a different port from default) of the cluster.
 
@@ -283,10 +283,10 @@ The following is an example configuration:
 
 ```properties
 # Local ZooKeeper servers
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+metadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
 
 # Configuration store quorum connection string.
-configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+configurationMetadataStoreUrl=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
 
 clusterName=us-west
 
diff --git a/site2/docs/deploy-bare-metal.md b/site2/docs/deploy-bare-metal.md
index 13c001f..4f73cbb 100644
--- a/site2/docs/deploy-bare-metal.md
+++ b/site2/docs/deploy-bare-metal.md
@@ -40,7 +40,7 @@ To run Pulsar on bare metal, the following configuration is recommended:
 > * Broker is only supported on 64-bit JVM.
 > * If you do not have enough machines, or you want to test Pulsar in cluster mode (and expand the cluster later), You can fully deploy Pulsar on a node on which ZooKeeper, bookie and broker run.
 > * If you do not have a DNS server, you can use the multi-host format in the service URL instead.
-Each machine in your cluster needs to have [Java 8](https://adoptopenjdk.net/?variant=openjdk8) or [Java 11](https://adoptopenjdk.net/?variant=openjdk11) installed.
+Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed.
 
 The following is a diagram showing the basic setup:
 
@@ -241,8 +241,8 @@ You can initialize this metadata using the [`initialize-cluster-metadata`](refer
 ```shell
 $ bin/pulsar initialize-cluster-metadata \
   --cluster pulsar-cluster-1 \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2181 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080 \
   --web-service-url-tls https://pulsar.us-west.example.com:8443 \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
@@ -331,11 +331,11 @@ Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Bro
 
 ### Configure Brokers
 
-The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have 1 cluster and no configuration store setup, the `configurationStoreServers` point to the same `zookeeperServers`.
+The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`metadataStoreUrl`](reference-configuration.md#broker) and [`configurationMetadataStoreUrl`](reference-configuration.md#broker) parameters are correct. In this case, since you only have one cluster and no separate configuration store, `configurationMetadataStoreUrl` points to the same ZooKeeper quorum as `metadataStoreUrl`.
 
 ```properties
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+metadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+configurationMetadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
 ```
 
 You also need to specify the cluster name (matching the name that you provided when you [initialize the metadata of the cluster](#initialize-cluster-metadata)):
diff --git a/site2/docs/deploy-monitoring.md b/site2/docs/deploy-monitoring.md
index 8c80750..407cdc6 100644
--- a/site2/docs/deploy-monitoring.md
+++ b/site2/docs/deploy-monitoring.md
@@ -43,7 +43,7 @@ http://$LOCAL_ZK_SERVER:8000/metrics
 http://$GLOBAL_ZK_SERVER:8001/metrics
 ```
 
-The default port of local ZooKeeper is `8000` and the default port of configuration store is `8001`. You can change the default port of local ZooKeeper and configuration store by specifying system property `stats_server_port`.
+The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
 
 ### BookKeeper stats
 
diff --git a/site2/docs/developing-binary-protocol.md b/site2/docs/developing-binary-protocol.md
index 4c4cc8b..3caf769 100644
--- a/site2/docs/developing-binary-protocol.md
+++ b/site2/docs/developing-binary-protocol.md
@@ -233,9 +233,10 @@ Parameters:
 ##### Command Send
 
 Command `Send` is used to publish a new message within the context of an
-already existing producer. This command is used in a frame that includes command
-as well as message payload, for which the complete format is specified in the
-[payload commands](#payload-commands) section.
+already existing producer. If a producer has not yet been created for the
+connection, the broker will terminate the connection. This command is used
+in a frame that includes command as well as message payload, for which the
+complete format is specified in the [payload commands](#payload-commands) section.
 
 ```protobuf
 message CommandSend {
diff --git a/site2/docs/functions-develop.md b/site2/docs/functions-develop.md
index fcd8274..5853dee 100644
--- a/site2/docs/functions-develop.md
+++ b/site2/docs/functions-develop.md
@@ -13,7 +13,9 @@ Interface | Description | Use cases
 :---------|:------------|:---------
 Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context).
 Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context).
+Extended Pulsar Function SDK for Java | An extension to Pulsar-specific libraries, providing the initialization and close interfaces in Java. | Functions that require initializing and releasing external resources.
 
+### Language-native interface
 A language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is a language-native function.
 
 <!--DOCUSAURUS_CODE_TABS-->
@@ -50,6 +52,7 @@ For complete code, see [here](https://github.com/apache/pulsar/blob/master/pulsa
 
 <!--END_DOCUSAURUS_CODE_TABS-->
 
+### Pulsar Function SDK for Java/Python/Go
 The following example uses Pulsar Functions SDK.
 <!--DOCUSAURUS_CODE_TABS-->
 <!--Java-->
@@ -100,7 +103,52 @@ func main() {
 }
 ```
 For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f1626a53a1fe2e65dd25f377c1127/pulsar-function-go/examples/inputFunc/inputFunc.go#L20-L36).
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+### Extended Pulsar Function SDK for Java
+This extended Pulsar Function SDK provides two additional interfaces to initialize and release external resources.
+- By using the `initialize` interface, you can initialize external resources that require one-time initialization when the function instance starts.
+- By using the `close` interface, you can close the referenced external resources when the function instance closes.
+
+> **Note**
+>
+> The extended Pulsar Function SDK for Java is available in Pulsar 2.10.0 and later versions.
+> Before using it, you need to set up Pulsar Function worker 2.10.0 or a later version.
+
+The following example uses the extended Pulsar Function SDK for Java to initialize a `RedisClient` when the function instance starts and release it when the function instance closes.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--Java-->
+```Java
+import java.util.Map;
+
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.api.StatefulRedisConnection;
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class InitializableFunction implements Function<String, String> {
+    private RedisClient redisClient;
+    private StatefulRedisConnection<String, String> connection;
+
+    @Override
+    public void initialize(Context context) {
+        // Create the Redis client once, when the function instance starts.
+        Map<String, Object> connectInfo = context.getUserConfigMap();
+        redisClient = RedisClient.create((String) connectInfo.get("redisURI"));
+        connection = redisClient.connect();
+    }
+
+    @Override
+    public String process(String input, Context context) {
+        String value = connection.sync().get(input);
+        return String.format("%s-%s", input, value);
+    }
+
+    @Override
+    public void close() {
+        // Release the external resources when the function instance closes.
+        connection.close();
+        redisClient.close();
+    }
+}
+```
 <!--END_DOCUSAURUS_CODE_TABS-->
 
 ## Schema registry
@@ -1006,7 +1054,22 @@ class MetricRecorderFunction(Function):
             context.record_metric('elevens-count', 1)
 ```
 <!--Go-->
-Currently, the feature is not available in Go.
+The Go SDK [`Context`](#context) object enables you to record metrics on a per-key basis. For example, you can set a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message:
+
+```go
+import (
+	"context"
+	"errors"
+
+	"github.com/apache/pulsar/pulsar-function-go/pf"
+)
+
+func metricRecorderFunction(ctx context.Context, in []byte) error {
+	inputstr := string(in)
+	fctx, ok := pf.FromContext(ctx)
+	if !ok {
+		return errors.New("get Go Functions Context error")
+	}
+	fctx.RecordMetric("hit-count", 1)
+	if inputstr == "eleven" {
+		fctx.RecordMetric("elevens-count", 1)
+	}
+	return nil
+}
+```
 
 <!--END_DOCUSAURUS_CODE_TABS-->
 
diff --git a/site2/docs/functions-runtime.md b/site2/docs/functions-runtime.md
index bdbd658..f65393a 100644
--- a/site2/docs/functions-runtime.md
+++ b/site2/docs/functions-runtime.md
@@ -270,7 +270,7 @@ For example, if you use token authentication, you need to configure the followin
 ```Yaml
 clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken
 clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt
-configurationStoreServers: zookeeper-cluster:2181 # auth requires a connection to zookeeper
+configurationMetadataStoreUrl: zk:zookeeper-cluster:2181 # auth requires a connection to zookeeper
 authenticationProviders:
  - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
 authorizationEnabled: true
diff --git a/site2/docs/functions-worker.md b/site2/docs/functions-worker.md
index 36cf864..c58d5a4 100644
--- a/site2/docs/functions-worker.md
+++ b/site2/docs/functions-worker.md
@@ -216,12 +216,12 @@ properties:
 
 ##### Enable Authorization Provider
 
-To enable authorization on Functions Worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationStoreServers`. The authentication provider connects to `configurationStoreServers` to receive namespace policies.
+To enable authorization on Functions Worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationMetadataStoreUrl`. The authentication provider connects to `configurationMetadataStoreUrl` to receive namespace policies.
 
 ```yaml
 authorizationEnabled: true
 authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
-configurationStoreServers: <configuration-store-servers>
+configurationMetadataStoreUrl: <meta-type>:<configuration-metadata-store-url>
 ```
 
 You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example.
diff --git a/site2/docs/getting-started-clients.md b/site2/docs/getting-started-clients.md
index 42b45db..2b9a911 100644
--- a/site2/docs/getting-started-clients.md
+++ b/site2/docs/getting-started-clients.md
@@ -6,16 +6,24 @@ sidebar_label: Overview
 
 Pulsar supports the following client libraries:
 
-- [Java client](client-libraries-java.md)
-- [Go client](client-libraries-go.md)
-- [Python client](client-libraries-python.md)
-- [C++ client](client-libraries-cpp.md)
-- [Node.js client](client-libraries-node.md)
-- [WebSocket client](client-libraries-websocket.md)
-- [C# client](client-libraries-dotnet.md)
+|Language|Documentation|Release note|Code repo
+|---|---|---|---
+Java |- [User doc](client-libraries-java.md) <br /><br />- [API doc](https://pulsar.apache.org/api/client/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client) 
+C++ | - [User doc](client-libraries-cpp.md) <br /><br />- [API doc](https://pulsar.apache.org/api/cpp/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp) 
+Python | - [User doc](client-libraries-python.md) <br /><br />- [API doc](https://pulsar.apache.org/api/python/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) 
+WebSocket| [User doc](client-libraries-websocket.md) | [Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-websocket) 
+Go client|[User doc](client-libraries-go.md)|[Here](https://github.com/apache/pulsar-client-go/blob/master/CHANGELOG.md) |[Here](https://github.com/apache/pulsar-client-go) 
+Node.js|[User doc](client-libraries-node.md)|[Here](https://github.com/apache/pulsar-client-node/releases) |[Here](https://github.com/apache/pulsar-client-node) 
+C# |[User doc](client-libraries-dotnet.md)| [Here](https://github.com/apache/pulsar-dotpulsar/blob/master/CHANGELOG.md)|[Here](https://github.com/apache/pulsar-dotpulsar) 
+
+> **Note**
+> 
+> - The code repos of the **Java, C++, Python,** and **WebSocket** clients are hosted in the [Pulsar main repo](https://github.com/apache/pulsar), and these clients are released with Pulsar, so their release notes are part of the [Pulsar release notes](https://pulsar.apache.org/release-notes/).
+> 
+> - The code repos of the **Go, Node.js,** and **C#** clients are hosted outside the [Pulsar main repo](https://github.com/apache/pulsar), and these clients are not released with Pulsar, so they have independent release notes.
 
 ## Feature matrix
-Pulsar client feature matrix for different languages is listed on [Pulsar Feature Matrix (Client and Function)](https://github.com/apache/pulsar/wiki/PIP-108%3A-Pulsar-Feature-Matrix-%28Client-and-Function%29) page.
+The Pulsar client feature matrix for different languages is available on the [Pulsar Feature Matrix (Client and Function)](https://docs.google.com/spreadsheets/d/1YHYTkIXR8-Ql103u-IMI18TXLlGStK8uJjDsOOA0T20/edit#gid=1784579914) page.
 
 ## Third-party clients
 
diff --git a/site2/docs/getting-started-docker.md b/site2/docs/getting-started-docker.md
index 1489951..0d8fb11 100644
--- a/site2/docs/getting-started-docker.md
+++ b/site2/docs/getting-started-docker.md
@@ -20,6 +20,7 @@ A few things to note about this command:
  * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every 
 time the container is restarted. For details on the volumes you can use `docker volume inspect <sourcename>`
  * For Docker on Windows make sure to configure it to use Linux containers
+ * By default, the Docker container runs as UID 10000 and GID 0. Ensure that the mounted volumes are writable by either UID 10000 or GID 0. Note that UID 10000 is arbitrary, so it is recommended to make these mounts writable by the root group (GID 0).
 
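The volume-permission note above can be sketched in plain shell — a minimal example, assuming a local `./pulsar-data` directory that you intend to mount into the container (the directory name is hypothetical):

```shell
# Prepare a host directory so the container user (UID 10000, GID 0) can write to it.
mkdir -p ./pulsar-data
# Group-own the directory by root (GID 0) where permitted, and make it group-writable.
chgrp 0 ./pulsar-data 2>/dev/null || true
chmod g+rwx ./pulsar-data
ls -ld ./pulsar-data
```

You would then mount it with something like `-v "$(pwd)/pulsar-data:/pulsar/data"`; the in-container path is an assumption based on the default data directory.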
 If you start Pulsar successfully, you will see `INFO`-level log messages like this:
 
diff --git a/site2/docs/getting-started-standalone.md b/site2/docs/getting-started-standalone.md
index 4bddbe9..94449f5 100644
--- a/site2/docs/getting-started-standalone.md
+++ b/site2/docs/getting-started-standalone.md
@@ -4,7 +4,7 @@ title: Set up a standalone Pulsar locally
 sidebar_label: Run Pulsar locally
 ---
 
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
+For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary [RocksDB](http://rocksdb.org/) and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
 
 > **Pulsar in production?**  
 > If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
@@ -53,7 +53,7 @@ The Pulsar binary package initially contains the following directories:
 Directory | Contains
 :---------|:--------
 `bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
+`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker) and more.<br />**Note:** Pulsar standalone uses RocksDB as the local metadata store and its configuration file path [`metadataStoreConfigPath`](reference-configuration.md) is configurable in the `standalone.conf` file. For more information about the configurations of RocksDB, see [here](https://github.com/facebook/rocksdb/blob/main/examples/rocksdb_option_file_example.ini) and rel [...]
 `examples` | A Java JAR file containing [Pulsar Functions](functions-overview.md) example.
 `instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
 `lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
@@ -63,7 +63,7 @@ These directories are created once you begin running Pulsar.
 
 Directory | Contains
 :---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
+`data` | The data storage directory used by RocksDB and BookKeeper.
 `logs` | Logs created by the installation.
 
 > **Tip**  
diff --git a/site2/docs/io-elasticsearch-sink.md b/site2/docs/io-elasticsearch-sink.md
index 053756d..bf7c553 100644
--- a/site2/docs/io-elasticsearch-sink.md
+++ b/site2/docs/io-elasticsearch-sink.md
@@ -49,8 +49,8 @@ The configuration of the Elasticsearch sink connector has the following properti
 
 | Name | Type|Required | Default | Description 
 |------|----------|----------|---------|-------------|
-| `elasticSearchUrl` | String| true |" " (empty string)| The URL of elastic search cluster to which the connector connects. |
-| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
+| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
+| `indexName` | String| false |" " (empty string)| The index name to which the connector writes messages. The default value is the topic name. The name can include a date format with the pattern `%{+<date-format>}` to support event-time-based indexes. For example, if the event time of the record is 1645182000000L and `indexName` is `logs-%{+yyyy-MM-dd}`, the formatted index name is `logs-2022-02-18`. |
 | `schemaEnable` | Boolean | false | false | Turn on the Schema Aware mode. |
 | `createIndexIfNeeded` | Boolean | false | false | Manage index if missing. |
 | `maxRetries` | Integer | false | 1 | The maximum number of retries for elasticsearch requests. Use -1 to disable it.  |
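The event-time index pattern mentioned in the table can be illustrated with standard tools — a minimal shell sketch (not part of the connector, and assuming GNU `date`) showing how `logs-%{+yyyy-MM-dd}` resolves for event time 1645182000000L:

```shell
# Event time in milliseconds since the epoch, as carried by the record.
event_time_ms=1645182000000
# Format the date part in UTC, mirroring the %{+yyyy-MM-dd} pattern.
date_part=$(date -u -d "@$((event_time_ms / 1000))" +%Y-%m-%d)
index_name="logs-${date_part}"
echo "$index_name"   # logs-2022-02-18
```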
diff --git a/site2/docs/io-file-source.md b/site2/docs/io-file-source.md
index 03e2fd7..71266dc 100644
--- a/site2/docs/io-file-source.md
+++ b/site2/docs/io-file-source.md
@@ -26,6 +26,7 @@ The configuration of the File source connector has the following properties.
 | `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. |
 | `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
 | `numWorkers` | Integer | false | 1 | The number of worker threads that process files.<br><br> This allows you to process a larger number of files concurrently. <br><br>However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. |
+| `processedFileSuffix` | String | false | NULL | If set, a processed file is renamed with this suffix instead of being deleted. <br><br> This configuration takes effect only when the `keepFile` property is set to `false`. |
 
 ### Example
 
@@ -47,7 +48,8 @@ Before using the File source connector, you need to create a configuration file
           "maximumSize": 5000000,
           "ignoreHiddenFiles": true,
           "pollingInterval": 5000,
-          "numWorkers": 1
+          "numWorkers": 1,
+          "processedFileSuffix": ".processed_done"
        }
     }
     ```
@@ -68,6 +70,7 @@ Before using the File source connector, you need to create a configuration file
         ignoreHiddenFiles: true
         pollingInterval: 5000
         numWorkers: 1
+        processedFileSuffix: ".processed_done"
     ```
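The effect of `processedFileSuffix` can be sketched with plain shell commands — illustrative only (the connector itself performs the rename when `keepFile` is `false`), and the directory and file names are hypothetical:

```shell
# Simulate a source directory with one input file.
mkdir -p ./in-dir
echo "hello" > ./in-dir/data.txt
# After processing, the connector renames the file instead of deleting it.
mv ./in-dir/data.txt ./in-dir/data.txt.processed_done
ls ./in-dir
```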
 
 ## Usage
diff --git a/site2/docs/io-mongo-sink.md b/site2/docs/io-mongo-sink.md
index b584dc4..ee9eb48 100644
--- a/site2/docs/io-mongo-sink.md
+++ b/site2/docs/io-mongo-sink.md
@@ -43,11 +43,10 @@ Before using the Mongo sink connector, you need to create a configuration file t
 * YAML
   
     ```yaml
-    {
+    configs:
         mongoUri: "mongodb://localhost:27017"
         database: "pulsar"
         collection: "messages"
         batchSize: 2
         batchTimeMs: 500
-    }
     ```
diff --git a/site2/docs/reference-cli-tools.md b/site2/docs/reference-cli-tools.md
index d3f3af1..b53ec81 100644
--- a/site2/docs/reference-cli-tools.md
+++ b/site2/docs/reference-cli-tools.md
@@ -168,7 +168,7 @@ Options
 |`-c` , `--cluster`|Cluster name||
 |`-cms` , `--configuration-metadata-store`|The configuration metadata store quorum connection string||
 |`--existing-bk-metadata-service-uri`|The metadata service URI of the existing BookKeeper cluster that you want to use||
-|`-h` , `--help`|Cluster name|false|
+|`-h` , `--help`|Help message|false|
 |`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16|
 |`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16|
 |`-uw` , `--web-service-url`|The web service URL for the new cluster||
@@ -190,14 +190,14 @@ Options
 
 |Flag|Description|Default|
 |---|---|---|
-|`--configuration-store`|Configuration store connection string||
-|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string||
+|`-cms`, `--configuration-metadata-store`|Configuration metadata store connection string||
+|`-md` , `--metadata-store`|Metadata store service URL||
 
 Example
 ```bash
 $ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk2 \
-  --configuration-store zk-0,zk-1,zk-2
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 ```
 
 ### `standalone`
@@ -467,7 +467,7 @@ Options
 |`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false|
 |`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload||
 |`-h`, `--help`|Help message|false|
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-m`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0|
 |`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0|
 |`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
@@ -528,7 +528,7 @@ Options
 |`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false|
 |`-d`, `--delay`|Mark messages with a given delay in seconds|0s|
 |`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-k`, `--encryption-key-name`|The public key name to encrypt payload||
 |`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload||
 |`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false|
@@ -585,7 +585,7 @@ Options
 |`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
 |`--auth-plugin`|Authentication plugin class name||
 |`--listener-name`|Listener name for the broker||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-h`, `--help`|Help message|false|
 |`-n`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0|
 |`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
@@ -616,7 +616,7 @@ Options
 |---|---|---|
 |`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
 |`--auth-plugin`|Authentication plugin class name||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-h`, `--help`|Help message|false|
 |`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0|
 |`-t`, `--num-topic`|The number of topics|1|
@@ -655,7 +655,7 @@ Options
 |`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0|
 |`--threads`|Number of threads writing|1|
 |`-w`, `--write-quorum`|Ledger write quorum|1|
-|`-zk`, `--zookeeperServers`|ZooKeeper connection string||
+|`-md`, `--metadata-store`|Metadata store service URL. For example: zk:my-zk:2181||
 
 
 ### `monitor-brokers`
@@ -721,8 +721,10 @@ $ pulsar-perf transaction options
 
 |Flag|Description|Default|
 |---|---|---|
+`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|N/A
+`--auth-plugin`|Authentication plugin class name.|N/A
 `-au`, `--admin-url`|Pulsar admin URL.|N/A
-`--conf-file`|Configuration file.|N/A
+`-cf`, `--conf-file`|Configuration file.|N/A
 `-h`, `--help`|Help messages.|N/A
 `-c`, `--max-connections`|Maximum number of TCP connections to a single broker.|100
 `-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers. |1
diff --git a/site2/docs/reference-configuration.md b/site2/docs/reference-configuration.md
index f431ee3..9e4dfc0 100644
--- a/site2/docs/reference-configuration.md
+++ b/site2/docs/reference-configuration.md
@@ -109,7 +109,7 @@ BookKeeper is a replicated log storage system that Pulsar uses for durable stora
 |readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096|
 |writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536|
 |useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g.: zookeeper service). When false, bookie will use its ip address for the registration.|false|
-|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`. <br><br>Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` does not take effect.<br><br>The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z9-0]), colons, dashes, and dots. <br><br>For more information about ` [...]
+|bookieId | If you want to custom a bookie ID or use a dynamic network address for the bookie, you can set the `bookieId`. <br /><br />Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`). If you set the `bookieId`, then the `useHostNameAsBookieID` does not take effect.<br /><br />The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z9-0]), colons, dashes, and dots. <br /><br />For more informa [...]
 |allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a configuration error. However, in unit tests, using an ephemeral port will address port conflict problems and allow running tests in parallel.|false|
 |enableLocalTransport|Whether the bookie is allowed to listen for the BookKeeper clients executed on the local JVM.|false|
 |disableServerSocketBind|Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients executed on the local JVM.|false|
@@ -136,8 +136,8 @@ Pulsar brokers are responsible for handling incoming messages from producers, di
 
 |Name|Description|Default|
 |---|---|---|
-|advertisedListeners|Specify multiple advertised listeners for the broker.<br><br>The format is `<listener_name>:pulsar://<host>:<port>`.<br><br>If there are multiple listeners, separate them with commas.<br><br>**Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`|/|
-|internalListenerName|Specify the internal listener name for the broker.<br><br>**Note**: the listener name must be contained in `advertisedListeners`.<br><br> If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/|
+|advertisedListeners|Specify multiple advertised listeners for the broker.<br /><br />The format is `<listener_name>:pulsar://<host>:<port>`.<br /><br />If there are multiple listeners, separate them with commas.<br /><br />**Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort`|/|
+|internalListenerName|Specify the internal listener name for the broker.<br /><br />**Note**: the listener name must be contained in `advertisedListeners`.<br /><br /> If the value of this configuration is empty, the broker uses the first listener as the internal listener.|/|
 |authenticateOriginalAuthData|  If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false|
 |enablePersistentTopics|  Whether persistent topics are enabled on the broker |true|
 |enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true|
@@ -145,9 +145,9 @@ Pulsar brokers are responsible for handling incoming messages from producers, di
 |exposePublisherStats|Whether to enable topic level metrics.|true|
 |statsUpdateFrequencyInSecs||60|
 |statsUpdateInitialDelayInSecs||60|
-|zookeeperServers|  Zookeeper quorum connection string  ||
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
-|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|metadataStoreUrl| Metadata store quorum connection string  ||
+|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds|300|
+|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) ||
 |brokerServicePort| Broker data port  |6650|
 |brokerServicePortTls|  Broker data port for TLS  |6651|
 |webServicePort|  Port to use to server HTTP request  |8080|
@@ -164,9 +164,11 @@ Pulsar brokers are responsible for handling incoming messages from producers, di
 |bindAddresses| Additional Hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`.  ||
 |advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.  ||
 |clusterName| Name of the cluster to which this broker belongs ||
+|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0|
 |brokerDeduplicationEnabled|  Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis.  |false|
 |brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes.  |10000|
 |brokerDeduplicationEntriesInterval|  The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000|
+|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120|
 |brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
 |brokerDeduplicationSnapshotFrequencyInSeconds| How often is the thread pool scheduled to check whether a snapshot needs to be taken. The value of `0` means it is disabled. |120| 
 |dispatchThrottlingRateInMsg| Dispatch throttling-limit of messages for a broker (per second). 0 means the dispatch throttling-limit is disabled. |0|
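
As a quick illustration (not part of the original file), the metadata store and deduplication parameters above could be combined in `conf/broker.conf` like this; the ZooKeeper addresses are placeholders and the remaining values are the defaults listed in the table:

```properties
# Metadata store quorum (placeholder addresses)
metadataStoreUrl=zk:zk-1:2181,zk-2:2181,zk-3:2181
metadataStoreCacheExpirySeconds=300

# Deduplication: reject messages that are already stored in the topic
brokerDeduplicationEnabled=true
brokerDeduplicationMaxNumberOfProducers=10000
brokerDeduplicationEntriesInterval=1000
brokerDeduplicationSnapshotIntervalSeconds=120
```
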
@@ -179,7 +181,7 @@ Pulsar brokers are responsible for handling incoming messages from producers, di
 |dispatchThrottlingRatePerSubscriptionInByte|Dispatch throttling-limit of bytes for a subscription. 0 means the dispatch throttling-limit is disabled.|0|
 |dispatchThrottlingRatePerReplicatorInMsg| The default messages per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message dispatch-throttling| 0 |
 |dispatchThrottlingRatePerReplicatorInByte| The default bytes per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message-byte dispatch-throttling| 0 | 
-|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000|
+|metadataStoreSessionTimeoutMillis| Metadata store session timeout in milliseconds |30000|
 |brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed  |60000|
 |skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false|
 |backlogQuotaCheckEnabled|  Enable backlog quota check. Enforces action on topic when the quota is reached  |true|
@@ -198,7 +200,7 @@ Pulsar brokers are responsible for handling incoming messages from producers, di
 |forceDeleteNamespaceAllowed| Enable you to delete a namespace forcefully. |false|
 |messageExpiryCheckIntervalInMinutes| The frequency of proactively checking and purging expired messages. |5|
 |brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to determine whether topics with compaction policies need compaction. |60|
-brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compression is triggered.<br><br>Set this threshold to 0 means disabling the compression check.|N/A
+brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater than this threshold, compaction is triggered.<br /><br />Setting this threshold to 0 disables the compaction check.|N/A
 |delayedDeliveryEnabled| Whether to enable the delayed delivery for messages. If disabled, messages will be immediately delivered and there will be no tracking overhead.|true|
 |delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000|
 |activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed.  |1000|
@@ -207,7 +209,10 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |statusFilePath|  Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
 |preferLaterVersions| If true (and `ModularLoadManagerImpl` is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles)  |false|
 |maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or negative number to disable the check|0|
-|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` instead. |false|
+| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed on a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
 |tlsCertificateFilePath|  Path for the TLS certificate file ||
 |tlsKeyFilePath|  Path for the TLS private key file ||
 |tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. ||
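
To make the limit semantics above concrete, here is a `conf/broker.conf` sketch with example (non-default) values; setting any value back to 0 disables that check again:

```properties
# Example per-topic and per-subscription limits; 0 (the default) disables each check
maxSubscriptionsPerTopic=1000
maxProducersPerTopic=1000
maxConsumersPerTopic=2000
maxConsumersPerSubscription=500
```
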
@@ -226,6 +231,10 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |brokerClientTlsTrustStorePassword| TLS TrustStore password for internal client, used by the internal client to authenticate with Pulsar brokers ||
 |brokerClientTlsCiphers| Specify the TLS ciphers the internal client uses to negotiate during the TLS handshake (a comma-separated list of ciphers), e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]||
 |brokerClientTlsProtocols|Specify the TLS protocols the broker uses to negotiate during the TLS handshake (a comma-separated list of protocol names), e.g. `TLSv1.3`, `TLSv1.2` ||
+| metadataStoreBatchingEnabled | Enable metadata operations batching. | true |
+| metadataStoreBatchingMaxDelayMillis | Maximum delay to impose on batching grouping. | 5 |
+| metadataStoreBatchingMaxOperations | Maximum number of operations to include in a singular batch. | 1000 |
+| metadataStoreBatchingMaxSizeKb | Maximum size of a batch. | 128 |
 |ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
 |tokenSettingPrefix| Configure the prefix of the token-related settings, such as `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. ||
 |tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`.  Note: key file must be DER-encoded.||
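
The four batching parameters above act together: judging by their names and defaults, a batch is presumably flushed once it accumulates `metadataStoreBatchingMaxOperations` operations or `metadataStoreBatchingMaxSizeKb` of data, or after `metadataStoreBatchingMaxDelayMillis` has elapsed. Written as a `conf/broker.conf` fragment with the table's defaults:

```properties
metadataStoreBatchingEnabled=true
metadataStoreBatchingMaxDelayMillis=5
metadataStoreBatchingMaxOperations=1000
metadataStoreBatchingMaxSizeKb=128
```
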
@@ -250,7 +259,11 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |athenzDomainNames| Supported Athenz provider domain names(comma separated) for authentication  ||
 |exposePreciseBacklogInPrometheus| Enable exposing precise backlog stats. Set to false to calculate from the published and consumed counters instead, which is more efficient but may be inaccurate. |false|
 |schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory|
-|isSchemaValidationEnforced|Enforce schema validation on following cases: if a producer without a schema attempts to produce to a topic with schema, the producer will be failed to connect. PLEASE be carefully on using this, since non-java clients don't support schema. If this setting is enabled, then non-java clients fail to produce.|false|
+|isSchemaValidationEnforced| Whether to enforce schema validation. When schema validation is enabled, a producer without a schema that attempts to produce messages to a topic with a schema is rejected and disconnected.|false|
+|isAllowAutoUpdateSchemaEnabled|Allow schema to be auto updated at broker level.|true|
+|schemaCompatibilityStrategy| The schema compatibility strategy at the broker level; see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|FULL|
+|systemTopicSchemaCompatibilityStrategy| The schema compatibility strategy used for system topics; see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|ALWAYS_COMPATIBLE|
+| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
 |offloadersDirectory|The directory for all the offloader implementations.|./offloaders|
 |bookkeeperMetadataServiceUri| Metadata service uri that bookkeeper is used for loading corresponding metadata driver and resolving its metadata service location. This value can be fetched using `bookkeeper shell whatisinstanceid` command in BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service uri list can also be semicolon separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers ||
 |bookkeeperClientAuthenticationPlugin|  Authentication plugin to use when connecting to bookies ||
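
For reference, the schema-related settings above can be grouped in `conf/broker.conf` as in this sketch (values are the defaults from the table):

```properties
isSchemaValidationEnforced=false
isAllowAutoUpdateSchemaEnabled=true
schemaCompatibilityStrategy=FULL
systemTopicSchemaCompatibilityStrategy=ALWAYS_COMPATIBLE
```
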
@@ -288,6 +301,7 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true: <ul><li>The max rollover time has been reached</li><li>The max entries have been written to the ledger</li><li>The max ledger size has been written to the ledger</li></ul>|50000|
 |managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollover for a topic  |10|
 |managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240|
+|managedLedgerInactiveLedgerRolloverTimeSeconds| Time after which to roll over the ledger for an inactive topic. The value of `0` disables the rollover. |0|
 |managedLedgerCursorMaxEntriesPerLedger|  Max number of entries to append to a cursor ledger  |50000|
 |managedLedgerCursorRolloverTimeInSeconds|  Max time before triggering a rollover on a cursor ledger  |14400|
 |managedLedgerMaxUnackedRangesToPersist|  Max number of “acknowledgment holes” that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing in “ranges” of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redel [...]
@@ -295,7 +309,7 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |loadBalancerEnabled| Enable load balancer  |true|
 |loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection|
 |loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger load report update  |10|
-|loadBalancerReportUpdateMaxIntervalMinutes|  maximum interval to update load report  |15|
+|loadBalancerReportUpdateMaxIntervalMinutes|  Maximum interval to update load report  |15|
 |loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect  |1|
 |loadBalancerSheddingIntervalMinutes| Load shedding interval. Broker periodically checks whether some traffic should be offload from some over-loaded broker to other under-loaded brokers  |30|
 |loadBalancerSheddingGracePeriodMinutes|  Prevent the same topics to be shed and moved to other broker more than once within this timeframe |30|
@@ -310,12 +324,11 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |loadBalancerNamespaceBundleMaxMsgRate| maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered  |1000|
 |loadBalancerNamespaceBundleMaxBandwidthMbytes| maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered  |100|
 |loadBalancerNamespaceMaximumBundles| maximum number of bundles in a namespace  |128|
+|loadBalancerLoadSheddingStrategy | The load shedding strategy of the load balancer. <br /><br />Available values: <li>`org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`</li><li>`org.apache.pulsar.broker.loadbalance.impl.OverloadShedder`</li><li>`org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder`</li><br />For a comparison of the shedding strategies, see [here](administration-load-balance.md#shed-load-automatically).|`org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`|
 |replicationMetricsEnabled| Enable replication metrics  |true|
 |replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links.  |16|
 |replicationProducerQueueSize|  Replicator producer queue size  |1000|
 |replicatorPrefix|  Replicator prefix used for replicator producer name and cursor name |pulsar.repl|
-|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false|
-|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60|
 |transactionCoordinatorEnabled|Whether to enable transaction coordinator in broker.|true|
 |transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider|
 |defaultRetentionTimeInMinutes| Default message retention time  |0|
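
Putting the shedding-related settings together, a `conf/broker.conf` sketch selecting the default `ThresholdShedder` strategy might read (the intervals shown are the defaults from the table above):

```properties
loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder
loadBalancerSheddingIntervalMinutes=30
loadBalancerSheddingGracePeriodMinutes=30
```
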
@@ -357,14 +370,32 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 | preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false |
 | lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is now when recovered ledger is ready to write we're not sure if all old consumers' last mark delete position(ack position) can be recovered or not. So user can make the trade off or have custom logic in application to checkpoint consumer state.| false |  
 |haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
+| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects requests to create new namespaces. The default value 0 disables the check. |0|
 | maxTopicsPerNamespace | The maximum number of persistent topics that can be created in a namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by producers or consumers. The default value 0 disables the check. | 0 |
 |subscriptionTypesEnabled| Enable all subscription types, which are exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared |
-| managedLedgerInfoCompressionType | Compression type of managed ledger information. <br><br>Available options are `NONE`, `LZ4`, `ZLIB`, `ZSTD`, and `SNAPPY`). <br><br>If this value is `NONE` or invalid, the `managedLedgerInfo` is not compressed. <br><br>**Note** that after enabling this configuration, if you want to degrade a broker, you need to change the value to `NONE` and make sure all ledger metadata is saved without compression. | None |
-| additionalServlets | Additional servlet name. <br><br>If you have multiple additional servlets, separate them by commas. <br><br>For example, additionalServlet_1, additionalServlet_2 | N/A |
+| managedLedgerInfoCompressionType | Compression type of managed ledger information. <br /><br />Available options are `NONE`, `LZ4`, `ZLIB`, `ZSTD`, and `SNAPPY`. <br /><br />If this value is `NONE` or invalid, the `managedLedgerInfo` is not compressed. <br /><br />**Note** that after enabling this configuration, if you want to downgrade a broker, you need to change the value to `NONE` and make sure all ledger metadata is saved without compression. | None |
+| additionalServlets | Additional servlet name. <br /><br />If you have multiple additional servlets, separate them by commas. <br /><br />For example, additionalServlet_1, additionalServlet_2 | N/A |
 | additionalServletDirectory | Location of broker additional servlet NAR directory | ./brokerAdditionalServlet |
 | brokerEntryMetadataInterceptors | Set broker entry metadata interceptors.<br /><br />Multiple interceptors should be separated by commas. <br /><br />Available values:<li>org.apache.pulsar.common.intercept.AppendBrokerTimestampMetadataInterceptor</li><li>org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor</li> <br /><br />Example<br />brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendBrokerTimestampMetadataInterceptor, org.apache.pulsar.common.inter [...]
 | enableExposingBrokerEntryMetadataToClient|Whether to expose broker entry metadata to client or not.<br /><br />Available values:<li>true</li><li>false</li><br />Example<br />enableExposingBrokerEntryMetadataToClient=true  | false |
 
+
+#### Deprecated parameters of Broker
+The following parameters have been deprecated in the `conf/broker.conf` file.
+
+|Name|Description|Default|
+|---|---|---|
+|backlogQuotaDefaultLimitGB|  Use `backlogQuotaDefaultLimitBytes` instead. |-1|
+|brokerServicePurgeInactiveFrequencyInSeconds|  Use `brokerDeleteInactiveTopicsFrequencySeconds` instead.|60|
+|tlsEnabled|  Use `webServicePortTls` and `brokerServicePortTls` instead. |false|
+|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages. Use `brokerClientTlsEnabled` instead. |false|
+|subscriptionKeySharedEnable|  Whether to enable the Key_Shared subscription. Use `subscriptionTypesEnabled` instead. |true|
+|zookeeperServers|  Zookeeper quorum connection string. Use `metadataStoreUrl` instead.  |N/A|
+|configurationStoreServers| Configuration store connection string (as a comma-separated list). Use `configurationMetadataStoreUrl` instead. |N/A|
+|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000|
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300|
+
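+Since every deprecated key in the table above has a direct replacement, the rename step of a migration can be scripted. The following Python sketch is illustrative only: the key mapping comes from the table, everything else (function name, example lines) is hypothetical.

```python
# Sketch: rewrite deprecated broker.conf keys to their replacements.
# The mapping is taken from the deprecated-parameters table above.
# Note: some keys need more than a rename -- e.g. the value of
# backlogQuotaDefaultLimitGB must also be converted from GB to bytes,
# and subscriptionTypesEnabled takes a list rather than a boolean.
DEPRECATED = {
    "zookeeperServers": "metadataStoreUrl",
    "configurationStoreServers": "configurationMetadataStoreUrl",
    "zooKeeperSessionTimeoutMillis": "metadataStoreSessionTimeoutMillis",
    "zooKeeperCacheExpirySeconds": "metadataStoreCacheExpirySeconds",
    "brokerServicePurgeInactiveFrequencyInSeconds": "brokerDeleteInactiveTopicsFrequencySeconds",
}

def migrate(lines):
    """Replace deprecated keys, leaving comments and other lines untouched."""
    out = []
    for line in lines:
        key, sep, value = line.partition("=")
        if sep and key.strip() in DEPRECATED:
            out.append(f"{DEPRECATED[key.strip()]}={value}")
        else:
            out.append(line)
    return out

print(migrate(["zookeeperServers=zk-1:2181", "brokerServicePort=6650"]))
# → ['metadataStoreUrl=zk-1:2181', 'brokerServicePort=6650']
```
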
+
 ## Client
 
 You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library.
@@ -438,9 +469,9 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |Name|Description|Default|
 |---|---|---|
 |authenticateOriginalAuthData|  If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false|
-|zookeeperServers|  The quorum connection string for local ZooKeeper  ||
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
-|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|metadataStoreUrl|  The quorum connection string for local metadata store  ||
+|metadataStoreCacheExpirySeconds| Metadata store cache expiry time in seconds|300|
+|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) ||
 |brokerServicePort| The port on which the standalone broker listens for connections |6650|
 |webServicePort|  The port used by the standalone broker for HTTP requests  |8080|
 |bindAddress| The hostname or IP address on which the standalone service binds  |0.0.0.0|
@@ -452,8 +483,8 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A|
 |clusterName| The name of the cluster that this broker belongs to. |standalone|
 | failureDomainsEnabled | Enable cluster's failure-domain which can distribute brokers into logical region. | false |
-|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000|
-|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30|
+|metadataStoreSessionTimeoutMillis| Metadata store session timeout, in milliseconds. |30000|
+|metadataStoreOperationTimeoutSeconds|Metadata store operation timeout in seconds.|30|
 |brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000|
 |skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false|
 |backlogQuotaCheckEnabled|  Enable the backlog quota check, which enforces a specified action when the quota is reached.  |true|
@@ -467,7 +498,6 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed.  |1000|
 | subscriptionExpirationTimeMinutes | How long to delete inactive subscriptions from last consumption. When it is set to 0, inactive subscriptions are not deleted automatically | 0 |
 | subscriptionRedeliveryTrackerEnabled | Enable subscription message redelivery tracker to send redelivery count to consumer. | true |
-|subscriptionKeySharedEnable|Whether to enable the Key_Shared subscription.|true|
 | subscriptionKeySharedUseConsistentHashing | In Key_Shared subscription type, with default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false |
 | subscriptionKeySharedConsistentHashingReplicaPoints | In Key_Shared subscription type, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 |
 | subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscription |5 |
@@ -484,8 +514,6 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 | maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit is reached, the broker stops dispatching messages to all shared subscriptions with a higher number of unacknowledged messages until subscriptions start acknowledging messages back and the unacknowledged message count drops to limit/2. When the value is set to 0, the unacknowledged message limit check is disabled and the broker does not block dispatchers. | 0 |
 | maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches maxUnackedMessagesPerBroker limit, it blocks subscriptions which have higher unacknowledged messages than this percentage limit and subscription does not receive any new messages until that subscription acknowledges messages back. | 0.16 |
 | unblockStuckSubscriptionEnabled|Broker periodically checks if subscription is stuck and unblock if flag is enabled.|false|
-|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or negative number to disable the check|0|
-|zookeeperSessionExpiredPolicy|There are two policies when ZooKeeper session expired happens, "shutdown" and "reconnect". If it is set to "shutdown" policy, when ZooKeeper session expired happens, the broker is shutdown. If it is set to "reconnect" policy, the broker tries to reconnect to ZooKeeper server and re-register metadata to ZooKeeper. Note: the "reconnect" policy is an experiment feature.|shutdown|
 | topicPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent check. (Disable publish throttling with value 0) | 10|
 | brokerPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks broker publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent check. When the value is set to 0, publish throttling is disabled. |50 |
 | brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages allowed to publish for a broker if the message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0|
@@ -515,10 +543,15 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 | numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topic. | 8 |
 | enablePersistentTopics | Enable broker to load persistent topics. | true |
 | enableNonPersistentTopics | Enable broker to load non-persistent topics. | true |
-| maxProducersPerTopic | Maximum number of producers allowed to connect to topic. Once this limit reaches, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, maxProducersPerTopic-limit check is disabled. | 0 |
-| maxConsumersPerTopic | Maximum number of consumers allowed to connect to topic. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, maxConsumersPerTopic-limit check is disabled. | 0 |
-| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to subscription. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, maxConsumersPerSubscription-limit check is disabled. | 0 |
+| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed on a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
 | maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 |
+| metadataStoreBatchingEnabled | Enable metadata operations batching. | true |
+| metadataStoreBatchingMaxDelayMillis | Maximum delay to impose on batching grouping. | 5 |
+| metadataStoreBatchingMaxOperations | Maximum number of operations to include in a singular batch. | 1000 |
+| metadataStoreBatchingMaxSizeKb | Maximum size of a batch. | 128 |
 | tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, check the TLS certificate on every new connection. | 300 |
 | tlsCertificateFilePath | Path for the TLS certificate file. | |
 | tlsKeyFilePath | Path for the TLS private key file. | |
@@ -544,8 +577,8 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 | brokerClientTlsProtocols | Specify the TLS protocols that the broker uses to negotiate during TLS handshake. | |
 | systemTopicEnabled | Enable/Disable system topics. | false |
 | topicLevelPoliciesEnabled | Enable or disable topic level policies. Topic level policies depends on the system topic. Please enable the system topic first. | false |
+| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
 | proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with role as proxyRoles, it demands to see a valid original principal. | |
-| authenticateOriginalAuthData | If this flag is set, the broker authenticates the original Auth data. Otherwise, it just accepts the originalPrincipal and authorizes it (if required). | false |
 |authenticationEnabled| Enable authentication for the broker. |false|
 |authenticationProviders| A comma-separated list of class names for authentication providers. |false|
 |authorizationEnabled|  Enforce authorization in brokers. |false|
@@ -646,7 +679,7 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |loadBalancerAutoBundleSplitEnabled|    |false|
 | loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true |
 |loadBalancerNamespaceBundleMaxTopics|    |1000|
-|loadBalancerNamespaceBundleMaxSessions|    |1000|
+|loadBalancerNamespaceBundleMaxSessions|  Maximum number of sessions (producers + consumers) in a bundle; exceeding it triggers a bundle split. <br />To disable the threshold check, set the value to -1.  |1000|
 |loadBalancerNamespaceBundleMaxMsgRate|   |1000|
 |loadBalancerNamespaceBundleMaxBandwidthMbytes|   |100|
 |loadBalancerNamespaceMaximumBundles|   |128|
@@ -669,16 +702,31 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |defaultRetentionSizeInMB|    |0|
 |keepAliveIntervalSeconds|    |30|
 |haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
-|bookieId | If you want to custom a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`. <br><br>Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).<br><br> The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z9-0]), colons, dashes, and dots. <br><br>For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/|
+|bookieId | If you want to customize the bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`. <br /><br />A bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).<br /><br /> The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots. <br /><br />For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/|
 | maxTopicsPerNamespace | The maximum number of persistent topics that can be created in a namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by producers or consumers. The default value 0 disables the check. | 0 |
+| metadataStoreConfigPath | The configuration file path of the local metadata store. Standalone Pulsar uses [RocksDB](http://rocksdb.org/) as the local metadata store. The format is `/xxx/xx/rocksdb.ini`. |N/A|
+|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory|
+|isSchemaValidationEnforced| Whether to enforce schema validation. When schema validation is enabled, a producer without a schema that attempts to produce messages to a topic with a schema is rejected and disconnected.|false|
+|isAllowAutoUpdateSchemaEnabled|Allow schema to be auto updated at broker level.|true|
+|schemaCompatibilityStrategy| The schema compatibility strategy at broker level, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|FULL|
+|systemTopicSchemaCompatibilityStrategy| The schema compatibility strategy is used for system topics, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|ALWAYS_COMPATIBLE|
+
+#### Deprecated parameters of standalone Pulsar
+The following parameters have been deprecated in the `conf/standalone.conf` file.
+
+|Name|Description|Default|
+|---|---|---|
+|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds. Use `metadataStoreOperationTimeoutSeconds` instead. |30|
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead. |300|
+|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000|
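For example, a `conf/standalone.conf` that previously used the deprecated ZooKeeper parameters can be migrated to their metadata store replacements as follows (the values shown are the defaults from the tables above):

```properties
# Deprecated ZooKeeper-era settings:
#   zooKeeperOperationTimeoutSeconds=30
#   zooKeeperCacheExpirySeconds=300
#   zooKeeperSessionTimeoutMillis=30000
# Replacements:
metadataStoreOperationTimeoutSeconds=30
metadataStoreCacheExpirySeconds=300
metadataStoreSessionTimeoutMillis=30000
```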
 
 ## WebSocket
 
 |Name|Description|Default|
 |---|---|---|
-|configurationStoreServers    |||
-|zooKeeperSessionTimeoutMillis|   |30000|
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
+|configurationMetadataStoreUrl    |||
+|metadataStoreSessionTimeoutMillis|Metadata store session timeout in milliseconds.  |30000|
+|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds|300|
 |serviceUrl|||
 |serviceUrlTls|||
 |brokerServiceUrl|||
@@ -699,6 +747,14 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |tlsKeyFilePath |||
 |tlsTrustCertsFilePath|||
 
+#### Deprecated parameters of WebSocket
+The following parameters have been deprecated in the `conf/websocket.conf` file.
+
+|Name|Description|Default|
+|---|---|---|
+|zooKeeperSessionTimeoutMillis|The ZooKeeper session timeout in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000|
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300|
+
 ## Pulsar proxy
 
 The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file.
@@ -707,16 +763,16 @@ The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be config
 |Name|Description|Default|
 |---|---|---|
 |forwardAuthorizationCredentials| Forward client authorization credentials to Broker for re-authorization, and make sure authentication is enabled for this to take effect. |false|
-|zookeeperServers|  The ZooKeeper quorum connection string (as a comma-separated list)  ||
-|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|metadataStoreUrl| Metadata store quorum connection string (as a comma-separated list)  ||
+|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) ||
 | brokerServiceURL | The service URL pointing to the broker cluster. | |
 | brokerServiceURLTLS | The TLS service URL pointing to the broker cluster | |
 | brokerWebServiceURL | The Web service URL pointing to the broker cluster | |
 | brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker cluster | |
 | functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | |
 | functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | |
-|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
+|metadataStoreSessionTimeoutMillis| Metadata store session timeout (in milliseconds) |30000|
+|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds|300|
 |advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostname()` is used.|N/A|
 |servicePort| The port to use for server binary Protobuf requests |6650|
 |servicePortTls|  The port to use to server binary Protobuf TLS requests  |6651|
@@ -734,7 +790,6 @@ The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be config
 |superUserRoles|  Role names that are treated as “super-users,” meaning that they will be able to perform all admin operations. ||
 |maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy will reject requests beyond that. |10000|
 |maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy will error out requests beyond that. |50000|
-|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false|
 |tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar brokers. |false|
 | tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set 0, check TLS certificate every new connection. | 300 |
 |tlsCertificateFilePath|  Path for the TLS certificate file ||
@@ -755,6 +810,15 @@ The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be config
 | tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token need contains this parameter.| |
 |haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
 
+#### Deprecated parameters of Pulsar proxy
+The following parameters have been deprecated in the `conf/proxy.conf` file.
+
+|Name|Description|Default|
+|---|---|---|
+|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false|
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds). Use `metadataStoreSessionTimeoutMillis` instead. |30000|
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300|
+
 ## ZooKeeper
 
 ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available:
@@ -786,4 +850,4 @@ server.2=zk2.us-west.example.com:2888:3888
 server.3=zk3.us-west.example.com:2888:3888
 ```
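In addition to the `server.N` lines above, each ZooKeeper server in a replicated ensemble needs a `myid` file in its data directory whose content matches the `N` in its own `server.N` line. A minimal sketch (the data directory path is illustrative, not a Pulsar default):

```shell
# Each ZooKeeper server identifies itself via dataDir/myid; its content must
# match the N of that server's server.N line in conf/zookeeper.conf.
ZK_DATA_DIR=/tmp/zk-data
mkdir -p "$ZK_DATA_DIR"
echo 1 > "$ZK_DATA_DIR/myid"   # use "2" on zk2.us-west, "3" on zk3.us-west
cat "$ZK_DATA_DIR/myid"
```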
 
-> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration
+> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration
\ No newline at end of file
diff --git a/site2/docs/reference-metrics.md b/site2/docs/reference-metrics.md
index e991610..b464ef1 100644
--- a/site2/docs/reference-metrics.md
+++ b/site2/docs/reference-metrics.md
@@ -124,7 +124,7 @@ All the BookKeeper client metric are labelled with the following label:
 
 | Name | Type | Description |
 |---|---|---|
-| bookkeeper_server_BOOKIE_QUARANTINE_count | Counter | The number of bookie clients to be quarantined. |
+| pulsar_managedLedger_client_bookkeeper_client_BOOKIE_QUARANTINE | Counter | The number of bookie clients to be quarantined.<br /><br />If you want to expose this metric, set `bookkeeperClientExposeStatsToPrometheus` to `true` in the `broker.conf` file.|
 
 ### Namespace metrics
 
@@ -190,6 +190,7 @@ All the topic metrics are labelled with the following labels:
 | pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. |
 | pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). |
 | pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). |
+| pulsar_publish_rate_limit_times | Gauge | The number of times the publish rate limit is triggered. |
 | pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). |
 | pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). |
 | pulsar_storage_size | Gauge | The total storage size of the topics in this topic owned by this broker (bytes). |
@@ -273,14 +274,16 @@ All the managedLedger metrics are labelled with the following labels:
 | pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added |
 | pulsar_ml_AddEntryWithReplicasBytesRate | Gauge | The bytes/s rate of messages added with replicas |
 | pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed |
-| pulsar_ml_AddEntryLatencyBuckets | Histogram | The add entry latency of a ledger with a given quantile (threshold).<br> Available quantile: <br><ul><li> quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]</li> <li>quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]</li><li>quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]</li><li>quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]</li><li>quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]</li>< [...]
-| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The add entry latency > 1s |
+| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including time spent on waiting in queue on the broker side.<br> Available quantile: <br><ul><li> quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]</li> <li>quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]</li><li>quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]</li><li>quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]</li><li>qu [...]
+| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second |
 | pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added |
 | pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded |
-| pulsar_ml_EntrySizeBuckets | Histogram | The add entry size of a ledger with given quantile.<br> Available quantile: <br><ul><li>quantile="0.0_128.0" is EntrySize between (0byte, 128byte]</li><li>quantile="128.0_512.0" is EntrySize between (128byte, 512byte]</li><li>quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]</li><li>quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]</li><li>quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]</li><li>quantile="4096.0_16384.0" [...]
-| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge  | The add entry size > 1MB |
-| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with given quantile. <br> Available quantile: <br><ul><li>quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]</li><li>quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]</li><li>quantile="1.0_5.0" is EntrySize between (1ms, 5ms]</li><li>quantile="5.0_10.0" is EntrySize between (5ms, 10ms]</li><li>quantile="10.0_20.0" is EntrySize between (10ms, 20ms]</li><li>quantile="20.0_50.0" is EntrySize between (20ms, 5 [...]
-| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The ledger switch latency > 1s |
+| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.<br> Available quantile: <br><ul><li>quantile="0.0_128.0" is EntrySize between (0byte, 128byte]</li><li>quantile="128.0_512.0" is EntrySize between (128byte, 512byte]</li><li>quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]</li><li>quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]</li><li>quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]</li><li>quantile="4096.0_1638 [...]
+| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge  | The number of times the EntrySize is larger than 1MB |
+| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile. <br> Available quantile: <br><ul><li>quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]</li><li>quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]</li><li>quantile="1.0_5.0" is EntrySize between (1ms, 5ms]</li><li>quantile="5.0_10.0" is EntrySize between (5ms, 10ms]</li><li>quantile="10.0_20.0" is EntrySize between (10ms, 20ms]</li><li>quantile="20.0_50.0" is EntrySize between (20ms, [...]
+| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second |
+| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for bookie client to persist a ledger entry from broker to BookKeeper service with a given quantile (threshold). <br /> Available quantile: <br /><ul><li> quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]</li> <li>quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]</li><li>quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]</li><li>quantile="5.0_10.0" is LedgerAddEntryLatency bet [...]
+| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second |
 | pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete ops/s |
 | pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers |
 | pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read |
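The `quantile` labels on the histogram metrics above can be decoded mechanically. The following sketch maps an observed add-entry latency to its bucket label; the bounds up to 20ms come from the table, the 1000ms bound matches the stated 1-second OVERFLOW threshold, and the intermediate 50ms/100ms/200ms bounds are assumptions (the table rows are truncated in this view):

```java
// Illustrative only, not Pulsar source code: map a latency in milliseconds to
// the quantile label used by pulsar_ml_AddEntryLatencyBuckets.
public class LatencyBuckets {
    // Bucket upper bounds in ms; 50/100/200 are assumed, 1000 is the overflow bound.
    static final double[] BOUNDS = {0.5, 1.0, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0, 1000.0};

    static String bucketLabel(double latencyMs) {
        double lower = 0.0;
        for (double upper : BOUNDS) {
            if (latencyMs <= upper) {
                return "quantile=\"" + lower + "_" + upper + "\"";
            }
            lower = upper;
        }
        // Latencies longer than 1s are counted by pulsar_ml_AddEntryLatencyBuckets_OVERFLOW.
        return "OVERFLOW";
    }

    public static void main(String[] args) {
        System.out.println(bucketLabel(3.2));   // quantile="1.0_5.0"
        System.out.println(bucketLabel(1500));  // OVERFLOW
    }
}
```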
diff --git a/site2/docs/schema-evolution-compatibility.md b/site2/docs/schema-evolution-compatibility.md
index ba33a8b..eee1b3e 100644
--- a/site2/docs/schema-evolution-compatibility.md
+++ b/site2/docs/schema-evolution-compatibility.md
@@ -628,7 +628,7 @@ In some data formats, for example, Avro, you can define fields with default valu
 
 > **Tip**
 > 
-> You can set schema compatibility check strategy at namespace or broker level. For how to set the strategy, see [here](schema-manage.md/#set-schema-compatibility-check-strategy).
+> You can set the schema compatibility check strategy at the topic, namespace, or broker level. For how to set the strategy, see [here](schema-manage.md/#set-schema-compatibility-check-strategy).
 
 ## Schema verification
 
diff --git a/site2/docs/schema-manage.md b/site2/docs/schema-manage.md
index f53cf44..eb69b67 100644
--- a/site2/docs/schema-manage.md
+++ b/site2/docs/schema-manage.md
@@ -809,23 +809,124 @@ To use your custom schema storage implementation, perform the following steps.
 
 ## Set schema compatibility check strategy 
 
-You can set [schema compatibility check strategy](schema-evolution-compatibility.md#schema-compatibility-check-strategy) at namespace or broker level. 
+You can set the [schema compatibility check strategy](schema-evolution-compatibility.md#schema-compatibility-check-strategy) at the topic, namespace, or broker level.
 
-- If you set schema compatibility check strategy at both namespace or broker level, it uses the strategy set for the namespace level.
+The schema compatibility check strategy can be set at different levels, with the following order of precedence: topic level > namespace level > broker level.
 
-- If you do not set schema compatibility check strategy at both namespace or broker level, it uses the `FULL` strategy.
+- If you set the strategy at both topic and namespace level, it uses the topic-level strategy. 
 
-- If you set schema compatibility check strategy at broker level rather than namespace level, it uses the strategy set for the broker level.
+- If you set the strategy at both namespace and broker level, it uses the namespace-level strategy.
 
-- If you set schema compatibility check strategy at namespace level rather than broker level, it uses the strategy set for the namespace level.
+- If you do not set the strategy at any level, it uses the `FULL` strategy. For all available values, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy).
 
-### Namespace 
+
+### Topic level
+
+To set a schema compatibility check strategy at the topic level, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the [`pulsar-admin topicPolicies set-schema-compatibility-strategy`](https://pulsar.apache.org/tools/pulsar-admin/) command.
+
+```shell
+pulsar-admin topicPolicies set-schema-compatibility-strategy <strategy> <topicName>
+```
+<!--REST API-->
+
+Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=[[pulsar:version_number]]}
+
+<!--Java Admin API-->
+
+```java
+void setSchemaCompatibilityStrategy(String topic, SchemaCompatibilityStrategy strategy)
+```
+
+Here is an example of setting a schema compatibility check strategy at the topic level.
+
+```java
+PulsarAdmin admin = …;
+
+admin.topicPolicies().setSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", SchemaCompatibilityStrategy.ALWAYS_INCOMPATIBLE);
+```
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+<br />
+To get the topic-level schema compatibility check strategy, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the [`pulsar-admin topicPolicies get-schema-compatibility-strategy`](https://pulsar.apache.org/tools/pulsar-admin/) command.
+
+```shell
+pulsar-admin topicPolicies get-schema-compatibility-strategy <topicName>
+```
+<!--REST API-->
+
+Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=[[pulsar:version_number]]}
+
+<!--Java Admin API-->
+
+```java
+SchemaCompatibilityStrategy getSchemaCompatibilityStrategy(String topic, boolean applied)
+```
+
+Here is an example of getting the topic-level schema compatibility check strategy.
+
+```java
+PulsarAdmin admin = …;
+
+// get the current applied schema compatibility strategy
+admin.topicPolicies().getSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", true);
+
+// only get the schema compatibility strategy from topic policies
+admin.topicPolicies().getSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", false);
+```
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+<br />
+To remove the topic-level schema compatibility check strategy, use one of the following methods.
+
+<!--DOCUSAURUS_CODE_TABS-->
+
+<!--Admin CLI-->
+
+Use the [`pulsar-admin topicPolicies remove-schema-compatibility-strategy`](https://pulsar.apache.org/tools/pulsar-admin/) command.
+
+```shell
+pulsar-admin topicPolicies remove-schema-compatibility-strategy <topicName>
+```
+<!--REST API-->
+
+Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=[[pulsar:version_number]]}
+
+<!--Java Admin API-->
+
+```java
+void removeSchemaCompatibilityStrategy(String topic)
+```
+
+Here is an example of removing the topic-level schema compatibility check strategy.
+
+```java
+PulsarAdmin admin = …;
+
+admin.topicPolicies().removeSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic");
+```
+
+<!--END_DOCUSAURUS_CODE_TABS-->
+
+
+### Namespace level
 
 You can set schema compatibility check strategy at namespace level using one of the following methods.
 
 <!--DOCUSAURUS_CODE_TABS-->
 
-<!--pulsar-admin-->
+<!--Admin CLI-->
 
 Use the [`pulsar-admin namespaces set-schema-compatibility-strategy`](https://pulsar.apache.org/tools/pulsar-admin/) command. 
 
@@ -836,7 +937,7 @@ pulsar-admin namespaces set-schema-compatibility-strategy options
 
 Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/schemaCompatibilityStrategy?version=[[pulsar:version_number]]}
 
-<!--Java-->
+<!--Java Admin API-->
 
 Use the [`setSchemaCompatibilityStrategy`](https://pulsar.apache.org/api/admin/) method.
 
@@ -846,7 +947,7 @@ admin.namespaces().setSchemaCompatibilityStrategy("test", SchemaCompatibilityStr
 
 <!--END_DOCUSAURUS_CODE_TABS-->
 
-### Broker 
+### Broker level
 
 You can set schema compatibility check strategy at broker level by setting `schemaCompatibilityStrategy` in [`broker.conf`](https://github.com/apache/pulsar/blob/f24b4890c278f72a67fe30e7bf22dc36d71aac6a/conf/broker.conf#L1240) or [`standalone.conf`](https://github.com/apache/pulsar/blob/master/conf/standalone.conf) file.
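For example, the broker-level setting is a single line in `conf/broker.conf` (the `FORWARD` value is illustrative; any strategy from the list linked above works):

```properties
schemaCompatibilityStrategy=FORWARD
```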
 
diff --git a/site2/docs/security-tls-keystore.md b/site2/docs/security-tls-keystore.md
index b95d988..c9bc9d0 100644
--- a/site2/docs/security-tls-keystore.md
+++ b/site2/docs/security-tls-keystore.md
@@ -158,10 +158,10 @@ Optional settings that may worth consider:
     By default, it is not set.
 ### Configuring Clients
 
-This is similar to [TLS encryption configuing for client with PEM type](security-tls-transport.md#Client configuration).
-For a a minimal configuration, user need to provide the TrustStore information.
+This is similar to [configuring TLS encryption for clients with the PEM type](security-tls-transport.md#client-configuration).
+For a minimal configuration, you need to provide the TrustStore information.
 
-e.g. 
+For example:
 1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
 
     ```properties
@@ -188,14 +188,16 @@ e.g.
     ```
 
 1. for java admin client
-```java
-    PulsarAdmin amdin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
-                .useKeyStoreTls(true)
-                .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
-                .tlsTrustStorePassword("clientpw")
-                .allowTlsInsecureConnection(false)
-                .build();
-```
+    ```java
+        PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
+            .useKeyStoreTls(true)
+            .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
+            .tlsTrustStorePassword("clientpw")
+            .allowTlsInsecureConnection(false)
+            .build();
+    ```
+
+> **Note:** Please configure `tlsTrustStorePath` when you set `useKeyStoreTls` to `true`.
 
 ## TLS authentication with KeyStore configure
 
@@ -244,7 +246,7 @@ webSocketServiceEnabled=false
 
 Besides the TLS encryption configuration, the main work is configuring the KeyStore, which contains a valid CN as the client role, for the client.
 
-e.g. 
+For example:
 1. for [Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
 
     ```properties
@@ -288,6 +290,8 @@ e.g.
             .build();
     ```
 
+> **Note:** Please configure `tlsTrustStorePath` when you set `useKeyStoreTls` to `true`.
+
 ## Enabling TLS Logging
 
 You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with `javax.net.debug` system property. For example:
diff --git a/site2/docs/security-versioning-policy.md b/site2/docs/security-versioning-policy.md
new file mode 100644
index 0000000..0b65f1b
--- /dev/null
+++ b/site2/docs/security-versioning-policy.md
@@ -0,0 +1,67 @@
+---
+id: security-policy-and-supported-versions
+title: Security Policy and Supported Versions
+sidebar_label: Security Policy and Supported Versions
+---
+
+## Reporting a Vulnerability
+
+The current process for reporting vulnerabilities is outlined here: https://www.apache.org/security/. When reporting a
+vulnerability to security@apache.org, you can copy your email to [private@pulsar.apache.org](mailto:private@pulsar.apache.org)
+to send your report to the Apache Pulsar Project Management Committee. This is a private mailing list.
+
+## Using Pulsar's Security Features
+
+You can find documentation on Pulsar's available security features and how to use them here:
+https://pulsar.apache.org/docs/en/security-overview/.
+
+## Security Vulnerability Announcements
+
+The Pulsar community will announce security vulnerabilities and how to mitigate them on the [users@pulsar.apache.org](mailto:users@pulsar.apache.org).
+For instructions on how to subscribe, please see https://pulsar.apache.org/contact/.
+
+## Versioning Policy
+
+The Pulsar project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). Existing releases can expect
+patches for bugs and security vulnerabilities. New features will target minor releases.
+
+When upgrading an existing cluster, it is important to upgrade components linearly through each minor version. For
+example, when upgrading from 2.8.x to 2.10.x, upgrade to 2.9.x before moving to 2.10.x.
+
+## Supported Versions
+
+Feature release branches will be maintained with security fix and bug fix releases for a period of at least 12 months
+after initial release. For example, branch 2.5.x is no longer considered maintained as of January 2021, 12 months after
+the release of 2.5.0 in January 2020. No more 2.5.x releases should be expected at this point, even to fix security
+vulnerabilities.
+
+Note that a minor version can be maintained past its 12-month initial support period. For example, version 2.7 is still
+actively maintained.
+
+Security fixes are given priority when it comes to backporting fixes to older versions that are within the
+supported time window. It is challenging to decide which bug fixes to backport to old versions. As such, the latest
+versions will have the most bug fixes.
+
+When 3.0.0 is released, the community will decide how to continue supporting 2.x. It is possible that the last minor
+release within 2.x will be maintained for longer as an “LTS” release, but it has not been officially decided.
+
+The following table shows version support timelines and will be updated with each release.
+
+| Version | Supported          | Initial Release | At Least Until |
+|:-------:|:------------------:|:---------------:|:--------------:|
+| 2.9.x   | :white_check_mark: | November 2021   | November 2022  |
+| 2.8.x   | :white_check_mark: | June 2021       | June 2022      |
+| 2.7.x   | :white_check_mark: | November 2020   | November 2021  |
+| 2.6.x   | :x:                | June 2020       | June 2021      |
+| 2.5.x   | :x:                | January 2020    | January 2021   |
+| 2.4.x   | :x:                | July 2019       | July 2020      |
+| < 2.3.x | :x:                | -               | -              |
+
+If there is ambiguity about which versions of Pulsar are actively supported, please ask on the [users@pulsar.apache.org](mailto:users@pulsar.apache.org)
+mailing list.
+
+## Release Frequency
+
+With the acceptance of [PIP-47 - A Time Based Release Plan](https://github.com/apache/pulsar/wiki/PIP-47%3A-Time-Based-Release-Plan),
+the Pulsar community aims to complete 4 minor releases each year. Patch releases are completed based on demand and
+need, for example when security fixes are required.
diff --git a/site2/docs/sql-deployment-configurations.md b/site2/docs/sql-deployment-configurations.md
index 6fa6ef4..cb9af25 100644
--- a/site2/docs/sql-deployment-configurations.md
+++ b/site2/docs/sql-deployment-configurations.md
@@ -110,20 +110,15 @@ pulsar.web-service-url=http://localhost:8080,localhost:8081,localhost:8082
 pulsar.zookeeper-uri=localhost1,localhost2:2181
 ```
 
-A frequently asked question is why my latest message not showing up when querying with Pulsar SQL.
-It's not a bug but controlled by a setting, by default BookKeeper LAC only advanced when subsequent entries are added.
-If there is no subsequent entries added, the last entry written will not be visible to readers until the ledger is closed.
-This is not a problem for Pulsar which uses managed ledger, but Pulsar SQL directly read from BookKeeper ledger.
-We can add following setting to change the behavior:
-In Broker config, set
-bookkeeperExplicitLacIntervalInMills > 0
-bookkeeperUseV2WireProtocol=false
-
-And in Presto config, set
-pulsar.bookkeeper-explicit-interval > 0
-pulsar.bookkeeper-use-v2-protocol=false
-
-However,keep in mind that using bk V3 protocol will introduce additional GC overhead to BK as it uses Protobuf.
+**Note: By default, Pulsar SQL does not get the last message in a topic.** This is by design and is controlled by settings. By default, the BookKeeper LAC only advances when subsequent entries are added. If no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar brokers, which use the managed ledger, but Pulsar SQL reads directly from BookKeeper ledgers.
+
+If you want to get the last message in a topic, set the following configurations:
+
+1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.
+   
+2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.
+
+However, using BookKeeper V3 protocol introduces additional GC overhead to BK as it uses Protobuf.
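The two steps above can be sketched as the following configuration fragments (the interval values are illustrative):

```properties
# conf/broker.conf (or conf/standalone.conf)
bookkeeperExplicitLacIntervalInMills=100

# Presto Pulsar connector properties
pulsar.bookkeeper-explicit-interval=100
pulsar.bookkeeper-use-v2-protocol=false
```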
 
 ## Query data from existing Presto clusters
 
diff --git a/site2/docs/tiered-storage-azure.md b/site2/docs/tiered-storage-azure.md
index 1c224fe..64feb69 100644
--- a/site2/docs/tiered-storage-azure.md
+++ b/site2/docs/tiered-storage-azure.md
@@ -203,7 +203,6 @@ For individual topics, you can trigger Azure BlobStore offloader manually using
     Offload was a success
     ```
 
-
     If there is an error in offloading, the error is propagated to the `pulsar-admin topics offload-status` command.
 
     ```bash
diff --git a/site2/docs/tiered-storage-filesystem.md b/site2/docs/tiered-storage-filesystem.md
index 242efa6..2e96a48 100644
--- a/site2/docs/tiered-storage-filesystem.md
+++ b/site2/docs/tiered-storage-filesystem.md
@@ -282,7 +282,6 @@ This tutorial sets up a Hadoop single node cluster and uses Hadoop 3.2.1.
 
     ![](assets/FileSystem-1.png)
 
-
     1. At the top navigation bar, click **Datanodes** to check DataNode information.
 
         ![](assets/FileSystem-2.png)
diff --git a/site2/docs/txn-why.md b/site2/docs/txn-why.md
index 73d9f8a..f30d567 100644
--- a/site2/docs/txn-why.md
+++ b/site2/docs/txn-why.md
@@ -17,7 +17,7 @@ successfully produced, and vice versa.
 
 ![](assets/txn-1.png)
 
-The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means, a batch of messages in a transaction can be received from, produced to and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as one single until.
+The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means, a batch of messages in a transaction can be received from, produced to and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as one single unit.
 
 ## Limitation of idempotent producer
 
diff --git a/site2/website-next/docs/admin-api-clusters.md b/site2/website-next/docs/admin-api-clusters.md
index ccd3ebb..3c2f661 100644
--- a/site2/website-next/docs/admin-api-clusters.md
+++ b/site2/website-next/docs/admin-api-clusters.md
@@ -103,8 +103,8 @@ Here's an example cluster metadata initialization command:
 
 bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
diff --git a/site2/website-next/docs/admin-api-topics.md b/site2/website-next/docs/admin-api-topics.md
index ccdf96a..f365f78 100644
--- a/site2/website-next/docs/admin-api-topics.md
+++ b/site2/website-next/docs/admin-api-topics.md
@@ -1306,6 +1306,377 @@ admin.topics().getBacklogSizeByMessageId(topic, messageId);
 </Tabs>
 ````
 
+
+### Configure deduplication snapshot interval
+
+#### Get deduplication snapshot interval
+
+To get the topic-level deduplication snapshot interval, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Pulsar-admin API"
+  values={[{"label":"Pulsar-admin API","value":"Pulsar-admin API"},{"label":"REST API","value":"REST API"},{"label":"Java API","value":"Java API"}]}>
+<TabItem value="Pulsar-admin API">
+
+```
+
+pulsar-admin topics get-deduplication-snapshot-interval options
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+```
+
+{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval?version=@pulsar:version_number@}
+
+```
+
+</TabItem>
+<TabItem value="Java API">
+
+```java
+
+admin.topics().getDeduplicationSnapshotInterval(topic)
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+#### Set deduplication snapshot interval
+
+To set the topic-level deduplication snapshot interval, use one of the following methods.
+
+> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Pulsar-admin API"
+  values={[{"label":"Pulsar-admin API","value":"Pulsar-admin API"},{"label":"REST API","value":"REST API"},{"label":"Java API","value":"Java API"}]}>
+<TabItem value="Pulsar-admin API">
+
+```
+
+pulsar-admin topics set-deduplication-snapshot-interval options
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+```
+
+{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval?version=@pulsar:version_number@}
+
+```
+
+```json
+
+{
+  "interval": 1000
+}
+
+```
+
+</TabItem>
+<TabItem value="Java API">
+
+```java
+
+admin.topics().setDeduplicationSnapshotInterval(topic, 1000)
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+#### Remove deduplication snapshot interval
+
+To remove the topic-level deduplication snapshot interval, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Pulsar-admin API"
+  values={[{"label":"Pulsar-admin API","value":"Pulsar-admin API"},{"label":"REST API","value":"REST API"},{"label":"Java API","value":"Java API"}]}>
+<TabItem value="Pulsar-admin API">
+
+```
+
+pulsar-admin topics remove-deduplication-snapshot-interval options
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+```
+
+{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval?version=@pulsar:version_number@}
+
+```
+
+</TabItem>
+<TabItem value="Java API">
+
+```java
+
+admin.topics().removeDeduplicationSnapshotInterval(topic)
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+
+### Configure inactive topic policies
+
+#### Get inactive topic policies
+
+To get the topic-level inactive topic policies, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Pulsar-admin API"
+  values={[{"label":"Pulsar-admin API","value":"Pulsar-admin API"},{"label":"REST API","value":"REST API"},{"label":"Java API","value":"Java API"}]}>
+<TabItem value="Pulsar-admin API">
+
+```
+
+pulsar-admin topics get-inactive-topic-policies options
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+```
+
+{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies?version=@pulsar:version_number@}
+
+```
+
+</TabItem>
+<TabItem value="Java API">
+
+```java
+
+admin.topics().getInactiveTopicPolicies(topic)
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+#### Set inactive topic policies
+
+To set the topic-level inactive topic policies, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Pulsar-admin API"
+  values={[{"label":"Pulsar-admin API","value":"Pulsar-admin API"},{"label":"REST API","value":"REST API"},{"label":"Java API","value":"Java API"}]}>
+<TabItem value="Pulsar-admin API">
+
+```
+
+pulsar-admin topics set-inactive-topic-policies options
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+```
+
+{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies?version=@pulsar:version_number@}
+
+```
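+
+The request body is an `InactiveTopicPolicies` JSON object. The following example is illustrative (the field values are examples; adjust them to your needs):
+
+```json
+
+{
+  "inactiveTopicDeleteMode": "delete_when_no_subscriptions",
+  "maxInactiveDurationSeconds": 600,
+  "deleteWhileInactive": true
+}
+
+```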
+
+</TabItem>
+<TabItem value="Java API">
+
+```java
+
+admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies)
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+#### Remove inactive topic policies
+
+To remove the topic-level inactive topic policies, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Pulsar-admin API"
+  values={[{"label":"Pulsar-admin API","value":"Pulsar-admin API"},{"label":"REST API","value":"REST API"},{"label":"Java API","value":"Java API"}]}>
+<TabItem value="Pulsar-admin API">
+
+```
+
+pulsar-admin topics remove-inactive-topic-policies options
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+```
+
+{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies?version=@pulsar:version_number@}
+
+```
+
+</TabItem>
+<TabItem value="Java API">
+
+```java
+
+admin.topics().removeInactiveTopicPolicies(topic)
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+
+### Configure offload policies
+
+#### Get offload policies
+
+To get the topic-level offload policies, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Pulsar-admin API"
+  values={[{"label":"Pulsar-admin API","value":"Pulsar-admin API"},{"label":"REST API","value":"REST API"},{"label":"Java API","value":"Java API"}]}>
+<TabItem value="Pulsar-admin API">
+
+```
+
+pulsar-admin topics get-offload-policies options
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+```
+
+{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies?version=@pulsar:version_number@}
+
+```
+
+</TabItem>
+<TabItem value="Java API">
+
+```java
+
+admin.topics().getOffloadPolicies(topic)
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+#### Set offload policies
+
+To set the topic-level offload policies, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Pulsar-admin API"
+  values={[{"label":"Pulsar-admin API","value":"Pulsar-admin API"},{"label":"REST API","value":"REST API"},{"label":"Java API","value":"Java API"}]}>
+<TabItem value="Pulsar-admin API">
+
+```
+
+pulsar-admin topics set-offload-policies options
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+```
+
+{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies?version=@pulsar:version_number@}
+
+```
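+
+The request body is an `OffloadPolicies` JSON object. The following example sketches a configuration for an S3 offloader; the values are illustrative and the available fields depend on the offload driver you use:
+
+```json
+
+{
+  "managedLedgerOffloadDriver": "aws-s3",
+  "managedLedgerOffloadThresholdInBytes": 10000000,
+  "managedLedgerOffloadDeletionLagInMillis": 86400000,
+  "s3ManagedLedgerOffloadBucket": "pulsar-topic-offload",
+  "s3ManagedLedgerOffloadRegion": "us-west-2"
+}
+
+```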
+
+</TabItem>
+<TabItem value="Java API">
+
+```java
+
+admin.topics().setOffloadPolicies(topic, offloadPolicies)
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+#### Remove offload policies
+
+To remove the topic-level offload policies, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Pulsar-admin API"
+  values={[{"label":"Pulsar-admin API","value":"Pulsar-admin API"},{"label":"REST API","value":"REST API"},{"label":"Java API","value":"Java API"}]}>
+<TabItem value="Pulsar-admin API">
+
+```
+
+pulsar-admin topics remove-offload-policies options
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+```
+
+{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies?version=@pulsar:version_number@}
+
+```
+
+</TabItem>
+<TabItem value="Java API">
+
+```java
+
+admin.topics().removeOffloadPolicies(topic)
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+
 ## Manage non-partitioned topics
 You can use Pulsar [admin API](admin-api-overview) to create, delete and check status of non-partitioned topics.
 
diff --git a/site2/website-next/docs/administration-isolation.md b/site2/website-next/docs/administration-isolation.md
index c629f92..c808043 100644
--- a/site2/website-next/docs/administration-isolation.md
+++ b/site2/website-next/docs/administration-isolation.md
@@ -100,7 +100,10 @@ bin/pulsar-admin namespaces set-bookie-affinity-group public/default \
 
 :::note
 
-Do not set a bookie rack name to slash (`/`) or an empty string (`""`) if you use Pulsar earlier than 2.7.5, 2.8.3, and 2.9.2. For the bookie rack name restrictions, see [pulsar-admin bookies set-bookie-rack](https://pulsar.apache.org/tools/pulsar-admin/).
+- Do not set a bookie rack name to a slash (`/`) or an empty string (`""`) if you use a Pulsar version earlier than 2.7.5, 2.8.3, or 2.9.2. In Pulsar 2.7.5, 2.8.3, 2.9.2 and later versions, such a rack name falls back to `/default-rack` or `/default-region/default-rack`.
+- When `RackawareEnsemblePlacementPolicy` is enabled, the rack name cannot contain a slash (`/`) except at the beginning or end of the rack name string. For example, a rack name like `/rack0` is allowed, but `/rack/0` is not.
+- When `RegionawareEnsemblePlacementPolicy` is enabled, the rack name can contain only one slash (`/`) apart from the beginning or end of the rack name string. For example, a rack name like `/region0/rack0` is allowed, but `/region0rack0` and `/region0/rack/0` are not.
+For the bookie rack name restrictions, see [pulsar-admin bookies set-bookie-rack](https://pulsar.apache.org/tools/pulsar-admin/).
 
 :::
 
diff --git a/site2/website-next/docs/administration-load-balance.md b/site2/website-next/docs/administration-load-balance.md
index 834b156..811e8e5 100644
--- a/site2/website-next/docs/administration-load-balance.md
+++ b/site2/website-next/docs/administration-load-balance.md
@@ -155,20 +155,26 @@ loadBalancerSheddingGracePeriodMinutes=30
 
 ```
 
-Pulsar supports three types of shedding strategies:
+Pulsar supports the following types of shedding strategies. From Pulsar 2.10, the **default** shedding strategy is `ThresholdShedder`.
 
 ##### ThresholdShedder
-This strategy tends to shed the bundles if any broker's usage is above the configured threshold. It does this by first computing the average resource usage per broker for the whole cluster. The resource usage for each broker is calculated using the following method: LocalBrokerData#getMaxResourceUsageWithWeight). The weights for each resource are configurable. Historical observations are included in the running average based on the broker's setting for loadBalancerHistoryResourcePercenta [...]
+This strategy tends to shed the bundles if any broker's usage is above the configured threshold. It does this by first computing the average resource usage per broker for the whole cluster. The resource usage for each broker is calculated using the following method: LocalBrokerData#getMaxResourceUsageWithWeight. The weights for each resource are configurable. Historical observations are included in the running average based on the broker's setting for loadBalancerHistoryResourcePercentag [...]
 `loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`
 
+![Shedding strategy - ThresholdShedder](/assets/ThresholdShedder.png)
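+
+For example, the following `broker.conf` fragment enables `ThresholdShedder` and tunes the threshold, the per-resource weights, and the history percentage. The values shown are illustrative; adjust them for your workload:
+
+```properties
+
+loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder
+# Shed load from a broker whose usage exceeds the cluster average by this percentage.
+loadBalancerBrokerThresholdShedderPercentage=10
+# Weight of historical observations in the running average (0-1).
+loadBalancerHistoryResourcePercentage=0.9
+# Per-resource weights used when computing the maximum weighted resource usage.
+loadBalancerCPUResourceWeight=1.0
+loadBalancerMemoryResourceWeight=1.0
+loadBalancerDirectMemoryResourceWeight=1.0
+loadBalancerBandwithInResourceWeight=1.0
+loadBalancerBandwithOutResourceWeight=1.0
+
+```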
+
 ##### OverloadShedder
 This strategy will attempt to shed exactly one bundle on brokers which are overloaded, that is, whose maximum system resource usage exceeds loadBalancerBrokerOverloadedThresholdPercentage (see the broker overload thresholds below for which resources are considered when determining the maximum system resource usage). A bundle is recommended for unloading off that broker if and only if the following conditions hold: The broker has at least two bundles assigned and the broker has at least one bundle that has not been unloaded recently [...]
 `loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.OverloadShedder`
 
+![Shedding strategy - OverloadShedder](/assets/OverloadShedder.png)
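+
+For example, to use `OverloadShedder` with a custom overload threshold, set the following in `broker.conf` (the percentage shown is illustrative):
+
+```properties
+
+loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.OverloadShedder
+# A broker is overloaded when its maximum system resource usage exceeds this percentage.
+loadBalancerBrokerOverloadedThresholdPercentage=85
+
+```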
+
 ##### UniformLoadShedder
 This strategy tends to distribute load uniformly across all brokers. It checks the load difference between the broker with the highest load and the broker with the lowest load. If the difference is higher than the configured thresholds `loadBalancerMsgRateDifferenceShedderThreshold` and `loadBalancerMsgThroughputMultiplierDifferenceShedderThreshold`, it finds the bundles that can be unloaded to distribute traffic evenly across all brokers. Configure the broker with the value below to use this strategy.
 `loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder`
 
+![Shedding strategy - UniformLoadShedder](/assets/UniformLoadShedder.png)
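+
+For example, the following `broker.conf` fragment enables `UniformLoadShedder` together with the two difference thresholds it checks (the values are illustrative):
+
+```properties
+
+loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder
+# Shed load when the message-rate difference between brokers exceeds this percentage.
+loadBalancerMsgRateDifferenceShedderThreshold=50
+# Shed load when the highest broker throughput is this many times the lowest.
+loadBalancerMsgThroughputMultiplierDifferenceShedderThreshold=4
+
+```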
+
 #### Broker overload thresholds
 
 The determinations of when a broker is overloaded is based on threshold of CPU, network and memory usage. Whenever either of those metrics reaches the threshold, the system triggers the shedding (if enabled).
diff --git a/site2/website-next/docs/administration-proxy.md b/site2/website-next/docs/administration-proxy.md
index 3cef937..5228b9a 100644
--- a/site2/website-next/docs/administration-proxy.md
+++ b/site2/website-next/docs/administration-proxy.md
@@ -8,22 +8,9 @@ Pulsar proxy is an optional gateway. Pulsar proxy is used when direct connection
 
 ## Configure the proxy
 
-Before using the proxy, you need to configure it with the brokers addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration. 
+Before using the proxy, you need to configure it with the broker addresses in the cluster. You can configure the broker URLs in the proxy configuration, or configure the proxy to connect directly to the brokers using service discovery.
 
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-
-```properties
-
-zookeeperServers=zk-0,zk-1,zk-2
-configurationStoreServers=zk-0:2184,zk-remote:2184
-
-```
-
-> To use service discovery, you need to open the network ACLs, so the proxy can connects to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, it is not secure to use service discovery. Because if the network ACL is open, when someone compromises a proxy, they have full access to ZooKeeper. 
+> Service discovery is not recommended in a production environment.
 
 ### Use broker URLs
 
@@ -57,6 +44,21 @@ The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651
 
 Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
 
+### Use service discovery
+
+Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
+
+```properties
+
+metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
+configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
+
+```
+
+> To use service discovery, you need to open the network ACLs so that the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
+
+> However, using service discovery is not secure: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper.
+
 ## Start the proxy
 
 To start the proxy:
@@ -64,7 +66,9 @@ To start the proxy:
 ```bash
 
 $ cd /path/to/pulsar/directory
-$ bin/pulsar proxy
+$ bin/pulsar proxy \
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
diff --git a/site2/website-next/docs/administration-pulsar-manager.md b/site2/website-next/docs/administration-pulsar-manager.md
index 3513739..3e9aeba 100644
--- a/site2/website-next/docs/administration-pulsar-manager.md
+++ b/site2/website-next/docs/administration-pulsar-manager.md
@@ -14,6 +14,7 @@ If you are monitoring your current stats with [Pulsar dashboard](administration-
 
 ## Install
 
+### Quick Install
 The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.
 
 ```shell
@@ -26,46 +27,20 @@ docker run -it \
 
 ```
 
+* Pulsar Manager is divided into a front-end and a back-end; the front-end service port is `9527` and the back-end service port is `7750`.
 * `SPRING_CONFIGURATION_FILE`: Default configuration file for spring.
+* By default, Pulsar Manager uses the `herddb` database. HerdDB is a distributed SQL database implemented in Java; see [herddb.org](https://herddb.org/) for more information.
 
-### Set administrator account and password
+### Configure Database or JWT authentication
+#### Configure database (optional)
 
- ```shell
- 
- CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
- curl \
-     -H 'X-XSRF-TOKEN: $CSRF_TOKEN' \
-     -H 'Cookie: XSRF-TOKEN=$CSRF_TOKEN;' \
-     -H "Content-Type: application/json" \
-     -X PUT http://localhost:7750/pulsar-manager/users/superuser \
-     -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'
- 
- ```
+If you have a large amount of data, use a custom database; otherwise, display errors may occur. For example, topic information cannot be displayed when the number of topics exceeds 10,000.
+The following is an example of PostgreSQL.
 
-You can find the docker image in the [Docker Hub](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well:
-
-```
-
-git clone https://github.com/apache/pulsar-manager
-cd pulsar-manager/front-end
-npm install --save
-npm run build:prod
-cd ..
-./gradlew build -x test
-cd ..
-docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager .
-
-```
-
-### Use custom databases
+1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
+2. Download and modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties), then add the PostgreSQL configuration.
 
-If you have a large amount of data, you can use a custom database. The following is an example of PostgreSQL.   
-
-1. Initialize database and table structures using the [file](https://github.com/apache/pulsar-manager/tree/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
-
-2. Modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties) and add PostgreSQL configuration.
-
-```
+```properties
 
 spring.datasource.driver-class-name=org.postgresql.Driver
 spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
@@ -74,131 +49,167 @@ spring.datasource.password=postgres
 
 ```
 
-3. Compile to generate a new executable jar package.
+3. Mount the configuration file and start the container from the Docker image.
 
-```
+```bash
 
-./gradlew build -x test
+docker pull apachepulsar/pulsar-manager:v0.2.0
+docker run -it \
+    -p 9527:9527 -p 7750:7750 \
+    -v /your-path/application.properties:/pulsar-manager/pulsar-manager/application.properties \
+    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
+    apachepulsar/pulsar-manager:v0.2.0
 
 ```
 
-### Enable JWT authentication
+#### Enable JWT authentication (optional)
 
-If you want to turn on JWT authentication, configure the following parameters:
+If you want to turn on JWT authentication, configure the `application.properties` file.
 
-* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
-* `jwt.broker.token.mode`: multiple modes of generating token, including PUBLIC, PRIVATE, and SECRET.
-* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
-* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
-* `jwt.broker.secret.key`: configure this option if you use the SECRET mode.
+```properties
 
-For more information, see [Token Authentication Admin of Pulsar](http://pulsar.apache.org/docs/en/security-token-admin/).
+backend.jwt.token=token
 
+jwt.broker.token.mode=PRIVATE
+jwt.broker.public.key=file:///path/broker-public.key
+jwt.broker.private.key=file:///path/broker-private.key
 
-If you want to enable JWT authentication, use one of the following methods.
+# or, in SECRET mode:
+jwt.broker.token.mode=SECRET
+jwt.broker.secret.key=file:///path/broker-secret.key
 
+```
 
-* Method 1: use command-line tool
+* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
+* `jwt.broker.token.mode`: multiple modes of generating token, including PUBLIC, PRIVATE, and SECRET.
+* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
+* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
+* `jwt.broker.secret.key`: configure this option if you use the SECRET mode.
+For more information, see [Token Authentication Admin of Pulsar](https://pulsar.apache.org/docs/en/security-token-admin/).
 
-```
+Use the following Docker command to mount the configuration file and key files:
 
-wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/apache-pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
-tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
-cd pulsar-manager
-tar -zxvf pulsar-manager.tar
-cd pulsar-manager
-cp -r ../dist ui
-./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 insert.stats.interval=600000 --backend.jwt.token=token --jwt.broker.token.mode=PRIVATE --jwt.broker.private.key=file:///path/broker-private.key --jwt.broker.public.key=file:///path/broker-public.key
+```bash
+
+docker pull apachepulsar/pulsar-manager:v0.2.0
+docker run -it \
+    -p 9527:9527 -p 7750:7750 \
+    -v /your-path/application.properties:/pulsar-manager/pulsar-manager/application.properties \
+    -v /your-path/private.key:/pulsar-manager/private.key \
+    -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
+    apachepulsar/pulsar-manager:v0.2.0
 
 ```
 
-Firstly, [set the administrator account and password](#set-administrator-account-and-password)
+### Set the administrator account and password
 
-Secondly, log in to Pulsar manager through http://localhost:7750/ui/index.html.
+```bash
 
-* Method 2: configure the application.properties file
+CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
+curl \
+   -H "X-XSRF-TOKEN: $CSRF_TOKEN" \
+   -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
+   -H "Content-Type: application/json" \
+   -X PUT http://localhost:7750/pulsar-manager/users/superuser \
+   -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'
 
 ```
 
-backend.jwt.token=token
+The request body in the curl command:
 
-jwt.broker.token.mode=PRIVATE
-jwt.broker.public.key=file:///path/broker-public.key
-jwt.broker.private.key=file:///path/broker-private.key
+```json
 
-or 
-jwt.broker.token.mode=SECRET
-jwt.broker.secret.key=file:///path/broker-secret.key
+{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}
 
 ```
 
-* Method 3: use Docker and enable token authentication.
+- `name` is the Pulsar Manager login username, `admin` in this example.
+- `password` is the password of the current Pulsar Manager user, `apachepulsar` in this example. The password must be at least 6 characters long.
 
-```
 
-export JWT_TOKEN="your-token"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -v $PWD:/data apachepulsar/pulsar-manager:v0.2.0 /bin/sh
 
-```
+### Configure the environment
+1. Log in to the system. Visit http://localhost:9527 to log in. The default account is `admin/apachepulsar`.
 
-* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the  `bin/pulsar tokens create --secret-key` or `bin/pulsar tokens create --private-key` command.
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the level of log.
+2. Click the "New Environment" button to add an environment.
 
-* Method 4: use Docker and turn on **token authentication** and **token management** by private key and public key.
+3. Input the "Environment Name". The environment name is used to identify an environment.
 
-```
+4. Input the "Service URL". The Service URL is the admin service URL of your Pulsar cluster.
 
-export JWT_TOKEN="your-token"
-export PRIVATE_KEY="file:///pulsar-manager/secret/my-private.key"
-export PUBLIC_KEY="file:///pulsar-manager/secret/my-public.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e PRIVATE_KEY=$PRIVATE_KEY -e PUBLIC_KEY=$PUBLIC_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
 
-```
+## Other installation methods
+### Bare-metal installation
 
-* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --private-key` command.
-* `PRIVATE_KEY`: private key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command.
-* `PUBLIC_KEY`: public key path mounted in container, generated by `bin/pulsar tokens create-key-pair` command.
-* `$PWD/secret`: the folder where the private key and public key generated by the `bin/pulsar tokens create-key-pair` command are placed locally
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the level of log.
+When deploying directly from the binary package, follow these steps.
 
-* Method 5: use Docker and turn on **token authentication** and **token management** by secret key.
+- Download and unzip the binary package, which is available on the [Pulsar Download](https://pulsar.apache.org/en/download/) page.
 
-```
+  ```bash
+  
+  wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
+  tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
+  
+  ```
 
-export JWT_TOKEN="your-token"
-export SECRET_KEY="file:///pulsar-manager/secret/my-secret.key"
-docker run -it -p 9527:9527 -p 7750:7750 -e REDIRECT_HOST=http://localhost -e REDIRECT_PORT=9527 -e DRIVER_CLASS_NAME=org.postgresql.Driver -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' -e USERNAME=pulsar -e PASSWORD=pulsar -e LOG_LEVEL=DEBUG -e JWT_TOKEN=$JWT_TOKEN -e SECRET_KEY=$SECRET_KEY -v $PWD:/data -v $PWD/secret:/pulsar-manager/secret apachepulsar/pulsar-manager:v0.2.0 /bin/sh
+- Extract the back-end service binary package and place the front-end resources in the back-end service directory.
 
-```
+  ```bash
+  
+  cd pulsar-manager
+  tar -zxvf pulsar-manager.tar
+  cd pulsar-manager
+  cp -r ../dist ui
+  
+  ```
 
-* `JWT_TOKEN`: the token of superuser configured for the broker. It is generated by the `bin/pulsar tokens create --secret-key` command.
-* `SECRET_KEY`: secret key path mounted in container, generated by `bin/pulsar tokens create-secret-key` command.
-* `$PWD/secret`: the folder where the secret key generated by the `bin/pulsar tokens create-secret-key` command are placed locally
-* `REDIRECT_HOST`: the IP address of the front-end server.
-* `REDIRECT_PORT`: the port of the front-end server.
-* `DRIVER_CLASS_NAME`: the driver class name of the PostgreSQL database.
-* `URL`: the JDBC URL of your PostgreSQL database, such as jdbc:postgresql://127.0.0.1:5432/pulsar_manager. The docker image automatically start a local instance of the PostgreSQL database.
-* `USERNAME`: the username of PostgreSQL.
-* `PASSWORD`: the password of PostgreSQL.
-* `LOG_LEVEL`: the level of log.
+- Modify `application.properties` configuration on demand.
 
-* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README).
-* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
+  > If you don't want to modify the `application.properties` file, you can pass the configuration as startup parameters instead, for example `./bin/pulsar-manager --backend.jwt.token=token`. This is a capability of the Spring Boot framework.
 
-## Log in
+- Start Pulsar Manager
 
-[Set the administrator account and password](#set-administrator-account-and-password).
+  ```bash
+  
+  ./bin/pulsar-manager
+  
+  ```
+
+### Custom docker image installation
+
+You can find the docker image in the [Docker Hub](https://github.com/apache/pulsar-manager/tree/master/docker) directory and build an image from the source code as well:
+
+  ```bash
+  
+  git clone https://github.com/apache/pulsar-manager
+  cd pulsar-manager/front-end
+  npm install --save
+  npm run build:prod
+  cd ..
+  ./gradlew build -x test
+  cd ..
+  docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`latest` --build-arg VERSION=`latest` -t apachepulsar/pulsar-manager .
+  
+  ```
+
+## Configuration
+
+
+
+| application.properties              | System env on Docker Image | Description                                                  | Example                                           |
+| ----------------------------------- | -------------------------- | ------------------------------------------------------------ | ------------------------------------------------- |
+| backend.jwt.token                   | JWT_TOKEN                  | token for the superuser. You need to configure this parameter during cluster initialization. | `token`                                           |
+| jwt.broker.token.mode               | N/A                        | multiple modes of generating token, including PUBLIC, PRIVATE, and SECRET. | `PUBLIC` or `PRIVATE` or `SECRET`.                |
+| jwt.broker.public.key               | PUBLIC_KEY                 | configure this option if you use the PUBLIC mode.            | `file:///path/broker-public.key`                  |
+| jwt.broker.private.key              | PRIVATE_KEY                | configure this option if you use the PRIVATE mode.           | `file:///path/broker-private.key`                 |
+| jwt.broker.secret.key               | SECRET_KEY                 | configure this option if you use the SECRET mode.            | `file:///path/broker-secret.key`                  |
+| spring.datasource.driver-class-name | DRIVER_CLASS_NAME          | the driver class name of the database.                       | `org.postgresql.Driver`                           |
+| spring.datasource.url               | URL                        | the JDBC URL of your  database.                              | `jdbc:postgresql://127.0.0.1:5432/pulsar_manager` |
+| spring.datasource.username          | USERNAME                   | the username of database.                                    | `postgres`                                        |
+| spring.datasource.password          | PASSWORD                   | the password of database.                                    | `postgres`                                        |
+| N/A                                 | LOG_LEVEL                  | the level of log.                                            | DEBUG                                             |
+* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README).
+* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
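
The table above maps `application.properties` keys to environment variables on the Docker image. As a sketch of wiring them together (the ports, PostgreSQL values, and log level below are illustrative assumptions, not canonical settings), you could start the image like this:

```shell
# Sketch: run pulsar-manager with the environment variables from the table above.
# All concrete values (ports, database, log level) are assumptions for illustration.
docker run -it \
  -p 9527:9527 -p 7750:7750 \
  -e DRIVER_CLASS_NAME=org.postgresql.Driver \
  -e URL='jdbc:postgresql://127.0.0.1:5432/pulsar_manager' \
  -e USERNAME=postgres \
  -e PASSWORD=postgres \
  -e LOG_LEVEL=DEBUG \
  apachepulsar/pulsar-manager
```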
 
-Visit http://localhost:9527 to log in.
diff --git a/site2/website-next/docs/administration-zk-bk.md b/site2/website-next/docs/administration-zk-bk.md
index e5f9688..c0aec95 100644
--- a/site2/website-next/docs/administration-zk-bk.md
+++ b/site2/website-next/docs/administration-zk-bk.md
@@ -147,27 +147,19 @@ $ bin/pulsar-daemon start configuration-store
 
 ### ZooKeeper configuration
 
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
+In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation:
+* The `conf/zookeeper.conf` file handles the configuration for the local ZooKeeper.
+* The `conf/global-zookeeper.conf` file handles the configuration for the configuration store.
+See [parameters](reference-configuration.md#zookeeper) for more details.
 
-#### Local ZooKeeper
+#### Configure batching operations
+Using batching operations reduces the remote procedure call (RPC) traffic between the ZooKeeper client and servers. It also reduces the number of write transactions, because each batching operation corresponds to a single ZooKeeper transaction that contains multiple read and write operations.
 
-The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
+The following figure shows a basic benchmark of how many batched read/write operations ZooKeeper can serve in one second:
 
-|Name|Description|Default|
-|---|---|---|
-|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort|  The port on which the ZooKeeper server listens for connections. |2181|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
-|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables. Read this guide before enabling auto purge. |1|
-|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
+![Zookeeper batching benchmark](/assets/zookeeper-batching.png)
 
-
-#### Configuration Store
-
-The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for configuration store. The table below shows the available parameters:
+To enable batching operations, set the [`metadataStoreBatchingEnabled`](reference-configuration.md#broker) parameter to `true` on the broker side.
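
A minimal sketch of flipping that flag in the broker configuration (the `BROKER_CONF` default below is a scratch path for illustration; point it at your real `conf/broker.conf`):

```shell
# Sketch: enable batched metadata operations on the broker side.
# BROKER_CONF defaults to a scratch path for illustration; set it to
# your real conf/broker.conf before using this.
BROKER_CONF="${BROKER_CONF:-/tmp/broker.conf}"
touch "$BROKER_CONF"
if grep -q '^metadataStoreBatchingEnabled=' "$BROKER_CONF"; then
  # Update an existing entry in place.
  sed -i 's/^metadataStoreBatchingEnabled=.*/metadataStoreBatchingEnabled=true/' "$BROKER_CONF"
else
  # Append the entry when it is absent.
  echo 'metadataStoreBatchingEnabled=true' >> "$BROKER_CONF"
fi
```

Restart the broker afterwards so the setting takes effect.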
 
 
 ## BookKeeper
@@ -194,6 +186,12 @@ You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](referenc
 
 The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
 
+:::note
+
+Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
+
+:::
+
 ```properties
 
 # Change to point to journal disk mount point
@@ -205,6 +203,9 @@ ledgerDirectories=data/bookkeeper/ledgers
 # Point to local ZK quorum
 zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
 
+# It is recommended to set this parameter. Otherwise, BookKeeper cannot start normally in certain environments (for example, Huawei Cloud).
+advertisedAddress=
+
 ```
 
 To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
diff --git a/site2/website-next/docs/client-libraries-cpp.md b/site2/website-next/docs/client-libraries-cpp.md
index 958861a..b67f6d9 100644
--- a/site2/website-next/docs/client-libraries-cpp.md
+++ b/site2/website-next/docs/client-libraries-cpp.md
@@ -14,7 +14,18 @@ Pulsar C++ client is supported on **Linux** ,**MacOS** and **Windows** platforms
 
 [Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp).
 
-## System requirements
+
+## Linux
+
+:::note
+
+You can choose one of the following installation methods based on your needs: compiling from source, installing the RPM package, or installing the Debian package.
+
+:::
+
+### Compilation 
+
+#### System requirements
 
 You need to install the following components before using the C++ client:
 
@@ -24,10 +35,6 @@ You need to install the following components before using the C++ client:
 * [libcurl](https://curl.se/libcurl/)
 * [Google Test](https://github.com/google/googletest)
 
-## Linux
-
-### Compilation 
-
 1. Clone the Pulsar repository.
 
 ```shell
@@ -144,7 +151,14 @@ $ rpm -ivh apache-pulsar-client*.rpm
 
 ```
 
-After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory.
+After you install the RPM package successfully, the Pulsar libraries are in the `/usr/lib` directory, for example:
+
+```bash
+
+lrwxrwxrwx 1 root root 18 Dec 30 22:21 libpulsar.so -> libpulsar.so.2.9.1
+lrwxrwxrwx 1 root root 23 Dec 30 22:21 libpulsarnossl.so -> libpulsarnossl.so.2.9.1
+
+```
 
 :::note
 
@@ -152,6 +166,15 @@ If you get the error that `libpulsar.so: cannot open shared object file: No such
 
 :::
 
+2. Install GCC and g++ using the following commands. Otherwise, errors may occur when installing Node.js.
+
+```bash
+
+$ sudo yum -y install gcc automake autoconf libtool make
+$ sudo yum -y install gcc-c++
+
+```
+
 ### Install Debian
 
 1. Download a Debian package from the links in the table. 
@@ -344,108 +367,6 @@ pulsar+ssl://pulsar.us-west.example.com:6651
 
 ```
 
-## Create a consumer
-
-To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
-- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
-- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.
-
-### Blocking example
-
-The benefit of this approach is that it is the simplest code. Simply keeps calling `receive(msg)` which blocks until a message is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-
-#include <pulsar/Client.h>
-
-using namespace pulsar;
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    Message msg;
-    int ctr = 0;
-    // consume 100 messages
-    while (ctr < 100) {
-        consumer.receive(msg);
-        std::cout << "Received: " << msg
-            << "  with payload '" << msg.getDataAsString() << "'" << std::endl;
-
-        consumer.acknowledge(msg);
-        ctr++;
-    }
-
-    std::cout << "Finished consuming synchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-### Consumer with a message listener
-
-You can avoid  running a loop with blocking calls with an event based style by using a message listener which is invoked for each message that is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-
-#include <pulsar/Client.h>
-#include <atomic>
-#include <thread>
-
-using namespace pulsar;
-
-std::atomic<uint32_t> messagesReceived;
-
-void handleAckComplete(Result res) {
-    std::cout << "Ack res: " << res << std::endl;
-}
-
-void listener(Consumer consumer, const Message& msg) {
-    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
-    messagesReceived++;
-    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
-}
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setMessageListener(listener);
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    // wait for 100 messages to be consumed
-    while (messagesReceived < 100) {
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-    }
-
-    std::cout << "Finished consuming asynchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
 ## Create a producer
 
 To use Pulsar as a producer, you need to create a producer on the C++ client. There are two main ways of using a producer:
@@ -579,6 +500,142 @@ producerConf.setLazyStartPartitionedProducers(true);
 
 ```
 
+### Enable chunking
+
+Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side. 
+
+The message chunking feature is OFF by default. The following is an example of how to enable message chunking when creating a producer.
+
+```c++
+
+ProducerConfiguration conf;
+conf.setBatchingEnabled(false);
+conf.setChunkingEnabled(true);
+Producer producer;
+client.createProducer("my-topic", conf, producer);
+
+```
+
+> **Note:** To enable chunking, you must also disable batching (`setBatchingEnabled(false)`).
+
+## Create a consumer
+
+To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
+- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
+- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.
+
+### Blocking example
+
+The benefit of this approach is that it is the simplest code: you simply keep calling `receive(msg)`, which blocks until a message is received.
+
+This example starts a subscription at the earliest offset and consumes 100 messages.
+
+```c++
+
+#include <pulsar/Client.h>
+
+using namespace pulsar;
+
+int main() {
+    Client client("pulsar://localhost:6650");
+
+    Consumer consumer;
+    ConsumerConfiguration config;
+    config.setSubscriptionInitialPosition(InitialPositionEarliest);
+    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
+    if (result != ResultOk) {
+        std::cout << "Failed to subscribe: " << result << std::endl;
+        return -1;
+    }
+
+    Message msg;
+    int ctr = 0;
+    // consume 100 messages
+    while (ctr < 100) {
+        consumer.receive(msg);
+        std::cout << "Received: " << msg
+            << "  with payload '" << msg.getDataAsString() << "'" << std::endl;
+
+        consumer.acknowledge(msg);
+        ctr++;
+    }
+
+    std::cout << "Finished consuming synchronously!" << std::endl;
+
+    client.close();
+    return 0;
+}
+
+```
+
+### Consumer with a message listener
+
+You can avoid running a loop with blocking calls by using an event-based style: register a message listener that is invoked for each message that is received.
+
+This example starts a subscription at the earliest offset and consumes 100 messages.
+
+```c++
+
+#include <pulsar/Client.h>
+#include <atomic>
+#include <thread>
+
+using namespace pulsar;
+
+std::atomic<uint32_t> messagesReceived;
+
+void handleAckComplete(Result res) {
+    std::cout << "Ack res: " << res << std::endl;
+}
+
+void listener(Consumer consumer, const Message& msg) {
+    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
+    messagesReceived++;
+    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
+}
+
+int main() {
+    Client client("pulsar://localhost:6650");
+
+    Consumer consumer;
+    ConsumerConfiguration config;
+    config.setMessageListener(listener);
+    config.setSubscriptionInitialPosition(InitialPositionEarliest);
+    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
+    if (result != ResultOk) {
+        std::cout << "Failed to subscribe: " << result << std::endl;
+        return -1;
+    }
+
+    // wait for 100 messages to be consumed
+    while (messagesReceived < 100) {
+        std::this_thread::sleep_for(std::chrono::milliseconds(100));
+    }
+
+    std::cout << "Finished consuming asynchronously!" << std::endl;
+
+    client.close();
+    return 0;
+}
+
+```
+
+### Configure chunking
+
+You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `setMaxPendingChunkedMessage` and `setAutoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. 
+
+The following is an example of how to configure message chunking.
+
+```c++
+
+ConsumerConfiguration conf;
+conf.setAutoAckOldestChunkedMessageOnQueueFull(true);
+conf.setMaxPendingChunkedMessage(100);
+Consumer consumer;
+client.subscribe("my-topic", "my-sub", conf, consumer);
+
+```
+
 ## Enable authentication in connection URLs
 If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example.
 
diff --git a/site2/website-next/docs/client-libraries-dotnet.md b/site2/website-next/docs/client-libraries-dotnet.md
index aad9c82..b5d4389 100644
--- a/site2/website-next/docs/client-libraries-dotnet.md
+++ b/site2/website-next/docs/client-libraries-dotnet.md
@@ -275,10 +275,7 @@ Messages can be acknowledged individually or cumulatively. For details about mes
 
   ```c#
   
-  await foreach (var message in consumer.Messages())
-  {
-      Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
-  }
+  await consumer.Acknowledge(message);
   
   ```
 
diff --git a/site2/website-next/docs/client-libraries-java.md b/site2/website-next/docs/client-libraries-java.md
index b8150e1..a0c4f98 100644
--- a/site2/website-next/docs/client-libraries-java.md
+++ b/site2/website-next/docs/client-libraries-java.md
@@ -4,9 +4,15 @@ title: Pulsar Java client
 sidebar_label: "Java"
 ---
 
-You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), and [readers](#reader) of messages and to perform [administrative tasks](admin-api-overview). The current Java client version is **@pulsar:version@**.
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
 
-All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.
+
+You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), [readers](#reader) and [TableView](#tableview) of messages and to perform [administrative tasks](admin-api-overview). The current Java client version is **@pulsar:version@**.
+
+All the methods in [producer](#producer), [consumer](#consumer), [readers](#reader) and [TableView](#tableview) of a Java client are thread-safe.
 
 Javadoc for the Pulsar client is divided into two domains by package as follows.
 
@@ -168,6 +174,328 @@ You can set the client memory allocator configurations through Java properties.<
 
 ```
 
+### Cluster-level failover
+
+This section describes the concept, benefits, use cases, constraints, usage, and working principles of cluster-level failover. It contains the following subsections:
+
+- [What is cluster-level failover?](#what-is-cluster-level-failover)
+
+  * [Concept of cluster-level failover](#concept-of-cluster-level-failover)
+   
+  * [Why use cluster-level failover?](#why-use-cluster-level-failover)
+
+  * [When to use cluster-level failover?](#when-to-use-cluster-level-failover)
+
+  * [When cluster-level failover is triggered?](#when-cluster-level-failover-is-triggered)
+
+  * [Why does cluster-level failover fail?](#why-does-cluster-level-failover-fail)
+
+  * [What are the limitations of cluster-level failover?](#what-are-the-limitations-of-cluster-level-failover)
+
+  * [What are the relationships between cluster-level failover and geo-replication?](#what-are-the-relationships-between-cluster-level-failover-and-geo-replication)
+  
+- [How to use cluster-level failover?](#how-to-use-cluster-level-failover)
+
+- [How does cluster-level failover work?](#how-does-cluster-level-failover-work)
+  
+> #### What is cluster-level failover?
+
+This section helps you better understand the concept of cluster-level failover.
+
+> ##### Concept of cluster-level failover
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+Automatic cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters automatically and seamlessly when a failover event is detected, based on the detection policy configured by **users**. 
+
+![Automatic cluster-level failover](/assets/cluster-level-failover-1.png)
+
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+Controlled cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters. The switchover is manually set by **administrators**.
+
+![Controlled cluster-level failover](/assets/cluster-level-failover-2.png)
+
+</TabItem>
+
+</Tabs>
+````
+
+Once the primary cluster functions again, Pulsar clients can switch back to the primary cluster. Most of the time users won’t even notice a thing. Users can keep using applications and services without interruptions or timeouts.
+
+> ##### Why use cluster-level failover?
+
+The cluster-level failover provides fault tolerance, continuous availability, and high availability together. It brings a number of benefits, including but not limited to:
+
+* Reduced cost: services can be switched and recovered automatically with no data loss.
+
+* Simplified management: businesses can operate on an “always-on” basis since no immediate user intervention is required.
+
+* Improved stability and robustness: it ensures continuous performance and minimizes service downtime. 
+
+> ##### When to use cluster-level failover?
+
+The cluster-level failover protects your environment in a number of ways, including but not limited to:
+
+* Disaster recovery: cluster-level failover can automatically and seamlessly transfer the production workload on a primary cluster to one or several backup clusters, which ensures minimum data loss and reduced recovery time.
+
+* Planned migration: if you want to migrate production workloads from an old cluster to a new cluster, you can improve the migration efficiency with cluster-level failover. For example, you can test whether the data migration goes smoothly in case of a failover event, identify possible issues and risks before the migration.
+
+> ##### When cluster-level failover is triggered?
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+Automatic cluster-level failover is triggered when Pulsar clients cannot connect to the primary cluster for a prolonged period of time. This can be caused by many reasons, including but not limited to: 
+
+* Network failure: internet connection is lost.
+
+* Power failure: shutdown time of a primary cluster exceeds time limits.
+
+* Service error: errors occur on a primary cluster (for example, the primary cluster does not function because of time limits).
+
+* Crashed storage space: the primary cluster does not have enough storage space, but the corresponding storage space on the backup server functions normally.
+
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+Controlled cluster-level failover is triggered when administrators set the switchover manually.
+
+</TabItem>
+
+</Tabs>
+````
+
+> ##### Why does cluster-level failover fail?
+
+The cluster-level failover does not succeed if the backup cluster is unreachable by active Pulsar clients. This can happen for many reasons, including but not limited to:
+
+* Power failure: the backup cluster is shut down or does not function normally. 
+
+* Crashed storage space: primary and backup clusters do not have enough storage space. 
+
+* Failover errors: the failover is initiated, but no backup cluster can take over as an available cluster due to errors, and the primary cluster cannot provide service normally.
+
+* If you manually initiate a switchover, but services cannot be switched to the backup cluster server, then the system will attempt to switch services back to the primary cluster.
+
+* Authentication or authorization failures between 1) the primary and backup clusters, or 2) two backup clusters.
+
+> ##### What are the limitations of cluster-level failover?
+
+Currently, cluster-level failover can perform probes to prevent data loss, but it cannot check the status of backup clusters. If backup clusters are not healthy, you cannot produce or consume data.
+
+> ##### What are the relationships between cluster-level failover and geo-replication?
+
+The cluster-level failover is an extension of [geo-replication](concepts-replication) to improve stability and robustness. The cluster-level failover depends on geo-replication, and they have some **differences** as below.
+
+Influence |Cluster-level failover|Geo-replication
+|---|---|---
+Do administrators have heavy workloads?|No or maybe.<br /><br />- For the **automatic** cluster-level failover, the cluster switchover is triggered automatically based on the policies set by **users**.<br /><br />- For the **controlled** cluster-level failover, the switchover is triggered manually by **administrators**.|Yes.<br /><br />If a cluster fails, immediate administration intervention is required.|
+Result in data loss?|No.<br /><br />For both **automatic** and **controlled** cluster-level failover, if the failed primary cluster doesn't replicate messages immediately to the backup cluster, the Pulsar client can't consume the non-replicated messages. After the primary cluster is restored and the Pulsar client switches back, the non-replicated data can still be consumed by the Pulsar client. Consequently, the data is not lost.<br /><br />- For the **automatic** cluster-level failover, [...]
+Result in Pulsar client failure? |No or maybe.<br /><br />- For **automatic** cluster-level failover, services can be switched and recovered automatically and the Pulsar client does not fail. <br /><br />- For **controlled** cluster-level failover, services can be switched and recovered manually, but the Pulsar client fails before administrators can take action. |Same as above.
+
+> #### How to use cluster-level failover?
+
+This section guides you through every step on how to configure cluster-level failover.
+
+**Tip**
+
+- You should configure cluster-level failover only when the cluster contains sufficient resources to handle all possible consequences. Workload intensity on the backup cluster may increase significantly.
+
+- Connect clusters to an uninterruptible power supply (UPS) unit to reduce the risk of unexpected power loss.
+
+**Requirements**
+
+* Pulsar client 2.10 or later versions.
+
+* For backup clusters:
+
+  * The number of BookKeeper nodes should be equal to or greater than the ensemble quorum.
+
+  * The number of ZooKeeper nodes should be equal to or greater than 3.
+
+* **Turn on geo-replication** between the primary cluster and any dependent cluster (primary to backup or backup to backup) to prevent data loss.
+
+* Set `replicateSubscriptionState` to `true` when creating consumers.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+This is an example of how to construct a Java Pulsar client to use automatic cluster-level failover. The switchover is triggered automatically.
+
+```java
+
+  private PulsarClient getAutoFailoverClient() throws PulsarClientException {
+
+        ServiceUrlProvider failover = AutoClusterFailover.builder()
+                .primary("pulsar://localhost:6650")
+                .secondary(Arrays.asList("pulsar://other1:6650", "pulsar://other2:6650"))
+                .failoverDelay(30, TimeUnit.SECONDS)
+                .switchBackDelay(60, TimeUnit.SECONDS)
+                .checkInterval(1000, TimeUnit.MILLISECONDS)
+                .secondaryTlsTrustCertsFilePath("/path/to/ca.cert.pem")
+                .secondaryAuthentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
+                        "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
+                .build();
+
+        PulsarClient pulsarClient = PulsarClient.builder()
+                .build();
+
+        failover.initialize(pulsarClient);
+        return pulsarClient;
+    }
+
+```
+
+Configure the following parameters:
+
+Parameter|Default value|Required?|Description
+|---|---|---|---
+`primary`|N/A|Yes|Service URL of the primary cluster.
+`secondary`|N/A|Yes|Service URL(s) of one or several backup clusters.<br /><br />You can specify several backup clusters using a comma-separated list.<br /><br /> Note that:<br />- The backup cluster is chosen in the sequence shown in the list. <br />- If all backup clusters are available, the Pulsar client chooses the first backup cluster.
+`failoverDelay`|N/A|Yes|The delay before the Pulsar client switches from the primary cluster to the backup cluster.<br /><br />Automatic failover is controlled by a probe task: <br />1) The probe task first checks the health status of the primary cluster. <br /> 2) If the probe task finds the continuous failure time of the primary cluster exceeds `failoverDelay`, it switches the Pulsar client to the backup cluster. 
+`switchBackDelay`|N/A|Yes|The delay before the Pulsar client switches from the backup cluster to the primary cluster.<br /><br />Automatic failover switchover is controlled by a probe task: <br /> 1) After the Pulsar client switches from the primary cluster to the backup cluster, the probe task continues to check the status of the primary cluster. <br /> 2) If the primary cluster functions well and continuously remains active longer than `switchBackDelay`, the Pulsar client switches back [...]
+`checkInterval`|30s|No|Frequency of performing a probe task (in seconds).
+`secondaryTlsTrustCertsFilePath`|N/A|No|Path to the trusted TLS certificate file of the backup cluster.
+`secondaryAuthentication`|N/A|No|Authentication of the backup cluster.
+
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+This is an example of how to construct a Java Pulsar client to use controlled cluster-level failover. The switchover is triggered by administrators manually.
+
+**Note**: you can have one or several backup clusters, but you can specify only one of them.
+
+```java
+
+public PulsarClient getControlledFailoverClient() throws IOException {
+    Map<String, String> header = new HashMap<>();
+    header.put("service_user_id", "my-user");
+    header.put("service_password", "tiger");
+    header.put("clusterA", "tokenA");
+    header.put("clusterB", "tokenB");
+
+    ServiceUrlProvider provider = ControlledClusterFailover.builder()
+            .defaultServiceUrl("pulsar://localhost:6650")
+            .checkInterval(1, TimeUnit.MINUTES)
+            .urlProvider("http://localhost:8080/test")
+            .urlProviderHeader(header)
+            .build();
+
+    PulsarClient pulsarClient = PulsarClient.builder()
+            .build();
+
+    provider.initialize(pulsarClient);
+    return pulsarClient;
+}
+
+```
+
+Parameter|Default value|Required?|Description
+|---|---|---|---
+`defaultServiceUrl`|N/A|Yes|Pulsar service URL.
+`checkInterval`|30s|No|Frequency of performing a probe task (in seconds).
+`urlProvider`|N/A|Yes|URL provider service.
+`urlProviderHeader`|N/A|No|`urlProviderHeader` is a map containing tokens and credentials. <br /><br />If you enable authentication or authorization between Pulsar clients and primary and backup clusters, you need to provide `urlProviderHeader`.
+
+Here is an example of how `urlProviderHeader` works.
+
+![How urlProviderHeader works](/assets/cluster-level-failover-3.png)
+
+Assume that you want to connect Pulsar client 1 to cluster A.
+
+1. Pulsar client 1 sends the token *t1* to the URL provider service.
+
+2. The URL provider service returns the credential *c1* and the cluster A URL to the Pulsar client.
+   
+   The URL provider service manages all tokens and credentials. It returns different credentials based on different tokens and different target cluster URLs to different Pulsar clients.
+
+   **Note**: **the credential must be in a JSON file and contain parameters as shown**.
+
+   ```json
+   
+   {
+     "serviceUrl": "pulsar+ssl://target:6651",
+     "tlsTrustCertsFilePath": "/security/ca.cert.pem",
+     "authPluginClassName": "org.apache.pulsar.client.impl.auth.AuthenticationTls",
+     "authParamsString": "tlsCertFile:/security/client.cert.pem,tlsKeyFile:/security/client-pk8.pem"
+   }
+   
+   ```
+
+3. Pulsar client 1 connects to cluster A using credential *c1*.
+
+</TabItem>
+
+</Tabs>
+````
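
Since the URL provider service simply returns such a JSON credential over HTTP, it can help to sanity-check the credential file shape before deploying it. A sketch (every path and URL below is an illustrative assumption):

```shell
# Sketch: write an illustrative credential file and verify it is valid JSON.
# All values below are assumptions for illustration only.
cat > /tmp/credential.json <<'EOF'
{
  "serviceUrl": "pulsar+ssl://target:6651",
  "tlsTrustCertsFilePath": "/security/ca.cert.pem",
  "authPluginClassName": "org.apache.pulsar.client.impl.auth.AuthenticationTls",
  "authParamsString": "tlsCertFile:/security/client.cert.pem,tlsKeyFile:/security/client-pk8.pem"
}
EOF
# json.tool exits non-zero on malformed JSON, so this doubles as validation.
python3 -m json.tool /tmp/credential.json > /dev/null && echo "credential OK"
```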
+
+> #### How does cluster-level failover work?
+
+This section explains how cluster-level failover works. For more implementation details, see [PIP-121](https://github.com/apache/pulsar/issues/13315).
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+In an automatic failover cluster, the primary and backup clusters are aware of each other's availability. The automatic failover cluster performs the following actions without administrator intervention:
+
+1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
+   
+2. If the probe task finds the failure time of the primary cluster exceeds the time set in the `failoverDelay` parameter, it searches backup clusters for an available healthy cluster.
+
+   2a) If there are healthy backup clusters, the Pulsar client switches to a backup cluster in the order defined in `secondary`.
+
+   2b) If there is no healthy backup cluster, the Pulsar client does not perform the switchover, and the probe task continues to look for an available backup cluster.
+
+3. The probe task checks whether the primary cluster functions well or not. 
+
+   3a) If the primary cluster comes back and the continuous healthy time exceeds the time set in `switchBackDelay`, the Pulsar client switches back to the primary cluster.
+
+   3b) If the primary cluster does not come back, the Pulsar client does not perform the switchover. 
+
+![Workflow of automatic failover cluster](/assets/cluster-level-failover-4.png)
+
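The switchover and switch-back rules above can be modeled as a small state machine. The following is a sketch with hypothetical names and integer "ticks" in place of the client's real clock, not the actual probe implementation:

```java
// Hypothetical model of the automatic-failover decision: switch to the
// backup once the primary has been down longer than failoverDelay, and
// switch back once it has been healthy again longer than switchBackDelay.
public class FailoverDecisionSketch {
    final long failoverDelay;
    final long switchBackDelay;
    boolean onPrimary = true;
    long primaryDownSince = -1; // -1 means "not currently down"
    long primaryUpSince = -1;   // -1 means "not continuously healthy yet"

    FailoverDecisionSketch(long failoverDelay, long switchBackDelay) {
        this.failoverDelay = failoverDelay;
        this.switchBackDelay = switchBackDelay;
    }

    // One probe-task iteration at time `now`.
    void probe(long now, boolean primaryHealthy, boolean backupHealthy) {
        if (onPrimary) {
            if (primaryHealthy) {
                primaryDownSince = -1;
            } else {
                if (primaryDownSince < 0) primaryDownSince = now;
                // Step 2a: failure time exceeded failoverDelay and a healthy backup exists.
                if (now - primaryDownSince >= failoverDelay && backupHealthy) {
                    onPrimary = false;
                    primaryUpSince = -1;
                }
            }
        } else {
            if (primaryHealthy) {
                if (primaryUpSince < 0) primaryUpSince = now;
                // Step 3a: primary continuously healthy for switchBackDelay.
                if (now - primaryUpSince >= switchBackDelay) {
                    onPrimary = true;
                    primaryDownSince = -1;
                }
            } else {
                // Step 3b: the primary is still down; reset the healthy streak.
                primaryUpSince = -1;
            }
        }
    }
}
```

Note how step 2b falls out of the model: with no healthy backup, the client simply stays on the primary and keeps probing.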
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
+
+2. The probe task fetches the service URL configuration from the URL provider service, which is configured by `urlProvider`.
+
+   2a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.
+
+   2b) If the service URL configuration is not changed, the Pulsar client does not perform the switchover.
+
+3. If the Pulsar client switches to the target cluster, the probe task continues to fetch service URL configuration from the URL provider service at intervals defined in `checkInterval`. 
+
+   3a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.
+
+   3b) If the service URL configuration is not changed, it does not perform the switchover.
+
+![Workflow of controlled failover cluster](/assets/cluster-level-failover-5.png)
+
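Unlike the automatic mode, the controlled-failover probe above reduces to a comparison of service URLs. A minimal sketch with hypothetical names, not the Pulsar API:

```java
// Hypothetical model of the controlled-failover probe: the client switches
// whenever the URL provider reports a service URL different from the one
// currently in use, without any health check of the target cluster.
public class ControlledFailoverSketch {
    private String currentUrl;

    ControlledFailoverSketch(String initialUrl) {
        this.currentUrl = initialUrl;
    }

    // One probe iteration: compare the fetched configuration with the
    // current one and switch only if it changed. Returns true on switchover.
    boolean probe(String urlFromProvider) {
        if (!urlFromProvider.equals(currentUrl)) {
            currentUrl = urlFromProvider;
            return true;
        }
        return false; // configuration unchanged, no switchover
    }

    String currentUrl() {
        return currentUrl;
    }
}
```

Because the decision is purely configuration-driven, an operator changes the URL returned by the provider service to force every client onto the target cluster.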
+</TabItem>
+
+</Tabs>
+````
+
 ## Producer
 
 In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).
@@ -241,7 +569,9 @@ Name| Type |  <div>Description</div>|  Default
 `batchingMaxPublishDelayMicros`| long|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
 `batchingMaxMessages` |int|The maximum number of messages permitted in a batch.|1000
 `batchingEnabled`| boolean|Enable batching of messages. |true
+`chunkingEnabled` | boolean | Enable chunking of messages. |false
 `compressionType`|CompressionType|Message data compression type used by a producer. <br />Available options:<li>[`LZ4`](https://github.com/lz4/lz4)</li><li>[`ZLIB`](https://zlib.net/)<br /></li><li>[`ZSTD`](https://facebook.github.io/zstd/)</li><li>[`SNAPPY`](https://google.github.io/snappy/)</li>| No compression
+`initialSubscriptionName`|string|Use this configuration to automatically create an initial subscription when creating a topic. If this field is not set, the initial subscription is not created.|null
 
 You can configure parameters if you do not want to use the default configuration.
 
@@ -295,6 +625,24 @@ producer.newMessage()
 
 You can terminate the builder chain with `sendAsync()` and get a future return.
 
+### Enable chunking
+
+Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side. 
+
+The message chunking feature is OFF by default. The following is an example of how to enable message chunking when creating a producer.
+
+```java
+
+Producer<byte[]> producer = client.newProducer()
+        .topic(topic)
+        .enableChunking(true)
+        .enableBatching(false)
+        .create();
+
+```
+
+> **Note:** To enable chunking, you must also disable batching (`enableBatching`=`false`).
+
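Conceptually, chunking splits a payload on the producer side and concatenates the chunks back on the consumer side. The byte-level idea can be sketched as follows; this illustrates the splitting arithmetic only, not Pulsar's wire format or chunk metadata:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the chunking idea: split a payload larger than the allowed
// maximum into fixed-size chunks, then reassemble them in order.
public class ChunkingSketch {
    // Producer side: cut the payload into chunks of at most maxChunkSize bytes.
    static List<byte[]> split(byte[] payload, int maxChunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < payload.length; offset += maxChunkSize) {
            int end = Math.min(offset + maxChunkSize, payload.length);
            chunks.add(Arrays.copyOfRange(payload, offset, end));
        }
        return chunks;
    }

    // Consumer side: concatenate the buffered chunks back into one payload.
    static byte[] join(List<byte[]> chunks) {
        int total = chunks.stream().mapToInt(c -> c.length).sum();
        byte[] out = new byte[total];
        int offset = 0;
        for (byte[] chunk : chunks) {
            System.arraycopy(chunk, 0, out, offset, chunk.length);
            offset += chunk.length;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] message = "a large message payload".getBytes(StandardCharsets.UTF_8);
        List<byte[]> chunks = split(message, 8); // pretend the size limit is 8 bytes
        System.out.println(chunks.size() + " chunks");
        System.out.println(new String(join(chunks), StandardCharsets.UTF_8));
    }
}
```

In the real client, each chunk also carries metadata (message UUID, chunk index, total chunk count) so the consumer can buffer and order chunks that arrive interleaved with other messages.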
 ## Consumer
 
 In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
@@ -382,7 +730,11 @@ When you create a consumer, you can use the `loadConf` configuration. The follow
 `deadLetterPolicy`|DeadLetterPolicy|Dead letter policy for consumers.<br /><br />By default, some messages are probably redelivered many times, even to the extent that it never stops.<br /><br />By using the dead letter mechanism, messages have the max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.<br /><br />You can enable the dead letter mechanism by setting `deadLetterPolicy`.<br /><br [...]
 `autoUpdatePartitions`|boolean|If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increasement automatically.<br /><br />**Note**: this is only for partitioned consumers.|true
 `replicateSubscriptionState`|boolean|If `replicateSubscriptionState` is enabled, a subscription state is replicated to geo-replicated clusters.|false
-`negativeAckRedeliveryBackoff`|NegativeAckRedeliveryBackoff|Interface for custom message is negativeAcked policy. You can specify `NegativeAckRedeliveryBackoff` for a consumer.| `NegativeAckRedeliveryExponentialBackoff`
+`negativeAckRedeliveryBackoff`|RedeliveryBackoff|Interface for customizing the redelivery backoff policy of negatively acknowledged messages. You can specify a `RedeliveryBackoff` for a consumer.| `MultiplierRedeliveryBackoff`
+`ackTimeoutRedeliveryBackoff`|RedeliveryBackoff|Interface for customizing the redelivery backoff policy of messages that exceed the acknowledgement timeout. You can specify a `RedeliveryBackoff` for a consumer.| `MultiplierRedeliveryBackoff`
+`autoAckOldestChunkedMessageOnQueueFull`|boolean|Whether to automatically acknowledge pending chunked messages when the threshold of `maxPendingChunkedMessage` is reached. If set to `false`, these messages are redelivered by the broker. |true
+`maxPendingChunkedMessage`|int| The maximum size of a queue holding pending chunked messages. When the threshold is reached, the consumer drops pending messages to optimize memory utilization.|10
+`expireTimeOfIncompleteChunkedMessageMillis`|long|The time interval to expire incomplete chunks if a consumer fails to receive all the chunks in the specified time period. The default value is 1 minute. | 60000
 
 You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class. 
 
@@ -462,27 +814,78 @@ BatchReceivePolicy.builder()
 
 :::
 
+### Configure chunking
+
+You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` and `autoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. The `expireTimeOfIncompleteChunkedMessage` parameter decides the time interval to expire incomplete chunks if the consumer fails to receive all chunks of a me [...]
+
+The following is an example of how to configure message chunking.
+
+```java
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic(topic)
+        .subscriptionName("test")
+        .autoAckOldestChunkedMessageOnQueueFull(true)
+        .maxPendingChunkedMessage(100)
+        .expireTimeOfIncompleteChunkedMessage(10, TimeUnit.MINUTES)
+        .subscribe();
+
+```
+
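The queue-full behavior controlled by `maxPendingChunkedMessage` and `autoAckOldestChunkedMessageOnQueueFull` can be sketched as follows. This is a plain-Java model with hypothetical names, not the consumer's actual implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical model of the pending-chunked-message limit: when the queue of
// in-progress chunked messages is full, the oldest one is dropped, and it is
// either acknowledged silently or left for the broker to redeliver later.
public class PendingChunkQueueSketch {
    private final int maxPending;
    private final boolean autoAckOldestOnQueueFull;
    private final Deque<String> pending = new ArrayDeque<>();
    int silentlyAcked = 0;
    int askedToRedeliver = 0;

    PendingChunkQueueSketch(int maxPending, boolean autoAckOldestOnQueueFull) {
        this.maxPending = maxPending;
        this.autoAckOldestOnQueueFull = autoAckOldestOnQueueFull;
    }

    // Called when the first chunk of a new large message arrives.
    void startMessage(String messageId) {
        if (pending.size() == maxPending) {
            pending.removeFirst(); // drop the oldest pending message
            if (autoAckOldestOnQueueFull) {
                silentlyAcked++;       // acked without being delivered
            } else {
                askedToRedeliver++;    // broker will redeliver it later
            }
        }
        pending.addLast(messageId);
    }

    int pendingCount() {
        return pending.size();
    }
}
```

The trade-off the two flags encode: `true` favors memory and throughput at the cost of possibly skipping a large message; `false` favors delivery guarantees at the cost of redelivery traffic.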
 ### Negative acknowledgment redelivery backoff
 
-The `NegativeAckRedeliveryBackoff` introduces a redelivery backoff mechanism. You can achieve redelivery with different delays by setting `redeliveryCount ` of messages. 
+The `RedeliveryBackoff` interface introduces a redelivery backoff mechanism. You can redeliver negatively acknowledged messages with different delays based on their `redeliveryCount`.
+
+```java
+
+Consumer consumer =  client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+                .minDelayMs(1000)
+                .maxDelayMs(60 * 1000)
+                .build())
+        .subscribe();
+
+```
+
+### Acknowledgement timeout redelivery backoff
+
+The `RedeliveryBackoff` interface introduces a redelivery backoff mechanism. You can redeliver messages with different delays based on the number of times a message has been retried.
 
 ```java
 
 Consumer consumer =  client.newConsumer()
         .topic("my-topic")
         .subscriptionName("my-subscription")
-        .negativeAckRedeliveryBackoff(NegativeAckRedeliveryExponentialBackoff.builder()
-                .minNackTimeMs(1000)
-                .maxNackTimeMs(60 * 1000)
+        .ackTimeout(10, TimeUnit.SECONDS)
+        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+                .minDelayMs(1000)
+                .maxDelayMs(60000)
+                .multiplier(2)
                 .build())
         .subscribe();
 
 ```
 
+The message redelivery behavior should be as follows.
+
+Redelivery count | Redelivery delay
+:--------------------|:-----------
+1 | 10 + 1 seconds
+2 | 10 + 2 seconds
+3 | 10 + 4 seconds
+4 | 10 + 8 seconds
+5 | 10 + 16 seconds
+6 | 10 + 32 seconds
+7 | 10 + 60 seconds
+8 | 10 + 60 seconds
+
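The delays in the table follow a capped exponential curve: the acknowledgement timeout plus min(minDelayMs * multiplier^(redeliveryCount - 1), maxDelayMs). A sketch of that arithmetic under the configuration above, not the client's actual implementation:

```java
// Reproduce the redelivery-delay table for ackTimeout = 10 s,
// minDelayMs = 1000, maxDelayMs = 60000, multiplier = 2.
public class BackoffTableSketch {
    // Total delay in seconds before the n-th redelivery.
    static long redeliveryDelaySeconds(long ackTimeoutSec, long minDelayMs,
                                       long maxDelayMs, double multiplier,
                                       int redeliveryCount) {
        double backoffMs = minDelayMs * Math.pow(multiplier, redeliveryCount - 1);
        long cappedMs = (long) Math.min(backoffMs, maxDelayMs); // cap at maxDelayMs
        return ackTimeoutSec + cappedMs / 1000;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 8; n++) {
            System.out.println(n + " | "
                    + redeliveryDelaySeconds(10, 1000, 60000, 2, n) + " seconds");
        }
    }
}
```

For example, the 4th redelivery waits 10 + 8 = 18 seconds, and from the 7th redelivery onward the backoff stays capped at 60 seconds.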
 :::note
 
 - The `negativeAckRedeliveryBackoff` does not work with `consumer.negativeAcknowledge(MessageId messageId)` because you are not able to get the redelivery count from the message ID.
-- If a consumer crashes, it triggers the redelivery of unacked messages. In this case, `NegativeAckRedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time from the backoff.
+- If a consumer crashes, it triggers the redelivery of unacked messages. In this case, `RedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time from the backoff.
 
 :::
 
@@ -870,6 +1273,53 @@ pulsarClient.newReader()
 
 Total hash range size is 65536, so the max end of the range should be less than or equal to 65535.
 
+
+## TableView
+
+The TableView interface provides an encapsulated access pattern, offering a continuously updated key-value map view of the compacted topic data. Messages without keys are ignored.
+
+With TableView, Pulsar clients can fetch all the message updates from a topic and construct a map with the latest values of each key. These values can then be used to build a local cache of data. In addition, you can register consumers with the TableView by specifying a listener to perform a scan of the map and then receive notifications when new messages are received. Consequently, event handling can be triggered to serve use cases, such as event-driven applications and message monitoring.
+
+> **Note:** Each TableView uses one Reader instance per partition, and reads the topic starting from the compacted view by default. It is highly recommended to enable automatic compaction by [configuring the topic compaction policies](cookbooks-compaction.md#configuring-compaction-to-run-automatically) for the given topic or namespace. More frequent compaction results in shorter startup times because less data is replayed to reconstruct the TableView of the topic.
+
+The following figure illustrates the dynamic construction of a TableView updated with newer values of each key.
+![TableView](/assets/tableview.png)
+
+### Configure TableView
+ 
+The following is an example of how to configure a TableView.
+
+```java
+
+TableView<String> tv = client.newTableViewBuilder(Schema.STRING)
+  .topic("my-tableview")
+  .create();
+
+```
+
+You can use the available parameters in the `loadConf` configuration or related [API](https://pulsar.apache.org/api/client/2.10.0-SNAPSHOT/org/apache/pulsar/client/api/TableViewBuilder.html) to customize your TableView.
+
+| Name | Type| Required? |  <div>Description</div> | Default
+|---|---|---|---|---
+| `topic` | string | yes | The topic name of the TableView. | N/A
+| `autoUpdatePartitionInterval` | int | no | The interval to check for newly added partitions. | 60 (seconds)
+
+### Register listeners
+ 
+You can register listeners for both existing messages on a topic and new messages coming into the topic by using `forEachAndListen`, or perform operations on all existing messages only by using `forEach`.
+
+The following is an example of how to register listeners with TableView.
+
+```java
+
+// Register listeners for all existing and incoming messages
+tv.forEachAndListen((key, value) -> { /* operations on all existing and incoming messages */ });
+
+// Register action for all existing messages
+tv.forEach((key, value) -> { /* operations on all existing messages */ });
+
+```
+
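The latest-value semantics described above (each keyed message overwrites the previous value for its key, and listeners observe every update) can be sketched in plain Java. This is a model of the behavior, not the Pulsar `TableView` API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch of TableView semantics: a map of the latest value per key, with a
// listener that first replays existing entries and then sees each update.
public class TableViewSketch {
    private final Map<String, String> view = new HashMap<>();
    private BiConsumer<String, String> listener = (k, v) -> { };

    // Analogous to forEachAndListen: act on existing entries, then on updates.
    void forEachAndListen(BiConsumer<String, String> action) {
        view.forEach(action);
        this.listener = action;
    }

    // A new message arrives on the compacted topic. A real TableView ignores
    // messages without a key, so the sketch does too.
    void onMessage(String key, String value) {
        if (key == null) {
            return;
        }
        view.put(key, value);          // newer value overwrites the older one
        listener.accept(key, value);   // notify the registered listener
    }

    String get(String key) {
        return view.get(key);
    }
}
```

This mirrors why frequent compaction shortens TableView startup: only the latest value per key needs to be replayed to rebuild the map.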
 ## Schema
 
 In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
diff --git a/site2/website-next/docs/client-libraries-python.md b/site2/website-next/docs/client-libraries-python.md
index e601730..666971b 100644
--- a/site2/website-next/docs/client-libraries-python.md
+++ b/site2/website-next/docs/client-libraries-python.md
@@ -50,8 +50,8 @@ Installation via PyPi is available for the following Python versions:
 
 Platform | Supported Python versions
 :--------|:-------------------------
-MacOS <br />  10.13 (High Sierra), 10.14 (Mojave) <br /> | 2.7, 3.7
-Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8
+MacOS <br />  10.13 (High Sierra), 10.14 (Mojave) <br /> | 2.7, 3.7, 3.8, 3.9
+Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9
 
 ### Install from source
 
@@ -112,7 +112,7 @@ while True:
         print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
         # Acknowledge successful processing of the message
         consumer.acknowledge(msg)
-    except:
+    except Exception:
         # Message failed to be processed
         consumer.negative_acknowledge(msg)
 
@@ -183,7 +183,7 @@ while True:
         print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
         # Acknowledge successful processing of the message
         consumer.acknowledge(msg)
-    except:
+    except Exception:
         # Message failed to be processed
         consumer.negative_acknowledge(msg)
 client.close()
@@ -333,7 +333,7 @@ while True:
         print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c))
         # Acknowledge successful processing of the message
         consumer.acknowledge(msg)
-    except:
+    except Exception:
         # Message failed to be processed
         consumer.negative_acknowledge(msg)
 
diff --git a/site2/website-next/docs/client-libraries-websocket.md b/site2/website-next/docs/client-libraries-websocket.md
index c663f97..a6e6036 100644
--- a/site2/website-next/docs/client-libraries-websocket.md
+++ b/site2/website-next/docs/client-libraries-websocket.md
@@ -32,7 +32,7 @@ webSocketServiceEnabled=true
 
 In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters:
 
-* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers)
+* [`configurationMetadataStoreUrl`](reference-configuration.md#websocket)
 * [`webServicePort`](reference-configuration.md#websocket-webServicePort)
 * [`clusterName`](reference-configuration.md#websocket-clusterName)
 
@@ -40,7 +40,7 @@ Here's an example:
 
 ```properties
 
-configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
+configurationMetadataStoreUrl=zk1:2181,zk2:2181,zk3:2181
 webServicePort=8080
 clusterName=my-cluster
 
diff --git a/site2/website-next/docs/client-libraries.md b/site2/website-next/docs/client-libraries.md
index ab5b7c4..536cd0c 100644
--- a/site2/website-next/docs/client-libraries.md
+++ b/site2/website-next/docs/client-libraries.md
@@ -6,16 +6,25 @@ sidebar_label: "Overview"
 
 Pulsar supports the following client libraries:
 
-- [Java client](client-libraries-java)
-- [Go client](client-libraries-go)
-- [Python client](client-libraries-python)
-- [C++ client](client-libraries-cpp)
-- [Node.js client](client-libraries-node)
-- [WebSocket client](client-libraries-websocket)
-- [C# client](client-libraries-dotnet)
+|Language|Documentation|Release note|Code repo
+|---|---|---|---
+Java |- [User doc](client-libraries-java) <br /><br />- [API doc](https://pulsar.apache.org/api/client/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client) 
+C++ | - [User doc](client-libraries-cpp) <br /><br />- [API doc](https://pulsar.apache.org/api/cpp/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp) 
+Python | - [User doc](client-libraries-python) <br /><br />- [API doc](https://pulsar.apache.org/api/python/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) 
+WebSocket| [User doc](client-libraries-websocket) | [Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-websocket) 
+Go client|[User doc](client-libraries-go.md)|[Here](https://github.com/apache/pulsar-client-go/blob/master/CHANGELOG) |[Here](https://github.com/apache/pulsar-client-go) 
+Node.js|[User doc](client-libraries-node)|[Here](https://github.com/apache/pulsar-client-node/releases) |[Here](https://github.com/apache/pulsar-client-node) 
+C# |[User doc](client-libraries-dotnet.md)| [Here](https://github.com/apache/pulsar-dotpulsar/blob/master/CHANGELOG)|[Here](https://github.com/apache/pulsar-dotpulsar) 
+
+:::note
+
+- The code repos of **Java, C++, Python,** and **WebSocket** clients are hosted in the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are released with Pulsar, so their release notes are part of the [Pulsar release notes](https://pulsar.apache.org/release-notes/).
+- The code repos of **Go, Node.js,** and **C#** clients are hosted outside of the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are not released with Pulsar, so they have independent release notes.
+
+:::
 
 ## Feature matrix
-Pulsar client feature matrix for different languages is listed on [Pulsar Feature Matrix (Client and Function)](https://github.com/apache/pulsar/wiki/PIP-108%3A-Pulsar-Feature-Matrix-%28Client-and-Function%29) page.
+Pulsar client feature matrix for different languages is listed on [Pulsar Feature Matrix (Client and Function)](https://docs.google.com/spreadsheets/d/1YHYTkIXR8-Ql103u-IMI18TXLlGStK8uJjDsOOA0T20/edit#gid=1784579914) page.
 
 ## Third-party clients
 
diff --git a/site2/website-next/docs/concepts-architecture-overview.md b/site2/website-next/docs/concepts-architecture-overview.md
index 8fe0717..a2b024d 100644
--- a/site2/website-next/docs/concepts-architecture-overview.md
+++ b/site2/website-next/docs/concepts-architecture-overview.md
@@ -47,6 +47,9 @@ Clusters can replicate amongst themselves using [geo-replication](concepts-repli
 
 The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and [BookKeeper metadata store](https://bookkee [...]
 
+> Pulsar also supports other metadata backend services, including [ETCD](https://etcd.io/) and [RocksDB](http://rocksdb.org/) (for standalone Pulsar only).
+
+
 In a Pulsar instance:
 
 * A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
@@ -128,9 +131,10 @@ Architecturally, the Pulsar proxy gets all the information it requires from ZooK
 
 ```bash
 
+$ cd /path/to/pulsar/directory
 $ bin/pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk-2 \
-  --configuration-store-servers zk-0,zk-1,zk-2
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
diff --git a/site2/website-next/docs/concepts-messaging.md b/site2/website-next/docs/concepts-messaging.md
index f23fae9..c370181 100644
--- a/site2/website-next/docs/concepts-messaging.md
+++ b/site2/website-next/docs/concepts-messaging.md
@@ -108,29 +108,50 @@ To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar i
 By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overheads. 
 
 ### Chunking
-Before you enable chunking, read the following instructions.
-- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance.
-- Chunking is only supported for persisted topics.
-- Chunking is only supported for Exclusive and Failover subscription types.
+Message chunking enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side.
 
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
+With message chunking enabled, when the size of a message exceeds the maximum allowed payload size (the broker's `maxMessageSize` parameter), the messaging workflow is as follows:
+1. The producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. 
+2. The broker stores the chunked messages in one managed-ledger in the same way as that of ordinary messages, and it uses the `chunkedMessageRate` parameter to record chunked message rate on the topic.
+3. The consumer buffers the chunked messages and aggregates them into the receiver queue when it receives all the chunks of a message.
+4. The client consumes the aggregated message from the receiver queue. 
 
-The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. And then the consumer stitches chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated to that large message. You can set the `maxPendingChunkedMessage` param [...]
+**Limitations:** 
+- Chunking is only available for persisted topics.
+- Chunking is only available for the exclusive and failover subscription types.
+- Chunking cannot be enabled simultaneously with batching.
 
-The broker does not require any changes to support chunking for non-shared subscription. The broker only uses `chunkedMessageRate` to record chunked message rate on the topic.
+#### Handle consecutive chunked messages with one ordered consumer
 
-#### Handle chunked messages with one producer and one ordered consumer
-
-As shown in the following figure, when a topic has one producer which publishes large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, combi [...]
+The following figure shows a topic with one producer which publishes a large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks labeled M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, a [...]
 
 ![](/assets/chunking-01.png)
 
-#### Handle chunked messages with multiple producers and one ordered consumer
+#### Handle interwoven chunked messages with one ordered consumer
 
-When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of the specific message are still in order but might not be consecutive in the managed-ledger. This brings some memory pressure to the c [...]
+When multiple producers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different producers in the same managed-ledger. The chunked messages in the managed-ledger can be interwoven with each other. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of the specific message are still in order but might not be  [...]
 
 ![](/assets/chunking-02.png)
 
+:::note
+
+In this case, interwoven chunked messages may bring some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all its chunks in one message. You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` parameter. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later, opt [...]
+
+:::
+
+#### Enable message chunking
+
+**Prerequisite:** Disable batching by setting the `enableBatching` parameter to `false`.
+
+The message chunking feature is OFF by default. 
+To enable message chunking, set the `chunkingEnabled` parameter to `true` when creating a producer.
+
+:::note
+
+If the consumer fails to receive all chunks of a message within a specified time period, it expires incomplete chunks. The default value is 1 minute. For more information about the `expireTimeOfIncompleteChunkedMessage` parameter, refer to [org.apache.pulsar.client.api](https://pulsar.apache.org/api/client/).
+
+:::
+
 ## Consumers
 
 A consumer is a process that attaches to a topic via a subscription and then receives messages.
@@ -232,9 +253,9 @@ Use the following API to enable `Negative Redelivery Backoff`.
 
 ```java
 
-consumer.negativeAckRedeliveryBackoff(NegativeAckRedeliveryExponentialBackoff.builder()
-        .minNackTimeMs(1000)
-        .maxNackTimeMs(60 * 1000)
+consumer.negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+        .minDelayMs(1000)
+        .maxDelayMs(60 * 1000)
         .build())
 
 ```
@@ -245,6 +266,34 @@ The acknowledgement timeout mechanism allows you to set a time range during whic
 
 You can configure the acknowledgement timeout mechanism to redeliver the message if it is not acknowledged after `ackTimeout` or to execute a timer task to check the acknowledgement timeout messages during every `ackTimeoutTickTime` period.
 
+You can also use the redelivery backoff mechanism to redeliver messages with different delays based on the number of times a message has been retried.
+
+If you want to use redelivery backoff, you can use the following API.
+
+```java
+
+consumer.ackTimeout(10, TimeUnit.SECONDS)
+        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+        .minDelayMs(1000)
+        .maxDelayMs(60000)
+        .multiplier(2).build())
+
+```
+
+The message redelivery behavior should be as follows.
+
+Redelivery count | Redelivery delay
+:--------------------|:-----------
+1 | 10 + 1 seconds
+2 | 10 + 2 seconds
+3 | 10 + 4 seconds
+4 | 10 + 8 seconds
+5 | 10 + 16 seconds
+6 | 10 + 32 seconds
+7 | 10 + 60 seconds
+8 | 10 + 60 seconds
+
 :::note
 
 - If batching is enabled, all messages in one batch are redelivered to the consumer.  
@@ -315,6 +364,23 @@ Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
 
 ```
 
+By default, there is no subscription during a DLQ topic creation. Without a just-in-time subscription to the DLQ topic, you may lose messages. To automatically create an initial subscription for the DLQ, you can specify the `initialSubscriptionName` parameter. If this parameter is set but the broker's `allowAutoSubscriptionCreation` is disabled, the DLQ producer will fail to be created.
+
+```java
+
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+                .topic(topic)
+                .subscriptionName("my-subscription")
+                .subscriptionType(SubscriptionType.Shared)
+                .deadLetterPolicy(DeadLetterPolicy.builder()
+                      .maxRedeliverCount(maxRedeliveryCount)
+                      .deadLetterTopic("your-topic-name")
+                      .initialSubscriptionName("init-sub")
+                      .build())
+                .subscribe();
+
+```
+
 Dead letter topic depends on message redelivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout. 
 
 :::note
diff --git a/site2/website-next/docs/cookbooks-deduplication.md b/site2/website-next/docs/cookbooks-deduplication.md
index e71e6f4..a14a3c3 100644
--- a/site2/website-next/docs/cookbooks-deduplication.md
+++ b/site2/website-next/docs/cookbooks-deduplication.md
@@ -31,6 +31,7 @@ Parameter | Description | Default
 `brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false`
 `brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
 `brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
+`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120`
 `brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)
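Because `brokerDeduplicationSnapshotIntervalSeconds` runs alongside `brokerDeduplicationEntriesInterval`, a snapshot is taken when either threshold is crossed first. A sketch of that either-or trigger with the defaults above (the helper is illustrative, not broker code):

```python

def should_snapshot(entries_since_last, seconds_since_last,
                    entries_interval=1000, interval_seconds=120):
    """Take a snapshot when either the entry count or the elapsed
    time since the last snapshot reaches its configured interval."""
    return (entries_since_last >= entries_interval
            or seconds_since_last >= interval_seconds)

# A slow topic still snapshots on time: 50 entries, but 130 s elapsed.
assert should_snapshot(50, 130) is True
# A busy topic snapshots on entry count: 1000 entries in 10 s.
assert should_snapshot(1000, 10) is True

```

The time-based trigger bounds topic recovery time for low-traffic topics that would otherwise rarely reach the entry threshold.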
 
 ### Set default value at the broker-level
diff --git a/site2/website-next/docs/deploy-bare-metal-multi-cluster.md b/site2/website-next/docs/deploy-bare-metal-multi-cluster.md
index 9dd2526..875b75d 100644
--- a/site2/website-next/docs/deploy-bare-metal-multi-cluster.md
+++ b/site2/website-next/docs/deploy-bare-metal-multi-cluster.md
@@ -226,8 +226,8 @@ You can initialize this metadata using the [`initialize-cluster-metadata`](refer
 
 $ bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
@@ -308,7 +308,7 @@ Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper b
 
 You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
 
-The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those  [...]
+The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`metadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the local quorum and the [`configurationMetadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same c [...]
 
 You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata (especially when you use a different port from default) of the cluster.
 
@@ -317,10 +317,10 @@ The following is an example configuration:
 ```properties
 
 # Local ZooKeeper servers
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+metadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
 
 # Configuration store quorum connection string.
-configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+configurationMetadataStoreUrl=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
 
 clusterName=us-west
 
diff --git a/site2/website-next/docs/deploy-bare-metal.md b/site2/website-next/docs/deploy-bare-metal.md
index 4e3ba08..f32701d 100644
--- a/site2/website-next/docs/deploy-bare-metal.md
+++ b/site2/website-next/docs/deploy-bare-metal.md
@@ -46,7 +46,7 @@ To run Pulsar on bare metal, the following configuration is recommended:
 
 :::
 
-Each machine in your cluster needs to have [Java 8](https://adoptopenjdk.net/?variant=openjdk8) or [Java 11](https://adoptopenjdk.net/?variant=openjdk11) installed.
+Each machine in your cluster needs to have [Java 8](https://adoptium.net/?variant=openjdk8) or [Java 11](https://adoptium.net/?variant=openjdk11) installed.
 
 The following is a diagram showing the basic setup:
 
@@ -270,8 +270,8 @@ You can initialize this metadata using the [`initialize-cluster-metadata`](refer
 
 $ bin/pulsar initialize-cluster-metadata \
   --cluster pulsar-cluster-1 \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2181 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080 \
   --web-service-url-tls https://pulsar.us-west.example.com:8443 \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
@@ -381,12 +381,12 @@ Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Bro
 
 ### Configure Brokers
 
-The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are correct. In this case, since you only have 1 cluster and no configuration store setup, the `configurationStoreServers` point to the same `zookeeperServers`.
+The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you have deployed. Ensure that the [`metadataStoreUrl`](reference-configuration.md#broker) and [`configurationMetadataStoreUrl`](reference-configuration.md#broker) parameters are correct. In this case, since you only have one cluster and no separate configuration store, the `configurationMetadataStoreUrl` points to the same servers as `metadataStoreUrl`.
 
 ```properties
 
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
-configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+metadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+configurationMetadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
 
 ```
 
diff --git a/site2/website-next/docs/deploy-monitoring.md b/site2/website-next/docs/deploy-monitoring.md
index 95ccdd6..adf3587 100644
--- a/site2/website-next/docs/deploy-monitoring.md
+++ b/site2/website-next/docs/deploy-monitoring.md
@@ -51,7 +51,7 @@ http://$GLOBAL_ZK_SERVER:8001/metrics
 
 ```
 
-The default port of local ZooKeeper is `8000` and the default port of configuration store is `8001`. You can change the default port of local ZooKeeper and configuration store by specifying system property `stats_server_port`.
+The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
 
 ### BookKeeper stats
 
diff --git a/site2/website-next/docs/develop-binary-protocol.md b/site2/website-next/docs/develop-binary-protocol.md
index fa03383..63e43dd 100644
--- a/site2/website-next/docs/develop-binary-protocol.md
+++ b/site2/website-next/docs/develop-binary-protocol.md
@@ -240,8 +240,10 @@ Parameters:
 ##### Command Send
 
 Command `Send` is used to publish a new message within the context of an
-already existing producer. This command is used in a frame that includes command
-as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.
+already existing producer. If a producer has not yet been created for the
+connection, the broker will terminate the connection. This command is used
+in a frame that includes command as well as message payload, for which the
+complete format is specified in the [payload commands](#payload-commands) section.
 
 ```protobuf
 
diff --git a/site2/website-next/docs/functions-develop.md b/site2/website-next/docs/functions-develop.md
index f48df25..c5a09bc 100644
--- a/site2/website-next/docs/functions-develop.md
+++ b/site2/website-next/docs/functions-develop.md
@@ -19,7 +19,9 @@ Interface | Description | Use cases
 :---------|:------------|:---------
 Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python). | Functions that do not require access to the function [context](#context).
 Pulsar Function SDK for Java/Python/Go | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces. | Functions that require access to the function [context](#context).
+Extended Pulsar Function SDK for Java | An extension to Pulsar-specific libraries, providing the initialization and close interfaces in Java. | Functions that require initializing and releasing external resources.
 
+### Language-native interface
 The language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, has no external dependencies. The following example is a language-native function.
 
 ````mdx-code-block
@@ -75,6 +77,7 @@ sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
 </Tabs>
 ````
 
+### Pulsar Function SDK for Java/Python/Go
 The following example uses Pulsar Functions SDK.
 ````mdx-code-block
 <Tabs 
@@ -148,6 +151,64 @@ For complete code, see [here](https://github.com/apache/pulsar/blob/77cf09eafa4f
 </Tabs>
 ````
 
+### Extended Pulsar Function SDK for Java
+This extended Pulsar Function SDK provides two additional interfaces to initialize and release external resources.
+- By using the `initialize` interface, you can initialize external resources which only need one-time initialization when the function instance starts.
+- By using the `close` interface, you can close the referenced external resources when the function instance closes. 
+
+:::note
+
+The extended Pulsar Function SDK for Java is available in Pulsar 2.10.0 and later versions.
+Before using it, you need to set up Pulsar Function worker 2.10.0 or a later version.
+
+:::
+
+The following example uses the extended interface of Pulsar Function SDK for Java to initialize RedisClient when the function instance starts and release it when the function instance closes.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Java"
+  values={[{"label":"Java","value":"Java"}]}>
+<TabItem value="Java">
+
+```Java
+
+import java.util.Map;
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import io.lettuce.core.RedisClient;
+
+public class InitializableFunction implements Function<String, String> {
+    private RedisClient redisClient;
+    
+    private void initRedisClient(Map<String, Object> connectInfo) {
+        redisClient = RedisClient.create((String) connectInfo.get("redisURI"));
+    }
+
+    @Override
+    public void initialize(Context context) {
+        Map<String, Object> connectInfo = context.getUserConfigMap();
+        initRedisClient(connectInfo);
+    }
+    
+    @Override
+    public String process(String input, Context context) {
+        String value = redisClient.connect().sync().get(input);
+        return String.format("%s-%s", input, value);
+    }
+
+    @Override
+    public void close() {
+        redisClient.close();
+    }
+}
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
 ## Schema registry
 Pulsar has a built-in schema registry and is bundled with popular schema types, such as Avro, JSON and Protobuf. Pulsar Functions can leverage the existing schema information from input topics and derive the input type. The schema registry applies for output topic as well.
 
@@ -1204,7 +1265,24 @@ class MetricRecorderFunction(Function):
 </TabItem>
 <TabItem value="Go">
 
-Currently, the feature is not available in Go.
+The Go SDK [`Context`](#context) object enables you to record metrics on a per-key basis. For example, you can set a metric for the `hit-count` key and a different metric for the `elevens-count` key every time the function processes a message:
+
+```go
+
+func metricRecorderFunction(ctx context.Context, in []byte) error {
+	inputstr := string(in)
+	fctx, ok := pf.FromContext(ctx)
+	if !ok {
+		return errors.New("get Go Functions Context error")
+	}
+	fctx.RecordMetric("hit-count", 1)
+	if inputstr == "eleven" {
+		fctx.RecordMetric("elevens-count", 1)
+	}
+	return nil
+}
+
+```
 
 </TabItem>
 
diff --git a/site2/website-next/docs/functions-runtime.md b/site2/website-next/docs/functions-runtime.md
index 67dd892..13edb1b 100644
--- a/site2/website-next/docs/functions-runtime.md
+++ b/site2/website-next/docs/functions-runtime.md
@@ -295,7 +295,7 @@ For example, if you use token authentication, you need to configure the followin
 
 clientAuthenticationPlugin: org.apache.pulsar.client.impl.auth.AuthenticationToken
 clientAuthenticationParameters: file:///etc/pulsar/token/admin-token.txt
-configurationStoreServers: zookeeper-cluster:2181 # auth requires a connection to zookeeper
+configurationMetadataStoreUrl: zk:zookeeper-cluster:2181 # auth requires a connection to zookeeper
 authenticationProviders:
  - "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
 authorizationEnabled: true
diff --git a/site2/website-next/docs/functions-worker.md b/site2/website-next/docs/functions-worker.md
index e8b1ce8..92ee57f 100644
--- a/site2/website-next/docs/functions-worker.md
+++ b/site2/website-next/docs/functions-worker.md
@@ -256,13 +256,13 @@ properties:
 
 ##### Enable Authorization Provider
 
-To enable authorization on Functions Worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationStoreServers`. The authentication provider connects to `configurationStoreServers` to receive namespace policies.
+To enable authorization on Functions Worker, you need to configure `authorizationEnabled`, `authorizationProvider` and `configurationMetadataStoreUrl`. The authentication provider connects to `configurationMetadataStoreUrl` to receive namespace policies.
 
 ```yaml
 
 authorizationEnabled: true
 authorizationProvider: org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
-configurationStoreServers: <configuration-store-servers>
+configurationMetadataStoreUrl: <meta-type>:<configuration-metadata-store-url>
 
 ```
 
diff --git a/site2/website-next/docs/io-elasticsearch-sink.md b/site2/website-next/docs/io-elasticsearch-sink.md
index b655917..568c12e 100644
--- a/site2/website-next/docs/io-elasticsearch-sink.md
+++ b/site2/website-next/docs/io-elasticsearch-sink.md
@@ -49,8 +49,8 @@ The configuration of the Elasticsearch sink connector has the following properti
 
 | Name | Type|Required | Default | Description 
 |------|----------|----------|---------|-------------|
-| `elasticSearchUrl` | String| true |" " (empty string)| The URL of elastic search cluster to which the connector connects. |
-| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
+| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
+| `indexName` | String| false |" " (empty string)| The index name to which the connector writes messages. The default value is the topic name. It accepts date formats in the name to support event-time-based indexes with the pattern `%{+<date-format>}`. For example, if the event time of the record is 1645182000000L and `indexName` is `logs-%{+yyyy-MM-dd}`, the formatted index name is `logs-2022-02-18`. |
 | `schemaEnable` | Boolean | false | false | Turn on the Schema Aware mode. |
 | `createIndexIfNeeded` | Boolean | false | false | Manage index if missing. |
 | `maxRetries` | Integer | false | 1 | The maximum number of retries for elasticsearch requests. Use -1 to disable it.  |
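The `%{+<date-format>}` placeholder in `indexName` can be illustrated by formatting the record's event time with the embedded date pattern. A sketch in Python, hand-translating only the `yyyy`/`MM`/`dd` pattern letters (the connector performs this substitution internally; `format_index_name` is an illustrative helper, not connector code):

```python

import re
from datetime import datetime, timezone

def format_index_name(index_name, event_time_ms):
    """Replace a %{+...} placeholder with the event time formatted
    by the embedded pattern (only yyyy/MM/dd handled here)."""
    def repl(match):
        java_pattern = match.group(1)
        strftime_pattern = (java_pattern.replace("yyyy", "%Y")
                                        .replace("MM", "%m")
                                        .replace("dd", "%d"))
        ts = datetime.fromtimestamp(event_time_ms / 1000, tz=timezone.utc)
        return ts.strftime(strftime_pattern)
    return re.sub(r"%\{\+([^}]+)\}", repl, index_name)

# Matches the example in the table above.
print(format_index_name("logs-%{+yyyy-MM-dd}", 1645182000000))  # logs-2022-02-18

```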
diff --git a/site2/website-next/docs/io-file-source.md b/site2/website-next/docs/io-file-source.md
index 2046247..ee9414e 100644
--- a/site2/website-next/docs/io-file-source.md
+++ b/site2/website-next/docs/io-file-source.md
@@ -26,6 +26,7 @@ The configuration of the File source connector has the following properties.
 | `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. |
 | `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
 | `numWorkers` | Integer | false | 1 | The number of worker threads that process files.<br /><br /> This allows you to process a larger number of files concurrently. <br /><br />However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. |
+| `processedFileSuffix` | String | false | NULL | If set, processed files are renamed with this suffix instead of being deleted. <br /><br /> This configuration only works when the `keepFile` property is false. |
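The interaction between `keepFile` and `processedFileSuffix` can be sketched as follows, assuming the documented behavior (this is an illustration, not the connector's code): when `keepFile` is false, a processed file is renamed if a suffix is set and deleted otherwise.

```python

import os

def finish_file(path, keep_file=False, processed_file_suffix=None):
    """Mimic the documented post-processing: keep, rename, or delete."""
    if keep_file:
        return path                       # leave the file untouched
    if processed_file_suffix:             # rename instead of deleting
        renamed = path + processed_file_suffix
        os.rename(path, renamed)
        return renamed
    os.remove(path)                       # default: delete after processing
    return None

```

With `keep_file=True` the suffix is ignored and the file is left in place, matching the note that the suffix only applies when `keepFile` is false.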
 
 ### Example
 
@@ -48,7 +49,8 @@ Before using the File source connector, you need to create a configuration file
         "maximumSize": 5000000,
         "ignoreHiddenFiles": true,
         "pollingInterval": 5000,
-        "numWorkers": 1
+        "numWorkers": 1,
+        "processedFileSuffix": ".processed_done"
      }
   }
   
@@ -71,6 +73,7 @@ Before using the File source connector, you need to create a configuration file
       ignoreHiddenFiles: true
       pollingInterval: 5000
       numWorkers: 1
+      processedFileSuffix: ".processed_done"
   
   ```
 
diff --git a/site2/website-next/docs/io-mongo-sink.md b/site2/website-next/docs/io-mongo-sink.md
index b3fe1a2..623da49 100644
--- a/site2/website-next/docs/io-mongo-sink.md
+++ b/site2/website-next/docs/io-mongo-sink.md
@@ -46,13 +46,12 @@ Before using the Mongo sink connector, you need to create a configuration file t
 
   ```yaml
   
-  {
+  configs:
       mongoUri: "mongodb://localhost:27017"
       database: "pulsar"
       collection: "messages"
       batchSize: 2
       batchTimeMs: 500
-  }
   
   ```
 
diff --git a/site2/website-next/docs/reference-cli-tools.md b/site2/website-next/docs/reference-cli-tools.md
index 0c8aea1..32b23e9 100644
--- a/site2/website-next/docs/reference-cli-tools.md
+++ b/site2/website-next/docs/reference-cli-tools.md
@@ -208,7 +208,7 @@ Options
 |`-c` , `--cluster`|Cluster name||
 |`-cms` , `--configuration-metadata-store`|The configuration metadata store quorum connection string||
 |`--existing-bk-metadata-service-uri`|The metadata service URI of the existing BookKeeper cluster that you want to use||
-|`-h` , `--help`|Cluster name|false|
+|`-h` , `--help`|Help message|false|
 |`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16|
 |`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16|
 |`-uw` , `--web-service-url`|The web service URL for the new cluster||
@@ -233,16 +233,16 @@ Options
 
 |Flag|Description|Default|
 |---|---|---|
-|`--configuration-store`|Configuration store connection string||
-|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string||
+|`-cms`, `--configuration-metadata-store`|Configuration metadata store connection string||
+|`-md` , `--metadata-store`|Metadata store service URL||
 
 Example
 
 ```bash
 
 $ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk2 \
-  --configuration-store zk-0,zk-1,zk-2
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
@@ -562,7 +562,7 @@ Options
 |`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false|
 |`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload||
 |`-h`, `--help`|Help message|false|
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-m`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0|
 |`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0|
 |`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
@@ -626,7 +626,7 @@ Options
 |`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false|
 |`-d`, `--delay`|Mark messages with a given delay in seconds|0s|
 |`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-k`, `--encryption-key-name`|The public key name to encrypt payload||
 |`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload||
 |`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false|
@@ -686,7 +686,7 @@ Options
 |`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
 |`--auth-plugin`|Authentication plugin class name||
 |`--listener-name`|Listener name for the broker||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-h`, `--help`|Help message|false|
 |`-n`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0|
 |`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
@@ -720,7 +720,7 @@ Options
 |---|---|---|
 |`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
 |`--auth-plugin`|Authentication plugin class name||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-h`, `--help`|Help message|false|
 |`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0|
 |`-t`, `--num-topic`|The number of topics|1|
@@ -762,7 +762,7 @@ Options
 |`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0|
 |`--threads`|Number of threads writing|1|
 |`-w`, `--write-quorum`|Ledger write quorum|1|
-|`-zk`, `--zookeeperServers`|ZooKeeper connection string||
+|`-md`, `--metadata-store`|Metadata store service URL. For example: zk:my-zk:2181||
 
 
 ### `monitor-brokers`
@@ -839,8 +839,10 @@ $ pulsar-perf transaction options
 
 |Flag|Description|Default|
 |---|---|---|
+`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|N/A
+`--auth-plugin`|Authentication plugin class name.|N/A
 `-au`, `--admin-url`|Pulsar admin URL.|N/A
-`--conf-file`|Configuration file.|N/A
+`-cf`, `--conf-file`|Configuration file.|N/A
 `-h`, `--help`|Help messages.|N/A
 `-c`, `--max-connections`|Maximum number of TCP connections to a single broker.|100
 `-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers. |1
diff --git a/site2/website-next/docs/reference-configuration.md b/site2/website-next/docs/reference-configuration.md
index b106bf4..d469c11 100644
--- a/site2/website-next/docs/reference-configuration.md
+++ b/site2/website-next/docs/reference-configuration.md
@@ -141,9 +141,9 @@ Pulsar brokers are responsible for handling incoming messages from producers, di
 |exposePublisherStats|Whether to enable topic level metrics.|true|
 |statsUpdateFrequencyInSecs||60|
 |statsUpdateInitialDelayInSecs||60|
-|zookeeperServers|  Zookeeper quorum connection string  ||
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
-|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|metadataStoreUrl| Metadata store quorum connection string  ||
+|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds|300|
+|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) ||
 |brokerServicePort| Broker data port  |6650|
 |brokerServicePortTls|  Broker data port for TLS  |6651|
 |webServicePort|  Port to use to server HTTP request  |8080|
@@ -160,9 +160,11 @@ Pulsar brokers are responsible for handling incoming messages from producers, di
 |bindAddresses| Additional Hostname or IP addresses the service binds on: `listener_name:scheme://host:port,...`.  ||
 |advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.  ||
 |clusterName| Name of the cluster to which this broker belongs to ||
+|maxTenants|The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches the threshold, the broker rejects the request of creating a new tenant. The default value 0 disables the check. |0|
 |brokerDeduplicationEnabled|  Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis.  |false|
 |brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes.  |10000|
 |brokerDeduplicationEntriesInterval|  The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000|
+|brokerDeduplicationSnapshotIntervalSeconds| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |120|
 |brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
 |brokerDeduplicationSnapshotFrequencyInSeconds| How often is the thread pool scheduled to check whether a snapshot needs to be taken. The value of `0` means it is disabled. |120| 
 |dispatchThrottlingRateInMsg| Dispatch throttling-limit of messages for a broker (per second). 0 means the dispatch throttling-limit is disabled. |0|
@@ -175,7 +177,7 @@ Pulsar brokers are responsible for handling incoming messages from producers, di
 |dispatchThrottlingRatePerSubscriptionInByte|Dispatch throttling-limit of bytes for a subscription. 0 means the dispatch throttling-limit is disabled.|0|
 |dispatchThrottlingRatePerReplicatorInMsg| The default messages per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message dispatch-throttling| 0 |
 |dispatchThrottlingRatePerReplicatorInByte| The default bytes per second dispatch throttling-limit for every replicator in replication. The value of `0` means disabling replication message-byte dispatch-throttling| 0 | 
-|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000|
+|metadataStoreSessionTimeoutMillis| Metadata store session timeout in milliseconds |30000|
 |brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed  |60000|
 |skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false|
 |backlogQuotaCheckEnabled|  Enable backlog quota check. Enforces action on topic when the quota is reached  |true|
@@ -203,7 +205,10 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |statusFilePath|  Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
 |preferLaterVersions| If true, (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles)  |false|
 |maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or negative number to disable the check|0|
-|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` instead. |false|
+| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
 |tlsCertificateFilePath|  Path for the TLS certificate file ||
 |tlsKeyFilePath|  Path for the TLS private key file ||
 |tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. ||
@@ -222,6 +227,10 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |brokerClientTlsTrustStorePassword| TLS TrustStore password for internal client, used by the internal client to authenticate with Pulsar brokers ||
 |brokerClientTlsCiphers| Specify the tls cipher the internal client will use to negotiate during TLS Handshake. (a comma-separated list of ciphers) e.g.  [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]||
 |brokerClientTlsProtocols|Specify the tls protocols the broker will use to negotiate during TLS handshake. (a comma-separated list of protocol names). e.g.  `TLSv1.3`, `TLSv1.2` ||
+| metadataStoreBatchingEnabled | Enable metadata operations batching. | true |
+| metadataStoreBatchingMaxDelayMillis | Maximum delay (in milliseconds) to impose when grouping operations into a batch. | 5 |
+| metadataStoreBatchingMaxOperations | Maximum number of operations to include in a single batch. | 1000 |
+| metadataStoreBatchingMaxSizeKb | Maximum size (in KB) of a batch. | 128 |
 |ttlDurationDefaultInSeconds|The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. |0|
 |tokenSettingPrefix| Configure the prefix of the token-related settings, such as `tokenSecretKey`, `tokenPublicKey`, `tokenAuthClaim`, `tokenPublicAlg`, `tokenAudienceClaim`, and `tokenAudience`. ||
 |tokenSecretKey| Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`.  Note: key file must be DER-encoded.||
@@ -246,7 +255,11 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |athenzDomainNames| Supported Athenz provider domain names(comma separated) for authentication  ||
 |exposePreciseBacklogInPrometheus| Enable expose the precise backlog stats, set false to use published counter and consumed counter to calculate, this would be more efficient but may be inaccurate. |false|
 |schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory|
-|isSchemaValidationEnforced|Enforce schema validation on following cases: if a producer without a schema attempts to produce to a topic with schema, the producer will be failed to connect. PLEASE be carefully on using this, since non-java clients don't support schema. If this setting is enabled, then non-java clients fail to produce.|false|
+|isSchemaValidationEnforced| Whether to enforce schema validation. When schema validation is enabled, a producer without a schema that attempts to produce messages to a topic with a schema is rejected and disconnected.|false|
+|isAllowAutoUpdateSchemaEnabled|Whether to allow schemas to be auto-updated at the broker level.|true|
+|schemaCompatibilityStrategy| The schema compatibility strategy at the broker level. See [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|FULL|
+|systemTopicSchemaCompatibilityStrategy| The schema compatibility strategy used for system topics. See [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|ALWAYS_COMPATIBLE|
+| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
 |offloadersDirectory|The directory for all the offloader implementations.|./offloaders|
 |bookkeeperMetadataServiceUri| Metadata service uri that bookkeeper is used for loading corresponding metadata driver and resolving its metadata service location. This value can be fetched using `bookkeeper shell whatisinstanceid` command in BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers. The metadata service uri list can also be semicolon separated values like below: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers ||
 |bookkeeperClientAuthenticationPlugin|  Authentication plugin to use when connecting to bookies ||
@@ -284,6 +297,7 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |managedLedgerMaxEntriesPerLedger| The max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true: <ul><li>The max rollover time has been reached</li><li>The max entries have been written to the ledger</li><li>The max ledger size has been written to the ledger</li></ul>|50000|
 |managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollover for a topic  |10|
 |managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240|
+|managedLedgerInactiveLedgerRolloverTimeSeconds| Time (in seconds) to roll over the ledger of an inactive topic. When the value is set to 0, the rollover is disabled. |0|
 |managedLedgerCursorMaxEntriesPerLedger|  Max number of entries to append to a cursor ledger  |50000|
 |managedLedgerCursorRolloverTimeInSeconds|  Max time before triggering a rollover on a cursor ledger  |14400|
 |managedLedgerMaxUnackedRangesToPersist|  Max number of “acknowledgment holes” that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing in “ranges” of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redel [...]
@@ -291,7 +305,7 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |loadBalancerEnabled| Enable load balancer  |true|
 |loadBalancerPlacementStrategy| Strategy to assign a new bundle weightedRandomSelection ||
 |loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger load report update  |10|
-|loadBalancerReportUpdateMaxIntervalMinutes|  maximum interval to update load report  |15|
+|loadBalancerReportUpdateMaxIntervalMinutes|  Maximum interval to update load report  |15|
 |loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect  |1|
 |loadBalancerSheddingIntervalMinutes| Load shedding interval. Broker periodically checks whether some traffic should be offload from some over-loaded broker to other under-loaded brokers  |30|
 |loadBalancerSheddingGracePeriodMinutes|  Prevent the same topics to be shed and moved to other broker more than once within this timeframe |30|
@@ -306,12 +320,11 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 |loadBalancerNamespaceBundleMaxMsgRate| maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered  |1000|
 |loadBalancerNamespaceBundleMaxBandwidthMbytes| maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered  |100|
 |loadBalancerNamespaceMaximumBundles| maximum number of bundles in a namespace  |128|
+|loadBalancerLoadSheddingStrategy | The load shedding strategy used by the load balancer. <br /><br />Available values: <li>`org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`</li><li>`org.apache.pulsar.broker.loadbalance.impl.OverloadShedder`</li><li>`org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder`</li><br />For a comparison of the shedding strategies, see [here](administration-load-balance.md#shed-load-automatically).|`org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`|
 |replicationMetricsEnabled| Enable replication metrics  |true|
 |replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster More connections host-to-host lead to better throughput over high-latency links.  |16|
 |replicationProducerQueueSize|  Replicator producer queue size  |1000|
 |replicatorPrefix|  Replicator prefix used for replicator producer name and cursor name pulsar.repl||
-|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false|
-|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds`.|60|
 |transactionCoordinatorEnabled|Whether to enable transaction coordinator in broker.|true|
 |transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider|
 |defaultRetentionTimeInMinutes| Default message retention time  |0|
@@ -353,6 +366,7 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 | preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false |
 | lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is now when recovered ledger is ready to write we're not sure if all old consumers' last mark delete position(ack position) can be recovered or not. So user can make the trade off or have custom logic in application to checkpoint consumer state.| false |  
 |haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
+| maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces in a tenant reaches this threshold, the broker rejects requests to create a new namespace in that tenant. The default value 0 disables the check. |0|
 | maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. | 0 |
 |subscriptionTypesEnabled| Enable all subscription types, which are exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared |
 | managedLedgerInfoCompressionType | Compression type of managed ledger information. <br /><br />Available options are `NONE`, `LZ4`, `ZLIB`, `ZSTD`, and `SNAPPY`). <br /><br />If this value is `NONE` or invalid, the `managedLedgerInfo` is not compressed. <br /><br />**Note** that after enabling this configuration, if you want to degrade a broker, you need to change the value to `NONE` and make sure all ledger metadata is saved without compression. | None |
@@ -361,6 +375,23 @@ brokerServiceCompactionThresholdInBytes|If the estimated backlog size is greater
 | brokerEntryMetadataInterceptors | Set broker entry metadata interceptors.<br /><br />Multiple interceptors should be separated by commas. <br /><br />Available values:<li>org.apache.pulsar.common.intercept.AppendBrokerTimestampMetadataInterceptor</li><li>org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor</li> <br /><br />Example<br />brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendBrokerTimestampMetadataInterceptor, org.apache.pulsar.common.inter [...]
 | enableExposingBrokerEntryMetadataToClient|Whether to expose broker entry metadata to client or not.<br /><br />Available values:<li>true</li><li>false</li><br />Example<br />enableExposingBrokerEntryMetadataToClient=true  | false |
 
+
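+The metadata store batching parameters listed above work together. As an illustrative sketch (the values shown are the documented defaults, not a tuning recommendation), in `conf/broker.conf`:
+
+```properties
+# Group metadata operations into batches to reduce round trips to the metadata store.
+metadataStoreBatchingEnabled=true
+# Wait at most 5 ms while accumulating operations into a batch.
+metadataStoreBatchingMaxDelayMillis=5
+# Flush a batch once it holds 1000 operations or reaches 128 KB, whichever comes first.
+metadataStoreBatchingMaxOperations=1000
+metadataStoreBatchingMaxSizeKb=128
+```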
+#### Deprecated parameters of Broker
+The following parameters have been deprecated in the `conf/broker.conf` file.
+
+|Name|Description|Default|
+|---|---|---|
+|backlogQuotaDefaultLimitGB|  Use `backlogQuotaDefaultLimitBytes` instead. |-1|
+|brokerServicePurgeInactiveFrequencyInSeconds| Use `brokerDeleteInactiveTopicsFrequencySeconds` instead.|60|
+|tlsEnabled|  Use `webServicePortTls` and `brokerServicePortTls` instead. |false|
+|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages. Use `brokerClientTlsEnabled` instead. |false|
+|subscriptionKeySharedEnable|  Whether to enable the Key_Shared subscription. Use `subscriptionTypesEnabled` instead. |true|
+|zookeeperServers|  ZooKeeper quorum connection string. Use `metadataStoreUrl` instead.  |N/A|
+|configurationStoreServers| Configuration store connection string (as a comma-separated list). Use `configurationMetadataStoreUrl` instead. |N/A|
+|zooKeeperSessionTimeoutMillis| ZooKeeper session timeout in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000|
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300|
+
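+The deprecated ZooKeeper settings map one-to-one onto the new metadata store settings. A hedged before/after sketch for `conf/broker.conf` (the hostnames are placeholders):
+
+```properties
+# Before (deprecated)
+zookeeperServers=zk1:2181,zk2:2181,zk3:2181
+configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
+zooKeeperSessionTimeoutMillis=30000
+zooKeeperCacheExpirySeconds=300
+
+# After
+metadataStoreUrl=zk1:2181,zk2:2181,zk3:2181
+configurationMetadataStoreUrl=zk1:2181,zk2:2181,zk3:2181
+metadataStoreSessionTimeoutMillis=30000
+metadataStoreCacheExpirySeconds=300
+```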
+
 ## Client
 
 You can use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library.
@@ -434,9 +465,9 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |Name|Description|Default|
 |---|---|---|
 |authenticateOriginalAuthData|  If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). |false|
-|zookeeperServers|  The quorum connection string for local ZooKeeper  ||
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
-|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|metadataStoreUrl|  The quorum connection string for local metadata store  ||
+|metadataStoreCacheExpirySeconds| Metadata store cache expiry time in seconds|300|
+|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) ||
 |brokerServicePort| The port on which the standalone broker listens for connections |6650|
 |webServicePort|  The port used by the standalone broker for HTTP requests  |8080|
 |bindAddress| The hostname or IP address on which the standalone service binds  |0.0.0.0|
@@ -448,8 +479,8 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |isRunningStandalone|This flag controls features that are meant to be used when running in standalone mode.|N/A|
 |clusterName| The name of the cluster that this broker belongs to. |standalone|
 | failureDomainsEnabled | Enable cluster's failure-domain which can distribute brokers into logical region. | false |
-|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000|
-|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30|
+|metadataStoreSessionTimeoutMillis| Metadata store session timeout, in milliseconds. |30000|
+|metadataStoreOperationTimeoutSeconds|Metadata store operation timeout in seconds.|30|
 |brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000|
 |skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out of memory error. |false|
 |backlogQuotaCheckEnabled|  Enable the backlog quota check, which enforces a specified action when the quota is reached.  |true|
@@ -463,7 +494,6 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed.  |1000|
 | subscriptionExpirationTimeMinutes | How long to delete inactive subscriptions from last consumption. When it is set to 0, inactive subscriptions are not deleted automatically | 0 |
 | subscriptionRedeliveryTrackerEnabled | Enable subscription message redelivery tracker to send redelivery count to consumer. | true |
-|subscriptionKeySharedEnable|Whether to enable the Key_Shared subscription.|true|
 | subscriptionKeySharedUseConsistentHashing | In Key_Shared subscription type, with default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false |
 | subscriptionKeySharedConsistentHashingReplicaPoints | In Key_Shared subscription type, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 |
 | subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscription |5 |
@@ -480,8 +510,6 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 | maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit reaches, the broker stops dispatching messages to all shared subscriptions which has a higher number of unacknowledged messages until subscriptions start acknowledging messages back and unacknowledged messages count reaches to limit/2. When the value is set to 0, unacknowledged message limit check is disabled and broker does not block dispatchers. | 0 |
 | maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches maxUnackedMessagesPerBroker limit, it blocks subscriptions which have higher unacknowledged messages than this percentage limit and subscription does not receive any new messages until that subscription acknowledges messages back. | 0.16 |
 | unblockStuckSubscriptionEnabled|Broker periodically checks if subscription is stuck and unblock if flag is enabled.|false|
-|maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned topic. Use 0 or negative number to disable the check|0|
-|zookeeperSessionExpiredPolicy|There are two policies when ZooKeeper session expired happens, "shutdown" and "reconnect". If it is set to "shutdown" policy, when ZooKeeper session expired happens, the broker is shutdown. If it is set to "reconnect" policy, the broker tries to reconnect to ZooKeeper server and re-register metadata to ZooKeeper. Note: the "reconnect" policy is an experiment feature.|shutdown|
 | topicPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent check. (Disable publish throttling with value 0) | 10|
 | brokerPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks broker publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent check. When the value is set to 0, publish throttling is disabled. |50 |
 | brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages allowed to publish for a broker if the message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0|
@@ -511,10 +539,15 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 | numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topic. | 8 |
 | enablePersistentTopics | Enable broker to load persistent topics. | true |
 | enableNonPersistentTopics | Enable broker to load non-persistent topics. | true |
-| maxProducersPerTopic | Maximum number of producers allowed to connect to topic. Once this limit reaches, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, maxProducersPerTopic-limit check is disabled. | 0 |
-| maxConsumersPerTopic | Maximum number of consumers allowed to connect to topic. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, maxConsumersPerTopic-limit check is disabled. | 0 |
-| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to subscription. Once this limit reaches, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, maxConsumersPerSubscription-limit check is disabled. | 0 |
+| maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
+| maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
 | maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 |
+| metadataStoreBatchingEnabled | Enable metadata operations batching. | true |
+| metadataStoreBatchingMaxDelayMillis | Maximum delay (in milliseconds) to impose when grouping operations into a batch. | 5 |
+| metadataStoreBatchingMaxOperations | Maximum number of operations to include in a single batch. | 1000 |
+| metadataStoreBatchingMaxSizeKb | Maximum size (in KB) of a batch. | 128 |
 | tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, check the TLS certificate on every new connection. | 300 |
 | tlsCertificateFilePath | Path for the TLS certificate file. | |
 | tlsKeyFilePath | Path for the TLS private key file. | |
@@ -540,8 +573,8 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 | brokerClientTlsProtocols | Specify the TLS protocols that the broker uses to negotiate during TLS handshake. | |
 | systemTopicEnabled | Enable/Disable system topics. | false |
 | topicLevelPoliciesEnabled | Enable or disable topic level policies. Topic level policies depends on the system topic. Please enable the system topic first. | false |
+| topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, the fenced topic is not closed. | 0 |
 | proxyRoles | Role names that are treated as "proxy roles". If the broker sees a request with role as proxyRoles, it demands to see a valid original principal. | |
-| authenticateOriginalAuthData | If this flag is set, the broker authenticates the original Auth data. Otherwise, it just accepts the originalPrincipal and authorizes it (if required). | false |
 |authenticationEnabled| Enable authentication for the broker. |false|
 |authenticationProviders| A comma-separated list of class names for authentication providers. |false|
 |authorizationEnabled|  Enforce authorization in brokers. |false|
@@ -642,7 +675,7 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |loadBalancerAutoBundleSplitEnabled|    |false|
 | loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true |
 |loadBalancerNamespaceBundleMaxTopics|    |1000|
-|loadBalancerNamespaceBundleMaxSessions|    |1000|
+|loadBalancerNamespaceBundleMaxSessions|  Maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered. <br />To disable the threshold check, set the value to -1.  |1000|
 |loadBalancerNamespaceBundleMaxMsgRate|   |1000|
 |loadBalancerNamespaceBundleMaxBandwidthMbytes|   |100|
 |loadBalancerNamespaceMaximumBundles|   |128|
@@ -667,14 +700,29 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
 |bookieId | If you want to custom a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`. <br /><br />Bookie advertises itself using the `bookieId` rather than the `BookieSocketAddress` (`hostname:port` or `IP:port`).<br /><br /> The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z9-0]), colons, dashes, and dots. <br /><br />For more information about `bookieId`, see [here](http://bookkeeper.apache.org/bps/BP-41-bookieid/).|/|
 | maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. | 0 |
+| metadataStoreConfigPath | The configuration file path of the local metadata store. Standalone Pulsar uses [RocksDB](http://rocksdb.org/) as the local metadata store. The format is `/xxx/xx/rocksdb.ini`. |N/A|
+|schemaRegistryStorageClassName|The schema storage implementation used by this broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory|
+|isSchemaValidationEnforced| Whether to enforce schema validation. When schema validation is enabled, a producer without a schema that attempts to produce messages to a topic with a schema is rejected and disconnected.|false|
+|isAllowAutoUpdateSchemaEnabled|Whether to allow schemas to be auto-updated at the broker level.|true|
+|schemaCompatibilityStrategy| The schema compatibility strategy at the broker level. See [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|FULL|
+|systemTopicSchemaCompatibilityStrategy| The schema compatibility strategy used for system topics. See [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy) for available values.|ALWAYS_COMPATIBLE|
+
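+The schema-related settings above can be combined in `conf/standalone.conf` (or `broker.conf`). A minimal sketch — note that `isSchemaValidationEnforced=true` deviates from the default `false` purely for illustration:
+
+```properties
+# Reject and disconnect producers that connect without a schema
+# to a topic that has a schema (default is false).
+isSchemaValidationEnforced=true
+# Allow schemas to be auto-updated at the broker level (default).
+isAllowAutoUpdateSchemaEnabled=true
+# Compatibility strategies for regular topics and for system topics (defaults).
+schemaCompatibilityStrategy=FULL
+systemTopicSchemaCompatibilityStrategy=ALWAYS_COMPATIBLE
+```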
+#### Deprecated parameters of standalone Pulsar
+The following parameters have been deprecated in the `conf/standalone.conf` file.
+
+|Name|Description|Default|
+|---|---|---|
+|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds. Use `metadataStoreOperationTimeoutSeconds` instead. |30|
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead. |300|
+|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000|
 
 ## WebSocket
 
 |Name|Description|Default|
 |---|---|---|
-|configurationStoreServers    |||
-|zooKeeperSessionTimeoutMillis|   |30000|
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
+|configurationMetadataStoreUrl    |||
+|metadataStoreSessionTimeoutMillis|Metadata store session timeout in milliseconds.  |30000|
+|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds.|300|
 |serviceUrl|||
 |serviceUrlTls|||
 |brokerServiceUrl|||
@@ -695,6 +743,14 @@ You can set the log level and configuration in the  [log4j2.yaml](https://github
 |tlsKeyFilePath |||
 |tlsTrustCertsFilePath|||
 
+#### Deprecated parameters of WebSocket
+The following parameters have been deprecated in the `conf/websocket.conf` file.
+
+|Name|Description|Default|
+|---|---|---|
+|zooKeeperSessionTimeoutMillis|The ZooKeeper session timeout in milliseconds. Use `metadataStoreSessionTimeoutMillis` instead. |30000|
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300|
+
 ## Pulsar proxy
 
 The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file.
@@ -703,16 +759,16 @@ The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be config
 |Name|Description|Default|
 |---|---|---|
 |forwardAuthorizationCredentials| Forward client authorization credentials to Broker for re-authorization, and make sure authentication is enabled for this to take effect. |false|
-|zookeeperServers|  The ZooKeeper quorum connection string (as a comma-separated list)  ||
-|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|metadataStoreUrl| Metadata store quorum connection string (as a comma-separated list)  ||
+|configurationMetadataStoreUrl| Configuration store connection string (as a comma-separated list) ||
 | brokerServiceURL | The service URL pointing to the broker cluster. | |
 | brokerServiceURLTLS | The TLS service URL pointing to the broker cluster | |
 | brokerWebServiceURL | The Web service URL pointing to the broker cluster | |
 | brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker cluster | |
 | functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | |
 | functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you setup function workers in a separate cluster. | |
-|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
-|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
+|metadataStoreSessionTimeoutMillis| Metadata store session timeout (in milliseconds) |30000|
+|metadataStoreCacheExpirySeconds|Metadata store cache expiry time in seconds|300|
 |advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostname()` is used.|N/A|
 |servicePort| The port to use for server binary Protobuf requests |6650|
 |servicePortTls|  The port to use to server binary Protobuf TLS requests  |6651|
@@ -730,7 +786,6 @@ The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be config
 |superUserRoles|  Role names that are treated as “super-users,” meaning that they will be able to perform all admin ||
 |maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy will reject requests beyond that. |10000|
 |maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy will error out requests beyond that. |50000|
-|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` instead. |false|
 |tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar brokers. |false|
 | tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. If the value is set 0, check TLS certificate every new connection. | 300 |
 |tlsCertificateFilePath|  Path for the TLS certificate file ||
@@ -751,6 +806,15 @@ The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be config
 | tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token need contains this parameter.| |
 |haProxyProtocolEnabled | Enable or disable the [HAProxy](http://www.haproxy.org/) protocol. |false|
 
+#### Deprecated parameters of Pulsar proxy
+The following parameters have been deprecated in the `conf/proxy.conf` file.
+
+|Name|Description|Default|
+|---|---|---|
+|tlsEnabledInProxy| Use `servicePortTls` and `webServicePortTls` instead. |false|
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds). Use `metadataStoreSessionTimeoutMillis` instead. |30000|
+|zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds. Use `metadataStoreCacheExpirySeconds` instead.|300|
+
 ## ZooKeeper
 
 ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available:
@@ -784,4 +848,4 @@ server.3=zk3.us-west.example.com:2888:3888
 
 ```
 
-> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration
+> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration
\ No newline at end of file
diff --git a/site2/website-next/docs/reference-metrics.md b/site2/website-next/docs/reference-metrics.md
index 7ede92d..d62da6e 100644
--- a/site2/website-next/docs/reference-metrics.md
+++ b/site2/website-next/docs/reference-metrics.md
@@ -120,7 +120,7 @@ All the BookKeeper client metric are labelled with the following label:
 
 | Name | Type | Description |
 |---|---|---|
-| bookkeeper_server_BOOKIE_QUARANTINE_count | Counter | The number of bookie clients to be quarantined. |
+| pulsar_managedLedger_client_bookkeeper_client_BOOKIE_QUARANTINE | Counter | The number of bookie clients to be quarantined.<br /><br />If you want to expose this metric, set `bookkeeperClientExposeStatsToPrometheus` to `true` in the `broker.conf` file.|
 
 ### Namespace metrics
 
@@ -186,6 +186,7 @@ All the topic metrics are labelled with the following labels:
 | pulsar_consumers_count | Gauge | The number of active consumers of the topic connected to this broker. |
 | pulsar_rate_in | Gauge | The total message rate of the topic coming into this broker (messages/second). |
 | pulsar_rate_out | Gauge | The total message rate of the topic going out from this broker (messages/second). |
+| pulsar_publish_rate_limit_times | Gauge | The number of times the publish rate limit is triggered. |
 | pulsar_throughput_in | Gauge | The total throughput of the topic coming into this broker (bytes/second). |
 | pulsar_throughput_out | Gauge | The total throughput of the topic going out from this broker (bytes/second). |
 | pulsar_storage_size | Gauge | The total storage size of the topics in this topic owned by this broker (bytes). |
@@ -269,14 +270,16 @@ All the managedLedger metrics are labelled with the following labels:
 | pulsar_ml_AddEntryBytesRate | Gauge | The bytes/s rate of messages added |
 | pulsar_ml_AddEntryWithReplicasBytesRate | Gauge | The bytes/s rate of messages added with replicas |
 | pulsar_ml_AddEntryErrors | Gauge | The number of addEntry requests that failed |
-| pulsar_ml_AddEntryLatencyBuckets | Histogram | The add entry latency of a ledger with a given quantile (threshold).<br /> Available quantile: <br /><ul><li> quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]</li> <li>quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]</li><li>quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]</li><li>quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]</li><li>quantile="10.0_20.0" is AddEntryLatency between (10ms, 20ms]</ [...]
-| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The add entry latency > 1s |
+| pulsar_ml_AddEntryLatencyBuckets | Histogram | The latency of adding a ledger entry with a given quantile (threshold), including time spent on waiting in queue on the broker side.<br /> Available quantile: <br /><ul><li> quantile="0.0_0.5" is AddEntryLatency between (0.0ms, 0.5ms]</li> <li>quantile="0.5_1.0" is AddEntryLatency between (0.5ms, 1.0ms]</li><li>quantile="1.0_5.0" is AddEntryLatency between (1ms, 5ms]</li><li>quantile="5.0_10.0" is AddEntryLatency between (5ms, 10ms]</li><l [...]
+| pulsar_ml_AddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the AddEntryLatency is longer than 1 second |
 | pulsar_ml_AddEntryMessagesRate | Gauge | The msg/s rate of messages added |
 | pulsar_ml_AddEntrySucceed | Gauge | The number of addEntry requests that succeeded |
-| pulsar_ml_EntrySizeBuckets | Histogram | The add entry size of a ledger with given quantile.<br /> Available quantile: <br /><ul><li>quantile="0.0_128.0" is EntrySize between (0byte, 128byte]</li><li>quantile="128.0_512.0" is EntrySize between (128byte, 512byte]</li><li>quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]</li><li>quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]</li><li>quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]</li><li>quantile="4096.0_1638 [...]
-| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge  | The add entry size > 1MB |
-| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with given quantile. <br /> Available quantile: <br /><ul><li>quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]</li><li>quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]</li><li>quantile="1.0_5.0" is EntrySize between (1ms, 5ms]</li><li>quantile="5.0_10.0" is EntrySize between (5ms, 10ms]</li><li>quantile="10.0_20.0" is EntrySize between (10ms, 20ms]</li><li>quantile="20.0_50.0" is EntrySize between (20m [...]
-| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The ledger switch latency > 1s |
+| pulsar_ml_EntrySizeBuckets | Histogram | The added entry size of a ledger with a given quantile.<br /> Available quantile: <br /><ul><li>quantile="0.0_128.0" is EntrySize between (0byte, 128byte]</li><li>quantile="128.0_512.0" is EntrySize between (128byte, 512byte]</li><li>quantile="512.0_1024.0" is EntrySize between (512byte, 1KB]</li><li>quantile="1024.0_2048.0" is EntrySize between (1KB, 2KB]</li><li>quantile="2048.0_4096.0" is EntrySize between (2KB, 4KB]</li><li>quantile="4096.0_ [...]
+| pulsar_ml_EntrySizeBuckets_OVERFLOW |Gauge  | The number of times the EntrySize is larger than 1MB |
+| pulsar_ml_LedgerSwitchLatencyBuckets | Histogram | The ledger switch latency with a given quantile. <br /> Available quantile: <br /><ul><li>quantile="0.0_0.5" is EntrySize between (0ms, 0.5ms]</li><li>quantile="0.5_1.0" is EntrySize between (0.5ms, 1ms]</li><li>quantile="1.0_5.0" is EntrySize between (1ms, 5ms]</li><li>quantile="5.0_10.0" is EntrySize between (5ms, 10ms]</li><li>quantile="10.0_20.0" is EntrySize between (10ms, 20ms]</li><li>quantile="20.0_50.0" is EntrySize between (2 [...]
+| pulsar_ml_LedgerSwitchLatencyBuckets_OVERFLOW | Gauge | The number of times the ledger switch latency is longer than 1 second |
+| pulsar_ml_LedgerAddEntryLatencyBuckets | Histogram | The latency for bookie client to persist a ledger entry from broker to BookKeeper service with a given quantile (threshold). <br /> Available quantile: <br /><ul><li> quantile="0.0_0.5" is LedgerAddEntryLatency between (0.0ms, 0.5ms]</li> <li>quantile="0.5_1.0" is LedgerAddEntryLatency between (0.5ms, 1.0ms]</li><li>quantile="1.0_5.0" is LedgerAddEntryLatency between (1ms, 5ms]</li><li>quantile="5.0_10.0" is LedgerAddEntryLatency bet [...]
+| pulsar_ml_LedgerAddEntryLatencyBuckets_OVERFLOW | Gauge | The number of times the LedgerAddEntryLatency is longer than 1 second |
 | pulsar_ml_MarkDeleteRate | Gauge | The rate of mark-delete ops/s |
 | pulsar_ml_NumberOfMessagesInBacklog | Gauge | The number of backlog messages for all the consumers |
 | pulsar_ml_ReadEntriesBytesRate | Gauge | The bytes/s rate of messages read |
diff --git a/site2/website-next/docs/schema-evolution-compatibility.md b/site2/website-next/docs/schema-evolution-compatibility.md
index c886b1f..faddc45 100644
--- a/site2/website-next/docs/schema-evolution-compatibility.md
+++ b/site2/website-next/docs/schema-evolution-compatibility.md
@@ -155,7 +155,7 @@ In some data formats, for example, Avro, you can define fields with default valu
 
 :::tip
 
-You can set schema compatibility check strategy at namespace or broker level. For how to set the strategy, see [here](schema-manage.md/#set-schema-compatibility-check-strategy).
+You can set schema compatibility check strategy at the topic, namespace or broker level. For how to set the strategy, see [here](schema-manage.md/#set-schema-compatibility-check-strategy).
 
 :::
 
diff --git a/site2/website-next/docs/schema-manage.md b/site2/website-next/docs/schema-manage.md
index a64f20a..ad9f500 100644
--- a/site2/website-next/docs/schema-manage.md
+++ b/site2/website-next/docs/schema-manage.md
@@ -639,26 +639,172 @@ To use your custom schema storage implementation, perform the following steps.
 
 ## Set schema compatibility check strategy 
 
-You can set [schema compatibility check strategy](schema-evolution-compatibility.md#schema-compatibility-check-strategy) at namespace or broker level. 
+You can set [schema compatibility check strategy](schema-evolution-compatibility.md#schema-compatibility-check-strategy) at the topic, namespace or broker level. 
 
-- If you set schema compatibility check strategy at both namespace or broker level, it uses the strategy set for the namespace level.
+Strategies set at different levels take precedence in this order: topic level > namespace level > broker level. 
 
-- If you do not set schema compatibility check strategy at both namespace or broker level, it uses the `FULL` strategy.
+- If you set the strategy at both topic and namespace level, it uses the topic-level strategy. 
 
-- If you set schema compatibility check strategy at broker level rather than namespace level, it uses the strategy set for the broker level.
+- If you set the strategy at both namespace and broker level, it uses the namespace-level strategy.
 
-- If you set schema compatibility check strategy at namespace level rather than broker level, it uses the strategy set for the namespace level.
+- If you do not set the strategy at any level, it uses the `FULL` strategy. For all available values, see [here](schema-evolution-compatibility.md#schema-compatibility-check-strategy).
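The precedence rules above can be sketched as a small resolver. This is a hypothetical helper, not the broker's actual code; strategy names are illustrative strings rather than the real `SchemaCompatibilityStrategy` enum:

```java
import java.util.Optional;

public class StrategyResolver {
    // Hypothetical resolver illustrating the precedence rules:
    // topic level > namespace level > broker level, falling back to FULL.
    static String resolve(Optional<String> topicLevel,
                          Optional<String> namespaceLevel,
                          Optional<String> brokerLevel) {
        return topicLevel
                .or(() -> namespaceLevel)
                .or(() -> brokerLevel)
                .orElse("FULL");
    }

    public static void main(String[] args) {
        // The topic-level setting wins over the namespace-level one.
        System.out.println(resolve(Optional.of("ALWAYS_INCOMPATIBLE"),
                Optional.of("BACKWARD"), Optional.empty()));
        // No strategy set at any level: the default FULL applies.
        System.out.println(resolve(Optional.empty(), Optional.empty(), Optional.empty()));
    }
}
```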
 
-### Namespace 
+
+### Topic level
+
+To set a schema compatibility check strategy at the topic level, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Admin CLI"
+  values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java Admin API","value":"Java Admin API"}]}>
+
+<TabItem value="Admin CLI">
+
+Use the [`pulsar-admin topicsPolicies set-schema-compatibility-strategy`](https://pulsar.apache.org/tools/pulsar-admin/) command. 
+
+```shell
+
+pulsar-admin topicsPolicies set-schema-compatibility-strategy <strategy> <topicName>
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@}
+
+</TabItem>
+<TabItem value="Java Admin API">
+
+```java
+
+void setSchemaCompatibilityStrategy(String topic, SchemaCompatibilityStrategy strategy)
+
+```
+
+Here is an example of setting a schema compatibility check strategy at the topic level.
+
+```java
+
+PulsarAdmin admin = …;
+
+admin.topicPolicies().setSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", SchemaCompatibilityStrategy.ALWAYS_INCOMPATIBLE);
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+<br />
+To get the topic-level schema compatibility check strategy, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Admin CLI"
+  values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java Admin API","value":"Java Admin API"}]}>
+
+<TabItem value="Admin CLI">
+
+Use the [`pulsar-admin topicsPolicies get-schema-compatibility-strategy`](https://pulsar.apache.org/tools/pulsar-admin/) command. 
+
+```shell
+
+pulsar-admin topicsPolicies get-schema-compatibility-strategy <topicName>
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+Send a `GET` request to this endpoint: {@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@}
+
+</TabItem>
+<TabItem value="Java Admin API">
+
+```java
+
+SchemaCompatibilityStrategy getSchemaCompatibilityStrategy(String topic, boolean applied)
+
+```
+
+Here is an example of getting the topic-level schema compatibility check strategy.
+
+```java
+
+PulsarAdmin admin = …;
+
+// get the current applied schema compatibility strategy
+admin.topicPolicies().getSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", true);
+
+// only get the schema compatibility strategy from topic policies
+admin.topicPolicies().getSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic", false);
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+<br />
+To remove the topic-level schema compatibility check strategy, use one of the following methods.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Admin CLI"
+  values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java Admin API","value":"Java Admin API"}]}>
+
+<TabItem value="Admin CLI">
+
+Use the [`pulsar-admin topicsPolicies remove-schema-compatibility-strategy`](https://pulsar.apache.org/tools/pulsar-admin/) command. 
+
+```shell
+
+pulsar-admin topicsPolicies remove-schema-compatibility-strategy <topicName>
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+Send a `DELETE` request to this endpoint: {@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@}
+
+</TabItem>
+<TabItem value="Java Admin API">
+
+```java
+
+void removeSchemaCompatibilityStrategy(String topic)
+
+```
+
+Here is an example of removing the topic-level schema compatibility check strategy.
+
+```java
+
+PulsarAdmin admin = …;
+
+admin.topicPolicies().removeSchemaCompatibilityStrategy("my-tenant/my-ns/my-topic");
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+
+### Namespace level
 
 You can set schema compatibility check strategy at namespace level using one of the following methods.
 
 ````mdx-code-block
 <Tabs 
-  defaultValue="pulsar-admin"
-  values={[{"label":"pulsar-admin","value":"pulsar-admin"},{"label":"REST API","value":"REST API"},{"label":"Java","value":"Java"}]}>
+  defaultValue="Admin CLI"
+  values={[{"label":"Admin CLI","value":"Admin CLI"},{"label":"REST API","value":"REST API"},{"label":"Java Admin API","value":"Java Admin API"}]}>
 
-<TabItem value="pulsar-admin">
+<TabItem value="Admin CLI">
 
 Use the [`pulsar-admin namespaces set-schema-compatibility-strategy`](https://pulsar.apache.org/tools/pulsar-admin/) command. 
 
@@ -674,7 +820,7 @@ pulsar-admin namespaces set-schema-compatibility-strategy options
 Send a `PUT` request to this endpoint: {@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/schemaCompatibilityStrategy?version=@pulsar:version_number@}
 
 </TabItem>
-<TabItem value="Java">
+<TabItem value="Java Admin API">
 
 Use the [`setSchemaCompatibilityStrategy`](https://pulsar.apache.org/api/admin/) method.
 
@@ -689,7 +835,7 @@ admin.namespaces().setSchemaCompatibilityStrategy("test", SchemaCompatibilityStr
 </Tabs>
 ````
 
-### Broker 
+### Broker level
 
 You can set schema compatibility check strategy at broker level by setting `schemaCompatibilityStrategy` in [`broker.conf`](https://github.com/apache/pulsar/blob/f24b4890c278f72a67fe30e7bf22dc36d71aac6a/conf/broker.conf#L1240) or [`standalone.conf`](https://github.com/apache/pulsar/blob/master/conf/standalone.conf) file.
 
diff --git a/site2/website-next/docs/security-tls-keystore.md b/site2/website-next/docs/security-tls-keystore.md
index 7b0f772..e7b4152 100644
--- a/site2/website-next/docs/security-tls-keystore.md
+++ b/site2/website-next/docs/security-tls-keystore.md
@@ -179,10 +179,10 @@ Optional settings that may be worth considering:
    By default, it is not set.
 ### Configuring Clients
 
-This is similar to [TLS encryption configuing for client with PEM type](security-tls-transport.md#Client configuration).
-For a a minimal configuration, user need to provide the TrustStore information.
+This is similar to [TLS encryption configuration for clients with the PEM type](security-tls-transport.md#client-configuration).
+For a minimal configuration, you need to provide the TrustStore information.
 
-e.g. 
+For example:
 1. for [Command-line tools](reference-cli-tools) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
 
    ```properties
@@ -215,16 +215,18 @@ e.g.
 
 1. for java admin client
 
-```java
-
-    PulsarAdmin amdin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
-                .useKeyStoreTls(true)
-                .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
-                .tlsTrustStorePassword("clientpw")
-                .allowTlsInsecureConnection(false)
-                .build();
+   ```java
+   
+       PulsarAdmin admin = PulsarAdmin.builder().serviceHttpUrl("https://broker.example.com:8443")
+           .useKeyStoreTls(true)
+           .tlsTrustStorePath("/var/private/tls/client.truststore.jks")
+           .tlsTrustStorePassword("clientpw")
+           .allowTlsInsecureConnection(false)
+           .build();
+   
+   ```
 
-```
+> **Note:** Please configure `tlsTrustStorePath` when you set `useKeyStoreTls` to `true`.
 
 ## TLS authentication with KeyStore configure
 
@@ -275,7 +277,7 @@ webSocketServiceEnabled=false
 
 Besides the TLS encryption configuration, the main work is configuring the client's KeyStore, which contains a valid CN as the client role.
 
-e.g. 
+For example:
 1. for [Command-line tools](reference-cli-tools) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
 
    ```properties
@@ -327,6 +329,8 @@ e.g.
    
    ```
 
+> **Note:** Please configure `tlsTrustStorePath` when you set `useKeyStoreTls` to `true`.
+
 ## Enabling TLS Logging
 
 You can enable TLS debug logging at the JVM level by starting the brokers and/or clients with `javax.net.debug` system property. For example:
diff --git a/site2/website-next/docs/security-versioning-policy.md b/site2/website-next/docs/security-versioning-policy.md
new file mode 100644
index 0000000..88a14e0
--- /dev/null
+++ b/site2/website-next/docs/security-versioning-policy.md
@@ -0,0 +1,67 @@
+---
+id: security-policy-and-supported-versions
+title: Security Policy and Supported Versions
+sidebar_label: "Security Policy and Supported Versions"
+---
+
+## Reporting a Vulnerability
+
+The current process for reporting vulnerabilities is outlined here: https://www.apache.org/security/. When reporting a
+vulnerability to security@apache.org, you can copy your email to [private@pulsar.apache.org](mailto:private@pulsar.apache.org)
+to send your report to the Apache Pulsar Project Management Committee. This is a private mailing list.
+
+## Using Pulsar's Security Features
+
+You can find documentation on Pulsar's available security features and how to use them here:
+https://pulsar.apache.org/docs/en/security-overview/.
+
+## Security Vulnerability Announcements
+
+The Pulsar community will announce security vulnerabilities and how to mitigate them on the [users@pulsar.apache.org](mailto:users@pulsar.apache.org) mailing list.
+For instructions on how to subscribe, please see https://pulsar.apache.org/contact/.
+
+## Versioning Policy
+
+The Pulsar project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). Existing releases can expect
+patches for bugs and security vulnerabilities. New features will target minor releases.
+
+When upgrading an existing cluster, it is important to upgrade components linearly through each minor version. For
+example, when upgrading from 2.8.x to 2.10.x, it is important to upgrade to 2.9.x before going to 2.10.x.
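As a sketch of this rule (a hypothetical helper, not a Pulsar tool), the intermediate minor versions to pass through can be computed like this:

```java
import java.util.ArrayList;
import java.util.List;

public class UpgradePath {
    // Hypothetical helper illustrating the rule above: upgrade through each
    // minor version in order, e.g. 2.8.x -> 2.9.x -> 2.10.x.
    static List<String> path(int major, int fromMinor, int toMinor) {
        List<String> steps = new ArrayList<>();
        for (int minor = fromMinor + 1; minor <= toMinor; minor++) {
            steps.add(major + "." + minor + ".x");
        }
        return steps;
    }

    public static void main(String[] args) {
        // Upgrading 2.8.x -> 2.10.x must pass through 2.9.x first.
        System.out.println(path(2, 8, 10));
    }
}
```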
+
+## Supported Versions
+
+Feature release branches will be maintained with security fix and bug fix releases for a period of at least 12 months
+after initial release. For example, branch 2.5.x is no longer considered maintained as of January 2021, 12 months after
+the release of 2.5.0 in January 2020. No more 2.5.x releases should be expected at this point, even to fix security
+vulnerabilities.
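The 12-month window above can be sketched with `java.time` (a hypothetical helper, not official tooling):

```java
import java.time.YearMonth;

public class SupportWindow {
    // Sketch of the 12-month maintenance rule above (dates only).
    static YearMonth supportedAtLeastUntil(YearMonth initialRelease) {
        return initialRelease.plusMonths(12);
    }

    public static void main(String[] args) {
        // 2.5.0 was released in January 2020, so 2.5.x was maintained
        // until at least January 2021, matching the example above.
        System.out.println(supportedAtLeastUntil(YearMonth.of(2020, 1)));
    }
}
```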
+
+Note that a minor version can be maintained past its 12-month initial support period. For example, version 2.7 is still
+actively maintained.
+
+Security fixes will be given priority when backporting fixes to older versions that are within the
+supported time window. It is challenging to decide which bug fixes to backport to old versions. As such, the latest
+versions will have the most bug fixes.
+
+When 3.0.0 is released, the community will decide how to continue supporting 2.x. It is possible that the last minor
+release within 2.x will be maintained for longer as an “LTS” release, but it has not been officially decided.
+
+The following table shows version support timelines and will be updated with each release.
+
+| Version | Supported          | Initial Release | At Least Until |
+|:-------:|:------------------:|:---------------:|:--------------:|
+| 2.9.x   | :white_check_mark: | November 2021   | November 2022  |
+| 2.8.x   | :white_check_mark: | June 2021       | June 2022      |
+| 2.7.x   | :white_check_mark: | November 2020   | November 2021  |
+| 2.6.x   | :x:                | June 2020       | June 2021      |
+| 2.5.x   | :x:                | January 2020    | January 2021   |
+| 2.4.x   | :x:                | July 2019       | July 2020      |
+| < 2.3.x | :x:                | -               | -              |
+
+If there is ambiguity about which versions of Pulsar are actively supported, please ask on the [users@pulsar.apache.org](mailto:users@pulsar.apache.org)
+mailing list.
+
+## Release Frequency
+
+With the acceptance of [PIP-47 - A Time Based Release Plan](https://github.com/apache/pulsar/wiki/PIP-47%3A-Time-Based-Release-Plan),
+the Pulsar community aims to complete 4 minor releases each year. Patch releases are completed based on demand
+and on need, for example in the event of security fixes.
diff --git a/site2/website-next/docs/sql-deployment-configurations.md b/site2/website-next/docs/sql-deployment-configurations.md
index 9e7ff5a..10fb47a 100644
--- a/site2/website-next/docs/sql-deployment-configurations.md
+++ b/site2/website-next/docs/sql-deployment-configurations.md
@@ -115,20 +115,15 @@ pulsar.zookeeper-uri=localhost1,localhost2:2181
 
 ```
 
-A frequently asked question is why my latest message not showing up when querying with Pulsar SQL.
-It's not a bug but controlled by a setting, by default BookKeeper LAC only advanced when subsequent entries are added.
-If there is no subsequent entries added, the last entry written will not be visible to readers until the ledger is closed.
-This is not a problem for Pulsar which uses managed ledger, but Pulsar SQL directly read from BookKeeper ledger.
-We can add following setting to change the behavior:
-In Broker config, set
-bookkeeperExplicitLacIntervalInMills > 0
-bookkeeperUseV2WireProtocol=false
-
-And in Presto config, set
-pulsar.bookkeeper-explicit-interval > 0
-pulsar.bookkeeper-use-v2-protocol=false
-
-However,keep in mind that using bk V3 protocol will introduce additional GC overhead to BK as it uses Protobuf.
+**Note: by default, Pulsar SQL does not get the last message in a topic.** This is by design and controlled by settings. By default, the BookKeeper LAC (LastAddConfirmed) only advances when subsequent entries are added. If no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar, which uses the managed ledger, but Pulsar SQL reads directly from the BookKeeper ledger. 
+
+If you want to get the last message in a topic, set the following configurations:
+
+1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.
+   
+2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.
+
+However, keep in mind that using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper, as it uses Protobuf.
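As a sketch, the two configuration changes above might look like the following (the interval value `1000` is illustrative; any value greater than 0 enables explicit LAC):

```properties
# broker.conf (or standalone.conf): advertise the LAC explicitly
bookkeeperExplicitLacIntervalInMills=1000

# Presto connector configuration: read the explicit LAC over the V3 protocol
pulsar.bookkeeper-explicit-interval=1000
pulsar.bookkeeper-use-v2-protocol=false
```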
 
 ## Query data from existing Presto clusters
 
diff --git a/site2/website-next/docs/standalone-docker.md b/site2/website-next/docs/standalone-docker.md
index 7ee20c2..f636d2d 100644
--- a/site2/website-next/docs/standalone-docker.md
+++ b/site2/website-next/docs/standalone-docker.md
@@ -22,6 +22,7 @@ A few things to note about this command:
  * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every 
 time the container is restarted. For details on the volumes you can use `docker volume inspect <sourcename>`
  * For Docker on Windows make sure to configure it to use Linux containers
+ * The Docker container runs as UID 10000 and GID 0 by default. Ensure that the mounted volumes grant write permission to either UID 10000 or GID 0. Note that UID 10000 is arbitrary, so it is recommended to make these mounts writable by the root group (GID 0).
 
 If you start Pulsar successfully, you will see `INFO`-level log messages like this:
 
diff --git a/site2/website-next/docs/standalone.md b/site2/website-next/docs/standalone.md
index a10a32c..738bccf 100644
--- a/site2/website-next/docs/standalone.md
+++ b/site2/website-next/docs/standalone.md
@@ -5,7 +5,7 @@ title: Set up a standalone Pulsar locally
 sidebar_label: "Run Pulsar locally"
 ---
 
-For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
+For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary [RocksDB](http://rocksdb.org/) and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
 
 > **Pulsar in production?**  
 > If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal) guide.
@@ -64,7 +64,7 @@ The Pulsar binary package initially contains the following directories:
 Directory | Contains
 :---------|:--------
 `bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/).
-`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
+`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker) and more.<br />**Note:** Pulsar standalone uses RocksDB as the local metadata store and its configuration file path [`metadataStoreConfigPath`](reference-configuration) is configurable in the `standalone.conf` file. For more information about the configurations of RocksDB, see [here](https://github.com/facebook/rocksdb/blob/main/examples/rocksdb_option_file_example.ini) and relate [...]
 `examples` | A Java JAR file containing [Pulsar Functions](functions-overview) example.
 `instances` | Artifacts created for [Pulsar Functions](functions-overview).
 `lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
@@ -74,7 +74,7 @@ These directories are created once you begin running Pulsar.
 
 Directory | Contains
 :---------|:--------
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
+`data` | The data storage directory used by RocksDB and BookKeeper.
 `logs` | Logs created by the installation.
 
 :::tip
diff --git a/site2/website-next/docs/tiered-storage-filesystem.md b/site2/website-next/docs/tiered-storage-filesystem.md
index 019f63c..cde7a3c 100644
--- a/site2/website-next/docs/tiered-storage-filesystem.md
+++ b/site2/website-next/docs/tiered-storage-filesystem.md
@@ -344,7 +344,6 @@ For details about how to set up a Hadoop single node cluster, see [here](https:/
 
    ![](/assets/FileSystem-1.png)
 
-
    1. At the top navigation bar, click **Datanodes** to check DataNode information.
 
        ![](/assets/FileSystem-2.png)
diff --git a/site2/website-next/docs/txn-why.md b/site2/website-next/docs/txn-why.md
index dc32a1f..0696b38 100644
--- a/site2/website-next/docs/txn-why.md
+++ b/site2/website-next/docs/txn-why.md
@@ -17,7 +17,7 @@ successfully produced, and vice versa.
 
 ![](/assets/txn-1.png)
 
-The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means, a batch of messages in a transaction can be received from, produced to and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as one single until.
+The Pulsar transactions API strengthens the message delivery semantics and the processing guarantees for stream processing. It enables stream processing applications to consume, process, and produce messages in one atomic operation. That means, a batch of messages in a transaction can be received from, produced to and acknowledged by many topic partitions. All the operations involved in a transaction succeed or fail as one single unit.
 
 ## Limitation of idempotent producer
 
diff --git a/site2/website-next/static/assets/DDLC.png b/site2/website-next/static/assets/DDLC.png
new file mode 100644
index 0000000..7870ef2
Binary files /dev/null and b/site2/website-next/static/assets/DDLC.png differ
diff --git a/site2/website-next/static/assets/OverloadShedder.png b/site2/website-next/static/assets/OverloadShedder.png
new file mode 100644
index 0000000..0419fa0
Binary files /dev/null and b/site2/website-next/static/assets/OverloadShedder.png differ
diff --git a/site2/website-next/static/assets/ThresholdShedder.png b/site2/website-next/static/assets/ThresholdShedder.png
new file mode 100644
index 0000000..787ac82
Binary files /dev/null and b/site2/website-next/static/assets/ThresholdShedder.png differ
diff --git a/site2/website-next/static/assets/UniformLoadShedder.png b/site2/website-next/static/assets/UniformLoadShedder.png
new file mode 100644
index 0000000..88e2e47
Binary files /dev/null and b/site2/website-next/static/assets/UniformLoadShedder.png differ
diff --git a/site2/website-next/static/assets/cluster-level-failover-1.png b/site2/website-next/static/assets/cluster-level-failover-1.png
new file mode 100644
index 0000000..a01a722
Binary files /dev/null and b/site2/website-next/static/assets/cluster-level-failover-1.png differ
diff --git a/site2/website-next/static/assets/cluster-level-failover-2.png b/site2/website-next/static/assets/cluster-level-failover-2.png
new file mode 100644
index 0000000..36cce4f
Binary files /dev/null and b/site2/website-next/static/assets/cluster-level-failover-2.png differ
diff --git a/site2/website-next/static/assets/cluster-level-failover-3.png b/site2/website-next/static/assets/cluster-level-failover-3.png
new file mode 100644
index 0000000..b17cd65
Binary files /dev/null and b/site2/website-next/static/assets/cluster-level-failover-3.png differ
diff --git a/site2/website-next/static/assets/cluster-level-failover-4.png b/site2/website-next/static/assets/cluster-level-failover-4.png
new file mode 100644
index 0000000..e2e29a6
Binary files /dev/null and b/site2/website-next/static/assets/cluster-level-failover-4.png differ
diff --git a/site2/website-next/static/assets/cluster-level-failover-5.png b/site2/website-next/static/assets/cluster-level-failover-5.png
new file mode 100644
index 0000000..17cc70c
Binary files /dev/null and b/site2/website-next/static/assets/cluster-level-failover-5.png differ
diff --git a/site2/website-next/static/assets/tableview.png b/site2/website-next/static/assets/tableview.png
new file mode 100644
index 0000000..4e5203f
Binary files /dev/null and b/site2/website-next/static/assets/tableview.png differ
diff --git a/site2/website-next/static/assets/zookeeper-batching.png b/site2/website-next/static/assets/zookeeper-batching.png
new file mode 100644
index 0000000..4bd461e
Binary files /dev/null and b/site2/website-next/static/assets/zookeeper-batching.png differ
diff --git a/site2/website-next/versioned_docs/version-2.2.0/admin-api-clusters.md b/site2/website-next/versioned_docs/version-2.2.0/admin-api-clusters.md
index ccd3ebb..3c2f661 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/admin-api-clusters.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/admin-api-clusters.md
@@ -103,8 +103,8 @@ Here's an example cluster metadata initialization command:
 
 bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
diff --git a/site2/website-next/versioned_docs/version-2.2.0/administration-proxy.md b/site2/website-next/versioned_docs/version-2.2.0/administration-proxy.md
index 3cef937..5228b9a 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/administration-proxy.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/administration-proxy.md
@@ -8,22 +8,9 @@ Pulsar proxy is an optional gateway. Pulsar proxy is used when direct connection
 
 ## Configure the proxy
 
-Before using the proxy, you need to configure it with the brokers addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration. 
+Before using the proxy, you need to configure it with the addresses of the brokers in the cluster. You can configure the broker URLs in the proxy configuration, or configure the proxy to connect directly using service discovery.
 
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-
-```properties
-
-zookeeperServers=zk-0,zk-1,zk-2
-configurationStoreServers=zk-0:2184,zk-remote:2184
-
-```
-
-> To use service discovery, you need to open the network ACLs, so the proxy can connects to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, it is not secure to use service discovery. Because if the network ACL is open, when someone compromises a proxy, they have full access to ZooKeeper. 
+> Service discovery is not recommended in a production environment.
 
 ### Use broker URLs
 
@@ -57,6 +44,21 @@ The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651
 
 Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
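For illustration, a minimal `conf/proxy.conf` sketch of the broker-URL approach described above (hostnames are placeholders; `brokerServiceURL` and `brokerWebServiceURL` are the standard proxy settings, with TLS variants `brokerServiceURLTLS`/`brokerWebServiceURLTLS`):

```properties

brokerServiceURL=pulsar://brokers.example.com:6650
brokerWebServiceURL=http://brokers.example.com:8080

```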
 
+### Use service discovery
+
+Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
+
+```properties
+
+metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
+configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
+
+```
+
+> To use service discovery, you need to open the network ACLs so that the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
+
+> However, using service discovery is not secure: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper. 
+
 ## Start the proxy
 
 To start the proxy:
@@ -64,7 +66,9 @@ To start the proxy:
 ```bash
 
 $ cd /path/to/pulsar/directory
-$ bin/pulsar proxy
+$ bin/pulsar proxy \
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
diff --git a/site2/website-next/versioned_docs/version-2.2.0/administration-zk-bk.md b/site2/website-next/versioned_docs/version-2.2.0/administration-zk-bk.md
index e5f9688..c0aec95 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/administration-zk-bk.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/administration-zk-bk.md
@@ -147,27 +147,19 @@ $ bin/pulsar-daemon start configuration-store
 
 ### ZooKeeper configuration
 
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
+In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation:
+* The `conf/zookeeper.conf` file handles the configuration for local ZooKeeper.
+* The `conf/global-zookeeper.conf` file handles the configuration for configuration store.
+See [parameters](reference-configuration.md#zookeeper) for more details.
 
-#### Local ZooKeeper
+#### Configure batching operations
+
+Using batching operations reduces the remote procedure call (RPC) traffic between the ZooKeeper client and servers. It also reduces the number of write transactions, because each batching operation corresponds to a single ZooKeeper transaction that contains multiple read and write operations.
 
-The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
+The following figure shows a basic benchmark of the batched read/write operations that can be requested from ZooKeeper in one second:
 
-|Name|Description|Default|
-|---|---|---|
-|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort|  The port on which the ZooKeeper server listens for connections. |2181|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
-|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables. Read this guide before enabling auto purge. |1|
-|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
+![Zookeeper batching benchmark](/assets/zookeeper-batching.png)
 
-
-#### Configuration Store
-
-The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for configuration store. The table below shows the available parameters:
+To enable batching operations, set the [`metadataStoreBatchingEnabled`](reference-configuration.md#broker) parameter to `true` on the broker side.
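For reference, a hedged `conf/broker.conf` sketch (the extra tuning knobs shown next to `metadataStoreBatchingEnabled` are assumptions based on recent Pulsar broker configurations; verify the names and defaults against your version):

```properties

# Enable batching of ZooKeeper operations on the broker
metadataStoreBatchingEnabled=true
# Maximum delay (in milliseconds) an operation may wait to be batched
metadataStoreBatchingMaxDelayMillis=5
# Maximum number of operations in a single batch
metadataStoreBatchingMaxOperations=1000
# Maximum size (in KB) of a single batch
metadataStoreBatchingMaxSizeKb=128

```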
 
 
 ## BookKeeper
@@ -194,6 +186,12 @@ You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](referenc
 
 The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
 
+:::note
+
+Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
+
+:::
+
 ```properties
 
 # Change to point to journal disk mount point
@@ -205,6 +203,9 @@ ledgerDirectories=data/bookkeeper/ledgers
 # Point to local ZK quorum
 zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
 
+# It is recommended to set this parameter. Otherwise, BookKeeper cannot start normally in certain environments (for example, Huawei Cloud).
+advertisedAddress=
+
 ```
 
 To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
diff --git a/site2/website-next/versioned_docs/version-2.2.0/client-libraries-java.md b/site2/website-next/versioned_docs/version-2.2.0/client-libraries-java.md
index b8150e1..a0c4f98 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/client-libraries-java.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/client-libraries-java.md
@@ -4,9 +4,15 @@ title: Pulsar Java client
 sidebar_label: "Java"
 ---
 
-You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), and [readers](#reader) of messages and to perform [administrative tasks](admin-api-overview). The current Java client version is **@pulsar:version@**.
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
 
-All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.
+
+You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), [readers](#reader) and [TableView](#tableview) of messages and to perform [administrative tasks](admin-api-overview). The current Java client version is **@pulsar:version@**.
+
+All the methods in [producer](#producer), [consumer](#consumer), [readers](#reader) and [TableView](#tableview) of a Java client are thread-safe.
 
 Javadoc for the Pulsar client is divided into two domains by package as follows.
 
@@ -168,6 +174,328 @@ You can set the client memory allocator configurations through Java properties.<
 
 ```
 
+### Cluster-level failover
+
+This chapter describes the concept, benefits, use cases, constraints, usage, and working principles of cluster-level failover. It contains the following sections:
+
+- [What is cluster-level failover?](#what-is-cluster-level-failover)
+
+  * [Concept of cluster-level failover](#concept-of-cluster-level-failover)
+   
+  * [Why use cluster-level failover?](#why-use-cluster-level-failover)
+
+  * [When to use cluster-level failover?](#when-to-use-cluster-level-failover)
+
+  * [When cluster-level failover is triggered?](#when-cluster-level-failover-is-triggered)
+
+  * [Why does cluster-level failover fail?](#why-does-cluster-level-failover-fail)
+
+  * [What are the limitations of cluster-level failover?](#what-are-the-limitations-of-cluster-level-failover)
+
+  * [What are the relationships between cluster-level failover and geo-replication?](#what-are-the-relationships-between-cluster-level-failover-and-geo-replication)
+  
+- [How to use cluster-level failover?](#how-to-use-cluster-level-failover)
+
+- [How does cluster-level failover work?](#how-does-cluster-level-failover-work)
+  
+> #### What is cluster-level failover?
+
+This chapter helps you better understand the concept of cluster-level failover.
+
+> ##### Concept of cluster-level failover
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+Automatic cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters automatically and seamlessly when it detects a failover event, based on the detection policy configured by **users**. 
+
+![Automatic cluster-level failover](/assets/cluster-level-failover-1.png)
+
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+Controlled cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters. The switchover is manually set by **administrators**.
+
+![Controlled cluster-level failover](/assets/cluster-level-failover-2.png)
+
+</TabItem>
+
+</Tabs>
+````
+
+Once the primary cluster functions again, Pulsar clients can switch back to the primary cluster. Most of the time users won’t even notice a thing. Users can keep using applications and services without interruptions or timeouts.
+
+> ##### Why use cluster-level failover?
+
+The cluster-level failover provides fault tolerance, continuous availability, and high availability together. It brings a number of benefits, including but not limited to:
+
+* Reduced cost: services can be switched and recovered automatically with no data loss.
+
+* Simplified management: businesses can operate on an “always-on” basis since no immediate user intervention is required.
+
+* Improved stability and robustness: it ensures continuous performance and minimizes service downtime. 
+
+> ##### When to use cluster-level failover?
+
+The cluster-level failover protects your environment in a number of ways, including but not limited to:
+
+* Disaster recovery: cluster-level failover can automatically and seamlessly transfer the production workload on a primary cluster to one or several backup clusters, which ensures minimum data loss and reduced recovery time.
+
+* Planned migration: if you want to migrate production workloads from an old cluster to a new cluster, you can improve the migration efficiency with cluster-level failover. For example, you can test whether the data migration goes smoothly in case of a failover event, identify possible issues and risks before the migration.
+
+> ##### When cluster-level failover is triggered?
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+Automatic cluster-level failover is triggered when Pulsar clients cannot connect to the primary cluster for a prolonged period of time. This can be caused by any number of reasons including, but not limited to: 
+
+* Network failure: internet connection is lost.
+
+* Power failure: shutdown time of a primary cluster exceeds time limits.
+
+* Service error: errors occur on a primary cluster (for example, the primary cluster does not function because of time limits).
+
+* Crashed storage space: the primary cluster does not have enough storage space, but the corresponding storage space on the backup server functions normally.
+
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+Controlled cluster-level failover is triggered when administrators set the switchover manually.
+
+</TabItem>
+
+</Tabs>
+````
+
+> ##### Why does cluster-level failover fail?
+
+Obviously, the cluster-level failover does not succeed if the backup cluster is unreachable by active Pulsar clients. This can happen for many reasons, including but not limited to:
+
+* Power failure: the backup cluster is shut down or does not function normally. 
+
+* Crashed storage space: primary and backup clusters do not have enough storage space. 
+
+* Broken failover: the failover is initiated, but no backup cluster can take over due to errors, and the primary cluster cannot provide service normally.
+
+* Failed switchover: you manually initiate a switchover, but services cannot be switched to the backup cluster; the system then attempts to switch services back to the primary cluster.
+
+* Failed authentication or authorization between 1) the primary and backup clusters, or 2) two backup clusters.
+
+> ##### What are the limitations of cluster-level failover?
+
+Currently, cluster-level failover can perform probes to prevent data loss, but it cannot check the status of backup clusters. If backup clusters are not healthy, you cannot produce or consume data.
+
+> ##### What are the relationships between cluster-level failover and geo-replication?
+
+Cluster-level failover is an extension of [geo-replication](concepts-replication) that improves stability and robustness. Cluster-level failover depends on geo-replication, and the two have some **differences**, as shown below.
+
+Influence |Cluster-level failover|Geo-replication
+|---|---|---
+Do administrators have heavy workloads?|No or maybe.<br /><br />- For the **automatic** cluster-level failover, the cluster switchover is triggered automatically based on the policies set by **users**.<br /><br />- For the **controlled** cluster-level failover, the switchover is triggered manually by **administrators**.|Yes.<br /><br />If a cluster fails, immediate administration intervention is required.|
+Result in data loss?|No.<br /><br />For both **automatic** and **controlled** cluster-level failover, if the failed primary cluster doesn't replicate messages immediately to the backup cluster, the Pulsar client can't consume the non-replicated messages. After the primary cluster is restored and the Pulsar client switches back, the non-replicated data can still be consumed by the Pulsar client. Consequently, the data is not lost.<br /><br />- For the **automatic** cluster-level failover, [...]
+Result in Pulsar client failure? |No or maybe.<br /><br />- For **automatic** cluster-level failover, services can be switched and recovered automatically and the Pulsar client does not fail. <br /><br />- For **controlled** cluster-level failover, services can be switched and recovered manually, but the Pulsar client fails before administrators can take action. |Same as above.
+
+> #### How to use cluster-level failover
+
+This section guides you through every step on how to configure cluster-level failover.
+
+**Tip**
+
+- You should configure cluster-level failover only when the cluster contains sufficient resources to handle all possible consequences. Workload intensity on the backup cluster may increase significantly.
+
+- Connect clusters to an uninterruptible power supply (UPS) unit to reduce the risk of unexpected power loss.
+
+**Requirements**
+
+* Pulsar client 2.10 or later versions.
+
+* For backup clusters:
+
+  * The number of BookKeeper nodes should be equal to or greater than the ensemble quorum.
+
+  * The number of ZooKeeper nodes should be equal to or greater than 3.
+
+* **Turn on geo-replication** between the primary cluster and any dependent cluster (primary to backup or backup to backup) to prevent data loss.
+
+* Set `replicateSubscriptionState` to `true` when creating consumers.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+This is an example of how to construct a Java Pulsar client to use automatic cluster-level failover. The switchover is triggered automatically.
+
+```java
+
+private PulsarClient getAutoFailoverClient() throws PulsarClientException {
+    ServiceUrlProvider failover = AutoClusterFailover.builder()
+            .primary("pulsar://localhost:6650")
+            .secondary(Arrays.asList("pulsar://other1:6650", "pulsar://other2:6650"))
+            .failoverDelay(30, TimeUnit.SECONDS)
+            .switchBackDelay(60, TimeUnit.SECONDS)
+            .checkInterval(1000, TimeUnit.MILLISECONDS)
+            .secondaryTlsTrustCertsFilePath("/path/to/ca.cert.pem")
+            .secondaryAuthentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
+                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
+            .build();
+
+    PulsarClient pulsarClient = PulsarClient.builder()
+            .build();
+
+    failover.initialize(pulsarClient);
+    return pulsarClient;
+}
+
+```
+
+Configure the following parameters:
+
+Parameter|Default value|Required?|Description
+|---|---|---|---
+`primary`|N/A|Yes|Service URL of the primary cluster.
+`secondary`|N/A|Yes|Service URL(s) of one or several backup clusters.<br /><br />You can specify several backup clusters using a comma-separated list.<br /><br /> Note that:<br />- The backup cluster is chosen in the sequence shown in the list. <br />- If all backup clusters are available, the Pulsar client chooses the first backup cluster.
+`failoverDelay`|N/A|Yes|The delay before the Pulsar client switches from the primary cluster to the backup cluster.<br /><br />Automatic failover is controlled by a probe task: <br />1) The probe task first checks the health status of the primary cluster. <br /> 2) If the probe task finds that the continuous failure time of the primary cluster exceeds `failoverDelay`, it switches the Pulsar client to the backup cluster. 
+`switchBackDelay`|N/A|Yes|The delay before the Pulsar client switches from the backup cluster to the primary cluster.<br /><br />Automatic failover switchover is controlled by a probe task: <br /> 1) After the Pulsar client switches from the primary cluster to the backup cluster, the probe task continues to check the status of the primary cluster. <br /> 2) If the primary cluster functions well and continuously remains active longer than `switchBackDelay`, the Pulsar client switches back [...]
+`checkInterval`|30s|No|Frequency of performing a probe task (in seconds).
+`secondaryTlsTrustCertsFilePath`|N/A|No|Path to the trusted TLS certificate file of the backup cluster.
+`secondaryAuthentication`|N/A|No|Authentication of the backup cluster.
+
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+This is an example of how to construct a Java Pulsar client to use controlled cluster-level failover. The switchover is triggered by administrators manually.
+
+**Note**: you can have one or several backup clusters, but you can specify only one at a time.
+
+```java
+
+public PulsarClient getControlledFailoverClient() throws IOException {
+    Map<String, String> header = new HashMap<>();
+    header.put("service_user_id", "my-user");
+    header.put("service_password", "tiger");
+    header.put("clusterA", "tokenA");
+    header.put("clusterB", "tokenB");
+
+    ServiceUrlProvider provider = ControlledClusterFailover.builder()
+            .defaultServiceUrl("pulsar://localhost:6650")
+            .checkInterval(1, TimeUnit.MINUTES)
+            .urlProvider("http://localhost:8080/test")
+            .urlProviderHeader(header)
+            .build();
+
+    PulsarClient pulsarClient = PulsarClient.builder()
+            .build();
+
+    provider.initialize(pulsarClient);
+    return pulsarClient;
+}
+
+```
+
+Parameter|Default value|Required?|Description
+|---|---|---|---
+`defaultServiceUrl`|N/A|Yes|Pulsar service URL.
+`checkInterval`|30s|No|Frequency of performing a probe task (in seconds).
+`urlProvider`|N/A|Yes|URL provider service.
+`urlProviderHeader`|N/A|No|`urlProviderHeader` is a map containing tokens and credentials. <br /><br />If you enable authentication or authorization between Pulsar clients and primary and backup clusters, you need to provide `urlProviderHeader`.
+
+Here is an example of how `urlProviderHeader` works.
+
+![How urlProviderHeader works](/assets/cluster-level-failover-3.png)
+
+Assume that you want to connect Pulsar client 1 to cluster A.
+
+1. Pulsar client 1 sends the token *t1* to the URL provider service.
+
+2. The URL provider service returns the credential *c1* and the cluster A URL to the Pulsar client.
+   
+   The URL provider service manages all tokens and credentials. It returns different credentials and target cluster URLs to different Pulsar clients based on their tokens.
+
+   **Note**: **the credential must be in a JSON file and contain parameters as shown**.
+
+   ```
+   
+   {
+   "serviceUrl": "pulsar+ssl://target:6651", 
+   "tlsTrustCertsFilePath": "/security/ca.cert.pem",
+   "authPluginClassName":"org.apache.pulsar.client.impl.auth.AuthenticationTls",
+   "authParamsString": " \"tlsCertFile\": \"/security/client.cert.pem\" 
+       \"tlsKeyFile\": \"/security/client-pk8.pem\" "
+   }
+   
+   ```
+
+3. Pulsar client 1 connects to cluster A using credential *c1*.
+
+</TabItem>
+
+</Tabs>
+````
+
+> #### How does cluster-level failover work?
+
+This chapter explains the working process of cluster-level failover. For more implementation details, see [PIP-121](https://github.com/apache/pulsar/issues/13315).
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+In automatic cluster-level failover, the primary and backup clusters are aware of each other's availability. The failover performs the following actions without administrator intervention:
+
+1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
+   
+2. If the probe task finds the failure time of the primary cluster exceeds the time set in the `failoverDelay` parameter, it searches backup clusters for an available healthy cluster.
+
+   2a) If there are healthy backup clusters, the Pulsar client switches to a backup cluster in the order defined in `secondary`.
+
+   2b) If there is no healthy backup cluster, the Pulsar client does not perform the switchover, and the probe task continues to look for an available backup cluster.
+
+3. The probe task checks whether the primary cluster functions well or not. 
+
+   3a) If the primary cluster comes back and the continuous healthy time exceeds the time set in `switchBackDelay`, the Pulsar client switches back to the primary cluster.
+
+   3b) If the primary cluster does not come back, the Pulsar client does not perform the switchover. 
+
+![Workflow of automatic failover cluster](/assets/cluster-level-failover-4.png)
+
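The probe logic in steps 1–3 above can be sketched as a small state machine. This is illustrative only: `FailoverSketch` and `probe` are hypothetical names, not part of the Pulsar client API.

```java
// Illustrative sketch of the automatic-failover decision rule: switch away
// once the primary has been unhealthy for failoverDelay, and switch back
// once it has been healthy again for switchBackDelay.
class FailoverSketch {
    private final long failoverDelayMs;
    private final long switchBackDelayMs;
    private long primaryDownSinceMs = -1; // -1: primary not currently failing
    private long primaryUpSinceMs = -1;   // -1: primary not currently recovering
    private boolean onPrimary = true;

    FailoverSketch(long failoverDelayMs, long switchBackDelayMs) {
        this.failoverDelayMs = failoverDelayMs;
        this.switchBackDelayMs = switchBackDelayMs;
    }

    /** One probe tick; returns true if the client should be connected to the primary. */
    boolean probe(boolean primaryHealthy, boolean anyBackupHealthy, long nowMs) {
        if (onPrimary) {
            if (primaryHealthy) {
                primaryDownSinceMs = -1;
            } else {
                if (primaryDownSinceMs < 0) primaryDownSinceMs = nowMs;
                // Step 2: continuous failure exceeds failoverDelay -> switch if a backup is healthy.
                if (nowMs - primaryDownSinceMs >= failoverDelayMs && anyBackupHealthy) {
                    onPrimary = false;
                    primaryUpSinceMs = -1;
                }
            }
        } else {
            if (primaryHealthy) {
                if (primaryUpSinceMs < 0) primaryUpSinceMs = nowMs;
                // Step 3: continuously healthy for switchBackDelay -> switch back.
                if (nowMs - primaryUpSinceMs >= switchBackDelayMs) {
                    onPrimary = true;
                    primaryDownSinceMs = -1;
                }
            } else {
                primaryUpSinceMs = -1;
            }
        }
        return onPrimary;
    }

    public static void main(String[] args) {
        // failoverDelay = 30 s, switchBackDelay = 60 s.
        FailoverSketch f = new FailoverSketch(30_000, 60_000);
        System.out.println("t=0s, primary down:  onPrimary=" + f.probe(false, true, 0));
        System.out.println("t=30s, still down:   onPrimary=" + f.probe(false, true, 30_000));
        System.out.println("t=31s, primary back: onPrimary=" + f.probe(true, true, 31_000));
        System.out.println("t=91s, healthy 60s:  onPrimary=" + f.probe(true, true, 91_000));
    }
}
```

Note that, as in step 2b, no switchover happens while `anyBackupHealthy` is false, no matter how long the primary stays down.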
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
+
+2. The probe task fetches the service URL configuration from the URL provider service, which is configured by `urlProvider`.
+
+   2a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.
+
+   2b) If the service URL configuration is not changed, the Pulsar client does not perform the switchover.
+
+3. If the Pulsar client switches to the target cluster, the probe task continues to fetch service URL configuration from the URL provider service at intervals defined in `checkInterval`. 
+
+   3a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.
+
+   3b) If the service URL configuration is not changed, it does not perform the switchover.
+
+![Workflow of controlled failover cluster](/assets/cluster-level-failover-5.png)
+
+</TabItem>
+
+</Tabs>
+````
+
 ## Producer
 
 In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).
@@ -241,7 +569,9 @@ Name| Type |  <div>Description</div>|  Default
 `batchingMaxPublishDelayMicros`| long|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
 `batchingMaxMessages` |int|The maximum number of messages permitted in a batch.|1000
 `batchingEnabled`| boolean|Enable batching of messages. |true
+`chunkingEnabled` | boolean | Enable chunking of messages. |false
 `compressionType`|CompressionType|Message data compression type used by a producer. <br />Available options:<li>[`LZ4`](https://github.com/lz4/lz4)</li><li>[`ZLIB`](https://zlib.net/)<br /></li><li>[`ZSTD`](https://facebook.github.io/zstd/)</li><li>[`SNAPPY`](https://google.github.io/snappy/)</li>| No compression
+`initialSubscriptionName`|string|Use this configuration to automatically create an initial subscription when creating a topic. If this field is not set, the initial subscription is not created.|null
 
 You can configure parameters if you do not want to use the default configuration.
 
@@ -295,6 +625,24 @@ producer.newMessage()
 
 You can terminate the builder chain with `sendAsync()` and get a future return.
 
+### Enable chunking
+
+Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side. 
+
+The message chunking feature is OFF by default. The following is an example of how to enable message chunking when creating a producer.
+
+```java
+
+Producer<byte[]> producer = client.newProducer()
+        .topic(topic)
+        .enableChunking(true)
+        .enableBatching(false)
+        .create();
+
+```
+
+> **Note:** To enable chunking, you also need to disable batching (`enableBatching`=`false`).
+
 ## Consumer
 
 In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
@@ -382,7 +730,11 @@ When you create a consumer, you can use the `loadConf` configuration. The follow
 `deadLetterPolicy`|DeadLetterPolicy|Dead letter policy for consumers.<br /><br />By default, some messages are probably redelivered many times, even to the extent that it never stops.<br /><br />By using the dead letter mechanism, messages have the max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.<br /><br />You can enable the dead letter mechanism by setting `deadLetterPolicy`.<br /><br [...]
 `autoUpdatePartitions`|boolean|If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increasement automatically.<br /><br />**Note**: this is only for partitioned consumers.|true
 `replicateSubscriptionState`|boolean|If `replicateSubscriptionState` is enabled, a subscription state is replicated to geo-replicated clusters.|false
-`negativeAckRedeliveryBackoff`|NegativeAckRedeliveryBackoff|Interface for custom message is negativeAcked policy. You can specify `NegativeAckRedeliveryBackoff` for a consumer.| `NegativeAckRedeliveryExponentialBackoff`
+`negativeAckRedeliveryBackoff`|RedeliveryBackoff|Interface for a custom negative-acknowledgment redelivery backoff policy. You can specify `RedeliveryBackoff` for a consumer.| `MultiplierRedeliveryBackoff`
+`ackTimeoutRedeliveryBackoff`|RedeliveryBackoff|Interface for a custom acknowledgment-timeout redelivery backoff policy. You can specify `RedeliveryBackoff` for a consumer.| `MultiplierRedeliveryBackoff`
+`autoAckOldestChunkedMessageOnQueueFull`|boolean|Whether to automatically acknowledge pending chunked messages when the threshold of `maxPendingChunkedMessage` is reached. If set to `false`, these messages will be redelivered by the broker. |true
+`maxPendingChunkedMessage`|int| The maximum size of a queue holding pending chunked messages. When the threshold is reached, the consumer drops pending messages to optimize memory utilization.|10
+`expireTimeOfIncompleteChunkedMessageMillis`|long|The time interval to expire incomplete chunks if a consumer fails to receive all the chunks in the specified time period. The default value is 1 minute. | 60000
 
 You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class. 
 
@@ -462,27 +814,78 @@ BatchReceivePolicy.builder()
 
 :::
 
+### Configure chunking
+
+You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` and `autoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. The `expireTimeOfIncompleteChunkedMessage` parameter decides the time interval to expire incomplete chunks if the consumer fails to receive all chunks of a me [...]
+
+The following is an example of how to configure message chunking.
+
+```java
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic(topic)
+        .subscriptionName("test")
+        .autoAckOldestChunkedMessageOnQueueFull(true)
+        .maxPendingChunkedMessage(100)
+        .expireTimeOfIncompleteChunkedMessage(10, TimeUnit.MINUTES)
+        .subscribe();
+
+```
+
 ### Negative acknowledgment redelivery backoff
 
-The `NegativeAckRedeliveryBackoff` introduces a redelivery backoff mechanism. You can achieve redelivery with different delays by setting `redeliveryCount ` of messages. 
+The `RedeliveryBackoff` introduces a redelivery backoff mechanism. You can redeliver messages with different delays based on the `redeliveryCount` of the messages. 
+
+```java
+
+Consumer consumer =  client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+                .minDelayMs(1000)
+                .maxDelayMs(60 * 1000)
+                .build())
+        .subscribe();
+
+```
+
+### Acknowledgement timeout redelivery backoff
+
+The `RedeliveryBackoff` introduces a redelivery backoff mechanism. You can redeliver messages with different delays by setting the number
+of times the message is retried.
 
 ```java
 
 Consumer consumer =  client.newConsumer()
         .topic("my-topic")
         .subscriptionName("my-subscription")
-        .negativeAckRedeliveryBackoff(NegativeAckRedeliveryExponentialBackoff.builder()
-                .minNackTimeMs(1000)
-                .maxNackTimeMs(60 * 1000)
+        .ackTimeout(10, TimeUnit.SECONDS)
+        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+                .minDelayMs(1000)
+                .maxDelayMs(60000)
+                .multiplier(2)
                 .build())
         .subscribe();
 
 ```
 
+The message redelivery behavior is as follows.
+
+Redelivery count | Redelivery delay
+:--------------------|:-----------
+1 | 10 + 1 seconds
+2 | 10 + 2 seconds
+3 | 10 + 4 seconds
+4 | 10 + 8 seconds
+5 | 10 + 16 seconds
+6 | 10 + 32 seconds
+7 | 10 + 60 seconds
+8 | 10 + 60 seconds
+
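The schedule above follows an exponential pattern capped at `maxDelayMs`: the backoff delay is `min(maxDelayMs, minDelayMs × multiplier^(redeliveryCount − 1))`, added on top of the 10-second `ackTimeout`. A minimal sketch of that rule (an illustrative re-implementation for clarity, not the Pulsar API):

```java
// Illustrative sketch (not Pulsar API): how a MultiplierRedeliveryBackoff
// with minDelayMs=1000, maxDelayMs=60000, multiplier=2 produces the table above.
public class BackoffSchedule {
    static long delayMs(long minDelayMs, long maxDelayMs, double multiplier, int redeliveryCount) {
        // Exponential growth from minDelayMs, capped at maxDelayMs
        double delay = minDelayMs * Math.pow(multiplier, redeliveryCount - 1);
        return (long) Math.min(delay, maxDelayMs);
    }

    public static void main(String[] args) {
        for (int count = 1; count <= 8; count++) {
            // Total wait before redelivery = ackTimeout (10s) + backoff delay
            System.out.println(count + " -> 10 + " + delayMs(1000, 60000, 2, count) / 1000 + " seconds");
        }
    }
}
```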
 :::note
 
 - The `negativeAckRedeliveryBackoff` does not work with `consumer.negativeAcknowledge(MessageId messageId)` because you are not able to get the redelivery count from the message ID.
-- If a consumer crashes, it triggers the redelivery of unacked messages. In this case, `NegativeAckRedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time from the backoff.
+- If a consumer crashes, it triggers the redelivery of unacked messages. In this case, `RedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time from the backoff.
 
 :::
 
@@ -870,6 +1273,53 @@ pulsarClient.newReader()
 
 Total hash range size is 65536, so the max end of the range should be less than or equal to 65535.
 
+
+## TableView
+
+The TableView interface provides an encapsulated access pattern: a continuously updated key-value map view of the compacted topic data. Messages without keys are ignored.
+
+With TableView, Pulsar clients can fetch all the message updates from a topic and construct a map with the latest values of each key. These values can then be used to build a local cache of data. In addition, you can register consumers with the TableView by specifying a listener to perform a scan of the map and then receive notifications when new messages are received. Consequently, event handling can be triggered to serve use cases, such as event-driven applications and message monitoring.
+
+> **Note:** Each TableView uses one Reader instance per partition, and reads the topic starting from the compacted view by default. It is highly recommended to enable automatic compaction by [configuring the topic compaction policies](cookbooks-compaction.md#configuring-compaction-to-run-automatically) for the given topic or namespace. More frequent compaction results in shorter startup times because less data is replayed to reconstruct the TableView of the topic.
+
+The following figure illustrates the dynamic construction of a TableView updated with newer values of each key.
+![TableView](/assets/tableview.png)
+
+### Configure TableView
+ 
+The following is an example of how to configure a TableView.
+
+```java
+
+TableView<String> tv = client.newTableViewBuilder(Schema.STRING)
+  .topic("my-tableview")
+  .create();
+
+```
+
+You can use the available parameters in the `loadConf` configuration or related [API](https://pulsar.apache.org/api/client/2.10.0-SNAPSHOT/org/apache/pulsar/client/api/TableViewBuilder.html) to customize your TableView.
+
+| Name | Type| Required? |  <div>Description</div> | Default
+|---|---|---|---|---
+| `topic` | string | yes | The topic name of the TableView. | N/A
+| `autoUpdatePartitionInterval` | int | no | The interval to check for newly added partitions. | 60 (seconds)
+
+### Register listeners
+ 
+You can register listeners for both existing messages on a topic and new messages coming into the topic by using `forEachAndListen`, and perform operations on all existing messages by using `forEach`.
+
+The following is an example of how to register listeners with TableView.
+
+```java
+
+// Register listeners for all existing and incoming messages
+tv.forEachAndListen((key, value) -> /*operations on all existing and incoming messages*/)
+
+// Register action for all existing messages
+tv.forEach((key, value) -> /*operations on all existing messages*/)
+
+```
+
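Besides listeners, a TableView supports map-style reads of the latest value for each key. A brief sketch, assuming the `tv` instance from the example above:

```java
// Look up the latest value for a single key (null if the key is absent)
String latest = tv.get("my-key");

// Inspect the current view size and iterate over all entries
int numKeys = tv.size();
tv.entrySet().forEach(e ->
        System.out.println(e.getKey() + " -> " + e.getValue()));
```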
 ## Schema
 
 In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
diff --git a/site2/website-next/versioned_docs/version-2.2.0/concepts-architecture-overview.md b/site2/website-next/versioned_docs/version-2.2.0/concepts-architecture-overview.md
index 8fe0717..a2b024d 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/concepts-architecture-overview.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/concepts-architecture-overview.md
@@ -47,6 +47,9 @@ Clusters can replicate amongst themselves using [geo-replication](concepts-repli
 
 The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and [BookKeeper metadata store](https://bookkee [...]
 
+> Pulsar also supports other metadata backend services, including [etcd](https://etcd.io/) and [RocksDB](http://rocksdb.org/) (for standalone Pulsar only). 
+
+
 In a Pulsar instance:
 
 * A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
@@ -128,9 +131,10 @@ Architecturally, the Pulsar proxy gets all the information it requires from ZooK
 
 ```bash
 
+$ cd /path/to/pulsar/directory
 $ bin/pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk-2 \
-  --configuration-store-servers zk-0,zk-1,zk-2
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
diff --git a/site2/website-next/versioned_docs/version-2.2.0/concepts-messaging.md b/site2/website-next/versioned_docs/version-2.2.0/concepts-messaging.md
index f23fae9..c370181 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/concepts-messaging.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/concepts-messaging.md
@@ -108,29 +108,50 @@ To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar i
 By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overheads. 
 
 ### Chunking
-Before you enable chunking, read the following instructions.
-- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance.
-- Chunking is only supported for persisted topics.
-- Chunking is only supported for Exclusive and Failover subscription types.
+Message chunking enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side.
 
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
+With message chunking enabled, when the size of a message exceeds the allowed maximum payload size (the `maxMessageSize` parameter of the broker), the workflow of messaging is as follows:
+1. The producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. 
+2. The broker stores the chunked messages in one managed-ledger in the same way as ordinary messages, and uses the `chunkedMessageRate` metric to record the chunked message rate on the topic.
+3. The consumer buffers the chunked messages and aggregates them into the receiver queue when it receives all the chunks of a message.
+4. The client consumes the aggregated message from the receiver queue. 
 
-The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. And then the consumer stitches chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated to that large message. You can set the `maxPendingChunkedMessage` param [...]
+**Limitations:** 
+- Chunking is only available for persisted topics.
+- Chunking is only available for the exclusive and failover subscription types.
+- Chunking cannot be enabled simultaneously with batching.
 
-The broker does not require any changes to support chunking for non-shared subscription. The broker only uses `chunkedMessageRate` to record chunked message rate on the topic.
+#### Handle consecutive chunked messages with one ordered consumer
 
-#### Handle chunked messages with one producer and one ordered consumer
-
-As shown in the following figure, when a topic has one producer which publishes large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, combi [...]
+The following figure shows a topic with one producer which publishes a large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks labeled M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, a [...]
 
 ![](/assets/chunking-01.png)
 
-#### Handle chunked messages with multiple producers and one ordered consumer
+#### Handle interwoven chunked messages with one ordered consumer
 
-When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of the specific message are still in order but might not be consecutive in the managed-ledger. This brings some memory pressure to the c [...]
+When multiple producers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different producers in the same managed-ledger. The chunked messages in the managed-ledger can be interwoven with each other. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of the specific message are still in order but might not be  [...]
 
 ![](/assets/chunking-02.png)
 
+:::note
+
+In this case, interwoven chunked messages may bring some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all its chunks in one message. You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` parameter. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later, opt [...]
+
+:::
+
+#### Enable message chunking
+
+**Prerequisite:** Disable batching by setting the `enableBatching` parameter to `false`.
+
+The message chunking feature is OFF by default. 
+To enable message chunking, set the `chunkingEnabled` parameter to `true` when creating a producer.
+
+:::note
+
+If the consumer fails to receive all chunks of a message within a specified time period, it expires incomplete chunks. The default value is 1 minute. For more information about the `expireTimeOfIncompleteChunkedMessage` parameter, refer to [org.apache.pulsar.client.api](https://pulsar.apache.org/api/client/).
+
+:::
+
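Putting the prerequisite and the switch together, a producer with chunking enabled might be created as follows (a sketch; the `client` instance and `topic` name are assumed to exist):

```java
Producer<byte[]> producer = client.newProducer()
        .topic(topic)
        .enableBatching(false)   // chunking requires batching to be disabled
        .enableChunking(true)    // chunking is OFF by default
        .create();
```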
 ## Consumers
 
 A consumer is a process that attaches to a topic via a subscription and then receives messages.
@@ -232,9 +253,9 @@ Use the following API to enable `Negative Redelivery Backoff`.
 
 ```java
 
-consumer.negativeAckRedeliveryBackoff(NegativeAckRedeliveryExponentialBackoff.builder()
-        .minNackTimeMs(1000)
-        .maxNackTimeMs(60 * 1000)
+consumer.negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+        .minDelayMs(1000)
+        .maxDelayMs(60 * 1000)
         .build())
 
 ```
@@ -245,6 +266,34 @@ The acknowledgement timeout mechanism allows you to set a time range during whic
 
 You can configure the acknowledgement timeout mechanism to redeliver the message if it is not acknowledged after `ackTimeout` or to execute a timer task to check the acknowledgement timeout messages during every `ackTimeoutTickTime` period.
 
+You can also use the redelivery backoff mechanism to redeliver messages with different delays by setting the number 
+of times the message is retried.
+
+If you want to use redelivery backoff, you can use the following API.
+
+```java
+
+consumer.ackTimeout(10, TimeUnit.SECONDS)
+        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+        .minDelayMs(1000)
+        .maxDelayMs(60000)
+        .multiplier(2).build())
+
+```
+
+The message redelivery behavior is as follows.
+
+Redelivery count | Redelivery delay
+:--------------------|:-----------
+1 | 10 + 1 seconds
+2 | 10 + 2 seconds
+3 | 10 + 4 seconds
+4 | 10 + 8 seconds
+5 | 10 + 16 seconds
+6 | 10 + 32 seconds
+7 | 10 + 60 seconds
+8 | 10 + 60 seconds
+
 :::note
 
 - If batching is enabled, all messages in one batch are redelivered to the consumer.  
@@ -315,6 +364,23 @@ Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
 
 ```
 
+By default, there is no subscription during a DLQ topic creation. Without a just-in-time subscription to the DLQ topic, you may lose messages. To automatically create an initial subscription for the DLQ, you can specify the `initialSubscriptionName` parameter. If this parameter is set but the broker's `allowAutoSubscriptionCreation` is disabled, the DLQ producer will fail to be created.
+
+```java
+
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+                .topic(topic)
+                .subscriptionName("my-subscription")
+                .subscriptionType(SubscriptionType.Shared)
+                .deadLetterPolicy(DeadLetterPolicy.builder()
+                      .maxRedeliverCount(maxRedeliveryCount)
+                      .deadLetterTopic("your-topic-name")
+                      .initialSubscriptionName("init-sub")
+                      .build())
+                .subscribe();
+
+```
+
 Dead letter topic depends on message redelivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout. 
 
 :::note
diff --git a/site2/website-next/versioned_docs/version-2.2.0/cookbooks-deduplication.md b/site2/website-next/versioned_docs/version-2.2.0/cookbooks-deduplication.md
index e71e6f4..a14a3c3 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/cookbooks-deduplication.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/cookbooks-deduplication.md
@@ -31,6 +31,7 @@ Parameter | Description | Default
 `brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false`
 `brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
 `brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
+`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120`
 `brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)
 
 ### Set default value at the broker-level
diff --git a/site2/website-next/versioned_docs/version-2.2.0/deploy-monitoring.md b/site2/website-next/versioned_docs/version-2.2.0/deploy-monitoring.md
index 95ccdd6..adf3587 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/deploy-monitoring.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/deploy-monitoring.md
@@ -51,7 +51,7 @@ http://$GLOBAL_ZK_SERVER:8001/metrics
 
 ```
 
-The default port of local ZooKeeper is `8000` and the default port of configuration store is `8001`. You can change the default port of local ZooKeeper and configuration store by specifying system property `stats_server_port`.
+The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
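For example, to expose the local ZooKeeper stats on a different port, you might set the following in `conf/zookeeper.conf` (a sketch; it assumes the Prometheus metrics provider shipped with ZooKeeper is in use):

```properties
# Serve ZooKeeper metrics over HTTP via the built-in Prometheus provider
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpPort=8000
```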
 
 ### BookKeeper stats
 
diff --git a/site2/website-next/versioned_docs/version-2.2.0/reference-cli-tools.md b/site2/website-next/versioned_docs/version-2.2.0/reference-cli-tools.md
index 0c8aea1..32b23e9 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/reference-cli-tools.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/reference-cli-tools.md
@@ -208,7 +208,7 @@ Options
 |`-c` , `--cluster`|Cluster name||
 |`-cms` , `--configuration-metadata-store`|The configuration metadata store quorum connection string||
 |`--existing-bk-metadata-service-uri`|The metadata service URI of the existing BookKeeper cluster that you want to use||
-|`-h` , `--help`|Cluster name|false|
+|`-h` , `--help`|Help message|false|
 |`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16|
 |`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16|
 |`-uw` , `--web-service-url`|The web service URL for the new cluster||
@@ -233,16 +233,16 @@ Options
 
 |Flag|Description|Default|
 |---|---|---|
-|`--configuration-store`|Configuration store connection string||
-|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string||
+|`-cms`, `--configuration-metadata-store`|Configuration metadata store connection string||
+|`-md` , `--metadata-store`|Metadata store service URL||
 
 Example
 
 ```bash
 
 $ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk2 \
-  --configuration-store zk-0,zk-1,zk-2
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
@@ -562,7 +562,7 @@ Options
 |`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false|
 |`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload||
 |`-h`, `--help`|Help message|false|
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-m`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0|
 |`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0|
 |`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
@@ -626,7 +626,7 @@ Options
 |`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false|
 |`-d`, `--delay`|Mark messages with a given delay in seconds|0s|
 |`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-k`, `--encryption-key-name`|The public key name to encrypt payload||
 |`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload||
 |`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false|
@@ -686,7 +686,7 @@ Options
 |`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
 |`--auth-plugin`|Authentication plugin class name||
 |`--listener-name`|Listener name for the broker||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-h`, `--help`|Help message|false|
 |`-n`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0|
 |`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
@@ -720,7 +720,7 @@ Options
 |---|---|---|
 |`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
 |`--auth-plugin`|Authentication plugin class name||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-h`, `--help`|Help message|false|
 |`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0|
 |`-t`, `--num-topic`|The number of topics|1|
@@ -762,7 +762,7 @@ Options
 |`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0|
 |`--threads`|Number of threads writing|1|
 |`-w`, `--write-quorum`|Ledger write quorum|1|
-|`-zk`, `--zookeeperServers`|ZooKeeper connection string||
+|`-md`, `--metadata-store`|Metadata store service URL. For example: zk:my-zk:2181||
 
 
 ### `monitor-brokers`
@@ -839,8 +839,10 @@ $ pulsar-perf transaction options
 
 |Flag|Description|Default|
 |---|---|---|
+`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|N/A
+`--auth-plugin`|Authentication plugin class name.|N/A
 `-au`, `--admin-url`|Pulsar admin URL.|N/A
-`--conf-file`|Configuration file.|N/A
+`-cf`, `--conf-file`|Configuration file.|N/A
 `-h`, `--help`|Help messages.|N/A
 `-c`, `--max-connections`|Maximum number of TCP connections to a single broker.|100
 `-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers. |1
diff --git a/site2/website-next/versioned_docs/version-2.2.0/standalone-docker.md b/site2/website-next/versioned_docs/version-2.2.0/standalone-docker.md
index 7ee20c2..f636d2d 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/standalone-docker.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/standalone-docker.md
@@ -22,6 +22,7 @@ A few things to note about this command:
  * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every 
 time the container is restarted. For details on the volumes you can use `docker volume inspect <sourcename>`
  * For Docker on Windows make sure to configure it to use Linux containers
+ * By default, the Docker container runs as UID 10000 and GID 0. You need to ensure that the mounted volumes give write permission to either UID 10000 or GID 0. Note that UID 10000 is arbitrary, so it is recommended to make these mounts writable for the root group (GID 0).
 
 If you start Pulsar successfully, you will see `INFO`-level log messages like this:
 
diff --git a/site2/website-next/versioned_docs/version-2.2.1/admin-api-clusters.md b/site2/website-next/versioned_docs/version-2.2.1/admin-api-clusters.md
index ccd3ebb..3c2f661 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/admin-api-clusters.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/admin-api-clusters.md
@@ -103,8 +103,8 @@ Here's an example cluster metadata initialization command:
 
 bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
diff --git a/site2/website-next/versioned_docs/version-2.2.1/administration-proxy.md b/site2/website-next/versioned_docs/version-2.2.1/administration-proxy.md
index 3cef937..5228b9a 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/administration-proxy.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/administration-proxy.md
@@ -8,22 +8,9 @@ Pulsar proxy is an optional gateway. Pulsar proxy is used when direct connection
 
 ## Configure the proxy
 
-Before using the proxy, you need to configure it with the brokers addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration. 
+Before using the proxy, you need to configure it with the broker addresses in the cluster. You can configure the broker URLs in the proxy configuration, or configure the proxy to connect directly using service discovery.
 
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-
-```properties
-
-zookeeperServers=zk-0,zk-1,zk-2
-configurationStoreServers=zk-0:2184,zk-remote:2184
-
-```
-
-> To use service discovery, you need to open the network ACLs, so the proxy can connects to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, it is not secure to use service discovery. Because if the network ACL is open, when someone compromises a proxy, they have full access to ZooKeeper. 
+> In a production environment, service discovery is not recommended.
 
 ### Use broker URLs
 
@@ -57,6 +44,21 @@ The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651
 
 Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
 
+### Use service discovery
+
+Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
+
+```properties
+
+metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
+configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
+
+```
+
+> To use service discovery, you need to open the network ACLs so that the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
+
+> However, service discovery is not secure: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper. 
+
 ## Start the proxy
 
 To start the proxy:
@@ -64,7 +66,9 @@ To start the proxy:
 ```bash
 
 $ cd /path/to/pulsar/directory
-$ bin/pulsar proxy
+$ bin/pulsar proxy \
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
diff --git a/site2/website-next/versioned_docs/version-2.2.1/administration-zk-bk.md b/site2/website-next/versioned_docs/version-2.2.1/administration-zk-bk.md
index e5f9688..c0aec95 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/administration-zk-bk.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/administration-zk-bk.md
@@ -147,27 +147,19 @@ $ bin/pulsar-daemon start configuration-store
 
 ### ZooKeeper configuration
 
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
+In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation:
+* The `conf/zookeeper.conf` file handles the configuration for local ZooKeeper.
+* The `conf/global-zookeeper.conf` file handles the configuration for the configuration store.
+See [parameters](reference-configuration.md#zookeeper) for more details.
 
-#### Local ZooKeeper
+#### Configure batching operations
+Using batching operations reduces the remote procedure call (RPC) traffic between the ZooKeeper client and servers. It also reduces the number of write transactions, because each batching operation corresponds to a single ZooKeeper transaction containing multiple read and write operations.
 
-The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
+The following figure shows a basic benchmark of how many batched read/write operations can be requested to ZooKeeper in one second:
 
-|Name|Description|Default|
-|---|---|---|
-|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort|  The port on which the ZooKeeper server listens for connections. |2181|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
-|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables. Read this guide before enabling auto purge. |1|
-|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
+![Zookeeper batching benchmark](/assets/zookeeper-batching.png)
 
-
-#### Configuration Store
-
-The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for configuration store. The table below shows the available parameters:
+To enable batching operations, set the [`metadataStoreBatchingEnabled`](reference-configuration.md#broker) parameter to `true` on the broker side.
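To illustrate, a minimal `conf/broker.conf` fragment could look like the following. Only `metadataStoreBatchingEnabled` is taken from this document; the tuning parameters below are assumptions based on common broker defaults, so verify their names against your Pulsar version before relying on them:

```properties
# Enable batching of metadata store operations (from this doc).
metadataStoreBatchingEnabled=true

# Optional tuning knobs -- names assumed, check your version's broker.conf:
# max delay before a batch is flushed, max operations per batch, max batch size.
metadataStoreBatchingMaxDelayMillis=5
metadataStoreBatchingMaxOperations=1000
metadataStoreBatchingMaxSizeKb=128
```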
 
 
 ## BookKeeper
@@ -194,6 +186,12 @@ You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](referenc
 
 The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
 
+:::note
+
+Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
+
+:::
+
 ```properties
 
 # Change to point to journal disk mount point
@@ -205,6 +203,9 @@ ledgerDirectories=data/bookkeeper/ledgers
 # Point to local ZK quorum
 zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
 
+# It is recommended to set this parameter. Otherwise, BookKeeper cannot start normally in certain environments (for example, Huawei Cloud).
+advertisedAddress=
+
 ```
 
 To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
diff --git a/site2/website-next/versioned_docs/version-2.2.1/client-libraries-cpp.md b/site2/website-next/versioned_docs/version-2.2.1/client-libraries-cpp.md
index 958861a..b67f6d9 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/client-libraries-cpp.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/client-libraries-cpp.md
@@ -14,7 +14,18 @@ Pulsar C++ client is supported on **Linux** ,**MacOS** and **Windows** platforms
 
 [Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp).
 
-## System requirements
+
+## Linux
+
+:::note
+
+You can choose one of the following installation methods based on your needs: Compilation, Install RPM, or Install Debian.
+
+:::
+
+### Compilation 
+
+#### System requirements
 
 You need to install the following components before using the C++ client:
 
@@ -24,10 +35,6 @@ You need to install the following components before using the C++ client:
 * [libcurl](https://curl.se/libcurl/)
 * [Google Test](https://github.com/google/googletest)
 
-## Linux
-
-### Compilation 
-
 1. Clone the Pulsar repository.
 
 ```shell
@@ -144,7 +151,14 @@ $ rpm -ivh apache-pulsar-client*.rpm
 
 ```
 
-After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory.
+After you install the RPM successfully, the Pulsar libraries are in the `/usr/lib` directory, for example:
+
+```bash
+
+lrwxrwxrwx 1 root root 18 Dec 30 22:21 libpulsar.so -> libpulsar.so.2.9.1
+lrwxrwxrwx 1 root root 23 Dec 30 22:21 libpulsarnossl.so -> libpulsarnossl.so.2.9.1
+
+```
 
 :::note
 
@@ -152,6 +166,15 @@ If you get the error that `libpulsar.so: cannot open shared object file: No such
 
 :::
 
+2. Install GCC and g++ using the following commands; otherwise, errors may occur when installing Node.js.
+
+```bash
+
+$ sudo yum -y install gcc automake autoconf libtool make
+$ sudo yum -y install gcc-c++
+
+```
+
 ### Install Debian
 
 1. Download a Debian package from the links in the table. 
@@ -344,108 +367,6 @@ pulsar+ssl://pulsar.us-west.example.com:6651
 
 ```
 
-## Create a consumer
-
-To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
-- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
-- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.
-
-### Blocking example
-
-The benefit of this approach is that it is the simplest code. Simply keeps calling `receive(msg)` which blocks until a message is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-
-#include <pulsar/Client.h>
-
-using namespace pulsar;
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    Message msg;
-    int ctr = 0;
-    // consume 100 messages
-    while (ctr < 100) {
-        consumer.receive(msg);
-        std::cout << "Received: " << msg
-            << "  with payload '" << msg.getDataAsString() << "'" << std::endl;
-
-        consumer.acknowledge(msg);
-        ctr++;
-    }
-
-    std::cout << "Finished consuming synchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
-### Consumer with a message listener
-
-You can avoid  running a loop with blocking calls with an event based style by using a message listener which is invoked for each message that is received.
-
-This example starts a subscription at the earliest offset and consumes 100 messages.
-
-```c++
-
-#include <pulsar/Client.h>
-#include <atomic>
-#include <thread>
-
-using namespace pulsar;
-
-std::atomic<uint32_t> messagesReceived;
-
-void handleAckComplete(Result res) {
-    std::cout << "Ack res: " << res << std::endl;
-}
-
-void listener(Consumer consumer, const Message& msg) {
-    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
-    messagesReceived++;
-    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
-}
-
-int main() {
-    Client client("pulsar://localhost:6650");
-
-    Consumer consumer;
-    ConsumerConfiguration config;
-    config.setMessageListener(listener);
-    config.setSubscriptionInitialPosition(InitialPositionEarliest);
-    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
-    if (result != ResultOk) {
-        std::cout << "Failed to subscribe: " << result << std::endl;
-        return -1;
-    }
-
-    // wait for 100 messages to be consumed
-    while (messagesReceived < 100) {
-        std::this_thread::sleep_for(std::chrono::milliseconds(100));
-    }
-
-    std::cout << "Finished consuming asynchronously!" << std::endl;
-
-    client.close();
-    return 0;
-}
-
-```
-
 ## Create a producer
 
 To use Pulsar as a producer, you need to create a producer on the C++ client. There are two main ways of using a producer:
@@ -579,6 +500,142 @@ producerConf.setLazyStartPartitionedProducers(true);
 
 ```
 
+### Enable chunking
+
+Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side. 
+
+The message chunking feature is OFF by default. The following is an example of how to enable message chunking when creating a producer.
+
+```c++
+
+ProducerConfiguration conf;
+conf.setBatchingEnabled(false);
+conf.setChunkingEnabled(true);
+Producer producer;
+client.createProducer("my-topic", conf, producer);
+
+```
+
+> **Note:** To enable chunking, you must also disable batching (call `setBatchingEnabled(false)`) at the same time.
+
+## Create a consumer
+
+To use Pulsar as a consumer, you need to create a consumer on the C++ client. There are two main ways of using the consumer:
+- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
+- [Non-blocking](#consumer-with-a-message-listener) (event based) style: using a message listener.
+
+### Blocking example
+
+The benefit of this approach is that it is the simplest code: it simply keeps calling `receive(msg)`, which blocks until a message is received.
+
+This example starts a subscription at the earliest offset and consumes 100 messages.
+
+```c++
+
+#include <pulsar/Client.h>
+
+using namespace pulsar;
+
+int main() {
+    Client client("pulsar://localhost:6650");
+
+    Consumer consumer;
+    ConsumerConfiguration config;
+    config.setSubscriptionInitialPosition(InitialPositionEarliest);
+    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
+    if (result != ResultOk) {
+        std::cout << "Failed to subscribe: " << result << std::endl;
+        return -1;
+    }
+
+    Message msg;
+    int ctr = 0;
+    // consume 100 messages
+    while (ctr < 100) {
+        consumer.receive(msg);
+        std::cout << "Received: " << msg
+            << "  with payload '" << msg.getDataAsString() << "'" << std::endl;
+
+        consumer.acknowledge(msg);
+        ctr++;
+    }
+
+    std::cout << "Finished consuming synchronously!" << std::endl;
+
+    client.close();
+    return 0;
+}
+
+```
+
+### Consumer with a message listener
+
+You can avoid running a loop with blocking calls by using an event-based style: a message listener is invoked for each message that is received.
+
+This example starts a subscription at the earliest offset and consumes 100 messages.
+
+```c++
+
+#include <pulsar/Client.h>
+#include <atomic>
+#include <thread>
+
+using namespace pulsar;
+
+std::atomic<uint32_t> messagesReceived;
+
+void handleAckComplete(Result res) {
+    std::cout << "Ack res: " << res << std::endl;
+}
+
+void listener(Consumer consumer, const Message& msg) {
+    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
+    messagesReceived++;
+    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
+}
+
+int main() {
+    Client client("pulsar://localhost:6650");
+
+    Consumer consumer;
+    ConsumerConfiguration config;
+    config.setMessageListener(listener);
+    config.setSubscriptionInitialPosition(InitialPositionEarliest);
+    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
+    if (result != ResultOk) {
+        std::cout << "Failed to subscribe: " << result << std::endl;
+        return -1;
+    }
+
+    // wait for 100 messages to be consumed
+    while (messagesReceived < 100) {
+        std::this_thread::sleep_for(std::chrono::milliseconds(100));
+    }
+
+    std::cout << "Finished consuming asynchronously!" << std::endl;
+
+    client.close();
+    return 0;
+}
+
+```
+
+### Configure chunking
+
+You can limit the maximum number of chunked messages a consumer maintains concurrently by calling the `setMaxPendingChunkedMessage` and `setAutoAckOldestChunkedMessageOnQueueFull` methods. When the threshold is reached, the consumer drops pending messages either by silently acknowledging them or by asking the broker to redeliver them later.
+
+The following is an example of how to configure message chunking.
+
+```c++
+
+ConsumerConfiguration conf;
+conf.setAutoAckOldestChunkedMessageOnQueueFull(true);
+conf.setMaxPendingChunkedMessage(100);
+Consumer consumer;
+client.subscribe("my-topic", "my-sub", conf, consumer);
+
+```
+
 ## Enable authentication in connection URLs
 If you use TLS authentication when connecting to Pulsar, you need to add `ssl` in the connection URLs, and the default port is `6651`. The following is an example.
 
diff --git a/site2/website-next/versioned_docs/version-2.2.1/client-libraries-python.md b/site2/website-next/versioned_docs/version-2.2.1/client-libraries-python.md
index e601730..666971b 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/client-libraries-python.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/client-libraries-python.md
@@ -50,8 +50,8 @@ Installation via PyPi is available for the following Python versions:
 
 Platform | Supported Python versions
 :--------|:-------------------------
-MacOS <br />  10.13 (High Sierra), 10.14 (Mojave) <br /> | 2.7, 3.7
-Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8
+MacOS <br />  10.13 (High Sierra), 10.14 (Mojave) <br /> | 2.7, 3.7, 3.8, 3.9
+Linux | 2.7, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9
 
 ### Install from source
 
@@ -112,7 +112,7 @@ while True:
         print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
         # Acknowledge successful processing of the message
         consumer.acknowledge(msg)
-    except:
+    except Exception:
         # Message failed to be processed
         consumer.negative_acknowledge(msg)
 
@@ -183,7 +183,7 @@ while True:
         print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
         # Acknowledge successful processing of the message
         consumer.acknowledge(msg)
-    except:
+    except Exception:
         # Message failed to be processed
         consumer.negative_acknowledge(msg)
 client.close()
@@ -333,7 +333,7 @@ while True:
         print("Received message a={} b={} c={}".format(ex.a, ex.b, ex.c))
         # Acknowledge successful processing of the message
         consumer.acknowledge(msg)
-    except:
+    except Exception:
         # Message failed to be processed
         consumer.negative_acknowledge(msg)
 
diff --git a/site2/website-next/versioned_docs/version-2.2.1/client-libraries.md b/site2/website-next/versioned_docs/version-2.2.1/client-libraries.md
index ab5b7c4..536cd0c 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/client-libraries.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/client-libraries.md
@@ -6,16 +6,25 @@ sidebar_label: "Overview"
 
 Pulsar supports the following client libraries:
 
-- [Java client](client-libraries-java)
-- [Go client](client-libraries-go)
-- [Python client](client-libraries-python)
-- [C++ client](client-libraries-cpp)
-- [Node.js client](client-libraries-node)
-- [WebSocket client](client-libraries-websocket)
-- [C# client](client-libraries-dotnet)
+|Language|Documentation|Release note|Code repo|
+|---|---|---|---|
+|Java |- [User doc](client-libraries-java) <br /><br />- [API doc](https://pulsar.apache.org/api/client/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client)|
+|C++ |- [User doc](client-libraries-cpp) <br /><br />- [API doc](https://pulsar.apache.org/api/cpp/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp)|
+|Python |- [User doc](client-libraries-python) <br /><br />- [API doc](https://pulsar.apache.org/api/python/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python)|
+|WebSocket|[User doc](client-libraries-websocket)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-websocket)|
+|Go|[User doc](client-libraries-go)|[Here](https://github.com/apache/pulsar-client-go/blob/master/CHANGELOG)|[Here](https://github.com/apache/pulsar-client-go)|
+|Node.js|[User doc](client-libraries-node)|[Here](https://github.com/apache/pulsar-client-node/releases)|[Here](https://github.com/apache/pulsar-client-node)|
+|C# |[User doc](client-libraries-dotnet)|[Here](https://github.com/apache/pulsar-dotpulsar/blob/master/CHANGELOG)|[Here](https://github.com/apache/pulsar-dotpulsar)|
+
+:::note
+
+- The code repos of the **Java, C++, Python,** and **WebSocket** clients are hosted in the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are released with Pulsar, so their release notes are part of the [Pulsar release notes](https://pulsar.apache.org/release-notes/).
+- The code repos of **Go, Node.js,** and **C#** clients are hosted outside of the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are not released with Pulsar, so they have independent release notes.
+
+:::
 
 ## Feature matrix
-Pulsar client feature matrix for different languages is listed on [Pulsar Feature Matrix (Client and Function)](https://github.com/apache/pulsar/wiki/PIP-108%3A-Pulsar-Feature-Matrix-%28Client-and-Function%29) page.
+The Pulsar client feature matrix for different languages is listed on the [Pulsar Feature Matrix (Client and Function)](https://docs.google.com/spreadsheets/d/1YHYTkIXR8-Ql103u-IMI18TXLlGStK8uJjDsOOA0T20/edit#gid=1784579914) page.
 
 ## Third-party clients
 
diff --git a/site2/website-next/versioned_docs/version-2.2.1/concepts-architecture-overview.md b/site2/website-next/versioned_docs/version-2.2.1/concepts-architecture-overview.md
index 8fe0717..a2b024d 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/concepts-architecture-overview.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/concepts-architecture-overview.md
@@ -47,6 +47,9 @@ Clusters can replicate amongst themselves using [geo-replication](concepts-repli
 
 The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and [BookKeeper metadata store](https://bookkee [...]
 
+> Pulsar also supports other metadata backend services, including [etcd](https://etcd.io/) and [RocksDB](http://rocksdb.org/) (for standalone Pulsar only). 
+
+
 In a Pulsar instance:
 
 * A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
@@ -128,9 +131,10 @@ Architecturally, the Pulsar proxy gets all the information it requires from ZooK
 
 ```bash
 
+$ cd /path/to/pulsar/directory
 $ bin/pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk-2 \
-  --configuration-store-servers zk-0,zk-1,zk-2
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
diff --git a/site2/website-next/versioned_docs/version-2.2.1/concepts-messaging.md b/site2/website-next/versioned_docs/version-2.2.1/concepts-messaging.md
index f23fae9..c370181 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/concepts-messaging.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/concepts-messaging.md
@@ -108,29 +108,50 @@ To avoid redelivering acknowledged messages in a batch to the consumer, Pulsar i
 By default, batch index acknowledgement is disabled (`acknowledgmentAtBatchIndexLevelEnabled=false`). You can enable batch index acknowledgement by setting the `acknowledgmentAtBatchIndexLevelEnabled` parameter to `true` at the broker side. Enabling batch index acknowledgement results in more memory overheads. 
 
 ### Chunking
-Before you enable chunking, read the following instructions.
-- Batching and chunking cannot be enabled simultaneously. To enable chunking, you must disable batching in advance.
-- Chunking is only supported for persisted topics.
-- Chunking is only supported for Exclusive and Failover subscription types.
+Message chunking enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side.
 
-When chunking is enabled (`chunkingEnabled=true`), if the message size is greater than the allowed maximum publish-payload size, the producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. At the broker side, the chunked messages are stored in the managed-ledger in the same way as that of ordinary messages. The only difference is that the consumer needs to buffer the chunked messages and combines them into [...]
+With message chunking enabled, when the size of a message exceeds the allowed maximum payload size (the broker's `maxMessageSize` parameter), the workflow of messaging is as follows:
+1. The producer splits the original message into chunked messages and publishes them with chunked metadata to the broker separately and in order. 
+2. The broker stores the chunked messages in one managed-ledger in the same way as that of ordinary messages, and it uses the `chunkedMessageRate` parameter to record chunked message rate on the topic.
+3. The consumer buffers the chunked messages and aggregates them into the receiver queue when it receives all the chunks of a message.
+4. The client consumes the aggregated message from the receiver queue. 
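As a rough sketch of step 1 above, the producer-side split can be modeled as follows (an illustration only, not the Pulsar client's actual implementation; `max_message_size` stands in for the broker's `maxMessageSize` limit):

```python
import math

def split_into_chunks(payload: bytes, max_message_size: int) -> list:
    """Split a payload into ordered chunks, each at most max_message_size bytes."""
    num_chunks = math.ceil(len(payload) / max_message_size)
    return [payload[i * max_message_size:(i + 1) * max_message_size]
            for i in range(num_chunks)]

# A 10 MB payload with a 4 MB limit is published as 3 ordered chunks.
payload = b"x" * (10 * 1024 * 1024)
chunks = split_into_chunks(payload, 4 * 1024 * 1024)
```

The consumer side performs the inverse: it buffers the chunks of each message and concatenates them once all chunks have arrived.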
 
-The consumer consumes the chunked messages and buffers them until the consumer receives all the chunks of a message. And then the consumer stitches chunked messages together and places them into the receiver-queue. Clients consume messages from the receiver-queue. Once the consumer consumes the entire large message and acknowledges it, the consumer internally sends acknowledgement of all the chunk messages associated to that large message. You can set the `maxPendingChunkedMessage` param [...]
+**Limitations:** 
+- Chunking is only available for persisted topics.
+- Chunking is only available for the exclusive and failover subscription types.
+- Chunking cannot be enabled simultaneously with batching.
 
-The broker does not require any changes to support chunking for non-shared subscription. The broker only uses `chunkedMessageRate` to record chunked message rate on the topic.
+#### Handle consecutive chunked messages with one ordered consumer
 
-#### Handle chunked messages with one producer and one ordered consumer
-
-As shown in the following figure, when a topic has one producer which publishes large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, combi [...]
+The following figure shows a topic with one producer which publishes a large message payload in chunked messages along with regular non-chunked messages. The producer publishes message M1 in three chunks labeled M1-C1, M1-C2 and M1-C3. The broker stores all the three chunked messages in the managed-ledger and dispatches them to the ordered (exclusive/failover) consumer in the same order. The consumer buffers all the chunked messages in memory until it receives all the chunked messages, a [...]
 
 ![](/assets/chunking-01.png)
 
-#### Handle chunked messages with multiple producers and one ordered consumer
+#### Handle interwoven chunked messages with one ordered consumer
 
-When multiple publishers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different publishers in the same managed-ledger. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of the specific message are still in order but might not be consecutive in the managed-ledger. This brings some memory pressure to the c [...]
+When multiple producers publish chunked messages into a single topic, the broker stores all the chunked messages coming from different producers in the same managed-ledger. The chunked messages in the managed-ledger can be interwoven with each other. As shown below, Producer 1 publishes message M1 in three chunks M1-C1, M1-C2 and M1-C3. Producer 2 publishes message M2 in three chunks M2-C1, M2-C2 and M2-C3. All chunked messages of the specific message are still in order but might not be  [...]
 
 ![](/assets/chunking-02.png)
 
+:::note
+
+In this case, interwoven chunked messages may bring some memory pressure to the consumer because the consumer keeps a separate buffer for each large message to aggregate all its chunks in one message. You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` parameter. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later, opt [...]
+
+:::
+
+#### Enable message chunking
+
+**Prerequisite:** Disable batching by setting the `enableBatching` parameter to `false`.
+
+The message chunking feature is OFF by default. 
+To enable message chunking, set the `chunkingEnabled` parameter to `true` when creating a producer.
+
+:::note
+
+If the consumer fails to receive all chunks of a message within a specified time period, the incomplete chunks expire. The default expiry time is 1 minute. For more information about the `expireTimeOfIncompleteChunkedMessage` parameter, refer to [org.apache.pulsar.client.api](https://pulsar.apache.org/api/client/).
+
+:::
+
 ## Consumers
 
 A consumer is a process that attaches to a topic via a subscription and then receives messages.
@@ -232,9 +253,9 @@ Use the following API to enable `Negative Redelivery Backoff`.
 
 ```java
 
-consumer.negativeAckRedeliveryBackoff(NegativeAckRedeliveryExponentialBackoff.builder()
-        .minNackTimeMs(1000)
-        .maxNackTimeMs(60 * 1000)
+consumer.negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+        .minDelayMs(1000)
+        .maxDelayMs(60 * 1000)
         .build())
 
 ```
@@ -245,6 +266,34 @@ The acknowledgement timeout mechanism allows you to set a time range during whic
 
 You can configure the acknowledgement timeout mechanism to redeliver the message if it is not acknowledged after `ackTimeout` or to execute a timer task to check the acknowledgement timeout messages during every `ackTimeoutTickTime` period.
 
+You can also use the redelivery backoff mechanism to redeliver messages with different delays, based on the number of times the message has been retried.
+
+If you want to use redelivery backoff, you can use the following API.
+
+```java
+
+consumer.ackTimeout(10, TimeUnit.SECONDS)
+        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+        .minDelayMs(1000)
+        .maxDelayMs(60000)
+        .multiplier(2).build())
+
+```
+
+The message redelivery behavior should be as follows.
+
+Redelivery count | Redelivery delay
+:--------------------|:-----------
+1 | 10 + 1 seconds
+2 | 10 + 2 seconds
+3 | 10 + 4 seconds
+4 | 10 + 8 seconds
+5 | 10 + 16 seconds
+6 | 10 + 32 seconds
+7 | 10 + 60 seconds
+8 | 10 + 60 seconds
+
 :::note
 
 - If batching is enabled, all messages in one batch are redelivered to the consumer.  
@@ -315,6 +364,23 @@ Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
 
 ```
 
+By default, there is no subscription during a DLQ topic creation. Without a just-in-time subscription to the DLQ topic, you may lose messages. To automatically create an initial subscription for the DLQ, you can specify the `initialSubscriptionName` parameter. If this parameter is set but the broker's `allowAutoSubscriptionCreation` is disabled, the DLQ producer will fail to be created.
+
+```java
+
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+                .topic(topic)
+                .subscriptionName("my-subscription")
+                .subscriptionType(SubscriptionType.Shared)
+                .deadLetterPolicy(DeadLetterPolicy.builder()
+                      .maxRedeliverCount(maxRedeliveryCount)
+                      .deadLetterTopic("your-topic-name")
+                      .initialSubscriptionName("init-sub")
+                      .build())
+                .subscribe();
+
+```
+
 Dead letter topic depends on message redelivery. Messages are redelivered either due to [acknowledgement timeout](#acknowledgement-timeout) or [negative acknowledgement](#negative-acknowledgement). If you are going to use negative acknowledgement on a message, make sure it is negatively acknowledged before the acknowledgement timeout. 
 
 :::note
diff --git a/site2/website-next/versioned_docs/version-2.2.1/cookbooks-deduplication.md b/site2/website-next/versioned_docs/version-2.2.1/cookbooks-deduplication.md
index e71e6f4..a14a3c3 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/cookbooks-deduplication.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/cookbooks-deduplication.md
@@ -31,6 +31,7 @@ Parameter | Description | Default
 `brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false`
 `brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
 `brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
+`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It works together with `brokerDeduplicationEntriesInterval`. |`120`
 `brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)
 
 ### Set default value at the broker-level
diff --git a/site2/website-next/versioned_docs/version-2.2.1/deploy-bare-metal-multi-cluster.md b/site2/website-next/versioned_docs/version-2.2.1/deploy-bare-metal-multi-cluster.md
index 9dd2526..875b75d 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/deploy-bare-metal-multi-cluster.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/deploy-bare-metal-multi-cluster.md
@@ -226,8 +226,8 @@ You can initialize this metadata using the [`initialize-cluster-metadata`](refer
 
 $ bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
@@ -308,7 +308,7 @@ Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper b
 
 You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
 
-The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those  [...]
+The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`metadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the local quorum and the [`configurationMetadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same c [...]
 
 You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata (especially when you use a different port from default) of the cluster.
 
@@ -317,10 +317,10 @@ The following is an example configuration:
 ```properties
 
 # Local ZooKeeper servers
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+metadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
 
 # Configuration store quorum connection string.
-configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+configurationMetadataStoreUrl=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
 
 clusterName=us-west
 
diff --git a/site2/website-next/versioned_docs/version-2.2.1/deploy-monitoring.md b/site2/website-next/versioned_docs/version-2.2.1/deploy-monitoring.md
index 95ccdd6..adf3587 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/deploy-monitoring.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/deploy-monitoring.md
@@ -51,7 +51,7 @@ http://$GLOBAL_ZK_SERVER:8001/metrics
 
 ```
 
-The default port of local ZooKeeper is `8000` and the default port of configuration store is `8001`. You can change the default port of local ZooKeeper and configuration store by specifying system property `stats_server_port`.
+The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
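
For example, to expose local ZooKeeper stats on the default port, the relevant `conf/zookeeper.conf` fragment might look like this (an illustrative sketch; the Prometheus provider class is the standard one shipped with ZooKeeper 3.6+):

```properties
# Enable the Prometheus metrics provider and choose the stats port
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpPort=8000
```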
 
 ### BookKeeper stats
 
diff --git a/site2/website-next/versioned_docs/version-2.2.1/develop-binary-protocol.md b/site2/website-next/versioned_docs/version-2.2.1/develop-binary-protocol.md
index fa03383..63e43dd 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/develop-binary-protocol.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/develop-binary-protocol.md
@@ -240,8 +240,10 @@ Parameters:
 ##### Command Send
 
 Command `Send` is used to publish a new message within the context of an
-already existing producer. This command is used in a frame that includes command
-as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.
+already existing producer. If a producer has not yet been created for the
+connection, the broker will terminate the connection. This command is used
+in a frame that includes command as well as message payload, for which the
+complete format is specified in the [payload commands](#payload-commands) section.
 
 ```protobuf
 
diff --git a/site2/website-next/versioned_docs/version-2.2.1/reference-cli-tools.md b/site2/website-next/versioned_docs/version-2.2.1/reference-cli-tools.md
index 0c8aea1..32b23e9 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/reference-cli-tools.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/reference-cli-tools.md
@@ -208,7 +208,7 @@ Options
 |`-c` , `--cluster`|Cluster name||
 |`-cms` , `--configuration-metadata-store`|The configuration metadata store quorum connection string||
 |`--existing-bk-metadata-service-uri`|The metadata service URI of the existing BookKeeper cluster that you want to use||
-|`-h` , `--help`|Cluster name|false|
+|`-h` , `--help`|Help message|false|
 |`--initial-num-stream-storage-containers`|The number of storage containers of BookKeeper stream storage|16|
 |`--initial-num-transaction-coordinators`|The number of transaction coordinators assigned in a cluster|16|
 |`-uw` , `--web-service-url`|The web service URL for the new cluster||
@@ -233,16 +233,16 @@ Options
 
 |Flag|Description|Default|
 |---|---|---|
-|`--configuration-store`|Configuration store connection string||
-|`-zk` , `--zookeeper-servers`|Local ZooKeeper connection string||
+|`-cms`, `--configuration-metadata-store`|Configuration metadata store connection string||
+|`-md` , `--metadata-store`|Metadata store service URL||
 
 Example
 
 ```bash
 
 $ PULSAR_PROXY_CONF=/path/to/proxy.conf pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk2 \
-  --configuration-store zk-0,zk-1,zk-2
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
@@ -562,7 +562,7 @@ Options
 |`-bw`, `--busy-wait`|Enable or disable Busy-Wait on the Pulsar client|false|
 |`-v`, `--encryption-key-value-file`|The file which contains the private key to decrypt payload||
 |`-h`, `--help`|Help message|false|
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-m`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0|
 |`-e`, `--expire_time_incomplete_chunked_messages`|The expiration time for incomplete chunk messages (in milliseconds)|0|
 |`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
@@ -626,7 +626,7 @@ Options
 |`-ch`, `--chunking`|Split the message and publish in chunks if the message size is larger than allowed max size|false|
 |`-d`, `--delay`|Mark messages with a given delay in seconds|0s|
 |`-z`, `--compression`|Compress messages’ payload. Possible values are NONE, LZ4, ZLIB, ZSTD or SNAPPY.||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-k`, `--encryption-key-name`|The public key name to encrypt payload||
 |`-v`, `--encryption-key-value-file`|The file which contains the public key to encrypt payload||
 |`-ef`, `--exit-on-failure`|Exit from the process on publish failure|false|
@@ -686,7 +686,7 @@ Options
 |`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
 |`--auth-plugin`|Authentication plugin class name||
 |`--listener-name`|Listener name for the broker||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-h`, `--help`|Help message|false|
 |`-n`, `--num-messages`|Number of messages to consume in total. If the value is equal to or smaller than 0, it keeps consuming messages.|0|
 |`-c`, `--max-connections`|Max number of TCP connections to a single broker|100|
@@ -720,7 +720,7 @@ Options
 |---|---|---|
 |`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.||
 |`--auth-plugin`|Authentication plugin class name||
-|`--conf-file`|Configuration file||
+|`-cf`, `--conf-file`|Configuration file||
 |`-h`, `--help`|Help message|false|
 |`-m`, `--num-messages`|Number of messages to publish in total. If this value is less than or equal to 0, it keeps publishing messages.|0|
 |`-t`, `--num-topic`|The number of topics|1|
@@ -762,7 +762,7 @@ Options
 |`-time`, `--test-duration`|Test duration (in seconds). If this value is less than or equal to 0, it keeps publishing messages.|0|
 |`--threads`|Number of threads writing|1|
 |`-w`, `--write-quorum`|Ledger write quorum|1|
-|`-zk`, `--zookeeperServers`|ZooKeeper connection string||
+|`-md`, `--metadata-store`|Metadata store service URL. For example: zk:my-zk:2181||
 
 
 ### `monitor-brokers`
@@ -839,8 +839,10 @@ $ pulsar-perf transaction options
 
 |Flag|Description|Default|
 |---|---|---|
+`--auth-params`|Authentication parameters, whose format is determined by the implementation of method `configure` in authentication plugin class. For example, `key1:val1,key2:val2` or `{"key1":"val1","key2":"val2"}`.|N/A
+`--auth-plugin`|Authentication plugin class name.|N/A
 `-au`, `--admin-url`|Pulsar admin URL.|N/A
-`--conf-file`|Configuration file.|N/A
+`-cf`, `--conf-file`|Configuration file.|N/A
 `-h`, `--help`|Help messages.|N/A
 `-c`, `--max-connections`|Maximum number of TCP connections to a single broker.|100
 `-ioThreads`, `--num-io-threads`|Set the number of threads to be used for handling connections to brokers. |1
diff --git a/site2/website-next/versioned_docs/version-2.2.1/sql-deployment-configurations.md b/site2/website-next/versioned_docs/version-2.2.1/sql-deployment-configurations.md
index 9e7ff5a..10fb47a 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/sql-deployment-configurations.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/sql-deployment-configurations.md
@@ -115,20 +115,15 @@ pulsar.zookeeper-uri=localhost1,localhost2:2181
 
 ```
 
-A frequently asked question is why my latest message not showing up when querying with Pulsar SQL.
-It's not a bug but controlled by a setting, by default BookKeeper LAC only advanced when subsequent entries are added.
-If there is no subsequent entries added, the last entry written will not be visible to readers until the ledger is closed.
-This is not a problem for Pulsar which uses managed ledger, but Pulsar SQL directly read from BookKeeper ledger.
-We can add following setting to change the behavior:
-In Broker config, set
-bookkeeperExplicitLacIntervalInMills > 0
-bookkeeperUseV2WireProtocol=false
-
-And in Presto config, set
-pulsar.bookkeeper-explicit-interval > 0
-pulsar.bookkeeper-use-v2-protocol=false
-
-However,keep in mind that using bk V3 protocol will introduce additional GC overhead to BK as it uses Protobuf.
+**Note: by default, Pulsar SQL does not get the last message in a topic**. This is by design and controlled by configuration. By default, the BookKeeper LAC only advances when subsequent entries are added. If no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar itself, which uses the managed ledger, but Pulsar SQL reads directly from the BookKeeper ledger. 
+
+If you want to get the last message in a topic, set the following configurations:
+
+1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 in `broker.conf` or `standalone.conf`.
+   
+2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.
+
+However, using BookKeeper V3 protocol introduces additional GC overhead to BK as it uses Protobuf.
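
Put together, the two configuration changes above might look like the following (illustrative fragments; the file names and the interval value of 1000 ms are examples, not mandated values):

```properties
# broker.conf (or standalone.conf): advance the LAC explicitly every second
bookkeeperExplicitLacIntervalInMills=1000

# Presto catalog configuration: read the explicit LAC over the V3 protocol
pulsar.bookkeeper-explicit-interval=1000
pulsar.bookkeeper-use-v2-protocol=false
```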
 
 ## Query data from existing Presto clusters
 
diff --git a/site2/website-next/versioned_docs/version-2.3.0/admin-api-clusters.md b/site2/website-next/versioned_docs/version-2.3.0/admin-api-clusters.md
index ccd3ebb..3c2f661 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/admin-api-clusters.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/admin-api-clusters.md
@@ -103,8 +103,8 @@ Here's an example cluster metadata initialization command:
 
 bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
diff --git a/site2/website-next/versioned_docs/version-2.3.0/administration-zk-bk.md b/site2/website-next/versioned_docs/version-2.3.0/administration-zk-bk.md
index c7867d1..f125f27 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/administration-zk-bk.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/administration-zk-bk.md
@@ -248,6 +248,9 @@ ledgerDirectories=data/bookkeeper/ledgers
 # Point to local ZK quorum
 zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
 
+# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
+advertisedAddress=
+
 ```
 
 To change the ZooKeeper root path used by BookKeeper, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
diff --git a/site2/website-next/versioned_docs/version-2.3.0/client-libraries-java.md b/site2/website-next/versioned_docs/version-2.3.0/client-libraries-java.md
index b8150e1..a0c4f98 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/client-libraries-java.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/client-libraries-java.md
@@ -4,9 +4,15 @@ title: Pulsar Java client
 sidebar_label: "Java"
 ---
 
-You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), and [readers](#reader) of messages and to perform [administrative tasks](admin-api-overview). The current Java client version is **@pulsar:version@**.
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
 
-All the methods in [producer](#producer), [consumer](#consumer), and [reader](#reader) of a Java client are thread-safe.
+
+You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), [readers](#reader) and [TableView](#tableview) of messages and to perform [administrative tasks](admin-api-overview). The current Java client version is **@pulsar:version@**.
+
+All the methods in [producer](#producer), [consumer](#consumer), [readers](#reader) and [TableView](#tableview) of a Java client are thread-safe.
 
 Javadoc for the Pulsar client is divided into two domains by package as follows.
 
@@ -168,6 +174,328 @@ You can set the client memory allocator configurations through Java properties.<
 
 ```
 
+### Cluster-level failover
+
+This chapter describes the concept, benefits, use cases, constraints, usage, and working principles of cluster-level failover. It contains the following sections:
+
+- [What is cluster-level failover?](#what-is-cluster-level-failover)
+
+  * [Concept of cluster-level failover](#concept-of-cluster-level-failover)
+   
+  * [Why use cluster-level failover?](#why-use-cluster-level-failover)
+
+  * [When to use cluster-level failover?](#when-to-use-cluster-level-failover)
+
+  * [When cluster-level failover is triggered?](#when-cluster-level-failover-is-triggered)
+
+  * [Why does cluster-level failover fail?](#why-does-cluster-level-failover-fail)
+
+  * [What are the limitations of cluster-level failover?](#what-are-the-limitations-of-cluster-level-failover)
+
+  * [What are the relationships between cluster-level failover and geo-replication?](#what-are-the-relationships-between-cluster-level-failover-and-geo-replication)
+  
+- [How to use cluster-level failover?](#how-to-use-cluster-level-failover)
+
+- [How does cluster-level failover work?](#how-does-cluster-level-failover-work)
+  
+> #### What is cluster-level failover
+
+This chapter helps you better understand the concept of cluster-level failover.
+
+> ##### Concept of cluster-level failover
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+Automatic cluster-level failover enables Pulsar clients to switch from a primary cluster to one or several backup clusters automatically and seamlessly when a failover event is detected, based on the detection policy configured by **users**. 
+
+![Automatic cluster-level failover](/assets/cluster-level-failover-1.png)
+
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+Controlled cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters. The switchover is manually set by **administrators**.
+
+![Controlled cluster-level failover](/assets/cluster-level-failover-2.png)
+
+</TabItem>
+
+</Tabs>
+````
+
+Once the primary cluster functions again, Pulsar clients can switch back to the primary cluster. Most of the time, users do not even notice the switchover and can keep using applications and services without interruptions or timeouts.
+
+> ##### Why use cluster-level failover?
+
+The cluster-level failover provides fault tolerance, continuous availability, and high availability together. It brings a number of benefits, including but not limited to:
+
+* Reduced cost: services can be switched and recovered automatically with no data loss.
+
+* Simplified management: businesses can operate on an “always-on” basis since no immediate user intervention is required.
+
+* Improved stability and robustness: it ensures continuous performance and minimizes service downtime. 
+
+> ##### When to use cluster-level failover?
+
+The cluster-level failover protects your environment in a number of ways, including but not limited to:
+
+* Disaster recovery: cluster-level failover can automatically and seamlessly transfer the production workload on a primary cluster to one or several backup clusters, which ensures minimum data loss and reduced recovery time.
+
+* Planned migration: if you want to migrate production workloads from an old cluster to a new cluster, you can improve the migration efficiency with cluster-level failover. For example, you can test whether the data migration goes smoothly in case of a failover event, identify possible issues and risks before the migration.
+
+> ##### When cluster-level failover is triggered?
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+Automatic cluster-level failover is triggered when Pulsar clients cannot connect to the primary cluster for a prolonged period of time. This can be caused by any number of reasons, including but not limited to: 
+
+* Network failure: internet connection is lost.
+
+* Power failure: shutdown time of a primary cluster exceeds time limits.
+
+* Service error: errors occur on a primary cluster (for example, the primary cluster does not function because of time limits).
+
+* Crashed storage space: the primary cluster does not have enough storage space, but the corresponding storage space on the backup server functions normally.
+
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+Controlled cluster-level failover is triggered when administrators set the switchover manually.
+
+</TabItem>
+
+</Tabs>
+````
+
+> ##### Why does cluster-level failover fail?
+
+The cluster-level failover does not succeed if the backup cluster is unreachable by active Pulsar clients. This can happen for many reasons, including but not limited to:
+
+* Power failure: the backup cluster is shut down or does not function normally. 
+
+* Crashed storage space: the primary and backup clusters do not have enough storage space. 
+
+* The failover is initiated, but no cluster can take over as the available cluster due to errors, while the primary cluster cannot provide service normally.
+
+* You manually initiate a switchover, but services cannot be switched to the backup cluster, so the system attempts to switch services back to the primary cluster.
+
+* Authentication or authorization fails between 1) the primary and backup clusters, or 2) two backup clusters.
+
+> ##### What are the limitations of cluster-level failover?
+
+Currently, cluster-level failover can perform probes to prevent data loss, but it cannot check the status of backup clusters. If backup clusters are not healthy, you cannot produce or consume data.
+
+> ##### What are the relationships between cluster-level failover and geo-replication?
+
+The cluster-level failover is an extension of [geo-replication](concepts-replication) that improves stability and robustness. The cluster-level failover depends on geo-replication, and they have some **differences** as follows.
+
+Influence |Cluster-level failover|Geo-replication
+|---|---|---
+Do administrators have heavy workloads?|No or maybe.<br /><br />- For the **automatic** cluster-level failover, the cluster switchover is triggered automatically based on the policies set by **users**.<br /><br />- For the **controlled** cluster-level failover, the switchover is triggered manually by **administrators**.|Yes.<br /><br />If a cluster fails, immediate administration intervention is required.|
+Result in data loss?|No.<br /><br />For both **automatic** and **controlled** cluster-level failover, if the failed primary cluster doesn't replicate messages immediately to the backup cluster, the Pulsar client can't consume the non-replicated messages. After the primary cluster is restored and the Pulsar client switches back, the non-replicated data can still be consumed by the Pulsar client. Consequently, the data is not lost.<br /><br />- For the **automatic** cluster-level failover, [...]
+Result in Pulsar client failure? |No or maybe.<br /><br />- For **automatic** cluster-level failover, services can be switched and recovered automatically and the Pulsar client does not fail. <br /><br />- For **controlled** cluster-level failover, services can be switched and recovered manually, but the Pulsar client fails before administrators can take action. |Same as above.
+
+> #### How to use cluster-level failover
+
+This section guides you through every step on how to configure cluster-level failover.
+
+**Tip**
+
+- You should configure cluster-level failover only when the cluster contains sufficient resources to handle all possible consequences. Workload intensity on the backup cluster may increase significantly.
+
+- Connect clusters to an uninterruptible power supply (UPS) unit to reduce the risk of unexpected power loss.
+
+**Requirements**
+
+* Pulsar client 2.10 or later versions.
+
+* For backup clusters:
+
+  * The number of BookKeeper nodes should be equal to or greater than the ensemble quorum.
+
+  * The number of ZooKeeper nodes should be equal to or greater than 3.
+
+* **Turn on geo-replication** between the primary cluster and any dependent cluster (primary to backup or backup to backup) to prevent data loss.
+
+* Set `replicateSubscriptionState` to `true` when creating consumers.
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+This is an example of how to construct a Java Pulsar client to use automatic cluster-level failover. The switchover is triggered automatically.
+
+```
+
+private PulsarClient getAutoFailoverClient() throws PulsarClientException {
+
+    ServiceUrlProvider failover = AutoClusterFailover.builder()
+            .primary("pulsar://localhost:6650")
+            .secondary(Arrays.asList("pulsar://other1:6650", "pulsar://other2:6650"))
+            .failoverDelay(30, TimeUnit.SECONDS)
+            .switchBackDelay(60, TimeUnit.SECONDS)
+            .checkInterval(1000, TimeUnit.MILLISECONDS)
+            .secondaryTlsTrustCertsFilePath("/path/to/ca.cert.pem")
+            .secondaryAuthentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
+                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
+            .build();
+
+    PulsarClient pulsarClient = PulsarClient.builder()
+            .serviceUrlProvider(failover)
+            .build();
+
+    return pulsarClient;
+}
+
+```
+
+Configure the following parameters:
+
+Parameter|Default value|Required?|Description
+|---|---|---|---
+`primary`|N/A|Yes|Service URL of the primary cluster.
+`secondary`|N/A|Yes|Service URL(s) of one or several backup clusters.<br /><br />You can specify several backup clusters using a comma-separated list.<br /><br /> Note that:<br />- The backup cluster is chosen in the sequence shown in the list. <br />- If all backup clusters are available, the Pulsar client chooses the first backup cluster.
+`failoverDelay`|N/A|Yes|The delay before the Pulsar client switches from the primary cluster to the backup cluster.<br /><br />Automatic failover is controlled by a probe task: <br />1) The probe task first checks the health status of the primary cluster. <br /> 2) If the probe task finds that the continuous failure time of the primary cluster exceeds `failoverDelay`, it switches the Pulsar client to the backup cluster. 
+`switchBackDelay`|N/A|Yes|The delay before the Pulsar client switches from the backup cluster to the primary cluster.<br /><br />Automatic failover switchover is controlled by a probe task: <br /> 1) After the Pulsar client switches from the primary cluster to the backup cluster, the probe task continues to check the status of the primary cluster. <br /> 2) If the primary cluster functions well and continuously remains active longer than `switchBackDelay`, the Pulsar client switches back [...]
+`checkInterval`|30s|No|Frequency of performing a probe task (in seconds).
+`secondaryTlsTrustCertsFilePath`|N/A|No|Path to the trusted TLS certificate file of the backup cluster.
+`secondaryAuthentication`|N/A|No|Authentication of the backup cluster.
+
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+This is an example of how to construct a Java Pulsar client to use controlled cluster-level failover. The switchover is triggered by administrators manually.
+
+**Note**: you can have one or several backup clusters but can only specify one.
+
+```
+
+public PulsarClient getControlledFailoverClient() throws IOException {
+    Map<String, String> header = new HashMap<>();
+    header.put("service_user_id", "my-user");
+    header.put("service_password", "tiger");
+    header.put("clusterA", "tokenA");
+    header.put("clusterB", "tokenB");
+
+    ServiceUrlProvider provider = ControlledClusterFailover.builder()
+            .defaultServiceUrl("pulsar://localhost:6650")
+            .checkInterval(1, TimeUnit.MINUTES)
+            .urlProvider("http://localhost:8080/test")
+            .urlProviderHeader(header)
+            .build();
+
+    PulsarClient pulsarClient = PulsarClient.builder()
+            .serviceUrlProvider(provider)
+            .build();
+
+    return pulsarClient;
+}
+
+```
+
+Parameter|Default value|Required?|Description
+|---|---|---|---
+`defaultServiceUrl`|N/A|Yes|Pulsar service URL.
+`checkInterval`|30s|No|Frequency of performing a probe task (in seconds).
+`urlProvider`|N/A|Yes|URL provider service.
+`urlProviderHeader`|N/A|No|`urlProviderHeader` is a map containing tokens and credentials. <br /><br />If you enable authentication or authorization between Pulsar clients and primary and backup clusters, you need to provide `urlProviderHeader`.
+
+Here is an example of how `urlProviderHeader` works.
+
+![How urlProviderHeader works](/assets/cluster-level-failover-3.png)
+
+Assume that you want to connect Pulsar client 1 to cluster A.
+
+1. Pulsar client 1 sends the token *t1* to the URL provider service.
+
+2. The URL provider service returns the credential *c1* and the cluster A URL to the Pulsar client.
+   
+   The URL provider service manages all tokens and credentials. It returns different credentials and target cluster URLs to different Pulsar clients based on their tokens.
+
+   **Note**: **the credential must be in a JSON file and contain parameters as shown**.
+
+   ```
+   
+   {
+   "serviceUrl": "pulsar+ssl://target:6651", 
+   "tlsTrustCertsFilePath": "/security/ca.cert.pem",
+   "authPluginClassName":"org.apache.pulsar.client.impl.auth.AuthenticationTls",
+   "authParamsString": " \"tlsCertFile\": \"/security/client.cert.pem\"  \"tlsKeyFile\": \"/security/client-pk8.pem\" "
+   }
+   
+   ```
+
+3. Pulsar client 1 connects to cluster A using credential *c1*.
+
+</TabItem>
+
+</Tabs>
+````
+
+> #### How does cluster-level failover work?
+
+This chapter explains the working process of cluster-level failover. For more implementation details, see [PIP-121](https://github.com/apache/pulsar/issues/13315).
+
+````mdx-code-block
+<Tabs 
+  defaultValue="Automatic cluster-level failover"
+  values={[{"label":"Automatic cluster-level failover","value":"Automatic cluster-level failover"},{"label":"Controlled cluster-level failover","value":"Controlled cluster-level failover"}]}>
+<TabItem value="Automatic cluster-level failover">
+
+In an automatic failover cluster, the primary and backup clusters are aware of each other's availability. The automatic failover cluster performs the following actions without administrator intervention:
+
+1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
+   
+2. If the probe task finds the failure time of the primary cluster exceeds the time set in the `failoverDelay` parameter, it searches backup clusters for an available healthy cluster.
+
+   2a) If there are healthy backup clusters, the Pulsar client switches to a backup cluster in the order defined in `secondary`.
+
+   2b) If there is no healthy backup cluster, the Pulsar client does not perform the switchover, and the probe task continues to look for an available backup cluster.
+
+3. The probe task checks whether the primary cluster functions well or not. 
+
+   3a) If the primary cluster comes back and the continuous healthy time exceeds the time set in `switchBackDelay`, the Pulsar client switches back to the primary cluster.
+
+   3b) If the primary cluster does not come back, the Pulsar client does not perform the switchover. 
+
+![Workflow of automatic failover cluster](/assets/cluster-level-failover-4.png)
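
The decision logic in the steps above can be sketched as a small state machine. The following is an illustrative model only, not the actual `AutoClusterFailover` implementation; the class and method names are hypothetical, and health checks are passed in as plain booleans instead of real connectivity probes:

```java
import java.util.List;

// Hypothetical sketch of the probe state machine described in the steps above.
public class AutoFailoverSketch {
    private final String primary;
    private final List<String> secondaries;
    private final long failoverDelayMs;
    private final long switchBackDelayMs;
    private String current;
    private long primaryDownSinceMs = -1; // start of the current failure window
    private long primaryUpSinceMs = -1;   // start of the current recovery window

    public AutoFailoverSketch(String primary, List<String> secondaries,
                              long failoverDelayMs, long switchBackDelayMs) {
        this.primary = primary;
        this.secondaries = secondaries;
        this.failoverDelayMs = failoverDelayMs;
        this.switchBackDelayMs = switchBackDelayMs;
        this.current = primary;
    }

    // One probe tick: returns the service URL the client should use afterwards.
    public String probe(long nowMs, boolean primaryHealthy, List<Boolean> secondaryHealthy) {
        if (current.equals(primary)) {
            if (primaryHealthy) {
                primaryDownSinceMs = -1;                    // failure window resets
            } else {
                if (primaryDownSinceMs < 0) {
                    primaryDownSinceMs = nowMs;
                }
                if (nowMs - primaryDownSinceMs >= failoverDelayMs) {
                    // Step 2a: switch to the first healthy backup, in `secondary` order.
                    for (int i = 0; i < secondaries.size(); i++) {
                        if (secondaryHealthy.get(i)) {
                            current = secondaries.get(i);
                            break;
                        }
                    }
                    // Step 2b: if no backup is healthy, stay put and keep probing.
                }
            }
        } else {
            if (primaryHealthy) {
                if (primaryUpSinceMs < 0) {
                    primaryUpSinceMs = nowMs;
                }
                if (nowMs - primaryUpSinceMs >= switchBackDelayMs) {
                    current = primary;                      // Step 3a: switch back
                    primaryUpSinceMs = -1;
                    primaryDownSinceMs = -1;
                }
            } else {
                primaryUpSinceMs = -1;                      // recovery window resets
            }
        }
        return current;
    }

    public static void main(String[] args) {
        AutoFailoverSketch probe = new AutoFailoverSketch(
                "pulsar://primary:6650", List.of("pulsar://backup:6650"), 30_000, 60_000);
        System.out.println(probe.probe(0, false, List.of(true)));       // still within failoverDelay
        System.out.println(probe.probe(30_000, false, List.of(true)));  // failover to backup
        System.out.println(probe.probe(90_000, true, List.of(true)));   // primary recovering
        System.out.println(probe.probe(150_000, true, List.of(true)));  // switch back to primary
    }
}
```

Note how the failure and recovery windows reset whenever the observed health flips, which is why a flapping primary never triggers a premature switchover in either direction.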
+
+</TabItem>
+<TabItem value="Controlled cluster-level failover">
+
+1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
+
+2. The probe task fetches the service URL configuration from the URL provider service, which is configured by `urlProvider`.
+
+   2a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.
+
+   2b) If the service URL configuration is not changed, the Pulsar client does not perform the switchover.
+
+3. If the Pulsar client switches to the target cluster, the probe task continues to fetch service URL configuration from the URL provider service at intervals defined in `checkInterval`. 
+
+   3a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.
+
+   3b) If the service URL configuration is not changed, it does not perform the switchover.
+
+![Workflow of controlled failover cluster](/assets/cluster-level-failover-5.png)
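
The controlled-failover behavior above can be sketched as follows. This is an illustrative model only, not the Pulsar client implementation: the client switches whenever the service URL fetched from the `urlProvider` service differs from the one currently in use, with no health check.

```java
// Illustrative sketch of controlled cluster-level failover.
public class ControlledFailoverSketch {
    private String currentServiceUrl;

    ControlledFailoverSketch(String initialServiceUrl) {
        this.currentServiceUrl = initialServiceUrl;
    }

    /** Called on every probe; returns true if a switchover happened. */
    boolean onProbe(String fetchedServiceUrl) {
        if (!fetchedServiceUrl.equals(currentServiceUrl)) {
            currentServiceUrl = fetchedServiceUrl; // switch unconditionally
            return true;
        }
        return false; // configuration unchanged: no switchover
    }

    String currentServiceUrl() {
        return currentServiceUrl;
    }

    public static void main(String[] args) {
        ControlledFailoverSketch probe = new ControlledFailoverSketch("pulsar://cluster-a:6650");
        System.out.println(probe.onProbe("pulsar://cluster-a:6650")); // unchanged
        System.out.println(probe.onProbe("pulsar://cluster-b:6650")); // switched
    }
}
```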
+
+</TabItem>
+
+</Tabs>
+````
+
 ## Producer
 
 In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).
@@ -241,7 +569,9 @@ Name| Type |  <div>Description</div>|  Default
 `batchingMaxPublishDelayMicros`| long|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
 `batchingMaxMessages` |int|The maximum number of messages permitted in a batch.|1000
 `batchingEnabled`| boolean|Enable batching of messages. |true
+`chunkingEnabled` | boolean | Enable chunking of messages. |false
 `compressionType`|CompressionType|Message data compression type used by a producer. <br />Available options:<li>[`LZ4`](https://github.com/lz4/lz4)</li><li>[`ZLIB`](https://zlib.net/)<br /></li><li>[`ZSTD`](https://facebook.github.io/zstd/)</li><li>[`SNAPPY`](https://google.github.io/snappy/)</li>| No compression
+`initialSubscriptionName`|string|Use this configuration to automatically create an initial subscription when creating a topic. If this field is not set, the initial subscription is not created.|null
 
 You can configure parameters if you do not want to use the default configuration.
 
@@ -295,6 +625,24 @@ producer.newMessage()
 
 You can terminate the builder chain with `sendAsync()` and get a future return.
 
+### Enable chunking
+
+Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side. 
+
+The message chunking feature is OFF by default. The following is an example of how to enable message chunking when creating a producer.
+
+```java
+
+Producer<byte[]> producer = client.newProducer()
+        .topic(topic)
+        .enableChunking(true)
+        .enableBatching(false)
+        .create();
+
+```
+
+> **Note:** To enable chunking, you also need to disable batching (`enableBatching`=`false`).
+
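
Conceptually, the producer splits a large payload into `ceil(payloadSize / maxMessageSize)` chunks. The helper below is a hypothetical illustration of that arithmetic only (5 MB is used as an example maximum message size; the effective limit is a broker setting).

```java
// Hypothetical helper illustrating the chunk-count arithmetic described above.
public class ChunkCountSketch {

    static int chunkCount(long payloadSize, long maxMessageSize) {
        return (int) ((payloadSize + maxMessageSize - 1) / maxMessageSize); // ceiling division
    }

    public static void main(String[] args) {
        long fiveMb = 5L * 1024 * 1024; // example maximum message size
        System.out.println(chunkCount(12L * 1024 * 1024, fiveMb)); // 12 MB payload -> 3 chunks
    }
}
```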
 ## Consumer
 
 In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
@@ -382,7 +730,11 @@ When you create a consumer, you can use the `loadConf` configuration. The follow
 `deadLetterPolicy`|DeadLetterPolicy|Dead letter policy for consumers.<br /><br />By default, some messages are probably redelivered many times, even to the extent that it never stops.<br /><br />By using the dead letter mechanism, messages have the max redelivery count. **When exceeding the maximum number of redeliveries, messages are sent to the Dead Letter Topic and acknowledged automatically**.<br /><br />You can enable the dead letter mechanism by setting `deadLetterPolicy`.<br /><br [...]
 `autoUpdatePartitions`|boolean|If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increasement automatically.<br /><br />**Note**: this is only for partitioned consumers.|true
 `replicateSubscriptionState`|boolean|If `replicateSubscriptionState` is enabled, a subscription state is replicated to geo-replicated clusters.|false
-`negativeAckRedeliveryBackoff`|NegativeAckRedeliveryBackoff|Interface for custom message is negativeAcked policy. You can specify `NegativeAckRedeliveryBackoff` for a consumer.| `NegativeAckRedeliveryExponentialBackoff`
+`negativeAckRedeliveryBackoff`|RedeliveryBackoff|Interface for a custom backoff policy for negatively acknowledged messages. You can specify a `RedeliveryBackoff` for a consumer.| `MultiplierRedeliveryBackoff`
+`ackTimeoutRedeliveryBackoff`|RedeliveryBackoff|Interface for a custom backoff policy for messages that exceed the acknowledgment timeout. You can specify a `RedeliveryBackoff` for a consumer.| `MultiplierRedeliveryBackoff`
+`autoAckOldestChunkedMessageOnQueueFull`|boolean|Whether to automatically acknowledge pending chunked messages when the threshold of `maxPendingChunkedMessage` is reached. If set to `false`, these messages are redelivered by the broker. |true
+`maxPendingChunkedMessage`|int| The maximum size of a queue holding pending chunked messages. When the threshold is reached, the consumer drops pending messages to optimize memory utilization.|10
+`expireTimeOfIncompleteChunkedMessageMillis`|long|The time interval to expire incomplete chunks if a consumer fails to receive all the chunks in the specified time period. The default value is 1 minute. | 60000
 
 You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class. 
 
@@ -462,27 +814,78 @@ BatchReceivePolicy.builder()
 
 :::
 
+### Configure chunking
+
+You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` and `autoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. The `expireTimeOfIncompleteChunkedMessage` parameter decides the time interval to expire incomplete chunks if the consumer fails to receive all chunks of a message within the specified time period.
+
+The following is an example of how to configure message chunking.
+
+```java
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic(topic)
+        .subscriptionName("test")
+        .autoAckOldestChunkedMessageOnQueueFull(true)
+        .maxPendingChunkedMessage(100)
+        .expireTimeOfIncompleteChunkedMessage(10, TimeUnit.MINUTES)
+        .subscribe();
+
+```
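
The queue-full policy controlled by these parameters can be sketched as follows. This is an illustrative model only, not the Pulsar client implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of the pending-chunked-message policy: when the queue
// reaches maxPendingChunkedMessage, the oldest entry is either silently
// acknowledged (dropped) or flagged for broker redelivery.
public class PendingChunksSketch {
    final Deque<String> pending = new ArrayDeque<>();
    final int maxPendingChunkedMessage;
    final boolean autoAckOldestOnQueueFull;
    int dropped = 0, redelivered = 0;

    PendingChunksSketch(int maxPendingChunkedMessage, boolean autoAckOldestOnQueueFull) {
        this.maxPendingChunkedMessage = maxPendingChunkedMessage;
        this.autoAckOldestOnQueueFull = autoAckOldestOnQueueFull;
    }

    void onNewChunkedMessage(String messageId) {
        if (pending.size() == maxPendingChunkedMessage) {
            pending.removeFirst();
            if (autoAckOldestOnQueueFull) {
                dropped++;      // silently acknowledged
            } else {
                redelivered++;  // the broker redelivers it later
            }
        }
        pending.addLast(messageId);
    }

    public static void main(String[] args) {
        PendingChunksSketch sketch = new PendingChunksSketch(2, true);
        sketch.onNewChunkedMessage("m1");
        sketch.onNewChunkedMessage("m2");
        sketch.onNewChunkedMessage("m3"); // queue full: m1 is silently acked
        System.out.println(sketch.dropped);
    }
}
```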
+
 ### Negative acknowledgment redelivery backoff
 
-The `NegativeAckRedeliveryBackoff` introduces a redelivery backoff mechanism. You can achieve redelivery with different delays by setting `redeliveryCount ` of messages. 
+The `RedeliveryBackoff` interface introduces a redelivery backoff mechanism. You can redeliver messages with different delays based on the `redeliveryCount` of messages. 
+
+```java
+
+Consumer consumer =  client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+                .minDelayMs(1000)
+                .maxDelayMs(60 * 1000)
+                .build())
+        .subscribe();
+
+```
+
+### Acknowledgement timeout redelivery backoff
+
+The `RedeliveryBackoff` interface introduces a redelivery backoff mechanism. You can redeliver messages with different delays based on the number
+of times a message has been retried.
 
 ```java
 
 Consumer consumer =  client.newConsumer()
         .topic("my-topic")
         .subscriptionName("my-subscription")
-        .negativeAckRedeliveryBackoff(NegativeAckRedeliveryExponentialBackoff.builder()
-                .minNackTimeMs(1000)
-                .maxNackTimeMs(60 * 1000)
+        .ackTimeout(10, TimeUnit.SECONDS)
+        .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+                .minDelayMs(1000)
+                .maxDelayMs(60000)
+                .multiplier(2)
                 .build())
         .subscribe();
 
 ```
 
+With the above configuration, the message redelivery behavior is as follows.
+
+Redelivery count | Redelivery delay
+:--------------------|:-----------
+1 | 10 + 1 seconds
+2 | 10 + 2 seconds
+3 | 10 + 4 seconds
+4 | 10 + 8 seconds
+5 | 10 + 16 seconds
+6 | 10 + 32 seconds
+7 | 10 + 60 seconds
+8 | 10 + 60 seconds
+
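
The schedule above follows from the multiplier backoff: the delay starts at `minDelayMs`, is multiplied by `multiplier` on each redelivery, is capped at `maxDelayMs`, and the acknowledgment timeout (10 seconds here) is added on top. A sketch of that arithmetic (illustrative only, not the client implementation):

```java
public class AckTimeoutBackoffSketch {

    // Backoff starts at minDelayMs, grows by the multiplier per redelivery,
    // is capped at maxDelayMs, and the ack timeout is added on top.
    static long redeliveryDelayMs(int redeliveryCount, long ackTimeoutMs,
                                  long minDelayMs, long maxDelayMs, double multiplier) {
        long backoff = (long) (minDelayMs * Math.pow(multiplier, redeliveryCount - 1));
        return ackTimeoutMs + Math.min(backoff, maxDelayMs);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 8; n++) {
            // Matches the table above: 11000, 12000, 14000, ..., 70000, 70000 ms
            System.out.println(redeliveryDelayMs(n, 10_000, 1_000, 60_000, 2));
        }
    }
}
```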
 :::note
 
 - The `negativeAckRedeliveryBackoff` does not work with `consumer.negativeAcknowledge(MessageId messageId)` because you are not able to get the redelivery count from the message ID.
-- If a consumer crashes, it triggers the redelivery of unacked messages. In this case, `NegativeAckRedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time from the backoff.
+- If a consumer crashes, it triggers the redelivery of unacked messages. In this case, `RedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time from the backoff.
 
 :::
 
@@ -870,6 +1273,53 @@ pulsarClient.newReader()
 
 Total hash range size is 65536, so the max end of the range should be less than or equal to 65535.
 
+
+## TableView
+
+The TableView interface provides an encapsulated access pattern: a continuously updated key-value map view of the compacted topic data. Messages without keys are ignored.
+
+With TableView, Pulsar clients can fetch all the message updates from a topic and construct a map with the latest value of each key. These values can then be used to build a local cache of data. In addition, you can register listeners with the TableView to first perform a scan of the map and then receive notifications when new messages arrive. Consequently, event handling can be triggered to serve use cases such as event-driven applications and message monitoring.
+
+> **Note:** Each TableView uses one Reader instance per partition, and reads the topic starting from the compacted view by default. It is highly recommended to enable automatic compaction by [configuring the topic compaction policies](cookbooks-compaction.md#configuring-compaction-to-run-automatically) for the given topic or namespace. More frequent compaction results in shorter startup times because less data is replayed to reconstruct the TableView of the topic.
+
+The following figure illustrates the dynamic construction of a TableView updated with newer values of each key.
+![TableView](/assets/tableview.png)
+
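
The map semantics illustrated in the figure can be sketched as follows. This is an illustrative model only, not the client implementation: the latest value wins per key, and keyless messages are ignored.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the map a TableView maintains.
public class TableViewSemantics {

    // Each entry of messages is a {key, value} pair; a null key models a
    // message published without a key.
    static Map<String, String> latestByKey(String[][] messages) {
        Map<String, String> view = new LinkedHashMap<>();
        for (String[] m : messages) {
            if (m[0] == null) {
                continue;        // messages without keys are ignored
            }
            view.put(m[0], m[1]); // newer values replace older ones
        }
        return view;
    }

    public static void main(String[] args) {
        String[][] stream = {{"a", "1"}, {null, "x"}, {"a", "2"}, {"b", "3"}};
        System.out.println(latestByKey(stream)); // {a=2, b=3}
    }
}
```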
+### Configure TableView
+ 
+The following is an example of how to configure a TableView.
+
+```java
+
+TableView<String> tv = client.newTableViewBuilder(Schema.STRING)
+  .topic("my-tableview")
+  .create();
+
+```
+
+You can use the available parameters in the `loadConf` configuration or related [API](https://pulsar.apache.org/api/client/2.10.0-SNAPSHOT/org/apache/pulsar/client/api/TableViewBuilder.html) to customize your TableView.
+
+| Name | Type| Required? |  <div>Description</div> | Default
+|---|---|---|---|---
+| `topic` | string | yes | The topic name of the TableView. | N/A
+| `autoUpdatePartitionInterval` | int | no | The interval to check for newly added partitions. | 60 (seconds)
+
+### Register listeners
+ 
+You can register listeners for both existing messages on a topic and new messages coming into the topic by using `forEachAndListen`, or perform operations on existing messages only by using `forEach`.
+
+The following is an example of how to register listeners with TableView.
+
+```java
+
+// Register listeners for all existing and incoming messages
+tv.forEachAndListen((key, value) -> /*operations on all existing and incoming messages*/)
+
+// Register action for all existing messages
+tv.forEach((key, value) -> /*operations on all existing messages*/)
+
+```
+
 ## Schema
 
 In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
diff --git a/site2/website-next/versioned_docs/version-2.3.0/client-libraries-websocket.md b/site2/website-next/versioned_docs/version-2.3.0/client-libraries-websocket.md
index c663f97..a6e6036 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/client-libraries-websocket.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/client-libraries-websocket.md
@@ -32,7 +32,7 @@ webSocketServiceEnabled=true
 
 In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters:
 
-* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers)
+* [`configurationMetadataStoreUrl`](reference-configuration.md#websocket)
 * [`webServicePort`](reference-configuration.md#websocket-webServicePort)
 * [`clusterName`](reference-configuration.md#websocket-clusterName)
 
@@ -40,7 +40,7 @@ Here's an example:
 
 ```properties
 
-configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
+configurationMetadataStoreUrl=zk1:2181,zk2:2181,zk3:2181
 webServicePort=8080
 clusterName=my-cluster
 
diff --git a/site2/website-next/versioned_docs/version-2.3.0/client-libraries.md b/site2/website-next/versioned_docs/version-2.3.0/client-libraries.md
index ab5b7c4..536cd0c 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/client-libraries.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/client-libraries.md
@@ -6,16 +6,25 @@ sidebar_label: "Overview"
 
 Pulsar supports the following client libraries:
 
-- [Java client](client-libraries-java)
-- [Go client](client-libraries-go)
-- [Python client](client-libraries-python)
-- [C++ client](client-libraries-cpp)
-- [Node.js client](client-libraries-node)
-- [WebSocket client](client-libraries-websocket)
-- [C# client](client-libraries-dotnet)
+|Language|Documentation|Release note|Code repo
+|---|---|---|---
+Java |- [User doc](client-libraries-java) <br /><br />- [API doc](https://pulsar.apache.org/api/client/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client) 
+C++ | - [User doc](client-libraries-cpp) <br /><br />- [API doc](https://pulsar.apache.org/api/cpp/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp) 
+Python | - [User doc](client-libraries-python) <br /><br />- [API doc](https://pulsar.apache.org/api/python/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) 
+WebSocket| [User doc](client-libraries-websocket) | [Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-websocket) 
+Go client|[User doc](client-libraries-go.md)|[Here](https://github.com/apache/pulsar-client-go/blob/master/CHANGELOG) |[Here](https://github.com/apache/pulsar-client-go) 
+Node.js|[User doc](client-libraries-node)|[Here](https://github.com/apache/pulsar-client-node/releases) |[Here](https://github.com/apache/pulsar-client-node) 
+C# |[User doc](client-libraries-dotnet.md)| [Here](https://github.com/apache/pulsar-dotpulsar/blob/master/CHANGELOG)|[Here](https://github.com/apache/pulsar-dotpulsar) 
+
+:::note
+
+- The code repos of **Java, C++, Python,** and **WebSocket** clients are hosted in the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are released with Pulsar, so their release notes are part of the [Pulsar release notes](https://pulsar.apache.org/release-notes/).
+- The code repos of **Go, Node.js,** and **C#** clients are hosted outside of the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are not released with Pulsar, so they have independent release notes.
+
+:::
 
 ## Feature matrix
-Pulsar client feature matrix for different languages is listed on [Pulsar Feature Matrix (Client and Function)](https://github.com/apache/pulsar/wiki/PIP-108%3A-Pulsar-Feature-Matrix-%28Client-and-Function%29) page.
+Pulsar client feature matrix for different languages is listed on [Pulsar Feature Matrix (Client and Function)](https://docs.google.com/spreadsheets/d/1YHYTkIXR8-Ql103u-IMI18TXLlGStK8uJjDsOOA0T20/edit#gid=1784579914) page.
 
 ## Third-party clients
 
diff --git a/site2/website-next/versioned_docs/version-2.3.0/concepts-architecture-overview.md b/site2/website-next/versioned_docs/version-2.3.0/concepts-architecture-overview.md
index 8fe0717..a2b024d 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/concepts-architecture-overview.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/concepts-architecture-overview.md
@@ -47,6 +47,9 @@ Clusters can replicate amongst themselves using [geo-replication](concepts-repli
 
 The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and [BookKeeper metadata store](https://bookkee [...]
 
+> Pulsar also supports additional metadata backend services, including [etcd](https://etcd.io/) and [RocksDB](http://rocksdb.org/) (for standalone Pulsar only). 
+
+
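
As an illustrative sketch only (the exact URL schemes and supported backends depend on the release you run, and the endpoints below are hypothetical), the metadata backend is selected through the metadata store URL:

```properties

# broker.conf — ZooKeeper backend
metadataStoreUrl=zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181

# broker.conf — etcd backend (hypothetical endpoint)
# metadataStoreUrl=etcd:http://my-etcd-1:2379

# standalone.conf — RocksDB backend (standalone Pulsar only, hypothetical path)
# metadataStoreUrl=rocksdb://data/metadata

```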
 In a Pulsar instance:
 
 * A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
@@ -128,9 +131,10 @@ Architecturally, the Pulsar proxy gets all the information it requires from ZooK
 
 ```bash
 
+$ cd /path/to/pulsar/directory
 $ bin/pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk-2 \
-  --configuration-store-servers zk-0,zk-1,zk-2
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
diff --git a/site2/website-next/versioned_docs/version-2.3.0/cookbooks-deduplication.md b/site2/website-next/versioned_docs/version-2.3.0/cookbooks-deduplication.md
index e71e6f4..a14a3c3 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/cookbooks-deduplication.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/cookbooks-deduplication.md
@@ -31,6 +31,7 @@ Parameter | Description | Default
 `brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false`
 `brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
 `brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
+`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. |`120`
 `brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)
 
 ### Set default value at the broker-level
diff --git a/site2/website-next/versioned_docs/version-2.3.0/deploy-bare-metal-multi-cluster.md b/site2/website-next/versioned_docs/version-2.3.0/deploy-bare-metal-multi-cluster.md
index 9dd2526..875b75d 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/deploy-bare-metal-multi-cluster.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/deploy-bare-metal-multi-cluster.md
@@ -226,8 +226,8 @@ You can initialize this metadata using the [`initialize-cluster-metadata`](refer
 
 $ bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
@@ -308,7 +308,7 @@ Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper b
 
 You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
 
-The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those  [...]
+The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`metadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the local quorum and the [`configurationMetadataStoreUrl`](reference-configuration.md#broker) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same c [...]
 
 You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata (especially when you use a different port from default) of the cluster.
 
@@ -317,10 +317,10 @@ The following is an example configuration:
 ```properties
 
 # Local ZooKeeper servers
-zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+metadataStoreUrl=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
 
 # Configuration store quorum connection string.
-configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+configurationMetadataStoreUrl=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
 
 clusterName=us-west
 
diff --git a/site2/website-next/versioned_docs/version-2.3.0/develop-binary-protocol.md b/site2/website-next/versioned_docs/version-2.3.0/develop-binary-protocol.md
index fa03383..63e43dd 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/develop-binary-protocol.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/develop-binary-protocol.md
@@ -240,8 +240,10 @@ Parameters:
 ##### Command Send
 
 Command `Send` is used to publish a new message within the context of an
-already existing producer. This command is used in a frame that includes command
-as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.
+already existing producer. If a producer has not yet been created for the
+connection, the broker will terminate the connection. This command is used
+in a frame that includes command as well as message payload, for which the
+complete format is specified in the [payload commands](#payload-commands) section.
 
 ```protobuf
 
diff --git a/site2/website-next/versioned_docs/version-2.3.0/sql-deployment-configurations.md b/site2/website-next/versioned_docs/version-2.3.0/sql-deployment-configurations.md
index 9e7ff5a..10fb47a 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/sql-deployment-configurations.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/sql-deployment-configurations.md
@@ -115,20 +115,15 @@ pulsar.zookeeper-uri=localhost1,localhost2:2181
 
 ```
 
-A frequently asked question is why my latest message not showing up when querying with Pulsar SQL.
-It's not a bug but controlled by a setting, by default BookKeeper LAC only advanced when subsequent entries are added.
-If there is no subsequent entries added, the last entry written will not be visible to readers until the ledger is closed.
-This is not a problem for Pulsar which uses managed ledger, but Pulsar SQL directly read from BookKeeper ledger.
-We can add following setting to change the behavior:
-In Broker config, set
-bookkeeperExplicitLacIntervalInMills > 0
-bookkeeperUseV2WireProtocol=false
-
-And in Presto config, set
-pulsar.bookkeeper-explicit-interval > 0
-pulsar.bookkeeper-use-v2-protocol=false
-
-However,keep in mind that using bk V3 protocol will introduce additional GC overhead to BK as it uses Protobuf.
+**Note: by default, Pulsar SQL does not get the last message in a topic**. This is by design and is controlled by configuration. By default, the BookKeeper LAC (last add confirmed) only advances when subsequent entries are added. If no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar, which uses managed ledgers, but Pulsar SQL reads directly from the BookKeeper ledger. 
+
+If you want to get the last message in a topic, set the following configurations:
+
+1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 and `bookkeeperUseV2WireProtocol=false` in `broker.conf` or `standalone.conf`.
+   
+2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.
+
+However, keep in mind that using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper, because the V3 protocol uses Protobuf.
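
Put together, the two steps above can look like the following sketch (the interval values are illustrative, not recommendations):

```properties

# broker.conf (or standalone.conf)
bookkeeperExplicitLacIntervalInMills=1000

# Presto catalog configuration, e.g. pulsar.properties
pulsar.bookkeeper-explicit-interval=1000
pulsar.bookkeeper-use-v2-protocol=false

```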
 
 ## Query data from existing Presto clusters
 
diff --git a/site2/website-next/versioned_docs/version-2.3.0/standalone-docker.md b/site2/website-next/versioned_docs/version-2.3.0/standalone-docker.md
index 7ee20c2..f636d2d 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/standalone-docker.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/standalone-docker.md
@@ -22,6 +22,7 @@ A few things to note about this command:
  * The data, metadata, and configuration are persisted on Docker volumes in order to not start "fresh" every 
 time the container is restarted. For details on the volumes you can use `docker volume inspect <sourcename>`
  * For Docker on Windows make sure to configure it to use Linux containers
+ * The Docker container runs as UID 10000 and GID 0 by default. You need to ensure that the mounted volumes give write permission to either UID 10000 or GID 0. Note that UID 10000 is arbitrary, so it is recommended to make these mounts writable for the root group (GID 0).
 
 If you start Pulsar successfully, you will see `INFO`-level log messages like this:
 
diff --git a/site2/website-next/versioned_docs/version-2.3.1/admin-api-clusters.md b/site2/website-next/versioned_docs/version-2.3.1/admin-api-clusters.md
index ccd3ebb..3c2f661 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/admin-api-clusters.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/admin-api-clusters.md
@@ -103,8 +103,8 @@ Here's an example cluster metadata initialization command:
 
 bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
diff --git a/site2/website-next/versioned_docs/version-2.3.1/administration-proxy.md b/site2/website-next/versioned_docs/version-2.3.1/administration-proxy.md
index 3cef937..5228b9a 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/administration-proxy.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/administration-proxy.md
@@ -8,22 +8,9 @@ Pulsar proxy is an optional gateway. Pulsar proxy is used when direct connection
 
 ## Configure the proxy
 
-Before using the proxy, you need to configure it with the brokers addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration. 
+Before using the proxy, you need to configure it with the broker addresses in the cluster. You can either specify the broker URLs in the proxy configuration, or configure the proxy to connect directly using service discovery.
 
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-
-```properties
-
-zookeeperServers=zk-0,zk-1,zk-2
-configurationStoreServers=zk-0:2184,zk-remote:2184
-
-```
-
-> To use service discovery, you need to open the network ACLs, so the proxy can connects to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, it is not secure to use service discovery. Because if the network ACL is open, when someone compromises a proxy, they have full access to ZooKeeper. 
+> Using service discovery is not recommended in a production environment.
 
 ### Use broker URLs
 
@@ -57,6 +44,21 @@ The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651
 
 Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
 
+### Use service discovery
+
+Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
+
+```properties
+
+metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
+configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
+
+```
+
+> To use service discovery, you need to open the network ACLs so that the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
+
+> However, using service discovery is not secure: if the network ACL is open, anyone who compromises a proxy gains full access to ZooKeeper. 
+
 ## Start the proxy
 
 To start the proxy:
@@ -64,7 +66,9 @@ To start the proxy:
 ```bash
 
 $ cd /path/to/pulsar/directory
-$ bin/pulsar proxy
+$ bin/pulsar proxy \
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
diff --git a/site2/website-next/versioned_docs/version-2.3.1/administration-zk-bk.md b/site2/website-next/versioned_docs/version-2.3.1/administration-zk-bk.md
index e5f9688..c0aec95 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/administration-zk-bk.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/administration-zk-bk.md
@@ -147,27 +147,19 @@ $ bin/pulsar-daemon start configuration-store
 
 ### ZooKeeper configuration
 
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
+In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation:
+* The `conf/zookeeper.conf` file handles the configuration for local ZooKeeper.
+* The `conf/global-zookeeper.conf` file handles the configuration for the configuration store.
+See [parameters](reference-configuration.md#zookeeper) for more details.
 
-#### Local ZooKeeper
+#### Configure batching operations
+Using batching operations reduces the remote procedure call (RPC) traffic between the ZooKeeper client and servers. It also reduces the number of write transactions, because each batching operation corresponds to a single ZooKeeper transaction that contains multiple read and write operations.
 
-The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
+The following figure demonstrates a basic benchmark of batching read/write operations that can be requested to ZooKeeper in one second:
 
-|Name|Description|Default|
-|---|---|---|
-|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort|  The port on which the ZooKeeper server listens for connections. |2181|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
-|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables. Read this guide before enabling auto purge. |1|
-|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
+![Zookeeper batching benchmark](/assets/zookeeper-batching.png)
 
-
-#### Configuration Store
-
-The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for configuration store. The table below shows the available parameters:
+To enable batching operations, set the [`metadataStoreBatchingEnabled`](reference-configuration.md#broker) parameter to `true` on the broker side.
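+
+For example, you can enable batching in `conf/broker.conf`. The enable flag is named in the text above; the tuning parameters below are assumptions based on the broker configuration reference, and the values are illustrative:
+
+```properties
+
+# Batch ZooKeeper read/write operations into single transactions
+metadataStoreBatchingEnabled=true
+# Assumed tuning knobs: maximum delay, operation count, and payload size per batch
+metadataStoreBatchingMaxDelayMillis=5
+metadataStoreBatchingMaxOperations=1000
+metadataStoreBatchingMaxSizeKb=128
+
+```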
 
 
 ## BookKeeper
@@ -194,6 +186,12 @@ You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](referenc
 
 The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
 
+:::note
+
+Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
+
+:::
+
 ```properties
 
 # Change to point to journal disk mount point
@@ -205,6 +203,9 @@ ledgerDirectories=data/bookkeeper/ledgers
 # Point to local ZK quorum
 zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
 
+# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
+advertisedAddress=
+
 ```
 
 To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
diff --git a/site2/website-next/versioned_docs/version-2.3.1/client-libraries-websocket.md b/site2/website-next/versioned_docs/version-2.3.1/client-libraries-websocket.md
index c663f97..a6e6036 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/client-libraries-websocket.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/client-libraries-websocket.md
@@ -32,7 +32,7 @@ webSocketServiceEnabled=true
 
 In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters:
 
-* [`configurationStoreServers`](reference-configuration.md#websocket-configurationStoreServers)
+* [`configurationMetadataStoreUrl`](reference-configuration.md#websocket)
 * [`webServicePort`](reference-configuration.md#websocket-webServicePort)
 * [`clusterName`](reference-configuration.md#websocket-clusterName)
 
@@ -40,7 +40,7 @@ Here's an example:
 
 ```properties
 
-configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
+configurationMetadataStoreUrl=zk1:2181,zk2:2181,zk3:2181
 webServicePort=8080
 clusterName=my-cluster
 
diff --git a/site2/website-next/versioned_docs/version-2.3.1/client-libraries.md b/site2/website-next/versioned_docs/version-2.3.1/client-libraries.md
index ab5b7c4..536cd0c 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/client-libraries.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/client-libraries.md
@@ -6,16 +6,25 @@ sidebar_label: "Overview"
 
 Pulsar supports the following client libraries:
 
-- [Java client](client-libraries-java)
-- [Go client](client-libraries-go)
-- [Python client](client-libraries-python)
-- [C++ client](client-libraries-cpp)
-- [Node.js client](client-libraries-node)
-- [WebSocket client](client-libraries-websocket)
-- [C# client](client-libraries-dotnet)
+|Language|Documentation|Release note|Code repo
+|---|---|---|---
+Java |- [User doc](client-libraries-java) <br /><br />- [API doc](https://pulsar.apache.org/api/client/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client) 
+C++ | - [User doc](client-libraries-cpp) <br /><br />- [API doc](https://pulsar.apache.org/api/cpp/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp) 
+Python | - [User doc](client-libraries-python) <br /><br />- [API doc](https://pulsar.apache.org/api/python/)|[Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) 
+WebSocket| [User doc](client-libraries-websocket) | [Here](https://pulsar.apache.org/release-notes/)|[Here](https://github.com/apache/pulsar/tree/master/pulsar-websocket) 
+Go |[User doc](client-libraries-go)|[Here](https://github.com/apache/pulsar-client-go/blob/master/CHANGELOG) |[Here](https://github.com/apache/pulsar-client-go) 
+Node.js|[User doc](client-libraries-node)|[Here](https://github.com/apache/pulsar-client-node/releases) |[Here](https://github.com/apache/pulsar-client-node) 
+C# |[User doc](client-libraries-dotnet)| [Here](https://github.com/apache/pulsar-dotpulsar/blob/master/CHANGELOG)|[Here](https://github.com/apache/pulsar-dotpulsar) 
+
+:::note
+
+- The code repos of **Java, C++, Python,** and **WebSocket** clients are hosted in the [Pulsar main repo](https://github.com/apache/pulsar), and these clients are released with Pulsar, so their release notes are part of the [Pulsar release notes](https://pulsar.apache.org/release-notes/).
+- The code repos of **Go, Node.js,** and **C#** clients are hosted outside of the [Pulsar main repo](https://github.com/apache/pulsar) and these clients are not released with Pulsar, so they have independent release notes.
+
+:::
 
 ## Feature matrix
-Pulsar client feature matrix for different languages is listed on [Pulsar Feature Matrix (Client and Function)](https://github.com/apache/pulsar/wiki/PIP-108%3A-Pulsar-Feature-Matrix-%28Client-and-Function%29) page.
+The Pulsar client feature matrix for different languages is listed on the [Pulsar Feature Matrix (Client and Function)](https://docs.google.com/spreadsheets/d/1YHYTkIXR8-Ql103u-IMI18TXLlGStK8uJjDsOOA0T20/edit#gid=1784579914) page.
 
 ## Third-party clients
 
diff --git a/site2/website-next/versioned_docs/version-2.3.1/concepts-architecture-overview.md b/site2/website-next/versioned_docs/version-2.3.1/concepts-architecture-overview.md
index 8fe0717..a2b024d 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/concepts-architecture-overview.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/concepts-architecture-overview.md
@@ -47,6 +47,9 @@ Clusters can replicate amongst themselves using [geo-replication](concepts-repli
 
 The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and [BookKeeper metadata store](https://bookkee [...]
 
+> Pulsar also supports other metadata backend services, including [ETCD](https://etcd.io/) and [RocksDB](http://rocksdb.org/) (for standalone Pulsar only).
+
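+For example, a broker configuration pointing the metadata store at etcd might look like the following. This is a sketch only: the `etcd:` URI scheme and the endpoint list are assumptions, so verify them against the metadata store documentation for your Pulsar version.
+
+```properties
+
+# Assumed syntax for an etcd-backed metadata store
+metadataStoreUrl=etcd:http://my-etcd-1:2379,http://my-etcd-2:2379
+
+```
+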
+
 In a Pulsar instance:
 
 * A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
@@ -128,9 +131,10 @@ Architecturally, the Pulsar proxy gets all the information it requires from ZooK
 
 ```bash
 
+$ cd /path/to/pulsar/directory
 $ bin/pulsar proxy \
-  --zookeeper-servers zk-0,zk-1,zk-2 \
-  --configuration-store-servers zk-0,zk-1,zk-2
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
diff --git a/site2/website-next/versioned_docs/version-2.3.1/cookbooks-deduplication.md b/site2/website-next/versioned_docs/version-2.3.1/cookbooks-deduplication.md
index e71e6f4..a14a3c3 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/cookbooks-deduplication.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/cookbooks-deduplication.md
@@ -31,6 +31,7 @@ Parameter | Description | Default
 `brokerDeduplicationEnabled` | Sets the default behavior for message deduplication in the Pulsar broker. If it is set to `true`, message deduplication is enabled on all namespaces/topics. If it is set to `false`, you have to enable or disable deduplication at the namespace level or the topic level. | `false`
 `brokerDeduplicationMaxNumberOfProducers` | The maximum number of producers for which information is stored for deduplication purposes. | `10000`
 `brokerDeduplicationEntriesInterval` | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | `1000`
+`brokerDeduplicationSnapshotIntervalSeconds`| The time period after which a deduplication informational snapshot is taken. It takes effect together with `brokerDeduplicationEntriesInterval`. |`120`
 `brokerDeduplicationProducerInactivityTimeoutMinutes` | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | `360` (6 hours)
 
 ### Set default value at the broker-level
diff --git a/site2/website-next/versioned_docs/version-2.3.1/deploy-monitoring.md b/site2/website-next/versioned_docs/version-2.3.1/deploy-monitoring.md
index 95ccdd6..adf3587 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/deploy-monitoring.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/deploy-monitoring.md
@@ -51,7 +51,7 @@ http://$GLOBAL_ZK_SERVER:8001/metrics
 
 ```
 
-The default port of local ZooKeeper is `8000` and the default port of configuration store is `8001`. You can change the default port of local ZooKeeper and configuration store by specifying system property `stats_server_port`.
+The default port of local ZooKeeper is `8000` and the default port of the configuration store is `8001`. You can use a different stats port by configuring `metricsProvider.httpPort` in the `conf/zookeeper.conf` file.
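+
+For example, to expose the local ZooKeeper stats on port `8000`, you might add the following to `conf/zookeeper.conf`. The `metricsProvider.className` value is ZooKeeper's built-in Prometheus provider; verify the class name against your ZooKeeper version:
+
+```properties
+
+# Enable the Prometheus metrics provider and set its HTTP port
+metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
+metricsProvider.httpPort=8000
+
+```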
 
 ### BookKeeper stats
 
diff --git a/site2/website-next/versioned_docs/version-2.3.1/develop-binary-protocol.md b/site2/website-next/versioned_docs/version-2.3.1/develop-binary-protocol.md
index fa03383..63e43dd 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/develop-binary-protocol.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/develop-binary-protocol.md
@@ -240,8 +240,10 @@ Parameters:
 ##### Command Send
 
 Command `Send` is used to publish a new message within the context of an
-already existing producer. This command is used in a frame that includes command
-as well as message payload, for which the complete format is specified in the [payload commands](#payload-commands) section.
+already existing producer. If a producer has not yet been created for the
+connection, the broker will terminate the connection. This command is used
+in a frame that includes command as well as message payload, for which the
+complete format is specified in the [payload commands](#payload-commands) section.
 
 ```protobuf
 
diff --git a/site2/website-next/versioned_docs/version-2.3.1/sql-deployment-configurations.md b/site2/website-next/versioned_docs/version-2.3.1/sql-deployment-configurations.md
index 9e7ff5a..10fb47a 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/sql-deployment-configurations.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/sql-deployment-configurations.md
@@ -115,20 +115,15 @@ pulsar.zookeeper-uri=localhost1,localhost2:2181
 
 ```
 
-A frequently asked question is why my latest message not showing up when querying with Pulsar SQL.
-It's not a bug but controlled by a setting, by default BookKeeper LAC only advanced when subsequent entries are added.
-If there is no subsequent entries added, the last entry written will not be visible to readers until the ledger is closed.
-This is not a problem for Pulsar which uses managed ledger, but Pulsar SQL directly read from BookKeeper ledger.
-We can add following setting to change the behavior:
-In Broker config, set
-bookkeeperExplicitLacIntervalInMills > 0
-bookkeeperUseV2WireProtocol=false
-
-And in Presto config, set
-pulsar.bookkeeper-explicit-interval > 0
-pulsar.bookkeeper-use-v2-protocol=false
-
-However,keep in mind that using bk V3 protocol will introduce additional GC overhead to BK as it uses Protobuf.
+**Note: by default, Pulsar SQL does not get the last message in a topic**. This is by design and controlled by settings. By default, the BookKeeper LAC (last add confirmed) only advances when subsequent entries are added. If no subsequent entry is added, the last written entry is not visible to readers until the ledger is closed. This is not a problem for Pulsar, which uses managed ledgers, but Pulsar SQL reads directly from the BookKeeper ledger. 
+
+If you want to get the last message in a topic, set the following configurations:
+
+1. For the broker configuration, set `bookkeeperExplicitLacIntervalInMills` > 0 and `bookkeeperUseV2WireProtocol=false` in `broker.conf` or `standalone.conf`.
+
+2. For the Presto configuration, set `pulsar.bookkeeper-explicit-interval` > 0 and `pulsar.bookkeeper-use-v2-protocol=false`.
+
+However, keep in mind that using the BookKeeper V3 protocol introduces additional GC overhead to BookKeeper, as it uses Protobuf.
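+
+A minimal sketch of the two settings files described in the steps above (the values are illustrative, and the Presto catalog file path is an assumption):
+
+```properties
+
+# broker.conf (or standalone.conf)
+bookkeeperExplicitLacIntervalInMills=10
+bookkeeperUseV2WireProtocol=false
+
+# Presto catalog configuration, e.g. conf/presto/catalog/pulsar.properties
+pulsar.bookkeeper-explicit-interval=10
+pulsar.bookkeeper-use-v2-protocol=false
+
+```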
 
 ## Query data from existing Presto clusters
 
diff --git a/site2/website-next/versioned_docs/version-2.3.2/admin-api-clusters.md b/site2/website-next/versioned_docs/version-2.3.2/admin-api-clusters.md
index ccd3ebb..3c2f661 100644
--- a/site2/website-next/versioned_docs/version-2.3.2/admin-api-clusters.md
+++ b/site2/website-next/versioned_docs/version-2.3.2/admin-api-clusters.md
@@ -103,8 +103,8 @@ Here's an example cluster metadata initialization command:
 
 bin/pulsar initialize-cluster-metadata \
   --cluster us-west \
-  --zookeeper zk1.us-west.example.com:2181 \
-  --configuration-store zk1.us-west.example.com:2184 \
+  --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+  --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
   --web-service-url http://pulsar.us-west.example.com:8080/ \
   --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
   --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
diff --git a/site2/website-next/versioned_docs/version-2.3.2/administration-load-balance.md b/site2/website-next/versioned_docs/version-2.3.2/administration-load-balance.md
index 834b156..811e8e5 100644
--- a/site2/website-next/versioned_docs/version-2.3.2/administration-load-balance.md
+++ b/site2/website-next/versioned_docs/version-2.3.2/administration-load-balance.md
@@ -155,20 +155,26 @@ loadBalancerSheddingGracePeriodMinutes=30
 
 ```
 
-Pulsar supports three types of shedding strategies:
+Pulsar supports the following types of shedding strategies. From Pulsar 2.10, the **default** shedding strategy is `ThresholdShedder`.
 
 ##### ThresholdShedder
-This strategy tends to shed the bundles if any broker's usage is above the configured threshold. It does this by first computing the average resource usage per broker for the whole cluster. The resource usage for each broker is calculated using the following method: LocalBrokerData#getMaxResourceUsageWithWeight). The weights for each resource are configurable. Historical observations are included in the running average based on the broker's setting for loadBalancerHistoryResourcePercenta [...]
+This strategy tends to shed the bundles if any broker's usage is above the configured threshold. It does this by first computing the average resource usage per broker for the whole cluster. The resource usage for each broker is calculated using the following method: LocalBrokerData#getMaxResourceUsageWithWeight. The weights for each resource are configurable. Historical observations are included in the running average based on the broker's setting for loadBalancerHistoryResourcePercentag [...]
 `loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`
 
+![Shedding strategy - ThresholdShedder](/assets/ThresholdShedder.png)
+
 ##### OverloadShedder
 This strategy will attempt to shed exactly one bundle on brokers which are overloaded, that is, whose maximum system resource usage exceeds loadBalancerBrokerOverloadedThresholdPercentage. To see which resources are considered when determining the maximum system resource. A bundle is recommended for unloading off that broker if and only if the following conditions hold: The broker has at least two bundles assigned and the broker has at least one bundle that has not been unloaded recently [...]
 `loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.OverloadShedder`
 
+![Shedding strategy - OverloadShedder](/assets/OverloadShedder.png)
+
 ##### UniformLoadShedder
 This strategy tends to distribute load uniformly across all brokers. This strategy checks the load difference between the broker with the highest load and the broker with the lowest load. If the difference is higher than the configured thresholds `loadBalancerMsgRateDifferenceShedderThreshold` and `loadBalancerMsgThroughputMultiplierDifferenceShedderThreshold`, it finds bundles that can be unloaded to distribute traffic evenly across all brokers. Configure the broker with the value below to use this strategy.
 `loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder`
 
+![Shedding strategy - UniformLoadShedder](/assets/UniformLoadShedder.png)
+
 #### Broker overload thresholds
 
 The determinations of when a broker is overloaded is based on threshold of CPU, network and memory usage. Whenever either of those metrics reaches the threshold, the system triggers the shedding (if enabled).
diff --git a/site2/website-next/versioned_docs/version-2.3.2/administration-proxy.md b/site2/website-next/versioned_docs/version-2.3.2/administration-proxy.md
index 3cef937..5228b9a 100644
--- a/site2/website-next/versioned_docs/version-2.3.2/administration-proxy.md
+++ b/site2/website-next/versioned_docs/version-2.3.2/administration-proxy.md
@@ -8,22 +8,9 @@ Pulsar proxy is an optional gateway. Pulsar proxy is used when direct connection
 
 ## Configure the proxy
 
-Before using the proxy, you need to configure it with the brokers addresses in the cluster. You can configure the proxy to connect directly to service discovery, or specify a broker URL in the configuration. 
+Before using the proxy, you need to configure it with the broker addresses in the cluster. You can configure the broker URLs in the proxy configuration, or configure the proxy to connect directly using service discovery.
 
-### Use service discovery
-
-Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
-
-```properties
-
-zookeeperServers=zk-0,zk-1,zk-2
-configurationStoreServers=zk-0:2184,zk-remote:2184
-
-```
-
-> To use service discovery, you need to open the network ACLs, so the proxy can connects to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
-
-> However, it is not secure to use service discovery. Because if the network ACL is open, when someone compromises a proxy, they have full access to ZooKeeper. 
+> In a production environment, service discovery is not recommended.
 
 ### Use broker URLs
 
@@ -57,6 +44,21 @@ The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651
 
 Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
 
+### Use service discovery
+
+Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
+
+```properties
+
+metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
+configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
+
+```
+
+> To use service discovery, you need to open the network ACLs so that the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
+
+> However, using service discovery is not secure: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper.
+
 ## Start the proxy
 
 To start the proxy:
@@ -64,7 +66,9 @@ To start the proxy:
 ```bash
 
 $ cd /path/to/pulsar/directory
-$ bin/pulsar proxy
+$ bin/pulsar proxy \
+  --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+  --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
 
 ```
 
diff --git a/site2/website-next/versioned_docs/version-2.3.2/administration-zk-bk.md b/site2/website-next/versioned_docs/version-2.3.2/administration-zk-bk.md
index e5f9688..c0aec95 100644
--- a/site2/website-next/versioned_docs/version-2.3.2/administration-zk-bk.md
+++ b/site2/website-next/versioned_docs/version-2.3.2/administration-zk-bk.md
@@ -147,27 +147,19 @@ $ bin/pulsar-daemon start configuration-store
 
 ### ZooKeeper configuration
 
-In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation: `conf/zookeeper.conf` for [local ZooKeeper](#local-zookeeper) and `conf/global-zookeeper.conf` for [configuration store](#configuration-store).
+In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation:
+* The `conf/zookeeper.conf` file handles the configuration for local ZooKeeper.
+* The `conf/global-zookeeper.conf` file handles the configuration for the configuration store.
+See [parameters](reference-configuration.md#zookeeper) for more details.
 
-#### Local ZooKeeper
+#### Configure batching operations
+Using batching operations reduces the remote procedure call (RPC) traffic between the ZooKeeper client and servers. It also reduces the number of write transactions, because each batching operation corresponds to a single ZooKeeper transaction that contains multiple read and write operations.
 
-The [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file handles the configuration for local ZooKeeper. The table below shows the available parameters:
+The following figure demonstrates a basic benchmark of batching read/write operations that can be requested to ZooKeeper in one second:
 
-|Name|Description|Default|
-|---|---|---|
-|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
-|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
-|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
-|dataDir| The location where ZooKeeper stores in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
-|clientPort|  The port on which the ZooKeeper server listens for connections. |2181|
-|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
-|autopurge.purgeInterval| The time interval, in hours, which triggers the ZooKeeper database purge task. Setting to a non-zero number enables auto purge; setting to 0 disables. Read this guide before enabling auto purge. |1|
-|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
+![Zookeeper batching benchmark](/assets/zookeeper-batching.png)
 
-
-#### Configuration Store
-
-The [`conf/global-zookeeper.conf`](reference-configuration.md#configuration-store) file handles the configuration for configuration store. The table below shows the available parameters:
+To enable batching operations, set the [`metadataStoreBatchingEnabled`](reference-configuration.md#broker) parameter to `true` on the broker side.
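+
+For example, in `conf/broker.conf` (only the enable flag is named in the text above; the tuning parameter names below are assumptions based on the broker configuration reference, with illustrative values):
+
+```properties
+
+# Batch ZooKeeper read/write operations into single transactions
+metadataStoreBatchingEnabled=true
+# Assumed tuning knobs: maximum delay, operation count, and payload size per batch
+metadataStoreBatchingMaxDelayMillis=5
+metadataStoreBatchingMaxOperations=1000
+metadataStoreBatchingMaxSizeKb=128
+
+```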
 
 
 ## BookKeeper
@@ -194,6 +186,12 @@ You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](referenc
 
 The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
 
+:::note
+
+Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
+
+:::
+
 ```properties
 
 # Change to point to journal disk mount point
@@ -205,6 +203,9 @@ ledgerDirectories=data/bookkeeper/ledgers
 # Point to local ZK quorum
 zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
 
+# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally in certain environments (for example, Huawei Cloud).
+advertisedAddress=
+
 ```
 
 To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
diff --git a/site2/website-next/versioned_docs/version-2.3.2/client-libraries-cpp.md b/site2/website-next/versioned_docs/version-2.3.2/client-libraries-cpp.md
index 958861a..b67f6d9 100644
--- a/site2/website-next/versioned_docs/version-2.3.2/client-libraries-cpp.md
+++ b/site2/website-next/versioned_docs/version-2.3.2/client-libraries-cpp.md
@@ -14,7 +14,18 @@ Pulsar C++ client is supported on **Linux** ,**MacOS** and **Windows** platforms
 
 [Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp).
 
-## System requirements
+
+## Linux
+
+:::note
+
+You can choose one of the following installation methods based on your needs: Compilation, Install RPM, or Install Debian.
+
+:::
+
+### Compilation 
+
+#### System requirements
 
 You need to install the following components before using the C++ client:
 
@@ -24,10 +35,6 @@ You need to install the following components before using the C++ client:
 * [libcurl](https://curl.se/libcurl/)
 * [Google Test](https://github.com/google/googletest)
 
-## Linux
-
-### Compilation 
-
 1. Clone the Pulsar repository.
 
 ```shell
@@ -144,7 +151,14 @@ $ rpm -ivh apache-pulsar-client*.rpm
 
 ```
 
-After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory.
+After you install the RPM successfully, the Pulsar libraries are in the `/usr/lib` directory, for example:
+
+```bash
+
+lrwxrwxrwx 1 root root 18 Dec 30 22:21 libpulsar.so -> libpulsar.so.2.9.1
+lrwxrwxrwx 1 root root 23 Dec 30 22:21 libpulsarnossl.so -> libpulsarnossl.so.2.9.1
+
+```
 
 :::note
 
@@ -152,6 +166,15 @@ If you get the error that `libpulsar.so: cannot open shared object file: No such
 
 :::
 
+2. Install GCC and g++ using the following commands; otherwise, errors occur when installing Node.js.
+
+```bash
+
+$ sudo yum -y install gcc automake autoconf libtool make
+$ sudo yum -y install gcc-c++
+
+```
+
 ### Install Debian
 
... 16214 lines suppressed ...