Posted to commits@pulsar.apache.org by ur...@apache.org on 2022/03/21 06:03:04 UTC

[pulsar-site] branch main updated: Docs sync done from apache/pulsar(#d4e0797)

This is an automated email from the ASF dual-hosted git repository.

urfree pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/pulsar-site.git


The following commit(s) were added to refs/heads/main by this push:
     new 3414578  Docs sync done from apache/pulsar(#d4e0797)
3414578 is described below

commit 34145789185907b8f7374230357f0f8a3c236bea
Author: Pulsar Site Updater <de...@pulsar.apache.org>
AuthorDate: Mon Mar 21 06:02:56 2022 +0000

    Docs sync done from apache/pulsar(#d4e0797)
---
 site2/docs/administration-geo.md                   | 60 +++++++++++-----
 site2/docs/assets/active-active-replication.svg    |  1 +
 site2/docs/assets/active-standby-replication.svg   |  1 +
 site2/docs/assets/aggregation-replication.svg      |  1 +
 site2/docs/assets/full-mesh-replication.svg        |  1 +
 site2/docs/assets/geo-replication-async.svg        |  1 +
 site2/docs/assets/geo-replication-sync.svg         |  1 +
 site2/docs/concepts-replication.md                 | 62 ++++++++++++++++-
 site2/docs/security-tls-transport.md               | 15 +++-
 site2/website-next/docs/administration-geo.md      | 80 +++++++++++++++++-----
 site2/website-next/docs/concepts-replication.md    | 62 ++++++++++++++++-
 site2/website-next/docs/security-tls-transport.md  | 19 ++++-
 .../static/assets/active-active-replication.svg    |  1 +
 .../static/assets/active-standby-replication.svg   |  1 +
 .../static/assets/aggregation-replication.svg      |  1 +
 .../static/assets/full-mesh-replication.svg        |  1 +
 .../static/assets/geo-replication-async.svg        |  1 +
 .../static/assets/geo-replication-sync.svg         |  1 +
 .../version-2.2.0/administration-geo.md            | 80 +++++++++++++++++-----
 .../version-2.2.0/concepts-replication.md          | 62 ++++++++++++++++-
 .../version-2.2.1/administration-geo.md            | 80 +++++++++++++++++-----
 .../version-2.2.1/concepts-replication.md          | 62 ++++++++++++++++-
 .../version-2.3.0/administration-geo.md            | 80 +++++++++++++++++-----
 .../version-2.3.0/concepts-replication.md          | 62 ++++++++++++++++-
 .../version-2.3.1/administration-geo.md            | 80 +++++++++++++++++-----
 .../version-2.3.1/concepts-replication.md          | 62 ++++++++++++++++-
 .../version-2.3.2/concepts-replication.md          | 62 ++++++++++++++++-
 .../version-2.4.0/concepts-replication.md          | 62 ++++++++++++++++-
 .../version-2.4.1/concepts-replication.md          | 62 ++++++++++++++++-
 .../version-2.4.2/concepts-replication.md          | 62 ++++++++++++++++-
 .../version-2.5.0/concepts-replication.md          | 62 ++++++++++++++++-
 .../version-2.5.2/concepts-replication.md          | 62 ++++++++++++++++-
 32 files changed, 1128 insertions(+), 122 deletions(-)

diff --git a/site2/docs/administration-geo.md b/site2/docs/administration-geo.md
index 5e425089..bef1c16 100644
--- a/site2/docs/administration-geo.md
+++ b/site2/docs/administration-geo.md
@@ -4,26 +4,16 @@ title: Pulsar geo-replication
 sidebar_label: Geo-replication
 ---
 
-*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+## Enable geo-replication for a namespace
 
-## How geo-replication works
+You must enable geo-replication on a [per-tenant basis](#concepts-multi-tenancy) in Pulsar. You can enable geo-replication between two specific clusters only when a tenant has access to both clusters.
 
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
+Geo-replication is managed at the namespace level, which means you only need to create and configure a namespace to replicate messages between two or more provisioned clusters that a tenant can access.
 
-![Replication Diagram](assets/geo-replication.png)
+Complete the following tasks to enable geo-replication for a namespace:
 
-In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
-
-Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
-
-## Geo-replication and Pulsar properties
-
-You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, actually geo-replication is managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
-
-* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
-* Configure that namespace to replicate across two or more provisioned clusters
+* [Enable a geo-replication namespace](#enable-geo-replication-at-namespace-level)
+* [Configure that namespace to replicate across two or more provisioned clusters](admin-api-namespaces.md/#configure-replication-clusters)
 
 Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
 
@@ -37,13 +27,19 @@ Applications can create producers and consumers in any of the clusters, even whe
 
 Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created; they can also be transferred between clusters once replicated subscriptions are enabled. With replicated subscriptions, you can keep subscription state in sync across clusters, so a topic can be asynchronously replicated across multiple geographical regions. In case of failover [...]
 
+![A typical geo-replication example with full-mesh pattern](assets/geo-replication.png)
+
 In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
 
 All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
 
 ## Configure replication
 
-The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
+This section guides you through the steps to configure geo-replicated clusters.
+1. [Connect replication clusters](#connect-replication-clusters)
+2. [Grant permissions to properties](#grant-permissions-to-properties)
+3. [Enable geo-replication](#enable-geo-replication)
+4. [Use topics with geo-replication](#use-topics-with-geo-replication)
 
 ### Connect replication clusters
 
@@ -215,4 +211,32 @@ Consumer<String> consumer = client.newConsumer(Schema.STRING)
 ### Limitations
 
 * When you enable replicated subscriptions, you create a consistent distributed snapshot that establishes an association between message IDs from different clusters. Snapshots are taken periodically; the default interval is `1 second`, which means a consumer failing over to a different cluster can potentially receive up to 1 second of duplicates. You can configure the frequency of the snapshot in the `broker.conf` file.
-* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover.
\ No newline at end of file
+* Only the baseline cursor position is synced in replicated subscriptions; individual acknowledgments are not. This means that messages acknowledged out of order could be delivered again in the case of a cluster failover.
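The snapshot interval noted above maps to broker settings. As a hedged sketch, the relevant `broker.conf` entries look like the following; parameter names assume a recent broker version and should be checked against yours:

```properties
# How often replicated-subscription snapshots are taken (default 1000 ms)
replicatedSubscriptionsSnapshotFrequencyMillis=1000
# Give up on a snapshot if remote clusters do not respond within this timeout
replicatedSubscriptionsSnapshotTimeoutSeconds=30
```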
+
+## Migrate data between clusters using geo-replication
+
+Using geo-replication to migrate data between clusters is a special use case of the [active-active replication pattern](concepts-replication.md/#active-active-replication) when you don't have a large amount of data.
+
+1. Create your new cluster.
+2. Add the new cluster to your old cluster.
+```shell
+  bin/pulsar-admin clusters create new-cluster
+```
+3. Add the new cluster to your tenant.
+```shell
+  bin/pulsar-admin tenants update my-tenant --allowed-clusters old-cluster,new-cluster
+```
+4. Set the clusters on your namespace.
+```shell
+  bin/pulsar-admin namespaces set-clusters my-tenant/my-ns --clusters old-cluster,new-cluster
+```
+5. Update your applications using [replicated subscriptions](#replicated-subscriptions).
+6. Validate subscription replication is active.
+```shell
+  bin/pulsar-admin topics stats-internal public/default/t1
+```
+7. Move your consumers and producers to the new cluster by modifying the values of `serviceURL`.
+
+> Note
+> * The replication starts from step 4, which means existing messages in your old cluster are not replicated. 
> * If you have older messages to migrate, you can pre-create the replication subscription for each topic and set it to the earliest position by using `pulsar-admin topics create-subscription -s pulsar.repl.new-cluster -m earliest <topic>`.
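The pre-creation step in the note above can be scripted across all topics in a namespace. A hedged sketch, assuming `pulsar-admin` is reachable via the `PULSAR_ADMIN` path below and that `my-tenant/my-ns` and `new-cluster` are example names:

```shell
# Pre-create the replication cursor at the earliest position for every
# topic in a namespace, so messages published before the migration are
# replicated too. All names below are examples.
PULSAR_ADMIN="${PULSAR_ADMIN:-bin/pulsar-admin}"
NAMESPACE="my-tenant/my-ns"
for topic in $("$PULSAR_ADMIN" topics list "$NAMESPACE"); do
  "$PULSAR_ADMIN" topics create-subscription \
    -s "pulsar.repl.new-cluster" -m earliest "$topic"
done
```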
diff --git a/site2/docs/assets/active-active-replication.svg b/site2/docs/assets/active-active-replication.svg
new file mode 100644
index 0000000..ee7d5b8
--- /dev/null
+++ b/site2/docs/assets/active-active-replication.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="1042" height="643.05"><g transform="translate(-379 -100)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M680 126c0-3.3 2.7-6 6-6h428c3.3 0 6 2.7 6 6v251.05c0 3.3-2.7 6-6 6H686c-3.3 0-6-2.7-6-6z" fill="#fff"/><path d="M681.5 126c0 .83-.67 1.5-1.5 1.5s-1.5-.67-1.5-1.5.67-1.5 1.5-1.5 1.5.67 1.5 1.5zm1.9-4.1c0 .82-.68 1.5-1.5 1.5-.83 0-1.5-.68-1.5-1. [...]
\ No newline at end of file
diff --git a/site2/docs/assets/active-standby-replication.svg b/site2/docs/assets/active-standby-replication.svg
new file mode 100644
index 0000000..70bccf3
--- /dev/null
+++ b/site2/docs/assets/active-standby-replication.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="1096" height="365.89"><g transform="translate(-200 -196.95312499999994)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M220 222.95c0-3.3 2.7-6 6-6h268c3.3 0 6 2.7 6 6V494c0 3.3-2.7 6-6 6H226c-3.3 0-6-2.7-6-6z" fill="#fff"/><path d="M221.5 222.95c0 .83-.67 1.5-1.5 1.5s-1.5-.67-1.5-1.5.67-1.5 1.5-1.5 1.5.67 1.5 1.5zm1.9-4.1c0 .83-.68 1.5-1.5 1.5-.8 [...]
\ No newline at end of file
diff --git a/site2/docs/assets/aggregation-replication.svg b/site2/docs/assets/aggregation-replication.svg
new file mode 100644
index 0000000..a9b10cc
--- /dev/null
+++ b/site2/docs/assets/aggregation-replication.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="1020" height="528.48"><g transform="translate(-340 -380)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M722.67 786c0-3.3 2.68-6 6-6h228c3.3 0 6 2.7 6 6v96.48c0 3.3-2.7 6-6 6h-228c-3.32 0-6-2.7-6-6z" fill="#fff"/><path d="M724.17 786c0 .83-.67 1.5-1.5 1.5s-1.5-.67-1.5-1.5.67-1.5 1.5-1.5 1.5.67 1.5 1.5zm1.9-4.1c0 .82-.68 1.5-1.5 1.5-.84 0-1.5-.68- [...]
\ No newline at end of file
diff --git a/site2/docs/assets/full-mesh-replication.svg b/site2/docs/assets/full-mesh-replication.svg
new file mode 100644
index 0000000..69a2b48
--- /dev/null
+++ b/site2/docs/assets/full-mesh-replication.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="1260" height="614.11"><g transform="translate(-280 -196.953125)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M762.67 574.94c0-3.3 2.68-6 6-6h268c3.3 0 6 2.7 6 6v210.12c0 3.3-2.7 6-6 6h-268c-3.32 0-6-2.7-6-6z" fill="#fff"/><path d="M764.17 574.94c0 .83-.67 1.5-1.5 1.5s-1.5-.67-1.5-1.5.67-1.5 1.5-1.5 1.5.67 1.5 1.5zm1.9-4.1c0 .82-.68 1.5-1.5 1.5- [...]
\ No newline at end of file
diff --git a/site2/docs/assets/geo-replication-async.svg b/site2/docs/assets/geo-replication-async.svg
new file mode 100644
index 0000000..4280e14
--- /dev/null
+++ b/site2/docs/assets/geo-replication-async.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="1100" height="830"><g transform="translate(-280 -360)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M360 686.94c0-3.32 2.7-6 6-6h188c3.3 0 6 2.68 6 6v89.52c0 3.3-2.7 6-6 6H366c-3.3 0-6-2.7-6-6z" stroke="#188fff" stroke-width="3" fill="#fff"/><use xlink:href="#a" transform="matrix(1,0,0,1,365,685.9352387843705) translate(54.99348958333334 55.1271 [...]
\ No newline at end of file
diff --git a/site2/docs/assets/geo-replication-sync.svg b/site2/docs/assets/geo-replication-sync.svg
new file mode 100644
index 0000000..56797b3
--- /dev/null
+++ b/site2/docs/assets/geo-replication-sync.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="980" height="830"><g transform="translate(-340 -360)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M360 686.94c0-3.32 2.7-6 6-6h188c3.3 0 6 2.68 6 6v89.52c0 3.3-2.7 6-6 6H366c-3.3 0-6-2.7-6-6z" stroke="#5e5e5e" stroke-width="3" fill="#fff"/><use xlink:href="#a" transform="matrix(1,0,0,1,365,685.9352387843705) translate(54.99348958333334 55.12717 [...]
\ No newline at end of file
diff --git a/site2/docs/concepts-replication.md b/site2/docs/concepts-replication.md
index 3d1c823..52cef91 100644
--- a/site2/docs/concepts-replication.md
+++ b/site2/docs/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: Geo Replication
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.
+Regardless of industry, when an unforeseen event brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to restore service to clients quickly. A disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers, and such a deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application may publish data in one region that needs to be consumed and processed in other regions. With Pulsar's geo-replication mechanism, messages can be produced and consumed in different geo-locations.
+
+The diagram below illustrates the process of [geo-replication](administration-geo.md). When three producers (P1, P2, and P3) publish messages to the T1 topic in their respective clusters, those messages are instantly replicated across all three clusters. Once the messages are replicated, the consumers (C1 and C2) can consume them from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+The geo-replication mechanism can be categorized into two strategies: synchronous and asynchronous geo-replication. Pulsar supports both.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different data centers. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers.
+
+![An example of asynchronous geo-replication mechanism](assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but weaker consistency guarantees, because replication lag means some data may not yet have been replicated to remote clusters.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is achieved through BookKeeper. A synchronous geo-replicated deployment consists of a cluster of bookies and a cluster of brokers running in multiple data centers, plus a global ZooKeeper installation (a ZooKeeper ensemble running across those data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and to guarantee availability constraints on writes.
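As a hedged illustration, the region-aware placement policy is enabled on the broker's BookKeeper client; the parameter name below assumes a recent `broker.conf` and should be verified against your version:

```properties
# Use the region-aware ensemble placement policy so that each write quorum
# spans multiple regions (data centers)
bookkeeperClientRegionawarePolicyEnabled=true
```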
+
+Synchronous geo-replication provides the highest availability and also guarantees stronger data consistency between different data centers. However, your applications have to pay an extra latency penalty across data centers.
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing your replication strategy. You can set up different replication patterns to serve your application's needs across multiple data centers.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying [selective message replication](administration-geo.md/#selective-replication), you can customize your replication strategies and topologies between any number of data centers.
+
+![An example of full-mesh replication pattern](assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication, with only two data centers. Producers are able to run at any data center to produce messages, and consumers can consume all messages from all data centers.
+
+![An example of active-active replication pattern](assets/active-active-replication.svg)
+
+For details about using active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md/#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have three clusters in three fronting data centers and one aggregated cluster in a central data center, and you want to replicate messages from the fronting data centers to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those na [...]
+
+![An example of aggregation replication pattern](assets/aggregation-replication.svg)
diff --git a/site2/docs/security-tls-transport.md b/site2/docs/security-tls-transport.md
index 3fb9ca0..50cae13 100644
--- a/site2/docs/security-tls-transport.md
+++ b/site2/docs/security-tls-transport.md
@@ -57,13 +57,26 @@ chmod 700 private/
 touch index.txt
 echo 1000 > serial
 openssl genrsa -aes256 -out private/ca.key.pem 4096
+# You need to enter a password for the command above
 chmod 400 private/ca.key.pem
 openssl req -config openssl.cnf -key private/ca.key.pem \
     -new -x509 -days 7300 -sha256 -extensions v3_ca \
     -out certs/ca.cert.pem
+# You must enter the same password as in the previous openssl command
 chmod 444 certs/ca.cert.pem
 ```
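To check the resulting CA certificate, you can inspect it with `openssl x509`. A self-contained sketch that uses a throwaway key and an example subject rather than the CA generated above:

```shell
# Create a throwaway self-signed certificate in a temp dir, then print its
# subject and expiry. For the real CA, point -in at certs/ca.cert.pem.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/ca.key.pem" 2048
openssl req -key "$tmp/ca.key.pem" -new -x509 -days 7300 -sha256 \
    -subj "/CN=Example-CA" -out "$tmp/ca.cert.pem"
openssl x509 -in "$tmp/ca.cert.pem" -noout -subject -enddate
```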
 
+> **Tips**
+>
> The default `openssl` on macOS doesn't work for the commands above. Upgrade `openssl` via Homebrew:
+>
+> ```bash
+> brew install openssl
+> export PATH="/usr/local/Cellar/openssl@3/3.0.1/bin:$PATH"
+> ```
+>
> The version `3.0.1` might change in the future. Use the actual path from the output of the `brew install` command.
+
 4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory:
 
 * `certs/ca.cert.pem` is the public certificate. This public certificate is meant to be distributed to all parties involved.
@@ -259,4 +272,4 @@ var client = PulsarClient.Builder()
                          .VerifyCertificateName(false)     //Default is 'false'
                          .Build();
 ```
-> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
\ No newline at end of file
+> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
diff --git a/site2/website-next/docs/administration-geo.md b/site2/website-next/docs/administration-geo.md
index 2694762..c4463fb 100644
--- a/site2/website-next/docs/administration-geo.md
+++ b/site2/website-next/docs/administration-geo.md
@@ -10,26 +10,16 @@ import TabItem from '@theme/TabItem';
 ````
 
 
-*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+## Enable geo-replication for a namespace
 
-## How geo-replication works
+You must enable geo-replication on a [per-tenant basis](#concepts-multi-tenancy) in Pulsar. You can enable geo-replication between two specific clusters only when a tenant has access to both clusters.
 
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
+Geo-replication is managed at the namespace level, which means you only need to create and configure a namespace to replicate messages between two or more provisioned clusters that a tenant can access.
 
-![Replication Diagram](/assets/geo-replication.png)
+Complete the following tasks to enable geo-replication for a namespace:
 
-In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
-
-Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
-
-## Geo-replication and Pulsar properties
-
-You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, actually geo-replication is managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
-
-* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
-* Configure that namespace to replicate across two or more provisioned clusters
+* [Enable a geo-replication namespace](#enable-geo-replication-at-namespace-level)
+* [Configure that namespace to replicate across two or more provisioned clusters](admin-api-namespaces.md/#configure-replication-clusters)
 
 Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
 
@@ -43,13 +33,19 @@ Applications can create producers and consumers in any of the clusters, even whe
 
 Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created; they can also be transferred between clusters once replicated subscriptions are enabled. With replicated subscriptions, you can keep subscription state in sync across clusters, so a topic can be asynchronously replicated across multiple geographical regions. In case of failover [...]
 
+![A typical geo-replication example with full-mesh pattern](/assets/geo-replication.png)
+
 In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
 
 All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
 
 ## Configure replication
 
-The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
+This section guides you through the steps to configure geo-replicated clusters.
+1. [Connect replication clusters](#connect-replication-clusters)
+2. [Grant permissions to properties](#grant-permissions-to-properties)
+3. [Enable geo-replication](#enable-geo-replication)
+4. [Use topics with geo-replication](#use-topics-with-geo-replication)
 
 ### Connect replication clusters
 
@@ -250,4 +246,52 @@ Consumer<String> consumer = client.newConsumer(Schema.STRING)
 ### Limitations
 
 * When you enable replicated subscriptions, you create a consistent distributed snapshot that establishes an association between message IDs from different clusters. Snapshots are taken periodically; the default interval is `1 second`, which means a consumer failing over to a different cluster can potentially receive up to 1 second of duplicates. You can configure the frequency of the snapshot in the `broker.conf` file.
-* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover.
\ No newline at end of file
+* Only the baseline cursor position is synced in replicated subscriptions; individual acknowledgments are not. This means that messages acknowledged out of order could be delivered again in the case of a cluster failover.
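The snapshot interval above is governed by broker settings. As a hedged sketch, the relevant `broker.conf` entries look like the following; parameter names assume a recent broker version and should be checked against yours:

```properties
# How often replicated-subscription snapshots are taken (default 1000 ms)
replicatedSubscriptionsSnapshotFrequencyMillis=1000
# Give up on a snapshot if remote clusters do not respond within this timeout
replicatedSubscriptionsSnapshotTimeoutSeconds=30
```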
+
+## Migrate data between clusters using geo-replication
+
+Using geo-replication to migrate data between clusters is a special use case of the [active-active replication pattern](concepts-replication.md/#active-active-replication) when you don't have a large amount of data.
+
+1. Create your new cluster.
+2. Add the new cluster to your old cluster.
+
+```shell
+
+  bin/pulsar-admin clusters create new-cluster
+
+```
+
+3. Add the new cluster to your tenant.
+
+```shell
+
+  bin/pulsar-admin tenants update my-tenant --allowed-clusters old-cluster,new-cluster
+
+```
+
+4. Set the clusters on your namespace.
+
+```shell
+
+  bin/pulsar-admin namespaces set-clusters my-tenant/my-ns --clusters old-cluster,new-cluster
+
+```
+
+5. Update your applications using [replicated subscriptions](#replicated-subscriptions).
+6. Validate subscription replication is active.
+
+```shell
+
+  bin/pulsar-admin topics stats-internal public/default/t1
+
+```
+
+7. Move your consumers and producers to the new cluster by modifying the values of `serviceURL`.
+
+:::note
+
+* The replication starts from step 4, which means existing messages in your old cluster are not replicated. 
+* If you have older messages to migrate, you can pre-create the replication subscription for each topic and set it to the earliest position by using `pulsar-admin topics create-subscription -s pulsar.repl.new-cluster -m earliest <topic>`.
+
+:::
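The pre-creation step in the note above can be scripted across all topics in a namespace. A hedged sketch, assuming `pulsar-admin` is reachable via the `PULSAR_ADMIN` path below and that `my-tenant/my-ns` and `new-cluster` are example names:

```shell
# Pre-create the replication cursor at the earliest position for every
# topic in a namespace, so messages published before the migration are
# replicated too. All names below are examples.
PULSAR_ADMIN="${PULSAR_ADMIN:-bin/pulsar-admin}"
NAMESPACE="my-tenant/my-ns"
for topic in $("$PULSAR_ADMIN" topics list "$NAMESPACE"); do
  "$PULSAR_ADMIN" topics create-subscription \
    -s "pulsar.repl.new-cluster" -m earliest "$topic"
done
```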
+
diff --git a/site2/website-next/docs/concepts-replication.md b/site2/website-next/docs/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/docs/concepts-replication.md
+++ b/site2/website-next/docs/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industry, when an unforeseen event brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to restore service to clients quickly. A disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers, and such a deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application may publish data in one region that needs to be consumed and processed in other regions. With Pulsar's geo-replication mechanism, messages can be produced and consumed in different geo-locations.
+
+The diagram below illustrates the process of [geo-replication](administration-geo). When three producers (P1, P2, and P3) publish messages to the T1 topic in their respective clusters, those messages are instantly replicated across all three clusters. Once the messages are replicated, the consumers (C1 and C2) can consume them from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+The geo-replication mechanism can be categorized into two strategies: synchronous and asynchronous geo-replication. Pulsar supports both.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different data centers. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers.
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but weaker consistency guarantees, because replication lag means some data may not yet have been replicated to the remote clusters.
+
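+As a sketch of how a namespace is typically configured for asynchronous replication (the tenant, namespace, and cluster names below are illustrative only), you assign the namespace to the clusters it should replicate across:
+
+```shell
+
+  # Hypothetical names; replace with your own tenant/namespace and clusters
+  bin/pulsar-admin namespaces set-clusters my-tenant/my-ns --cluster us-west,us-east
+
+```
+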
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is achieved by BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies and a cluster of brokers that run in multiple data centers, plus a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and to guarantee availability constraints on writes.
+
+Synchronous geo-replication provides the highest availability and also guarantees stronger data consistency between different data centers. However, your applications have to pay an extra latency penalty across data centers.
+
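+A minimal sketch of the broker-side switch involved (the property name is taken from `broker.conf`; verify it against your Pulsar version before relying on it):
+
+```properties
+
+# broker.conf: use the region-aware placement policy instead of the default
+# rack-aware policy so that write quorums span regions
+bookkeeperClientRegionawarePolicyEnabled=true
+
+```
+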
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing replication. You can set up different replication patterns between multiple data centers to serve the strategy of your application.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying the [selective message replication](administration-geo.md/#selective-replication), you can customize your replication strategies and topologies between any number of datacenters.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run at either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md/#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting datacenters and one aggregated cluster in a central data center, and you want to replicate messages from multiple fronting datacenters to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those na [...]
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
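+
+A sketch of the commands involved, assuming hypothetical clusters `edge-1`, `edge-2`, `edge-3` and an aggregation cluster `central` (all names are illustrative):
+
+```shell
+
+  # One namespace per fronting data center, replicated to that edge and to central
+  bin/pulsar-admin namespaces create my-tenant/edge-1-ns
+  bin/pulsar-admin namespaces set-clusters my-tenant/edge-1-ns --cluster edge-1,central
+
+```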
diff --git a/site2/website-next/docs/security-tls-transport.md b/site2/website-next/docs/security-tls-transport.md
index 78dd16a..18c5da9 100644
--- a/site2/website-next/docs/security-tls-transport.md
+++ b/site2/website-next/docs/security-tls-transport.md
@@ -60,14 +60,31 @@ chmod 700 private/
 touch index.txt
 echo 1000 > serial
 openssl genrsa -aes256 -out private/ca.key.pem 4096
+# You need to enter a password when running the command above
 chmod 400 private/ca.key.pem
 openssl req -config openssl.cnf -key private/ca.key.pem \
     -new -x509 -days 7300 -sha256 -extensions v3_ca \
     -out certs/ca.cert.pem
+# You must enter the same password you set in the genrsa command above
 chmod 444 certs/ca.cert.pem
 
 ```
 
+:::tip
+
+The default `openssl` on macOS doesn't work for the commands above. You must upgrade `openssl` via Homebrew:
+
+```bash
+
+brew install openssl
+export PATH="/usr/local/Cellar/openssl@3/3.0.1/bin:$PATH"
+
+```
+
+The version `3.0.1` might change in the future. Use the actual path from the output of the `brew install` command.
+
+:::
+
 4. After you answer the question prompts, CA-related files are stored in the `./my-ca` directory. Within that directory:
 
 * `certs/ca.cert.pem` is the public certificate. This public certificate is meant to be distributed to all parties involved.
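+
+To sanity-check the generated CA certificate, you can optionally inspect its subject and validity period:
+
+```bash
+
+openssl x509 -in certs/ca.cert.pem -noout -subject -dates
+
+```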
@@ -292,4 +309,4 @@ var client = PulsarClient.Builder()
 
 ```
 
-> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
\ No newline at end of file
+> Note that `VerifyCertificateName` refers to the configuration of hostname verification in the C# client.
diff --git a/site2/website-next/static/assets/active-active-replication.svg b/site2/website-next/static/assets/active-active-replication.svg
new file mode 100644
index 0000000..ee7d5b8
--- /dev/null
+++ b/site2/website-next/static/assets/active-active-replication.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="1042" height="643.05"><g transform="translate(-379 -100)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M680 126c0-3.3 2.7-6 6-6h428c3.3 0 6 2.7 6 6v251.05c0 3.3-2.7 6-6 6H686c-3.3 0-6-2.7-6-6z" fill="#fff"/><path d="M681.5 126c0 .83-.67 1.5-1.5 1.5s-1.5-.67-1.5-1.5.67-1.5 1.5-1.5 1.5.67 1.5 1.5zm1.9-4.1c0 .82-.68 1.5-1.5 1.5-.83 0-1.5-.68-1.5-1. [...]
\ No newline at end of file
diff --git a/site2/website-next/static/assets/active-standby-replication.svg b/site2/website-next/static/assets/active-standby-replication.svg
new file mode 100644
index 0000000..70bccf3
--- /dev/null
+++ b/site2/website-next/static/assets/active-standby-replication.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="1096" height="365.89"><g transform="translate(-200 -196.95312499999994)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M220 222.95c0-3.3 2.7-6 6-6h268c3.3 0 6 2.7 6 6V494c0 3.3-2.7 6-6 6H226c-3.3 0-6-2.7-6-6z" fill="#fff"/><path d="M221.5 222.95c0 .83-.67 1.5-1.5 1.5s-1.5-.67-1.5-1.5.67-1.5 1.5-1.5 1.5.67 1.5 1.5zm1.9-4.1c0 .83-.68 1.5-1.5 1.5-.8 [...]
\ No newline at end of file
diff --git a/site2/website-next/static/assets/aggregation-replication.svg b/site2/website-next/static/assets/aggregation-replication.svg
new file mode 100644
index 0000000..a9b10cc
--- /dev/null
+++ b/site2/website-next/static/assets/aggregation-replication.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="1020" height="528.48"><g transform="translate(-340 -380)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M722.67 786c0-3.3 2.68-6 6-6h228c3.3 0 6 2.7 6 6v96.48c0 3.3-2.7 6-6 6h-228c-3.32 0-6-2.7-6-6z" fill="#fff"/><path d="M724.17 786c0 .83-.67 1.5-1.5 1.5s-1.5-.67-1.5-1.5.67-1.5 1.5-1.5 1.5.67 1.5 1.5zm1.9-4.1c0 .82-.68 1.5-1.5 1.5-.84 0-1.5-.68- [...]
\ No newline at end of file
diff --git a/site2/website-next/static/assets/full-mesh-replication.svg b/site2/website-next/static/assets/full-mesh-replication.svg
new file mode 100644
index 0000000..69a2b48
--- /dev/null
+++ b/site2/website-next/static/assets/full-mesh-replication.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="1260" height="614.11"><g transform="translate(-280 -196.953125)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M762.67 574.94c0-3.3 2.68-6 6-6h268c3.3 0 6 2.7 6 6v210.12c0 3.3-2.7 6-6 6h-268c-3.32 0-6-2.7-6-6z" fill="#fff"/><path d="M764.17 574.94c0 .83-.67 1.5-1.5 1.5s-1.5-.67-1.5-1.5.67-1.5 1.5-1.5 1.5.67 1.5 1.5zm1.9-4.1c0 .82-.68 1.5-1.5 1.5- [...]
\ No newline at end of file
diff --git a/site2/website-next/static/assets/geo-replication-async.svg b/site2/website-next/static/assets/geo-replication-async.svg
new file mode 100644
index 0000000..4280e14
--- /dev/null
+++ b/site2/website-next/static/assets/geo-replication-async.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="1100" height="830"><g transform="translate(-280 -360)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M360 686.94c0-3.32 2.7-6 6-6h188c3.3 0 6 2.68 6 6v89.52c0 3.3-2.7 6-6 6H366c-3.3 0-6-2.7-6-6z" stroke="#188fff" stroke-width="3" fill="#fff"/><use xlink:href="#a" transform="matrix(1,0,0,1,365,685.9352387843705) translate(54.99348958333334 55.1271 [...]
\ No newline at end of file
diff --git a/site2/website-next/static/assets/geo-replication-sync.svg b/site2/website-next/static/assets/geo-replication-sync.svg
new file mode 100644
index 0000000..56797b3
--- /dev/null
+++ b/site2/website-next/static/assets/geo-replication-sync.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:lucid="lucid" width="980" height="830"><g transform="translate(-340 -360)" lucid:page-tab-id="0_0"><path d="M0 0h1870.87v1322.83H0z" fill="#fff"/><path d="M360 686.94c0-3.32 2.7-6 6-6h188c3.3 0 6 2.68 6 6v89.52c0 3.3-2.7 6-6 6H366c-3.3 0-6-2.7-6-6z" stroke="#5e5e5e" stroke-width="3" fill="#fff"/><use xlink:href="#a" transform="matrix(1,0,0,1,365,685.9352387843705) translate(54.99348958333334 55.12717 [...]
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.2.0/administration-geo.md b/site2/website-next/versioned_docs/version-2.2.0/administration-geo.md
index 2694762..c4463fb 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/administration-geo.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/administration-geo.md
@@ -10,26 +10,16 @@ import TabItem from '@theme/TabItem';
 ````
 
 
-*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+## Enable geo-replication for a namespace
 
-## How geo-replication works
+You must enable geo-replication on a [per-tenant basis](concepts-multi-tenancy.md) in Pulsar. For example, you can enable geo-replication between two specific clusters only when a tenant has access to both clusters.
 
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
+Geo-replication is managed at the namespace level, which means you only need to create and configure a namespace to replicate messages between two or more provisioned clusters that a tenant can access.
 
-![Replication Diagram](/assets/geo-replication.png)
+Complete the following tasks to enable geo-replication for a namespace:
 
-In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
-
-Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
-
-## Geo-replication and Pulsar properties
-
-You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, actually geo-replication is managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
-
-* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
-* Configure that namespace to replicate across two or more provisioned clusters
+* [Enable a geo-replication namespace](#enable-geo-replication-at-namespace-level)
+* [Configure that namespace to replicate across two or more provisioned clusters](admin-api-namespaces.md/#configure-replication-clusters)
 
 Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
 
@@ -43,13 +33,19 @@ Applications can create producers and consumers in any of the clusters, even whe
 
 Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created but can also be transferred between clusters once replicated subscriptions are enabled. With replicated subscriptions enabled, you can keep subscription state in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover [...]
 
+![A typical geo-replication example with full-mesh pattern](/assets/geo-replication.png)
+
 In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
 
 All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
 
 ## Configure replication
 
-The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
+This section guides you through the steps to configure geo-replicated clusters.
+1. [Connect replication clusters](#connect-replication-clusters)
+2. [Grant permissions to properties](#grant-permissions-to-properties)
+3. [Enable geo-replication](#enable-geo-replication)
+4. [Use topics with geo-replication](#use-topics-with-geo-replication)
 
 ### Connect replication clusters
 
@@ -250,4 +246,52 @@ Consumer<String> consumer = client.newConsumer(Schema.STRING)
 ### Limitations
 
 * When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file.
-* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover.
\ No newline at end of file
+* Only the baseline cursor position is synced in replicated subscriptions; individual acknowledgments are not synced. This means that messages acknowledged out of order could end up being delivered again in the case of a cluster failover.
+
+## Migrate data between clusters using geo-replication
+
+Using geo-replication to migrate data between clusters is a special use case of the [active-active replication pattern](concepts-replication.md/#active-active-replication) when you don't have a large amount of data.
+
+1. Create your new cluster.
+2. Add the new cluster to your old cluster.
+
+```shell
+
+  bin/pulsar-admin cluster create new-cluster
+
+```
+
+3. Add the new cluster to your tenant.
+
+```shell
+
+  bin/pulsar-admin tenants update my-tenant --cluster old-cluster,new-cluster
+
+```
+
+4. Set the clusters on your namespace.
+
+```shell
+
+  bin/pulsar-admin namespaces set-clusters my-tenant/my-ns --cluster old-cluster,new-cluster
+
+```
+
+5. Update your applications using [replicated subscriptions](#replicated-subscriptions).
+6. Validate subscription replication is active.
+
+```shell
+
+  bin/pulsar-admin topics stats-internal public/default/t1
+
+```
+
+7. Move your consumers and producers to the new cluster by modifying the values of `serviceURL`.
+
+:::note
+
+* The replication starts from step 4, which means existing messages in your old cluster are not replicated. 
+* If you have some older messages to migrate, you can pre-create the replication subscriptions for each topic and set them to the earliest position by using `pulsar-admin topics create-subscription -s pulsar.repl.new-cluster -m earliest <topic>`.
+
+:::
+
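+Before moving traffic off the old cluster, it can help to confirm that replication has caught up. The topic stats include a per-cluster `replication` section (exact field names may vary across versions):
+
+```shell
+
+  bin/pulsar-admin topics stats persistent://public/default/t1
+  # In the output, check the "replication" section: a replicationBacklog of 0
+  # for the new cluster indicates it has caught up
+
+```
+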
diff --git a/site2/website-next/versioned_docs/version-2.2.0/concepts-replication.md b/site2/website-next/versioned_docs/version-2.2.0/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/versioned_docs/version-2.2.0/concepts-replication.md
+++ b/site2/website-next/versioned_docs/version-2.2.0/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industries, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to quickly restore service to clients. However, a disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers. Such a multi-datacenter deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application is publishing data in one region and you would like to process it for consumption in other regions. With Pulsar’s geo-replication mechanism, messages can be produced and consumed in different geo-locations. 
+
+The diagram below illustrates the process of [geo-replication](administration-geo). When three producers (P1, P2, and P3) publish messages to the T1 topic in their respective clusters, those messages are instantly replicated across clusters. Once the messages are replicated, the two consumers (C1 and C2) can consume those messages from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+Geo-replication can be implemented either synchronously or asynchronously. Pulsar supports both mechanisms.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers. 
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but weaker consistency guarantees, because replication lag means some data may not yet have been replicated to the remote clusters.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is achieved by BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies and a cluster of brokers that run in multiple data centers, plus a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and to guarantee availability constraints on writes.
+
+Synchronous geo-replication provides the highest availability and also guarantees stronger data consistency between different data centers. However, your applications have to pay an extra latency penalty across data centers.
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing replication. You can set up different replication patterns between multiple data centers to serve the strategy of your application.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying the [selective message replication](administration-geo.md/#selective-replication), you can customize your replication strategies and topologies between any number of datacenters.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run at either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md/#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting datacenters and one aggregated cluster in a central data center, and you want to replicate messages from multiple fronting datacenters to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those na [...]
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
diff --git a/site2/website-next/versioned_docs/version-2.2.1/administration-geo.md b/site2/website-next/versioned_docs/version-2.2.1/administration-geo.md
index 2694762..c4463fb 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/administration-geo.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/administration-geo.md
@@ -10,26 +10,16 @@ import TabItem from '@theme/TabItem';
 ````
 
 
-*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+## Enable geo-replication for a namespace
 
-## How geo-replication works
+You must enable geo-replication on a [per-tenant basis](concepts-multi-tenancy.md) in Pulsar. For example, you can enable geo-replication between two specific clusters only when a tenant has access to both clusters.
 
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
+Geo-replication is managed at the namespace level, which means you only need to create and configure a namespace to replicate messages between two or more provisioned clusters that a tenant can access.
 
-![Replication Diagram](/assets/geo-replication.png)
+Complete the following tasks to enable geo-replication for a namespace:
 
-In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
-
-Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
-
-## Geo-replication and Pulsar properties
-
-You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, actually geo-replication is managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
-
-* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
-* Configure that namespace to replicate across two or more provisioned clusters
+* [Enable a geo-replication namespace](#enable-geo-replication-at-namespace-level)
+* [Configure that namespace to replicate across two or more provisioned clusters](admin-api-namespaces.md/#configure-replication-clusters)
 
 Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
 
@@ -43,13 +33,19 @@ Applications can create producers and consumers in any of the clusters, even whe
 
 Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created but can also be transferred between clusters once replicated subscriptions are enabled. With replicated subscriptions enabled, you can keep subscription state in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover [...]
 
+![A typical geo-replication example with full-mesh pattern](/assets/geo-replication.png)
+
 In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
 
 All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
 
 ## Configure replication
 
-The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
+This section guides you through the steps to configure geo-replicated clusters.
+1. [Connect replication clusters](#connect-replication-clusters)
+2. [Grant permissions to properties](#grant-permissions-to-properties)
+3. [Enable geo-replication](#enable-geo-replication)
+4. [Use topics with geo-replication](#use-topics-with-geo-replication)
 
 ### Connect replication clusters
 
@@ -250,4 +246,52 @@ Consumer<String> consumer = client.newConsumer(Schema.STRING)
 ### Limitations
 
 * When you enable replicated subscription, you're creating a consistent distributed snapshot to establish an association between message ids from different clusters. The snapshots are taken periodically. The default value is `1 second`. It means that a consumer failing over to a different cluster can potentially receive 1 second of duplicates. You can also configure the frequency of the snapshot in the `broker.conf` file.
-* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover.
\ No newline at end of file
+* Only the baseline cursor position is synced in replicated subscriptions; individual acknowledgments are not synced. This means that messages acknowledged out of order could end up being delivered again in the case of a cluster failover.
+
+## Migrate data between clusters using geo-replication
+
+Using geo-replication to migrate data between clusters is a special use case of the [active-active replication pattern](concepts-replication.md/#active-active-replication) when you don't have a large amount of data.
+
+1. Create your new cluster.
+2. Add the new cluster to your old cluster.
+
+```shell
+
+  bin/pulsar-admin cluster create new-cluster
+
+```
+
+3. Add the new cluster to your tenant.
+
+```shell
+
+  bin/pulsar-admin tenants update my-tenant --cluster old-cluster,new-cluster
+
+```
+
+4. Set the clusters on your namespace.
+
+```shell
+
+  bin/pulsar-admin namespaces set-clusters my-tenant/my-ns --cluster old-cluster,new-cluster
+
+```
+
+5. Update your applications using [replicated subscriptions](#replicated-subscriptions).
+6. Validate subscription replication is active.
+
+```shell
+
+  bin/pulsar-admin topics stats-internal public/default/t1
+
+```
+
+7. Move your consumers and producers to the new cluster by modifying the values of `serviceURL`.
+
+:::note
+
+* The replication starts from step 4, which means existing messages in your old cluster are not replicated. 
+* If you have some older messages to migrate, you can pre-create the replication subscriptions for each topic and set them to the earliest position by using `pulsar-admin topics create-subscription -s pulsar.repl.new-cluster -m earliest <topic>`.
+
+:::
+
diff --git a/site2/website-next/versioned_docs/version-2.2.1/concepts-replication.md b/site2/website-next/versioned_docs/version-2.2.1/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/versioned_docs/version-2.2.1/concepts-replication.md
+++ b/site2/website-next/versioned_docs/version-2.2.1/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industries, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to quickly restore service to clients. However, a disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers. Such a multi-datacenter deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application is publishing data in one region and you would like to process it for consumption in other regions. With Pulsar’s geo-replication mechanism, messages can be produced and consumed in different geo-locations. 
+
+The diagram below illustrates the process of [geo-replication](administration-geo). When three producers (P1, P2, and P3) publish messages to the T1 topic in their respective clusters, those messages are instantly replicated across clusters. Once the messages are replicated, the two consumers (C1 and C2) can consume those messages from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+Geo-replication mechanisms fall into two categories: synchronous and asynchronous geo-replication. Pulsar supports both.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers. 
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but weaker consistency guarantees: because of replication lag, some data may not yet have been replicated to the remote clusters at any given moment.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is implemented with BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies and a cluster of brokers that run in multiple data centers, plus a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and guarantee availability constraints on writes.
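+
+As a minimal configuration sketch for the region-aware placement policy mentioned above (this single `broker.conf` property is a starting point, not a complete multi-datacenter setup):
+
+```properties
+# broker.conf: have the BookKeeper client place ledger replicas across regions
+bookkeeperClientRegionawarePolicyEnabled=true
+```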
+
+Synchronous geo-replication provides the highest availability and also guarantees stronger data consistency between different data centers. However, your applications have to pay an extra latency penalty across data centers.
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing your replication strategy. You can set up different replication patterns between multiple data centers to serve your application's replication needs.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying the [selective message replication](administration-geo.md/#selective-replication), you can customize your replication strategies and topologies between any number of datacenters.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run in either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For details about how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md/#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting datacenters and one aggregated cluster in a central data center, and you want to replicate messages from multiple fronting datacenters to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those na [...]
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
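+
+The namespace-per-fronting-cluster setup described above can be sketched with `pulsar-admin` (the tenant, namespace, and cluster names below are hypothetical):
+
+```shell
+
+# Namespace for topics produced in the edge-dc1 fronting data center
+bin/pulsar-admin namespaces create my-tenant/edge-dc1-ns
+
+# Replicate that namespace from the fronting cluster to the central cluster
+bin/pulsar-admin namespaces set-clusters my-tenant/edge-dc1-ns --clusters edge-dc1,central-dc
+
+```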
diff --git a/site2/website-next/versioned_docs/version-2.3.0/administration-geo.md b/site2/website-next/versioned_docs/version-2.3.0/administration-geo.md
index 2694762..c4463fb 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/administration-geo.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/administration-geo.md
@@ -10,26 +10,16 @@ import TabItem from '@theme/TabItem';
 ````
 
 
-*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+## Enable geo-replication for a namespace
 
-## How geo-replication works
+You must enable geo-replication on a [per-tenant basis](#concepts-multi-tenancy) in Pulsar. That is, geo-replication can be enabled between two specific clusters only when a tenant has access to both of them.
 
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
+Geo-replication is managed at the namespace level, which means you only need to create and configure a namespace to replicate messages between two or more provisioned clusters that a tenant can access.
 
-![Replication Diagram](/assets/geo-replication.png)
+Complete the following tasks to enable geo-replication for a namespace:
 
-In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
-
-Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
-
-## Geo-replication and Pulsar properties
-
-You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, actually geo-replication is managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
-
-* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
-* Configure that namespace to replicate across two or more provisioned clusters
+* [Enable a geo-replication namespace](#enable-geo-replication-at-namespace-level)
+* [Configure that namespace to replicate across two or more provisioned clusters](admin-api-namespaces.md/#configure-replication-clusters)
 
 Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
 
@@ -43,13 +33,19 @@ Applications can create producers and consumers in any of the clusters, even whe
 
 Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. Subscriptions are not only local to the cluster where they are created; once replicated subscriptions are enabled, they can also be transferred between clusters, with subscription state kept in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover [...]
 
+![A typical geo-replication example with full-mesh pattern](/assets/geo-replication.png)
+
 In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
 
 All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
 
 ## Configure replication
 
-The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
+This section guides you through the steps to configure geo-replicated clusters.
+
+1. [Connect replication clusters](#connect-replication-clusters)
+2. [Grant permissions to properties](#grant-permissions-to-properties)
+3. [Enable geo-replication](#enable-geo-replication)
+4. [Use topics with geo-replication](#use-topics-with-geo-replication)
 
 ### Connect replication clusters
 
@@ -250,4 +246,52 @@ Consumer<String> consumer = client.newConsumer(Schema.STRING)
 ### Limitations
 
 * When you enable replicated subscriptions, you create a consistent distributed snapshot to establish an association between message IDs from different clusters. The snapshots are taken periodically; the default interval is 1 second, which means a consumer failing over to a different cluster can potentially receive up to 1 second of duplicate messages. You can configure the snapshot frequency in the `broker.conf` file.
-* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover.
\ No newline at end of file
+* Only the baseline cursor position is synced in replicated subscriptions; individual acknowledgments are not. This means messages acknowledged out of order could be delivered again in the case of a cluster failover.
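+
+As an illustration, the snapshot interval mentioned above corresponds to the following `broker.conf` property (the value shown is the default):
+
+```properties
+# Frequency of replicated-subscription snapshots, in milliseconds
+replicatedSubscriptionsSnapshotFrequencyMillis=1000
+```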
+
+## Migrate data between clusters using geo-replication
+
+Using geo-replication to migrate data between clusters is a special use case of the [active-active replication pattern](concepts-replication.md/#active-active-replication), suitable when you don't have a large amount of data to migrate.
+
+1. Create your new cluster.
+2. Add the new cluster to your old cluster.
+
+```shell
+
+  bin/pulsar-admin clusters create new-cluster
+
+```
+
+3. Add the new cluster to your tenant.
+
+```shell
+
+  bin/pulsar-admin tenants update my-tenant --allowed-clusters old-cluster,new-cluster
+
+```
+
+4. Set the clusters on your namespace.
+
+```shell
+
+  bin/pulsar-admin namespaces set-clusters my-tenant/my-ns --clusters old-cluster,new-cluster
+
+```
+
+5. Update your applications to use [replicated subscriptions](#replicated-subscriptions).
+6. Verify that subscription replication is active.
+
+```shell
+
+  bin/pulsar-admin topics stats-internal public/default/t1
+
+```
+
+7. Move your consumers and producers to the new cluster by modifying the values of `serviceURL`.
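+
+For example, in the Java client, moving to the new cluster only means changing the service URL passed to the client builder (the URL below is hypothetical):
+
+```java
+PulsarClient client = PulsarClient.builder()
+        // Point the client at a broker in the new cluster
+        .serviceUrl("pulsar://new-cluster.example.com:6650")
+        .build();
+```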
+
+:::note
+
+* Replication starts at step 4, which means existing messages in your old cluster are not replicated.
+* If you have older messages to migrate, you can pre-create the replication subscriptions for each topic and set them to the earliest position by using `pulsar-admin topics create-subscription -s pulsar.repl.new-cluster -m earliest <topic>`.
+
+:::
+
diff --git a/site2/website-next/versioned_docs/version-2.3.0/concepts-replication.md b/site2/website-next/versioned_docs/version-2.3.0/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/versioned_docs/version-2.3.0/concepts-replication.md
+++ b/site2/website-next/versioned_docs/version-2.3.0/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industry, when an unforeseen event brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to restore service to clients quickly. A disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers, and such a deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application may publish data in one region while you process it for consumption in other regions. With Pulsar's geo-replication mechanism, messages can be produced and consumed in different geo-locations.
+
+The diagram below illustrates the process of [geo-replication](administration-geo). Whenever three producers (P1, P2, and P3) publish messages to the T1 topic in their respective clusters, those messages are instantly replicated across clusters. Once the messages are replicated, two consumers (C1 and C2) can consume those messages from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+Geo-replication mechanisms fall into two categories: synchronous and asynchronous geo-replication. Pulsar supports both.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers. 
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but weaker consistency guarantees: because of replication lag, some data may not yet have been replicated to the remote clusters at any given moment.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is implemented with BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies and a cluster of brokers that run in multiple data centers, plus a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and guarantee availability constraints on writes.
+
+Synchronous geo-replication provides the highest availability and also guarantees stronger data consistency between different data centers. However, your applications have to pay an extra latency penalty across data centers.
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing your replication strategy. You can set up different replication patterns between multiple data centers to serve your application's replication needs.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying the [selective message replication](administration-geo.md/#selective-replication), you can customize your replication strategies and topologies between any number of datacenters.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run in either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For details about how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md/#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting datacenters and one aggregated cluster in a central data center, and you want to replicate messages from multiple fronting datacenters to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those na [...]
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
diff --git a/site2/website-next/versioned_docs/version-2.3.1/administration-geo.md b/site2/website-next/versioned_docs/version-2.3.1/administration-geo.md
index 2694762..c4463fb 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/administration-geo.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/administration-geo.md
@@ -10,26 +10,16 @@ import TabItem from '@theme/TabItem';
 ````
 
 
-*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+## Enable geo-replication for a namespace
 
-## How geo-replication works
+You must enable geo-replication on a [per-tenant basis](#concepts-multi-tenancy) in Pulsar. That is, geo-replication can be enabled between two specific clusters only when a tenant has access to both of them.
 
-The diagram below illustrates the process of geo-replication across Pulsar clusters:
+Geo-replication is managed at the namespace level, which means you only need to create and configure a namespace to replicate messages between two or more provisioned clusters that a tenant can access.
 
-![Replication Diagram](/assets/geo-replication.png)
+Complete the following tasks to enable geo-replication for a namespace:
 
-In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
-
-Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
-
-## Geo-replication and Pulsar properties
-
-You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
-
-Although geo-replication must be enabled between two clusters, actually geo-replication is managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
-
-* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
-* Configure that namespace to replicate across two or more provisioned clusters
+* [Enable a geo-replication namespace](#enable-geo-replication-at-namespace-level)
+* [Configure that namespace to replicate across two or more provisioned clusters](admin-api-namespaces.md/#configure-replication-clusters)
 
 Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
 
@@ -43,13 +33,19 @@ Applications can create producers and consumers in any of the clusters, even whe
 
 Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. Subscriptions are not only local to the cluster where they are created; once replicated subscriptions are enabled, they can also be transferred between clusters, with subscription state kept in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover [...]
 
+![A typical geo-replication example with full-mesh pattern](/assets/geo-replication.png)
+
 In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
 
 All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
 
 ## Configure replication
 
-The following example connects three clusters: **us-east**, **us-west**, and **us-cent**.
+This section guides you through the steps to configure geo-replicated clusters.
+
+1. [Connect replication clusters](#connect-replication-clusters)
+2. [Grant permissions to properties](#grant-permissions-to-properties)
+3. [Enable geo-replication](#enable-geo-replication)
+4. [Use topics with geo-replication](#use-topics-with-geo-replication)
 
 ### Connect replication clusters
 
@@ -250,4 +246,52 @@ Consumer<String> consumer = client.newConsumer(Schema.STRING)
 ### Limitations
 
 * When you enable replicated subscriptions, you create a consistent distributed snapshot to establish an association between message IDs from different clusters. The snapshots are taken periodically; the default interval is 1 second, which means a consumer failing over to a different cluster can potentially receive up to 1 second of duplicate messages. You can configure the snapshot frequency in the `broker.conf` file.
-* Only the base line cursor position is synced in replicated subscriptions while the individual acknowledgments are not synced. This means the messages acknowledged out-of-order could end up getting delivered again, in the case of a cluster failover.
\ No newline at end of file
+* Only the baseline cursor position is synced in replicated subscriptions; individual acknowledgments are not. This means messages acknowledged out of order could be delivered again in the case of a cluster failover.
+
+## Migrate data between clusters using geo-replication
+
+Using geo-replication to migrate data between clusters is a special use case of the [active-active replication pattern](concepts-replication.md/#active-active-replication), suitable when you don't have a large amount of data to migrate.
+
+1. Create your new cluster.
+2. Add the new cluster to your old cluster.
+
+```shell
+
+  bin/pulsar-admin clusters create new-cluster
+
+```
+
+3. Add the new cluster to your tenant.
+
+```shell
+
+  bin/pulsar-admin tenants update my-tenant --allowed-clusters old-cluster,new-cluster
+
+```
+
+4. Set the clusters on your namespace.
+
+```shell
+
+  bin/pulsar-admin namespaces set-clusters my-tenant/my-ns --clusters old-cluster,new-cluster
+
+```
+
+5. Update your applications to use [replicated subscriptions](#replicated-subscriptions).
+6. Verify that subscription replication is active.
+
+```shell
+
+  bin/pulsar-admin topics stats-internal public/default/t1
+
+```
+
+7. Move your consumers and producers to the new cluster by modifying the values of `serviceURL`.
+
+:::note
+
+* Replication starts at step 4, which means existing messages in your old cluster are not replicated.
+* If you have older messages to migrate, you can pre-create the replication subscriptions for each topic and set them to the earliest position by using `pulsar-admin topics create-subscription -s pulsar.repl.new-cluster -m earliest <topic>`.
+
+:::
+
diff --git a/site2/website-next/versioned_docs/version-2.3.1/concepts-replication.md b/site2/website-next/versioned_docs/version-2.3.1/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/versioned_docs/version-2.3.1/concepts-replication.md
+++ b/site2/website-next/versioned_docs/version-2.3.1/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industry, when an unforeseen event brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to restore service to clients quickly. A disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers, and such a deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application may publish data in one region while you process it for consumption in other regions. With Pulsar's geo-replication mechanism, messages can be produced and consumed in different geo-locations.
+
+The diagram below illustrates the process of [geo-replication](administration-geo). Whenever three producers (P1, P2, and P3) publish messages to the T1 topic in their respective clusters, those messages are instantly replicated across clusters. Once the messages are replicated, two consumers (C1 and C2) can consume those messages from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+The geo-replication mechanism can be categorized into synchronous geo-replication and asynchronous geo-replication strategies. Pulsar supports both replication mechanisms.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers. 
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but weaker consistency guarantees: because of replication lag, some data may not yet have been replicated to the remote clusters at any given moment.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is implemented with BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies and a cluster of brokers that run in multiple data centers, plus a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and guarantee availability constraints on writes.
+
+Synchronous geo-replication provides the highest availability and also guarantees stronger data consistency between different data centers. However, your applications have to pay an extra latency penalty across data centers.
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing your replication strategy. You can set up different replication patterns between multiple data centers to serve your application's replication needs.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying the [selective message replication](administration-geo.md/#selective-replication), you can customize your replication strategies and topologies between any number of datacenters.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run in either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For details about how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md/#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting datacenters and one aggregated cluster in a central data center, and you want to replicate messages from multiple fronting datacenters to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those na [...]
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-replication.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/versioned_docs/version-2.3.2/concepts-replication.md
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industry, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to restore service to clients quickly. A disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers, and such a deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application is publishing data in one region and you would like to process it for consumption in other regions. With Pulsar’s geo-replication mechanism, messages can be produced and consumed in different geo-locations. 
+
+The diagram below illustrates the process of [geo-replication](administration-geo). When three producers (P1, P2, and P3) publish messages to topic T1 in their respective clusters, those messages are immediately replicated across clusters. Once the messages are replicated, the two consumers (C1 and C2) can consume them from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+The geo-replication mechanism can be categorized into synchronous geo-replication and asynchronous geo-replication strategies. Pulsar supports both replication mechanisms.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers. 
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but may result in weaker consistency guarantees, because replication lag means some data may not yet have been replicated.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is achieved by BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies, a cluster of brokers that run in multiple data centers, and a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and to guarantee availability constraints on writes.
+
+Synchronous geo-replication provides the highest availability and guarantees stronger data consistency between data centers. However, your applications pay an extra cross-datacenter latency penalty on every write.
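+
+To sketch this in configuration terms: assuming the standard broker property name (verify it against the configuration reference for your Pulsar version), the region-aware placement policy is enabled in `broker.conf`, and each bookie must additionally be mapped to a region/rack location (for example with `pulsar-admin bookies set-bookie-rack`) so the policy knows which data center it lives in:
+
+```properties
+# broker.conf (hedged sketch): use the region-aware ensemble placement
+# policy so each ledger's write quorum is spread across data centers
+bookkeeperClientRegionawarePolicyEnabled=true
+```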
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing your replication setup. You can configure different replication patterns to serve the replication strategy of an application across multiple data centers.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying [selective message replication](administration-geo.md#selective-replication), you can customize your replication strategies and topologies between any number of data centers.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run in either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For details about how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting datacenters and one aggregated cluster in a central data center, and you want to replicate messages from multiple fronting datacenters to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those na [...]
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
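+
+The namespace-per-edge-cluster setup described above can be sketched with `pulsar-admin` (the tenant, namespace, and cluster names below are hypothetical; see the geo-replication administration docs for the full procedure):
+
+```shell
+# Create a tenant allowed to use an edge cluster and the central cluster
+pulsar-admin tenants create iot-tenant \
+  --allowed-clusters dc-edge1,dc-central
+
+# Create a namespace for the topics produced in the edge data center
+pulsar-admin namespaces create iot-tenant/dc-edge1-ns
+
+# Assign both clusters to the namespace so its topics replicate
+# from the edge cluster to the central cluster
+pulsar-admin namespaces set-clusters iot-tenant/dc-edge1-ns \
+  --clusters dc-edge1,dc-central
+```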
diff --git a/site2/website-next/versioned_docs/version-2.4.0/concepts-replication.md b/site2/website-next/versioned_docs/version-2.4.0/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/versioned_docs/version-2.4.0/concepts-replication.md
+++ b/site2/website-next/versioned_docs/version-2.4.0/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industry, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to restore service to clients quickly. A disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers, and such a deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application is publishing data in one region and you would like to process it for consumption in other regions. With Pulsar’s geo-replication mechanism, messages can be produced and consumed in different geo-locations. 
+
+The diagram below illustrates the process of [geo-replication](administration-geo). When three producers (P1, P2, and P3) publish messages to topic T1 in their respective clusters, those messages are immediately replicated across clusters. Once the messages are replicated, the two consumers (C1 and C2) can consume them from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+The geo-replication mechanism can be categorized into synchronous geo-replication and asynchronous geo-replication strategies. Pulsar supports both replication mechanisms.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers. 
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but may result in weaker consistency guarantees, because replication lag means some data may not yet have been replicated.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is achieved by BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies, a cluster of brokers that run in multiple data centers, and a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and to guarantee availability constraints on writes.
+
+Synchronous geo-replication provides the highest availability and guarantees stronger data consistency between data centers. However, your applications pay an extra cross-datacenter latency penalty on every write.
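+
+To sketch this in configuration terms: assuming the standard broker property name (verify it against the configuration reference for your Pulsar version), the region-aware placement policy is enabled in `broker.conf`, and each bookie must additionally be mapped to a region/rack location (for example with `pulsar-admin bookies set-bookie-rack`) so the policy knows which data center it lives in:
+
+```properties
+# broker.conf (hedged sketch): use the region-aware ensemble placement
+# policy so each ledger's write quorum is spread across data centers
+bookkeeperClientRegionawarePolicyEnabled=true
+```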
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing your replication setup. You can configure different replication patterns to serve the replication strategy of an application across multiple data centers.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying [selective message replication](administration-geo.md#selective-replication), you can customize your replication strategies and topologies between any number of data centers.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run in either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For details about how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting datacenters and one aggregated cluster in a central data center, and you want to replicate messages from multiple fronting datacenters to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those na [...]
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
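+
+The namespace-per-edge-cluster setup described above can be sketched with `pulsar-admin` (the tenant, namespace, and cluster names below are hypothetical; see the geo-replication administration docs for the full procedure):
+
+```shell
+# Create a tenant allowed to use an edge cluster and the central cluster
+pulsar-admin tenants create iot-tenant \
+  --allowed-clusters dc-edge1,dc-central
+
+# Create a namespace for the topics produced in the edge data center
+pulsar-admin namespaces create iot-tenant/dc-edge1-ns
+
+# Assign both clusters to the namespace so its topics replicate
+# from the edge cluster to the central cluster
+pulsar-admin namespaces set-clusters iot-tenant/dc-edge1-ns \
+  --clusters dc-edge1,dc-central
+```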
diff --git a/site2/website-next/versioned_docs/version-2.4.1/concepts-replication.md b/site2/website-next/versioned_docs/version-2.4.1/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/versioned_docs/version-2.4.1/concepts-replication.md
+++ b/site2/website-next/versioned_docs/version-2.4.1/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industry, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to restore service to clients quickly. A disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers, and such a deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application is publishing data in one region and you would like to process it for consumption in other regions. With Pulsar’s geo-replication mechanism, messages can be produced and consumed in different geo-locations. 
+
+The diagram below illustrates the process of [geo-replication](administration-geo). When three producers (P1, P2, and P3) publish messages to topic T1 in their respective clusters, those messages are immediately replicated across clusters. Once the messages are replicated, the two consumers (C1 and C2) can consume them from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+The geo-replication mechanism can be categorized into synchronous geo-replication and asynchronous geo-replication strategies. Pulsar supports both replication mechanisms.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers. 
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but may result in weaker consistency guarantees, because replication lag means some data may not yet have been replicated.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is achieved by BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies, a cluster of brokers that run in multiple data centers, and a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and to guarantee availability constraints on writes.
+
+Synchronous geo-replication provides the highest availability and guarantees stronger data consistency between data centers. However, your applications pay an extra cross-datacenter latency penalty on every write.
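+
+To sketch this in configuration terms: assuming the standard broker property name (verify it against the configuration reference for your Pulsar version), the region-aware placement policy is enabled in `broker.conf`, and each bookie must additionally be mapped to a region/rack location (for example with `pulsar-admin bookies set-bookie-rack`) so the policy knows which data center it lives in:
+
+```properties
+# broker.conf (hedged sketch): use the region-aware ensemble placement
+# policy so each ledger's write quorum is spread across data centers
+bookkeeperClientRegionawarePolicyEnabled=true
+```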
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing your replication setup. You can configure different replication patterns to serve the replication strategy of an application across multiple data centers.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying [selective message replication](administration-geo.md#selective-replication), you can customize your replication strategies and topologies between any number of data centers.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run in either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For details about how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting datacenters and one aggregated cluster in a central data center, and you want to replicate messages from multiple fronting datacenters to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those na [...]
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
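+
+The namespace-per-edge-cluster setup described above can be sketched with `pulsar-admin` (the tenant, namespace, and cluster names below are hypothetical; see the geo-replication administration docs for the full procedure):
+
+```shell
+# Create a tenant allowed to use an edge cluster and the central cluster
+pulsar-admin tenants create iot-tenant \
+  --allowed-clusters dc-edge1,dc-central
+
+# Create a namespace for the topics produced in the edge data center
+pulsar-admin namespaces create iot-tenant/dc-edge1-ns
+
+# Assign both clusters to the namespace so its topics replicate
+# from the edge cluster to the central cluster
+pulsar-admin namespaces set-clusters iot-tenant/dc-edge1-ns \
+  --clusters dc-edge1,dc-central
+```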
diff --git a/site2/website-next/versioned_docs/version-2.4.2/concepts-replication.md b/site2/website-next/versioned_docs/version-2.4.2/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/versioned_docs/version-2.4.2/concepts-replication.md
+++ b/site2/website-next/versioned_docs/version-2.4.2/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industry, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to restore service to clients quickly. A disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers, and such a deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application is publishing data in one region and you would like to process it for consumption in other regions. With Pulsar’s geo-replication mechanism, messages can be produced and consumed in different geo-locations. 
+
+The diagram below illustrates the process of [geo-replication](administration-geo). When three producers (P1, P2, and P3) publish messages to topic T1 in their respective clusters, those messages are immediately replicated across clusters. Once the messages are replicated, the two consumers (C1 and C2) can consume them from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+The geo-replication mechanism can be categorized into synchronous geo-replication and asynchronous geo-replication strategies. Pulsar supports both replication mechanisms.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers. 
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but may result in weaker consistency guarantees, because replication lag means some data may not yet have been replicated.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is achieved by BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies, a cluster of brokers that run in multiple data centers, and a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and to guarantee availability constraints on writes.
+
+Synchronous geo-replication provides the highest availability and guarantees stronger data consistency between data centers. However, your applications pay an extra cross-datacenter latency penalty on every write.
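+
+To sketch this in configuration terms: assuming the standard broker property name (verify it against the configuration reference for your Pulsar version), the region-aware placement policy is enabled in `broker.conf`, and each bookie must additionally be mapped to a region/rack location (for example with `pulsar-admin bookies set-bookie-rack`) so the policy knows which data center it lives in:
+
+```properties
+# broker.conf (hedged sketch): use the region-aware ensemble placement
+# policy so each ledger's write quorum is spread across data centers
+bookkeeperClientRegionawarePolicyEnabled=true
+```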
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing your replication setup. You can configure different replication patterns to serve the replication strategy of an application across multiple data centers.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying [selective message replication](administration-geo.md#selective-replication), you can customize your replication strategies and topologies between any number of data centers.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run in either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For details about how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have 3 clusters in 3 fronting datacenters and one aggregated cluster in a central data center, and you want to replicate messages from multiple fronting datacenters to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those na [...]
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
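+
+The namespace-per-edge-cluster setup described above can be sketched with `pulsar-admin` (the tenant, namespace, and cluster names below are hypothetical; see the geo-replication administration docs for the full procedure):
+
+```shell
+# Create a tenant allowed to use an edge cluster and the central cluster
+pulsar-admin tenants create iot-tenant \
+  --allowed-clusters dc-edge1,dc-central
+
+# Create a namespace for the topics produced in the edge data center
+pulsar-admin namespaces create iot-tenant/dc-edge1-ns
+
+# Assign both clusters to the namespace so its topics replicate
+# from the edge cluster to the central cluster
+pulsar-admin namespaces set-clusters iot-tenant/dc-edge1-ns \
+  --clusters dc-edge1,dc-central
+```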
diff --git a/site2/website-next/versioned_docs/version-2.5.0/concepts-replication.md b/site2/website-next/versioned_docs/version-2.5.0/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/versioned_docs/version-2.5.0/concepts-replication.md
+++ b/site2/website-next/versioned_docs/version-2.5.0/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industry, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to restore service to clients quickly. A disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers, and such a deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application is publishing data in one region and you would like to process it for consumption in other regions. With Pulsar’s geo-replication mechanism, messages can be produced and consumed in different geo-locations. 
+
+The diagram below illustrates the process of [geo-replication](administration-geo). When three producers (P1, P2, and P3) publish messages to topic T1 in their respective clusters, those messages are immediately replicated across clusters. Once the messages are replicated, the two consumers (C1 and C2) can consume them from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+The geo-replication mechanism can be categorized into synchronous geo-replication and asynchronous geo-replication strategies. Pulsar supports both replication mechanisms.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers. 
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but may result in weaker consistency guarantees, because replication lag means some data may not yet have been replicated.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is achieved by BookKeeper. A synchronous geo-replicated cluster consists of a cluster of bookies, a cluster of brokers that run in multiple data centers, and a global ZooKeeper installation (a ZooKeeper ensemble running across multiple data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers and to guarantee availability constraints on writes.
+
+Synchronous geo-replication provides the highest availability and guarantees stronger data consistency between data centers. However, your applications pay an extra cross-datacenter latency penalty on every write.
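+
+To sketch this in configuration terms: assuming the standard broker property name (verify it against the configuration reference for your Pulsar version), the region-aware placement policy is enabled in `broker.conf`, and each bookie must additionally be mapped to a region/rack location (for example with `pulsar-admin bookies set-bookie-rack`) so the policy knows which data center it lives in:
+
+```properties
+# broker.conf (hedged sketch): use the region-aware ensemble placement
+# policy so each ledger's write quorum is spread across data centers
+bookkeeperClientRegionawarePolicyEnabled=true
+```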
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing your replication setup. You can configure different replication patterns to serve the replication strategy of an application across multiple data centers.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying [selective message replication](administration-geo.md#selective-replication), you can customize your replication strategies and topologies between any number of data centers.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run in either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For details on how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have three clusters in three fronting data centers and one aggregated cluster in a central data center, and you want to replicate messages from the fronting data centers to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those namespaces.
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
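+
+A minimal sketch of this setup with `pulsar-admin`, assuming placeholder cluster names `edge1`, `edge2`, `edge3`, and `central`: each edge cluster gets its own namespace that replicates only to itself and to the central cluster.
+
+```shell
+# One namespace per fronting (edge) data center, each assigned to
+# its own cluster plus the central aggregation cluster:
+bin/pulsar-admin namespaces create edge-tenant/edge1-ns --clusters edge1,central
+bin/pulsar-admin namespaces create edge-tenant/edge2-ns --clusters edge2,central
+bin/pulsar-admin namespaces create edge-tenant/edge3-ns --clusters edge3,central
+```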
diff --git a/site2/website-next/versioned_docs/version-2.5.2/concepts-replication.md b/site2/website-next/versioned_docs/version-2.5.2/concepts-replication.md
index 11677cc..d4653ca 100644
--- a/site2/website-next/versioned_docs/version-2.5.2/concepts-replication.md
+++ b/site2/website-next/versioned_docs/version-2.5.2/concepts-replication.md
@@ -4,5 +4,65 @@ title: Geo Replication
 sidebar_label: "Geo Replication"
 ---
 
-Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+Regardless of industry, when an unforeseen event occurs and brings day-to-day operations to a halt, an organization needs a well-prepared disaster recovery plan to restore service to clients quickly. However, a disaster recovery plan usually requires a multi-datacenter deployment with geographically dispersed data centers. Such a deployment requires a geo-replication mechanism to provide additional redundancy in case a data center fails.
 
+Pulsar's geo-replication mechanism is typically used for disaster recovery, enabling the replication of persistently stored message data across multiple data centers. For instance, your application publishes data in one region, and you want to process it for consumption in other regions. With Pulsar's geo-replication mechanism, messages can be produced and consumed in different geo-locations.
+
+The diagram below illustrates the process of [geo-replication](administration-geo). When three producers (P1, P2, and P3) publish messages to topic T1 in their respective clusters, those messages are instantly replicated across the clusters. Once the messages are replicated, the two consumers (C1 and C2) can consume them from their own clusters.
+
+![A typical geo-replication example with full-mesh pattern](/assets/full-mesh-replication.svg)
+
+## Replication mechanisms
+
+Geo-replication mechanisms can be categorized into synchronous and asynchronous strategies. Pulsar supports both.
+
+### Asynchronous geo-replication in Pulsar
+
+An asynchronous geo-replicated cluster is composed of multiple physical clusters set up in different datacenters. Messages produced on a Pulsar topic are first persisted to the local cluster and then replicated asynchronously to the remote clusters by brokers. 
+
+![An example of asynchronous geo-replication mechanism](/assets/geo-replication-async.svg)
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, end-to-end delivery latency is defined by the network round-trip time (RTT) between the data centers. Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (for example, during a network partition).
+
+Asynchronous geo-replication provides lower latency but weaker consistency guarantees: because replication lags behind the local write, some data may not yet have been replicated to the remote clusters.
+
+### Synchronous geo-replication via BookKeeper
+
+In synchronous geo-replication, data is synchronously replicated to multiple data centers and the client has to wait for an acknowledgment from the other data centers. As illustrated below, when the client issues a write request to one cluster, the written data will be replicated to the other two data centers. The write request is only acknowledged to the client when the majority of data centers (in this example, at least 2 data centers) have acknowledged that the write has been persisted. 
+
+![An example of synchronous geo-replication mechanism](/assets/geo-replication-sync.svg)
+
+Synchronous geo-replication in Pulsar is achieved through BookKeeper. A synchronous geo-replicated cluster consists of bookies and brokers running in multiple data centers, together with a global ZooKeeper installation (a single ZooKeeper ensemble spanning those data centers). You need to configure a BookKeeper region-aware placement policy to store data across multiple data centers while guaranteeing availability constraints on writes.
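+
+As a concrete sketch (the setting name should be verified against your Pulsar release, and the hostnames below are placeholders), you enable the region-aware placement policy in `broker.conf` and then tag each bookie with its region and rack:
+
+```shell
+# In broker.conf, enable the region-aware ensemble placement policy:
+#   bookkeeperClientRegionawarePolicyEnabled=true
+
+# Tag each bookie with a region/rack hierarchy so write quorums can be
+# spread across data centers (placeholder addresses):
+bin/pulsar-admin bookies set-bookie-rack \
+  --bookie bookie1.dc-east.example.com:3181 \
+  --hostname bookie1.dc-east.example.com \
+  --group default \
+  --rack /dc-east/rack-1
+```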
+
+Synchronous geo-replication provides the highest availability and also guarantees stronger data consistency between different data centers. However, your applications have to pay an extra latency penalty across data centers.
+
+
+## Replication patterns
+
+Pulsar provides a great degree of flexibility for customizing replication. You can set up different replication patterns between multiple data centers to serve your application's replication strategy.
+
+### Full-mesh replication
+
+Using full-mesh replication and applying [selective message replication](administration-geo.md#selective-replication), you can customize your replication strategies and topologies between any number of data centers.
+
+![An example of full-mesh replication pattern](/assets/full-mesh-replication.svg)
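+
+At the namespace level, full-mesh replication is configured by listing every cluster that a namespace's topics should replicate to. A minimal sketch with `pulsar-admin` (tenant, namespace, and cluster names are placeholders):
+
+```shell
+# Replicate topics in this namespace among three clusters:
+bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \
+  --clusters us-west,us-east,us-central
+```
+
+Individual messages can additionally be restricted to a subset of these clusters through the producer API, which is the basis of selective message replication.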
+
+### Active-active replication
+
+Active-active replication is a variation of full-mesh replication with only two data centers. Producers can run in either data center to produce messages, and consumers can consume all messages from both data centers.
+
+![An example of active-active replication pattern](/assets/active-active-replication.svg)
+
+For details on how to use active-active replication to migrate data between clusters, see [Migrate data between clusters using geo-replication](administration-geo.md#migrate-data-between-clusters-using-geo-replication).
+
+### Active-standby replication
+
+Active-standby replication is a variation of active-active replication. Producers send messages to the active data center while messages are replicated to the standby data center for backup. If the active data center goes down, the standby data center takes over and becomes the active one. 
+
+![An example of active-standby replication pattern](/assets/active-standby-replication.svg)
+
+### Aggregation replication
+
+The aggregation replication pattern is typically used when replicating messages from the edge to the cloud. For example, assume you have three clusters in three fronting data centers and one aggregated cluster in a central data center, and you want to replicate messages from the fronting data centers to the central data center for aggregation purposes. You can then create an individual namespace for the topics used by each fronting data center and assign the aggregated data center to those namespaces.
+
+![An example of aggregation replication pattern](/assets/aggregation-replication.svg)
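+
+A minimal sketch of this setup with `pulsar-admin`, assuming placeholder cluster names `edge1`, `edge2`, `edge3`, and `central`: each edge cluster gets its own namespace that replicates only to itself and to the central cluster.
+
+```shell
+# One namespace per fronting (edge) data center, each assigned to
+# its own cluster plus the central aggregation cluster:
+bin/pulsar-admin namespaces create edge-tenant/edge1-ns --clusters edge1,central
+bin/pulsar-admin namespaces create edge-tenant/edge2-ns --clusters edge2,central
+bin/pulsar-admin namespaces create edge-tenant/edge3-ns --clusters edge3,central
+```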