Posted to commits@pekko.apache.org by jc...@apache.org on 2023/01/18 07:13:07 UTC

[incubator-pekko] branch main updated: Renames in doc #98 (#100)

This is an automated email from the ASF dual-hosted git repository.

jchapuis pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-pekko.git


The following commit(s) were added to refs/heads/main by this push:
     new 3d93c29737 Renames in doc #98 (#100)
3d93c29737 is described below

commit 3d93c29737a4c48baa1395b220e4a466f5639007
Author: Jonas Chapuis <me...@jonaschapuis.com>
AuthorDate: Wed Jan 18 08:13:01 2023 +0100

    Renames in doc #98 (#100)
    
    * Rename some base_url values in Paradox.scala
    * Rename Akka to Pekko in the documentation; also adapt or remove small blocks of text that were no longer consistent or adequate
    * Rename Akka to Pekko in some related files outside the doc directory
    * Rename akka-bom to pekko-bom
    * Rename artifact prefixes from akka to pekko (see the sbt sketch after this list)
    * Add links to Apache documents in licenses.md
    * Add links to Akka migration guides for earlier versions
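    
    For downstream builds, the artifact prefix rename amounts to a dependency
    coordinate change along these lines (a hypothetical sbt sketch; the module
    and versions shown are illustrative placeholders, not taken from this commit):
    
        // build.sbt (sketch); coordinates are placeholders
        val PekkoVersion = "1.0.0" // substitute the actual published Pekko version
    
        // before: Akka artifact coordinates
        // libraryDependencies += "com.typesafe.akka" %% "akka-cluster-sharding" % "2.6.20"
    
        // after: the renamed prefix under the org.apache.pekko group
        libraryDependencies += "org.apache.pekko" %% "pekko-cluster-sharding" % PekkoVersion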
---
 .../metrics/ClusterMetricsExtensionSpec.scala      |   4 +-
 .../RemoveInternalClusterShardingData.scala        |   4 +-
 cluster/jmx-client/{akka-cluster => pekko-cluster} |   6 +-
 .../src/main/categories/actor-interop-operators.md |   2 +-
 docs/src/main/paradox/.htaccess                    |  11 -
 docs/src/main/paradox/actors.md                    |  21 +-
 docs/src/main/paradox/additional/deploying.md      |  14 +-
 docs/src/main/paradox/additional/faq.md            |  35 +-
 docs/src/main/paradox/additional/operations.md     |  23 +-
 docs/src/main/paradox/additional/osgi.md           |  18 +-
 docs/src/main/paradox/additional/packaging.md      |   8 +-
 .../src/main/paradox/additional/rolling-updates.md |  67 +-
 docs/src/main/paradox/assets/js/scalafiddle.js     |   8 +-
 docs/src/main/paradox/assets/js/warnOldDocs.js     |  24 +-
 docs/src/main/paradox/camel.md                     |   7 -
 docs/src/main/paradox/cluster-client.md            |  48 +-
 docs/src/main/paradox/cluster-dc.md                |   2 +-
 docs/src/main/paradox/cluster-routing.md           |  10 +-
 docs/src/main/paradox/cluster-sharding.md          |   4 +-
 docs/src/main/paradox/cluster-singleton.md         |   3 -
 docs/src/main/paradox/cluster-usage.md             |  24 +-
 .../paradox/common/binary-compatibility-rules.md   |  56 +-
 docs/src/main/paradox/common/circuitbreaker.md     |   4 +-
 docs/src/main/paradox/common/io-layer.md           |   8 +-
 docs/src/main/paradox/common/may-change.md         |   6 +-
 docs/src/main/paradox/common/other-modules.md      |  80 +-
 docs/src/main/paradox/coordinated-shutdown.md      |  10 +-
 docs/src/main/paradox/coordination.md              |   8 +-
 docs/src/main/paradox/discovery/index.md           |  43 +-
 docs/src/main/paradox/dispatchers.md               |   6 +-
 docs/src/main/paradox/distributed-data.md          |  10 +-
 docs/src/main/paradox/distributed-pub-sub.md       |   6 +-
 .../paradox/durable-state/persistence-query.md     |  18 +-
 docs/src/main/paradox/event-bus.md                 |   8 +-
 .../{extending-akka.md => extending-pekko.md}      |  24 +-
 docs/src/main/paradox/fault-tolerance.md           |   4 +-
 docs/src/main/paradox/fsm.md                       |   6 +-
 docs/src/main/paradox/futures.md                   |   6 +-
 docs/src/main/paradox/general/actor-systems.md     |   6 +-
 docs/src/main/paradox/general/actors.md            |  12 +-
 docs/src/main/paradox/general/addressing.md        |  14 +-
 .../paradox/general/configuration-reference.md     |  80 +-
 docs/src/main/paradox/general/configuration.md     |  32 +-
 docs/src/main/paradox/general/jmm.md               |  14 +-
 .../general/message-delivery-reliability.md        |  24 +-
 docs/src/main/paradox/general/remoting.md          |  16 +-
 .../main/paradox/general/stream/stream-design.md   |  42 +-
 docs/src/main/paradox/general/supervision.md       |   4 +-
 docs/src/main/paradox/general/terminology.md       |   6 +-
 ...onductor.png => pekko-remote-testconductor.png} | Bin
 docs/src/main/paradox/includes.md                  |   4 +-
 docs/src/main/paradox/includes/cluster.md          |   4 +-
 docs/src/main/paradox/index-actors.md              |   8 +-
 docs/src/main/paradox/index-classic.md             |   2 +-
 docs/src/main/paradox/index-network.md             |   1 -
 docs/src/main/paradox/index-utilities-classic.md   |   8 +-
 docs/src/main/paradox/index.md                     |   2 +-
 docs/src/main/paradox/io-dns.md                    |  12 +-
 docs/src/main/paradox/io-tcp.md                    |   8 +-
 docs/src/main/paradox/io-udp.md                    |   6 +-
 docs/src/main/paradox/io.md                        |  16 +-
 docs/src/main/paradox/logging.md                   |  64 +-
 docs/src/main/paradox/mailboxes.md                 |   8 +-
 docs/src/main/paradox/multi-jvm-testing.md         |   8 +-
 docs/src/main/paradox/multi-node-testing.md        |  10 +-
 docs/src/main/paradox/persistence-fsm.md           |   8 +-
 docs/src/main/paradox/persistence-journals.md      |   4 +-
 docs/src/main/paradox/persistence-plugins.md       |  32 +-
 docs/src/main/paradox/persistence-query-leveldb.md |   6 +-
 docs/src/main/paradox/persistence-query.md         |  30 +-
 .../main/paradox/persistence-schema-evolution.md   |  30 +-
 docs/src/main/paradox/persistence.md               |  28 +-
 .../paradox/project/downstream-upgrade-strategy.md |  37 +-
 docs/src/main/paradox/project/examples.md          |  44 +-
 docs/src/main/paradox/project/immutable.md         |   2 +-
 docs/src/main/paradox/project/licenses.md          |  30 +-
 docs/src/main/paradox/project/links.md             |  29 +-
 .../paradox/project/migration-guide-2.4.x-2.5.x.md |   4 -
 .../paradox/project/migration-guide-2.5.x-2.6.x.md | 824 ---------------------
 .../main/paradox/project/migration-guide-old.md    |   9 -
 docs/src/main/paradox/project/migration-guides.md  |  12 +-
 docs/src/main/paradox/project/rolling-update.md    | 108 +--
 docs/src/main/paradox/project/scala3.md            |  10 +-
 docs/src/main/paradox/remoting-artery.md           | 110 ++-
 docs/src/main/paradox/remoting.md                  |  68 +-
 docs/src/main/paradox/routing.md                   |  18 +-
 docs/src/main/paradox/scheduler.md                 |  14 +-
 .../security/2017-02-10-java-serialization.md      |  55 --
 docs/src/main/paradox/security/2017-08-09-camel.md |  31 -
 .../main/paradox/security/2018-08-29-aes-rng.md    |  99 ---
 docs/src/main/paradox/security/index.md            |  14 +-
 docs/src/main/paradox/serialization-classic.md     |   4 +-
 docs/src/main/paradox/serialization-jackson.md     |  26 +-
 docs/src/main/paradox/serialization.md             |  34 +-
 docs/src/main/paradox/split-brain-resolver.md      |  18 +-
 docs/src/main/paradox/stream/actor-interop.md      |   8 +-
 docs/src/main/paradox/stream/futures-interop.md    |   6 +-
 docs/src/main/paradox/stream/index.md              |   2 +-
 .../main/paradox/stream/operators/ActorFlow/ask.md |   4 +-
 .../stream/operators/ActorFlow/askWithContext.md   |   4 +-
 .../stream/operators/ActorFlow/askWithStatus.md    |   2 +-
 .../operators/ActorFlow/askWithStatusAndContext.md |   4 +-
 .../paradox/stream/operators/ActorSink/actorRef.md |   4 +-
 .../ActorSink/actorRefWithBackpressure.md          |   4 +-
 .../stream/operators/ActorSource/actorRef.md       |   4 +-
 .../ActorSource/actorRefWithBackpressure.md        |   4 +-
 .../stream/operators/Flow/fromSinkAndSource.md     |   2 +-
 .../main/paradox/stream/operators/PubSub/sink.md   |   4 +-
 .../main/paradox/stream/operators/PubSub/source.md |   4 +-
 .../stream/operators/RetryFlow/withBackoff.md      |   2 +-
 .../operators/RetryFlow/withBackoffAndContext.md   |   2 +-
 .../paradox/stream/operators/Sink/asPublisher.md   |   4 +-
 .../stream/operators/Sink/foreachParallel.md       |   6 -
 .../stream/operators/Source/asSubscriber.md        |   2 +-
 .../stream/operators/Source/fromPublisher.md       |   2 +-
 .../stream/operators/Source/mergePrioritizedN.md   |   2 +-
 .../main/paradox/stream/operators/Source/range.md  |   4 +-
 .../stream/operators/Source/unfoldResource.md      |   4 +-
 .../stream/operators/Source/unfoldResourceAsync.md |   2 +-
 .../operators/StreamConverters/asJavaStream.md     |   2 +-
 docs/src/main/paradox/stream/operators/index.md    |   2 +-
 .../paradox/stream/reactive-streams-interop.md     |  14 +-
 docs/src/main/paradox/stream/stream-composition.md |  14 +-
 docs/src/main/paradox/stream/stream-cookbook.md    |  16 +-
 docs/src/main/paradox/stream/stream-customize.md   |  10 +-
 docs/src/main/paradox/stream/stream-dynamic.md     |   6 +-
 docs/src/main/paradox/stream/stream-error.md       |  10 +-
 .../main/paradox/stream/stream-flows-and-basics.md |  50 +-
 docs/src/main/paradox/stream/stream-graphs.md      |  14 +-
 .../src/main/paradox/stream/stream-introduction.md |  22 +-
 docs/src/main/paradox/stream/stream-io.md          |  18 +-
 docs/src/main/paradox/stream/stream-parallelism.md |   8 +-
 docs/src/main/paradox/stream/stream-quickstart.md  |  54 +-
 docs/src/main/paradox/stream/stream-rate.md        |  16 +-
 docs/src/main/paradox/stream/stream-refs.md        |  30 +-
 docs/src/main/paradox/stream/stream-substream.md   |   6 +-
 docs/src/main/paradox/stream/stream-testkit.md     |  22 +-
 docs/src/main/paradox/supervision-classic.md       |   8 +-
 docs/src/main/paradox/testing.md                   |  30 +-
 docs/src/main/paradox/typed/actor-discovery.md     |   6 +-
 docs/src/main/paradox/typed/actor-lifecycle.md     |   8 +-
 docs/src/main/paradox/typed/actors.md              |  43 +-
 docs/src/main/paradox/typed/choosing-cluster.md    |  29 +-
 docs/src/main/paradox/typed/cluster-concepts.md    |  16 +-
 docs/src/main/paradox/typed/cluster-dc.md          |  28 +-
 docs/src/main/paradox/typed/cluster-membership.md  |  10 +-
 .../typed/cluster-sharded-daemon-process.md        |   2 +-
 docs/src/main/paradox/typed/cluster-sharding.md    |  42 +-
 docs/src/main/paradox/typed/cluster-singleton.md   |   4 +-
 docs/src/main/paradox/typed/cluster.md             |  36 +-
 docs/src/main/paradox/typed/coexisting.md          |   8 +-
 docs/src/main/paradox/typed/cqrs.md                |   4 +-
 docs/src/main/paradox/typed/dispatchers.md         |  35 +-
 docs/src/main/paradox/typed/distributed-data.md    |  22 +-
 docs/src/main/paradox/typed/distributed-pub-sub.md |   6 +-
 docs/src/main/paradox/typed/durable-state/cqrs.md  |   2 +-
 .../paradox/typed/durable-state/persistence.md     |  18 +-
 docs/src/main/paradox/typed/extending.md           |  14 +-
 docs/src/main/paradox/typed/fault-tolerance.md     |   6 +-
 docs/src/main/paradox/typed/from-classic.md        |  52 +-
 docs/src/main/paradox/typed/fsm.md                 |   8 +-
 docs/src/main/paradox/typed/guide/actors-intro.md  |   6 +-
 docs/src/main/paradox/typed/guide/introduction.md  |  22 +-
 docs/src/main/paradox/typed/guide/modules.md       |  73 +-
 docs/src/main/paradox/typed/guide/tutorial.md      |  10 +-
 docs/src/main/paradox/typed/guide/tutorial_1.md    |  30 +-
 docs/src/main/paradox/typed/guide/tutorial_2.md    |   2 +-
 docs/src/main/paradox/typed/guide/tutorial_3.md    |  16 +-
 docs/src/main/paradox/typed/guide/tutorial_4.md    |   4 +-
 docs/src/main/paradox/typed/guide/tutorial_5.md    |  13 +-
 docs/src/main/paradox/typed/index-cluster.md       |   2 +-
 .../typed/index-persistence-durable-state.md       |   2 +-
 docs/src/main/paradox/typed/index-persistence.md   |   2 +-
 docs/src/main/paradox/typed/index.md               |   2 +-
 .../src/main/paradox/typed/interaction-patterns.md |  20 +-
 docs/src/main/paradox/typed/logging.md             |  84 +--
 docs/src/main/paradox/typed/mailboxes.md           |  14 +-
 .../src/main/paradox/typed/persistence-snapshot.md |   6 +-
 docs/src/main/paradox/typed/persistence-testing.md |  18 +-
 docs/src/main/paradox/typed/persistence.md         |  44 +-
 docs/src/main/paradox/typed/reliable-delivery.md   |  16 +-
 .../main/paradox/typed/replicated-eventsourcing.md |  37 +-
 docs/src/main/paradox/typed/routers.md             |  12 +-
 docs/src/main/paradox/typed/stash.md               |   6 +-
 docs/src/main/paradox/typed/style-guide.md         |   2 +-
 docs/src/main/paradox/typed/testing-async.md       |   6 +-
 docs/src/main/paradox/typed/testing-sync.md        |   2 +-
 docs/src/main/paradox/typed/testing.md             |   4 +-
 docs/src/test/java/jdocs/actor/ActorDocTest.java   |   4 +-
 docs/src/test/java/jdocs/config/ConfigDocTest.java |   2 +-
 .../java/jdocs/discovery/DnsDiscoveryDocTest.java  |   2 +-
 .../java/jdocs/dispatcher/MyUnboundedMailbox.java  |   2 +-
 .../java/jdocs/extension/ExtensionDocTest.java     |   4 +-
 .../jdocs/extension/SettingsExtensionDocTest.java  |   2 +-
 .../LambdaPersistencePluginDocTest.java            |   4 +-
 .../src/test/java/jdocs/routing/RouterDocTest.java |   8 +-
 .../jdocs/serialization/SerializationDocTest.java  |   4 +-
 .../test/java/jdocs/stream/IntegrationDocTest.java |  30 +-
 .../java/jdocs/stream/ReactiveStreamsDocTest.java  |   8 +-
 .../src/test/java/jdocs/stream/RestartDocTest.java |   2 +-
 .../stream/TwitterStreamQuickstartDocTest.java     |  24 +-
 .../java/jdocs/stream/io/StreamFileDocTest.java    |   2 +-
 .../stream/javadsl/cookbook/RecipeParseLines.java  |   2 +-
 .../javadsl/cookbook/RecipeReduceByKeyTest.java    |   8 +-
 .../javadsl/cookbook/RecipeSourceFromFunction.java |   2 +-
 .../jdocs/stream/operators/source/Restart.java     |   6 +-
 .../java/jdocs/typed/tutorial_5/DeviceGroup.java   |   2 +-
 docs/src/test/scala/docs/actor/ActorDocSpec.scala  |   2 +-
 .../src/test/scala/docs/config/ConfigDocSpec.scala |   5 +-
 .../scala/docs/dispatcher/MyUnboundedMailbox.scala |   2 +-
 .../scala/docs/extension/ExtensionDocSpec.scala    |   2 +-
 .../persistence/PersistencePluginDocSpec.scala     |   2 +-
 .../docs/remoting/RemoteDeploymentDocSpec.scala    |  10 +-
 .../test/scala/docs/routing/RouterDocSpec.scala    |  20 +-
 .../docs/serialization/SerializationDocSpec.scala  |   4 +-
 .../scala/docs/stream/IntegrationDocSpec.scala     |  28 +-
 .../scala/docs/stream/ReactiveStreamsDocSpec.scala |   4 +-
 .../test/scala/docs/stream/RestartDocSpec.scala    |   4 +-
 .../stream/TwitterStreamQuickstartDocSpec.scala    |  24 +-
 .../docs/stream/cookbook/RecipeParseLines.scala    |   4 +-
 .../docs/stream/cookbook/RecipeReduceByKey.scala   |   8 +-
 .../scala/docs/stream/io/StreamFileDocSpec.scala   |   2 +-
 .../docs/stream/operators/source/Restart.scala     |   6 +-
 project/Paradox.scala                              |   8 +-
 224 files changed, 1469 insertions(+), 2871 deletions(-)

diff --git a/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/ClusterMetricsExtensionSpec.scala b/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/ClusterMetricsExtensionSpec.scala
index 8016ae8c6c..568e1f2d81 100644
--- a/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/ClusterMetricsExtensionSpec.scala
+++ b/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/ClusterMetricsExtensionSpec.scala
@@ -41,13 +41,13 @@ trait ClusterMetricsCommonConfig extends MultiNodeConfig {
     }
   }
 
-  // Enable metrics extension in akka-cluster-metrics.
+  // Enable metrics extension in pekko-cluster-metrics.
   def enableMetricsExtension = parseString("""
     pekko.extensions=["org.apache.pekko.cluster.metrics.ClusterMetricsExtension"]
     pekko.cluster.metrics.collector.enabled = on
     """)
 
-  // Disable metrics extension in akka-cluster-metrics.
+  // Disable metrics extension in pekko-cluster-metrics.
   def disableMetricsExtension = parseString("""
     pekko.extensions=["org.apache.pekko.cluster.metrics.ClusterMetricsExtension"]
     pekko.cluster.metrics.collector.enabled = off
diff --git a/cluster-sharding/src/main/scala/org/apache/pekko/cluster/sharding/RemoveInternalClusterShardingData.scala b/cluster-sharding/src/main/scala/org/apache/pekko/cluster/sharding/RemoveInternalClusterShardingData.scala
index a2df9db5cf..ac7fe523b5 100644
--- a/cluster-sharding/src/main/scala/org/apache/pekko/cluster/sharding/RemoveInternalClusterShardingData.scala
+++ b/cluster-sharding/src/main/scala/org/apache/pekko/cluster/sharding/RemoveInternalClusterShardingData.scala
@@ -48,12 +48,12 @@ import scala.annotation.nowarn
  *
  * Use this program as a standalone Java main program:
  * {{{
- * java -classpath <jar files, including akka-cluster-sharding>
+ * java -classpath <jar files, including pekko-cluster-sharding>
  *   org.apache.pekko.cluster.sharding.RemoveInternalClusterShardingData
  *     -2.3 entityType1 entityType2 entityType3
  * }}}
  *
- * The program is included in the `akka-cluster-sharding` jar file. It
+ * The program is included in the `pekko-cluster-sharding` jar file. It
  * is easiest to run it with same classpath and configuration as your ordinary
  * application. It can be run from sbt or maven in similar way.
  *
diff --git a/cluster/jmx-client/akka-cluster b/cluster/jmx-client/pekko-cluster
similarity index 96%
rename from cluster/jmx-client/akka-cluster
rename to cluster/jmx-client/pekko-cluster
index 55df10b321..e207d296ec 100755
--- a/cluster/jmx-client/akka-cluster
+++ b/cluster/jmx-client/pekko-cluster
@@ -214,13 +214,13 @@ do
         printf "%26s - %s\n" leader              "Asks the cluster who the current leader is"
         printf "%26s - %s\n" is-singleton        "Checks if the cluster is a singleton cluster (single node cluster)"
         printf "%26s - %s\n" is-available        "Checks if the member node is available"
-        printf "Where the <node-url> should be on the format of 'akka.tcp://actor-system-name@hostname:port'\n"
+        printf "Where the <node-url> should be in the format 'pekko.tcp://actor-system-name@hostname:port'\n"
         printf "\n"
         printf "Examples: $0 localhost 9999 is-available\n"
-        printf "          $0 localhost 9999 join akka.tcp://MySystem@darkstar:2552\n"
+        printf "          $0 localhost 9999 join pekko.tcp://MySystem@darkstar:2552\n"
         printf "          $0 localhost 9999 cluster-status\n"
         printf "          $0 localhost 9999 -p 2551 is-available\n"
-        printf "          $0 localhost 9999 -p 2551 join akka.tcp://MySystem@darkstar:2552\n"
+        printf "          $0 localhost 9999 -p 2551 join pekko.tcp://MySystem@darkstar:2552\n"
         printf "          $0 localhost 9999 -p 2551 cluster-status\n"
         exit 1
         ;;
diff --git a/docs/src/main/categories/actor-interop-operators.md b/docs/src/main/categories/actor-interop-operators.md
index 86d5d92d31..0f7de5bd7c 100644
--- a/docs/src/main/categories/actor-interop-operators.md
+++ b/docs/src/main/categories/actor-interop-operators.md
@@ -1 +1 @@
-Operators meant for inter-operating between Akka Streams and Actors:
+Operators meant for inter-operating between Pekko Streams and Actors:
diff --git a/docs/src/main/paradox/.htaccess b/docs/src/main/paradox/.htaccess
index 37e36c3da1..bf53ec2310 100644
--- a/docs/src/main/paradox/.htaccess
+++ b/docs/src/main/paradox/.htaccess
@@ -2,19 +2,8 @@ RedirectMatch 301 ^(.*)/scala/(.*) $1/$2?language=scala
 RedirectMatch 301 ^(.*)/java/(.*) $1/$2?language=java
 
 RedirectMatch 301 ^(.*)/stream/stages-overview\.html.* $1/stream/operators/index.html
-RedirectMatch 301 ^(.*)/project/migration-guide-1\.3\.x-2\.0\.x\.html.* $1/project/migration-guide-old.html
-RedirectMatch 301 ^(.*)/project/migration-guide-2\.0\.x-2\.1\.x.html.* $1/project/migration-guide-old.html
-RedirectMatch 301 ^(.*)/project/migration-guide-2\.1\.x-2\.2\.x.html.* $1/project/migration-guide-old.html
-RedirectMatch 301 ^(.*)/project/migration-guide-2\.2\.x-2\.3\.x.html.* $1/project/migration-guide-old.html
-RedirectMatch 301 ^(.*)/project/migration-guide-2\.2\.x-2\.3\.x.html.* $1/project/migration-guide-old.html
-RedirectMatch 301 ^(.*)/project/migration-guide-2\.3\.x-2\.4\.x.html.* $1/project/migration-guide-old.html
-RedirectMatch 301 ^(.*)/project/migration-guide-eventsourced-2\.3\.x.html.* $1/project/migration-guide-old.html
-RedirectMatch 301 ^(.*)/project/migration-guide-persistence-experimental-2\.3\.x-2\.4\.x.html.* $1/project/migration-guide-old.html
-RedirectMatch 301 ^(.*)/project/migration-guide-stream-2\.0-2\.4.html.* $1/project/migration-guide-old.html
 
 RedirectMatch 301 ^(.*)/stream/stages-overview\.html$ $1/stream/operators/index.html
-RedirectMatch 301 ^(.*)/agents\.html$ $1/project/migration-guide-2.5.x-2.6.x.html
-RedirectMatch 301 ^(.*)/typed-actors\.html$ $1/project/migration-guide-2.5.x-2.6.x.html#typedactor
 
 RedirectMatch 301 ^(.*)/stream/operators/Source-or-Flow/balance\.html$ $1/stream/operators/Balance.html
 RedirectMatch 301 ^(.*)/stream/operators/Source-or-Flow/broadcast\.html$ $1/stream/operators/Broadcast.html
diff --git a/docs/src/main/paradox/actors.md b/docs/src/main/paradox/actors.md
index 92e50e9b66..07db055775 100644
--- a/docs/src/main/paradox/actors.md
+++ b/docs/src/main/paradox/actors.md
@@ -31,14 +31,14 @@ Hewitt but have been popularized by the Erlang language and used for example at
 Ericsson with great success to build highly concurrent and reliable telecom
 systems.
 
-The API of Akka’s Actors is similar to Scala Actors which has borrowed some of
+The API of Pekko’s Actors is similar to Scala Actors which has borrowed some of
 its syntax from Erlang.
 
 ## Creating Actors
 
 @@@ note
 
-Since Akka enforces parental supervision every actor is supervised and
+Since Pekko enforces parental supervision every actor is supervised and
 (potentially) the supervisor of its children, it is advisable to
 familiarize yourself with @ref:[Actor Systems](general/actor-systems.md), @ref:[supervision](general/supervision.md)
 and @ref:[handling exceptions](general/supervision.md#actors-and-exceptions)
@@ -76,7 +76,7 @@ Scala
 Java
 :  @@snip [MyActor.java](/docs/src/test/java/jdocs/actor/MyActor.java) { #imports #my-actor }
 
-Please note that the Akka Actor @scala[`receive`] message loop is exhaustive, which is different compared to Erlang and the late Scala Actors. This means that you
+Please note that the Pekko Actor @scala[`receive`] message loop is exhaustive, which is different compared to Erlang and the late Scala Actors. This means that you
 need to provide a pattern match for all messages that it can accept and if you
 want to be able to handle unknown messages then you need to have a default case
 as in the example above. Otherwise an @apidoc[actor.UnhandledMessage] will be published to the @apidoc[actor.ActorSystem]'s
@@ -97,7 +97,7 @@ construction.
 
 #### Here is another example that you can edit and run in the browser:
 
-@@fiddle [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #fiddle_code template="Akka" layout="v75" minheight="400px" }
+@@fiddle [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #fiddle_code template="Pekko" layout="v75" minheight="400px" }
 
 @@@
 
@@ -293,13 +293,6 @@ described here: @ref:[What Restarting Means](general/supervision.md#supervision-
 When using a dependency injection framework, actor beans *MUST NOT* have
 singleton scope.
 
-@@@
-
-Techniques for dependency injection and integration with dependency injection frameworks
-are described in more depth in the
-[Using Akka with Dependency Injection](https://letitcrash.com/post/55958814293/akka-dependency-injection)
-guideline and the [Akka Java Spring](https://github.com/typesafehub/activator-akka-java-spring) tutorial.
-
 ## Actor API
 
 @scala[The @apidoc[actor.Actor] trait defines only one abstract method, the above mentioned
@@ -606,7 +599,7 @@ An example demonstrating actor look-up is given in @ref:[Remoting Sample](remoti
 
 @@@ warning { title=IMPORTANT }
 
-Messages can be any kind of object but have to be immutable. @scala[Scala] @java[Akka] can’t enforce
+Messages can be any kind of object but have to be immutable. @scala[Scala] @java[Pekko] can’t enforce
 immutability (yet) so this has to be by convention. @scala[Primitives like String, Int,
 Boolean are always immutable. Apart from these the recommended approach is to
 use Scala case classes that are immutable (if you don’t explicitly expose the
@@ -835,7 +828,7 @@ That has benefits such as:
 The @javadoc[Receive](pekko.actor.AbstractActor.Receive) can be implemented in other ways than using the `ReceiveBuilder` since in the
 end, it is just a wrapper around a Scala `PartialFunction`. In Java, you can implement `PartialFunction` by
 extending `AbstractPartialFunction`. For example, one could implement an adapter
-to [Vavr Pattern Matching DSL](https://docs.vavr.io/#_pattern_matching). See the [Akka Vavr sample project](https://github.com/akka/akka-samples/tree/2.5/akka-sample-vavr) for more details.
+to [Vavr Pattern Matching DSL](https://docs.vavr.io/#_pattern_matching). See the [Pekko Vavr sample project](https://github.com/apache/incubator-pekko-samples/tree/2.5/akka-sample-vavr) for more details.
 
 If the validation of the `ReceiveBuilder` match logic turns out to be a bottleneck for some of your
 actors you can consider implementing it at a lower level by extending @javadoc[UntypedAbstractActor](pekko.actor.UntypedAbstractActor) instead
@@ -1044,7 +1037,7 @@ message, i.e. not for top-level actors.
 
 ### Upgrade
 
-Akka supports hotswapping the Actor’s message loop (e.g. its implementation) at
+Pekko supports hotswapping the Actor’s message loop (e.g. its implementation) at
 runtime: invoke the @apidoc[context.become](actor.ActorContext) {scala="#become(behavior:org.apache.pekko.actor.Actor.Receive,discardOld:Boolean):Unit" java="#become(scala.PartialFunction,boolean)"} method from within the Actor.
 `become` takes a @scala[`PartialFunction[Any, Unit]`] @java[`PartialFunction<Object, BoxedUnit>`] that implements the new
 message handler. The hotswapped code is kept in a Stack that can be pushed and
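
As context for the `context.become` hunk above, hotswapping a classic actor's message loop looks roughly like the following sketch (illustrative only, not part of this diff):

```scala
import org.apache.pekko.actor.Actor

// Sketch: an actor that swaps its message loop at runtime via context.become.
// By default each become call replaces the current behavior.
class HotSwapActor extends Actor {
  def angry: Receive = {
    case "bar" => context.become(happy) // switch to the happy behavior
  }
  def happy: Receive = {
    case "foo" => context.become(angry) // switch to the angry behavior
  }
  def receive: Receive = {
    case "foo" => context.become(angry)
    case "bar" => context.become(happy)
  }
}
```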
diff --git a/docs/src/main/paradox/additional/deploying.md b/docs/src/main/paradox/additional/deploying.md
index 884047fb64..afbca8027e 100644
--- a/docs/src/main/paradox/additional/deploying.md
+++ b/docs/src/main/paradox/additional/deploying.md
@@ -1,18 +1,16 @@
 ---
-project.description: How to deploy Akka Cluster to Kubernetes and Docker.
+project.description: How to deploy Pekko Cluster to Kubernetes and Docker.
 ---
 # Deploying
 
 ## Deploying to Kubernetes
 
-[Akka Cloud Platform](https://developer.lightbend.com/docs/akka-platform-guide/deployment/index.html) is the easiest way to deploy an Akka Cluster application to Amazon Elastic Kubernetes Service (EKS) or Google Kubernetes Engine (GKE).
-
-Alternatively, you can deploy to Kubernetes according to the guide and example project for [Deploying Akka Cluster to Kubernetes](https://doc.akka.io/docs/akka-management/current/kubernetes-deployment/index.html), but that requires more expertise of Kubernetes.
+Deploy to Kubernetes according to the guide and example project for [Deploying Pekko Cluster to Kubernetes](https://doc.akka.io/docs/akka-management/current/kubernetes-deployment/index.html); note that this approach requires some expertise with Kubernetes.
 
 ### Cluster bootstrap
 
 To take advantage of running inside Kubernetes while forming a cluster, 
-[Akka Cluster Bootstrap](https://doc.akka.io/docs/akka-management/current/bootstrap/) helps forming or joining a cluster using Akka Discovery to discover peer nodes. 
+[Pekko Cluster Bootstrap](https://doc.akka.io/docs/akka-management/current/bootstrap/) helps with forming or joining a cluster, using Pekko Discovery to discover peer nodes
 with the Kubernetes API or Kubernetes via DNS.  
 
 You can look at the
@@ -25,16 +23,16 @@ To avoid CFS scheduler limits, it is best not to use `resources.limits.cpu` limi
 
 ## Deploying to Docker containers
 
-You can use both Akka remoting and Akka Cluster inside Docker containers. Note
+You can use both Pekko remoting and Pekko Cluster inside Docker containers. Note
 that you will need to take special care with the network configuration when using Docker,
-described here: @ref:[Akka behind NAT or in a Docker container](../remoting-artery.md#remote-configuration-nat-artery)
+described here: @ref:[Pekko behind NAT or in a Docker container](../remoting-artery.md#remote-configuration-nat-artery)
 
 You can look at the
 @java[@extref[Cluster with docker-compose example project](samples:akka-sample-cluster-docker-compose-java)]
 @scala[@extref[Cluster with docker-compose example project](samples:akka-sample-cluster-docker-compose-scala)]
 to see what this looks like in practice.
 
-For the JVM to run well in a Docker container, there are some general (not Akka specific) parameters that might need tuning:
+For the JVM to run well in a Docker container, there are some general (not Pekko specific) parameters that might need tuning:
 
 ### Resource constraints
 
diff --git a/docs/src/main/paradox/additional/faq.md b/docs/src/main/paradox/additional/faq.md
index a67dadf76c..a5b5de2250 100644
--- a/docs/src/main/paradox/additional/faq.md
+++ b/docs/src/main/paradox/additional/faq.md
@@ -1,26 +1,12 @@
 # Frequently Asked Questions
 
-## Akka Project
+## Pekko Project
 
-### Where does the name Akka come from?
+### Where does the name Pekko come from?
 
-It is the name of a beautiful Swedish [mountain](https://en.wikipedia.org/wiki/%C3%81hkk%C3%A1)
-up in the northern part of Sweden called Laponia. The mountain is also sometimes
-called 'The Queen of Laponia'.
-
-Akka is also the name of a goddess in the Sámi (the native Swedish population)
-mythology. She is the goddess that stands for all the beauty and good in the
-world. The mountain can be seen as the symbol of this goddess.
-
-Also, the name AKKA is a palindrome of the letters A and K as in Actor Kernel.
-
-Akka is also:
-
- * the name of the goose that Nils traveled across Sweden on in [The Wonderful Adventures of Nils](https://en.wikipedia.org/wiki/The_Wonderful_Adventures_of_Nils) by the Swedish writer Selma Lagerlöf.
- * the Finnish word for 'nasty elderly woman' and the word for 'elder sister' in the Indian languages Tamil, Telugu, Kannada and Marathi.
- * a [font](https://www.dafont.com/pekko.font)
- * a town in Morocco
- * a near-earth asteroid
+The former name of this project, Akka, is a goddess in the Sámi (the native Swedish population)
+mythology. She is the goddess that stands for all the beauty and good in the world. Pekko builds on this
+foundation and is the Finnish god of farming & protector of the crops.
 
 ## Resources with Explicit Lifecycle
 
@@ -56,7 +42,7 @@ mailboxes and thereby filling up the heap memory.
 ### How reliable is the message delivery?
 
 The general rule is **at-most-once delivery**, i.e. no guaranteed delivery.
-Stronger reliability can be built on top, and Akka provides tools to do so.
+Stronger reliability can be built on top, and Pekko provides tools to do so.
 
 Read more in @ref:[Message Delivery Reliability](../general/message-delivery-reliability.md).
 
@@ -70,11 +56,4 @@ To turn on debug logging in your actor system add the following to your configur
 pekko.loglevel = DEBUG
 ```
 
-Read more about it in the docs for @ref:[Logging](../typed/logging.md).
-
-# Other questions?
-
-Do you have a question not covered here? Find out how to
-[get involved in the community](https://akka.io/get-involved) or
-[set up a time](https://lightbend.com/contact) to discuss enterprise-grade
-expert support from [Lightbend](https://www.lightbend.com/).
+Read more about it in the docs for @ref:[Logging](../typed/logging.md).
\ No newline at end of file
diff --git a/docs/src/main/paradox/additional/operations.md b/docs/src/main/paradox/additional/operations.md
index 8ddfb9b7af..8d0c8dd70d 100644
--- a/docs/src/main/paradox/additional/operations.md
+++ b/docs/src/main/paradox/additional/operations.md
@@ -1,5 +1,5 @@
 ---
-project.description: Operating, managing and monitoring Akka and Akka Cluster applications.
+project.description: Operating, managing and monitoring Pekko and Pekko Cluster applications.
 ---
 # Operating a Cluster
 
@@ -14,13 +14,13 @@ When starting clusters on cloud systems such as Kubernetes, AWS, Google Cloud, A
 you may want to automate the discovery of nodes for the cluster joining process, using your cloud providers,
 cluster orchestrator, or other form of service discovery (such as managed DNS).
 
-The open source Akka Management library includes the @extref:[Cluster Bootstrap](akka-management:bootstrap/index.html)
+The open source Pekko Management library includes the @extref:[Cluster Bootstrap](pekko-management:bootstrap/index.html)
 module which handles just that. Please refer to its documentation for more details.
 
 @@@ note
  
-If you are running Akka in a Docker container or the nodes for some other reason have separate internal and
-external ip addresses you must configure remoting according to @ref:[Akka behind NAT or in a Docker container](../remoting-artery.md#remote-configuration-nat-artery)
+If you are running Pekko in a Docker container or the nodes for some other reason have separate internal and
+external ip addresses you must configure remoting according to @ref:[Pekko behind NAT or in a Docker container](../remoting-artery.md#remote-configuration-nat-artery)
 
 @@@
  
@@ -31,14 +31,14 @@ See @ref:[Rolling Updates, Cluster Shutdown and Coordinated Shutdown](../additio
 ## Cluster Management
 
 There are several management tools for the cluster. 
-Complete information on running and managing Akka applications can be found in 
-the @exref:[Akka Management](akka-management:) project documentation.
+Complete information on running and managing Pekko applications can be found in 
+the @extref:[Pekko Management](pekko-management:) project documentation.
 
 <a id="cluster-http"></a>
 ### HTTP
 
 Information and management of the cluster is available with a HTTP API.
-See documentation of @extref:[Akka Management](akka-management:).
+See documentation of @extref:[Pekko Management](pekko-management:).
 
 <a id="cluster-jmx"></a>
 ### JMX
@@ -55,11 +55,4 @@ From JMX you can:
  * mark any node in the cluster as down
  * tell any node in the cluster to leave
 
-Member nodes are identified by their address, in format *`akka://actor-system-name@hostname:port`*.
-
-## Monitoring and Observability
-
-Aside from log monitoring and the monitoring provided by your APM or platform provider, [Lightbend Telemetry](https://developer.lightbend.com/docs/telemetry/current/instrumentations/akka/pekko.html),
-available through a [Lightbend Subscription](https://www.lightbend.com/lightbend-subscription),
-can provide additional insights in the run-time characteristics of your application, including metrics, events,
-and distributed tracing for Akka Actors, Cluster, HTTP, and more.
+Member nodes are identified by their address, in the format *`pekko://actor-system-name@hostname:port`*.
diff --git a/docs/src/main/paradox/additional/osgi.md b/docs/src/main/paradox/additional/osgi.md
index ed9b644d23..2de8e2d9b7 100644
--- a/docs/src/main/paradox/additional/osgi.md
+++ b/docs/src/main/paradox/additional/osgi.md
@@ -1,11 +1,11 @@
-# Akka in OSGi
+# Pekko in OSGi
 
 ## Dependency
 
-To use Akka in OSGi, you must add the following dependency in your project:
+To use Pekko in OSGi, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -28,7 +28,7 @@ Facilities emerged to "wrap" binary JARs so they could be used as bundles, but t
 situations. An application of the "80/20 Rule" here would have that "80% of the complexity is with 20% of the configuration",
 but it was enough to give OSGi a reputation that has stuck with it to this day.
 
-This document aims to the productivity basics folks need to use it with Akka, the 20% that users need to get 80% of what they want.
+This document aims to provide the productivity basics folks need to use it with Pekko, the 20% that users need to get 80% of what they want.
 For more information than is provided here, [OSGi In Action](https://www.manning.com/books/osgi-in-action) is worth exploring.
 
 ## Core Components and Structure of OSGi Applications
@@ -83,7 +83,7 @@ in an application composed of multiple JARs to reside under a single package nam
 might scan all classes from `com.example.plugins` for specific service implementations with that package existing in
 several contributed JARs.
    While it is possible to support overlapping packages with complex manifest headers, it's much better to use non-overlapping
-package spaces and facilities such as @ref:[Akka Cluster](../typed/cluster-concepts.md)
+package spaces and facilities such as @ref:[Pekko Cluster](../typed/cluster-concepts.md)
 for service discovery. Stylistically, many organizations opt to use the root package path as the name of the bundle
 distribution file.
 
@@ -94,23 +94,23 @@ separate classloaders for every bundle, resource files such as configurations ar
 
 ## Configuring the OSGi Framework
 
-To use Akka in an OSGi environment, the container must be configured such that the `org.osgi.framework.bootdelegation`
+To use Pekko in an OSGi environment, the container must be configured such that the `org.osgi.framework.bootdelegation`
 property delegates the `sun.misc` package to the boot classloader instead of resolving it through the normal OSGi class space.
 
 ## Intended Use
 
-Akka only supports the usage of an ActorSystem strictly confined to a single OSGi bundle, where that bundle contains or imports
+Pekko only supports the usage of an ActorSystem strictly confined to a single OSGi bundle, where that bundle contains or imports
 all of the actor system's requirements. This means that the approach of offering an ActorSystem as a service to which Actors
 can be deployed dynamically via other bundles is not recommended — an ActorSystem and its contained actors are not meant to be
 dynamic in this way. ActorRefs may safely be exposed to other bundles.
 
 ## Activator
 
-To bootstrap Akka inside an OSGi environment, you can use the @apidoc[osgi.ActorSystemActivator](osgi.ActorSystemActivator) class
+To bootstrap Pekko inside an OSGi environment, you can use the @apidoc[osgi.ActorSystemActivator](osgi.ActorSystemActivator) class
 to conveniently set up the @apidoc[ActorSystem](actor.ActorSystem).
 
 @@snip [Activator.scala](/osgi/src/test/scala/docs/osgi/Activator.scala) { #Activator }
 
-The goal here is to map the OSGi lifecycle more directly to the Akka lifecycle. The @apidoc[ActorSystemActivator](osgi.ActorSystemActivator) creates
+The goal here is to map the OSGi lifecycle more directly to the Pekko lifecycle. The @apidoc[ActorSystemActivator](osgi.ActorSystemActivator) creates
 the actor system with a class loader that finds resources (`application.conf` and `reference.conf` files) and classes
 from the application bundle and all transitive dependencies.
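
For reference, an activator built on this API is sketched below (hypothetical bundle code; the actor and service registration are illustrative, not part of this diff):

```scala
import org.apache.pekko.actor.{ Actor, ActorSystem, Props }
import org.apache.pekko.osgi.ActorSystemActivator
import org.osgi.framework.BundleContext

class EchoActor extends Actor {
  def receive: Receive = { case msg => sender() ! msg } // illustrative behavior
}

// Sketch: bootstrap the ActorSystem when the OSGi bundle starts
class MyActivator extends ActorSystemActivator {
  override def configure(context: BundleContext, system: ActorSystem): Unit = {
    system.actorOf(Props[EchoActor](), "echo")
    // optionally expose the system as an OSGi service to this bundle's consumers
    registerService(context, system)
  }
}
```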
diff --git a/docs/src/main/paradox/additional/packaging.md b/docs/src/main/paradox/additional/packaging.md
index 0e19019416..63b4bace4b 100644
--- a/docs/src/main/paradox/additional/packaging.md
+++ b/docs/src/main/paradox/additional/packaging.md
@@ -1,13 +1,13 @@
 ---
-project.description: How to package an Akka application for deployment.
+project.description: How to package a Pekko application for deployment.
 ---
 # Packaging
 
-The simplest way to use Akka is as a regular library, adding the Akka jars you
+The simplest way to use Pekko is as a regular library, adding the Pekko jars you
 need to your classpath (in case of a web app, in `WEB-INF/lib`).
 
 In many cases, such as deploying to an analytics cluster, building your application into a single 'fat jar' is needed.
-When building fat jars, some additional configuration is needed to merge Akka config files, because each Akka jar
+When building fat jars, some additional configuration is needed to merge Pekko config files, because each Pekko jar
 contains a `reference.conf` resource with default values.
 
 The method for ensuring `reference.conf` and other `*.conf` resources are merged depends on the tooling you use to create the fat jar:
@@ -19,7 +19,7 @@ The method for ensuring `reference.conf` and other `*.conf` resources are merged
 ## sbt: Native Packager
 
 [sbt-native-packager](https://github.com/sbt/sbt-native-packager) is a tool for creating
-distributions of any type of application, including Akka applications.
+distributions of any type of application, including Pekko applications.
 
 Define sbt version in `project/build.properties` file:
 
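
Since the packaging hunk above mentions merging `reference.conf` resources from each Pekko jar, a typical sbt-assembly setup is sketched here (a hypothetical build.sbt fragment assuming the sbt-assembly plugin is installed; not part of this diff):

```scala
// Concatenate reference.conf files from all jars so library defaults survive the fat jar.
assembly / assemblyMergeStrategy := {
  case "reference.conf" => MergeStrategy.concat
  case x =>
    // fall back to the plugin's default strategy for everything else
    val oldStrategy = (assembly / assemblyMergeStrategy).value
    oldStrategy(x)
}
```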
diff --git a/docs/src/main/paradox/additional/rolling-updates.md b/docs/src/main/paradox/additional/rolling-updates.md
index 1ec7dece3c..950ecb56ce 100644
--- a/docs/src/main/paradox/additional/rolling-updates.md
+++ b/docs/src/main/paradox/additional/rolling-updates.md
@@ -1,5 +1,5 @@
 ---
-project.description: How to do rolling updates and restarts with Akka Cluster.
+project.description: How to do rolling updates and restarts with Pekko Cluster.
 ---
 # Rolling Updates
 
@@ -11,17 +11,17 @@ versus being able to do a rolling update.
 @@@
 
 A rolling update is the process of replacing one version of the system with another without downtime.
-The changes can be new code, changed dependencies such as new Akka version, or modified configuration.
+The changes can be new code, changed dependencies such as new Pekko version, or modified configuration.
 
-In Akka, rolling updates are typically used for a stateful Akka Cluster where you can't run two separate clusters in
+In Pekko, rolling updates are typically used for a stateful Pekko Cluster where you can't run two separate clusters in
 parallel during the update, for example in blue green deployments.
 
-For rolling updates related to Akka dependency version upgrades and the migration guides, please see
-@ref:[Rolling Updates and Akka versions](../project/rolling-update.md)
+For rolling updates related to Pekko dependency version upgrades and the migration guides, please see
+@ref:[Rolling Updates and Pekko versions](../project/rolling-update.md)
 
 ## Serialization Compatibility
 
-There are two parts of Akka that need careful consideration when performing an rolling update.
+There are two parts of Pekko that need careful consideration when performing a rolling update.
 
 1. Compatibility of remote message protocols. Old nodes may send messages to new nodes and vice versa.
 1. Serialization format of persisted events and snapshots. New nodes must be able to read old data, and
@@ -63,7 +63,7 @@ started on new nodes. Messages to shards that were stopped on the old nodes will
 on the new nodes, without waiting for rebalance actions. 
 
 You should also enable the @ref:[health check for Cluster Sharding](../typed/cluster-sharding.md#health-check) if
-you use Akka Management. The readiness check will delay incoming traffic to the node until Sharding has been
+you use Pekko Management. The readiness check will delay incoming traffic to the node until Sharding has been
 initialized and can accept messages.
 
 The `ShardCoordinator` is itself a cluster singleton.
@@ -101,61 +101,14 @@ if there is an unreachability problem Split Brain Resolver would make a decision
 ## Configuration Compatibility Checks
 
 During rolling updates the configuration from existing nodes should pass the Cluster configuration compatibility checks.
-For example, it is possible to migrate Cluster Sharding from Classic to Typed Actors in a rolling update using a two step approach
-as of Akka version `2.5.23`:
+For example, it is possible to migrate Cluster Sharding from Classic to Typed Actors in a rolling update using a two step approach:
 
 * Deploy with the new nodes set to `pekko.cluster.configuration-compatibility-check.enforce-on-join = off`
 and ensure all nodes are in this state
 * Deploy again and with the new nodes set to `pekko.cluster.configuration-compatibility-check.enforce-on-join = on`. 
   
 Full documentation about enforcing these checks on joining nodes and optionally adding custom checks can be found in  
-@ref:[Akka Cluster configuration compatibility checks](../typed/cluster.md#configuration-compatibility-check).
-
-## Rolling Updates and Migrating Akka
-
-### From Java serialization to Jackson
- 
-If you are migrating from Akka 2.5 to 2.6, and use Java serialization you can replace it with, for example, the new
-@ref:[Serialization with Jackson](../serialization-jackson.md) and still be able to perform a rolling updates
-without bringing down the entire cluster.
-
-The procedure for changing from Java serialization to Jackson would look like:
-
-1. Rolling update from 2.5.24 (or later) to 2.6.0
-    * Use config `pekko.actor.allow-java-serialization=on`.
-    * Roll out the change.
-    * Java serialization will be used as before.
-    * This step is optional and you could combine it with next step if you like, but could be good to
-      make one change at a time.
-1. Rolling update to support deserialization but not enable serialization
-    * Change message classes by adding the marker interface and possibly needed annotations as
-      described in @ref:[Serialization with Jackson](../serialization-jackson.md).
-    * Test the system with the new serialization in a new test cluster (no rolling update).
-    * Remove the binding for the marker interface in `pekko.actor.serialization-bindings`, so that Jackson is not used for serialization (toBinary) yet.
-    * Configure `pekko.serialization.jackson.allowed-class-prefix=["com.myapp"]`
-        * This is needed for Jackson deserialization when the `serialization-bindings` isn't defined.
-        * Replace `com.myapp` with the name of the root package of your application to trust all classes.
-    * Roll out the change.
-    * Java serialization is still used, but this version is prepared for next roll out.
-1. Rolling update to enable serialization with Jackson.
-    * Add the binding to the marker interface in `pekko.actor.serialization-bindings` to the Jackson serializer.
-    * Remove `pekko.serialization.jackson.allowed-class-prefix`.
-    * Roll out the change.
-    * Old nodes will still send messages with Java serialization, and that can still be deserialized by new nodes.
-    * New nodes will send messages with Jackson serialization, and old node can deserialize those because they were
-      prepared in previous roll out.
-1. Rolling update to disable Java serialization
-    * Remove `allow-java-serialization` config, to use the default `allow-java-serialization=off`.
-    * Remove `warn-about-java-serializer-usage` config if you had changed that, to use the default `warn-about-java-serializer-usage=on`. 
-    * Roll out the change.
-    
-A similar approach can be used when changing between other serializers, for example between Jackson and Protobuf.    
-
-### Akka Typed with Receptionist or Cluster Receptionist
-
-If you are migrating from Akka 2.5 to 2.6, and use the `Receptionist` or `Cluster Receptionist` with Akka Typed, 
-during a rolling update information will not be disseminated between 2.5 and 2.6 nodes.
-However once all old nodes have been phased out during the rolling update it will work properly again.
+@ref:[Pekko Cluster configuration compatibility checks](../typed/cluster.md#configuration-compatibility-check).
 
 ## When Shutdown Startup Is Required
  
@@ -184,8 +137,6 @@ and are using PersistenceFSM with Cluster Sharding, a full shutdown is required
 
 If you've migrated from classic remoting to Artery
 which has a completely different protocol, a rolling update is not supported.
-For more details on this migration
-see @ref:[the migration guide](../project/migration-guide-2.5.x-2.6.x.md#migrating-from-classic-remoting-to-artery).
 
 ### Changing remoting transport
 
diff --git a/docs/src/main/paradox/assets/js/scalafiddle.js b/docs/src/main/paradox/assets/js/scalafiddle.js
index a62ca16dc3..90e75d7b73 100644
--- a/docs/src/main/paradox/assets/js/scalafiddle.js
+++ b/docs/src/main/paradox/assets/js/scalafiddle.js
@@ -1,8 +1,8 @@
 window.scalaFiddleTemplates = {
-  "Akka": {
-    pre: "// $FiddleDependency org.akka-js %%% akkajsactor % 2.2.6.1 \n" +
-    "// $FiddleDependency org.akka-js %%% akkajsactorstream % 2.2.6.1 \n" +
-    "// $FiddleDependency org.akka-js %%% akkajsactortyped % 2.2.6.1 \n",
+  "Pekko": {
+    pre: "// $FiddleDependency org.pekko-js %%% pekkojsactor % 2.2.6.1 \n" +
+    "// $FiddleDependency org.pekko-js %%% pekkojsactorstream % 2.2.6.1 \n" +
+    "// $FiddleDependency org.pekko-js %%% pekkojsactortyped % 2.2.6.1 \n",
     post: ""
   }
 }
diff --git a/docs/src/main/paradox/assets/js/warnOldDocs.js b/docs/src/main/paradox/assets/js/warnOldDocs.js
index 433c5a9369..91529c71bc 100644
--- a/docs/src/main/paradox/assets/js/warnOldDocs.js
+++ b/docs/src/main/paradox/assets/js/warnOldDocs.js
@@ -1,27 +1,27 @@
 jQuery(document).ready(function ($) {
 
   function initOldVersionWarnings($) {
-    $.get("//akka.io/versions.json", function (akkaVersionsData) {
+    $.get("//pekko.io/versions.json", function (pekkoVersionsData) {
       var site = extractCurrentPageInfo();
       if (site.v === 'snapshot') {
         showSnapshotWarning(site)
       } else {
         var matchingMinor =
-          Object.keys(akkaVersionsData[site.p])
+          Object.keys(pekkoVersionsData[site.p])
             .find(function(s) { return site.v.startsWith(s) })
         if (matchingMinor) {
-          showVersionWarning(site, akkaVersionsData, matchingMinor)
+          showVersionWarning(site, pekkoVersionsData, matchingMinor)
         }
       }
     })
   }
 
-  function getInstead(akkaVersionsData, project, instead) {
+  function getInstead(pekkoVersionsData, project, instead) {
     if (Array.isArray(instead)) {
-      var found = akkaVersionsData[instead[0]][instead[1]]
+      var found = pekkoVersionsData[instead[0]][instead[1]]
       var proj = instead[0]
     } else {
-      var found = akkaVersionsData[project][instead]
+      var found = pekkoVersionsData[project][instead]
       var proj = project
     }
     return {"latest": found.latest, "project": proj}
@@ -72,15 +72,15 @@ jQuery(document).ready(function ($) {
       .show()
   }
 
-  function showVersionWarning(site, akkaVersionsData, series) {
+  function showVersionWarning(site, pekkoVersionsData, series) {
     var version = site.v,
-      seriesInfo = akkaVersionsData[site.p][series]
+      seriesInfo = pekkoVersionsData[site.p][series]
 
 
     if (versionWasAcked(site.p, version)) {
       // hidden for a day
     } else if (seriesInfo.outdated) {
-      var instead = getInstead(akkaVersionsData, site.p, seriesInfo.instead)
+      var instead = getInstead(pekkoVersionsData, site.p, seriesInfo.instead)
       var insteadSeries = targetUrl(false, site, instead)
       var insteadPage = targetUrl(true, site, instead)
 
@@ -88,7 +88,7 @@ jQuery(document).ready(function ($) {
         site,
         version,
         '<h3 class="callout-title">Old Version</h3>' +
-        '<p><span style="font-weight: bold">This version of Akka (' + site.p + ' / ' + version + ') is outdated and not supported! </span></p>' +
+        '<p><span style="font-weight: bold">This version of Pekko (' + site.p + ' / ' + version + ') is outdated and not supported! </span></p>' +
         '<p>Please upgrade to version <a href="' + insteadSeries + '">' + instead.latest + '</a> as soon as possible.</p>' +
         '<p id="samePageLink"></p>')
 
@@ -106,7 +106,7 @@ jQuery(document).ready(function ($) {
         site,
         version,
         '<h3 class="callout-title">Outdated version</h3>' +
-        '<p>You are browsing the docs for Akka ' + version + ', however the latest release in this series is: ' +
+        '<p>You are browsing the docs for Pekko ' + version + ', however the latest release in this series is: ' +
           '<a href="' + targetUrl(true, site, seriesInfo) + '">' + seriesInfo.latest + '</a>. <br/></p>');
     }
   }
@@ -140,7 +140,7 @@ jQuery(document).ready(function ($) {
       base = '' + window.location
 
     // strip off leading /docs/
-    path = path.substring(path.indexOf("akka"))
+    path = path.substring(path.indexOf("pekko"))
     base = base.substring(0, base.indexOf(path))
     var projectEnd = path.indexOf("/")
     var versionEnd = path.indexOf("/", projectEnd + 1)
diff --git a/docs/src/main/paradox/camel.md b/docs/src/main/paradox/camel.md
deleted file mode 100644
index 978cc34d17..0000000000
--- a/docs/src/main/paradox/camel.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Camel
-
-The akka-camel module was deprecated in 2.5 and has been removed in 2.6.
-
-As an alternative we recommend [Alpakka](https://doc.akka.io/docs/alpakka/current/). This is of course not a drop-in replacement.
-
-If anyone is interested in setting up akka-camel as a separate community-maintained repository then please get in touch. 
diff --git a/docs/src/main/paradox/cluster-client.md b/docs/src/main/paradox/cluster-client.md
index 7c775537e2..13cf85d48e 100644
--- a/docs/src/main/paradox/cluster-client.md
+++ b/docs/src/main/paradox/cluster-client.md
@@ -2,8 +2,8 @@
 
 @@@ warning
 
-Cluster Client is deprecated in favor of using [Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/index.html).
-It is not advised to build new applications with Cluster Client, and existing users @ref[should migrate](#migration-to-akka-grpc).
+Cluster Client is deprecated in favor of using [Pekko gRPC](https://doc.akka.io/docs/akka-grpc/current/index.html).
+It is not advised to build new applications with Cluster Client, and existing users @ref[should migrate](#migration-to-pekko-grpc).
 
 @@@
 
@@ -36,12 +36,12 @@ contact points retrieved from the previous establishment, or periodically refres
 i.e. not necessarily the initial contact points.
 
 Using the @apidoc[ClusterClient] for communicating with a cluster from the outside requires that the system with the client
-can both connect and be connected with Akka Remoting from all the nodes in the cluster with a receptionist.
-This creates a tight coupling in that the client and cluster systems may need to have the same version of both Akka, libraries, message classes, serializers and potentially even the JVM. In many cases it is a better solution
+can both connect and be connected with Pekko Remoting from all the nodes in the cluster with a receptionist.
+This creates a tight coupling in that the client and cluster systems may need to have the same versions of Pekko, libraries, message classes, serializers and potentially even the JVM. In many cases it is a better solution
 to use a more explicit and decoupling protocol such as [HTTP](https://doc.akka.io/docs/akka-http/current/index.html) or
 [gRPC](https://doc.akka.io/docs/akka-grpc/current/).
 
-Additionally, since Akka Remoting is primarily designed as a protocol for Akka Cluster there is no explicit resource
+Additionally, since Pekko Remoting is primarily designed as a protocol for Pekko Cluster there is no explicit resource
 management, when a @apidoc[ClusterClient] has been used it will cause connections with the cluster until the ActorSystem is
 stopped (unlike other kinds of network clients).
 
@@ -149,10 +149,6 @@ Java
 You will probably define the address information of the initial contact points in configuration or system property.
 See also @ref:[Configuration](#cluster-client-config).
 
-A more comprehensive sample is available in the tutorial named
-@scala[[Distributed workers with Akka and Scala](https://github.com/typesafehub/activator-akka-distributed-workers).]
-@java[[Distributed workers with Akka and Java](https://github.com/typesafehub/activator-akka-distributed-workers-java).]
-
 ## ClusterClientReceptionist Extension
 
 In the example above the receptionist is started and accessed with the `org.apache.pekko.cluster.client.ClusterClientReceptionist` extension.
@@ -229,22 +225,22 @@ are entirely dynamic and the entire cluster might shut down or crash, be restart
 client will be stopped in that case a monitoring actor can watch it and upon `Terminate` a new set of initial
 contacts can be fetched and a new cluster client started.
 
-## Migration to Akka gRPC
+## Migration to Pekko gRPC
 
 Cluster Client is deprecated and it is not advised to build new applications with it.
-As a replacement, we recommend using [Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/)
+As a replacement, we recommend using [Pekko gRPC](https://doc.akka.io/docs/akka-grpc/current/)
 with an application-specific protocol. The benefits of this approach are:
 
-* Improved security by using TLS for gRPC (HTTP/2) versus exposing Akka Remoting outside the Akka Cluster
+* Improved security by using TLS for gRPC (HTTP/2) versus exposing Pekko Remoting outside the Pekko Cluster
 * Easier to update clients and servers independent of each other
 * Improved protocol definition between client and server
-* Usage of [Akka gRPC Service Discovery](https://doc.akka.io/docs/akka-grpc/current/client/configuration.html#using-akka-discovery-for-endpoint-discovery)
-* Clients do not need to use Akka
-* See also [gRPC versus Akka Remoting](https://doc.akka.io/docs/akka-grpc/current/whygrpc.html#grpc-vs-akka-remoting)
+* Usage of [Pekko gRPC Service Discovery](https://doc.akka.io/docs/akka-grpc/current/client/configuration.html#using-akka-discovery-for-endpoint-discovery)
+* Clients do not need to use Pekko
+* See also [gRPC versus Pekko Remoting](https://doc.akka.io/docs/akka-grpc/current/whygrpc.html#grpc-vs-akka-remoting)
 
 ### Migrating directly
 
-Existing users of Cluster Client may migrate directly to Akka gRPC and use it
+Existing users of Cluster Client may migrate directly to Pekko gRPC and use it
 as documented in [its documentation](https://doc.akka.io/docs/akka-grpc/current/).
 
 ### Migrating gradually
@@ -253,12 +249,12 @@ If your application extensively uses Cluster Client, a more gradual migration
 might be desired that requires less re-writing of the application. That migration step is described in this section. We recommend migrating directly if feasible,
 though.
 
-An example is provided to illustrate an approach to migrate from the deprecated Cluster Client to Akka gRPC,
+An example is provided to illustrate an approach to migrate from the deprecated Cluster Client to Pekko gRPC,
 with minimal changes to your existing code. The example is intended to be copied and adjusted to your needs.
 It will not be provided as a published artifact.
 
-* [akka-samples/akka-sample-cluster-cluster-client-grpc-scala](https://github.com/akka/akka-samples/tree/2.6/akka-sample-cluster-client-grpc-scala) implemented in Scala
-* [akka-samples/akka-sample-cluster-cluster-client-grpc-java](https://github.com/akka/akka-samples/tree/2.6/akka-sample-cluster-client-grpc-java) implemented in Java
+* [pekko-samples/pekko-bom-sample-cluster-client-grpc-scala](https://github.com/apache/incubator-pekko-samples/tree/2.6/pekko-bom-sample-cluster-client-grpc-scala) implemented in Scala
+* [pekko-samples/pekko-bom-sample-cluster-client-grpc-java](https://github.com/apache/incubator-pekko-samples/tree/2.6/pekko-bom-sample-cluster-client-grpc-java) implemented in Java
 
 The example is still using an actor on the client-side to have an API that is very close
 to the original Cluster Client. The messages this actor can handle correspond to the
@@ -269,24 +265,24 @@ The `ClusterClient` actor delegates those messages to the gRPC client, and on th
 server-side those are translated and delegated to the destination actors that
 are registered via the `ClusterClientReceptionist` in the same way as in the original.
 
-Akka gRPC is used as the transport for the messages between client and server, instead of Akka Remoting.
+Pekko gRPC is used as the transport for the messages between client and server, instead of Pekko Remoting.
 
-The application specific messages are wrapped and serialized with Akka Serialization,
+The application specific messages are wrapped and serialized with Pekko Serialization,
 which means that care must be taken to keep wire compatibility when changing any messages used
-between the client and server. The Akka configuration of Akka serializers must be the same (or
+between the client and server. The Pekko configuration of Pekko serializers must be the same (or
 be compatible) on the client and the server.
 
 #### Next steps
 
-After this first migration step from Cluster Client to Akka gRPC, you can start
+After this first migration step from Cluster Client to Pekko gRPC, you can start
 replacing calls to `ClusterClientReceptionistService` with new,
 application-specific gRPC endpoints.
 
 #### Differences
 
 Aside from the underlying implementation using gRPC instead of Actor messages
-and Akka Remoting it's worth pointing out the following differences between
-the Cluster Client and the example emulating Cluster Client with Akka gRPC as
+and Pekko Remoting it's worth pointing out the following differences between
+the Cluster Client and the example emulating Cluster Client with Pekko gRPC as
 transport.
 
 ##### Single request-reply
@@ -299,7 +295,7 @@ based API.
 
 ##### Initial contact points
 
-Instead of configured initial contact points the [Akka gRPC Service Discovery](https://doc.akka.io/docs/akka-grpc/current/client/configuration.html#using-akka-discovery-for-endpoint-discovery) can be used.
+Instead of configured initial contact points the [Pekko gRPC Service Discovery](https://doc.akka.io/docs/akka-grpc/current/client/configuration.html#using-akka-discovery-for-endpoint-discovery) can be used.
 
 ##### Failure detection
 
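For readers still on the deprecated API, a minimal sketch of the receptionist/client pair described above; the system names, host names and the `serviceActor`/`clientSystem` references are illustrative assumptions:

```scala
import org.apache.pekko.actor.ActorPath
import org.apache.pekko.cluster.client.{ ClusterClient, ClusterClientReceptionist, ClusterClientSettings }

// on a cluster node: make a service actor reachable through the receptionist
ClusterClientReceptionist(system).registerService(serviceActor)

// on the external client system: configure known receptionist contact points
val initialContacts = Set(
  ActorPath.fromString("pekko://ClusterSystem@host1:2552/system/receptionist"),
  ActorPath.fromString("pekko://ClusterSystem@host2:2552/system/receptionist"))

val client = clientSystem.actorOf(
  ClusterClient.props(ClusterClientSettings(clientSystem).withInitialContacts(initialContacts)),
  "client")

// at-most-once send to a registered service, preferring a routee on the contacted node
client ! ClusterClient.Send("/user/serviceActor", "hello", localAffinity = true)
```
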
diff --git a/docs/src/main/paradox/cluster-dc.md b/docs/src/main/paradox/cluster-dc.md
index 52b3e2a161..29bb217a67 100644
--- a/docs/src/main/paradox/cluster-dc.md
+++ b/docs/src/main/paradox/cluster-dc.md
@@ -1,6 +1,6 @@
 # Classic Multi-DC Cluster
 
-This chapter describes how @ref[Akka Cluster](cluster-usage.md) can be used across
+This chapter describes how @ref[Pekko Cluster](cluster-usage.md) can be used across
 multiple data centers, availability zones or regions.
 
 For the full documentation of this feature and for new projects see @ref:[Multi-DC Cluster](typed/cluster-dc.md).
diff --git a/docs/src/main/paradox/cluster-routing.md b/docs/src/main/paradox/cluster-routing.md
index 8b2a750a57..cbff8fa4af 100644
--- a/docs/src/main/paradox/cluster-routing.md
+++ b/docs/src/main/paradox/cluster-routing.md
@@ -32,11 +32,11 @@ on other nodes in the cluster.
 To use Cluster aware routers, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-cluster_$scala.binary.version$"
+  artifact="pekko-cluster_$scala.binary.version$"
   version=PekkoVersion
 }
 
@@ -83,7 +83,7 @@ Scala
 Java
 :  @@snip [StatsService.java](/docs/src/test/java/jdocs/cluster/StatsService.java) { #router-lookup-in-code }
 
-See @ref:[reference configuration](general/configuration-reference.md#config-akka-cluster) for further descriptions of the settings.
+See @ref:[reference configuration](general/configuration-reference.md#config-pekko-cluster) for further descriptions of the settings.
 
 ### Router Example with Group of Routees
 
@@ -184,7 +184,7 @@ Scala
 Java
 :  @@snip [StatsService.java](/docs/src/test/java/jdocs/cluster/StatsService.java) { #router-deploy-in-code }
 
-See @ref:[reference configuration](general/configuration-reference.md#config-akka-cluster) for further descriptions of the settings.
+See @ref:[reference configuration](general/configuration-reference.md#config-pekko-cluster) for further descriptions of the settings.
 
 When using a pool of remote deployed routees you must ensure that all parameters of the `Props` can
 be @ref:[serialized](serialization.md).
@@ -246,5 +246,5 @@ pekko.actor.deployment {
 }
 ```
 The easiest way to run **Router Example with Pool of Routees** example yourself is to try the
-@scala[@extref[Akka Cluster Sample with Scala](samples:akka-samples-cluster-scala)]@java[@extref[Akka Cluster Sample with Java](samples:akka-samples-cluster-java)].
+@scala[@extref[Pekko Cluster Sample with Scala](samples:pekko-samples-cluster-scala)]@java[@extref[Pekko Cluster Sample with Java](samples:pekko-samples-cluster-java)].
 It contains instructions on how to run the **Router Example with Pool of Routees** sample.
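
The same kind of cluster-aware pool can also be defined in code; a minimal sketch, where the `StatsWorker` actor and the `compute` role are assumptions carried over from the surrounding example:

```scala
import org.apache.pekko.actor.Props
import org.apache.pekko.cluster.routing.{ ClusterRouterPool, ClusterRouterPoolSettings }
import org.apache.pekko.routing.ConsistentHashingPool

// cluster-aware pool router, equivalent in spirit to the deployment configuration above
val workerRouter = system.actorOf(
  ClusterRouterPool(
    ConsistentHashingPool(0), // instance count is governed by the cluster settings below
    ClusterRouterPoolSettings(
      totalInstances = 100,
      maxInstancesPerNode = 3,
      allowLocalRoutees = false,
      useRoles = Set("compute"))).props(Props[StatsWorker]()),
  name = "workerRouter")
```
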
diff --git a/docs/src/main/paradox/cluster-sharding.md b/docs/src/main/paradox/cluster-sharding.md
index 03f3e9c1ba..93271df6c9 100644
--- a/docs/src/main/paradox/cluster-sharding.md
+++ b/docs/src/main/paradox/cluster-sharding.md
@@ -113,8 +113,8 @@ There are two cluster sharding states managed:
  
 For these, there are currently two modes which define how these states are stored:
 
-* @ref:[Distributed Data Mode](#distributed-data-mode) - uses Akka @ref:[Distributed Data](distributed-data.md) (CRDTs) (the default)
-* @ref:[Persistence Mode](#persistence-mode) - (deprecated) uses Akka @ref:[Persistence](persistence.md) (Event Sourcing)
+* @ref:[Distributed Data Mode](#distributed-data-mode) - uses Pekko @ref:[Distributed Data](distributed-data.md) (CRDTs) (the default)
+* @ref:[Persistence Mode](#persistence-mode) - (deprecated) uses Pekko @ref:[Persistence](persistence.md) (Event Sourcing)
 
 @@include[cluster.md](includes/cluster.md) { #sharding-persistence-mode-deprecated }
  
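A minimal sketch of selecting the state-store mode programmatically, assuming an `ActorSystem` named `system` is in scope; the mode can equally be set with `pekko.cluster.sharding.state-store-mode` in configuration:

```scala
import org.apache.pekko.cluster.sharding.ClusterShardingSettings

// "ddata" (Distributed Data) is the default; "persistence" is deprecated
val settings = ClusterShardingSettings(system).withStateStoreMode("ddata")
```
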
diff --git a/docs/src/main/paradox/cluster-singleton.md b/docs/src/main/paradox/cluster-singleton.md
index d0fa1440cc..5312481d05 100644
--- a/docs/src/main/paradox/cluster-singleton.md
+++ b/docs/src/main/paradox/cluster-singleton.md
@@ -93,9 +93,6 @@ Scala
 Java
 :  @@snip [ClusterSingletonManagerTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-proxy }
 
-A more comprehensive sample is available in the tutorial named 
-@scala[[Distributed workers with Akka and Scala!](https://github.com/typesafehub/activator-akka-distributed-workers)]@java[[Distributed workers with Akka and Java!](https://github.com/typesafehub/activator-akka-distributed-workers-java)].
-
 ## Configuration
 
 For the full documentation of this feature and for new projects see @ref:[Cluster Singleton - configuration](typed/cluster-singleton.md#configuration).
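
A minimal classic sketch of the manager/proxy pair used in the snippets above, assuming an `ActorSystem` named `system`; the `Consumer` actor and the `worker` role are illustrative:

```scala
import org.apache.pekko.actor.{ PoisonPill, Props }
import org.apache.pekko.cluster.singleton.{
  ClusterSingletonManager,
  ClusterSingletonManagerSettings,
  ClusterSingletonProxy,
  ClusterSingletonProxySettings
}

// started on every node with the "worker" role; only the oldest node runs the singleton
system.actorOf(
  ClusterSingletonManager.props(
    singletonProps = Props(classOf[Consumer]),
    terminationMessage = PoisonPill,
    settings = ClusterSingletonManagerSettings(system).withRole("worker")),
  name = "consumer")

// a proxy that always routes messages to the current singleton instance
val proxy = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/consumer",
    settings = ClusterSingletonProxySettings(system).withRole("worker")),
  name = "consumerProxy")
```
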
diff --git a/docs/src/main/paradox/cluster-usage.md b/docs/src/main/paradox/cluster-usage.md
index 42d82eb6a3..aaae8aca5b 100644
--- a/docs/src/main/paradox/cluster-usage.md
+++ b/docs/src/main/paradox/cluster-usage.md
@@ -6,7 +6,7 @@ For specific documentation topics see:
 
 * @ref:[Cluster Specification](typed/cluster-concepts.md)
 * @ref:[Cluster Membership Service](typed/cluster-membership.md)
-* @ref:[When and where to use Akka Cluster](typed/choosing-cluster.md)
+* @ref:[When and where to use Pekko Cluster](typed/choosing-cluster.md)
 * @ref:[Higher level Cluster tools](#higher-level-cluster-tools)
 * @ref:[Rolling Updates](additional/rolling-updates.md)
 * @ref:[Operating, Managing, Observability](additional/operations.md)
@@ -21,7 +21,7 @@ recommendation if you don't have other preferences or constraints.
 
 ## Module info
 
-To use Akka Cluster add the following dependency in your project:
+To use Pekko Cluster add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
@@ -34,9 +34,9 @@ To use Akka Cluster add the following dependency in your project:
 
 @@project-info{ projectId="cluster" }
 
-## When and where to use Akka Cluster
+## When and where to use Pekko Cluster
  
-See @ref:[Choosing Akka Cluster](typed/choosing-cluster.md#when-and-where-to-use-akka-cluster) in the documentation of the new APIs.
+See @ref:[Choosing Pekko Cluster](typed/choosing-cluster.md#when-and-where-to-use-pekko-cluster) in the documentation of the new APIs.
 
 ## Cluster API Extension
 
@@ -316,7 +316,7 @@ unreachable from the rest of the cluster. Please see:
 @ref:[Multi Node Testing](multi-node-testing.md) is useful for testing cluster applications.
 
 Set up your project according to the instructions in @ref:[Multi Node Testing](multi-node-testing.md) and @ref:[Multi JVM Testing](multi-jvm-testing.md), i.e.
-add the `sbt-multi-jvm` plugin and the dependency to `akka-multi-node-testkit`.
+add the `sbt-multi-jvm` plugin and the dependency on `pekko-multi-node-testkit`.
 
 First, as described in @ref:[Multi Node Testing](multi-node-testing.md), we need some scaffolding to configure the @scaladoc[MultiNodeSpec](pekko.remote.testkit.MultiNodeSpec).
 Define the participating @ref:[roles](typed/cluster.md#node-roles) and their @ref:[configuration](#configuration) in an object extending @scaladoc[MultiNodeConfig](pekko.remote.testkit.MultiNodeConfig):
@@ -388,12 +388,12 @@ or similar instead.
 
 @@@
 
-The cluster can be managed with the script `akka-cluster` provided in the Akka GitHub repository @extref[here](github:akka-cluster/jmx-client). Place the script and the `jmxsh-R5.jar` library in the same directory.
+The cluster can be managed with the script `pekko-cluster` provided in the Pekko GitHub repository @extref[here](github:cluster/jmx-client). Place the script and the `jmxsh-R5.jar` library in the same directory.
 
 Run it without parameters to see instructions about how to use the script:
 
 ```
-Usage: ./akka-cluster <node-hostname> <jmx-port> <command> ...
+Usage: ./pekko-cluster <node-hostname> <jmx-port> <command> ...
 
 Supported commands are:
            join <node-url> - Sends request a JOIN node with the specified URL
@@ -409,11 +409,11 @@ Supported commands are:
                              node cluster)
               is-available - Checks if the member node is available
 Where the <node-url> should be on the format of
-  'akka.<protocol>://<actor-system-name>@<hostname>:<port>'
+  'pekko.<protocol>://<actor-system-name>@<hostname>:<port>'
 
-Examples: ./akka-cluster localhost 9999 is-available
-          ./akka-cluster localhost 9999 join akka://MySystem@darkstar:2552
-          ./akka-cluster localhost 9999 cluster-status
+Examples: ./pekko-cluster localhost 9999 is-available
+          ./pekko-cluster localhost 9999 join pekko://MySystem@darkstar:2552
+          ./pekko-cluster localhost 9999 cluster-status
 ```
 
 To be able to use the script you must enable remote monitoring and management when starting the JVMs of the cluster nodes,
@@ -425,4 +425,4 @@ Make sure you understand the security implications of enabling remote monitoring
 ## Configuration
 
 There are several @ref:[configuration](typed/cluster.md#configuration) properties for the cluster,
-and the full @ref:[reference configuration](general/configuration-reference.md#config-akka-cluster) for complete information. 
+and the full @ref:[reference configuration](general/configuration-reference.md#config-pekko-cluster) for complete information. 
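
The JMX `join` command shown above also has a programmatic counterpart; a minimal sketch, where the system and host names are illustrative:

```scala
import org.apache.pekko.actor.{ ActorSystem, Address }
import org.apache.pekko.cluster.Cluster

val system = ActorSystem("MySystem")

// programmatic equivalent of: ./pekko-cluster localhost 9999 join pekko://MySystem@darkstar:2552
Cluster(system).join(Address("pekko", "MySystem", "darkstar", 2552))
```
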
diff --git a/docs/src/main/paradox/common/binary-compatibility-rules.md b/docs/src/main/paradox/common/binary-compatibility-rules.md
index c6190b1d0f..94a9bdcce1 100644
--- a/docs/src/main/paradox/common/binary-compatibility-rules.md
+++ b/docs/src/main/paradox/common/binary-compatibility-rules.md
@@ -1,9 +1,9 @@
 ---
-project.description: Binary compatibility across Akka versions.
+project.description: Binary compatibility across Pekko versions.
 ---
 # Binary Compatibility Rules
 
-Akka maintains and verifies *backwards binary compatibility* across versions of modules.
+Pekko maintains and verifies *backwards binary compatibility* across versions of modules.
 
 In the rest of this document whenever *binary compatibility* is mentioned "*backwards binary compatibility*" is meant
 (as opposed to forward compatibility).
@@ -11,14 +11,14 @@ In the rest of this document whenever *binary compatibility* is mentioned "*back
 This means that the new JARs are a drop-in replacement for the old one 
 (but not the other way around) as long as your build does not enable the inliner (Scala-only restriction).
 
-Because of this approach applications can upgrade to the latest version of Akka
+Because of this approach applications can upgrade to the latest version of Pekko
 even when @ref[intermediate satellite projects are not yet upgraded](../project/downstream-upgrade-strategy.md)
 
 ## Binary compatibility rules explained
 
 Binary compatibility is maintained between:
 
- * **minor** and **patch** versions - please note that the meaning of "minor" has shifted to be more restrictive with Akka `2.4.0`, read @ref:[Change in versioning scheme](#24versioningchange) for details.
+ * **minor** and **patch** versions - please note that the meaning of "minor" has shifted to be more restrictive with Pekko `2.4.0`, read @ref:[Change in versioning scheme](#24versioningchange) for details.
 
 Binary compatibility is **NOT** maintained between:
 
@@ -46,7 +46,7 @@ OK:  3.1.n --> 3.2.0 ...
 
 ### Cases where binary compatibility is not retained
 
-If a security vulnerability is reported in Akka or a transient dependency of Akka and it cannot be solved without breaking binary compatibility then fixing the security issue is more important. In such cases binary compatibility might not be retained when releasing a minor version. Such exception is always noted in the release announcement.
+If a security vulnerability is reported in Pekko or a transitive dependency of Pekko and it cannot be solved without breaking binary compatibility then fixing the security issue is more important. In such cases binary compatibility might not be retained when releasing a minor version. Such an exception is always noted in the release announcement.
 
 We do not guarantee binary compatibility with versions that are EOL, though in
 practice this does not make a big difference: only in rare cases would a change
@@ -67,29 +67,29 @@ Once a method has been deprecated then the guideline* is that it will be kept, a
 <a id="24versioningchange"></a>
 ## Change in versioning scheme, stronger compatibility since 2.4
 
-Since the release of Akka `2.4.0` a new versioning scheme is in effect.
+Since the release of Pekko `2.4.0` a new versioning scheme is in effect.
 
-Historically, Akka has been following the Java or Scala style of versioning in which the first number would mean "**epoch**",
+Historically, Pekko has been following the Java or Scala style of versioning in which the first number would mean "**epoch**",
 the second one would mean **major**, and third be the **minor**, thus: `epoch.major.minor` (versioning scheme followed until and during `2.3.x`).
 
-**Currently**, since Akka `2.4.0`, the new versioning applies which is closer to semantic versioning many have come to expect, 
-in which the version number is deciphered as `major.minor.patch`. This also means that Akka `2.5.x` is binary compatible with the `2.4` series releases (with the exception of "may change" APIs).
+**Currently**, since Pekko `2.4.0`, the new versioning scheme applies, which is closer to the semantic versioning many have come to expect,
+in which the version number is deciphered as `major.minor.patch`. This also means that Pekko `2.5.x` is binary compatible with the `2.4` series releases (with the exception of "may change" APIs).
 
-In addition to that, Akka `2.4.x` has been made binary compatible with the `2.3.x` series,
-so there is no reason to remain on Akka 2.3.x, since upgrading is completely compatible 
+In addition to that, Pekko `2.4.x` has been made binary compatible with the `2.3.x` series,
+so there is no reason to remain on Pekko 2.3.x, since upgrading is completely compatible 
 (and many issues have been fixed ever since).
 
 ## Mixed versioning is not allowed
 
-Modules that are released together under the Akka project are intended to be upgraded together.
-For example, it is not legal to mix Akka Actor `2.6.2` with Akka Cluster `2.6.5` even though
-"Akka `2.6.2`" and "Akka `2.6.5`" *are* binary compatible. 
+Modules that are released together under the Pekko project are intended to be upgraded together.
+For example, it is not legal to mix Pekko Actor `2.6.2` with Pekko Cluster `2.6.5` even though
+"Pekko `2.6.2`" and "Pekko `2.6.5`" *are* binary compatible. 
 
 This is because modules may assume internals changes across module boundaries, for example some feature
 in Clustering may have required an internals change in Actor, however it is not public API, 
 thus such change is considered safe.
 
-If you accidentally mix Akka versions, for example through transitive
+If you accidentally mix Pekko versions, for example through transitive
 dependencies, you might get a warning at run time such as:
 
 ```
@@ -100,21 +100,21 @@ artifacts: (2.5.3, [pekko-persistence-query]), (2.6.6, [pekko-actor, pekko-clust
 See also: https://doc.akka.io/docs/akka/current/common/binary-compatibility-rules.html#mixed-versioning-is-not-allowed
 ```
 
-The fix is typically to pick the highest Akka version, and add explicit
+The fix is typically to pick the highest Pekko version, and add explicit
 dependencies to your project as needed. For example, in the example above
-you might want to add `akka-persistence-query` dependency for 2.6.6.
+you might want to add the `pekko-persistence-query` dependency for 2.6.6.
 
 @@@ note
 
-We recommend keeping an `akkaVersion` variable in your build file, and re-use it for all
+We recommend keeping a `pekkoVersion` variable in your build file, and re-using it for all
 included modules, so when you upgrade you can simply change it in this one place.
 
 @@@
 
-The warning includes a full list of Akka runtime dependencies in the classpath, and the version detected. 
-You can use that information to include an explicit list of Akka artifacts you depend on into your build. If you use
-Maven or Gradle, you can include the @ref:[Akka Maven BOM](../typed/guide/modules.md#actor-library) (bill 
-of materials) to help you keep all the versions of your Akka dependencies in sync. 
+The warning includes a full list of Pekko runtime dependencies in the classpath, and the version detected. 
+You can use that information to include an explicit list of Pekko artifacts you depend on into your build. If you use
+Maven or Gradle, you can include the @ref:[Pekko Maven BOM](../typed/guide/modules.md#actor-library) (bill 
+of materials) to help you keep all the versions of your Pekko dependencies in sync. 
 
 
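In sbt, the recommendation above amounts to something like the following sketch; the version value is a placeholder and the set of modules is whatever your project actually uses:

```scala
// build.sbt
val PekkoVersion = "<your Pekko version>" // single source of truth for all Pekko modules

libraryDependencies ++= Seq(
  "org.apache.pekko" %% "pekko-actor"             % PekkoVersion,
  "org.apache.pekko" %% "pekko-cluster"           % PekkoVersion,
  "org.apache.pekko" %% "pekko-persistence-query" % PekkoVersion)
```
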
 ## The meaning of "may change"
@@ -125,7 +125,7 @@ Read more in @ref:[Modules marked "May Change"](may-change.md).
 
 ## API stability annotations and comments
 
-Akka gives a very strong binary compatibility promise to end-users. However some parts of Akka are excluded 
+Pekko gives a very strong binary compatibility promise to end-users. However some parts of Pekko are excluded 
 from these rules, for example internal or known evolving APIs may be marked as such and shipped as part of 
 an overall stable module. As general rule any breakage is avoided and handled via deprecation and method addition,
 however certain APIs which are known to not yet be fully frozen (or are fully internal) are marked as such and subject 
@@ -140,7 +140,7 @@ the `/** INTERNAL API */` comment or the @javadoc[@InternalApi](pekko.annotation
 No compatibility guarantees are given about these classes. They may change or even disappear in minor versions,
 and user code is not supposed to call them.
 
-Side-note on JVM representation details of the Scala `private[pekko]` pattern that Akka is using extensively in 
+Side-note on JVM representation details of the Scala `private[pekko]` pattern that Pekko is using extensively in 
 its internals: Such methods or classes, which act as "accessible only from the given package" in Scala, are compiled
 down to `public` (!) in raw Java bytecode. The access restriction, that Scala understands is carried along
 as metadata stored in the classfile. Thus, such methods are safely guarded from being accessed from Scala,
@@ -149,21 +149,21 @@ into Internal APIs, as they are subject to change without any warning.
 
 ### The `@DoNotInherit` and `@ApiMayChange` markers
 
-In addition to the special internal API marker two annotations exist in Akka and specifically address the following use cases:
+In addition to the special internal API marker two annotations exist in Pekko and specifically address the following use cases:
 
  * @javadoc[@ApiMayChange](pekko.annotation.ApiMayChange) – which marks APIs which are known to be not fully stable yet. Read more in @ref:[Modules marked "May Change"](may-change.md)
  * @javadoc[@DoNotInherit](pekko.annotation.DoNotInherit) – which marks APIs that are designed under a closed-world assumption, and thus must not be
-extended outside Akka itself (or such code will risk facing binary incompatibilities). E.g. an interface may be
+extended outside Pekko itself (or such code will risk facing binary incompatibilities). E.g. an interface may be
 marked using this annotation, and while the type is public, it is not meant for extension by user-code. This allows
 adding new methods to these interfaces without risking to break client code. Examples of such API are the @scaladoc[FlowOps](pekko.stream.scaladsl.FlowOps)
-trait or the Akka HTTP domain model.
+trait or the Pekko HTTP domain model.
 
 Please note that a best-effort approach is always taken when having to change APIs and breakage is avoided as much as 
 possible, however these markers allow us to experiment, gather feedback and stabilize the best possible APIs we could build.
 
 ## Binary Compatibility Checking Toolchain
 
-Akka uses the Lightbend maintained [MiMa](https://github.com/lightbend/mima),
+Pekko uses the Lightbend maintained [MiMa](https://github.com/lightbend/mima),
 for enforcing that binary compatibility is kept where it was promised.
 
 All Pull Requests must pass MiMa validation (which happens automatically), and if failures are detected,
diff --git a/docs/src/main/paradox/common/circuitbreaker.md b/docs/src/main/paradox/common/circuitbreaker.md
index b295fc8ba9..678ad8d477 100644
--- a/docs/src/main/paradox/common/circuitbreaker.md
+++ b/docs/src/main/paradox/common/circuitbreaker.md
@@ -24,7 +24,7 @@ resource exhaustion.  Circuit breakers can also allow savvy developers to mark p
 the site that use the functionality unavailable, or perhaps show some cached content as 
 appropriate while the breaker is open.
 
-The Akka library provides an implementation of a circuit breaker called 
+The Pekko library provides an implementation of a circuit breaker called 
 @apidoc[CircuitBreaker] which has the behavior described below.
 
 ## What do they do?
@@ -105,7 +105,7 @@ By default, the circuit breaker treats @javadoc[Exception](java.lang.Exception)
 On failure, the failure count will increment. If the failure count reaches the *maxFailures*, the circuit breaker will be opened.
 However, some applications may require certain exceptions to not increase the failure count.
 In other cases one may want to increase the failure count even if the call succeeded.
-Akka circuit breaker provides a way to achieve such use cases: @scala[@scaladoc[withCircuitBreaker](pekko.pattern.CircuitBreaker#withCircuitBreaker[T](body:=%3Escala.concurrent.Future[T],defineFailureFn:scala.util.Try[T]=%3EBoolean):scala.concurrent.Future[T]) and @scaladoc[withSyncCircuitBreaker](pekko.pattern.CircuitBreaker#withSyncCircuitBreaker[T](body:=%3ET,defineFailureFn:scala.util.Try[T]=%3EBoolean):T)]@java[@javadoc[callWithCircuitBreaker](pekko.pattern.CircuitBreaker#callWithCi [...]
+Pekko circuit breaker provides a way to achieve such use cases: @scala[@scaladoc[withCircuitBreaker](pekko.pattern.CircuitBreaker#withCircuitBreaker[T](body:=%3Escala.concurrent.Future[T],defineFailureFn:scala.util.Try[T]=%3EBoolean):scala.concurrent.Future[T]) and @scaladoc[withSyncCircuitBreaker](pekko.pattern.CircuitBreaker#withSyncCircuitBreaker[T](body:=%3ET,defineFailureFn:scala.util.Try[T]=%3EBoolean):T)]@java[@javadoc[callWithCircuitBreaker](pekko.pattern.CircuitBreaker#callWithC [...]
 
 All methods above accept an argument `defineFailureFn`
 
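A minimal sketch of a breaker with a custom `defineFailureFn`, assuming an `ActorSystem` named `system` is in scope; the guarded call and the failure criterion are illustrative:

```scala
import scala.concurrent.{ ExecutionContext, Future }
import scala.concurrent.duration._
import scala.util.{ Failure, Success, Try }
import org.apache.pekko.pattern.CircuitBreaker

implicit val ec: ExecutionContext = system.dispatcher

val breaker = new CircuitBreaker(
  system.scheduler,
  maxFailures = 5,
  callTimeout = 10.seconds,
  resetTimeout = 1.minute)

// count a "successful" reply carrying an error marker as a failure as well
val defineFailureFn: Try[String] => Boolean = {
  case Success(body) => body.contains("error")
  case Failure(_)    => true
}

def callRemoteService(): Future[String] = Future("ok") // stand-in for a real remote call

val guarded: Future[String] =
  breaker.withCircuitBreaker(callRemoteService(), defineFailureFn)
```
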
diff --git a/docs/src/main/paradox/common/io-layer.md b/docs/src/main/paradox/common/io-layer.md
index 8e99f210ed..41113a1745 100644
--- a/docs/src/main/paradox/common/io-layer.md
+++ b/docs/src/main/paradox/common/io-layer.md
@@ -1,6 +1,6 @@
 # I/O Layer Design
 
-The `org.apache.pekko.io` package has been developed in collaboration between the Akka
+The `org.apache.pekko.io` package has been developed in collaboration between the Pekko
 and [spray.io](http://spray.io) teams. Its design incorporates the experiences with the
 `spray-io` module along with improvements that were jointly developed for
 more general consumption as an actor-based service.
@@ -8,7 +8,7 @@ more general consumption as an actor-based service.
 ## Requirements
 
 In order to form a general and extensible IO layer basis for a wide range of
-applications, with Akka remoting and spray HTTP being the initial ones, the
+applications, with Pekko remoting and spray HTTP being the initial ones, the
 following requirements were established as key drivers for the design:
 
  * scalability to millions of concurrent connections
@@ -25,7 +25,7 @@ instead allow completely protocol-specific user-level APIs.
 
 ## Basic Architecture
 
-Each transport implementation will be made available as a separate Akka
+Each transport implementation will be made available as a separate Pekko
 extension, offering an @apidoc[actor.ActorRef] representing the initial point of
 contact for client code. This "manager" accepts requests for establishing a
 communications channel (e.g. connect or listen on a TCP socket). Each
@@ -71,7 +71,7 @@ Staying within the actor model for the whole implementation allows us to remove
 the need for explicit thread handling logic, and it also means that there are
 no locks involved (besides those which are part of the underlying transport
 library). Writing only actor code results in a cleaner implementation,
-while Akka’s efficient actor messaging does not impose a high tax for this
+while Pekko’s efficient actor messaging does not impose a high tax for this
 benefit. In fact the event-based nature of I/O maps so well to the actor model
 that we expect clear performance and especially scalability benefits over
 traditional solutions with explicit thread management and synchronization.
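
The manager pattern described above looks like this in user code; a minimal TCP sketch, where the address and port are placeholders:

```scala
import java.net.InetSocketAddress
import org.apache.pekko.actor.{ Actor, Props }
import org.apache.pekko.io.{ IO, Tcp }

class Server extends Actor {
  import context.system

  // the extension's manager actor accepts the Bind command
  IO(Tcp) ! Tcp.Bind(self, new InetSocketAddress("localhost", 8080))

  def receive = {
    case _: Tcp.Bound => () // listening; incoming connections arrive as Connected
    case Tcp.Connected(_, _) =>
      val handler = context.actorOf(Props.empty) // placeholder connection handler
      sender() ! Tcp.Register(handler)
  }
}
```
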
diff --git a/docs/src/main/paradox/common/may-change.md b/docs/src/main/paradox/common/may-change.md
index e9dbddb871..d37fe0f42f 100644
--- a/docs/src/main/paradox/common/may-change.md
+++ b/docs/src/main/paradox/common/may-change.md
@@ -6,10 +6,9 @@ the term **may change**.
 
 Concretely **may change** means that an API or module is in early access mode and that it:
 
- * is not covered by Lightbend's commercial support (unless specifically stated otherwise)
  * is not guaranteed to be binary compatible in minor releases
  * may have its API change in breaking ways in minor releases
- * may be entirely dropped from Akka in a minor release
+ * may be entirely dropped from Pekko in a minor release
 
 Complete modules can be marked as **may change**, which will be stated in the module's description and in the docs.
 
@@ -17,8 +16,7 @@ Individual public APIs can be annotated with @javadoc[ApiMayChange](pekko.annota
 guarantees than the rest of the module it lives in. For example, while introducing "new" Java 8 APIs into
 existing stable modules, these APIs may be marked with this annotation to signal that they are not frozen yet.
 Please use such methods and classes with care, however if you see such APIs that is the best point in time to try them
-out and provide feedback (e.g. using the akka-user mailing list, GitHub issues or Gitter) before they are frozen as
-fully stable API.
+out and provide feedback before they are frozen as fully stable API.
 
 Best effort migration guides may be provided, but this is decided on a case-by-case basis for **may change** modules.
 
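For illustration, marking a hypothetical API as not yet frozen looks like this:

```scala
import org.apache.pekko.annotation.ApiMayChange

@ApiMayChange
final class ExperimentalThrottle {
  def adjustRate(factor: Double): Unit = ??? // illustrative only
}
```
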
diff --git a/docs/src/main/paradox/common/other-modules.md b/docs/src/main/paradox/common/other-modules.md
index 3d006f386e..ccbdf81522 100644
--- a/docs/src/main/paradox/common/other-modules.md
+++ b/docs/src/main/paradox/common/other-modules.md
@@ -1,80 +1,62 @@
-# Other Akka modules
+# Other Pekko modules
 
-This page describes modules that compliment libraries from the Akka core.  See [this overview](https://doc.akka.io/docs/akka/current/typed/guide/modules.html) instead for a guide on the core modules.
+This page describes modules that complement libraries from the Pekko core. See [this overview](https://doc.akka.io/docs/akka/current/typed/guide/modules.html) instead for a guide on the core modules.
 
-## [Akka HTTP](https://doc.akka.io/docs/akka-http/current/)
+## [Pekko HTTP](https://doc.akka.io/docs/akka-http/current/)
 
-A full server- and client-side HTTP stack on top of akka-actor and akka-stream.
+A full server- and client-side HTTP stack on top of pekko-actor and pekko-stream.
 
-## [Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/)
+## [Pekko gRPC](https://doc.akka.io/docs/akka-grpc/current/)
 
-Akka gRPC provides support for building streaming gRPC servers and clients on top of Akka Streams.
+Pekko gRPC provides support for building streaming gRPC servers and clients on top of Pekko Streams.
 
-## [Alpakka](https://doc.akka.io/docs/alpakka/current/)
+## [Pekko Connectors](https://doc.akka.io/docs/alpakka/current/)
 
-Alpakka is a Reactive Enterprise Integration library for Java and Scala, based on Reactive Streams and Akka.
+Pekko Connectors is a Reactive Enterprise Integration library for Java and Scala, based on Reactive Streams and Pekko.
 
-## [Alpakka Kafka Connector](https://doc.akka.io/docs/alpakka-kafka/current/)
+## [Pekko Kafka Connector](https://doc.akka.io/docs/alpakka-kafka/current/)
 
-The Alpakka Kafka Connector connects Apache Kafka with Akka Streams.
+The Pekko Kafka Connector connects Apache Kafka with Pekko Streams.
 
 
-## [Akka Projections](https://doc.akka.io/docs/akka-projection/current/)
+## [Pekko Projections](https://doc.akka.io/docs/akka-projection/current/)
 
-Akka Projections let you process a stream of events or records from a source to a projected model or external system.
+Pekko Projections let you process a stream of events or records from a source to a projected model or external system.
 
 
-## [Cassandra Plugin for Akka Persistence](https://doc.akka.io/docs/akka-persistence-cassandra/current/)
+## [Cassandra Plugin for Pekko Persistence](https://doc.akka.io/docs/akka-persistence-cassandra/current/)
 
-An Akka Persistence journal and snapshot store backed by Apache Cassandra.
+A Pekko Persistence journal and snapshot store backed by Apache Cassandra.
 
 
-## [JDBC Plugin for Akka Persistence](https://doc.akka.io/docs/akka-persistence-jdbc/current/)
+## [JDBC Plugin for Pekko Persistence](https://doc.akka.io/docs/akka-persistence-jdbc/current/)
 
-An Akka Persistence journal and snapshot store for use with JDBC-compatible databases. This implementation relies on [Slick](https://scala-slick.org/).
+A Pekko Persistence journal and snapshot store for use with JDBC-compatible databases. This implementation relies on [Slick](https://scala-slick.org/).
 
-## [R2DBC Plugin for Akka Persistence](https://doc.akka.io/docs/akka-persistence-r2dbc/current/)
+## [R2DBC Plugin for Pekko Persistence](https://doc.akka.io/docs/akka-persistence-r2dbc/current/)
 
-An Akka Persistence journal and snapshot store for use with R2DBC-compatible databases. This implementation relies on [R2DBC](https://r2dbc.io/).
+A Pekko Persistence journal and snapshot store for use with R2DBC-compatible databases. This implementation relies on [R2DBC](https://r2dbc.io/).
 
-## [Google Cloud Spanner Plugin for Akka Persistence](https://doc.akka.io/docs/akka-persistence-spanner/current/)
+## [Google Cloud Spanner Plugin for Pekko Persistence](https://doc.akka.io/docs/akka-persistence-spanner/current/)
 
-Use [Google Cloud Spanner](https://cloud.google.com/spanner/) as Akka Persistence journal and snapshot store. This integration relies on [Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/).
+Use [Google Cloud Spanner](https://cloud.google.com/spanner/) as Pekko Persistence journal and snapshot store. This integration relies on [Pekko gRPC](https://doc.akka.io/docs/akka-grpc/current/).
 
 
-## Akka Management
+## Pekko Management
 
-* [Akka Management](https://doc.akka.io/docs/akka-management/current/) provides a central HTTP endpoint for Akka management extensions.
-* [Akka Cluster Bootstrap](https://doc.akka.io/docs/akka-management/current/bootstrap/) helps bootstrapping an Akka cluster using Akka Discovery.
-* [Akka Management Cluster HTTP](https://doc.akka.io/docs/akka-management/current/cluster-http-management.html) provides HTTP endpoints for introspecting and managing Akka clusters.
-* [Akka Discovery for Kubernetes, Consul, Marathon, and AWS](https://doc.akka.io/docs/akka-management/current/discovery/)
+* [Pekko Management](https://doc.akka.io/docs/akka-management/current/) provides a central HTTP endpoint for Pekko management extensions.
+* [Pekko Cluster Bootstrap](https://doc.akka.io/docs/akka-management/current/bootstrap/) helps bootstrapping a Pekko cluster using Pekko Discovery.
+* [Pekko Management Cluster HTTP](https://doc.akka.io/docs/akka-management/current/cluster-http-management.html) provides HTTP endpoints for introspecting and managing Pekko clusters.
+* [Pekko Discovery for Kubernetes, Consul, Marathon, and AWS](https://doc.akka.io/docs/akka-management/current/discovery/)
 * [Kubernetes Lease](https://doc.akka.io/docs/akka-management/current/kubernetes-lease.html)
 
-## Akka Resilience Enhancements
+## Pekko Resilience Enhancements
 
-* [Akka Thread Starvation Detector](https://doc.akka.io/docs/akka-enhancements/current/starvation-detector.html)
-* [Akka Configuration Checker](https://doc.akka.io/docs/akka-enhancements/current/config-checker.html)
-* [Akka Diagnostics Recorder](https://doc.akka.io/docs/akka-enhancements/current/diagnostics-recorder.html)
+* [Pekko Thread Starvation Detector](https://doc.akka.io/docs/akka-enhancements/current/starvation-detector.html)
+* [Pekko Configuration Checker](https://doc.akka.io/docs/akka-enhancements/current/config-checker.html)
+* [Pekko Diagnostics Recorder](https://doc.akka.io/docs/akka-enhancements/current/diagnostics-recorder.html)
 
-## Akka Persistence Enhancements
+## Pekko Persistence Enhancements
 
-* [Akka GDPR for Persistence](https://doc.akka.io/docs/akka-enhancements/current/gdpr/index.html)
+* [Pekko GDPR for Persistence](https://doc.akka.io/docs/akka-enhancements/current/gdpr/index.html)
 
-## Community Projects
-
-Akka has a vibrant and passionate user community, the members of which have created many independent projects using Akka as well as extensions to it. See [Community Projects](https://akka.io/community/).
-
-## Related Projects Sponsored by Lightbend
-
-### [Play Framework](https://www.playframework.com)
-
-Play Framework provides a complete framework to build modern web applications, including tools for front end pipeline integration,
-a HTML template language etc. It is built on top of Akka HTTP, and integrates well with Akka and Actors.
-
-### [Lagom](https://www.lagomframework.com)
-
-Lagom is a microservice framework which strives to be opinionated and encode best practices for building microservice systems with Akka and Play.
-
-### [Lightbend Telemetry](https://developer.lightbend.com/docs/telemetry/current/home.html)
-
-Distributed tracing, metrics and monitoring for Akka Actors, Cluster, HTTP and more.
diff --git a/docs/src/main/paradox/coordinated-shutdown.md b/docs/src/main/paradox/coordinated-shutdown.md
index 0ad94401d6..f3658833ea 100644
--- a/docs/src/main/paradox/coordinated-shutdown.md
+++ b/docs/src/main/paradox/coordinated-shutdown.md
@@ -41,7 +41,7 @@ is only used for debugging/logging.
 Tasks added to the same phase are executed in parallel without any ordering assumptions.
 The next phase will not start until all tasks of the previous phase have been completed.
 
-If tasks are not completed within a configured timeout (see @ref:[reference.conf](general/configuration-reference.md#config-akka-actor))
+If tasks are not completed within a configured timeout (see @ref:[reference.conf](general/configuration-reference.md#config-pekko-actor))
 the next phase will be started anyway. It is possible to configure `recover=off` for a phase
 to abort the rest of the shutdown process if a task fails or is not completed within the timeout.
 
@@ -88,10 +88,10 @@ pekko.coordinated-shutdown.exit-jvm = on
 
 The coordinated shutdown process is also started once the actor system's root actor is stopped.
 
-When using @ref:[Akka Cluster](cluster-usage.md) the `CoordinatedShutdown` will automatically run
+When using @ref:[Pekko Cluster](cluster-usage.md) the `CoordinatedShutdown` will automatically run
 when the cluster node sees itself as `Exiting`, i.e. leaving from another node will trigger
 the shutdown process on the leaving node. Tasks for graceful leaving of cluster including graceful
-shutdown of Cluster Singletons and Cluster Sharding are added automatically when Akka Cluster is used,
+shutdown of Cluster Singletons and Cluster Sharding are added automatically when Pekko Cluster is used,
 i.e. running the shutdown process will also trigger the graceful leaving if it's not already in progress.
 
 By default, the `CoordinatedShutdown` will be run when the JVM process exits, e.g.
@@ -102,8 +102,8 @@ pekko.coordinated-shutdown.run-by-jvm-shutdown-hook=off
 ```
 
 If you have application specific JVM shutdown hooks it's recommended that you register them via the
-`CoordinatedShutdown` so that they are running before Akka internal shutdown hooks, e.g.
-those shutting down Akka Remoting (Artery).
+`CoordinatedShutdown` so that they are running before Pekko internal shutdown hooks, e.g.
+those shutting down Pekko Remoting (Artery).
 
 Scala
 :  @@snip [snip](/docs/src/test/scala/docs/actor/typed/CoordinatedActorShutdownSpec.scala) { #coordinated-shutdown-jvm-hook }
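
A minimal sketch of both registration styles discussed above, assuming an `ActorSystem` named `system`; the phase choice and task body are illustrative:

```scala
import scala.concurrent.{ ExecutionContext, Future }
import org.apache.pekko.Done
import org.apache.pekko.actor.CoordinatedShutdown

implicit val ec: ExecutionContext = system.dispatcher

// a task that runs in one of the predefined phases
CoordinatedShutdown(system).addTask(
  CoordinatedShutdown.PhaseBeforeServiceUnbind, "drain-connections") { () =>
  Future { Done } // stand-in for real draining logic
}

// an application JVM shutdown hook, run before Pekko's internal hooks
CoordinatedShutdown(system).addJvmShutdownHook {
  println("custom JVM shutdown hook")
}
```
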
diff --git a/docs/src/main/paradox/coordination.md b/docs/src/main/paradox/coordination.md
index 9e1c23edae..57a5657bde 100644
--- a/docs/src/main/paradox/coordination.md
+++ b/docs/src/main/paradox/coordination.md
@@ -1,14 +1,14 @@
 ---
-project.description: A distributed lock with Akka Coordination using a pluggable lease API.
+project.description: A distributed lock with Pekko Coordination using a pluggable lease API.
 ---
 # Coordination
 
-Akka Coordination is a set of tools for distributed coordination.
+Pekko Coordination is a set of tools for distributed coordination.
 
 ## Module info
 
 @@dependency[sbt,Gradle,Maven] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
@@ -74,7 +74,7 @@ The value should be greater than the max expected JVM pause e.g. garbage collect
 by another node and then when the original node becomes responsive again there will be a short time before the original lease owner 
 can take action e.g. shutdown shards or singletons.
 
-## Usages in other Akka modules
+## Usages in other Pekko modules
 
 Leases can be used for @ref[Split Brain Resolver](split-brain-resolver.md#lease), @ref[Cluster Singleton](cluster-singleton.md#lease), and @ref[Cluster Sharding](cluster-sharding.md#lease). 
 
diff --git a/docs/src/main/paradox/discovery/index.md b/docs/src/main/paradox/discovery/index.md
index d9cf44edab..1f1c2643f3 100644
--- a/docs/src/main/paradox/discovery/index.md
+++ b/docs/src/main/paradox/discovery/index.md
@@ -1,35 +1,24 @@
 ---
-project.description: Service discovery with Akka using DNS, Kubernetes, AWS, Consul or Marathon.
+project.description: Service discovery with Pekko using DNS, Kubernetes, AWS, Consul or Marathon.
 ---
 # Discovery
 
-The Akka Discovery API enables **service discovery** to be provided by different technologies. 
+The Pekko Discovery API enables **service discovery** to be provided by different technologies. 
 It allows delegating endpoint lookup so that services can be configured depending on the environment by means other than configuration files.
 
-Implementations provided by the Akka Discovery module are 
+Implementations provided by the Pekko Discovery module are 
 
 * @ref:[Configuration](#discovery-method-configuration) (HOCON)
 * @ref:[DNS](#discovery-method-dns) (SRV records)
 * @ref:[Aggregate](#discovery-method-aggregate-multiple-discovery-methods) multiple discovery methods
 
-In addition the @extref:[Akka Management](akka-management:) toolbox contains Akka Discovery implementations for
+In addition, the @extref:[Pekko Management](pekko-management:) toolbox contains Pekko Discovery implementations for
 
-* @extref:[Kubernetes API](akka-management:discovery/kubernetes.html)
-* @extref:[AWS API: EC2 Tag-Based Discovery](akka-management:discovery/aws.html#discovery-method-aws-api-ec2-tag-based-discovery)
-* @extref:[AWS API: ECS Discovery](akka-management:discovery/aws.html#discovery-method-aws-api-ecs-discovery)
-* @extref:[Consul](akka-management:discovery/consul.html)
-* @extref:[Marathon API](akka-management:discovery/marathon.html)
-
-
-@@@ note
-
-Discovery used to be part of Akka Management but has become an Akka module as of `2.5.19` of Akka and version `1.0.0`
-of Akka Management. If you're also using Akka Management for other service discovery methods or bootstrap make
-sure you are using at least version `1.0.0` of Akka Management.
-
-See @ref:[Migration hints](#migrating-from-akka-management-discovery-before-1-0-0-)
-
-@@@
+* @extref:[Kubernetes API](pekko-management:discovery/kubernetes.html)
+* @extref:[AWS API: EC2 Tag-Based Discovery](pekko-management:discovery/aws.html#discovery-method-aws-api-ec2-tag-based-discovery)
+* @extref:[AWS API: ECS Discovery](pekko-management:discovery/aws.html#discovery-method-aws-api-ecs-discovery)
+* @extref:[Consul](pekko-management:discovery/consul.html)
+* @extref:[Marathon API](pekko-management:discovery/marathon.html)
 
 ## Module info
 
@@ -72,13 +61,13 @@ Scala
 Java
 :  @@snip [CompileOnlyTest.java](/discovery/src/test/java/jdoc/org/apache/pekko/discovery/CompileOnlyTest.java) { #full }
 
-Port can be used when a service opens multiple ports e.g. a HTTP port and an Akka remoting port.
+Port can be used when a service opens multiple ports, e.g. an HTTP port and a Pekko remoting port.
 
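A minimal lookup sketch against whichever method is configured under `pekko.discovery.method`, assuming an `ActorSystem` named `system`; the service name, port name and protocol are illustrative:

```scala
import scala.concurrent.duration._
import org.apache.pekko.discovery.{ Discovery, Lookup }

val serviceDiscovery = Discovery(system).discovery

// with portName and protocol set, the DNS method issues an SRV query: _http._tcp.my-service
val resolved = serviceDiscovery.lookup(
  Lookup("my-service").withPortName("http").withProtocol("tcp"),
  resolveTimeout = 1.second)
```
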
 ## Discovery Method: DNS
 
 @@@ note { title="Async DNS" }
 
-Akka Discovery with DNS does always use the @ref[Akka-native "async-dns" implementation](../io-dns.md) (it is independent of the `pekko.io.dns.resolver` setting).
+Pekko Discovery with DNS always uses the @ref[Pekko-native "async-dns" implementation](../io-dns.md) (it is independent of the `pekko.io.dns.resolver` setting).
 
 @@@
 
@@ -87,7 +76,7 @@ DNS discovery maps `Lookup` queries as follows:
 * `serviceName`, `portName` and `protocol` set: SRV query in the form: `_port._protocol.name` Where the `_`s are added.
 * Any query  missing any of the fields is mapped to a A/AAAA query for the `serviceName`
 
-The mapping between Akka service discovery terminology and SRV terminology:
+The mapping between Pekko service discovery terminology and SRV terminology:
 
 * SRV service = port
 * SRV name = serviceName
@@ -169,7 +158,7 @@ In this case `a-double.pekko.test` would resolve to `192.168.1.21` and `192.168.
 
 Configuration currently ignores all fields apart from service name.
 
-For simple use cases configuration can be used for service discovery. The advantage of using Akka Discovery with
+For simple use cases configuration can be used for service discovery. The advantage of using Pekko Discovery with
 configuration rather than your own configuration values is that applications can be migrated to a more
 sophisticated discovery method without any code changes.
 
@@ -249,11 +238,11 @@ The above configuration will result in `pekko-dns` first being checked and if it
 targets for the given service name then `config` is queried, which is configured with one service called
 `service1` with two hosts `host1` and `host2`.
 
-## Migrating from Akka Management Discovery (before 1.0.0)
+## Migrating from Pekko Management Discovery (before 1.0.0)
 
-Akka Discovery started out as a submodule of Akka Management, before 1.0.0 of Akka Management. Akka Discovery is not compatible with those versions of Akka Management Discovery.
+Pekko Discovery started out as a submodule of Pekko Management, before 1.0.0 of Pekko Management. Pekko Discovery is not compatible with those versions of Pekko Management Discovery.
 
-At least version `1.0.0` of any Akka Management module should be used if also using Akka Discovery.
+At least version `1.0.0` of any Pekko Management module should be used if also using Pekko Discovery.
 
 Migration steps:
 
diff --git a/docs/src/main/paradox/dispatchers.md b/docs/src/main/paradox/dispatchers.md
index 823306a1ea..5e80480396 100644
--- a/docs/src/main/paradox/dispatchers.md
+++ b/docs/src/main/paradox/dispatchers.md
@@ -5,14 +5,14 @@ For the full documentation of this feature and for new projects see @ref:[Dispat
 
 ## Dependency
 
-Dispatchers are part of core Akka, which means that they are part of the akka-actor dependency:
+Dispatchers are part of core Pekko, which means that they are part of the pekko-actor dependency:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/distributed-data.md b/docs/src/main/paradox/distributed-data.md
index c7af0dcd2d..d9ae7a2f24 100644
--- a/docs/src/main/paradox/distributed-data.md
+++ b/docs/src/main/paradox/distributed-data.md
@@ -5,14 +5,14 @@ For the full documentation of this feature and for new projects see @ref:[Distri
  
 ## Dependency
 
-To use Akka Distributed Data, you must add the following dependency in your project:
+To use Pekko Distributed Data, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-distributed-data_$scala.binary.version$"
+  artifact="pekko-distributed-data_$scala.binary.version$"
   version=PekkoVersion
 }
 
@@ -240,7 +240,7 @@ Java
 
 As deleted keys continue to be included in the stored data on each node as well as in gossip
 messages, a continuous series of updates and deletes of top-level entities will result in
-growing memory usage until an ActorSystem runs out of memory. To use Akka Distributed Data
+growing memory usage until an ActorSystem runs out of memory. To use Pekko Distributed Data
 where frequent adds and removes are required, you should use a fixed number of top-level data
 types that support both updates and removals, for example @apidoc[cluster.ddata.ORMap] or @apidoc[cluster.ddata.ORSet].
 
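A minimal sketch of the recommendation above, assuming an `ActorSystem` named `system` that is part of a cluster; updates and removals happen inside one long-lived `ORSet` instead of creating and deleting top-level keys:

```scala
import org.apache.pekko.cluster.ddata.{ DistributedData, ORSet, SelfUniqueAddress }

implicit val node: SelfUniqueAddress = DistributedData(system).selfUniqueAddress

val s0 = ORSet.empty[String]
val s1 = s0 :+ "a"      // add
val s2 = s1 :+ "b"      // add
val s3 = s2.remove("a") // remove without deleting the top-level entry
```
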
@@ -248,7 +248,7 @@ types that support both updates and removals, for example @apidoc[cluster.ddata.
 
 ## Replicated data types
 
-Akka contains a set of useful replicated data types and it is fully possible to implement custom replicated data types.
+Pekko contains a set of useful replicated data types and it is fully possible to implement custom replicated data types.
 For the full documentation of this feature and for new projects see @ref:[Distributed Data Replicated data types](typed/distributed-data.md#replicated-data-types).
 
 ### Delta-CRDT
diff --git a/docs/src/main/paradox/distributed-pub-sub.md b/docs/src/main/paradox/distributed-pub-sub.md
index b7edc454e2..3cf10420ac 100644
--- a/docs/src/main/paradox/distributed-pub-sub.md
+++ b/docs/src/main/paradox/distributed-pub-sub.md
@@ -53,7 +53,7 @@ There a two different modes of message delivery, explained in the sections
 @@@ div { .group-scala }
 
 A more comprehensive sample is available in the
-tutorial named [Akka Clustered PubSub with Scala!](https://github.com/typesafehub/activator-akka-clustering).
+tutorial named [Pekko Clustered PubSub with Scala!](https://github.com/typesafehub/activator-pekko-clustering).
 
 @@@
 
@@ -233,7 +233,7 @@ pekko.extensions = ["org.apache.pekko.cluster.pubsub.DistributedPubSub"]
 
 ## Delivery Guarantee
 
-As in @ref:[Message Delivery Reliability](general/message-delivery-reliability.md) of Akka, message delivery guarantee in distributed pub sub modes is **at-most-once delivery**.
+As in @ref:[Message Delivery Reliability](general/message-delivery-reliability.md) of Pekko, message delivery guarantee in distributed pub sub modes is **at-most-once delivery**.
 In other words, messages can be lost over the wire.
 
-If you are looking for at-least-once delivery guarantee, we recommend [Alpakka Kafka](https://doc.akka.io/docs/alpakka-kafka/current/).
+If you are looking for an at-least-once delivery guarantee, we recommend the [Pekko Kafka Connector](https://doc.akka.io/docs/alpakka-kafka/current/).
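
A minimal mediator sketch for the topic mode, assuming an `ActorSystem` named `system` and an existing `subscriber` actor:

```scala
import org.apache.pekko.cluster.pubsub.{ DistributedPubSub, DistributedPubSubMediator }

val mediator = DistributedPubSub(system).mediator

// register a subscriber to the "content" topic
mediator ! DistributedPubSubMediator.Subscribe("content", subscriber)

// deliver "hello" to all subscribers of the topic, at-most-once
mediator ! DistributedPubSubMediator.Publish("content", "hello")
```
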
diff --git a/docs/src/main/paradox/durable-state/persistence-query.md b/docs/src/main/paradox/durable-state/persistence-query.md
index de610cadf0..9d2cee1611 100644
--- a/docs/src/main/paradox/durable-state/persistence-query.md
+++ b/docs/src/main/paradox/durable-state/persistence-query.md
@@ -1,5 +1,5 @@
 ---
-project.description: Query side to Akka Persistence allowing for building CQRS applications using durable state.
+project.description: Query side to Pekko Persistence allowing for building CQRS applications using durable state.
 ---
 # Persistence Query
 
@@ -8,7 +8,7 @@ project.description: Query side to Akka Persistence allowing for building CQRS a
 To use Persistence Query, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -16,24 +16,24 @@ To use Persistence Query, you must add the following dependency in your project:
   version=PekkoVersion
 }
 
-This will also add dependency on the @ref[Akka Persistence](../persistence.md) module.
+This will also add a dependency on the @ref[Pekko Persistence](../persistence.md) module.
 
 ## Introduction
 
-Akka persistence query provides a query interface to @ref:[Durable State Behaviors](../typed/durable-state/persistence.md).
+Pekko persistence query provides a query interface to @ref:[Durable State Behaviors](../typed/durable-state/persistence.md).
 These queries are based on asynchronous streams. These streams are similar to the ones offered in the @ref:[Event Sourcing](../persistence-query.md)
 based implementation. Various state store plugins can implement these interfaces to expose their query capabilities.
 
-One of the rationales behind having a separate query module for Akka Persistence is for implementing the so-called 
+One of the rationales behind having a separate query module for Pekko Persistence is for implementing the so-called 
 query side or read side in the popular CQRS architecture pattern - in which the writing side of the 
-application implemented using Akka persistence, is completely separated from the query side.
+application, implemented using Pekko persistence, is completely separated from the query side.
 
-## Using query with Akka Projections
+## Using query with Pekko Projections
 
-Akka Persistence and Akka Projections together can be used to develop a CQRS application. In the application the 
+Pekko Persistence and Pekko Projections together can be used to develop a CQRS application. In the application the 
 durable state is stored in a database and fetched as an asynchronous stream to the user. Currently queries on 
 durable state, provided by the `DurableStateStoreQuery` interface, are used to implement tag-based searches in 
-Akka Projections. 
+Pekko Projections. 
 
 At present the query is based on _tags_. So if you have not tagged your objects, this query cannot be used.
 
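A minimal sketch of consuming such a tag-based stream, assuming an `ActorSystem` named `system`; the plugin id and payload type are assumptions that depend on the state store plugin you have configured:

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.persistence.query.{ DurableStateChange, NoOffset }
import org.apache.pekko.persistence.query.scaladsl.DurableStateStoreQuery
import org.apache.pekko.persistence.state.DurableStateStoreRegistry
import org.apache.pekko.stream.scaladsl.Source

val store = DurableStateStoreRegistry(system)
  .durableStateStoreFor[DurableStateStoreQuery[String]]("my-durable-state-store") // plugin id is hypothetical

// continuous stream of changes for objects tagged with "my-tag"
val changes: Source[DurableStateChange[String], NotUsed] =
  store.changes("my-tag", NoOffset)
```
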
diff --git a/docs/src/main/paradox/event-bus.md b/docs/src/main/paradox/event-bus.md
index 8a04987e05..e174f20f66 100644
--- a/docs/src/main/paradox/event-bus.md
+++ b/docs/src/main/paradox/event-bus.md
@@ -18,7 +18,7 @@ you have to provide it inside the message.
 
 @@@
 
-This mechanism is used in different places within Akka, e.g. the @ref:[Event Stream](#event-stream).
+This mechanism is used in different places within Pekko, e.g. the @ref:[Event Stream](#event-stream).
 Implementations can make use of the specific building blocks presented below.
 
 An event bus must define the following three @scala[abstract types]@java[type parameters]:
@@ -33,9 +33,9 @@ for any concrete implementation.
 
 ## Classifiers
 
-The classifiers presented here are part of the Akka distribution, but rolling
+The classifiers presented here are part of the Pekko distribution, but rolling
 your own in case you do not find a perfect match is not difficult; check the
-implementation of the existing ones on @extref[github](github:akka-actor/src/main/scala/org/apache/pekko/event/EventBus.scala) 
+implementation of the existing ones on @extref[github](github:pekko-actor/src/main/scala/org/apache/pekko/event/EventBus.scala).
 
 ### Lookup Classification
 
@@ -195,7 +195,7 @@ Similarly to @ref:[Actor Classification](#actor-classification), @apidoc[event.E
 @@@ note
 
 The event stream is a *local facility*, meaning that it will *not* distribute events to other nodes in a clustered environment (unless you subscribe a Remote Actor to the stream explicitly).
-If you need to broadcast events in an Akka cluster, *without* knowing your recipients explicitly (i.e. obtaining their ActorRefs), you may want to look into: @ref:[Distributed Publish Subscribe in Cluster](distributed-pub-sub.md).
+If you need to broadcast events in a Pekko cluster, *without* knowing your recipients explicitly (i.e. obtaining their ActorRefs), you may want to look into: @ref:[Distributed Publish Subscribe in Cluster](distributed-pub-sub.md).
 
 @@@
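For a flavor of rolling your own, here is a minimal lookup-classified bus, closely following the classic documentation example (the envelope type and the map size are illustrative):

```scala
import org.apache.pekko.actor.ActorRef
import org.apache.pekko.event.{ EventBus, LookupClassification }

final case class MsgEnvelope(topic: String, payload: Any)

// Subscribers register interest in an exact topic string.
class LookupBusImpl extends EventBus with LookupClassification {
  type Event = MsgEnvelope
  type Classifier = String
  type Subscriber = ActorRef

  // extracts the classifier from an incoming event
  override protected def classify(event: Event): Classifier = event.topic

  // invoked for each subscriber registered for the event's classifier
  override protected def publish(event: Event, subscriber: Subscriber): Unit =
    subscriber ! event.payload

  // full ordering over subscribers; ActorRef is Comparable, so delegate
  override protected def compareSubscribers(a: Subscriber, b: Subscriber): Int =
    a.compareTo(b)

  // expected number of distinct classifiers, used to size the index
  override protected def mapSize: Int = 128
}

// usage: bus.subscribe(someActor, "greetings")
//        bus.publish(MsgEnvelope("greetings", "hello"))
```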
 
diff --git a/docs/src/main/paradox/extending-akka.md b/docs/src/main/paradox/extending-pekko.md
similarity index 78%
rename from docs/src/main/paradox/extending-akka.md
rename to docs/src/main/paradox/extending-pekko.md
index 38aad46908..8270d029f0 100644
--- a/docs/src/main/paradox/extending-akka.md
+++ b/docs/src/main/paradox/extending-pekko.md
@@ -1,18 +1,18 @@
 ---
-project.description: How to extend Akka with Akka Extensions.
+project.description: How to extend Pekko with Pekko Extensions.
 ---
-# Classic Akka Extensions
+# Classic Pekko Extensions
 
-If you want to add features to Akka, there is a very elegant, but powerful mechanism for doing so.
-It's called Akka Extensions and comprises 2 basic components: an @apidoc[Extension](actor.Extension) and an @apidoc[ExtensionId](actor.ExtensionId).
+If you want to add features to Pekko, there is an elegant and powerful mechanism for doing so.
+It's called Pekko Extensions and comprises two basic components: an @apidoc[Extension](actor.Extension) and an @apidoc[ExtensionId](actor.ExtensionId).
 
-Extensions will only be loaded once per @apidoc[ActorSystem](actor.ActorSystem), which will be managed by Akka.
-You can choose to have your Extension loaded on-demand or at @apidoc[ActorSystem](actor.ActorSystem) creation time through the Akka configuration.
-Details on how to make that happens are below, in the @ref:[Loading from Configuration](extending-akka.md#loading) section.
+Extensions will only be loaded once per @apidoc[ActorSystem](actor.ActorSystem), which will be managed by Pekko.
+You can choose to have your Extension loaded on-demand or at @apidoc[ActorSystem](actor.ActorSystem) creation time through the Pekko configuration.
+Details on how to make that happen are below, in the @ref:[Loading from Configuration](extending-pekko.md#loading) section.
 
 @@@ warning
 
-Since an extension is a way to hook into Akka itself, the implementor of the extension needs to
+Since an extension is a way to hook into Pekko itself, the implementor of the extension needs to
 ensure the thread safety of their extension.
 
 @@@
@@ -45,7 +45,7 @@ Scala
 Java
 :  @@snip [ExtensionDocTest.java](/docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #extension-usage }
 
-Or from inside of an Akka Actor:
+Or from inside a Pekko Actor:
 
 Scala
 :  @@snip [ExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage-actor }
@@ -66,7 +66,7 @@ That's all there is to it!
 <a id="loading"></a>
 ## Loading from Configuration
 
-To be able to load extensions from your Akka configuration you must add FQCNs of implementations of either @apidoc[ExtensionId](actor.ExtensionId) or @apidoc[ExtensionIdProvider](ExtensionIdProvider)
+To be able to load extensions from your Pekko configuration you must add FQCNs of implementations of either @apidoc[ExtensionId](actor.ExtensionId) or @apidoc[ExtensionIdProvider](ExtensionIdProvider)
 in the `pekko.extensions` section of the config you provide to your @apidoc[ActorSystem](actor.ActorSystem).
 
 Scala
@@ -84,9 +84,9 @@ Java
 ## Applicability
 
 The sky is the limit!
-By the way, did you know that Akka @ref:[Cluster](cluster-usage.md), @ref:[Serialization](serialization.md) and other features are implemented as Akka Extensions?
+By the way, did you know that Pekko @ref:[Cluster](cluster-usage.md), @ref:[Serialization](serialization.md) and other features are implemented as Pekko Extensions?
 
-<a id="extending-akka-settings"></a>
+<a id="extending-pekko-settings"></a>
 ### Application specific settings
 
 The @ref:[configuration](general/configuration.md) can be used for application specific settings. A good practice is to place those settings in an Extension.
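A compact sketch of the two components and their registration (the counter extension and package name are invented for illustration):

```scala
import java.util.concurrent.atomic.AtomicLong
import org.apache.pekko.actor.{ ExtendedActorSystem, Extension, ExtensionId, ExtensionIdProvider }

// The Extension instance holds the per-ActorSystem state.
class CountExtensionImpl extends Extension {
  private val counter = new AtomicLong(0)
  def increment(): Long = counter.incrementAndGet()
}

// The ExtensionId is the lookup handle; createExtension is invoked
// at most once per ActorSystem.
object CountExtension extends ExtensionId[CountExtensionImpl] with ExtensionIdProvider {
  override def lookup: ExtensionId[_ <: Extension] = CountExtension
  override def createExtension(system: ExtendedActorSystem): CountExtensionImpl =
    new CountExtensionImpl
}

// usage anywhere a system is in scope: CountExtension(system).increment()
// eager loading from config: pekko.extensions = ["com.example.CountExtension"]
```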
diff --git a/docs/src/main/paradox/fault-tolerance.md b/docs/src/main/paradox/fault-tolerance.md
index 603dd0741c..87a118a08e 100644
--- a/docs/src/main/paradox/fault-tolerance.md
+++ b/docs/src/main/paradox/fault-tolerance.md
@@ -8,11 +8,11 @@ For the full documentation of this feature and for new projects see @ref:[fault
 The concept of fault tolerance relates to actors, so in order to use these make sure to depend on actors:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/fsm.md b/docs/src/main/paradox/fsm.md
index ce2b08da46..c97df9f214 100644
--- a/docs/src/main/paradox/fsm.md
+++ b/docs/src/main/paradox/fsm.md
@@ -8,17 +8,17 @@ For the documentation of the new API of this feature and for new projects see @r
 To use Finite State Machine actors, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Overview
 
-The FSM (Finite State Machine) is available as @scala[a mixin for the] @java[an abstract base class that implements an] Akka Actor and
+The FSM (Finite State Machine) is available as @scala[a mixin for the] @java[an abstract base class that implements a] Pekko Actor and
 is best described in the [Erlang design principles](https://www.erlang.org/documentation/doc-4.8.2/doc/design_principles/fsm.html)
 
 A FSM can be described as a set of relations of the form:
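As a rough illustration of that relation (current state and event in, actions and next state out), a hypothetical buncher written against the classic `FSM` trait might look like this; all names here are made up:

```scala
import org.apache.pekko.actor.FSM

sealed trait State
case object Idle extends State
case object Active extends State

sealed trait Data
case object Uninitialized extends Data
final case class Batch(items: List[Any]) extends Data

// Each `when` block matches (current state, incoming event) and
// declares the actions plus the next state.
class Buncher extends FSM[State, Data] {
  startWith(Idle, Uninitialized)

  when(Idle) {
    case Event(item, Uninitialized) =>
      goto(Active).using(Batch(List(item)))
  }

  when(Active) {
    case Event("flush", Batch(items)) =>
      log.info("flushing {} queued items", items.size)
      goto(Idle).using(Uninitialized)
    case Event(item, Batch(items)) =>
      stay().using(Batch(item :: items))
  }

  initialize()
}
```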
diff --git a/docs/src/main/paradox/futures.md b/docs/src/main/paradox/futures.md
index be00f7f34f..454336c386 100644
--- a/docs/src/main/paradox/futures.md
+++ b/docs/src/main/paradox/futures.md
@@ -2,14 +2,14 @@
 
 ## Dependency
 
-Akka offers tiny helpers for use with @scala[@scaladoc[Future](scala.concurrent.Future)s]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)]. These are part of Akka's core module:
+Pekko offers tiny helpers for use with @scala[@scaladoc[Future](scala.concurrent.Future)s]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)]. These are part of Pekko's core module:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
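A sketch of one such helper, `after`, which completes a value only once a delay has elapsed, without blocking a thread while waiting (system name and delay are arbitrary):

```scala
import scala.concurrent.Future
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.pattern.after

implicit val system: ActorSystem = ActorSystem("example")
import system.dispatcher

// The scheduler fires once the delay has passed; only then is the
// by-name future evaluated.
val delayed: Future[String] =
  after(200.millis, system.scheduler)(Future.successful("done"))

// There is also org.apache.pekko.pattern.pipe for forwarding a
// future's result to an actor as an ordinary message.
```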
 
diff --git a/docs/src/main/paradox/general/actor-systems.md b/docs/src/main/paradox/general/actor-systems.md
index 0cf132056b..96e6ee8fd5 100644
--- a/docs/src/main/paradox/general/actor-systems.md
+++ b/docs/src/main/paradox/general/actor-systems.md
@@ -1,5 +1,5 @@
 ---
-project.description: The Akka ActorSystem.
+project.description: The Pekko ActorSystem.
 ---
 # Actor Systems
 
@@ -60,7 +60,7 @@ guidelines which might be helpful:
 The actor system as a collaborating ensemble of actors is the natural unit for
 managing shared facilities like scheduling services, configuration, logging,
 etc. Several actor systems with different configurations may co-exist within the
-same JVM without problems, there is no global shared state within Akka itself,
+same JVM without problems; there is no global shared state within Pekko itself,
 however the most common scenario will only involve a single actor system per JVM.
 
 Couple this with the transparent communication between actor systems — within one
@@ -102,7 +102,7 @@ system, after all the mantra is to view them as abundant and they weigh in at
 an overhead of only roughly 300 bytes per instance. Naturally, the exact order
 in which messages are processed in large systems is not controllable by the
 application author, but this is also not intended. Take a step back and relax
-while Akka does the heavy lifting under the hood.
+while Pekko does the heavy lifting under the hood.
 
 ## Terminating ActorSystem
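A minimal sketch of the lifecycle, assuming the classic API (`my-sys` and the timeout are placeholders):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem

val system = ActorSystem("my-sys")
// ... create actors, run the application ...

system.terminate() // asynchronously stops the guardian and all actors below it
Await.result(system.whenTerminated, 10.seconds) // block only at the very end of main
```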
 
diff --git a/docs/src/main/paradox/general/actors.md b/docs/src/main/paradox/general/actors.md
index 89ef1d0d61..28db2e4051 100644
--- a/docs/src/main/paradox/general/actors.md
+++ b/docs/src/main/paradox/general/actors.md
@@ -1,5 +1,5 @@
 ---
-project.description: What is an Actor and sending messages between independent units of computation in Akka.
+project.description: What is an Actor and sending messages between independent units of computation in Pekko.
 ---
 # What is an Actor?
 
@@ -49,15 +49,15 @@ Actor objects will typically contain some variables which reflect possible
 states the actor may be in. This can be an explicit state machine,
 or it could be a counter, set of listeners, pending requests, etc.
 These data are what make an actor valuable, and they
-must be protected from corruption by other actors. The good news is that Akka
+must be protected from corruption by other actors. The good news is that Pekko
 actors conceptually each have their own light-weight thread, which is
 completely shielded from the rest of the system. This means that instead of
 having to synchronize access using locks you can write your actor code
 without worrying about concurrency at all.
 
-Behind the scenes Akka will run sets of actors on sets of real threads, where
+Behind the scenes Pekko will run sets of actors on sets of real threads, where
 typically many actors share one thread, and subsequent invocations of one actor
-may end up being processed on different threads. Akka ensures that this
+may end up being processed on different threads. Pekko ensures that this
 implementation detail does not affect the single-threadedness of handling the
 actor’s state.
 
@@ -143,7 +143,7 @@ priority, which might even be at the front. While using such a queue, the order
 of messages processed will naturally be defined by the queue’s algorithm and in
 general not be FIFO.
 
-An important feature in which Akka differs from some other actor model
+An important feature in which Pekko differs from some other actor model
 implementations is that the current behavior must always handle the next
 dequeued message, there is no scanning the mailbox for the next matching one.
 Failure to handle a message will typically be treated as a failure, unless this
 behavior is overridden.
 ## Supervisor Strategy
 
 The final piece of an actor is its strategy for handling unexpected exceptions - failures. 
-Fault handling is then done transparently by Akka, applying one of the strategies described 
+Fault handling is then done transparently by Pekko, applying one of the strategies described 
 in @ref:[Fault Tolerance](../typed/fault-tolerance.md) for each failure.
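For a flavor of what that looks like with the new typed API, a minimal sketch (the worker behavior and the exception type are made up):

```scala
import org.apache.pekko.actor.typed.{ Behavior, SupervisorStrategy }
import org.apache.pekko.actor.typed.scaladsl.Behaviors

val worker: Behavior[String] = Behaviors.receiveMessage { msg =>
  if (msg == "boom") throw new IllegalStateException("failed")
  Behaviors.same
}

// Restart the worker (with fresh state) whenever it throws IllegalStateException.
val supervised: Behavior[String] =
  Behaviors.supervise(worker).onFailure[IllegalStateException](SupervisorStrategy.restart)
```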
 
 ## When an Actor Terminates
diff --git a/docs/src/main/paradox/general/addressing.md b/docs/src/main/paradox/general/addressing.md
index 0bb1fe335c..2f9ec55a29 100644
--- a/docs/src/main/paradox/general/addressing.md
+++ b/docs/src/main/paradox/general/addressing.md
@@ -1,10 +1,10 @@
 ---
-project.description: Local and remote Akka Actor references, locating Actors, Actor paths and addresses.
+project.description: Local and remote Pekko Actor references, locating Actors, Actor paths and addresses.
 ---
 # Actor References, Paths and Addresses
 
 This chapter describes how actors are identified and located within a possibly
-distributed Akka application. 
+distributed Pekko application. 
 
 ![actor-paths-overview.png](../images/actor-paths-overview.png)
 
@@ -38,11 +38,11 @@ actor references for all practical purposes:
 for the purpose of being completed by the response from an actor.
 `org.apache.pekko.pattern.ask` creates this actor reference.
     * `DeadLetterActorRef` is the default implementation of the dead
-letters service to which Akka routes all messages whose destinations
+letters service to which Pekko routes all messages whose destinations
 are shut down or non-existent.
-    * `EmptyLocalActorRef` is what Akka returns when looking up a
+    * `EmptyLocalActorRef` is what Pekko returns when looking up a
 non-existent local actor path: it is equivalent to a
-`DeadLetterActorRef`, but it retains its path so that Akka can send
+`DeadLetterActorRef`, but it retains its path so that Pekko can send
 it over the network and compare it to other existing actor references for
 that path, some of which might have been obtained before the actor died.
  * And then there are some one-off internal implementations which you should
@@ -88,8 +88,8 @@ by which the corresponding actor is reachable, followed by the names of the
 actors in the hierarchy from the root up. Examples are:
 
 ```
-"akka://my-sys/user/service-a/worker1"               // purely local
-"akka://my-sys@host.example.com:5678/user/service-b" // remote
+"pekko://my-sys/user/service-a/worker1"               // purely local
+"pekko://my-sys@host.example.com:5678/user/service-b" // remote
 ```
 
 The interpretation of the host and port part (i.e. `host.example.com:5678` in the example)
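For illustration, paths like the two above can be looked up with `actorSelection`; this is a sketch reusing the placeholder system and host names from the example:

```scala
import org.apache.pekko.actor.{ ActorSelection, ActorSystem }

val system = ActorSystem("my-sys")

// A selection addresses whatever actor currently lives at the path,
// whether local or on a remote node.
val local: ActorSelection =
  system.actorSelection("pekko://my-sys/user/service-a/worker1")
val remote: ActorSelection =
  system.actorSelection("pekko://my-sys@host.example.com:5678/user/service-b")

local ! "hello"
```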
diff --git a/docs/src/main/paradox/general/configuration-reference.md b/docs/src/main/paradox/general/configuration-reference.md
index 3f313c8218..a7f3b1a9e3 100644
--- a/docs/src/main/paradox/general/configuration-reference.md
+++ b/docs/src/main/paradox/general/configuration-reference.md
@@ -1,119 +1,119 @@
 # Default configuration
 
-Each Akka module has a `reference.conf` file with the default values.
+Each Pekko module has a `reference.conf` file with the default values.
 
 Make your edits/overrides in your `application.conf`. Don't override default values if
-you are not sure of the implications. [Akka Config Checker](https://doc.akka.io/docs/akka-enhancements/current/config-checker.html)
+you are not sure of the implications. [Pekko Config Checker](https://doc.akka.io/docs/akka-enhancements/current/config-checker.html)
 is a useful tool for finding potential configuration issues.
 
-The purpose of `reference.conf` files is for libraries, like Akka, to define default values that are used if
+The purpose of `reference.conf` files is for libraries, like Pekko, to define default values that are used if
 an application doesn't define a more specific value. It's also a good place to document the existence and
 meaning of the configuration properties. One library must not try to override properties in its own `reference.conf`
 for properties originally defined by another library's `reference.conf`, because the effective value would be
 nondeterministic when loading the configuration.
 
-<a id="config-akka-actor"></a>
-### akka-actor
+<a id="config-pekko-actor"></a>
+### pekko-actor
 
 @@snip [reference.conf](/actor/src/main/resources/reference.conf)
 
-<a id="config-akka-actor-typed"></a>
-### akka-actor-typed
+<a id="config-pekko-actor-typed"></a>
+### pekko-actor-typed
 
 @@snip [reference.conf](/actor-typed/src/main/resources/reference.conf)
 
-<a id="config-akka-cluster-typed"></a>
-### akka-cluster-typed
+<a id="config-pekko-cluster-typed"></a>
+### pekko-cluster-typed
 
 @@snip [reference.conf](/cluster-typed/src/main/resources/reference.conf)
 
-<a id="config-akka-cluster"></a>
-### akka-cluster
+<a id="config-pekko-cluster"></a>
+### pekko-cluster
 
 @@snip [reference.conf](/cluster/src/main/resources/reference.conf)
 
-<a id="config-akka-discovery"></a>
-### akka-discovery
+<a id="config-pekko-discovery"></a>
+### pekko-discovery
 
 @@snip [reference.conf](/discovery/src/main/resources/reference.conf)
 
-<a id="config-akka-coordination"></a>
-### akka-coordination
+<a id="config-pekko-coordination"></a>
+### pekko-coordination
 
 @@snip [reference.conf](/coordination/src/main/resources/reference.conf)
 
-<a id="config-akka-multi-node-testkit"></a>
-### akka-multi-node-testkit
+<a id="config-pekko-multi-node-testkit"></a>
+### pekko-multi-node-testkit
 
 @@snip [reference.conf](/multi-node-testkit/src/main/resources/reference.conf)
 
-<a id="config-akka-persistence-typed"></a>
-### akka-persistence-typed
+<a id="config-pekko-persistence-typed"></a>
+### pekko-persistence-typed
 
 @@snip [reference.conf](/persistence-typed/src/main/resources/reference.conf)
 
-<a id="config-akka-persistence"></a>
-### akka-persistence
+<a id="config-pekko-persistence"></a>
+### pekko-persistence
 
 @@snip [reference.conf](/persistence/src/main/resources/reference.conf)
 
-<a id="config-akka-persistence-query"></a>
-### akka-persistence-query
+<a id="config-pekko-persistence-query"></a>
+### pekko-persistence-query
 
 @@snip [reference.conf](/persistence-query/src/main/resources/reference.conf)
 
-<a id="config-akka-persistence-testkit"></a>
-### akka-persistence-testkit
+<a id="config-pekko-persistence-testkit"></a>
+### pekko-persistence-testkit
 
 @@snip [reference.conf](/persistence-testkit/src/main/resources/reference.conf)
 
-<a id="config-akka-remote-artery"></a>
-### akka-remote artery
+<a id="config-pekko-remote-artery"></a>
+### pekko-remote artery
 
 @@snip [reference.conf](/remote/src/main/resources/reference.conf) { #shared #artery type=none }
 
-<a id="config-akka-remote"></a>
-### akka-remote classic (deprecated)
+<a id="config-pekko-remote"></a>
+### pekko-remote classic (deprecated)
 
 @@snip [reference.conf](/remote/src/main/resources/reference.conf) { #shared #classic type=none }
 
-<a id="config-akka-testkit"></a>
-### akka-testkit
+<a id="config-pekko-testkit"></a>
+### pekko-testkit
 
 @@snip [reference.conf](/testkit/src/main/resources/reference.conf)
 
 <a id="config-cluster-metrics"></a>
-### akka-cluster-metrics
+### pekko-cluster-metrics
 
 @@snip [reference.conf](/cluster-metrics/src/main/resources/reference.conf)
 
 <a id="config-cluster-tools"></a>
-### akka-cluster-tools
+### pekko-cluster-tools
 
 @@snip [reference.conf](/cluster-tools/src/main/resources/reference.conf)
 
 <a id="config-cluster-sharding-typed"></a>
-### akka-cluster-sharding-typed
+### pekko-cluster-sharding-typed
 
 @@snip [reference.conf](/cluster-sharding-typed/src/main/resources/reference.conf)
 
 <a id="config-cluster-sharding"></a>
-### akka-cluster-sharding
+### pekko-cluster-sharding
 
 @@snip [reference.conf](/cluster-sharding/src/main/resources/reference.conf)
 
 <a id="config-distributed-data"></a>
-### akka-distributed-data
+### pekko-distributed-data
 
 @@snip [reference.conf](/distributed-data/src/main/resources/reference.conf)
 
-<a id="config-akka-stream"></a>
-### akka-stream
+<a id="config-pekko-stream"></a>
+### pekko-stream
 
 @@snip [reference.conf](/stream/src/main/resources/reference.conf)
 
-<a id="config-akka-stream-testkit"></a>
-### akka-stream-testkit
+<a id="config-pekko-stream-testkit"></a>
+### pekko-stream-testkit
 
 @@snip [reference.conf](/stream-testkit/src/main/resources/reference.conf)
 
diff --git a/docs/src/main/paradox/general/configuration.md b/docs/src/main/paradox/general/configuration.md
index 2e58b2238b..e3b5ef9a30 100644
--- a/docs/src/main/paradox/general/configuration.md
+++ b/docs/src/main/paradox/general/configuration.md
@@ -1,6 +1,6 @@
 # Configuration
 
-You can start using Akka without defining any configuration, since sensible default values
+You can start using Pekko without defining any configuration, since sensible default values
 are provided. Later on you might need to amend the settings to change the default behavior
 or adapt for specific runtime environments. Typical examples of settings that you
 might amend:
@@ -10,14 +10,14 @@ might amend:
  * @ref:[message serializers](../serialization.md)
  * @ref:[tuning of dispatchers](../typed/dispatchers.md)
 
-Akka uses the [Typesafe Config Library](https://github.com/lightbend/config), which might also be a good choice
+Pekko uses the [Typesafe Config Library](https://github.com/lightbend/config), which might also be a good choice
 for the configuration of your own application or library built with or without
-Akka. This library is implemented in Java with no external dependencies;
+Pekko. This library is implemented in Java with no external dependencies;
 This is only a summary of the most important parts; for more details see [the config library docs](https://github.com/lightbend/config/blob/master/README.md).
 
 ## Where configuration is read from
 
-All configuration for Akka is held within instances of @apidoc[ActorSystem](typed.ActorSystem), or
+All configuration for Pekko is held within instances of @apidoc[ActorSystem](typed.ActorSystem), or
 put differently, as viewed from the outside, @apidoc[ActorSystem](typed.ActorSystem) is the only
 consumer of configuration information. While constructing an actor system, you
 can either pass in a [Config](https://lightbend.github.io/config/latest/api/index.html?com/typesafe/config/Config.html) object or not, where the second case is
@@ -44,9 +44,9 @@ to `application`—may be overridden using the `config.resource` property
 
 @@@ note
 
-If you are writing an Akka application, keep your configuration in
+If you are writing a Pekko application, keep your configuration in
 `application.conf` at the root of the class path. If you are writing an
-Akka-based library, keep its configuration in `reference.conf` at the root
+Pekko-based library, keep its configuration in `reference.conf` at the root
 of the JAR file. It's not supported to override a config property owned by
 one library in a `reference.conf` of another library.
 
@@ -56,7 +56,7 @@ one library in a `reference.conf` of another library.
 
 @@@ warning
 
-Akka's configuration approach relies heavily on the notion of every
+Pekko's configuration approach relies heavily on the notion of every
 module/jar having its own `reference.conf` file. All of these will be
 discovered by the configuration and loaded. Unfortunately this also means
 that if you put/merge multiple jars into the same jar, you need to merge all the
@@ -77,7 +77,7 @@ A custom `application.conf` might look like this:
 
 pekko {
 
-  # Logger config for Akka internals and classic actors, the new API relies
+  # Logger config for Pekko internals and classic actors, the new API relies
   # directly on SLF4J and your config for the logger backend.
 
   # Loggers to register at boot time (org.apache.pekko.event.Logging$DefaultLogger logs
@@ -134,7 +134,7 @@ pekko {
 More advanced include and substitution mechanisms are explained in the [HOCON](https://github.com/typesafehub/config/blob/master/HOCON.md)
 specification.
 
-<a id="dakka-log-config-on-start"></a>
+<a id="dpekko-log-config-on-start"></a>
 ## Logging of Configuration
 
 If the system or config property `pekko.log-config-on-start` is set to `on`, then the
@@ -185,20 +185,20 @@ Java
 ## A Word About ClassLoaders
 
 In several places of the configuration file it is possible to specify the
-fully-qualified class name of something to be instantiated by Akka. This is
+fully-qualified class name of something to be instantiated by Pekko. This is
 done using Java reflection, which in turn uses a @javadoc[ClassLoader](java.lang.ClassLoader). Getting
 the right one in challenging environments like application containers or OSGi
-bundles is not always trivial, the current approach of Akka is that each
+bundles is not always trivial; the current approach of Pekko is that each
 @apidoc[ActorSystem](typed.ActorSystem) implementation stores the current thread’s context class
 loader (if available, otherwise just its own loader as in
 @javadoc[this.getClass.getClassLoader](java.lang.Class#getClassLoader())) and uses that for all reflective accesses.
-This implies that putting Akka on the boot class path will yield
+This implies that putting Pekko on the boot class path will yield
 @javadoc[NullPointerException](java.lang.NullPointerException) from strange places: this is not supported.
 
 ## Application specific settings
 
 The configuration can also be used for application specific settings.
-A good practice is to place those settings in an @ref:[Extension](../extending-akka.md#extending-akka-settings).
+A good practice is to place those settings in an @ref:[Extension](../extending-pekko.md#extending-pekko-settings).
 
 ## Configuring multiple ActorSystem
 
@@ -241,7 +241,7 @@ my.other.setting = "hello"
 // plus myapp1 and myapp2 subtrees
 ```
 
-while in the second one, only the “akka” subtree is lifted, with the following
+while in the second one, only the “pekko” subtree is lifted, with the following
 result
 
 ```ruby
@@ -277,7 +277,7 @@ Java
 You can replace or supplement `application.conf` either in code
 or using system properties.
 
-If you're using [ConfigFactory.load()](https://lightbend.github.io/config/latest/api/com/typesafe/config/ConfigFactory.html#load--) (which Akka does by
+If you're using [ConfigFactory.load()](https://lightbend.github.io/config/latest/api/com/typesafe/config/ConfigFactory.html#load--) (which Pekko does by
 default) you can replace `application.conf` by defining
 `-Dconfig.resource=whatever`, `-Dconfig.file=whatever`, or
 `-Dconfig.url=whatever`.
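The same layering can also be done programmatically; a sketch (the overridden property is just an example):

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem

// Parse ad-hoc overrides and layer them over the regular
// application.conf/reference.conf stack loaded by ConfigFactory.load().
val overrides = ConfigFactory.parseString("pekko.log-config-on-start = on")
val system = ActorSystem("example", overrides.withFallback(ConfigFactory.load()))
```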
@@ -347,5 +347,5 @@ override the earlier stuff.
 
 ## Listing of the Reference Configuration
 
-Each Akka module has a reference configuration file with the default values.
+Each Pekko module has a reference configuration file with the default values.
 Those `reference.conf` files are listed in @ref[Default configuration](configuration-reference.md).
diff --git a/docs/src/main/paradox/general/jmm.md b/docs/src/main/paradox/general/jmm.md
index ff7334660f..ee436fc424 100644
--- a/docs/src/main/paradox/general/jmm.md
+++ b/docs/src/main/paradox/general/jmm.md
@@ -1,10 +1,10 @@
 ---
-project.description: Akka, Actors, Futures and the Java Memory Model.
+project.description: Pekko, Actors, Futures and the Java Memory Model.
 ---
-# Akka and the Java Memory Model
+# Pekko and the Java Memory Model
 
-A major benefit of using the Lightbend Platform, including Scala and Akka, is that it simplifies the process of writing
-concurrent software.  This article discusses how the Lightbend Platform, and Akka in particular, approaches shared memory
+A major benefit of using Pekko is that it simplifies the process of writing
+concurrent software. This article discusses how Pekko approaches shared memory
 in concurrent applications.
 
 ## The Java Memory Model
@@ -29,7 +29,7 @@ write performant and scalable concurrent data structures.
 
 ## Actors and the Java Memory Model
 
-With the Actors implementation in Akka, there are two ways multiple threads can execute actions on shared memory:
+With the Actors implementation in Pekko, there are two ways multiple threads can execute actions on shared memory:
 
  * if a message is sent to an actor (e.g. by another actor). In most cases messages are immutable, but if that message
 is not a properly constructed immutable object, without a "happens before" rule, it would be possible for the receiver
 to see partially initialized data structures and possibly even values out of thin air (reordering).
  * if an actor makes changes to its internal state while processing a message, and accesses that state while processing
 another message moments later. It is important to realize that with the actor model you don't get any guarantee that
 the same thread will be executing the same actor for different messages.
 
-To prevent visibility and reordering problems on actors, Akka guarantees the following two "happens before" rules:
+To prevent visibility and reordering problems on actors, Pekko guarantees the following two "happens before" rules:
 
  * **The actor send rule:** the send of the message to an actor happens before the receive of that message by the same actor.
  * **The actor subsequent processing rule:** processing of one message happens before processing of the next message by the same actor.
@@ -66,7 +66,7 @@ Such are the perils of synchronized.
 <a id="jmm-shared-state"></a>
 ## Actors and shared mutable state
 
-Since Akka runs on the JVM there are still some rules to be followed.
+Since Pekko runs on the JVM there are still some rules to be followed.
 
 Most importantly, you must not close over internal Actor state and expose it to other threads:
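A sketch of the pitfall (hypothetical actor; the point is that the `Future` body runs on some other thread):

```scala
import scala.concurrent.Future
import org.apache.pekko.actor.Actor

class MyActor extends Actor {
  private var state = 0

  def receive: Receive = {
    case "unsafe" =>
      // WRONG: the future body runs on another thread and mutates
      // actor state outside of message processing.
      Future { state += 1 }(context.dispatcher)
    case "safe" =>
      state += 1 // fine: mutation happens while processing a message
  }
}
```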
 
diff --git a/docs/src/main/paradox/general/message-delivery-reliability.md b/docs/src/main/paradox/general/message-delivery-reliability.md
index a61d9b196c..eed0d41b9f 100644
--- a/docs/src/main/paradox/general/message-delivery-reliability.md
+++ b/docs/src/main/paradox/general/message-delivery-reliability.md
@@ -1,9 +1,9 @@
 ---
-project.description: Akka message delivery semantics, at-most-once delivery and message ordering.
+project.description: Pekko message delivery semantics, at-most-once delivery and message ordering.
 ---
 # Message Delivery Reliability
 
-Akka helps you build reliable applications which make use of multiple processor
+Pekko helps you build reliable applications which make use of multiple processor
 cores in one machine (“scaling up”) or distributed across a computer network
 (“scaling out”). The key abstraction to make this work is that all interactions
 between your code units—actors—happen via message passing, which is why the
@@ -42,7 +42,7 @@ also underlies the @scaladoc[ask](pekko.pattern.AskSupport#ask(actorRef:org.apac
 * **message ordering per sender–receiver pair**
 
 The first rule is typically found also in other actor implementations while the
-second is specific to Akka.
+second is specific to Pekko.
 
 ### Discussion: What does “at-most-once” mean?
 
@@ -89,15 +89,15 @@ decide upon the “successfully” part of point five.
 Along those same lines goes the reasoning in [Nobody Needs Reliable
 Messaging](https://www.infoq.com/articles/no-reliable-messaging/). The only meaningful way for a sender to know whether an
 interaction was successful is by receiving a business-level acknowledgement
-message, which is not something Akka could make up on its own (neither are we
+message, which is not something Pekko could make up on its own (neither are we
 writing a “do what I mean” framework nor would you want us to).
 
-Akka embraces distributed computing and makes the fallibility of communication
+Pekko embraces distributed computing and makes the fallibility of communication
 explicit through message passing, therefore it does not try to lie and emulate
 a leaky abstraction. This is a model that has been used with great success in
 Erlang and requires the users to design their applications around it. You can
 read more about this approach in the [Erlang documentation](https://erlang.org/faq/academic.html) (section 10.8 and
-10.9), Akka follows it closely.
+10.9), Pekko follows it closely.
 
 Another angle on this issue is that by providing only basic guarantees those
 use cases which do not need stronger reliability do not pay the cost of their
@@ -131,7 +131,7 @@ This means that:
 
 @@@ note
 
-It is important to note that Akka’s guarantee applies to the order in which
+It is important to note that Pekko’s guarantee applies to the order in which
 messages are enqueued into the recipient’s mailbox. If the mailbox
 implementation does not respect FIFO order (e.g. a `PriorityMailbox`),
 then the order of processing by the actor can deviate from the enqueueing
@@ -195,7 +195,7 @@ this you should only rely on @ref:[The General Rules](#the-general-rules).
 
 ### Reliability of Local Message Sends
 
-The Akka test suite relies on not losing messages in the local context (and for
+The Pekko test suite relies on not losing messages in the local context (and for
 non-error condition tests also for remote deployment), meaning that we
 actually do apply the best effort to keep our tests stable. A local @apidoc[tell](actor.ActorRef) {scala="#tell(msg:Any,sender:org.apache.pekko.actor.ActorRef):Unit"  java="#tell(java.lang.Object,org.apache.pekko.actor.ActorRef)"}
 operation can however fail for the same reasons as a normal method call can on
@@ -205,7 +205,7 @@ the JVM:
 * @javadoc[OutOfMemoryError](java.lang.OutOfMemoryError)
 * other @javadoc[VirtualMachineError](java.lang.VirtualMachineError)
 
-In addition, local sends can fail in Akka-specific ways:
+In addition, local sends can fail in Pekko-specific ways:
 
 * if the mailbox does not accept the message (e.g. full @apidoc[dispatch.BoundedMailbox])
 * if the receiving actor fails while processing the message or is already
@@ -244,7 +244,7 @@ escaped our analysis.
 
 The rule that *for a given pair of actors, messages sent directly from the first
 to the second will not be received out-of-order* holds for messages sent over the
-network with the TCP based Akka remote transport protocol.
+network with the TCP based Pekko remote transport protocol.
 
 As explained in the previous section local message sends obey transitive causal
 ordering under certain conditions. This ordering can be violated due to different
@@ -263,7 +263,7 @@ for `M2` to "travel" to node-3 via node-2.
 
 ## Higher-level abstractions
 
-Based on a small and consistent tool set in Akka's core, Akka also provides
+Based on a small and consistent tool set in Pekko's core, Pekko also provides
 powerful, higher-level abstractions on top of it.
 
 ### Messaging Patterns
@@ -299,7 +299,7 @@ state on a different continent or to react to changes). If the component’s
 state is lost—due to a machine failure or by being pushed out of a cache—it can
 be reconstructed by replaying the event stream (usually employing
 snapshots to speed up the process). @ref:[Event Sourcing](../typed/persistence.md#event-sourcing-concepts) is supported by
-Akka Persistence.
+Pekko Persistence.
 
 ### Mailbox with Explicit Acknowledgement
 
diff --git a/docs/src/main/paradox/general/remoting.md b/docs/src/main/paradox/general/remoting.md
index 6040f3b59b..3dbdc7e082 100644
--- a/docs/src/main/paradox/general/remoting.md
+++ b/docs/src/main/paradox/general/remoting.md
@@ -7,7 +7,7 @@ of programming languages, platforms and technologies.
 
 ## Distributed by Default
 
-Everything in Akka is designed to work in a distributed setting: all
+Everything in Pekko is designed to work in a distributed setting: all
 interactions of actors use purely message passing and everything is
 asynchronous. This effort has been undertaken to ensure that all functions are
 available equally when running within a single JVM or on a cluster of hundreds
 of machines. The key for enabling this is to go from remote to local by way of optimization
 instead of trying to go from local to remote by way of generalization. See the classic paper
 “A Note on Distributed Computing” for a detailed discussion on why the second approach is bound to fail.
 
 ## Ways in which Transparency is Broken
 
-What is true of Akka need not be true of the application which uses it, since
+What is true of Pekko need not be true of the application which uses it, since
 designing for distributed execution poses some restrictions on what is
 possible. The most obvious one is that all messages sent over the wire must be
 serializable.
@@ -33,8 +33,8 @@ guarantee!).
 <a id="symmetric-communication"></a>
 ## Peer-to-Peer vs. Client-Server
 
-Akka Remoting is a communication module for connecting actor systems in a peer-to-peer fashion,
-and it is the foundation for Akka Clustering. The design of remoting is driven by two (related)
+Pekko Remoting is a communication module for connecting actor systems in a peer-to-peer fashion,
+and it is the foundation for Pekko Clustering. The design of remoting is driven by two (related)
 design decisions:
 
  1. Communication between involved systems is symmetric: if a system A can connect to a system B
 then system B must also be able to connect to system A independently.
  2. The roles of the communicating systems are symmetric with regard to connection patterns: there
 is no system that only accepts connections, and there is no system that only initiates connections.
 
 The consequence of these decisions is that it is not possible to safely create
 pure client-server setups with predefined roles (violates assumption 2).
-For client-server setups it is better to use HTTP or Akka I/O.
+For client-server setups it is better to use HTTP or Pekko I/O.
 
 **Important**: Using setups involving Network Address Translation, Load Balancers or Docker
 containers violates assumption 1, unless additional steps are taken in the
 network configuration to allow symmetric communication between involved systems.
-In such situations Akka can be configured to bind to a different network
-address than the one used for establishing connections between Akka nodes.
-See @ref:[Akka behind NAT or in a Docker container](../remoting-artery.md#remote-configuration-nat-artery).
+In such situations Pekko can be configured to bind to a different network
+address than the one used for establishing connections between Pekko nodes.
+See @ref:[Pekko behind NAT or in a Docker container](../remoting-artery.md#remote-configuration-nat-artery).
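For a flavor of such a setup, a hedged sketch (hostnames, port and system name are placeholders; the `canonical`/`bind` keys mirror the Artery settings described in the remoting documentation):

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem

// Advertise the public hostname to peers while binding the server
// socket to the container-local interface.
val config = ConfigFactory.parseString("""
  pekko.remote.artery.canonical.hostname = "public.example.com"
  pekko.remote.artery.canonical.port = 25520
  pekko.remote.artery.bind.hostname = "0.0.0.0"
  pekko.remote.artery.bind.port = 25520
  """).withFallback(ConfigFactory.load())

val system = ActorSystem("my-sys", config)
```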
 
 ## Marking Points for Scaling Up with Routers
 
diff --git a/docs/src/main/paradox/general/stream/stream-design.md b/docs/src/main/paradox/general/stream/stream-design.md
index a14cfcf5aa..25898e1020 100644
--- a/docs/src/main/paradox/general/stream/stream-design.md
+++ b/docs/src/main/paradox/general/stream/stream-design.md
@@ -1,18 +1,18 @@
-# Design Principles behind Akka Streams
+# Design Principles behind Pekko Streams
 
 It took quite a while until we were reasonably happy with the look and feel of the API and the architecture of the implementation, and while being guided by intuition the design phase was very much exploratory research. This section details the findings and codifies them into a set of principles that have emerged during the process.
 
 @@@ note
 
-As detailed in the introduction keep in mind that the Akka Streams API is completely decoupled from the Reactive Streams interfaces which are an implementation detail for how to pass stream data between individual operators.
+As detailed in the introduction, keep in mind that the Pekko Streams API is completely decoupled from the Reactive Streams interfaces which are an implementation detail for how to pass stream data between individual operators.
 
 @@@
 
-## What shall users of Akka Streams expect?
+## What shall users of Pekko Streams expect?
 
-Akka is built upon a conscious decision to offer APIs that are minimal and consistent—as opposed to easy or intuitive. The credo is that we favor explicitness over magic, and if we provide a feature then it must work always, no exceptions. Another way to say this is that we minimize the number of rules a user has to learn instead of trying to keep the rules close to what we think users might expect.
+Pekko is built upon a conscious decision to offer APIs that are minimal and consistent—as opposed to easy or intuitive. The credo is that we favor explicitness over magic, and if we provide a feature then it must work always, no exceptions. Another way to say this is that we minimize the number of rules a user has to learn instead of trying to keep the rules close to what we think users might expect.
 
-From this follows that the principles implemented by Akka Streams are:
+From this it follows that the principles implemented by Pekko Streams are:
 
  * all features are explicit in the API, no magic
  * supreme compositionality: combined pieces retain the function of each part
@@ -20,16 +20,16 @@ From this follows that the principles implemented by Akka Streams are:
 
 This means that we provide all the tools necessary to express any stream processing topology, that we model all the essential aspects of this domain (back-pressure, buffering, transformations, failure recovery, etc.) and that whatever the user builds is reusable in a larger context.
 
-### Akka Streams does not send dropped stream elements to the dead letter office
+### Pekko Streams does not send dropped stream elements to the dead letter office
 
-One important consequence of offering only features that can be relied upon is the restriction that Akka Streams cannot ensure that all objects sent through a processing topology will be processed. Elements can be dropped for a number of reasons:
+One important consequence of offering only features that can be relied upon is the restriction that Pekko Streams cannot ensure that all objects sent through a processing topology will be processed. Elements can be dropped for a number of reasons:
 
  * plain user code can consume one element in a *map(...)* operator and produce an entirely different one as its result
  * common stream operators drop elements intentionally, e.g. take/drop/filter/conflate/buffer/…
  * stream failure will tear down the stream without waiting for processing to finish, all elements that are in flight will be discarded
  * stream cancellation will propagate upstream (e.g. from a *take* operator) leading to upstream processing steps being terminated without having processed all of their inputs
 
-This means that sending JVM objects into a stream that need to be cleaned up will require the user to ensure that this happens outside of the Akka Streams facilities (e.g. by cleaning them up after a timeout or when their results are observed on the stream output, or by using other means like finalizers etc.).
+This means that sending JVM objects into a stream that need to be cleaned up will require the user to ensure that this happens outside of the Pekko Streams facilities (e.g. by cleaning them up after a timeout or when their results are observed on the stream output, or by using other means like finalizers etc.).
 
 ### Resulting Implementation Considerations
 
@@ -39,16 +39,16 @@ The process of materialization will often create specific objects that are usefu
 
 ## Interoperation with other Reactive Streams implementations
 
-Akka Streams fully implement the Reactive Streams specification and interoperate with all other conformant implementations. We chose to completely separate the Reactive Streams interfaces from the user-level API because we regard them to be an SPI that is not targeted at endusers. In order to obtain a [Publisher](https://javadoc.io/doc/org.reactivestreams/reactive-streams/latest/org/reactivestreams/Publisher.html) or [Subscriber](https://javadoc.io/doc/org.reactivestreams/reactive-stream [...]
+Pekko Streams fully implement the Reactive Streams specification and interoperate with all other conformant implementations. We chose to completely separate the Reactive Streams interfaces from the user-level API because we regard them to be an SPI that is not targeted at endusers. In order to obtain a [Publisher](https://javadoc.io/doc/org.reactivestreams/reactive-streams/latest/org/reactivestreams/Publisher.html) or [Subscriber](https://javadoc.io/doc/org.reactivestreams/reactive-strea [...]
 
-All stream Processors produced by the default materialization of Akka Streams are restricted to having a single Subscriber, additional Subscribers will be rejected. The reason for this is that the stream topologies described using our DSL never require fan-out behavior from the Publisher sides of the elements, all fan-out is done using explicit elements like @apidoc[Broadcast[T]](stream.*.Broadcast).
+All stream Processors produced by the default materialization of Pekko Streams are restricted to having a single Subscriber; additional Subscribers will be rejected. The reason for this is that the stream topologies described using our DSL never require fan-out behavior from the Publisher sides of the elements, all fan-out is done using explicit elements like @apidoc[Broadcast[T]](stream.*.Broadcast).
 
 This means that @scala[@scaladoc[Sink.asPublisher(true)](pekko.stream.scaladsl.Sink$#asPublisher[T](fanout:Boolean):org.apache.pekko.stream.scaladsl.Sink[T,org.reactivestreams.Publisher[T]])]@java[@javadoc[Sink.asPublisher(WITH_FANOUT)](pekko.stream.javadsl.Sink#asPublisher(org.apache.pekko.stream.javadsl.AsPublisher))] (for enabling fan-out support) must be used where broadcast behavior is needed for interoperation with other Reactive Streams implementations.
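A small sketch of both directions of the interop (system name and element values are illustrative):

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }
import org.reactivestreams.Publisher

implicit val system: ActorSystem = ActorSystem("interop")

// Expose a stream as a Reactive Streams Publisher; fanout = true allows
// more than one downstream Subscriber to attach.
val publisher: Publisher[Int] =
  Source(1 to 10).runWith(Sink.asPublisher(fanout = true))

// Conversely, wrap a Publisher from any conformant implementation as a Source.
val source: Source[Int, NotUsed] = Source.fromPublisher(publisher)
```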
 
 ### Rationale and benefits from Sink/Source/Flow not directly extending Reactive Streams interfaces
 
 A sometimes overlooked crucial piece of information about [Reactive Streams](https://github.com/reactive-streams/reactive-streams-jvm/) is that they are a [Service Provider Interface](https://en.m.wikipedia.org/wiki/Service_provider_interface), as explained in depth in one of the [early discussions](https://github.com/reactive-streams/reactive-streams-jvm/pull/25) about the specification.
-Akka Streams was designed during the development of Reactive Streams, so they both heavily influenced one another.
+Pekko Streams was designed during the development of Reactive Streams, so they both heavily influenced one another.
 
 It may be enlightening to learn that even within the Reactive Specification the types had initially attempted to hide `Publisher`, `Subscriber` and the other SPI types from users of the API.
 Though since those internal SPI types would end up surfacing to end users of the standard in some cases, it was decided to [remove the API types, and only keep the SPI types](https://github.com/reactive-streams/reactive-streams-jvm/pull/25) which are the `Publisher`, `Subscriber` et al.
@@ -56,27 +56,27 @@ Though since those internal SPI types would end up surfacing to end users of the
 With this historical knowledge and context about the purpose of the standard – being an internal detail of interoperable libraries - we can with certainty say that it can't be really said that a direct _inheritance_ relationship with these types can be considered some form of advantage or meaningful differentiator between libraries.
 Rather, it could be seen that APIs which expose those SPI types to end-users are leaking internal implementation details accidentally. 
 
-The @apidoc[Source], @apidoc[Sink] and @apidoc[Flow] types which are part of Akka Streams have the purpose of providing the fluent DSL, as well as to be "factories" for running those streams.
+The @apidoc[Source], @apidoc[Sink] and @apidoc[Flow] types, which are part of Pekko Streams, have the purpose of providing the fluent DSL, as well as serving as "factories" for running those streams.
 Their direct counterparts in Reactive Streams are, respectively, [Publisher](https://javadoc.io/doc/org.reactivestreams/reactive-streams/latest/org/reactivestreams/Publisher.html), [Subscriber](https://javadoc.io/doc/org.reactivestreams/reactive-streams/latest/org/reactivestreams/Subscriber.html) and [Processor](https://javadoc.io/doc/org.reactivestreams/reactive-streams/latest/org/reactivestreams/Processor.html). 
-In other words, Akka Streams operate on a lifted representation of the computing graph,
-which then is materialized and executed in accordance to Reactive Streams rules. This also allows Akka Streams to perform optimizations like fusing and dispatcher configuration during the materialization step.
+In other words, Pekko Streams operate on a lifted representation of the computing graph,
+which then is materialized and executed in accordance with Reactive Streams rules. This also allows Pekko Streams to perform optimizations like fusing and dispatcher configuration during the materialization step.
 
 Another not obvious gain from hiding the Reactive Streams interfaces comes from the fact that `org.reactivestreams.Subscriber` (et al) have now been included in Java 9+, and thus become part of Java itself, so libraries should migrate to using the @javadoc[java.util.concurrent.Flow.Subscriber](java.util.concurrent.Flow.Subscriber) instead of `org.reactivestreams.Subscriber`.
 Libraries which selected to expose and directly extend the Reactive Streams types will now have a tougher time to adapt the JDK9+ types -- all their classes that extend Subscriber and friends will need to be copied or changed to extend the exact same interface,
-but from a different package. In Akka we simply expose the new type when asked to -- already supporting JDK9 types, from the day JDK9 was released.
+but from a different package. In Pekko we simply expose the new type when asked to -- already supporting JDK9 types, from the day JDK9 was released.
 
-The other, and perhaps more important reason for hiding the Reactive Streams interfaces comes back to the first points of this explanation: the fact of Reactive Streams being an SPI, and as such is hard to "get right" in ad-hoc implementations. Thus Akka Streams discourages the use of the hard to implement pieces of the underlying infrastructure, and offers simpler, more type-safe, yet more powerful abstractions for users to work with: @apidoc[GraphStage]s and operators. It is of course  [...]
+The other, and perhaps more important reason for hiding the Reactive Streams interfaces comes back to the first points of this explanation: the fact of Reactive Streams being an SPI, and as such is hard to "get right" in ad-hoc implementations. Thus Pekko Streams discourages the use of the hard to implement pieces of the underlying infrastructure, and offers simpler, more type-safe, yet more powerful abstractions for users to work with: @apidoc[GraphStage]s and operators. It is of course [...]
 
 ## What shall users of streaming libraries expect?
 
-We expect libraries to be built on top of Akka Streams, in fact Akka HTTP is one such example that lives within the Akka project itself. In order to allow users to profit from the principles that are described for Akka Streams above, the following rules are established:
+We expect libraries to be built on top of Pekko Streams; in fact, Pekko HTTP is one such example that lives within the Pekko project itself. In order to allow users to profit from the principles that are described for Pekko Streams above, the following rules are established:
 
  * libraries shall provide their users with reusable pieces, i.e. expose factories that return operators, allowing full compositionality
  * libraries may optionally and additionally provide facilities that consume and materialize operators
 
 The reasoning behind the first rule is that compositionality would be destroyed if different libraries only accepted operators and expected to materialize them: using two of these together would be impossible because materialization can only happen once. As a consequence, the functionality of a library must be expressed such that materialization can be done by the user, outside of the library’s control.
 
-The second rule allows a library to additionally provide nice sugar for the common case, an example of which is the Akka HTTP API that provides a `handleWith` method for convenient materialization.
+The second rule allows a library to additionally provide nice sugar for the common case, an example of which is the Pekko HTTP API that provides a `handleWith` method for convenient materialization.
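A sketch of what the first rule looks like in practice (the framing-based line parser is an invented example of a reusable, immutable blueprint that the caller, not the library, materializes):

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.stream.scaladsl.{ Flow, Framing }
import org.apache.pekko.util.ByteString

// A factory returning an operator: fully compositional, materialized
// by the user wherever it is embedded.
def lineParser(maxLineBytes: Int): Flow[ByteString, String, NotUsed] =
  Framing.delimiter(ByteString("\n"), maximumFrameLength = maxLineBytes)
    .map(_.utf8String)
```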
 
 @@@ note
 
@@ -88,7 +88,7 @@ Exceptions from this need to be well-justified and carefully documented.
 
 ### Resulting Implementation Constraints
 
-Akka Streams must enable a library to express any stream processing utility in terms of immutable blueprints. The most common building blocks are
+Pekko Streams must enable a library to express any stream processing utility in terms of immutable blueprints. The most common building blocks are
 
  * @apidoc[Source]: something with exactly one output stream
  * @apidoc[Sink]: something with exactly one input stream
@@ -108,11 +108,11 @@ The starting point for this discussion is the [definition given by the Reactive
 
 @@@ note
 
-Unfortunately the method name for signaling *failure* to a Subscriber is called `onError` for historical reasons. Always keep in mind that the Reactive Streams interfaces (Publisher/Subscription/Subscriber) are modeling the low-level infrastructure for passing streams between execution units, and errors on this level are precisely the failures that we are talking about on the higher level that is modeled by Akka Streams.
+Unfortunately the method name for signaling *failure* to a Subscriber is called `onError` for historical reasons. Always keep in mind that the Reactive Streams interfaces (Publisher/Subscription/Subscriber) are modeling the low-level infrastructure for passing streams between execution units, and errors on this level are precisely the failures that we are talking about on the higher level that is modeled by Pekko Streams.
 
 @@@
 
-There is only limited support for treating `onError` in Akka Streams compared to the operators that are available for the transformation of data elements, which is intentional in the spirit of the previous paragraph. Since `onError` signals that the stream is collapsing, its ordering semantics are not the same as for stream completion: transformation operators of any kind will collapse with the stream, possibly still holding elements in implicit or explicit buffers. This means that data  [...]
+There is only limited support for treating `onError` in Pekko Streams compared to the operators that are available for the transformation of data elements, which is intentional in the spirit of the previous paragraph. Since `onError` signals that the stream is collapsing, its ordering semantics are not the same as for stream completion: transformation operators of any kind will collapse with the stream, possibly still holding elements in implicit or explicit buffers. This means that data [...]
 
 The ability for failures to propagate faster than data elements is essential for tearing down streams that are back-pressured—especially since back-pressure can be the failure mode (e.g. by tripping upstream buffers which then abort because they cannot do anything else; or if a dead-lock occurred).
 
diff --git a/docs/src/main/paradox/general/supervision.md b/docs/src/main/paradox/general/supervision.md
index 5cb05cba41..73ae00e2ff 100644
--- a/docs/src/main/paradox/general/supervision.md
+++ b/docs/src/main/paradox/general/supervision.md
@@ -1,5 +1,5 @@
 ---
-project.description: Hierarchical supervision, lifecycle monitoring and error or failure handling in Akka.
+project.description: Hierarchical supervision, lifecycle monitoring and error or failure handling in Pekko.
 ---
 # Supervision and Monitoring
 
@@ -82,7 +82,7 @@ re-processed.
 
 @@@ note
 
-Lifecycle Monitoring in Akka is usually referred to as `DeathWatch`
+Lifecycle Monitoring in Pekko is usually referred to as `DeathWatch`
 
 @@@
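A minimal classic-API sketch of DeathWatch (actor names are invented):

```scala
import org.apache.pekko.actor.{ Actor, ActorRef, Props, Terminated }

class Worker extends Actor {
  def receive: Receive = Actor.emptyBehavior
}

class Parent extends Actor {
  private val child: ActorRef = context.actorOf(Props[Worker](), "worker")
  context.watch(child) // register this actor for the child's Terminated message

  def receive: Receive = {
    case Terminated(`child`) =>
      // the child is gone; note that no failure details are conveyed
      context.stop(self)
  }
}
```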
 
diff --git a/docs/src/main/paradox/general/terminology.md b/docs/src/main/paradox/general/terminology.md
index e07774bf8e..fad8d28054 100644
--- a/docs/src/main/paradox/general/terminology.md
+++ b/docs/src/main/paradox/general/terminology.md
@@ -1,8 +1,8 @@
 # Terminology, Concepts
 
 In this chapter we attempt to establish a common terminology to define a solid ground for communicating about concurrent,
-distributed systems which Akka targets. Please note that, for many of these terms, there is no single agreed definition.
-We seek to give working definitions that will be used in the scope of the Akka documentation.
+distributed systems which Pekko targets. Please note that, for many of these terms, there is no single agreed definition.
+We seek to give working definitions that will be used in the scope of the Pekko documentation.
 
 ## Concurrency vs. Parallelism
 
@@ -69,7 +69,7 @@ this can cause race conditions.
 
 @@@ note
 
-The only guarantee that Akka provides about messages sent between a given pair of actors is that their order is
+The only guarantee that Pekko provides about messages sent between a given pair of actors is that their order is
 always preserved. See @ref:[Message Delivery Reliability](message-delivery-reliability.md).
 
 @@@
diff --git a/docs/src/main/paradox/images/akka-remote-testconductor.png b/docs/src/main/paradox/images/pekko-remote-testconductor.png
similarity index 100%
rename from docs/src/main/paradox/images/akka-remote-testconductor.png
rename to docs/src/main/paradox/images/pekko-remote-testconductor.png
diff --git a/docs/src/main/paradox/includes.md b/docs/src/main/paradox/includes.md
index 8157933226..c0fa0555fc 100644
--- a/docs/src/main/paradox/includes.md
+++ b/docs/src/main/paradox/includes.md
@@ -2,8 +2,8 @@
 <!--- #actor-api --->
 @@@ note
 
-Akka Classic pertains to the original Actor APIs, which have been improved by more type safe and guided Actor APIs. 
-Akka Classic is still fully supported and existing applications can continue to use the classic APIs. It is also
+Pekko Classic pertains to the original Actor APIs, which have been improved by more type-safe and guided Actor APIs. 
+Pekko Classic is still fully supported and existing applications can continue to use the classic APIs. It is also
 possible to use the new Actor APIs together with classic actors in the same ActorSystem, see @ref:[coexistence](typed/coexisting.md).
 For new projects we recommend using @ref:[the new Actor API](typed/actors.md).
        
diff --git a/docs/src/main/paradox/includes/cluster.md b/docs/src/main/paradox/includes/cluster.md
index 97233be6f9..df919b271c 100644
--- a/docs/src/main/paradox/includes/cluster.md
+++ b/docs/src/main/paradox/includes/cluster.md
@@ -21,7 +21,7 @@ their physical location in the cluster.
 ### Distributed Data
 
 Distributed Data is useful when you need to share data between nodes in an
-Akka Cluster. The data is accessed with an actor providing a key-value store like API.
+Pekko Cluster. The data is accessed with an actor providing an API like a key-value store.
 
 <!--- #cluster-ddata --->
  
@@ -44,7 +44,7 @@ like round-robin and consistent hashing.
 <!--- #cluster-multidc --->
 ### Cluster across multiple data centers
 
-Akka Cluster can be used across multiple data centers, availability zones or regions,
+Pekko Cluster can be used across multiple data centers, availability zones or regions,
 so that one Cluster can span multiple data centers and still be tolerant to network partitions.
 
 <!--- #cluster-multidc --->
diff --git a/docs/src/main/paradox/index-actors.md b/docs/src/main/paradox/index-actors.md
index afce684da7..c29c516334 100644
--- a/docs/src/main/paradox/index-actors.md
+++ b/docs/src/main/paradox/index-actors.md
@@ -4,17 +4,17 @@
 
 ## Dependency
 
-To use Classic Akka Actors, you must add the following dependency in your project:
+To use Classic Pekko Actors, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
   group2="org.apache.pekko"
-  artifact2="akka-testkit_$scala.binary.version$"
+  artifact2="pekko-testkit_$scala.binary.version$"
   scope2=test
   version2=PekkoVersion
 }
diff --git a/docs/src/main/paradox/index-classic.md b/docs/src/main/paradox/index-classic.md
index 4b3893dcd4..1095c387b3 100644
--- a/docs/src/main/paradox/index-classic.md
+++ b/docs/src/main/paradox/index-classic.md
@@ -1,4 +1,4 @@
-# Akka Classic
+# Pekko Classic
 
 @@include[includes.md](includes.md) { #actor-api }
 
diff --git a/docs/src/main/paradox/index-network.md b/docs/src/main/paradox/index-network.md
index f92efeee4a..2ee1dd5f8c 100644
--- a/docs/src/main/paradox/index-network.md
+++ b/docs/src/main/paradox/index-network.md
@@ -1,7 +1,6 @@
 # Classic Networking
 
 @@include[includes.md](includes.md) { #actor-api }
-FIXME https://github.com/akka/akka/issues/27263
 
 @@toc { depth=2 }
 
diff --git a/docs/src/main/paradox/index-utilities-classic.md b/docs/src/main/paradox/index-utilities-classic.md
index 5463bab47d..5157646e8a 100644
--- a/docs/src/main/paradox/index-utilities-classic.md
+++ b/docs/src/main/paradox/index-utilities-classic.md
@@ -5,14 +5,14 @@
 To use Utilities, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
   group2="org.apache.pekko"
-  artifact2="akka-testkit_$scala.binary.version$"
+  artifact2="pekko-testkit_$scala.binary.version$"
   scope2=test
   version2=PekkoVersion
 }
@@ -24,6 +24,6 @@ To use Utilities, you must add the following dependency in your project:
 * [event-bus](event-bus.md)
 * [logging](logging.md)
 * [scheduler](scheduler.md)
-* [extending-akka](extending-akka.md)
+* [extending-pekko](extending-pekko.md)
 
 @@@
diff --git a/docs/src/main/paradox/index.md b/docs/src/main/paradox/index.md
index 2aa4d2ec6a..f7fb94c6e9 100644
--- a/docs/src/main/paradox/index.md
+++ b/docs/src/main/paradox/index.md
@@ -1,4 +1,4 @@
-# Akka Documentation
+# Pekko Documentation
 
 @@toc { depth=2 }
 
diff --git a/docs/src/main/paradox/io-dns.md b/docs/src/main/paradox/io-dns.md
index 235e2bef8e..1409954b7d 100644
--- a/docs/src/main/paradox/io-dns.md
+++ b/docs/src/main/paradox/io-dns.md
@@ -4,11 +4,11 @@
 
 `async-dns` does not support:
 
-* [Local hosts file](https://github.com/akka/akka/issues/25846) e.g. `/etc/hosts` on Unix systems
+* Local hosts file, e.g. `/etc/hosts` on Unix systems
 * The [nsswitch.conf](https://linux.die.net/man/5/nsswitch.conf) file (no plan to support)
 
 Additionally, while search domains are supported through configuration, detection of the system-configured
-[Search domains](https://github.com/akka/akka/issues/25825) is only supported on systems that provide this 
+search domains is only supported on systems that provide this 
 configuration through a `/etc/resolv.conf` file, i.e. it isn't supported on Windows or OSX, and none of the 
 environment variables that are usually supported on most \*nix OSes are supported.
 
@@ -22,17 +22,17 @@ The `async-dns` API is marked as `ApiMayChange` as more information is expected
 
 @@@ warning
 
-The ability to plugin in a custom DNS implementation is expected to be removed in future versions of Akka.
+The ability to plug in a custom DNS implementation is expected to be removed in future versions of Pekko.
 Users should pick one of the built-in extensions.
 
 @@@
 
-Akka DNS is a pluggable way to interact with DNS. Implementations much implement `org.apache.pekko.io.DnsProvider` and provide a configuration
+Pekko DNS is a pluggable way to interact with DNS. Implementations must implement `org.apache.pekko.io.DnsProvider` and provide a configuration
 block that specifies the implementation via `provider-object`.
 
-@@@ note { title="DNS via Akka Discovery" }
+@@@ note { title="DNS via Pekko Discovery" }
 
-@ref[Akka Discovery](discovery/index.md) can be backed by the Akka DNS implementation and provides a more general API for service lookups which is not limited to domain name lookup.
+@ref[Pekko Discovery](discovery/index.md) can be backed by the Pekko DNS implementation and provides a more general API for service lookups which is not limited to domain name lookup.
 
 @@@
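
For example, a minimal sketch of selecting the built-in `async-dns` extension via configuration (assuming the setting keeps the same shape as in the reference configuration):

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem

// Select the built-in async-dns resolver instead of the default one.
val config = ConfigFactory
  .parseString("pekko.io.dns.resolver = async-dns")
  .withFallback(ConfigFactory.load())
val system = ActorSystem("dns-sketch", config)
```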
 
diff --git a/docs/src/main/paradox/io-tcp.md b/docs/src/main/paradox/io-tcp.md
index a96331b83b..f5492241f8 100644
--- a/docs/src/main/paradox/io-tcp.md
+++ b/docs/src/main/paradox/io-tcp.md
@@ -8,11 +8,11 @@ project.description: Low level API for using TCP with classic actors.
 To use TCP, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
@@ -26,7 +26,7 @@ Scala
 Java
 :  @@snip [IODocTest.java](/docs/src/test/java/jdocs/io/japi/IODocTest.java) { #imports }
 
-All of the Akka I/O APIs are accessed through manager objects. When using an I/O API, the first step is to acquire a
+All of the Pekko I/O APIs are accessed through manager objects. When using an I/O API, the first step is to acquire a
 reference to the appropriate manager. The code below shows how to acquire a reference to the `Tcp` manager.
 
 Scala
@@ -54,7 +54,7 @@ to and a list of socket options to apply.
 @@@ note
 
 The SO_NODELAY (TCP_NODELAY on Windows) socket option defaults to true in
-Akka, independently of the OS default settings. This setting disables Nagle's
+Pekko, independently of the OS default settings. This setting disables Nagle's
 algorithm, considerably improving latency for most applications. This setting
 can be overridden by passing `SO.TcpNoDelay(false)` in the list of socket
 options of the @scala[`Connect` message]@java[message by the `TcpMessage.connect` method].
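
A minimal sketch of overriding it when connecting (classic API; the address is illustrative):

```scala
import java.net.InetSocketAddress
import org.apache.pekko.actor.Actor
import org.apache.pekko.io.{ IO, Tcp }

class ClientSketch extends Actor {
  import context.system

  // Acquire the Tcp manager and connect with Nagle's algorithm re-enabled.
  IO(Tcp) ! Tcp.Connect(
    new InetSocketAddress("localhost", 8080),
    options = List(Tcp.SO.TcpNoDelay(false)))

  def receive: Receive = {
    case _: Tcp.Connected                  => // ready to register a handler and write
    case Tcp.CommandFailed(_: Tcp.Connect) => context.stop(self)
  }
}
```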
diff --git a/docs/src/main/paradox/io-udp.md b/docs/src/main/paradox/io-udp.md
index a2d00a54a9..c81b41597f 100644
--- a/docs/src/main/paradox/io-udp.md
+++ b/docs/src/main/paradox/io-udp.md
@@ -8,11 +8,11 @@ project.description: Low level API for using UDP with classic actors.
 To use UDP, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
@@ -111,7 +111,7 @@ connect, thus writes do not suffer an additional performance penalty.
 
 ## UDP Multicast
 
-Akka provides a way to control various options of `DatagramChannel` through the
+Pekko provides a way to control various options of `DatagramChannel` through the
 `org.apache.pekko.io.Inet.SocketOption` interface. The example below shows
 how to set up a receiver of multicast messages using the IPv6 protocol.
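
As a sketch of the idea (illustrative names): a `SocketOptionV2` that joins the multicast group once the datagram socket is bound.

```scala
import java.net.{ DatagramSocket, InetAddress, NetworkInterface }
import org.apache.pekko.io.Inet

// Joins a multicast group after the underlying socket has been bound.
final case class MulticastGroup(address: String, interface: String)
    extends Inet.SocketOptionV2 {
  override def afterBind(s: DatagramSocket): Unit = {
    val group = InetAddress.getByName(address)
    val networkInterface = NetworkInterface.getByName(interface)
    s.getChannel.join(group, networkInterface)
  }
}
```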
 
diff --git a/docs/src/main/paradox/io.md b/docs/src/main/paradox/io.md
index 62e0c30558..5af7299b6d 100644
--- a/docs/src/main/paradox/io.md
+++ b/docs/src/main/paradox/io.md
@@ -5,20 +5,18 @@
 To use I/O, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
-The `pekko.io` package has been developed in collaboration between the Akka
-and [spray.io](http://spray.io) teams. Its design combines experiences from the
-`spray-io` module with improvements that were jointly developed for
-more general consumption as an actor-based service.
+The design of the `pekko.io` package combines experiences from the `spray-io` module 
+with improvements that were jointly developed for more general consumption as an actor-based service.
 
 The guiding design goal for this I/O implementation was to reach extreme
 scalability, make no compromises in providing an API correctly matching the
@@ -64,12 +62,12 @@ application tries to push more data than a device can handle, the driver has to
 is able to write them. With buffering it is possible to handle short bursts of intensive writes --- but no buffer is infinite.
 "Flow control" is needed to avoid overwhelming device buffers.
 
-Akka supports two types of flow control:
+Pekko supports two types of flow control:
 
  * *Ack-based*, where the driver notifies the writer when writes have succeeded.
  * *Nack-based*, where the driver notifies the writer when writes have failed.
 
-Each of these models is available in both the TCP and the UDP implementations of Akka I/O.
+Each of these models is available in both the TCP and the UDP implementations of Pekko I/O.
 
 Individual writes can be acknowledged by providing an ack object in the write message (@apidoc[Write](io.Tcp.Write) in the case of TCP and
 @apidoc[Send](io.Udp.Send) for UDP). When the write is complete the worker will send the ack object to the writing actor. This can be
@@ -93,7 +91,7 @@ not error handling. In other words, data may still be lost, even if every write
 ### ByteString
 
 To maintain isolation, actors should communicate with immutable objects only. @apidoc[ByteString](util.ByteString) is an
-immutable container for bytes. It is used by Akka's I/O system as an efficient, immutable alternative
+immutable container for bytes. It is used by Pekko's I/O system as an efficient, immutable alternative
 to the traditional byte containers used for I/O on the JVM, such as @scala[@scaladoc[Array](scala.Array)[@scaladoc[Byte](scala.Byte)]]@java[`byte[]`] and @javadoc[ByteBuffer](java.nio.ByteBuffer).
 
 `ByteString` is a [rope-like](https://en.wikipedia.org/wiki/Rope_\(computer_science\)) data structure that is immutable
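
A quick sketch of why this matters in practice:

```scala
import org.apache.pekko.util.ByteString

val hello = ByteString("Hello, ")
val world = ByteString("World!")

// Concatenation is cheap: the rope structure shares the underlying
// byte arrays instead of copying them.
val greeting = hello ++ world
println(greeting.utf8String) // Hello, World!

// compact() collapses the rope into one contiguous array when
// repeated traversal makes that worthwhile.
val compacted = greeting.compact
```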
diff --git a/docs/src/main/paradox/logging.md b/docs/src/main/paradox/logging.md
index 13035dbad2..09946bbd86 100644
--- a/docs/src/main/paradox/logging.md
+++ b/docs/src/main/paradox/logging.md
@@ -5,7 +5,7 @@ For the new API see @ref[Logging](typed/logging.md).
 
 ## Module info
 
-To use Logging, you must at least use the Akka actors dependency in your project, and will most likely want to configure logging via the SLF4J module (@ref:[see below](#slf4j)).
+To use Logging, you must at least use the Pekko actors dependency in your project, and will most likely want to configure logging via the SLF4J module (@ref:[see below](#slf4j)).
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
@@ -20,7 +20,7 @@ To use Logging, you must at least use the Akka actors dependency in your project
 
 ## Introduction
 
-Logging in Akka is not tied to a specific logging backend. By default
+Logging in Pekko is not tied to a specific logging backend. By default
 log messages are printed to STDOUT, but you can plug in an SLF4J logger or
 your own logger. Logging is performed asynchronously to ensure that logging
 has minimal performance impact. Logging generally means IO and locks,
@@ -54,8 +54,8 @@ class MyActor extends Actor with org.apache.pekko.actor.ActorLogging {
 
 The first parameter to @scala[@scaladoc[Logging](pekko.event.Logging$#apply[T](bus:org.apache.pekko.event.LoggingBus,logSource:T)(implicitevidence$5:org.apache.pekko.event.LogSource[T]):org.apache.pekko.event.LoggingAdapter)] @java[@javadoc[Logging.getLogger](pekko.event.Logging#getLogger(org.apache.pekko.event.LoggingBus,java.lang.Object))] could also be any
 @apidoc[LoggingBus], specifically @scala[@scaladoc[system.eventStream](pekko.actor.ActorSystem#eventStream:org.apache.pekko.event.EventStream)] @java[@javadoc[system.getEventStream()](pekko.actor.ActorSystem#getEventStream())]. 
-In the demonstrated case, the actor system's address is included in the `akkaSource`
-representation of the log source (see @ref:[Logging Thread, Akka Source and Actor System in MDC](#logging-thread-akka-source-and-actor-system-in-mdc)),
+In the demonstrated case, the actor system's address is included in the `pekkoSource`
+representation of the log source (see @ref:[Logging Thread, Pekko Source and Actor System in MDC](#logging-thread-pekko-source-and-actor-system-in-mdc)),
 while in the second case this is not automatically done.
 The second parameter to @scala[`Logging`] @java[`Logging.getLogger`] is the source of this logging channel.
 The source object is translated to a String according to the following rules:
@@ -109,7 +109,7 @@ to the @ref:[Event Stream](event-bus.md#event-stream).
 
 ### Auxiliary logging options
 
-Akka has a few configuration options for very low level debugging. These make more sense in development than in production.
+Pekko has a few configuration options for very low-level debugging. These make more sense in development than in production.
 
 You almost certainly need to have logging set to `DEBUG` to use any of the options below:
 
@@ -119,7 +119,7 @@ pekko {
 }
 ```
 
-This config option is very good if you want to know what config settings are loaded by Akka:
+This config option is useful if you want to see which config settings are loaded by Pekko:
 
 ```ruby
 pekko {
@@ -221,7 +221,7 @@ If you want to see all messages that are sent through remoting at `DEBUG` log le
 
 ```ruby
 pekko.remote.artery {
-  # If this is "on", Akka will log all outbound messages at DEBUG level,
+  # If this is "on", Pekko will log all outbound messages at DEBUG level,
   # if off then they are not logged
   log-sent-messages = on
 }
@@ -231,7 +231,7 @@ If you want to see all messages that are received through remoting at `DEBUG` lo
 
 ```ruby
 pekko.remote.artery {
-  # If this is "on", Akka will log all inbound messages at DEBUG level,
+  # If this is "on", Pekko will log all inbound messages at DEBUG level,
   # if off then they are not logged
   log-received-messages = on
 }
@@ -317,12 +317,12 @@ pekko {
 
 The default one logs to STDOUT and is registered by default. It is not intended
 to be used for production. There is also an @ref:[SLF4J](#slf4j)
-logger available in the 'akka-slf4j' module.
+logger available in the 'pekko-slf4j' module.
 
 @@@ note
 
-If `akka-actor-typed` is available on your classpath, logging will automatically switch to @ref:[SLF4J](#slf4j) instead of 
-the default logger. See the  @ref:[Akka typed logging](typed/logging.md#event-bus) docs for more details.
+If `pekko-actor-typed` is available on your classpath, logging will automatically switch to @ref:[SLF4J](#slf4j) instead of 
+the default logger. See the @ref:[Pekko typed logging](typed/logging.md#event-bus) docs for more details.
 
 @@@
 
@@ -344,15 +344,15 @@ stdout logger is `WARNING` and it can be silenced completely by setting
 
 ## SLF4J
 
-Akka provides a logger for [SLF4J](https://www.slf4j.org/). This module is available in the 'akka-slf4j.jar'.
+Pekko provides a logger for [SLF4J](https://www.slf4j.org/). This module is available in the 'pekko-slf4j.jar'.
 It has a single dependency: the slf4j-api jar. At runtime you also need an SLF4J backend; we recommend [Logback](https://logback.qos.ch/):
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-slf4j_$scala.binary.version$"
+  artifact="pekko-slf4j_$scala.binary.version$"
   version=PekkoVersion
   group2="ch.qos.logback"
   artifact2="logback-classic"
@@ -445,12 +445,12 @@ in this example:
 Place the `logback.xml` file in `src/main/resources/logback.xml`. For tests you can define different
 logging configuration in `src/test/resources/logback-test.xml`.
 
-MDC properties can be included in the Logback output with for example `%X{akkaSource}` specifier within the
+MDC properties can be included in the Logback output with, for example, the `%X{pekkoSource}` specifier within the
 [pattern layout configuration](https://logback.qos.ch/manual/layouts.html#mdc):
 
 ```
   <encoder>
-    <pattern>%date{ISO8601} %-5level %logger{36} %X{akkaSource} - %msg%n</pattern>
+    <pattern>%date{ISO8601} %-5level %logger{36} %X{pekkoSource} - %msg%n</pattern>
   </encoder>
 ```
 
@@ -462,7 +462,7 @@ All MDC properties as key-value entries can be included with `%mdc`:
   </encoder>
 ```
 
-### Logging Thread, Akka Source and Actor System in MDC
+### Logging Thread, Pekko Source and Actor System in MDC
 
 Since the logging is done asynchronously, the thread in which the logging was performed is captured in
 Mapped Diagnostic Context (MDC) with attribute name `sourceThread`.
@@ -470,17 +470,17 @@ Mapped Diagnostic Context (MDC) with attribute name `sourceThread`.
 @@@ note
 
 It will probably be a good idea to use the `sourceThread` MDC value also in
-non-Akka parts of the application in order to have this property consistently
+non-Pekko parts of the application in order to have this property consistently
 available in the logs.
 
 @@@
 
-Another helpful facility is that Akka captures the actor’s address when
+Another helpful facility is that Pekko captures the actor’s address when
 instantiating a logger within it, meaning that the full instance identification
 is available for associating log messages e.g. with members of a router. This
-information is available in the MDC with attribute name `akkaSource`.
+information is available in the MDC with attribute name `pekkoSource`.
 
-The address of the actor system, containing host and port if the system is using cluster, is available through `akkaAddress`.
+The address of the actor system, containing host and port if the system is using cluster, is available through `pekkoAddress`.
 
 Finally, the actor system in which the logging was performed
 is available in the MDC with attribute name `sourceActorSystem`.
@@ -490,14 +490,14 @@ For more details on what this attribute contains—also for non-actors—please
 
 ### More accurate timestamps for log output in MDC
 
-Akka's logging is asynchronous which means that the timestamp of a log entry is taken from
+Pekko's logging is asynchronous, which means that the timestamp of a log entry is taken from
 when the underlying logger implementation is called, which can be surprising at first.
-If you want to more accurately output the timestamp, use the MDC attribute `akkaTimestamp`.
+If you want to output the timestamp more accurately, use the MDC attribute `pekkoTimestamp`.
 
 ### MDC values defined by the application
 
 One useful feature available in SLF4J is [MDC](https://logback.qos.ch/manual/mdc.html).
-Akka has a way to let the application specify custom values, for this you need to use a
+Pekko has a way to let the application specify custom values; for this you need to use a
 specialized @apidoc[LoggingAdapter], the @apidoc[DiagnosticLoggingAdapter]. In order to
 get it you can use the factory, providing an @scala[@scaladoc[Actor](pekko.actor.Actor)] @java[@javadoc[AbstractActor](pekko.actor.AbstractActor)] as logSource:
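
A minimal sketch of that pattern (message type and MDC keys are illustrative):

```scala
import org.apache.pekko.actor.Actor
import org.apache.pekko.event.{ DiagnosticLoggingAdapter, Logging }

class OrderActor extends Actor {
  val log: DiagnosticLoggingAdapter = Logging(this)

  def receive: Receive = {
    case orderId: String =>
      log.mdc(Map("orderId" -> orderId)) // values end up in the MDC
      log.info("Processing order")
      log.clearMDC() // always clear: the adapter is reused across messages
  }
}
```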
 
@@ -571,15 +571,15 @@ trigger emails and other notifications immediately.
 Markers are available through the LoggingAdapters, when obtained via @apidoc[Logging.withMarker](event.Logging$) {scala="#withMarker(logSource:org.apache.pekko.actor.Actor):org.apache.pekko.event.DiagnosticMarkerBusLoggingAdapter" java="#withMarker(org.apache.pekko.actor.Actor)"}.
 The first argument passed into all log calls then should be a @apidoc[event.LogMarker].
 
-The slf4j bridge provided by Akka in `akka-slf4j` will automatically pick up this marker value and make it available to SLF4J.
+The slf4j bridge provided by Pekko in `pekko-slf4j` will automatically pick up this marker value and make it available to SLF4J.
 
-Akka is logging some events with markers. Some of these events also include structured MDC properties. 
+Pekko logs some events with markers. Some of these events also include structured MDC properties. 
 
 * The "SECURITY" marker is used for highlighting security related events or incidents.
-* Akka Actor is using the markers defined in @apidoc[actor.ActorLogMarker$].
-* Akka Cluster is using the markers defined in @apidoc[cluster.ClusterLogMarker$].
-* Akka Remoting is using the markers defined in @apidoc[remote.RemoteLogMarker$].
-* Akka Cluster Sharding is using the markers defined in @apidoc[cluster.sharding.ShardingLogMarker$].
+* Pekko Actor uses the markers defined in @apidoc[actor.ActorLogMarker$].
+* Pekko Cluster uses the markers defined in @apidoc[cluster.ClusterLogMarker$].
+* Pekko Remoting uses the markers defined in @apidoc[remote.RemoteLogMarker$].
+* Pekko Cluster Sharding uses the markers defined in @apidoc[cluster.sharding.ShardingLogMarker$].
 
 Markers and MDC properties are automatically picked up by the [Logstash Logback encoder](https://github.com/logstash/logstash-logback-encoder).
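
A minimal sketch of logging with a marker (classic API; the actor and message are illustrative):

```scala
import org.apache.pekko.actor.Actor
import org.apache.pekko.event.{ LogMarker, Logging }

class LoginActor extends Actor {
  val log = Logging.withMarker(this)

  def receive: Receive = {
    case attempt: String =>
      // The SECURITY marker can drive routing or alerting in the backend.
      log.info(LogMarker("SECURITY"), "Login attempt from {}", attempt)
  }
}
```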
 
@@ -595,7 +595,7 @@ The marker can be included in the Logback output with `%marker` and all MDC prop
 
 It is also possible to use the @javadoc[org.slf4j.Marker](org.slf4j.Marker) with the @apidoc[LoggingAdapter] when using slf4j.
 
-Since the akka-actor library avoids depending on any specific logging library, the support for this is included in `akka-slf4j`,
+Since the pekko-actor library avoids depending on any specific logging library, the support for this is included in `pekko-slf4j`,
 which provides the @apidoc[Slf4jLogMarker] type which can be passed in as first argument instead of the logging framework agnostic LogMarker
-type from `akka-actor`. The most notable difference between the two is that slf4j's Markers can have child markers, so one can
+type from `pekko-actor`. The most notable difference between the two is that slf4j's Markers can have child markers, so one can
 rely more information using them rather than just a single string.
diff --git a/docs/src/main/paradox/mailboxes.md b/docs/src/main/paradox/mailboxes.md
index 7580813436..a705f79b64 100644
--- a/docs/src/main/paradox/mailboxes.md
+++ b/docs/src/main/paradox/mailboxes.md
@@ -8,17 +8,17 @@ For the full documentation of this feature and for new projects see @ref:[mailbo
 To use Mailboxes, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
-An Akka `Mailbox` holds the messages that are destined for an @apidoc[actor.Actor].
+A Pekko `Mailbox` holds the messages that are destined for an @apidoc[actor.Actor].
 Normally each `Actor` has its own mailbox, but with, for example, a @apidoc[BalancingPool]
 all routees will share a single mailbox instance.
 
@@ -98,7 +98,7 @@ dispatcher which will execute it. Then the mailbox is determined as follows:
 this refers to a configuration section describing the mailbox type.
  2. If the actor's @apidoc[Props](actor.Props) contains a mailbox selection then that names a configuration section describing the
 mailbox type to be used. This needs to be an absolute config path,
-for example `myapp.special-mailbox`, and is not nested inside the `akka` namespace.
+for example `myapp.special-mailbox`, and is not nested inside the `pekko` namespace (see the sketch after this list).
  3. If the dispatcher's configuration section contains a `mailbox-type` key
 the same section will be used to configure the mailbox type.
  4. If the actor requires a mailbox type as described above then the mapping for
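
As a sketch of rule 2 above, a mailbox section at an absolute config path outside the `pekko` namespace, selected via `Props` (all names illustrative):

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.{ Actor, ActorSystem, Props }

class MyActor extends Actor {
  def receive: Receive = { case msg => println(msg) }
}

// The mailbox section lives at `myapp.special-mailbox`, not under `pekko`.
val config = ConfigFactory.parseString("""
  myapp.special-mailbox {
    mailbox-type = "org.apache.pekko.dispatch.NonBlockingBoundedMailbox"
    mailbox-capacity = 1000
  }
""").withFallback(ConfigFactory.load())

val system = ActorSystem("mailbox-sketch", config)
// Props-level mailbox selection names that configuration section.
val ref = system.actorOf(Props[MyActor]().withMailbox("myapp.special-mailbox"))
```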
diff --git a/docs/src/main/paradox/multi-jvm-testing.md b/docs/src/main/paradox/multi-jvm-testing.md
index a60582eed8..8671cfbedd 100644
--- a/docs/src/main/paradox/multi-jvm-testing.md
+++ b/docs/src/main/paradox/multi-jvm-testing.md
@@ -1,5 +1,5 @@
 ---
-project.description: Multi JVM testing of distributed systems built with Akka.
+project.description: Multi JVM testing of distributed systems built with Pekko.
 ---
 
 # Multi JVM Testing
@@ -35,14 +35,14 @@ and not in `src/test/...`.
 The multi-JVM tasks are similar to the normal tasks: `test`, `testOnly`,
 and `run`, but are under the `multi-jvm` configuration.
 
-So in Akka, to run all the multi-JVM tests in the akka-remote project use (at
+So in Pekko, to run all the multi-JVM tests in the pekko-remote project, use (at
 the sbt prompt):
 
 ```none
 remote-tests/multi-jvm:test
 ```
 
-Or one can change to the `akka-remote-tests` project first, and then run the
+Or one can change to the `pekko-remote-tests` project first, and then run the
 tests:
 
 ```none
@@ -214,7 +214,7 @@ described in that section.
 
 ## Example project
 
-@extref[Cluster example project](samples:akka-samples-cluster-scala)
+@extref[Cluster example project](samples:pekko-samples-cluster-scala)
 is an example project that can be downloaded, with instructions on how to run it.
 
 This project illustrates Cluster features and also includes Multi JVM Testing with the `sbt-multi-jvm` plugin.
diff --git a/docs/src/main/paradox/multi-node-testing.md b/docs/src/main/paradox/multi-node-testing.md
index 678984377d..7b8b51634a 100644
--- a/docs/src/main/paradox/multi-node-testing.md
+++ b/docs/src/main/paradox/multi-node-testing.md
@@ -1,5 +1,5 @@
 ---
-project.description: Multi node testing of distributed systems built with Akka.
+project.description: Multi node testing of distributed systems built with Pekko.
 ---
 # Multi Node Testing
 
@@ -21,7 +21,7 @@ To use Multi Node Testing, you must add the following dependency in your project
 
 ## Multi Node Testing Concepts
 
-When we talk about multi node testing in Akka we mean the process of running coordinated tests on multiple actor
+When we talk about multi node testing in Pekko, we mean the process of running coordinated tests on multiple actor
 systems in different JVMs. The multi node testing kit consists of three main parts.
 
  * @ref:[The Test Conductor](#the-test-conductor), which coordinates and controls the nodes under test.
@@ -31,7 +31,7 @@ nodes connect to it.
 
 ## The Test Conductor
 
-The basis for the multi node testing is the @apidoc[TestConductor$]. It is an Akka Extension that plugs in to the
+The basis for the multi node testing is the @apidoc[TestConductor$]. It is a Pekko Extension that plugs into the
 network stack; it is used to coordinate the nodes participating in the test and provides several features,
 including:
 
@@ -42,7 +42,7 @@ test nodes)
 
 This is a schematic overview of the test conductor.
 
-![akka-remote-testconductor.png](./images/akka-remote-testconductor.png)
+![pekko-remote-testconductor.png](./images/pekko-remote-testconductor.png)
 
 The test conductor server is responsible for coordinating barriers and sending commands to the test conductor
 clients that act upon them, e.g. throttling network traffic to/from another client. More information on the
@@ -201,4 +201,4 @@ thread. This also means that you shouldn't use them from inside an actor, a futu
 ## Configuration
 
 There are several configuration properties for the Multi-Node Testing module; please refer
-to the @ref:[reference configuration](general/configuration-reference.md#config-akka-multi-node-testkit).
+to the @ref:[reference configuration](general/configuration-reference.md#config-pekko-multi-node-testkit).
diff --git a/docs/src/main/paradox/persistence-fsm.md b/docs/src/main/paradox/persistence-fsm.md
index ee0a1fff67..f3c2c0baab 100644
--- a/docs/src/main/paradox/persistence-fsm.md
+++ b/docs/src/main/paradox/persistence-fsm.md
@@ -4,20 +4,20 @@
 
 ## Dependency
 
-Persistent FSMs are part of Akka persistence, you must add the following dependency in your project:
+Persistent FSMs are part of Pekko persistence; you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-persistence_$scala.binary.version$"
+  artifact="pekko-persistence_$scala.binary.version$"
   version=PekkoVersion
 }
 
 @@@ warning
 
-Persistent FSM is no longer actively developed and will be replaced by @ref[Akka Persistence Typed](typed/persistence.md). It is not advised
+Persistent FSM is no longer actively developed and will be replaced by @ref[Pekko Persistence Typed](typed/persistence.md). It is not advised
 to build new applications with Persistent FSM. Existing users of Persistent FSM @ref[should migrate](#migration-to-eventsourcedbehavior). 
 
 @@@
diff --git a/docs/src/main/paradox/persistence-journals.md b/docs/src/main/paradox/persistence-journals.md
index e6260bd40c..b5c5977982 100644
--- a/docs/src/main/paradox/persistence-journals.md
+++ b/docs/src/main/paradox/persistence-journals.md
@@ -1,7 +1,7 @@
 # Persistence - Building a storage backend 
 
-Storage backends for journals and snapshot stores are pluggable in the Akka persistence extension.
-A directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see [Community plugins](https://akka.io/community/)
+Storage backends for journals and snapshot stores are pluggable in the Pekko persistence extension.
+A directory of persistence journal and snapshot store plugins is available at the Pekko Community Projects page, see [Community plugins](https://akka.io/community/).
 This documentation describes how to build a new storage backend.
 
 Applications can provide their own plugins by implementing a plugin API and activating them by configuration.
diff --git a/docs/src/main/paradox/persistence-plugins.md b/docs/src/main/paradox/persistence-plugins.md
index 222003d40d..45ad1af35a 100644
--- a/docs/src/main/paradox/persistence-plugins.md
+++ b/docs/src/main/paradox/persistence-plugins.md
@@ -1,15 +1,15 @@
 # Persistence Plugins 
 
-Storage backends for journals and snapshot stores are pluggable in the Akka persistence extension.
+Storage backends for journals and snapshot stores are pluggable in the Pekko persistence extension.
 
-A directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see [Community plugins](https://akka.io/community/)
+A directory of persistence journal and snapshot store plugins is available at the Pekko Community Projects page, see [Community plugins](https://akka.io/community/).
 
-Plugins maintained within the Akka organization are:
+Plugins maintained within the Pekko organization are:
 
-* [akka-persistence-cassandra](https://doc.akka.io/docs/akka-persistence-cassandra/current/) (no Durable State support)
-* [akka-persistence-jdbc](https://doc.akka.io/docs/akka-persistence-jdbc/current/)
-* [akka-persistence-r2dbc](https://doc.akka.io/docs/akka-persistence-r2dbc/current/)
-* [akka-persistence-spanner](https://doc.akka.io/docs/akka-persistence-spanner/current/)
+* [pekko-persistence-cassandra](https://doc.akka.io/docs/akka-persistence-cassandra/current/) (no Durable State support)
+* [pekko-persistence-jdbc](https://doc.akka.io/docs/akka-persistence-jdbc/current/)
+* [pekko-persistence-r2dbc](https://doc.akka.io/docs/akka-persistence-r2dbc/current/)
+* [pekko-persistence-spanner](https://doc.akka.io/docs/akka-persistence-spanner/current/)
 
 Plugins can be selected either by "default" for all persistent actors,
 or "individually", when a persistent actor defines its own set of plugins.
@@ -27,7 +27,7 @@ However, these entries are provided as empty "", and require explicit user confi
 
 * For an example of a journal plugin which writes messages to LevelDB see @ref:[Local LevelDB journal](#local-leveldb-journal).
 * For an example of a snapshot store plugin which writes snapshots as individual files to the local filesystem see @ref:[Local snapshot store](#local-snapshot-store).
-* The state store is relatively new, one available implementation is the [akka-persistence-jdbc-plugin](https://doc.akka.io/docs/akka-persistence-jdbc/current/).
+* The state store is relatively new, one available implementation is the [pekko-persistence-jdbc-plugin](https://doc.akka.io/docs/akka-persistence-jdbc/current/).
 
 ## Eager initialization of persistence plugin
 
@@ -62,19 +62,19 @@ pekko {
 
 ## Pre-packaged plugins
 
-The Akka Persistence module comes with few built-in persistence plugins, but none of these are suitable
-for production usage in an Akka Cluster. 
+The Pekko Persistence module comes with a few built-in persistence plugins, but none of these are suitable
+for production usage in a Pekko Cluster. 
 
 ### Local LevelDB journal
 
 This plugin writes events to a local LevelDB instance.
 
 @@@ warning
-The LevelDB plugin cannot be used in an Akka Cluster since the storage is in a local file system.
+The LevelDB plugin cannot be used in a Pekko Cluster since the storage is in a local file system.
 @@@
 
 The LevelDB journal is deprecated and it is not advised to build new applications with it.
-As a replacement we recommend using [Akka Persistence JDBC](https://doc.akka.io/docs/akka-persistence-jdbc/current/index.html).
+As a replacement we recommend using [Pekko Persistence JDBC](https://doc.akka.io/docs/akka-persistence-jdbc/current/index.html).
 
 The LevelDB journal plugin config entry is `pekko.persistence.journal.leveldb`. Enable this plugin by
 defining config property:
@@ -105,7 +105,7 @@ this end, LevelDB offers a special journal compaction function that is exposed v
 
 ### Shared LevelDB journal
 
-The LevelDB journal is deprecated and will be removed from a future Akka version, it is not advised to build new 
+The LevelDB journal is deprecated and will be removed in a future Pekko version; it is not advised to build new 
 applications with it. For testing in a multi-node environment the "inmem" journal together with the @ref[proxy plugin](#persistence-plugin-proxy) can be used, but the actual journal that is used in production is also a good choice.
 
 @@@ note
@@ -147,7 +147,7 @@ i.e. only the first injection is used.
 This plugin writes snapshot files to the local filesystem.
 
 @@@ warning
-The local snapshot store plugin cannot be used in an Akka Cluster since the storage is in a local file system.
+The local snapshot store plugin cannot be used in a Pekko Cluster since the storage is in a local file system.
 @@@
 
 The local snapshot store plugin config entry is `pekko.persistence.snapshot-store.local`. 
@@ -186,9 +186,9 @@ and `target-snapshot-store-address` configuration keys, or programmatically by c
 `PersistencePluginProxy.setTargetLocation` method.
 
 @@@ note
-Akka starts extensions lazily when they are required, and this includes the proxy. This means that in order for the
+Pekko starts extensions lazily when they are required, and this includes the proxy. This means that in order for the
 proxy to work, the persistence plugin on the target node must be instantiated. This can be done by instantiating the
-`PersistencePluginProxyExtension` @ref:[extension](extending-akka.md), or by calling the `PersistencePluginProxy.start` method.
+`PersistencePluginProxyExtension` @ref:[extension](extending-pekko.md), or by calling the `PersistencePluginProxy.start` method.
 @@@
 
 @@@ note
diff --git a/docs/src/main/paradox/persistence-query-leveldb.md b/docs/src/main/paradox/persistence-query-leveldb.md
index ed8ae41d7d..8510b51c18 100644
--- a/docs/src/main/paradox/persistence-query-leveldb.md
+++ b/docs/src/main/paradox/persistence-query-leveldb.md
@@ -1,14 +1,14 @@
 # Persistence Query for LevelDB
 
 The LevelDB journal and query plugin is deprecated and it is not advised to build new applications with it.
-As a replacement we recommend using [Akka Persistence JDBC](https://doc.akka.io/docs/akka-persistence-jdbc/current/index.html).
+As a replacement we recommend using [Pekko Persistence JDBC](https://doc.akka.io/docs/akka-persistence-jdbc/current/index.html).
 
 ## Dependency
 
 To use Persistence Query, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -16,7 +16,7 @@ To use Persistence Query, you must add the following dependency in your project:
   version=PekkoVersion
 }
 
-This will also add dependency on the @ref[akka-persistence](persistence.md) module.
+This will also add a dependency on the @ref[pekko-persistence](persistence.md) module.
 
 ## Introduction
 
diff --git a/docs/src/main/paradox/persistence-query.md b/docs/src/main/paradox/persistence-query.md
index 4505d3c7f3..3c6fae44c3 100644
--- a/docs/src/main/paradox/persistence-query.md
+++ b/docs/src/main/paradox/persistence-query.md
@@ -1,5 +1,5 @@
 ---
-project.description: Query side to Akka Persistence allowing for building CQRS applications.
+project.description: Query side to Pekko Persistence allowing for building CQRS applications.
 ---
 # Persistence Query
 
@@ -8,7 +8,7 @@ project.description: Query side to Akka Persistence allowing for building CQRS a
 To use Persistence Query, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -16,16 +16,16 @@ To use Persistence Query, you must add the following dependency in your project:
   version=PekkoVersion
 }
 
-This will also add dependency on the @ref[Akka Persistence](persistence.md) module.
+This will also add a dependency on the @ref[Pekko Persistence](persistence.md) module.
 
 ## Introduction
 
-Akka persistence query complements @ref:[Event Sourcing](typed/persistence.md) by providing a universal asynchronous stream based
+Pekko persistence query complements @ref:[Event Sourcing](typed/persistence.md) by providing a universal asynchronous stream-based
 query interface that various journal plugins can implement in order to expose their query capabilities.
 
 The most typical use case of persistence query is implementing the so-called query side (also known as "read side")
-in the popular CQRS architecture pattern - in which the writing side of the application (e.g. implemented using Akka
-persistence) is completely separated from the "query side". Akka Persistence Query itself is *not* directly the query
+in the popular CQRS architecture pattern - in which the writing side of the application (e.g. implemented using Pekko
+persistence) is completely separated from the "query side". Pekko Persistence Query itself is *not* directly the query
 side of an application; however, it can help to migrate data from the write side to the query-side database. In very
 simple scenarios Persistence Query may be powerful enough to fulfill the query needs of your app; however, we highly
 recommend (in the spirit of CQRS) splitting up the write/read sides into separate datastores as the need arises.
@@ -33,12 +33,12 @@ recommend (in the spirit of CQRS) of splitting up the write/read sides into sepa
 For a similar implementation of the query interface for @ref:[Durable State Behaviors](typed/durable-state/persistence.md)
 please refer to @ref:[Persistence Query using Durable State](durable-state/persistence-query.md).
 
-The @extref[Microservices with Akka tutorial](platform-guide:microservices-tutorial/) explains how to
-implement an Event Sourced CQRS application with Akka Persistence and Akka Projections.
+The @extref[Microservices with Pekko tutorial](platform-guide:microservices-tutorial/) explains how to
+implement an Event Sourced CQRS application with Pekko Persistence and Pekko Projections.
 
 ## Design overview
 
-Akka persistence query is purposely designed to be a very loosely specified API.
+Pekko persistence query is purposely designed to be a very loosely specified API.
 This is in order to keep the provided APIs general enough for each journal implementation to be able to expose its best
 features, e.g. a SQL journal can use complex SQL queries, and if a journal is able to subscribe to a live event stream
 it should also be possible to expose that through the same API - a typed stream of events.
@@ -46,7 +46,7 @@ this should also be possible to expose the same API - a typed stream of events.
 **Each read journal must explicitly document which types of queries it supports.**
 Refer to your journal plugin's documentation for details on which queries and semantics it supports.
 
-While Akka Persistence Query does not provide actual implementations of ReadJournals, it defines a number of pre-defined
+While Pekko Persistence Query does not provide actual implementations of ReadJournals, it defines a number of pre-defined
 query types for the most common query scenarios that most journals are likely to implement (however they are not required to).
 
 ## Read Journals
@@ -69,7 +69,7 @@ Read journal implementations are available as [Community plugins](https://akka.i
 
 ### Predefined queries
 
-Akka persistence query comes with a number of query interfaces built in and suggests Journal implementors to implement
+Pekko persistence query comes with a number of query interfaces built in and suggests Journal implementors to implement
 them according to the semantics described below. It is important to notice that while these query types are very common
 a journal is not obliged to implement all of them - for example because in a given journal such query would be
 significantly inefficient.
@@ -224,7 +224,7 @@ projected into the other read-optimised datastore.
 
 @@@ note
 
-When referring to **Materialized Views** in Akka Persistence think of it as "some persistent storage of the result of a Query".
+When referring to **Materialized Views** in Pekko Persistence, think of them as "some persistent storage of the result of a Query".
 In other words, the view is created once so that it can afterwards be queried multiple times; in this form
 it may be more efficient or interesting to query it (instead of the source events directly).
 
@@ -267,14 +267,14 @@ Java
 Sometimes you may need to use "resumable" projections, which will not start from the beginning of time each time
 they run. In such a case, the sequence number (or `offset`) of the processed event will be stored and
 used the next time this projection is started. This pattern is implemented in the
-[Akka Projections](https://doc.akka.io/docs/akka-projection/current/) module.
+[Pekko Projections](https://doc.akka.io/docs/akka-projection/current/) module.
 
 
 <a id="read-journal-plugin-api"></a>
 ## Query plugins
 
 Query plugins are various (mostly community driven) @apidoc[query.*.ReadJournal] implementations for all kinds
-of available datastores. The complete list of available plugins is maintained on the Akka Persistence Query [Community Plugins](https://akka.io/community/#plugins-to-akka-persistence-query) page.
+of available datastores. The complete list of available plugins is maintained on the Pekko Persistence Query [Community Plugins](https://akka.io/community/#plugins-to-akka-persistence-query) page.
 
 This section aims to provide tips and guide plugin developers through implementing a custom query plugin.
 Most users will not need to implement journals themselves, except when targeting a datastore that is not yet supported.
@@ -338,6 +338,6 @@ shard events over a cluster.
 
 ## Example project
 
-The @extref[Microservices with Akka tutorial](platform-guide:microservices-tutorial/) explains how to
+The @extref[Microservices with Pekko tutorial](platform-guide:microservices-tutorial/) explains how to
 use Event Sourcing and Projections together. The events are tagged to be consumed by event processors to build
 other representations from the events, or publish the events to other services.
diff --git a/docs/src/main/paradox/persistence-schema-evolution.md b/docs/src/main/paradox/persistence-schema-evolution.md
index 169f5d773d..1151e430da 100644
--- a/docs/src/main/paradox/persistence-schema-evolution.md
+++ b/docs/src/main/paradox/persistence-schema-evolution.md
@@ -2,17 +2,17 @@
 
 ## Dependency
 
-This documentation page touches upon @ref[Akka Persistence](persistence.md), so to follow those examples you will want to depend on:
+This documentation page touches upon @ref[Pekko Persistence](persistence.md), so to follow those examples you will want to depend on:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-persistence_$scala.binary.version$"
+  artifact="pekko-persistence_$scala.binary.version$"
   version=PekkoVersion
   group2="org.apache.pekko"
-  artifact2="akka-persistence-testkit_$scala.binary.version$"
+  artifact2="pekko-persistence-testkit_$scala.binary.version$"
   version2=PekkoVersion
   scope2=test
 }
@@ -33,7 +33,7 @@ choose the ones that match your domain and challenge at hand.
 @@@ note
 
 This page proposes a number of possible solutions to the schema evolution problem and explains how some of the
-utilities Akka provides can be used to achieve this, it is by no means a complete (closed) set of solutions.
+utilities Pekko provides can be used to achieve this; it is by no means a complete (closed) set of solutions.
 
 Sometimes, based on the capabilities of your serialization formats, you may be able to evolve your schema in
 different ways than outlined in the sections below. If you discover useful patterns or techniques for schema
@@ -113,19 +113,19 @@ by Martin Kleppmann.
 
 ### Provided default serializers
 
-Akka Persistence provides [Google Protocol Buffers](https://developers.google.com/protocol-buffers/) based serializers (using @ref:[Akka Serialization](serialization.md))
+Pekko Persistence provides [Google Protocol Buffers](https://developers.google.com/protocol-buffers/) based serializers (using @ref:[Pekko Serialization](serialization.md))
 for its own message types such as @apidoc[PersistentRepr], @apidoc[AtomicWrite] and snapshots. Journal plugin implementations
 *may* choose to use those provided serializers, or pick a serializer which suits the underlying database better.
 
 @@@ note
 
-Serialization is **NOT** handled automatically by Akka Persistence itself. Instead, it only provides the above described
+Serialization is **NOT** handled automatically by Pekko Persistence itself. Instead, it only provides the above-described
 serializers, and in case a @scala[@scaladoc[AsyncWriteJournal](pekko.persistence.journal.AsyncWriteJournal)]@java[@javadoc[AsyncWriteJournal](pekko.persistence.journal.japi.AsyncWriteJournal)] plugin implementation chooses to use them directly, the above serialization
 scheme will be used.
 
 Please refer to your write journal's documentation to learn more about how it handles serialization!
 
-For example, some journals may choose to not use Akka Serialization *at all* and instead store the data in a format
+For example, some journals may choose to not use Pekko Serialization *at all* and instead store the data in a format
 that is more "native" for the underlying datastore, e.g. using JSON or some other kind of format that the target
 datastore understands directly.
 
@@ -136,7 +136,7 @@ user provided message itself, which we will from here on refer to as the `payloa
 
 ![persistent-message-envelope.png](./images/persistent-message-envelope.png)
 
-Akka Persistence provided serializers wrap the user payload in an envelope containing all persistence-relevant information.
+Pekko Persistence provided serializers wrap the user payload in an envelope containing all persistence-relevant information.
 **If the Journal uses provided Protobuf serializers for the wrapper types (e.g. PersistentRepr), then the payload will
 be serialized using the user configured serializer, and if none is provided explicitly, Java serialization will be used for it.**
 
@@ -163,13 +163,13 @@ scenarios).
 
 ### Configuring payload serializers
 
-This section aims to highlight the complete basics on how to define custom serializers using @ref:[Akka Serialization](serialization.md).
-Many journal plugin implementations use Akka Serialization, thus it is tremendously important to understand how to configure
+This section aims to cover the basics of how to define custom serializers using @ref:[Pekko Serialization](serialization.md).
+Many journal plugin implementations use Pekko Serialization; thus it is important to understand how to configure
 it to work with your event classes.
 
 @@@ note
 
-Read the @ref:[Akka Serialization](serialization.md) docs to learn more about defining custom serializers.
+Read the @ref:[Pekko Serialization](serialization.md) docs to learn more about defining custom serializers.
 
 @@@
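
As a minimal sketch of such a custom payload serializer (class, event, and manifest names are illustrative):

```scala
import org.apache.pekko.serialization.SerializerWithStringManifest

// Illustrative event class for the sketch.
final case class SeatReserved(letter: String, row: Int)

class FlightSerializer extends SerializerWithStringManifest {
  val SeatReservedManifest = "SR"

  // Must be unique among all serializers in the ActorSystem.
  override def identifier: Int = 100001

  override def manifest(o: AnyRef): String = o match {
    case _: SeatReserved => SeatReservedManifest
  }

  override def toBinary(o: AnyRef): Array[Byte] = o match {
    case SeatReserved(letter, row) => s"$letter:$row".getBytes("UTF-8")
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
    manifest match {
      case SeatReservedManifest =>
        val Array(letter, row) = new String(bytes, "UTF-8").split(":")
        SeatReserved(letter, row.toInt)
    }
}
```

It would then be registered under `pekko.actor.serializers` and bound to the event class via `pekko.actor.serialization-bindings`.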
 
@@ -199,7 +199,7 @@ And finally we register the serializer and bind it to handle the `docs.persisten
 Deserialization will be performed by the same serializer which serialized the message initially
 because of the `identifier` being stored together with the message.
 
-Please refer to the @ref:[Akka Serialization](serialization.md) documentation for more advanced use of serializers,
+Please refer to the @ref:[Pekko Serialization](serialization.md) documentation for more advanced use of serializers,
 especially the @ref:[Serializer with String Manifest](serialization.md#string-manifest-serializer) section, since it is very useful for Persistence-based applications
 dealing with schema evolution, as we will see in some of the examples below.
 
@@ -242,7 +242,7 @@ Java
 :  @@snip [PersistenceSchemaEvolutionDocTest.java](/docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #protobuf-read-optional-model }
 
 Next we prepare a protocol definition using the protobuf Interface Description Language, which we'll use to generate
-the serializer code to be used on the Akka Serialization layer (notice that the schema approach allows us to rename
+the serializer code to be used on the Pekko Serialization layer (notice that the schema approach allows us to rename
 fields, as long as the numeric identifiers of the fields do not change):
 
 @@snip [FlightAppModels.proto](/docs/src/test/../main/protobuf/FlightAppModels.proto) { #protobuf-read-optional-proto }
@@ -465,7 +465,7 @@ Java
 
 @@@ note
 
-This technique only applies if the Akka Persistence plugin you are using provides this capability.
+This technique only applies if the Pekko Persistence plugin you are using provides this capability.
 Check the documentation of your favourite plugin to see if it supports this style of persistence.
 
 If it doesn't, you may want to skim the [list of existing journal plugins](https://akka.io/community/#journal-plugins), just in case some other plugin
diff --git a/docs/src/main/paradox/persistence.md b/docs/src/main/paradox/persistence.md
index 1da130bddc..b816210e12 100644
--- a/docs/src/main/paradox/persistence.md
+++ b/docs/src/main/paradox/persistence.md
@@ -1,5 +1,5 @@
 ---
-project.description: Akka Persistence Classic, Event Sourcing with Akka, At-Least-Once delivery, snapshots, recovery and replay with Akka actors.
+project.description: Pekko Persistence Classic, Event Sourcing with Pekko, At-Least-Once delivery, snapshots, recovery and replay with Pekko actors.
 ---
 # Classic Persistence
 
@@ -8,7 +8,7 @@ For the full documentation of this feature and for new projects see @ref:[Event
 
 ## Module info
 
-To use Akka Persistence, you must add the following dependency in your project:
+To use Pekko Persistence, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
@@ -32,7 +32,7 @@ You also have to select journal plugin and optionally snapshot store plugin, see
 
 See introduction in @ref:[Persistence](typed/persistence.md#introduction) 
 
-Akka Persistence also provides point-to-point communication with at-least-once message delivery semantics.
+Pekko Persistence also provides point-to-point communication with at-least-once message delivery semantics.
 
 ### Architecture
 
@@ -49,12 +49,12 @@ Replicated journals are available as [Community plugins](https://akka.io/communi
  * *Snapshot store*: A snapshot store persists snapshots of a persistent actor's state. Snapshots are
 used for optimizing recovery times. The storage backend of a snapshot store is pluggable.
 The persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem. Replicated snapshot stores are available as [Community plugins](https://akka.io/community/)
- * *Event Sourcing*. Based on the building blocks described above, Akka persistence provides abstractions for the
+ * *Event Sourcing*. Based on the building blocks described above, Pekko persistence provides abstractions for the
 development of event sourced applications (see section @ref:[Event Sourcing](typed/persistence.md#event-sourcing-concepts)).
 
 ## Example
 
-Akka persistence supports Event Sourcing with the @scala[@scaladoc[PersistentActor](pekko.persistence.PersistentActor) trait]@java[@javadoc[AbstractPersistentActor](pekko.persistence.AbstractPersistentActor) abstract class]. An actor that extends this @scala[trait]@java[class] uses the
+Pekko persistence supports Event Sourcing with the @scala[@scaladoc[PersistentActor](pekko.persistence.PersistentActor) trait]@java[@javadoc[AbstractPersistentActor](pekko.persistence.AbstractPersistentActor) abstract class]. An actor that extends this @scala[trait]@java[class] uses the
 @scala[@scaladoc[persist](pekko.persistence.PersistentActor#persist[A](event:A)(handler:A=%3EUnit):Unit)]@java[@javadoc[persist](pekko.persistence.AbstractPersistentActorLike#persist(A,org.apache.pekko.japi.Procedure))] method to persist and handle events. The behavior of @scala[a `PersistentActor`]@java[an `AbstractPersistentActor`]
 is defined by implementing @scala[@scaladoc[receiveRecover](pekko.persistence.PersistentActor#receiveRecover:Eventsourced.this.Receive)]@java[@javadoc[createReceiveRecover](pekko.persistence.AbstractPersistentActorLike#createReceiveRecover())] and @scala[@scaladoc[receiveCommand](pekko.persistence.PersistentActor#receiveCommand:Eventsourced.this.Receive)]@java[@javadoc[createReceive](pekko.persistence.AbstractPersistentActorLike#createReceive())]. This is demonstrated in the following example.
 
@@ -241,7 +241,7 @@ pekko.persistence.internal-stash-overflow-strategy=
 The `DiscardToDeadLetterStrategy` strategy also has a pre-packaged companion configurator
 @apidoc[persistence.DiscardConfigurator].
 
-You can also query the default strategy via the Akka persistence extension singleton:    
+You can also query the default strategy via the Pekko persistence extension singleton:    
 
 Scala
 :   @@@vars
@@ -505,7 +505,7 @@ restarts of the persistent actor.
 
 Journal implementations may choose to implement a retry mechanism, e.g. such that only after a write fails N number
 of times a persistence failure is signalled back to the user. In other words, once a journal returns a failure,
-it is considered *fatal* by Akka Persistence, and the persistent actor which caused the failure will be stopped.
+it is considered *fatal* by Pekko Persistence, and the persistent actor which caused the failure will be stopped.
 
 Check the documentation of the journal implementation you are using for details if/how it is using this technique.
 
@@ -517,7 +517,7 @@ Check the documentation of the journal implementation you are using for details
 Special care should be given when shutting down persistent actors from the outside.
 With normal Actors it is often acceptable to use the special @ref:[PoisonPill](actors.md#poison-pill) message
 to signal to an Actor that it should stop itself once it receives this message – in fact this message is handled
-automatically by Akka, leaving the target actor no way to refuse stopping itself when given a poison pill.
+automatically by Pekko, leaving the target actor no way to refuse stopping itself when given a poison pill.
 
 This can be dangerous when used with `PersistentActor` due to the fact that incoming commands are *stashed* while
 the persistent actor is awaiting confirmation from the Journal that events have been written when @scala[@scaladoc[persist()](pekko.persistence.PersistentActor#persist[A](event:A)(handler:A=%3EUnit):Unit)]@java[@javadoc[persist()](pekko.persistence.AbstractPersistentActorLike#persist(A,org.apache.pekko.japi.Procedure))] was used.
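
A safer alternative is a dedicated shutdown command that the persistent actor handles itself; a minimal sketch, with made-up message and actor names:

```scala
import org.apache.pekko.persistence.PersistentActor

case object Shutdown

class SafeShutdownActor extends PersistentActor {
  override def persistenceId: String = "safe-shutdown-sample"

  override def receiveCommand: Receive = {
    case Shutdown =>
      // handled like any other command: it is stashed while persist() is in
      // flight and processed only afterwards, unlike PoisonPill
      context.stop(self)
    case cmd: String =>
      persist(s"evt-$cmd") { _ => /* update state, reply, etc. */ }
  }

  override def receiveRecover: Receive = {
    case _ => // rebuild state from replayed events
  }
}
```
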
@@ -608,7 +608,7 @@ In order to use snapshots, a default snapshot-store (`pekko.persistence.snapshot
 or the @scala[`PersistentActor`]@java[persistent actor] can pick a snapshot store explicitly by overriding @scala[`def snapshotPluginId: String`]@java[`String snapshotPluginId()`].
 
 Because some use cases may not benefit from or need snapshots, it is perfectly valid not to configure a snapshot store.
-However, Akka will log a warning message when this situation is detected and then continue to operate until
+However, Pekko will log a warning message when this situation is detected and then continue to operate until
 an actor tries to store a snapshot, at which point the operation will fail (by replying with an @apidoc[persistence.SaveSnapshotFailure] for example).
 
 Note that the "persistence mode" of @ref:[Cluster Sharding](cluster-sharding.md) makes use of snapshots. If you use that mode, you'll need to define a snapshot store plugin.
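
For illustration, a minimal sketch enabling the bundled local snapshot store (the target directory is an assumption):

```
pekko.persistence.snapshot-store.plugin = "pekko.persistence.snapshot-store.local"
pekko.persistence.snapshot-store.local.dir = "target/snapshots"
```
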
@@ -737,7 +737,7 @@ if no matching `confirmDelivery` will have been performed.
 Support for snapshots is provided by @apidoc[getDeliverySnapshot](persistence.AtLeastOnceDeliveryLike) {scala="#getDeliverySnapshot:org.apache.pekko.persistence.AtLeastOnceDelivery.AtLeastOnceDeliverySnapshot" java="#getDeliverySnapshot()"} and @apidoc[setDeliverySnapshot](persistence.AtLeastOnceDeliveryLike) {scala="#setDeliverySnapshot(snapshot:org.apache.pekko.persistence.AtLeastOnceDelivery.AtLeastOnceDeliverySnapshot):Unit" java="#setDeliverySnapshot(org.apache.pekko.persistence.AtL [...]
 The @apidoc[persistence.AtLeastOnceDelivery.AtLeastOnceDeliverySnapshot] contains the full delivery state, including unconfirmed messages.
 If you need a custom snapshot for other parts of the actor state you must also include the
-`AtLeastOnceDeliverySnapshot`. It is serialized using protobuf with the ordinary Akka
+`AtLeastOnceDeliverySnapshot`. It is serialized using protobuf with the ordinary Pekko
 serialization mechanism. It is easiest to include the bytes of the `AtLeastOnceDeliverySnapshot`
 as a blob in your custom snapshot.
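
A rough sketch of wiring this together (class names are made up; as noted, storing the serialized bytes as a blob is the easiest route, while this sketch embeds the snapshot object directly for brevity):

```scala
import org.apache.pekko.persistence.{ AtLeastOnceDelivery, PersistentActor, SnapshotOffer }

final case class FullSnapshot(
    delivery: AtLeastOnceDelivery.AtLeastOnceDeliverySnapshot,
    otherState: String)

class DeliveryActor extends PersistentActor with AtLeastOnceDelivery {
  override def persistenceId: String = "delivery-sample"
  private var otherState: String = ""

  override def receiveCommand: Receive = {
    case "snap" =>
      // capture unconfirmed deliveries together with the rest of the state
      saveSnapshot(FullSnapshot(getDeliverySnapshot, otherState))
  }

  override def receiveRecover: Receive = {
    case SnapshotOffer(_, snapshot: FullSnapshot) =>
      setDeliverySnapshot(snapshot.delivery)
      otherState = snapshot.otherState
  }
}
```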
 
@@ -812,7 +812,7 @@ For more advanced schema evolution techniques refer to the @ref:[Persistence - S
 
 ## Custom serialization
 
-Serialization of snapshots and payloads of `Persistent` messages is configurable with Akka's
+Serialization of snapshots and payloads of `Persistent` messages is configurable with Pekko's
 @ref:[Serialization](serialization.md) infrastructure. For example, if an application wants to serialize
 
  * payloads of type `MyPayload` with a custom `MyPayloadSerializer` and
@@ -828,7 +828,7 @@ For more advanced schema evolution techniques refer to the @ref:[Persistence - S
 
 ## Testing with LevelDB journal
 
-The LevelDB journal is deprecated and will be removed from a future Akka version, it is not advised to build new applications 
+The LevelDB journal is deprecated and will be removed in a future Pekko version; it is not advised to build new applications
 with it. For testing, the built-in "inmem" journal or the actual journal that will be used in production
 is recommended. See @ref[Persistence Plugins](persistence-plugins.md) for some journal implementation choices.
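
As a sketch, selecting the built-in in-memory journal for tests is a single setting:

```
pekko.persistence.journal.plugin = "pekko.persistence.journal.inmem"
```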
 
@@ -840,7 +840,7 @@ or
 
 @@snip [PersistencePluginDocSpec.scala](/docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-store-native-config }
 
-in your Akka configuration. Also note that for the LevelDB Java port, you will need the following dependencies:
+in your Pekko configuration. Also note that for the LevelDB Java port, you will need the following dependencies:
 
 @@dependency[sbt,Maven,Gradle] {
   group="org.iq80.leveldb"
@@ -862,7 +862,7 @@ When testing Persistence based projects always rely on @ref:[asynchronous messag
 ## Configuration
 
 There are several configuration properties for the persistence module, please refer
-to the @ref:[reference configuration](general/configuration-reference.md#config-akka-persistence).
+to the @ref:[reference configuration](general/configuration-reference.md#config-pekko-persistence).
 
 The @ref:[journal and snapshot store plugins](persistence-plugins.md) have specific configuration, see
 reference documentation of the chosen plugin.
diff --git a/docs/src/main/paradox/project/downstream-upgrade-strategy.md b/docs/src/main/paradox/project/downstream-upgrade-strategy.md
index 19d1e2489f..fdcf303d1e 100644
--- a/docs/src/main/paradox/project/downstream-upgrade-strategy.md
+++ b/docs/src/main/paradox/project/downstream-upgrade-strategy.md
@@ -3,41 +3,36 @@ project.description: Upgrade strategy for downstream libraries
 ---
 # Downstream upgrade strategy
 
-When a new Akka version is released, downstream projects (such as
-[Akka Management](https://doc.akka.io/docs/akka-management/current/),
-[Akka HTTP](https://doc.akka.io/docs/akka-http/current/) and
-[Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/))
+When a new Pekko version is released, downstream projects (such as
+[Pekko Management](https://doc.akka.io/docs/akka-management/current/),
+[Pekko HTTP](https://doc.akka.io/docs/akka-http/current/) and
+[Pekko gRPC](https://doc.akka.io/docs/akka-grpc/current/))
 do not need to update immediately: because of our
 @ref[binary compatibility](../common/binary-compatibility-rules.md) approach,
-applications can take advantage of the latest version of Akka without having to
+applications can take advantage of the latest version of Pekko without having to
 wait for intermediate libraries to update.
 
 ## Patch versions
 
-When releasing a new patch version of Akka (e.g. 2.5.22), we typically don't
-immediately bump the Akka version in satellite projects.
+When releasing a new patch version of Pekko (e.g. 2.5.22), we typically don't
+immediately bump the Pekko version in satellite projects.
 
 The reason is that this makes it lower-friction for users to update
-those satellite projects: say their project is on Akka 2.5.22 and
-Akka Management 1.0.0, and we release Akka Management 1.0.1 (still built with
-Akka 2.5.22) and Akka 2.5.23. They can safely update to Akka Management 1.0.1
-without also updating to Akka 2.5.23, or update to Akka 2.5.23 without updating
-to Akka Management 1.0.1.
+those satellite projects: say their project is on Pekko 2.5.22 and
+Pekko Management 1.0.0, and we release Pekko Management 1.0.1 (still built with
+Pekko 2.5.22) and Pekko 2.5.23. They can safely update to Pekko Management 1.0.1
+without also updating to Pekko 2.5.23, or update to Pekko 2.5.23 without updating
+to Pekko Management 1.0.1.
 
-When there is reason for a satellite project to upgrade the Akka patch
+When there is reason for a satellite project to upgrade the Pekko patch
 version, they are free to do so at any time.
 
 ## Minor versions
 
-When releasing a new minor version of Akka (e.g. 2.6.0), satellite projects are
+When releasing a new minor version of Pekko (e.g. 2.6.0), satellite projects are
 also usually not updated immediately, but as needed.
 
-When a satellite project does update to a new minor version of Akka, it will
+When a satellite project does update to a new minor version of Pekko, it will
 also increase its own minor version. The previous stable branch will enter the
-usual end-of-support lifecycle for Lightbend customers, and only important
+usual end-of-support lifecycle, and only important
 bugfixes will be backported to the previous version and released.
-
-For example, when Akka 2.5.0 was released, Akka HTTP 10.0.x continued to depend
-on Akka 2.4. When it was time to update Akka HTTP to Akka 2.5, 10.1.0 was
-created, but 10.0.x was maintained for backward compatibility for a period of
-time according to Lightbend's support policy.
diff --git a/docs/src/main/paradox/project/examples.md b/docs/src/main/paradox/project/examples.md
index ac43d4ae3a..2cc7ac35fd 100644
--- a/docs/src/main/paradox/project/examples.md
+++ b/docs/src/main/paradox/project/examples.md
@@ -13,15 +13,15 @@ messages as well as how to use the test module and logging.
 
 ## FSM
 
-@java[@extref[FSM example project](samples:akka-samples-fsm-java)]
-@scala[@extref[FSM example project](samples:akka-samples-fsm-scala)]
+@java[@extref[FSM example project](samples:pekko-samples-fsm-java)]
+@scala[@extref[FSM example project](samples:pekko-samples-fsm-scala)]
 
 This project contains a Dining Hakkers sample illustrating how to model a Finite State Machine (FSM) with actors.
 
 ## Cluster
 
-@java[@extref[Cluster example project](samples:akka-samples-cluster-java)]
-@scala[@extref[Cluster example project](samples:akka-samples-cluster-scala)]
+@java[@extref[Cluster example project](samples:pekko-samples-cluster-java)]
+@scala[@extref[Cluster example project](samples:pekko-samples-cluster-scala)]
 
 This project contains samples illustrating different Cluster features, such as
 subscribing to cluster membership events, and sending messages to actors running on nodes in the cluster
@@ -31,62 +31,62 @@ It also includes Multi JVM Testing with the `sbt-multi-jvm` plugin.
 
 ## Distributed Data
 
-@java[@extref[Distributed Data example project](samples:akka-samples-distributed-data-java)]
-@scala[@extref[Distributed Data example project](samples:akka-samples-distributed-data-scala)]
+@java[@extref[Distributed Data example project](samples:pekko-samples-distributed-data-java)]
+@scala[@extref[Distributed Data example project](samples:pekko-samples-distributed-data-scala)]
 
 This project contains several samples illustrating how to use Distributed Data.
 
 ## Cluster Sharding
 
-@java[@extref[Sharding example project](samples:akka-samples-cluster-sharding-java)]
-@scala[@extref[Sharding example project](samples:akka-samples-cluster-sharding-scala)]
+@java[@extref[Sharding example project](samples:pekko-samples-cluster-sharding-java)]
+@scala[@extref[Sharding example project](samples:pekko-samples-cluster-sharding-scala)]
 
 This project contains a KillrWeather sample illustrating how to use Cluster Sharding.
 
 ## Persistence
 
-@java[@extref[Persistence example project](samples:akka-samples-persistence-java)]
-@scala[@extref[Persistence example project](samples:akka-samples-persistence-scala)]
+@java[@extref[Persistence example project](samples:pekko-samples-persistence-java)]
+@scala[@extref[Persistence example project](samples:pekko-samples-persistence-scala)]
 
-This project contains a Shopping Cart sample illustrating how to use Akka Persistence.
+This project contains a Shopping Cart sample illustrating how to use Pekko Persistence.
 
 ## CQRS
 
-The @extref[Microservices with Akka tutorial](platform-guide:microservices-tutorial/) contains a
+The @extref[Microservices with Pekko tutorial](platform-guide:microservices-tutorial/) contains a
 Shopping Cart sample illustrating how to use Event Sourcing and Projections together. The events are
 tagged to be consumed by event processors to build other representations from the events, or publish the events
 to other services.
 
 ## Replicated Event Sourcing
 
-@java[@extref[Multi-DC Persistence example project](samples:akka-samples-persistence-dc-java)]
-@scala[@extref[Multi-DC Persistence example project](samples:akka-samples-persistence-dc-scala)]
+@java[@extref[Multi-DC Persistence example project](samples:pekko-samples-persistence-dc-java)]
+@scala[@extref[Multi-DC Persistence example project](samples:pekko-samples-persistence-dc-scala)]
 
 Illustrates how to use @ref:[Replicated Event Sourcing](../typed/replicated-eventsourcing.md) that supports
 active-active persistent entities across data centers.
 
 ## Cluster with Docker
 
-@java[@extref[Cluster with docker-compse example project](samples:akka-sample-cluster-docker-compose-java)]
-@scala[@extref[Cluster with docker-compose example project](samples:akka-sample-cluster-docker-compose-scala)]
+@java[@extref[Cluster with docker-compose example project](samples:pekko-sample-cluster-docker-compose-java)]
+@scala[@extref[Cluster with docker-compose example project](samples:pekko-sample-cluster-docker-compose-scala)]
 
-Illustrates how to use Akka Cluster with Docker compose.
+Illustrates how to use Pekko Cluster with Docker Compose.
 
 ## Cluster with Kubernetes
 
-@extref[Cluster with Kubernetes example project](samples:akka-sample-cluster-kubernetes-java)
+@extref[Cluster with Kubernetes example project](samples:pekko-sample-cluster-kubernetes-java)
 
-This sample illustrates how to form an Akka Cluster with Akka Bootstrap when running in Kubernetes.
+This sample illustrates how to form a Pekko Cluster with Pekko Bootstrap when running in Kubernetes.
 
 ## Distributed workers
 
-@extref[Distributed workers example project](samples:akka-samples-distributed-workers-scala)
+@extref[Distributed workers example project](samples:pekko-samples-distributed-workers-scala)
 
-This project demonstrates the work pulling pattern using Akka Cluster.
+This project demonstrates the work pulling pattern using Pekko Cluster.
 
 ## Kafka to Cluster Sharding 
 
-@extref[Kafka to Cluster Sharding example project](samples:akka-samples-kafka-to-sharding)
+@extref[Kafka to Cluster Sharding example project](samples:pekko-samples-kafka-to-sharding)
 
 This project demonstrates how to use the External Shard Allocation strategy to co-locate the consumption of Kafka
 partitions with the shard that processes the messages.
diff --git a/docs/src/main/paradox/project/immutable.md b/docs/src/main/paradox/project/immutable.md
index 1e388cdc52..510a862e69 100644
--- a/docs/src/main/paradox/project/immutable.md
+++ b/docs/src/main/paradox/project/immutable.md
@@ -3,7 +3,7 @@ project.description: Data immutability using Project Lombok
 ---
 # Immutability using Lombok
 
-A preferred best practice in Akka is to have immutable messages. Scala provides case class which makes it extremely easy
+A preferred best practice in Pekko is to have immutable messages. Scala provides case classes, which make it extremely easy
 to have short and clean classes for creating immutable objects, but no such facility is easily available in Java. We can make use
 of several third party libraries which help in achieving this. One good example is Lombok.
 
diff --git a/docs/src/main/paradox/project/licenses.md b/docs/src/main/paradox/project/licenses.md
index d13b19cb7c..0c3cf2d7bd 100644
--- a/docs/src/main/paradox/project/licenses.md
+++ b/docs/src/main/paradox/project/licenses.md
@@ -1,31 +1,5 @@
 # Licenses
 
-## Akka License
+[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
 
-```
-This software is licensed under the Apache 2 license, quoted below.
-
-Copyright 2009-2018 Lightbend Inc. <https://www.lightbend.com>
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may not
-use this file except in compliance with the License. You may obtain a copy of
-the License at
-
-    https://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-License for the specific language governing permissions and limitations under
-the License.
-```
-
-## Akka Committer License Agreement
-
-All committers have signed this [CLA](https://www.lightbend.com/contribute/cla/akka/current).
-It can be [signed online](https://www.lightbend.com/contribute/cla/akka).
-
-## Licenses for Dependency Libraries
-
-Each dependency and its license can be seen in the project build file (the comment on the side of each dependency):
-@extref[AkkaBuild.scala](github:project/AkkaBuild.scala#L1054) 
+[Contributor License Agreements](https://www.apache.org/licenses/contributor-agreements.html#clas)
\ No newline at end of file
diff --git a/docs/src/main/paradox/project/links.md b/docs/src/main/paradox/project/links.md
index bf74b4048f..2dfed21f68 100644
--- a/docs/src/main/paradox/project/links.md
+++ b/docs/src/main/paradox/project/links.md
@@ -1,42 +1,21 @@
 # Project
 
-## Commercial Support
-
-Commercial support is provided by [Lightbend](https://www.lightbend.com).
-Akka is part of the [Akka Platform](https://www.lightbend.com/akka-platform).
-
-## Sponsors
-
-**Lightbend** is the company behind the Akka Project, Scala Programming Language,
-Play Web Framework, Lagom, sbt and many other open source projects. 
-It also provides the Lightbend Reactive Platform, which is powered by an open source core and commercial Enterprise Suite for building scalable Reactive systems on the JVM. Learn more at [lightbend.com](https://www.lightbend.com).
-
-## Akka Discuss Forums
-
-[Akka Discuss Forums](https://discuss.pekko.io)
-
-## Gitter
-
-Chat room about *using* Akka: [![gitter: akka/akka](https://img.shields.io/badge/gitter%3A-akka%2Fakka-blue.svg?style=flat-square)](https://gitter.im/akka/akka)
-
-A chat room is available for all questions related to developing and contributing to Akka: [![gitter: akka/dev](https://img.shields.io/badge/gitter%3A-akka%2Fdev-blue.svg?style=flat-square)](https://gitter.im/akka/dev)
-
 ## Source Code
 
-Akka uses Git and is hosted at [Github akka/akka](https://github.com/akka/akka).
+Pekko uses Git and is hosted at [Github apache/pekko](https://github.com/apache/incubator-pekko).
 
 ## Releases Repository
 
-All Akka releases are published via Sonatype to Maven Central, see
+All Pekko releases are published via Sonatype to Maven Central, see
 [search.maven.org](https://search.maven.org/search?q=g:org.apache.pekko)
 
 ## Snapshots Repository
 
-Snapshot builds are published nightly and are available for 30 days at [https://nightlies.apache.org/pekko/snapshots/org/apache/pekko/](https://nightlies.apache.org/pekko/snapshots/org/apache/pekko/). All Apache Pekko modules that belong to the same build have the same version.
+Snapshot builds are available at [https://oss.sonatype.org/content/repositories/snapshots/org/apache/pekko/](https://oss.sonatype.org/content/repositories/snapshots/org/apache/pekko/). All Pekko modules that belong to the same build have the same version.
 
 @@@ warning
 
-The use of Apache Pekko SNAPSHOTs, nightlies and milestone releases is discouraged unless you know what you are doing.
+The use of Pekko SNAPSHOTs, nightlies and milestone releases is discouraged unless you know what you are doing.
 
 @@@
 
diff --git a/docs/src/main/paradox/project/migration-guide-2.4.x-2.5.x.md b/docs/src/main/paradox/project/migration-guide-2.4.x-2.5.x.md
deleted file mode 100644
index 2399f9f09b..0000000000
--- a/docs/src/main/paradox/project/migration-guide-2.4.x-2.5.x.md
+++ /dev/null
@@ -1,4 +0,0 @@
-# Migration Guide 2.4.x to 2.5.x
-
-Migration from 2.4.x to 2.5.x is described in the
-[documentation of 2.5](https://doc.akka.io/docs/akka/2.5/project/migration-guide-2.4.x-2.5.x.html).
diff --git a/docs/src/main/paradox/project/migration-guide-2.5.x-2.6.x.md b/docs/src/main/paradox/project/migration-guide-2.5.x-2.6.x.md
deleted file mode 100644
index 0b2c74e154..0000000000
--- a/docs/src/main/paradox/project/migration-guide-2.5.x-2.6.x.md
+++ /dev/null
@@ -1,824 +0,0 @@
----
-project.description: Migrating to Akka 2.6.
----
-# Migration Guide 2.5.x to 2.6.x
-
-An overview of the changes in Akka 2.6 is presented in the [What's new in Akka 2.6 video](https://akka.io/blog/news/2019/12/12/akka-26-intro)
-and the [release announcement](https://akka.io/blog/news/2019/11/06/akka-2.6.0-released).
-
-Akka 2.6.x is binary backwards compatible with 2.5.x with the ordinary exceptions listed in the
-@ref:[Binary Compatibility Rules](../common/binary-compatibility-rules.md).
-
-This means that updating an application from Akka 2.5.x to 2.6.x should be a smooth process, and
-that libraries built for Akka 2.5.x can also be used with Akka 2.6.x. For example Akka HTTP 10.1.10
-and Akka Management 1.0.3 can be used with Akka 2.6.0 dependencies. You may have to add explicit
-dependencies to the new Akka version in your build.
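
For example, a sketch of pinning the newer Akka version explicitly in an sbt build (the artifact list is abridged, and assumes Akka HTTP 10.1.10 on Akka 2.6.0):

```scala
// build.sbt -- a sketch, not an exhaustive dependency list
val AkkaVersion = "2.6.0"
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor"  % AkkaVersion,
  "com.typesafe.akka" %% "akka-stream" % AkkaVersion,
  "com.typesafe.akka" %% "akka-http"   % "10.1.10")
```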
-
-That said, there are some changes to configuration and behavior that should be considered, so
-reading this migration guide and testing your application thoroughly is recommended.
-
-Rolling updates are possible without shutting down all nodes of the Akka Cluster, but will require
-configuration adjustments as described in the @ref:[Remoting](#remoting) section of this migration
-guide. Due to the @ref:[changed serialization of the Cluster messages in Akka 2.6.2](rolling-update.md#2-6-2-clustermessageserializer-manifests-change)
-a rolling update from 2.5.x must first be made to Akka 2.6.2 and then a second rolling update can change to Akka 2.6.3
-or later.
-
-## Scala 2.11 no longer supported
-
-If you are still using Scala 2.11 then you must upgrade to 2.12 or 2.13
-
-## Auto-downing removed
-
-Auto-downing of unreachable Cluster members has been removed after warnings and recommendations against using it
-for many years. It was by default disabled, but could be enabled with configuration
-`pekko.cluster.auto-down-unreachable-after`.
-
-For alternatives see the @ref:[documentation about Downing](../typed/cluster.md#downing).
-
-Auto-downing was a naïve approach to remove unreachable nodes from the cluster membership.
-In a production environment it will eventually break down the cluster. 
-When a network partition occurs, both sides of the partition will see the other side as unreachable
-and remove it from the cluster. This results in the formation of two separate, disconnected, clusters
-(known as *Split Brain*).
-
-This behavior is not limited to network partitions. It can also occur if a node in the cluster is
-overloaded, or experiences a long GC pause.
-
-When using @ref:[Cluster Singleton](../typed/cluster-singleton.md) or @ref:[Cluster Sharding](../typed/cluster-sharding.md)
-it can break the contract provided by those features. Both provide a guarantee that an actor will be unique in a cluster.
-With the auto-down feature enabled, it is possible for multiple independent clusters to form (*Split Brain*).
-When this happens the guaranteed uniqueness will no longer be true resulting in undesirable behavior in the system.
-
-This is even more severe when @ref:[Akka Persistence](../typed/persistence.md) is used in conjunction with
-Cluster Sharding. In this case, the lack of unique actors can cause multiple actors to write to the same journal.
-Akka Persistence operates on a single writer principle. Having multiple writers will corrupt the journal
-and make it unusable. 
-
-Finally, even if you don't use features such as Persistence, Sharding, or Singletons, auto-downing can lead the
-system to form multiple small clusters. These small clusters will be independent from each other. They will be
-unable to communicate and as a result you may experience performance degradation. Once this condition occurs,
-it will require manual intervention in order to reform the cluster.
-
-Because of these issues, auto-downing should **never** be used in a production environment.
-
-## Removed features that were deprecated
-
-After being deprecated since 2.5.0, the following have been removed in Akka 2.6.0.
-
-* akka-camel module
-    - As an alternative we recommend [Alpakka](https://doc.akka.io/docs/alpakka/current/).
-    - This is of course not a drop-in replacement. If there is community interest we are open to setting up akka-camel as a separate community-maintained repository.
-* akka-agent module
-    - If there is interest it may be moved to a separate, community-maintained repository.
-* akka-contrib module
-    - To migrate, take the components you are using from [Akka 2.5](https://github.com/org/apache/pekko/akka/tree/release-2.5/akka-contrib) and include them in your own project or library under your own package name.
-* Actor DSL
-    - Actor DSL is a rarely used feature. Use plain `system.actorOf` instead of the DSL to create Actors if you have been using it.
-* `org.apache.pekko.stream.extra.Timing` operator
-    - If you need it you can now find it in `org.apache.pekko.stream.contrib.Timed` from [Akka Stream Contrib](https://github.com/org/apache/pekko/akka-stream-contrib/blob/master/src/main/scala/org/apache/pekko/stream/contrib/Timed.scala).
-* Netty UDP (Classic remoting over UDP)
-    - To continue to use UDP configure @ref[Artery UDP](../remoting-artery.md#configuring-ssl-tls-for-akka-remoting) or migrate to Artery TCP.
-    - A full cluster restart is required to change to Artery.
-* `UntypedActor`
-    - Use `AbstractActor` instead.
-* `JavaTestKit`
-    - Use `org.apache.pekko.testkit.javadsl.TestKit` instead.
-* `UntypedPersistentActor`
-    - Use `AbstractPersistentActor` instead.
-* `UntypedPersistentActorWithAtLeastOnceDelivery`
-    - Use @apidoc[AbstractPersistentActorWithAtLeastOnceDelivery] instead.
-* `org.apache.pekko.stream.actor.ActorSubscriber` and `org.apache.pekko.stream.actor.ActorPublisher`
-    - Use `GraphStage` instead.
-
-After being deprecated since 2.4.0, the following have been removed in Akka 2.6.0.
-
-* Secure cookie in Classic Akka Remoting
-
-After being deprecated since 2.2, the following have been removed in Akka 2.6.0.
-
-* `actorFor`
-    - Use `ActorSelection` instead.
-
-### Removed methods
-
-* `Logging.getLogger(UntypedActor)` `UntypedActor` has been removed, use `AbstractActor` instead.
-* `LoggingReceive.create(Receive, ActorContext)` use `AbstractActor.Receive` instead.
-* `ActorMaterialzierSettings.withAutoFusing` disabling fusing is no longer possible.
-* `AbstractActor.getChild` use `findChild` instead.
-* `Actor.getRef` use `Actor.getActorRef` instead.
-* `CircuitBreaker.onOpen` use `CircuitBreaker.addOnOpenListener`
-* `CircuitBreaker.onHalfOpen` use `CircuitBreaker.addOnHalfOpenListener`
-* `CircuitBreaker.onClose` use `CircuitBreaker.addOnCloseListener`
-* `Source.actorSubscriber`, use `Source.fromGraph` instead.
-* `Source.actorActorPublisher`, use `Source.fromGraph` instead.
-
-
-## Deprecated features
-
-### PersistentFSM
-
-@ref[Migration guide to Persistence Typed](../persistence-fsm.md) is in the PersistentFSM documentation.
-
-### TypedActor
-
-`org.apache.pekko.actor.TypedActor` has been deprecated as of 2.6.0 in favor of the
-`pekko.actor.typed` API which should be used instead.
-
-There are several reasons for phasing out the old `TypedActor`. The primary reason is they use transparent
-remoting which is not our recommended way of implementing and interacting with actors. Transparent remoting
-is when you try to make remote method invocations look like local calls. In contrast we believe in location
-transparency with explicit messaging between actors (same type of messaging for both local and remote actors).
-They also have limited functionality compared to ordinary actors, and worse performance.
-
-To summarize the fallacy of transparent remoting:
-
-* Was used in CORBA, RMI, and DCOM, and all of them failed. Those problems were noted by [Waldo et al already in 1994](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.7628)
-* Partial failure is a major problem. Remote calls introduce uncertainty whether the function was invoked or not.
-  Typically handled by using timeouts but the client can't always know the result of the call.
-* Latency of calls over a network is several orders of magnitude longer than latency of local calls,
-  which can be more than surprising if encoded as an innocent-looking local method call.
-* Remote invocations have much lower throughput due to the need of serializing the
-  data and you can't just pass huge datasets in the same way.
-
-Therefore explicit message passing is preferred. It looks different from local method calls
-(@scala[`actorRef ! message`]@java[`actorRef.tell(message)`]) and there is no misconception
-that sending a message will result in it being processed instantaneously. The goal of location
-transparency is to unify message passing for both local and remote interactions, versus attempting
-to make remote interactions look like local method calls.
-
-Warnings about `TypedActor` have been [mentioned in documentation](https://doc.akka.io/docs/org/apache/pekko/2.5/typed-actors.html#when-to-use-typed-actors)
-for many years.
-
-### Cluster Client
-
-Cluster client has been deprecated as of 2.6.0 in favor of [Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/index.html).
-It is not advised to build new applications with Cluster client, and existing users @ref[should migrate to Akka gRPC](../cluster-client.md#migration-to-akka-grpc).
-
-### akka-protobuf
-
-`akka-protobuf` was never intended to be used by end users but perhaps this was not well-documented.
-Applications should use the standard Protobuf dependency instead of `akka-protobuf`. The artifact is still
-published, but the transitive dependency to `akka-protobuf` has been removed.
-
-Akka is now using Protobuf version 3.9.0 for serialization of messages defined by Akka.
-
-### ByteString.empty
-
-It is now recommended to use @apidoc[util.ByteString]`.emptyByteString()` instead of
-@apidoc[util.ByteString]`.empty()` when using Java because @apidoc[util.ByteString]`.empty()`
-is [no longer available as a static method](https://github.com/scala/bug/issues/11509) in the artifacts built for Scala 2.13.
-
-### AkkaSslConfig
-
-`AkkaSslConfig` has been deprecated in favor of setting up TLS with `javax.net.ssl.SSLEngine` directly.
-
-This also means that methods Akka Streams `TLS` and `Tcp` that take `SSLContext` or `AkkaSslConfig` have been
-deprecated and replaced with corresponding methods that takes a factory function for creating the `SSLEngine`.
-
-See documentation of @ref:[streaming IO with TLS](../stream/stream-io.md#tls).    
-
-### JavaLogger
-
-`org.apache.pekko.event.jul.JavaLogger` for integration with `java.util.logging` has been deprecated. Use SLF4J instead,
-which also has support for `java.util.logging`.
-
-### org.apache.pekko.Main
-
-`org.apache.pekko.Main` is deprecated in favour of starting the `ActorSystem` from a custom main class instead. `org.apache.pekko.Main` was not
-adding much value and typically a custom main class is needed anyway.
-
-### Pluggable DNS
-
-Plugging in your own DNS implementation is now deprecated and will be removed in `2.7.0`; it was originally added to
-support a third party DNS provider that supported SRV records. The built-in `async-dns` now supports SRV records.
-
-The `resolve` and `cached` methods on the `DNS` extension have also been deprecated in favour of ones that take in
-`DnsProtocol.Resolve`. These methods return new types that include SRV records.
-
-## Remoting
-
-### Default remoting is now Artery TCP
-
-@ref[Artery TCP](../remoting-artery.md) is now the default remoting implementation.
-Classic remoting has been deprecated and will be removed in `2.7.0`.
-
-
-<a id="classic-to-artery"></a>
-#### Migrating from classic remoting to Artery
-
-Artery has the same functionality as classic remoting and you should normally only have to change the
-configuration to switch.
-
-To switch a full cluster restart is required and any overrides for classic remoting need to be ported to Artery configuration.
-Artery has a completely different protocol, which means that a rolling update is not supported.
-
-Artery defaults to TCP (see @ref:[selected transport](../remoting-artery.md#selecting-a-transport)) which is a good start
-when migrating from classic remoting.
-
-The protocol part in the Akka `Address`, for example `"akka.tcp://actorSystemName@10.0.0.1:2552/user/actorName"`
-has changed from `akka.tcp` to `akka`. If you have configured or hardcoded any such addresses you have to change
-them to `"akka://actorSystemName@10.0.0.1:25520/user/actorName"`. `akka` is used also when TLS is enabled.
-One typical place where such address is used is in the `seed-nodes` configuration.
-
-The default port is 25520 instead of 2552 to avoid connections between Artery and classic remoting due to
-misconfiguration. You can run Artery on 2552 if you prefer that (e.g. existing firewall rules) and then you
-have to configure the port with:
-
-```
-pekko.remote.artery.canonical.port = 2552
-```
-
-The configuration for Artery is different, so you might have to revisit any custom configuration. See the full
-@ref:[reference configuration for Artery](../general/configuration-reference.md#config-akka-remote-artery) and
-@ref:[reference configuration for classic remoting](../general/configuration-reference.md#config-akka-remote).
-
-Configuration that is likely required to be ported:
-
-* `pekko.remote.netty.tcp.hostname` => `pekko.remote.artery.canonical.hostname`
-* `pekko.remote.netty.tcp.port`=> `pekko.remote.artery.canonical.port`
-
-If using SSL then `tls-tcp` needs to be enabled and set up. See @ref[Artery docs for SSL](../remoting-artery.md#configuring-ssl-tls-for-akka-remoting)
-for how to do this.
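
A minimal sketch of the transport switch (key and trust store settings must be configured in addition):

```
pekko.remote.artery.transport = tls-tcp
```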
-
-The following events that are published to the `eventStream` have changed:
-
-* classic `org.apache.pekko.remote.QuarantinedEvent` is `org.apache.pekko.remote.artery.QuarantinedEvent` in Artery
-* classic `org.apache.pekko.remote.GracefulShutdownQuarantinedEvent` is `org.apache.pekko.remote.artery.GracefulShutdownQuarantinedEvent` in Artery
-* classic `org.apache.pekko.remote.ThisActorSystemQuarantinedEvent` is `org.apache.pekko.remote.artery.ThisActorSystemQuarantinedEvent` in Artery
-
-#### Migration from 2.5.x Artery to 2.6.x Artery
-
-The following defaults have changed:
-
-* `pekko.remote.artery.transport` default has changed from `aeron-udp` to `tcp`
-
-The following properties have moved. If you don't adjust these from their defaults no changes are required:
-
-For Aeron-UDP:
-
-* `pekko.remote.artery.log-aeron-counters` to `pekko.remote.artery.advanced.aeron.log-aeron-counters`
-* `pekko.remote.artery.advanced.embedded-media-driver` to `pekko.remote.artery.advanced.aeron.embedded-media-driver`
-* `pekko.remote.artery.advanced.aeron-dir` to `pekko.remote.artery.advanced.aeron.aeron-dir`
-* `pekko.remote.artery.advanced.delete-aeron-dir` to `pekko.remote.artery.advanced.aeron.aeron-delete-dir`
-* `pekko.remote.artery.advanced.idle-cpu-level` to `pekko.remote.artery.advanced.aeron.idle-cpu-level`
-* `pekko.remote.artery.advanced.give-up-message-after` to `pekko.remote.artery.advanced.aeron.give-up-message-after`
-* `pekko.remote.artery.advanced.client-liveness-timeout` to `pekko.remote.artery.advanced.aeron.client-liveness-timeout`
-* `pekko.remote.artery.advanced.image-liveless-timeout` to `pekko.remote.artery.advanced.aeron.image-liveness-timeout`
-* `pekko.remote.artery.advanced.driver-timeout` to `pekko.remote.artery.advanced.aeron.driver-timeout`
-
-For TCP:
-
-* `pekko.remote.artery.advanced.connection-timeout` to `pekko.remote.artery.advanced.tcp.connection-timeout`
-
-
-#### Remaining with Classic remoting (not recommended)
-
-Classic remoting is deprecated but can be used in 2.6.x. Rolling update from Classic remoting to Artery is
-not supported, so if you want to update from Akka 2.5.x with Classic remoting to Akka 2.6.x without a full shut
-down of the Cluster you have to enable Classic remoting. Later, you can plan for a full shutdown and
-@ref:[migrate from classic remoting to Artery](#migrating-from-classic-remoting-to-artery) as a separate step.
-
-Explicitly disable Artery by setting property `pekko.remote.artery.enabled` to `false`. Further, any configuration under `pekko.remote` that is
-specific to classic remoting needs to be moved to `pekko.remote.classic`. To see which configuration options
-are specific to classic search for them in: @ref:[`akka-remote/reference.conf`](../general/configuration-reference.md#config-akka-remote).
-
-If you have a [Lightbend Subscription](https://www.lightbend.com/lightbend-subscription) you can use our [Config Checker](https://doc.akka.io/docs/akka-enhancements/current/config-checker.html) enhancement to flag any settings that have not been properly migrated.
-
-### Persistent mode for Cluster Sharding
-
-Cluster Sharding coordinator and @ref:[Remembering Entities](../cluster-sharding.md#remembering-entities) state could previously be stored in Distributed Data or via Akka Persistence.
-The Persistence mode has been deprecated in favour of using the Distributed Data mode for the coordinator state. A replacement for the state
-for Remembered Entities is tracked in [issue 27763](https://github.com/org/apache/pekko/akka/issues/27763).
-
-## Java Serialization
-
-Java serialization is known to be slow and [prone to attacks](https://community.microfocus.com/cyberres/fortify/f/fortify-discussions/317555/the-perils-of-java-deserialization)
-of various kinds - it never was designed for high throughput messaging after all.
-One may think that network bandwidth and latency limit the performance of remote messaging, but serialization is a more typical bottleneck.
-
-From Akka 2.6.0 on, the Java serializer is disabled by default in Akka serialization, and Akka
-itself doesn't use Java serialization for any of its internal messages.
-
-You have to enable @ref:[serialization](../serialization.md) to send messages between ActorSystems (nodes) in the Cluster.
-@ref:[Serialization with Jackson](../serialization-jackson.md) is a good choice in many cases, and our
-recommendation if you don't have other preferences or constraints.
-
-For compatibility with older systems that rely on Java serialization it can be enabled with the following configuration:
-
-```ruby
-pekko.actor.allow-java-serialization = on
-```
-
-Akka will still log a warning when Java serialization is used and to silence that you may add:
-
-```ruby
-pekko.actor.warn-about-java-serializer-usage = off
-```
-
-### Rolling update
-
-Please see the @ref:[rolling update procedure from Java serialization to Jackson](../additional/rolling-updates.md#from-java-serialization-to-jackson).
-
-### Java serialization in consistent hashing
-
-When using a consistent hashing router, keys that are not bytes or a String are serialized.
-You might have to add a serializer for your hash keys if none of the default serializers
-handles that type and it was previously "accidentally" serialized with Java serialization.
-
-## Configuration and behavior changes
-
-The following documents configuration changes and behavior changes where no action is required. In some cases the old
-behavior can be restored via configuration.
-
-### Remoting dependencies have been made optional
-
-Classic remoting depends on Netty and Artery UDP depends on Aeron. These are now both optional dependencies that need
-to be explicitly added. See @ref[classic remoting](../remoting.md) or @ref[artery remoting](../remoting-artery.md) for instructions.
-
-### Remote watch and deployment have been disabled without Cluster use
-
-By default, these remoting features are disabled when not using Akka Cluster:
-
-* Remote Deployment: falls back to creating a local actor
-* Remote Watch: ignores the watch and unwatch request, and `Terminated` will not be delivered when the remote actor is stopped or if a remote node crashes
- 
-Watching an actor on a node outside the cluster may have unexpected
-@ref[consequences](../remoting-artery.md#quarantine), such as quarantining,
-so it has been disabled by default in Akka 2.6.x. This is the case either when
-cluster is not used at all (only plain remoting) or when watching an actor outside of the cluster.
-
-On the other hand, failure detection between nodes of the same cluster
-does not have that shortcoming. Thus, when remote watching or deployment is used within
-the same cluster, they work the same in 2.6.x as before, except that a remote watch attempt before a node has joined
-will log a warning and be ignored; it must be done after the node has joined.
-
-To optionally enable a watch without Akka Cluster or across a Cluster boundary between Cluster and non Cluster, 
-knowing the consequences, all watchers (cluster as well as remote) need to set:
-```
-pekko.remote.use-unsafe-remote-features-outside-cluster = on
-```
-
-When enabled:
-
-* An initial warning is logged on startup of `RemoteActorRefProvider`
-* A warning will be logged on remote watch attempts, which you can suppress by setting
-```
-pekko.remote.warn-unsafe-watch-outside-cluster = off
-```
-
-### Schedule periodically with fixed-delay vs. fixed-rate
-
-The `Scheduler.schedule` method has been deprecated in favor of selecting `scheduleWithFixedDelay` or
-`scheduleAtFixedRate`.
-
-The @ref:[Scheduler](../scheduler.md#schedule-periodically) documentation describes the difference between
-`fixed-delay` and `fixed-rate` scheduling. If you are uncertain of which one to use you should pick
-`startTimerWithFixedDelay`.
-
-The deprecated `schedule` method had the same semantics as `scheduleAtFixedRate`, but since that can result in
-bursts of scheduled tasks or messages after long garbage collection pauses and in the worst case cause undesired
-load on the system, `scheduleWithFixedDelay` is often preferred.
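
A small sketch of the fixed-delay variant with the classic scheduler (system name and task are placeholders):

```scala
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem

object SchedulerSample extends App {
  val system = ActorSystem("scheduler-sample")
  import system.dispatcher // ExecutionContext that runs the task

  // the next run is scheduled 5 seconds after the previous run completed,
  // so there is no burst of executions after a long GC pause
  system.scheduler.scheduleWithFixedDelay(1.second, 5.seconds) { () =>
    println("tick")
  }
}
```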
-
-For the same reason the following methods have also been deprecated:
-
-* `TimerScheduler.startPeriodicTimer`, replaced by `startTimerWithFixedDelay` or `startTimerAtFixedRate`
-* `FSM.setTimer`, replaced by `startSingleTimer`, `startTimerWithFixedDelay` or `startTimerAtFixedRate`
-* `PersistentFSM.setTimer`, replaced by `startSingleTimer`, `startTimerWithFixedDelay` or `startTimerAtFixedRate`
-
-### Internal dispatcher introduced
-
-To protect the Akka internals against starvation when user code blocks the default dispatcher (for example by accidental
-use of blocking APIs from actors) a new internal dispatcher has been added. All of Akka's internal, non-blocking actors
-now run on the internal dispatcher by default.
-
-The dispatcher can be configured through `pekko.actor.internal-dispatcher`.
-
-For maximum performance, you might want to use a single shared dispatcher for all non-blocking,
-asynchronous actors, user actors and Akka internal actors. In that case, you can configure the
-`pekko.actor.internal-dispatcher` with a string value of `pekko.actor.default-dispatcher`.
-This reinstantiates the behavior from previous Akka versions but also removes the isolation between
-user and Akka internals. So, use at your own risk!
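
That configuration would be a one-liner, sketched here:

```
# share one dispatcher for user and internal actors -- removes the isolation
pekko.actor.internal-dispatcher = "pekko.actor.default-dispatcher"
```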
-
-Several `use-dispatcher` configuration settings that previously accepted an empty value to fall back to the default
-dispatcher now have an explicit value of `pekko.actor.internal-dispatcher` and no longer accept an empty
-string as value. If such an empty value is used in your `application.conf`, the same result is achieved by simply removing
-that entry completely and letting the default apply.
-
-For more details about configuring dispatchers, see the @ref[Dispatchers](../dispatchers.md) documentation.
-
-### Default dispatcher size
-
-Previously the factor for the default dispatcher was set a bit high (`3.0`) to give some extra threads in case of accidental
-blocking and protect a bit against starving the internal actors. Since the internal actors are now on a separate dispatcher
-the default dispatcher has been adjusted down to `1.0` which means the number of threads will be one per core, but at least
-`8` and at most `64`. This can be tuned using the individual settings in `pekko.actor.default-dispatcher.fork-join-executor`.
-
-### Mixed version
-
-Startup will fail if mixed versions of a product family (such as Akka) are accidentally used. This was previously
-only logged as a warning. There is no guarantee that mixed modules will work, and it's better to fail early than
-to have the application crash at some later time.
-
-### Cluster Sharding
-
-#### waiting-for-state-timeout reduced to 2s
-
-This has been reduced to speed up ShardCoordinator initialization in smaller clusters.
-The read from ddata is a ReadMajority. For small clusters (< majority-min-cap) every node needs to respond
-so it is more likely to time out if there are nodes restarting, for example when a rolling re-deploy is happening.
-
-#### Passivate idle entity
-
-The configuration `pekko.cluster.sharding.passivate-idle-entity-after` is now enabled by default.
-Sharding will passivate entities when they have not received any messages after this duration.
-To disable passivation you can use configuration:
-
-```
-pekko.cluster.sharding.passivate-idle-entity-after = off
-```
-
-It is always disabled if @ref:[Remembering Entities](../cluster-sharding.md#remembering-entities) is enabled.
-
-#### Cluster Sharding stats
-
-A new field has been added to the response of a `ShardRegion.GetClusterShardingStats` command
-for any shards per region that may have failed or not responded within the new configurable `pekko.cluster.sharding.shard-region-query-timeout`. 
-This is described further in @ref:[inspecting sharding state](../cluster-sharding.md#inspecting-cluster-sharding-state).
-
-### Distributed Data
-
-#### Config for message payload size
-
-Configuration properties for controlling sizes of `Gossip` and `DeltaPropagation` messages in Distributed Data
-have been reduced. Previous defaults sometimes resulted in messages exceeding max payload size for remote
-actor messages.
-
-The new configuration properties are:
-
-```
-pekko.cluster.distributed-data.max-delta-elements = 500
-pekko.cluster.distributed-data.delta-crdt.max-delta-size = 50
-```
-
-#### DataDeleted
-
-`DataDeleted` has been changed in its usage. While it is still a possible response to a Delete request,
-it is no longer the response when an `Update` or `Get` request couldn't be performed because the entry has been deleted.
-In its place are two new possible responses to a request, `UpdateDataDeleted` for an `Update` and `GetDataDeleted`
-for a `Get`.
-
-The reason for this change is that `DataDeleted` didn't extend the `UpdateResponse` and `GetResponse` types
-and could therefore cause problems when `Update` and `Get` were used with `ask`. This was also a problem for
-Akka Typed.
-
-### CoordinatedShutdown is run from ActorSystem.terminate
-
-No migration is needed but it is mentioned here because it is a change in behavior.
-
-When `ActorSystem.terminate()` is called, @ref:[`CoordinatedShutdown`](../coordinated-shutdown.md)
-will be run in Akka 2.6.x, which wasn't the case in 2.5.x. For example, if using Akka Cluster this means that
-member will attempt to leave the cluster gracefully.
-
-If this is not desired behavior, for example in tests, you can disable this feature with the following configuration
-and then it will behave as in Akka 2.5.x:
-
-```
-pekko.coordinated-shutdown.run-by-actor-system-terminate = off
-```
-
-### Scheduler not running tasks when shutdown
-
-When the `ActorSystem` was shutting down and the `Scheduler` was closed, all outstanding scheduled tasks were run,
-which was needed for some internals in Akka but a surprising behavior for end users. Therefore this behavior has
-changed in Akka 2.6.x and outstanding tasks are not run when the system is terminated.
-
-Instead, `system.registerOnTermination` or `CoordinatedShutdown` can be used for running such tasks when shutting
-down.
-
-### IOSources & FileIO
-
-`FileIO.toPath`, `StreamConverters.fromInputStream`, and `StreamConverters.fromOutputStream` now always fail the materialized value in case of failure. 
-It is no longer required to both check the materialized value and the `Try[Done]` inside the @apidoc[IOResult]. In case of an IO failure
-the exception will be @apidoc[IOOperationIncompleteException] instead of @apidoc[AbruptIOTerminationException].
-
-Additionally when downstream of the IO-sources cancels with a failure, the materialized value
-is failed with that failure rather than completed successfully.
-
-### Akka now uses Fork Join Pool from JDK
-
-Previously, Akka contained a shaded copy of the ForkJoinPool. In benchmarks, we could not find significant benefits of
-keeping our own copy, so from Akka 2.6.0 on, the default FJP from the JDK will be used. The Akka FJP copy was removed.
-
-### Logging of dead letters
-
-In Akka 2.5.x, once the number of dead letters had reached the configured `pekko.log-dead-letters` value, no
-more dead letters were logged. In Akka 2.6.x the count is reset after the configured `pekko.log-dead-letters-suspend-duration`.
-
-`pekko.log-dead-letters-during-shutdown` default configuration changed from `on` to `off`.
-
-### Cluster failure detection
-
-The default number of nodes that each node observes for failure detection has increased from 5 to 9.
-The reason is to have better coverage and unreachability information for downing decisions.
-
-Configuration property:
-
-```
-pekko.cluster.monitored-by-nr-of-members = 9
-```
-
-### TestKit
-
-`expectNoMessage()` without timeout parameter is now using a new configuration property
-`pekko.test.expect-no-message-default` (short timeout) instead of `remainingOrDefault` (long timeout).
-
-### Config library resolution change
-
-The [Lightbend Config Library](https://github.com/lightbend/config) has been updated to load both `reference.conf`
-and user config files such as `application.conf` before substitution of variables used in the `reference.conf`. 
-This makes it possible to override such variables in `reference.conf` with user configuration.
-
-For example, the default config for Cluster Sharding, refers to the default config for Distributed Data, in 
-`reference.conf` like this:
-
-```ruby
-pekko.cluster.sharding.distributed-data = ${pekko.cluster.distributed-data}
-``` 
-
-In Akka 2.5.x this meant that to override the default gossip interval for both direct use of Distributed Data and Cluster Sharding
-in the same application you would have to change two settings:
-
-```ruby
-pekko.cluster.distributed-data.gossip-interval = 3s
-pekko.cluster.sharding.distributed-data.gossip-interval = 3s
-```
-
-In Akka 2.6.0 and forward, changing the default in the `pekko.cluster.distributed-data` config block will be done before
-the variable in `reference.conf` is resolved, so that the same change only needs to be done once:
-
-```ruby
-pekko.cluster.distributed-data.gossip-interval = 3s
-```
-
-The following default settings in Akka are using such substitution and may be affected if you are changing the right
-hand config path in your `application.conf`:
-
-```ruby
-pekko.cluster.sharding.coordinator-singleton = ${pekko.cluster.singleton}
-pekko.cluster.sharding.distributed-data = ${pekko.cluster.distributed-data}
-pekko.cluster.singleton-proxy.singleton-name = ${pekko.cluster.singleton.singleton-name}
-pekko.cluster.typed.receptionist.distributed-data = ${pekko.cluster.distributed-data}
-pekko.remote.classic.netty.ssl = ${pekko.remote.classic.netty.tcp}
-pekko.remote.artery.advanced.materializer = ${pekko.stream.materializer}
-``` 
-
-
-## Source incompatibilities
-
-### StreamRefs
-
-The materialized value for `StreamRefs.sinkRef` and `StreamRefs.sourceRef` is no longer wrapped in
-`Future`/`CompletionStage`. It can be sent as reply to `sender()` immediately without using the `pipe` pattern.
-
-`StreamRefs` was marked as @ref:[may change](../common/may-change.md).
-
-## Akka Typed
-
-### Naming convention changed
-
-Needing a way to distinguish the new APIs in code and docs from the original, Akka used the naming
-convention `untyped`. All references to the original have now been changed to `classic`. Referring
-to the new APIs as `typed` is going away as they become the primary APIs.
-
-### Receptionist has moved
-
-The receptionist had a name clash with the default Cluster Client Receptionist at `/system/receptionist` and will now 
-instead either run under `/system/localReceptionist` or `/system/clusterReceptionist`.
-
-The path change means that the receptionist information will not be disseminated between 2.5.x and 2.6.x nodes during a
-rolling update from 2.5.x to 2.6.x if you use Akka Typed. See @ref:[rolling updates with typed Receptionist](../additional/rolling-updates.md#akka-typed-with-receptionist-or-cluster-receptionist)
-
-### Cluster Receptionist using own Distributed Data
-
-In 2.5.x the Cluster Receptionist was using the shared Distributed Data extension, but that could result in
-undesired configuration changes if the application was also using that extension and changed, for example, the
-`role` configuration.
-
-In 2.6.x the Cluster Receptionist is using its own independent instance of Distributed Data.
-
-This means that the receptionist information will not be disseminated between 2.5.x and 2.6.x nodes during a
-rolling update from 2.5.x to 2.6.x if you use Akka Typed. See @ref:[rolling updates with typed Cluster Receptionist](../additional/rolling-updates.md#akka-typed-with-receptionist-or-cluster-receptionist)
-
-### Akka Typed API changes
-
-Akka Typed APIs were still marked as @ref:[may change](../common/may-change.md) in Akka 2.5.x and a few changes were
-made before finalizing the APIs. Compared to Akka 2.5.x the source incompatible changes are:
-
-* `Behaviors.intercept` now takes a factory function for the interceptor.
-* `ActorSystem.scheduler` previously gave access to the classic `org.apache.pekko.actor.Scheduler` but now returns a specific `org.apache.pekko.actor.typed.Scheduler`.
-  Additionally, the `schedule` method has been replaced by `scheduleWithFixedDelay` and `scheduleAtFixedRate`. Actors that need to schedule tasks should
-  prefer `Behaviors.withTimers`.
-* `TimerScheduler.startPeriodicTimer`, replaced by `startTimerWithFixedDelay` or `startTimerAtFixedRate` (see the sketch after this list)
-* `Routers.pool` now takes a factory function rather than a `Behavior` to protect against accidentally sharing same behavior instance and state across routees.
-* The `request` parameter in Distributed Data commands was removed, in favor of using `ask` with the new `ReplicatorMessageAdapter`.
-* Removed `Behavior.same`, `Behavior.unhandled`, `Behavior.stopped`, `Behavior.empty`, and `Behavior.ignore` since
-  they were redundant with corresponding @scala[scaladsl.Behaviors.x]@java[javadsl.Behaviors.x].
-* `ActorContext` parameter removed in `javadsl.ReceiveBuilder` for the functional style in Java. Use `Behaviors.setup`
-   to retrieve `ActorContext`, and use an enclosing class to hold initialization parameters and `ActorContext`.
-* Java @javadoc[EntityRef](pekko.cluster.sharding.typed.javadsl.EntityRef) ask timeout now takes a `java.time.Duration` rather than a @apidoc[Timeout]
-* Changed method signature for `EventAdapter.fromJournal` and support for `manifest` in `EventAdapter`.
-* Renamed @scala[`widen`]@java[`Behaviors.widen`] to @scala[`transformMessages`]@java[`Behaviors.transformMessages`]
-* `BehaviorInterceptor`, `Behaviors.monitor`, `Behaviors.withMdc` and @scala[`transformMessages`]@java[`Behaviors.transformMessages`] takes
-  a @scala[`ClassTag` parameter (probably source compatible)]@java[`interceptMessageClass` parameter].
-  `interceptMessageType` method in `BehaviorInterceptor` is replaced with this @scala[`ClassTag`]@java[`Class`] parameter.
-* `Behavior.orElse` has been removed because it wasn't safe together with `narrow`.
-* `StashBuffer`s are now created with `Behaviors.withStash` rather than instantiating directly
-* To align with the Akka Typed style guide `SpawnProtocol` is now created through @scala[`SpawnProtocol()`]@java[`SpawnProtocol.create()`], the special `Spawn` message
-  factories have been removed and the top level of the actor protocol is now `SpawnProtocol.Command`
-* `Future` removed from `ActorSystem.systemActorOf`.
-* `toUntyped` has been renamed to `toClassic`.
-* Akka Typed is now using SLF4J as the logging API. @scala[`ActorContext.log`]@java[`ActorContext.getLog`] returns
-  an `org.slf4j.Logger`. MDC has been changed to only support `String` values.
-* `setLoggerClass` in `ActorContext` has been renamed to `setLoggerName`.
-* `GetDataDeleted` and `UpdateDataDeleted` introduced as described in @ref[DataDeleted](#datadeleted).
-* `SubscribeResponse` introduced in `Subscribe` because the responses can be both `Changed` and `Deleted`.
-* `ReplicationDeleteFailure` renamed to `DeleteFailure`.
-* `EventSourcedEntity` removed in favor of using plain `EventSourcedBehavior` because the alternative way was
-  causing more confusion than adding value. Construction of the `PersistenceId` for the `EventSourcedBehavior` is
-  facilitated by factory methods in `PersistenceId`.
-* `PersistenceId.apply(String)` renamed to `PersistenceId.ofUniqueId(String)`  
-* `org.apache.pekko.cluster.sharding.typed.scaladsl.Entity.apply` changed to use two parameter lists because the new
-  `EntityContext.entityTypeKey` required additional type parameter that is inferred better with a secondary
-  parameter list.
-* `EventSourcedBehavior.withEnforcedReplies` signature changed. Command is not required to extend `ExpectingReply`
-  anymore. `ExpectingReply` has therefore been removed.
-* `ActorContext` is now a mandatory constructor parameter in `AbstractBehavior`. Create via `Behaviors.setup`.
-  The reason is to encourage correct usage and detect mistakes like not creating a new instance (via `setup`)
-  when the behavior is supervised and restarted.    
-* `LoggingEventFilter` has been renamed to `LoggingTestKit` and its `intercept` method renamed to `assert`
-* Scala `ask` from `AskPattern` now implicitly converts an implicit `ActorSystem[_]` to `Scheduler` to eliminate some boilerplate.
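-
-As a sketch of the new timer API referenced in the list above (the `Command`/`Tick` protocol here is illustrative):
-
-```scala
-import scala.concurrent.duration._
-
-import org.apache.pekko.actor.typed.Behavior
-import org.apache.pekko.actor.typed.scaladsl.Behaviors
-
-sealed trait Command
-case object Tick extends Command
-
-def ticking: Behavior[Command] =
-  Behaviors.withTimers { timers =>
-    // startPeriodicTimer is gone; pick fixed-delay or fixed-rate explicitly.
-    timers.startTimerWithFixedDelay(Tick, 1.second)
-    Behaviors.receiveMessage { case Tick => Behaviors.same }
-  }
-```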
-
-#### Akka Typed Stream API changes
-
-* `ActorSource.actorRef` relying on `PartialFunction` has been replaced in the Java API with a variant more suitable to be called by Java.
-* Factories for creating a materializer from an `org.apache.pekko.actor.typed.ActorSystem` have been removed.
-  A stream can be run with an `org.apache.pekko.actor.typed.ActorSystem` @scala[in implicit scope]@java[parameter]
-  and therefore the need for creating a materializer has been reduced.
-* `actorRefWithAck` has been renamed to `actorRefWithBackpressure`
-
-## Akka Stream changes
-
-### Materializer changes
-
-A default materializer is now provided out of the box. For the Java API just pass `system` when running streams,
-for Scala an implicit materializer is provided if there is an implicit `ActorSystem` available. This avoids leaking
-materializers and simplifies most stream use cases somewhat.
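-
-A minimal sketch of relying on the default materializer (no `ActorMaterializer` created):
-
-```scala
-import org.apache.pekko.actor.ActorSystem
-import org.apache.pekko.stream.scaladsl.{ Sink, Source }
-
-implicit val system: ActorSystem = ActorSystem("example")
-// The implicit ActorSystem provides the system default materializer.
-Source(1 to 3).runWith(Sink.foreach(println))
-```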
-
-The `ActorMaterializer` factories have been deprecated and replaced with a few corresponding factories in `org.apache.pekko.stream.Materializer`.
-New factories with per-materializer settings have not been provided; such settings should instead be changed globally through config or set per stream,
-see below for more details.
-
-Having a default materializer available means that most, if not all, usages of Java `ActorMaterializer.create()`
-and Scala `implicit val materializer = ActorMaterializer()` should be removed.
-
-Details about the stream materializer can be found in @ref:[Actor Materializer Lifecycle](../stream/stream-flows-and-basics.md#actor-materializer-lifecycle)
-
-When using streams from typed, the same factories and methods for creating materializers and running streams can now be used as from classic. The
-`org.apache.pekko.stream.typed.scaladsl.ActorMaterializer` and `org.apache.pekko.stream.typed.javadsl.ActorMaterializerFactory` that previously existed in the `akka-stream-typed` module have been removed.
-
-### Materializer settings deprecated
-
-The `ActorMaterializerSettings` class has been deprecated.
-
-All materializer settings are available as configuration to change the system default or through attributes that can be
-used for individual streams when they are materialized.
-
-| MaterializerSettings   | Corresponding attribute                           | Config  |
--------------------------|---------------------------------------------------|---------|
-| `initialInputBufferSize`        | `Attributes.inputBuffer(initial, max)`   | `pekko.stream.materializer.initial-input-buffer-size` |
-| `maxInputBufferSize`            | `Attributes.inputBuffer(initial, max)`   | `pekko.stream.materializer.max-input-buffer-size` |
-| `dispatcher`                    | `ActorAttributes.dispatcher(name)`       | `pekko.stream.materializer.dispatcher` |
-| `supervisionDecider`            | `ActorAttributes.supervisionStrategy`    | na |
-| `debugLogging`                  | `ActorAttributes.debugLogging`           | `pekko.stream.materializer.debug-logging` |
-| `outputBurstLimit`              | `ActorAttributes.outputBurstLimit`       | `pekko.stream.materializer.output-burst-limit` |
-| `fuzzingMode`                   | `ActorAttributes.fuzzingMode`            | `pekko.stream.materializer.debug.fuzzing-mode` |
-| `autoFusing`                    | no longer used (since 2.5.0)             | na |
-| `maxFixedBufferSize`            | `ActorAttributes.maxFixedBufferSize`     | `pekko.stream.materializer.max-fixed-buffer-size` |
-| `syncProcessingLimit`           | `ActorAttributes.syncProcessingLimit`    | `pekko.stream.materializer.sync-processing-limit` |
-| `IOSettings.tcpWriteBufferSize` | `Tcp.writeBufferSize`                    | `pekko.stream.materializer.io.tcp.write-buffer-size` |
-| `blockingIoDispatcher`          | na                                       | `pekko.stream.materializer.blocking-io-dispatcher` |
-
-
-| StreamRefSettings                | Corresponding StreamRefAttributes | Config  |
------------------------------------|-----------------------------------|---------|
-| `bufferCapacity`                 | `bufferCapacity`                  | `pekko.stream.materializer.stream-ref.buffer-capacity` |
-| `demandRedeliveryInterval`       | `demandRedeliveryInterval`        | `pekko.stream.materializer.stream-ref.demand-redelivery-interval` |
-| `subscriptionTimeout`            | `subscriptionTimeout`             | `pekko.stream.materializer.stream-ref.subscription-timeout` |
-| `finalTerminationSignalDeadline` | `finalTerminationSignalDeadline`  | `pekko.stream.materializer.stream-ref.final-termination-signal-deadline` |
-
-
-| SubscriptionTimeoutSettings      | Corresponding ActorAttributes               | Config  |
------------------------------------|---------------------------------------------|---------|
-| `subscriptionTimeoutSettings.mode`           | `streamSubscriptionTimeoutMode` | `pekko.stream.materializer.subscription-timeout.mode` |
-| `subscriptionTimeoutSettings.timeout`        | `streamSubscriptionTimeout`     | `pekko.stream.materializer.subscription-timeout.timeout` |
-
-Setting attributes on individual streams can be done like so:
-
-Scala
-:  @@snip [StreamAttributeDocSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/StreamAttributeDocSpec.scala) { #attributes-on-stream }
-
-Java
-:  @@snip [StreamAttributeDocTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/StreamAttributeDocTest.java) { #attributes-on-stream }
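-
-For reference, an inline sketch of the attribute style (the `my-blocking-dispatcher` configuration entry is assumed):
-
-```scala
-import org.apache.pekko.actor.ActorSystem
-import org.apache.pekko.stream.ActorAttributes
-import org.apache.pekko.stream.scaladsl.{ Sink, Source }
-
-implicit val system: ActorSystem = ActorSystem("example")
-Source(1 to 10)
-  .map(_ * 2)
-  // Replaces the ActorMaterializerSettings dispatcher setting for this stream only.
-  .withAttributes(ActorAttributes.dispatcher("my-blocking-dispatcher"))
-  .runWith(Sink.ignore)
-```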
-
-### Stream cancellation available upstream
-
-Previously, when an Akka streams stage or operator failed, it was impossible to discern this from
-the stage just cancelling. This has been improved so that when a stream stage fails, the cause
-will be propagated upstream.
-
-The following operators have a slight change in behavior because of this:
-
-* `FileIO.fromPath`, `FileIO.fromFile` and `StreamConverters.fromInputStream`  will fail the materialized future with
-  an `IOOperationIncompleteException` when downstream fails
-* `.watchTermination` will fail the materialized `Future` or `CompletionStage` rather than completing it when downstream fails
-* `StreamRef` - `SourceRef` will cancel with a failure when the receiving node is downed
-
-This also means that custom `GraphStage` implementations should be changed to pass on the
-cancellation cause when downstream cancels by implementing the `OutHandler.onDownstreamFinish` signature
-taking a `cause` parameter and calling `cancelStage(cause)` to pass the cause upstream. The old zero-argument
-`onDownstreamFinish` method has been deprecated.
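-
-A sketch of a pass-through `GraphStage` using the new cause-aware callback:
-
-```scala
-import org.apache.pekko.stream.{ Attributes, FlowShape, Inlet, Outlet }
-import org.apache.pekko.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler }
-
-final class PassThrough[A] extends GraphStage[FlowShape[A, A]] {
-  val in: Inlet[A] = Inlet("PassThrough.in")
-  val out: Outlet[A] = Outlet("PassThrough.out")
-  override val shape: FlowShape[A, A] = FlowShape(in, out)
-
-  override def createLogic(attributes: Attributes): GraphStageLogic =
-    new GraphStageLogic(shape) with InHandler with OutHandler {
-      override def onPush(): Unit = push(out, grab(in))
-      override def onPull(): Unit = pull(in)
-      // New signature: propagate the downstream cause upstream instead of swallowing it.
-      override def onDownstreamFinish(cause: Throwable): Unit = cancelStage(cause)
-      setHandlers(in, out, this)
-    }
-}
-```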
-
-
-### Lazy and async stream operator changes
-
-The operators that provide support for lazy and @scala[`Future`]@java[`CompletionStage`] stream construction were revised
-to be more consistent.
-
-The materialized value is now no longer wrapped in an @scala[`Option`]@java[`Optional`]; instead, the @scala[`Future`]@java[`CompletionStage`]
-is failed with an `org.apache.pekko.stream.NeverMaterializedException` in the cases that would previously lead to @scala[`None`]@java[an empty `Optional`].
-
-A deferred creation of the stream based on the initial element, like the deprecated `lazyInit` provided, can be achieved by combining
-@scala[`future(Flow|Sink)`] @java[`completionStage(Flow|Sink)`] with `prefixAndTail`. See the example in @scala[@ref:[futureFlow](../stream/operators/Flow/futureFlow.md)]
-@java[@ref:[completionStageFlow](../stream/operators/Flow/completionStageFlow.md)].
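-
-A sketch of the `prefixAndTail` combination in scaladsl (the helper `flowFor` is illustrative):
-
-```scala
-import org.apache.pekko.NotUsed
-import org.apache.pekko.stream.scaladsl.{ Flow, Source }
-
-def flowFor(first: Int): Flow[Int, Int, NotUsed] = Flow[Int].map(_ + first)
-
-// Defer flow creation until the first element arrives, like lazyInit did.
-val deferred: Flow[Int, Int, NotUsed] =
-  Flow[Int].prefixAndTail(1).flatMapConcat {
-    case (Seq(first), tail) => Source.single(first).concat(tail).via(flowFor(first))
-    case (_, tail)          => tail // empty prefix: stream completed without elements
-  }
-```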
-
-#### javadsl.Flow 
-  
-| old                     | new |
-------------------------|----------------
-| lazyInit                | @ref:[lazyCompletionStageFlow](../stream/operators/Flow/lazyCompletionStageFlow.md) in combination with `prefixAndTail(1)` |
-| lazyInitAsync           | @ref:[lazyCompletionStageFlow](../stream/operators/Flow/lazyCompletionStageFlow.md)  | 
-|                         | @ref:[completionStageFlow](../stream/operators/Flow/completionStageFlow.md) |
-|                          | @ref:[lazyFlow](../stream/operators/Flow/lazyFlow.md) |
-
-#### javadsl.Sink
-  
-| old                     | new |
-------------------------|----------------
-| lazyInit                | @ref:[lazyCompletionStageSink](../stream/operators/Sink/lazyCompletionStageSink.md) in combination with `Flow.prefixAndTail(1)` |
-| lazyInitAsync           | @ref:[lazyCompletionStageSink](../stream/operators/Sink/lazyCompletionStageSink.md) |
-|                          | @ref:[completionStageSink](../stream/operators/Sink/completionStageSink.md) |
-|                          | @ref:[lazySink](../stream/operators/Sink/lazySink.md) |
-  
-#### javadsl.Source
-  
-| old                       | new |
---------------------------|----------------
-| fromFuture                | @ref:[future](../stream/operators/Source/future.md) |
-| fromCompletionStage       | @ref:[completionStage](../stream/operators/Source/completionStage.md) |
-| fromFutureSource          | @ref:[futureSource](../stream/operators/Source/futureSource.md) |
-| fromSourceCompletionStage | @ref:[completionStageSource](../stream/operators/Source/completionStageSource.md) |
-| lazily                    | @ref:[lazySource](../stream/operators/Source/lazySource.md) |
-| lazilyAsync               | @ref:[lazyCompletionStage](../stream/operators/Source/lazyCompletionStage.md) |
-|                            | @ref:[lazySingle](../stream/operators/Source/lazySingle.md) |
-|                            | @ref:[lazyCompletionStageSource](../stream/operators/Source/lazyCompletionStageSource.md) |
-    
-#### scaladsl.Flow
-
-| old                     | new |
---------------------------|----------------
-| lazyInit                | @ref:[lazyFutureFlow](../stream/operators/Flow/lazyFutureFlow.md) |
-| lazyInitAsync           | @ref:[lazyFutureFlow](../stream/operators/Flow/lazyFutureFlow.md) |
-|                         | @ref:[futureFlow](../stream/operators/Flow/futureFlow.md) |
-|                         | @ref:[lazyFlow](../stream/operators/Flow/lazyFlow.md) |
-
-#### scaladsl.Sink
-
-| old                     | new |
-------------------------|----------------
-| lazyInit                | @ref:[lazyFutureSink](../stream/operators/Sink/lazyFutureSink.md) in combination with `Flow.prefixAndTail(1)` |
-| lazyInitAsync           | @ref:[lazyFutureSink](../stream/operators/Sink/lazyFutureSink.md) |
-|                         | @ref:[futureSink](../stream/operators/Sink/futureSink.md) |
-|                         | @ref:[lazySink](../stream/operators/Sink/lazySink.md) |
-
-#### scaladsl.Source
-
-| old                       | new |
---------------------------|----------------
-| fromFuture                | @ref:[future](../stream/operators/Source/future.md) |
-| fromCompletionStage       | @ref:[completionStage](../stream/operators/Source/completionStage.md) |
-| fromFutureSource          | @ref:[futureSource](../stream/operators/Source/futureSource.md) |
-| fromSourceCompletionStage |   |
-| lazily                    | @ref:[lazySource](../stream/operators/Source/lazySource.md) |
-| lazilyAsync               | @ref:[lazyFuture](../stream/operators/Source/lazyFuture.md) |
-|                           | @ref:[lazySingle](../stream/operators/Source/lazySingle.md) |
-|                           | @ref:[lazyFutureSource](../stream/operators/Source/lazyFutureSource.md) |
diff --git a/docs/src/main/paradox/project/migration-guide-old.md b/docs/src/main/paradox/project/migration-guide-old.md
deleted file mode 100644
index 884d66179d..0000000000
--- a/docs/src/main/paradox/project/migration-guide-old.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Older Migration Guides
-
-Migration from old versions:
-
-* [2.3.x to 2.4.x](https://doc.akka.io/docs/akka/2.4/project/migration-guide-2.3.x-2.4.x.html)
-* [2.2.x to 2.3.x](https://doc.akka.io/docs/akka/2.3/project/migration-guide-2.2.x-2.3.x.html)
-* [2.1.x to 2.2.x](https://doc.akka.io/docs/akka/2.2/project/migration-guide-2.1.x-2.2.x.html)
-* [2.0.x to 2.1.x](https://doc.akka.io/docs/akka/2.1/project/migration-guide-2.0.x-2.1.x.html)
-* [1.3.x to 2.0.x](https://doc.akka.io/docs/akka/2.0.5/project/migration-guide-1.3.x-2.0.x.html).
diff --git a/docs/src/main/paradox/project/migration-guides.md b/docs/src/main/paradox/project/migration-guides.md
index 02d7a374fd..8c394788d7 100644
--- a/docs/src/main/paradox/project/migration-guides.md
+++ b/docs/src/main/paradox/project/migration-guides.md
@@ -1,14 +1,12 @@
 ---
-project.description: Akka version migration guides.
+project.description: Pekko version migration guides.
 ---
 # Migration Guides
 
-@@toc { depth=1 }
+Pekko is based on the latest version of Akka in the v2.6.x series. If you are migrating from an earlier series, the migration guides can be found in the Akka documentation at the following locations:
 
-@@@ index
+[2.5.x to 2.6.x](https://doc.akka.io/docs/akka/2.6/project/migration-guide-2.5.x-2.6.x.html)
 
-* [migration-guide-2.5.x-2.6.x](migration-guide-2.5.x-2.6.x.md)
-* [migration-guide-2.4.x-2.5.x](migration-guide-2.4.x-2.5.x.md)
-* [migration-guide-old](migration-guide-old.md)
+[2.4.x to 2.5.x](https://doc.akka.io/docs/akka/2.5/project/migration-guide-2.4.x-2.5.x.html)
 
-@@@
+[Older versions](https://doc.akka.io/docs/akka/2.6/project/migration-guide-old.html)
diff --git a/docs/src/main/paradox/project/rolling-update.md b/docs/src/main/paradox/project/rolling-update.md
index 3de2010791..4439678ccd 100644
--- a/docs/src/main/paradox/project/rolling-update.md
+++ b/docs/src/main/paradox/project/rolling-update.md
@@ -1,8 +1,8 @@
 # Rolling Updates and Versions
 
-## Akka upgrades
-Akka supports rolling updates between two consecutive patch versions unless an exception is
-mentioned on this page. For example updating Akka version from 2.5.15 to 2.5.16. Many times
+## Pekko upgrades
+Pekko supports rolling updates between two consecutive patch versions unless an exception is
+mentioned on this page, for example updating from 2.5.15 to 2.5.16. Many times
 it is also possible to skip several versions and exceptions to that are also described here.
 For example it's possible to update from 2.5.14 to 2.5.16 without intermediate 2.5.15.
 
@@ -14,104 +14,4 @@ update completely before starting next update.
 @ref:[Rolling update from classic remoting to Artery](../additional/rolling-updates.md#migrating-from-classic-remoting-to-artery) is not supported since the protocol
 is completely different. It will require a full cluster shutdown and new startup.
 
-@@@
-
-## Change log
-
-### 2.5.0 Several changes in minor release
-
-See [migration guide](https://doc.akka.io/docs/akka/2.5/project/migration-guide-2.4.x-2.5.x.html#rolling-update) when updating from 2.4.x to 2.5.x.
-
-### 2.5.10 Joining regression
-
-Issue: [#24622](https://github.com/akka/akka/issues/24622)
-
-Incompatible change was introduced in 2.5.10 and fixed in 2.5.11.
-
-This means that you can't do a rolling update from 2.5.9 to 2.5.10 and must instead update from 2.5.9 to 2.5.11.
-
-### 2.5.10 Joining old versions
-
-Issue: [#25491](https://github.com/akka/akka/issues/25491)
-
-Incompatibility was introduced in 2.5.10 and fixed in 2.5.15.
-
-That means that you should do rolling update from 2.5.9 directly to 2.5.15 if you need to be able to
-join 2.5.9 nodes during the update phase.
-
-### 2.5.14 Distributed Data serializer for `ORSet[ActorRef]`
-
-Issue: [#23703](https://github.com/akka/akka/issues/23703)
-
-An intentional change was made in 2.5.14.
-
-This change required a two phase update where the data was duplicated to be compatible with both old and new nodes.
-
-* 2.5.13 - old format, before the change. Can communicate with intermediate format and with old format.
-* 2.5.14, 2.5.15, 2.5.16 - intermediate format. Can communicate with old format and with new format.
-* 2.5.17 - new format. Can communicate with intermediate format and with new format.
-
-This means that you can't update from 2.5.13 directly to 2.5.17. You must first update to one of the intermediate
-versions 2.5.14, 2.5.15, or 2.5.16.
-
-### 2.5.22 ClusterSharding serializer for `ShardRegionStats`
-
-Issue: [#25348](https://github.com/akka/akka/issues/25348)
-
-An intentional change was made in 2.5.22.
-
-Changed serializer for classes: `GetShardRegionStats`, `ShardRegionStats`, `GetShardStats`, `ShardStats`
-
-This change required a two phase update where a new serializer was introduced but not enabled in an earlier version.
-
-* 2.5.18 - serializer was added but not enabled, `JavaSerializer` still used
-* 2.5.22 - `ClusterShardingMessageSerializer` was enabled for these classes
-
-This means that you can't update from 2.5.17 directly to 2.5.22. You must first update to one of the intermediate
-versions 2.5.18, 2.5.19, 2.5.20 or 2.5.21.
-
-### 2.6.0 Several changes in minor release
-
-See @ref:[migration guide](migration-guide-2.5.x-2.6.x.md) when updating from 2.5.x to 2.6.x.
-
-### 2.6.2 ClusterMessageSerializer manifests change
-
-Issue: [#23654](https://github.com/akka/akka/issues/23654)
-
-In preparation for switching away from class based manifests to more efficient letter codes, the `ClusterMessageSerializer`
-has been prepared to accept those shorter forms but still emits the old long manifests.
-
-* 2.6.2 - shorter manifests accepted
-* 2.6.5 - shorter manifests emitted
-
-This means that a rolling update will have to go through at least one of 2.6.2, 2.6.3 or 2.6.4 when upgrading to
-2.6.5 or higher, or else cluster nodes will not be able to communicate during the rolling update.
-
-### 2.6.5 JacksonCborSerializer
-
-Issue: [#28918](https://github.com/akka/akka/issues/28918). JacksonCborSerializer was using plain JSON format
-instead of CBOR.
-
-If you have `jackson-cbor` in your `serialization-bindings` a rolling update will have to go through 2.6.5 when
-upgrading to 2.6.6 or higher.
-
-In Akka 2.6.5 the `jackson-cbor` binding will still serialize to JSON format to support rolling update from 2.6.4.
-It also adds a new binding to be able to deserialize CBOR format when rolling update from 2.6.5 to 2.6.6.
-In Akka 2.6.6 the `jackson-cbor` binding will serialize to CBOR and that can be deserialized by 2.6.5. Old
-data, such as persistent events, can still be deserialized.
-
-You can start using CBOR format already with Akka 2.6.5 without waiting for the 2.6.6 release. First, perform
-a rolling update to Akka 2.6.5 using default configuration. Then change the configuration to:
-
-```
-pekko.actor {
-  serializers {
-    jackson-cbor = "org.apache.pekko.serialization.jackson.JacksonCborSerializer"
-  }
-  serialization-identifiers {
-    jackson-cbor = 33
-  }
-}
-```
-
-Perform a second rolling update with the new configuration.
+@@@
\ No newline at end of file
diff --git a/docs/src/main/paradox/project/scala3.md b/docs/src/main/paradox/project/scala3.md
index 1d0ee7903f..3dcd552464 100644
--- a/docs/src/main/paradox/project/scala3.md
+++ b/docs/src/main/paradox/project/scala3.md
@@ -1,19 +1,15 @@
 # Scala 3 support
 
-Apache Pekko has experimental support for Scala 3.
+Pekko has experimental support for Scala 3.
 
 ## Using 2.13 artifacts in Scala 3
 
 You can use [CrossVersion.for3Use2_13](https://scala-lang.org/blog/2021/04/08/scala-3-in-sbt.html#using-scala-213-libraries-in-scala-3)
-to use the regular 2.13 Apache Pekko artifacts in a Scala 3 project. This has been
+to use the regular 2.13 Pekko artifacts in a Scala 3 project. This has been
 shown to be successful for Streams, HTTP and gRPC-heavy applications.
 
 ## Scala 3 artifacts
 
-Experimental Scala 3 artifacts are published.
-
-[Development snapshots](https://nightlies.apache.org/pekko/snapshots/org/apache/pekko/pekko-actor_3/) can be found in the snapshots repository.
-
-We encourage you to try out these artifacts and [report any findings](https://github.com/apache/incubator-pekko/issues?q=is%3Aopen+is%3Aissue+label%3At%3Ascala-3).
+We are publishing experimental Scala 3 artifacts that can be used 'directly' (without `CrossVersion`) with Scala 3.
 
 We do not promise @ref:[binary compatibility](../common/binary-compatibility-rules.md) for these artifacts yet.
diff --git a/docs/src/main/paradox/remoting-artery.md b/docs/src/main/paradox/remoting-artery.md
index f0d780851a..c7dd54489f 100644
--- a/docs/src/main/paradox/remoting-artery.md
+++ b/docs/src/main/paradox/remoting-artery.md
@@ -1,5 +1,5 @@
 ---
-project.description: Details about the underlying remoting module for Akka Cluster.
+project.description: Details about the underlying remoting module for Pekko Cluster.
 ---
 # Artery Remoting
 
@@ -8,9 +8,9 @@ project.description: Details about the underlying remoting module for Akka Clust
 Remoting is the mechanism by which Actors on different nodes talk to each
 other internally.
 
-When building an Akka application, you would usually not use the Remoting concepts
+When building a Pekko application, you would usually not use the Remoting concepts
 directly, but instead use the more high-level
-@ref[Akka Cluster](index-cluster.md) utilities or technology-agnostic protocols
+@ref[Pekko Cluster](index-cluster.md) utilities or technology-agnostic protocols
 such as [HTTP](https://doc.akka.io/docs/akka-http/current/),
 [gRPC](https://doc.akka.io/docs/akka-grpc/current/) etc.
 
@@ -23,7 +23,7 @@ If migrating from classic remoting see @ref:[what's new in Artery](#what-is-new-
 To use Artery Remoting, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -45,7 +45,7 @@ The Aeron dependency needs to be explicitly added if using the `aeron-udp` trans
 
 ## Configuration
 
-To enable remote capabilities in your Akka project you should, at a minimum, add the following changes
+To enable remote capabilities in your Pekko project you should, at a minimum, add the following changes
 to your `application.conf` file:
 
 ```
@@ -66,7 +66,7 @@ pekko {
 
 As you can see in the example above there are four things you need to add to get started:
 
- * Change provider from `local`. We recommend using @ref:[Akka Cluster](cluster-usage.md) over using remoting directly.
+ * Change provider from `local`. We recommend using @ref:[Pekko Cluster](cluster-usage.md) over using remoting directly.
  * Enable Artery to use it as the remoting implementation
  * Add host name - the machine you want to run the actor system on; this host
 name is exactly what is passed to remote systems in order to identify this
@@ -88,7 +88,7 @@ All settings are described in @ref:[Remote Configuration](#remote-configuration-
 
 ## Introduction
 
-We recommend @ref:[Akka Cluster](cluster-usage.md) over using remoting directly. As remoting is the
+We recommend @ref:[Pekko Cluster](cluster-usage.md) over using remoting directly. As remoting is the
 underlying module that allows for Cluster, it is still useful to understand details about it though.
 
 @@@ note
@@ -119,8 +119,8 @@ acts as a "server" to which arbitrary systems on the same network can connect to
 There are three alternatives of which underlying transport to use. It is configured by property
 `pekko.remote.artery.transport` with the possible values:
 
-* `tcp` - Based on @ref:[Akka Streams TCP](stream/stream-io.md#streaming-tcp) (default if other not configured)
-* `tls-tcp` - Same as `tcp` with encryption using @ref:[Akka Streams TLS](stream/stream-io.md#tls)
+* `tcp` - Based on @ref:[Pekko Streams TCP](stream/stream-io.md#streaming-tcp) (default if other not configured)
+* `tls-tcp` - Same as `tcp` with encryption using @ref:[Pekko Streams TLS](stream/stream-io.md#tls)
 * `aeron-udp` - Based on [Aeron (UDP)](https://github.com/real-logic/aeron)
 
 If you are uncertain of what to select a good choice is to use the default, which is `tcp`.
@@ -129,7 +129,7 @@ The Aeron (UDP) transport is a high performance transport and should be used for
 that require high throughput and low latency. It uses more CPU than TCP when the system
 is idle or at low message rates. There is no encryption for Aeron.
 
-The TCP and TLS transport is implemented using Akka Streams TCP/TLS. This is the choice
+The TCP and TLS transport is implemented using Pekko Streams TCP/TLS. This is the choice
 when encryption is needed, but it can also be used with plain TCP without TLS. It's also
 the obvious choice when UDP can't be used.
 It has very good performance (high throughput and low latency) but latency at high throughput
@@ -146,10 +146,6 @@ officially supported. If you're on a Big Endian processor, such as Sparc, it is
 
 @@@
 
-## Migrating from classic remoting
-
-See @ref:[migrating from classic remoting](project/migration-guide-2.5.x-2.6.x.md#classic-to-artery)
-
 ## Canonical address
 
 In order for remoting to work properly, where each system can send messages to any other system on the same network
@@ -166,7 +162,7 @@ real network.
 
 In cases where Network Address Translation (NAT) is used or other network bridging is involved, it is important
 to configure the system so that it understands that there is a difference between its externally visible, canonical
-address and between the host-port pair that is used to listen for connections. See @ref:[Akka behind NAT or in a Docker container](#remote-configuration-nat-artery)
+address and the host-port pair that is used to listen for connections. See @ref:[Pekko behind NAT or in a Docker container](#remote-configuration-nat-artery)
 for details.
 
 ## Acquiring references to remote actors
@@ -193,25 +189,25 @@ In the next sections the two alternatives are described in detail.
 Scala
 :   ```
     val selection =
-      context.actorSelection("akka://actorSystemName@10.0.0.1:25520/user/actorName")
+      context.actorSelection("pekko://actorSystemName@10.0.0.1:25520/user/actorName")
     ```
     
 Java
 :   ```
     ActorSelection selection =
-      context.actorSelection("akka://actorSystemName@10.0.0.1:25520/user/actorName");
+      context.actorSelection("pekko://actorSystemName@10.0.0.1:25520/user/actorName");
     ```
     
 
 As you can see from the example above the following pattern is used to find an actor on a remote node:
 
 ```
-akka://<actor system>@<hostname>:<port>/<actor path>
+pekko://<actor system>@<hostname>:<port>/<actor path>
 ```
 
 @@@ note
 
-Unlike with earlier remoting, the protocol field is always *akka* as pluggable transports are no longer supported.
+Unlike with earlier remoting, the protocol field is always *pekko* as pluggable transports are no longer supported.
 
 @@@
 
@@ -257,22 +253,22 @@ be delivered just fine.
 
 ## Remote Security
 
-An @apidoc[actor.ActorSystem] should not be exposed via Akka Remote (Artery) over plain Aeron/UDP or TCP to an untrusted
+An @apidoc[actor.ActorSystem] should not be exposed via Pekko Remote (Artery) over plain Aeron/UDP or TCP to an untrusted
 network (e.g. Internet). It should be protected by network security, such as a firewall. If that is not considered
 enough protection, @ref:[TLS with mutual authentication](#remote-tls) should be enabled.
 
-Best practice is that Akka remoting nodes should only be accessible from the adjacent network. Note that if TLS is
+Best practice is that Pekko remoting nodes should only be accessible from the adjacent network. Note that if TLS is
 enabled with mutual authentication there is still a risk that an attacker can gain access to a valid certificate by
 compromising any node with certificates issued by the same internal PKI tree.
 
-By default, @ref[Java serialization](serialization.md#java-serialization) is disabled in Akka.
+By default, @ref[Java serialization](serialization.md#java-serialization) is disabled in Pekko.
 That is also security best-practice because of its multiple
 [known attack surfaces](https://community.microfocus.com/cyberres/fortify/f/fortify-discussions/317555/the-perils-of-java-deserialization).
 
 <a id="remote-tls"></a>
-### Configuring SSL/TLS for Akka Remoting
+### Configuring SSL/TLS for Pekko Remoting
 
-In addition to what is described here, read the blog post about [Securing Akka cluster communication in Kubernetes](https://akka.io/blog/article/2021/10/27/akka-cluster-mtls).
+In addition to what is described here, you can read the blog post about [Securing Akka cluster communication in Kubernetes](https://akka.io/blog/article/2021/10/27/akka-cluster-mtls), which also applies to Pekko.
 
 SSL can be used as the remote transport by using the `tls-tcp` transport:
 
@@ -315,11 +311,7 @@ According to [RFC 7525](https://www.rfc-editor.org/rfc/rfc7525.html) the recomme
 
 You should always check the latest information about security and algorithm recommendations though before you configure your system.
 
-Creating and working with keystores and certificates is well documented in the
-[Generating X.509 Certificates](https://lightbend.github.io/ssl-config/CertificateGeneration.html#using-keytool)
-section of Lightbend's SSL-Config library.
-
-Since an Akka remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication) both the key-store as well as trust-store
+Since Pekko remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication) both the key-store as well as the trust-store
 need to be configured on each remoting node participating in the cluster.
 
 The official [Java Secure Socket Extension documentation](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html)
@@ -329,7 +321,7 @@ and configuring SSL.
 
 Mutual authentication between TLS peers is enabled by default. Mutual authentication means that the passive side
 (the TLS server side) of a connection will also request and verify a certificate from the connecting peer.
-Without this mode only the client side is requesting and verifying certificates. While Akka is a peer-to-peer
+Without this mode only the client side is requesting and verifying certificates. While Pekko is a peer-to-peer
 technology, each connection between nodes starts out from one side (the "client") towards the other (the "server").
 
 Note that if TLS is enabled with mutual authentication there is still a risk that an attacker can gain access to a
@@ -353,7 +345,7 @@ You have a few choices how to set up certificates and hostname verification:
     * This means that only the hosts mentioned in the certificate can connect to the cluster.
     * It cannot be checked, though, if the node you talk to is actually the node it is supposed to be (or if it is one
       of the other nodes). This seems like a minor restriction as you'll have to trust all cluster nodes the same in an
-      Akka cluster anyway.
+      Pekko cluster anyway.
     * The certificate can be self-signed in which case the same single certificate is distributed and trusted on all
       nodes (but see the next bullet)
     * Adding a new node means that its host name needs to conform to the trusted host names in the certificate.
@@ -450,7 +442,7 @@ marking them @apidoc[actor.PossiblyHarmful] so that a client cannot forge them.
 
 ## Quarantine
 
-Akka remoting is using TCP or Aeron as underlying message transport. Aeron is using UDP and adds
+Pekko remoting is using TCP or Aeron as underlying message transport. Aeron is using UDP and adds
 among other things reliable delivery and session semantics, very similar to TCP. This means that
 the order of the messages is preserved, which is needed for the @ref:[Actor message ordering guarantees](general/message-delivery-reliability.md#message-ordering).
 Under normal circumstances all messages will be delivered but there are cases when messages
@@ -463,9 +455,9 @@ may not be delivered to the destination:
 
 In short, Actor message delivery is “at-most-once” as described in @ref:[Message Delivery Reliability](general/message-delivery-reliability.md)
 
-Some messages in Akka are called system messages and those cannot be dropped because that would result
+Some messages in Pekko are called system messages and those cannot be dropped because that would result
 in an inconsistent state between the systems. Such messages are used for essentially two features: remote death
-watch and remote deployment. These messages are delivered by Akka remoting with “exactly-once” guarantee by
+watch and remote deployment. These messages are delivered by Pekko remoting with “exactly-once” guarantee by
 confirming each message and resending unconfirmed messages. If a system message still cannot be delivered, the
 association with the destination system has irrecoverably failed, and Terminated is signaled for all watched
 actors on the remote system. It is placed in a so-called quarantined state. Quarantine usually does not
@@ -483,7 +475,7 @@ system has been restarted.
 An association will be quarantined when:
 
  * Cluster node is removed from the cluster membership.
- * Remote failure detector triggers, i.e. remote watch is used. This is different when @ref:[Akka Cluster](cluster-usage.md)
+ * Remote failure detector triggers, i.e. remote watch is used. This is different when @ref:[Pekko Cluster](cluster-usage.md)
 is used. The unreachable observation by the cluster failure detector can go back to reachable if the network
 partition heals. A cluster member is not quarantined when the failure detector triggers.
  * Overflow of the system message delivery buffer, e.g. because of too many `watch` requests at the same time
@@ -595,7 +587,7 @@ as usual (which is explained in @ref:[Serialization](serialization.md)).
 
 Implementations should typically extend @apidoc[SerializerWithStringManifest] and in addition to the `ByteBuffer` based
 @apidoc[toBinary](ByteBufferSerializer) {scala="#toBinary(o:AnyRef,buf:java.nio.ByteBuffer):Unit" java="#toBinary(java.lang.Object,java.nio.ByteBuffer)"} and @apidoc[fromBinary](ByteBufferSerializer) {scala="#fromBinary(buf:java.nio.ByteBuffer,manifest:String):AnyRef" java="#fromBinary(java.nio.ByteBuffer,java.lang.String)"} methods also implement the array based @apidoc[toBinary](SerializerWithStringManifest) {scala="#toBinary(o:AnyRef):Array[Byte]" java="#toBinary(java.lang.Object)"} a [...]
-The array based methods will be used when `ByteBuffer` is not used, e.g. in Akka Persistence.
+The array based methods will be used when `ByteBuffer` is not used, e.g. in Pekko Persistence.
 
 Note that the array based methods can be implemented by delegation like this:
 
@@ -633,7 +625,7 @@ Artery is a reimplementation of the old remoting module aimed at improving perfo
 source compatible with the old implementation and it is a drop-in replacement in many cases. Main features
 of Artery compared to the previous implementation:
 
- * Based on Akka Streams TCP/TLS or [Aeron](https://github.com/real-logic/Aeron) (UDP) instead of Netty TCP
+ * Based on Pekko Streams TCP/TLS or [Aeron](https://github.com/real-logic/Aeron) (UDP) instead of Netty TCP
  * Focused on high-throughput, low-latency communication
  * Isolation of internal control messages from user messages improving stability and reducing false failure detection
 in case of heavy traffic by using a dedicated subchannel.
@@ -643,10 +635,10 @@ in case of heavy traffic by using a dedicated subchannel.
  * Support for faster serialization/deserialization using ByteBuffers directly
  * Built-in Java Flight Recorder (JFR) to help debugging implementation issues without polluting users logs with implementation
 specific events
- * Providing protocol stability across major Akka versions to support rolling updates of large-scale systems
+ * Providing protocol stability across major Pekko versions to support rolling updates of large-scale systems
 
 The main incompatible change from the previous implementation is that the protocol field of the string representation of an
-@apidoc[actor.ActorRef] is always *akka* instead of the previously used *akka.tcp* or *akka.ssl.tcp*. Configuration properties
+@apidoc[actor.ActorRef] is always *pekko* instead of the previously used *pekko.tcp* or *pekko.ssl.tcp*. Configuration properties
 are also different.
 
 
@@ -662,18 +654,18 @@ Note that lowest latency can be achieved with `inbound-lanes=1` and `outbound-la
 
 Also note that the total amount of parallel tasks is bounded by the `remote-dispatcher` and the thread pool size should not exceed the number of CPU cores minus headroom for actually processing the messages in the application, i.e. in practice the pool size should be less than half of the number of cores.
 
-See `inbound-lanes` and `outbound-lanes` in the @ref:[reference configuration](general/configuration-reference.md#config-akka-remote-artery) for default values.
+See `inbound-lanes` and `outbound-lanes` in the @ref:[reference configuration](general/configuration-reference.md#config-pekko-remote-artery) for default values.
 
 ### Dedicated subchannel for large messages
 
-All the communication between user defined remote actors are isolated from the channel of Akka internal messages so
-a large user message cannot block an urgent system message. While this provides good isolation for Akka services, all
+All the communication between user-defined remote actors is isolated from the channel of Pekko internal messages so
+a large user message cannot block an urgent system message. While this provides good isolation for Pekko services, all
 user communications by default happen through a shared network connection. When some actors
 send large messages this can cause other messages to suffer higher latency as they need to wait until the full
 message has been transported on the shared channel (and hence, shared bottleneck). In these cases it is usually
 helpful to separate actors that have different QoS requirements: large messages vs. low latency.
 
-Akka remoting provides a dedicated channel for large messages if configured. Since actor message ordering must
+Pekko remoting provides a dedicated channel for large messages if configured. Since actor message ordering must
 not be violated the channel is actually dedicated for *actors* instead of messages, to ensure all of the messages
 arrive in send order. It is possible to assign actors on given paths to use this dedicated channel by using
 path patterns that have to be specified in the actor system's configuration on both the sending and the receiving side:
@@ -714,8 +706,8 @@ pekko.remote.artery {
 Example log messages:
 
 ```
-[INFO] Payload size for [java.lang.String] is [39068] bytes. Sent to Actor[akka://Sys@localhost:53039/user/destination#-1908386800]
-[INFO] New maximum payload size for [java.lang.String] is [44068] bytes. Sent to Actor[akka://Sys@localhost:53039/user/destination#-1908386800].
+[INFO] Payload size for [java.lang.String] is [39068] bytes. Sent to Actor[pekko://Sys@localhost:53039/user/destination#-1908386800]
+[INFO] New maximum payload size for [java.lang.String] is [44068] bytes. Sent to Actor[pekko://Sys@localhost:53039/user/destination#-1908386800].
 ```
 
 The large messages channel still cannot be used for extremely large messages, a few MB per message at most.
@@ -726,10 +718,10 @@ them again on the receiving side.
 ### External, shared Aeron media driver
 
 The Aeron transport is running in a so called [media driver](https://github.com/real-logic/Aeron/wiki/Media-Driver-Operation).
-By default, Akka starts the media driver embedded in the same JVM process as application. This is
+By default, Pekko starts the media driver embedded in the same JVM process as the application. This is
 convenient and simplifies operational concerns by only having one process to start and monitor.
 
-The media driver may use rather much CPU resources. If you run more than one Akka application JVM on the
+The media driver may use quite a lot of CPU resources. If you run more than one Pekko application JVM on the
 same machine it can therefore be wise to share the media driver by running it as a separate process.
 
 The media driver also has different resource usage characteristics than a normal application and it can
@@ -781,13 +773,13 @@ aeron.threading.mode=SHARED_NETWORK
 #aeron.receiver.idle.strategy=org.agrona.concurrent.BusySpinIdleStrategy
 
 # use the same directory as in the pekko.remote.artery.advanced.aeron-dir config
-# of the Akka application
+# of the Pekko application
 aeron.dir=/dev/shm/aeron
 ```
 
 Read more about the media driver in the [Aeron documentation](https://github.com/real-logic/Aeron/wiki/Media-Driver-Operation).
 
-To use the external media driver from the Akka application you need to define the following two
+To use the external media driver from the Pekko application you need to define the following two
 configuration properties:
 
 ```
@@ -799,10 +791,10 @@ pekko.remote.artery.advanced.aeron {
 
 The `aeron-dir` must match the directory you started the media driver with, i.e. the `aeron.dir` property.
 
-Several Akka applications can then be configured to use the same media driver by pointing to the
+Several Pekko applications can then be configured to use the same media driver by pointing to the
 same directory.
 
-Note that if the media driver process is stopped the Akka applications that are using it will also be stopped.
+Note that if the media driver process is stopped the Pekko applications that are using it will also be stopped.
 
 ### Aeron Tuning
 
@@ -820,7 +812,7 @@ usage and latency with the following configuration:
 pekko.remote.artery.advanced.aeron.idle-cpu-level = 1
 ```
 
-By setting this value to a lower number, it tells Akka to do longer "sleeping" periods on its thread dedicated
+Setting this value to a lower number tells Pekko to do longer "sleeping" periods on its thread dedicated
 for [spin-waiting](https://en.wikipedia.org/wiki/Busy_waiting) and hence reducing CPU load when there is no
 immediate task to execute, at the cost of a longer reaction time to an event when it actually happens. It is worth
 noting, though, that during a continuously high-throughput period this setting makes little difference and
@@ -830,8 +822,8 @@ the system might have less latency than at low message rates.
 <a id="remote-configuration-artery"></a>
 ## Remote Configuration
 
-There are lots of configuration properties that are related to remoting in Akka. We refer to the
-@ref:[reference configuration](general/configuration-reference.md#config-akka-remote-artery) for more information.
+There are lots of configuration properties that are related to remoting in Pekko. We refer to the
+@ref:[reference configuration](general/configuration-reference.md#config-pekko-remote-artery) for more information.
 
 @@@ note
 
@@ -843,10 +835,10 @@ best done by using something like the following:
 @@@
 
 <a id="remote-configuration-nat-artery"></a>
-### Akka behind NAT or in a Docker container
+### Pekko behind NAT or in a Docker container
 
 In setups involving Network Address Translation (NAT), Load Balancers or Docker
-containers the hostname and port pair that Akka binds to will be different than the "logical"
+containers the hostname and port pair that Pekko binds to will be different than the "logical"
 host name and port pair that is used to connect to the system from the outside. This requires
 special configuration that sets both the logical and the bind pairs for remoting.
 
@@ -865,8 +857,8 @@ pekko {
 ```
 
 You can look at the
-@java[@extref[Cluster with docker-compse example project](samples:akka-sample-cluster-docker-compose-java)]
-@scala[@extref[Cluster with docker-compose example project](samples:akka-sample-cluster-docker-compose-scala)]
+@java[@extref[Cluster with docker-compose example project](samples:pekko-sample-cluster-docker-compose-java)]
+@scala[@extref[Cluster with docker-compose example project](samples:pekko-sample-cluster-docker-compose-scala)]
 to see what this looks like in practice.
 
 ### Running in Docker/Kubernetes
@@ -905,4 +897,4 @@ When running on JDK 11 Artery specific flight recording is available through the
 The flight recorder is automatically enabled by detecting JDK 11 but can be disabled if needed by setting `pekko.java-flight-recorder.enabled = false`.
 
 Low overhead Artery specific events are emitted by default when JFR is enabled; higher overhead events need a custom settings template and are not enabled automatically with the `profiling` JFR template.
-To enable those create a copy of the `profiling` template and enable all `Akka` sub category events, for example through the JMC GUI. 
+To enable those, create a copy of the `profiling` template and enable all `Pekko` sub category events, for example through the JMC GUI.
diff --git a/docs/src/main/paradox/remoting.md b/docs/src/main/paradox/remoting.md
index 39aacdbd50..dcca574c8f 100644
--- a/docs/src/main/paradox/remoting.md
+++ b/docs/src/main/paradox/remoting.md
@@ -2,7 +2,7 @@
 
 @@@ warning
 
-Classic remoting has been deprecated and will be removed in Akka 2.7.0. Please use @ref[Artery](remoting-artery.md) instead.
+Classic remoting has been deprecated. Please use @ref[Artery](remoting-artery.md) instead.
 
 @@@
 
@@ -11,9 +11,9 @@ Classic remoting has been deprecated and will be removed in Akka 2.7.0. Please u
 Remoting is the mechanism by which Actors on different nodes talk to each
 other internally.
 
-When building an Akka application, you would usually not use the Remoting concepts
+When building a Pekko application, you would usually not use the Remoting concepts
 directly, but instead use the more high-level
-@ref[Akka Cluster](index-cluster.md) utilities or technology-agnostic protocols
+@ref[Pekko Cluster](index-cluster.md) utilities or technology-agnostic protocols
 such as [HTTP](https://doc.akka.io/docs/akka-http/current/),
 [gRPC](https://doc.akka.io/docs/akka-grpc/current/) etc.
 
@@ -22,7 +22,7 @@ such as [HTTP](https://doc.akka.io/docs/akka-http/current/),
 
 ## Module info
 
-To use Akka Remoting, you must add the following dependency in your project:
+To use Pekko Remoting, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
@@ -46,7 +46,7 @@ not using classic remoting do not have to have Netty on the classpath:
 
 ## Configuration
 
-To enable classic remoting in your Akka project you should, at a minimum, add the following changes
+To enable classic remoting in your Pekko project you should, at a minimum, add the following changes
 to your `application.conf` file:
 
 ```
@@ -68,7 +68,7 @@ pekko {
 
 As you can see in the example above there are four things you need to add to get started:
 
- * Change provider from `local`. We recommend using @ref:[Akka Cluster](cluster-usage.md) over using remoting directly.
+ * Change provider from `local`. We recommend using @ref:[Pekko Cluster](cluster-usage.md) over using remoting directly.
  * Disable artery remoting. Artery is the default remoting implementation since `2.6.0`
  * Add host name - the machine you want to run the actor system on; this host
 name is exactly what is passed to remote systems in order to identify this
@@ -90,19 +90,19 @@ All settings are described in @ref:[Remote Configuration](#remote-configuration)
 
 ## Introduction
 
-We recommend @ref:[Akka Cluster](cluster-usage.md) over using remoting directly. As remoting is the
+We recommend @ref:[Pekko Cluster](cluster-usage.md) over using remoting directly. As remoting is the
 underlying module that allows for Cluster, it is still useful to understand details about it though.
 
-For an introduction of remoting capabilities of Akka please see @ref:[Location Transparency](general/remoting.md).
+For an introduction to the remoting capabilities of Pekko, please see @ref:[Location Transparency](general/remoting.md).
 
 @@@ note
 
-As explained in that chapter Akka remoting is designed for communication in a
+As explained in that chapter Pekko remoting is designed for communication in a
 peer-to-peer fashion and it is not a good fit for client-server setups. In
-particular Akka Remoting does not work transparently with Network Address Translation,
+particular Pekko Remoting does not work transparently with Network Address Translation,
 Load Balancers, or in Docker containers. For symmetric communication in these situations
-network and/or Akka configuration will have to be changed as described in
-[Akka behind NAT or in a Docker container](#remote-configuration-nat).
+network and/or Pekko configuration will have to be changed as described in
+[Pekko behind NAT or in a Docker container](#remote-configuration-nat).
 
 @@@
 
@@ -112,7 +112,7 @@ recommendation if you don't have other preference.
 
 ## Types of Remote Interaction
 
-Akka has two ways of using remoting:
+Pekko has two ways of using remoting:
 
  * Lookup    : used to look up an actor on a remote node with `actorSelection(path)`
  * Creation  : used to create an actor on a remote node with `actorOf(Props(...), actorName)`
@@ -126,19 +126,19 @@ In the next sections the two alternatives are described in detail.
 Scala
 :   ```
 val selection =
-  context.actorSelection("akka.tcp://actorSystemName@10.0.0.1:2552/user/actorName")
+  context.actorSelection("pekko.tcp://actorSystemName@10.0.0.1:2552/user/actorName")
 ```
 
 Java
 :   ```
 ActorSelection selection =
-  context.actorSelection("akka.tcp://app@10.0.0.1:2552/user/serviceA/worker");
+  context.actorSelection("pekko.tcp://app@10.0.0.1:2552/user/serviceA/worker");
 ```
 
 As you can see from the example above the following pattern is used to find an actor on a remote node:
 
 ```
-akka.<protocol>://<actor system name>@<hostname>:<port>/<actor path>
+pekko.<protocol>://<actor system name>@<hostname>:<port>/<actor path>
 ```
 
 Once you have obtained a selection to the actor, you can interact with it in the same way you would with a local actor, e.g.:
@@ -182,7 +182,7 @@ be delivered just fine.
 
 ## Creating Actors Remotely
 
-If you want to use the creation functionality in Akka remoting you have to further amend the
+If you want to use the creation functionality in Pekko remoting you have to further amend the
 `application.conf` file in the following way (only showing deployment section):
 
 ```
@@ -190,14 +190,14 @@ pekko {
   actor {
     deployment {
       /sampleActor {
-        remote = "akka.tcp://sampleActorSystem@127.0.0.1:2553"
+        remote = "pekko.tcp://sampleActorSystem@127.0.0.1:2553"
       }
     }
   }
 }
 ```
 
-The configuration above instructs Akka to react when an actor with path `/sampleActor` is created, i.e.
+The configuration above instructs Pekko to react when an actor with path `/sampleActor` is created, i.e.
 using @scala[`system.actorOf(Props(...), "sampleActor")`]@java[`system.actorOf(new Props(...), "sampleActor")`]. This specific actor will not be directly instantiated,
 but instead the remote daemon of the remote system will be asked to create the actor,
 which in this sample corresponds to `sampleActorSystem@127.0.0.1:2553`.
@@ -368,7 +368,7 @@ That is not done by the router.
 
 ### Remote Events
 
-It is possible to listen to events that occur in Akka Remote, and to subscribe/unsubscribe to these events
+It is possible to listen to events that occur in Pekko Remote, and to subscribe/unsubscribe to these events
 you register as a listener to the types described below on the `ActorSystem.eventStream`.
 
 @@@ note
@@ -385,7 +385,7 @@ the lifecycle of associations, subscribe to
 The use of the term "Association" instead of "Connection" reflects that the
 remoting subsystem may use connectionless transports, but an association
 similar to transport layer connections is maintained between endpoints by
-the Akka protocol.
+the Pekko protocol.
 
 @@@
 
@@ -424,20 +424,20 @@ To intercept generic remoting related errors, listen to `RemotingErrorEvent` whi
 
 ## Remote Security
 
-An `ActorSystem` should not be exposed via Akka Remote over plain TCP to an untrusted network (e.g. Internet).
+An `ActorSystem` should not be exposed via Pekko Remote over plain TCP to an untrusted network (e.g. Internet).
 It should be protected by network security, such as a firewall. If that is not considered enough protection
 [TLS with mutual authentication](#remote-tls)  should be enabled.
 
-Best practice is that Akka remoting nodes should only be accessible from the adjacent network. Note that if TLS is
+Best practice is that Pekko remoting nodes should only be accessible from the adjacent network. Note that if TLS is
 enabled with mutual authentication there is still a risk that an attacker can gain access to a valid certificate by
 compromising any node with certificates issued by the same internal PKI tree.
 
-By default, @ref[Java serialization](serialization.md#java-serialization) is disabled in Akka.
+By default, @ref[Java serialization](serialization.md#java-serialization) is disabled in Pekko.
 That is also security best-practice because of its multiple
 [known attack surfaces](https://community.microfocus.com/cyberres/fortify/f/fortify-discussions/317555/the-perils-of-java-deserialization).
 
 <a id="remote-tls"></a>
-### Configuring SSL/TLS for Akka Remoting
+### Configuring SSL/TLS for Pekko Remoting
 
 SSL can be used as the remote transport by adding `pekko.remote.classic.netty.ssl` to the `enabled-transport` configuration section.
 An example of setting up the default Netty based SSL driver as default:
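+
+A minimal sketch (key-store/trust-store paths and passwords are placeholders):
+
+```
+pekko.remote.classic {
+  enabled-transports = [pekko.remote.classic.netty.ssl]
+  netty.ssl.security {
+    key-store = "/example/path/keystore"
+    trust-store = "/example/path/truststore"
+    key-store-password = "changeme"
+    key-password = "changeme"
+    trust-store-password = "changeme"
+    protocol = "TLSv1.2"
+    enabled-algorithms = [TLS_DHE_RSA_WITH_AES_128_GCM_SHA256]
+  }
+}
+```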
@@ -488,11 +488,7 @@ According to [RFC 7525](https://www.rfc-editor.org/rfc/rfc7525.html) the recomme
 
 You should always check the latest information about security and algorithm recommendations though before you configure your system.
 
-Creating and working with keystores and certificates is well documented in the
-[Generating X.509 Certificates](https://lightbend.github.io/ssl-config/CertificateGeneration.html#using-keytool)
-section of Lightbend's SSL-Config library.
-
-Since an Akka remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication) both the key-store as well as trust-store
+Since Pekko remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication), both the key-store and the trust-store
 need to be configured on each remoting node participating in the cluster.
 
 The official [Java Secure Socket Extension documentation](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html)
@@ -500,11 +496,11 @@ as well as the [Oracle documentation on creating KeyStore and TrustStores](https
 are both great resources to research when setting up security on the JVM. Please consult those resources when troubleshooting
 and configuring SSL.
 
-Since Akka 2.5.0 mutual authentication between TLS peers is enabled by default.
+Mutual authentication between TLS peers is enabled by default.
 
 Mutual authentication means that the passive side (the TLS server side) of a connection will also request and verify
 a certificate from the connecting peer. Without this mode only the client side is requesting and verifying certificates.
-While Akka is a peer-to-peer technology, each connection between nodes starts out from one side (the "client") towards
+While Pekko is a peer-to-peer technology, each connection between nodes starts out from one side (the "client") towards
 the other (the "server").
 
 Note that if TLS is enabled with mutual authentication there is still a risk that an attacker can gain access to a valid certificate
@@ -587,8 +583,8 @@ marking them `PossiblyHarmful` so that a client cannot forge them.
 
 ## Remote Configuration
 
-There are lots of configuration properties that are related to remoting in Akka. We refer to the
-@ref:[reference configuration](general/configuration-reference.md#config-akka-remote) for more information.
+There are lots of configuration properties that are related to remoting in Pekko. We refer to the
+@ref:[reference configuration](general/configuration-reference.md#config-pekko-remote) for more information.
 
 @@@ note
 
@@ -600,10 +596,10 @@ best done by using something like the following:
 @@@
 
 <a id="remote-configuration-nat"></a>
-### Akka behind NAT or in a Docker container
+### Pekko behind NAT or in a Docker container
 
 In setups involving Network Address Translation (NAT), Load Balancers or Docker
-containers the hostname and port pair that Akka binds to will be different than the "logical"
+containers the hostname and port pair that Pekko binds to will be different from the "logical"
 host name and port pair that is used to connect to the system from the outside. This requires
 special configuration that sets both the logical and the bind pairs for remoting.
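+
+A minimal sketch for classic remoting (host names and ports are placeholders):
+
+```
+pekko.remote.classic.netty.tcp {
+  hostname = my.public.host.example  # logical address, as seen by other nodes
+  port = 2552
+  bind-hostname = 0.0.0.0            # physical address to bind to inside the container
+  bind-port = 2552
+}
+```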
 
diff --git a/docs/src/main/paradox/routing.md b/docs/src/main/paradox/routing.md
index cd2aea84e7..b421780094 100644
--- a/docs/src/main/paradox/routing.md
+++ b/docs/src/main/paradox/routing.md
@@ -8,11 +8,11 @@ For the documentation of the new API of this feature and for new projects see @r
 To use Routing, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
@@ -22,7 +22,7 @@ Messages can be sent via a router to efficiently route them to destination actor
 its *routees*. A @apidoc[routing.Router] can be used inside or outside of an actor, and you can manage the
 routees yourself or use a self-contained router actor with configuration capabilities.
 
-Different routing strategies can be used, according to your application's needs. Akka comes with
+Different routing strategies can be used, according to your application's needs. Pekko comes with
 several useful routing strategies right out of the box. But, as you will see in this chapter, it is
 also possible to @ref:[create your own](#custom-router).
 
@@ -40,7 +40,7 @@ Java
 We create a `Router` and specify that it should use @apidoc[routing.RoundRobinRoutingLogic] when routing the
 messages to the routees.
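+
+A minimal Scala sketch of that construction, inside an actor (`Worker` stands for your routee class):
+
+```
+import org.apache.pekko.actor.Props
+import org.apache.pekko.routing.{ ActorRefRoutee, RoundRobinRoutingLogic, Router }
+
+val routees = Vector.fill(5) {
+  val r = context.actorOf(Props[Worker]())
+  context.watch(r)
+  ActorRefRoutee(r)
+}
+var router = Router(RoundRobinRoutingLogic(), routees)
+// in receive: router.route(work, sender())
+```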
 
-The routing logic shipped with Akka are:
+The routing logic implementations shipped with Pekko are:
 
  * @apidoc[routing.RoundRobinRoutingLogic]
  * @apidoc[routing.RandomRoutingLogic]
@@ -126,7 +126,7 @@ In addition to being able to create local actors as routees, you can instruct th
 deploy its created children on a set of remote hosts. Routees will be deployed in round-robin
 fashion. In order to deploy routees remotely, wrap the router configuration in a
 @apidoc[remote.routing.RemoteRouterConfig], attaching the remote addresses of the nodes to deploy to. Remote
-deployment requires the `akka-remote` module to be included in the classpath.
+deployment requires the `pekko-remote` module to be included in the classpath.
 
 Scala
 :  @@snip [RouterDocSpec.scala](/docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #remoteRoutees }
@@ -239,7 +239,7 @@ Java
 :  @@snip [RouterDocTest.java](/docs/src/test/java/jdocs/routing/RouterDocTest.java) { #create-worker-actors }
 
 The paths may contain protocol and address information for actors running on remote hosts.
-Remoting requires the `akka-remote` module to be included in the classpath.
+Remoting requires the `pekko-remote` module to be included in the classpath.
 
 @@snip [RouterDocSpec.scala](/docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-group }
 
@@ -889,7 +889,7 @@ Dispatchers](#configuring-dispatchers) for more information.
 @@@
 
 <a id="router-design"></a>
-## How Routing is Designed within Akka
+## How Routing is Designed within Pekko
 
 On the surface routers look like normal actors, but they are actually implemented differently.
 Routers are designed to be extremely efficient at receiving messages and passing them quickly on to
@@ -909,7 +909,7 @@ routers.
 
 ## Custom Router
 
-You can create your own router should you not find any of the ones provided by Akka sufficient for your needs.
+You can create your own router should you not find any of the ones provided by Pekko sufficient for your needs.
 In order to roll your own router you have to fulfill certain criteria which are explained in this section.
 
 Before creating your own router you should consider whether a normal actor with router-like
@@ -958,7 +958,7 @@ Scala
 Java
 :  @@snip [RedundancyGroup.java](/docs/src/test/java/jdocs/routing/RedundancyGroup.java) { #group }
 
-This can be used exactly as the router actors provided by Akka.
+This can be used exactly as the router actors provided by Pekko.
 
 Scala
 :  @@snip [CustomRouterDocSpec.scala](/docs/src/test/scala/docs/routing/CustomRouterDocSpec.scala) { #usage-1 }
diff --git a/docs/src/main/paradox/scheduler.md b/docs/src/main/paradox/scheduler.md
index 1790e662db..1680432d67 100644
--- a/docs/src/main/paradox/scheduler.md
+++ b/docs/src/main/paradox/scheduler.md
@@ -1,5 +1,5 @@
 ---
-project.description: How to schedule processes in Akka with the Scheduler.
+project.description: How to schedule processes in Pekko with the Scheduler.
 ---
 # Classic Scheduler
 
@@ -11,11 +11,11 @@ For the new API see @ref:[typed scheduling](typed/interaction-patterns.md#typed-
 To use Scheduler, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
@@ -35,23 +35,23 @@ When scheduling periodic or single messages in an actor to itself it is recommen
 use the @ref:[Actor Timers](actors.md#actors-timers) instead of using the @apidoc[actor.Scheduler]
 directly.
 
-The scheduler in Akka is designed for high-throughput of thousands up to millions 
+The scheduler in Pekko is designed for high throughput of thousands up to millions
 of triggers. The prime use-case is triggering Actor receive timeouts, Future timeouts,
 circuit breakers and other time-dependent events which happen all the time and in many
 instances at the same time. The implementation is based on a Hashed Wheel Timer, a
 well-known data structure and algorithm for handling such use cases; refer to the [Hashed and Hierarchical Timing Wheels](http://www.cs.columbia.edu/~nahum/w6998/papers/sosp87-timing-wheels.pdf)
 whitepaper by Varghese and Lauck if you'd like to understand its inner workings.
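+
+For reference, a minimal Scala sketch of scheduling a one-off task (assuming an `ActorSystem` named `system` is in scope):
+
+```
+import scala.concurrent.duration._
+import system.dispatcher // ExecutionContext for running the task
+
+system.scheduler.scheduleOnce(50.millis) {
+  println("tick")
+}
+```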
 
-The Akka scheduler is **not** designed for long-term scheduling (see [akka-quartz-scheduler](https://github.com/enragedginger/akka-quartz-scheduler) 
+The Pekko scheduler is **not** designed for long-term scheduling (see [akka-quartz-scheduler](https://github.com/enragedginger/akka-quartz-scheduler) 
 instead for this use case) nor is it to be used for highly precise firing of the events.
 The maximum amount of time into the future you can schedule an event to trigger is around 8 months,
 which in practice is too much to be useful since this would assume the system never went down during that period.
 If you need long-term scheduling we highly recommend looking into alternative schedulers, as this
-is not the use-case the Akka scheduler is implemented for.
+is not the use-case the Pekko scheduler is implemented for.
 
 @@@ warning
 
-The default implementation of @apidoc[actor.Scheduler] used by Akka is based on job
+The default implementation of @apidoc[actor.Scheduler] used by Pekko is based on job
 buckets which are emptied according to a fixed schedule.  It does not
 execute tasks at the exact time, but on every tick, it will run everything
 that is (over)due.  The accuracy of the default Scheduler can be modified
diff --git a/docs/src/main/paradox/security/2017-02-10-java-serialization.md b/docs/src/main/paradox/security/2017-02-10-java-serialization.md
deleted file mode 100644
index 3f89fbc352..0000000000
--- a/docs/src/main/paradox/security/2017-02-10-java-serialization.md
+++ /dev/null
@@ -1,55 +0,0 @@
-# Java Serialization, Fixed in Akka 2.4.17
-
-### Date
-
-10 February 2017
-
-### Description of Vulnerability
-
-An attacker that can connect to an `ActorSystem` exposed via Akka Remote over TCP can gain remote code execution 
-capabilities in the context of the JVM process that runs the ActorSystem if:
-
- * `JavaSerializer` is enabled (default in Akka 2.4.x)
- * and TLS is disabled *or* TLS is enabled with `pekko.remote.netty.ssl.security.require-mutual-authentication = false`
-(which is still the default in Akka 2.4.x)
- * or if TLS is enabled with mutual authentication and the authentication keys of a host that is allowed to connect have been compromised, an attacker gained access to a valid certificate (e.g. by compromising a node with certificates issued by the same internal PKI tree to get access of the certificate)
- * regardless of whether `untrusted` mode is enabled or not
-
-Java deserialization is [known to be vulnerable](https://community.microfocus.com/cyberres/fortify/f/fortify-discussions/317555/the-perils-of-java-deserialization) to attacks when attacker can provide arbitrary types.
-
-Akka Remoting uses Java serializer as default configuration which makes it vulnerable in its default form. The documentation of how to disable Java serializer was not complete. The documentation of how to enable mutual authentication was missing (only described in reference.conf).
-
-To protect against such attacks the system should be updated to Akka *2.4.17* or later and be configured with 
-[disabled Java serializer](https://doc.akka.io/docs/akka/2.5/remoting.html#disable-java-serializer). Additional protection can be achieved when running in an
-untrusted network by enabling @ref:[TLS with mutual authentication](../remoting.md#remote-tls).
-
-Please subscribe to the [akka-security](https://groups.google.com/forum/#!forum/akka-security) mailing list to be notified promptly about future security issues.
-
-### Severity
-
-The [CVSS](https://en.wikipedia.org/wiki/CVSS) score of this vulnerability is 6.8 (Medium), based on vector [AV:A/AC:M/Au:N/C:C/I:C/A:C/E:F/RL:TF/RC:C](https://nvd.nist.gov/vuln-metrics/cvss/v2-calculator?calculator&amp;version=2&amp;vector=%5C(AV:A/AC:M/Au:N/C:C/I:C/A:C/E:F/RL:TF/RC:C%5C)).
-
-Rationale for the score:
-
- * AV:A - Best practice is that Akka remoting nodes should only be accessible from the adjacent network, so in good setups, this will be adjacent.
- * AC:M - Any one in the adjacent network can launch the attack with non-special access privileges.
- * C:C, I:C, A:C - Remote Code Execution vulnerabilities are by definition CIA:C.
-
-### Affected Versions
-
- * Akka *2.4.16* and prior
- * Akka *2.5-M1* (milestone not intended for production)
-
-### Fixed Versions
-
-We have prepared patches for the affected versions, and have released the following versions which resolve the issue: 
-
- * Akka *2.4.17* (Scala 2.11, 2.12)
-
-Binary and source compatibility has been maintained for the patched releases so the upgrade procedure is as simple as changing the library dependency.
-
-It will also be fixed in 2.5-M2 or 2.5.0-RC1.
-
-### Acknowledgements
-
-We would like to thank Alvaro Munoz at Hewlett Packard Enterprise Security & Adrian Bravo at Workday for their thorough investigation and bringing this issue to our attention.
diff --git a/docs/src/main/paradox/security/2017-08-09-camel.md b/docs/src/main/paradox/security/2017-08-09-camel.md
deleted file mode 100644
index d01ede83fc..0000000000
--- a/docs/src/main/paradox/security/2017-08-09-camel.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# Camel Dependency, Fixed in Akka 2.5.4
-
-### Date
-
-9 August 2017
-
-### Description of Vulnerability
-
-Apache Camel's Validation Component is vulnerable against SSRF via remote DTDs and XXE, as described in [CVE-2017-5643](https://nvd.nist.gov/vuln/detail/CVE-2017-5643)
-
-To protect against such attacks the system should be updated to Akka *2.4.20*, *2.5.4* or later. Dependencies to Camel libraries should be updated to version 2.17.7.
-
-### Severity
-
-The [CVSS](https://en.wikipedia.org/wiki/CVSS) score of this vulnerability is 7.4 (High), according to [CVE-2017-5643](https://nvd.nist.gov/vuln/detail/CVE-2017-5643).
-
-### Affected Versions
-
- * Akka *2.4.19* and prior
- * Akka *2.5.3* and prior
-
-### Fixed Versions
-
-We have prepared patches for the affected versions, and have released the following versions which resolve the issue: 
-
- * Akka *2.4.20* (Scala 2.11, 2.12)
- * Akka *2.5.4* (Scala 2.11, 2.12)
-
-### Acknowledgements
-
-We would like to thank Thomas Szymanski for bringing this issue to our attention.
diff --git a/docs/src/main/paradox/security/2018-08-29-aes-rng.md b/docs/src/main/paradox/security/2018-08-29-aes-rng.md
deleted file mode 100644
index 0fc887c083..0000000000
--- a/docs/src/main/paradox/security/2018-08-29-aes-rng.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Broken random number generators AES128CounterSecureRNG / AES256CounterSecureRNG, Fixed in Akka 2.5.16
-
-### CVE ID
-
-CVE-2018-16115
-
-### Date
-
-29 August 2018
-
-### Description of Vulnerability
-
-A random number generator is used in Akka Remoting for TLS (both classic and Artery
-Remoting). Akka allows to configure custom random number generators. For historical reasons,
-Akka included the `AES128CounterSecureRNG` and `AES256CounterSecureRNG` random number
-generators. The implementations had a bug that caused the generated numbers to be repeated
-after only a few bytes.
-
-The custom RNG implementations were not configured by default but examples in the
-documentation showed (and therefore implicitly recommended) using the custom ones.
-
-This can be used by an attacker to compromise the communication if these random number generators
-are enabled in configuration. It would be possible to eavesdrop, replay or modify the messages sent with
-Akka Remoting/Cluster.
-
-To protect against such attacks the system should be updated to Akka *2.5.16* or later, or the default
-configuration of the TLS random number generator should be used:
-
-```
-# Set `SecureRandom` RNG explicitly (but it is also the default)
-pekko.remote.classic.netty.ssl.random-number-generator = "SecureRandom"
-pekko.remote.artery.ssl.config-ssl-engine.random-number-generator = "SecureRandom"
-```
-
-Please subscribe to the [akka-security](https://groups.google.com/forum/#!forum/akka-security) mailing list to be notified promptly about future security issues.
-
-### Severity
-
-The [CVSS](https://en.wikipedia.org/wiki/CVSS) score of this vulnerability is 5.9 (Medium), based on vector [AV:A/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N/E:U/RL:O/RC:C](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:A/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N/E:U/RL:O/RC:C).
-
-Rationale for the score:
-
- * AV:A - Best practice is that Akka remoting nodes should only be accessible from the adjacent network, so in
-   good setups, this will be adjacent.
- * AC:H - Any one in the adjacent network can launch the attack with non-special access privileges,
-   but man-in-the-middle attacks are not trivial.
- * C:H, I:H - Confidentiality and Integrity are only partially affected because only the networking component
-   is affected and not the whole Akka cluster. Assessed to be High anyway because access to actor system data would
-   probably be possible by injecting messages into the remoting communication.
-
-### Affected Versions
-
- * Akka *2.5.0 - 2.5.15* with any of the following configuration properties defined:
-
-```
-pekko.remote.netty.ssl.random-number-generator = "AES128CounterSecureRNG"
-pekko.remote.netty.ssl.random-number-generator = "AES256CounterSecureRNG"
-pekko.remote.artery.ssl.config-ssl-engine.random-number-generator = "AES128CounterSecureRNG"
-pekko.remote.artery.ssl.config-ssl-engine.random-number-generator = "AES256CounterSecureRNG"
-```
-
-Akka *2.4.x* versions are not affected by this particular bug. It has reached
-end-of-life since start of 2018. If you still run on Akka 2.4, we still
-recommend to use the default `SecureRandom` implementation for the reasons
-given below. Please check your configuration files not to configure the
-custom RNGs.
-
-### Fixed Versions
-
-We have prepared patches for the affected versions, and have released the following version which resolve the issue:
-
- * Akka *2.5.16* (Scala 2.11, 2.12)
-
-Binary and source compatibility has been maintained for the patched releases so the upgrade procedure is as simple
-as changing the library dependency.
-
-The exact historical reasons to include custom RNG implementations could not be reconstructed
-but it was likely because RNGs provided by previous versions of the JDK were deemed too slow.
-
-Including custom cryptographic components in your library (or application) should not be done
-lightly. We acknowledge that we cannot prove that the custom RNGs that Akka provides or has
-been providing are generally correct or just correct enough for the purposes in Akka.
-
-The reporter of this vulnerability, Rafał Sumisławski, kindly provided us with fixes for the
-custom RNGs in Akka. However, as we cannot thoroughly verify the correctness of the algorithm
-we decided to remove custom RNGs from Akka.
-
-If the "AES128CounterSecureRNG" and "AES256CounterSecureRNG" configuration values are still used with Akka 2.5.16
-they will be ignored and the default `SecureRandom` is used and a warning is logged. This is to avoid accidental
-use of these unverified and possibly insecure implementations. The deprecated implementations are not recommended,
-but they can be enabled by using configuration values "DeprecatedAES128CounterSecureRNG" or "DeprecatedAES256CounterSecureRNG"
-during the transition period until they have been removed.
-
-*Edit*: `DeprecatedAES128CounterSecureRNG` and `DeprecatedAES256CounterSecureRNG` have been removed since Akka 2.5.19.
-
-### Acknowledgements
-
-We would like to thank Rafał Sumisławski at NetworkedAssets for bringing this issue to our attention and providing
-a patch.
diff --git a/docs/src/main/paradox/security/index.md b/docs/src/main/paradox/security/index.md
index cc12f0ca8b..f4a20b6cd4 100644
--- a/docs/src/main/paradox/security/index.md
+++ b/docs/src/main/paradox/security/index.md
@@ -2,7 +2,7 @@
 
 ## Receiving Security Advisories
 
-The best way to receive any and all security announcements is to subscribe to the [Akka security list](https://groups.google.com/forum/#!forum/akka-security).
+The best way to receive any and all security announcements is to subscribe to the [Pekko security list](https://groups.google.com/forum/#!forum/akka-security).
 
 The mailing list is very low traffic, and receives notifications only after security reports have been managed by the core team and fixes are publicly available.
 
@@ -21,15 +21,3 @@ to ensure that a fix can be provided without delay.
  * @ref:[Java Serialization](../serialization.md#java-serialization)
  * @ref:[Remote deployment allow list](../remoting.md#remote-deployment-allow-list)
  * @ref:[Remote Security](../remoting-artery.md#remote-security)
-
-## Fixed Security Vulnerabilities
-
-@@toc { .list depth=1 }
-
-@@@ index
-
-* [2017-02-10-java-serialization](2017-02-10-java-serialization.md)
-* [2017-08-09-camel](2017-08-09-camel.md)
-* [2018-08-29-aes-rng](2018-08-29-aes-rng.md)
-
-@@@
diff --git a/docs/src/main/paradox/serialization-classic.md b/docs/src/main/paradox/serialization-classic.md
index 72fe1b916e..54115c8363 100644
--- a/docs/src/main/paradox/serialization-classic.md
+++ b/docs/src/main/paradox/serialization-classic.md
@@ -10,11 +10,11 @@ aside from serialization of `ActorRef` that is described @ref:[here](#serializin
 To use Serialization, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/serialization-jackson.md b/docs/src/main/paradox/serialization-jackson.md
index 0e437f13eb..196483399e 100644
--- a/docs/src/main/paradox/serialization-jackson.md
+++ b/docs/src/main/paradox/serialization-jackson.md
@@ -1,5 +1,5 @@
 ---
-project.description: Serialization with Jackson for Akka.
+project.description: Serialization with Jackson for Pekko.
 ---
 # Serialization with Jackson
 
@@ -8,17 +8,17 @@ project.description: Serialization with Jackson for Akka.
 To use Jackson Serialization, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-serialization-jackson_$scala.binary.version$"
+  artifact="pekko-serialization-jackson_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
-You find general concepts for for Akka serialization in the @ref:[Serialization](serialization.md) section.
+You can find general concepts for Pekko serialization in the @ref:[Serialization](serialization.md) section.
 This section describes how to use the Jackson serializer for application specific messages and persistent
 events and snapshots.
 
@@ -47,7 +47,7 @@ one of the supported Jackson formats: `jackson-json` or `jackson-cbor`
 
 A good convention would be to name the marker interface `CborSerializable` or `JsonSerializable`.
 In this documentation we have used `MySerializable` to make it clear that the marker interface itself is not
-provided by Akka.
+provided by Pekko.
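+
+A minimal Scala sketch of the marker-trait approach (the package and message class are illustrative):
+
+```
+// bound to the Jackson serializer via pekko.actor.serialization-bindings,
+// e.g. "com.example.MySerializable" = jackson-json
+trait MySerializable
+
+final case class Greeting(name: String) extends MySerializable
+```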
 
 That is all that is needed for basic classes where Jackson understands the structure. A few cases that require
 annotations are described below.
@@ -459,13 +459,13 @@ For the `jackson-cbor` and custom bindings other than `jackson-json` compression
 but can be enabled in the same way as the configuration shown above but replacing `jackson-json` with
 the binding name (for example `jackson-cbor`).
 
-## Using Akka Serialization for embedded types
+## Using Pekko Serialization for embedded types
 
-For types that already have an Akka Serializer defined that are embedded in types serialized with Jackson the @apidoc[PekkoSerializationSerializer] and
-@apidoc[PekkoSerializationDeserializer] can be used to Akka Serialization for individual fields. 
+For types that already have a Pekko Serializer defined and that are embedded in types serialized with Jackson, the @apidoc[PekkoSerializationSerializer] and
+@apidoc[PekkoSerializationDeserializer] can be used to apply Pekko Serialization to individual fields.
 
 The serializer/deserializer are not enabled automatically. The @javadoc[@JsonSerialize](com.fasterxml.jackson.databind.annotation.JsonSerialize) and @javadoc[@JsonDeserialize](com.fasterxml.jackson.databind.annotation.JsonDeserialize) annotations need to be added
-to the fields containing the types to be serialized with Akka Serialization.
+to the fields containing the types to be serialized with Pekko Serialization.
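+
+A minimal Scala sketch (the event class and its `Address` field type are illustrative):
+
+```
+import com.fasterxml.jackson.databind.annotation.{ JsonDeserialize, JsonSerialize }
+import org.apache.pekko.serialization.jackson.{ PekkoSerializationDeserializer, PekkoSerializationSerializer }
+
+final case class ItemAdded(
+    id: String,
+    @JsonSerialize(`using` = classOf[PekkoSerializationSerializer])
+    @JsonDeserialize(`using` = classOf[PekkoSerializationDeserializer])
+    address: Address) extends MySerializable
+```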
 
 The type will be embedded as an object with the fields:
 
@@ -501,9 +501,7 @@ this to be useful, generally that single type must be a
 @ref:[Polymorphic type](#polymorphic-types), with all type information necessary to deserialize to
 the various sub types contained in the JSON message.
 
-When switching serializers, for example, if doing a rolling update as described
-@ref:[here](additional/rolling-updates.md#from-java-serialization-to-jackson), there will be
-periods of time when you may have no serialization bindings declared for the type. In such
+When switching serializers, there will be periods of time when you may have no serialization bindings declared for the type. In such
 circumstances, you must use the `deserialization-type` configuration attribute to specify which
 type should be used to deserialize messages.
 
@@ -513,14 +511,14 @@ configurations.
 
 @@snip [config](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #manifestless }
 
-Note that Akka remoting already implements manifest compression, and so this optimization will have
+Note that Pekko remoting already implements manifest compression, and so this optimization will have
 no significant impact for messages sent over remoting. It's only useful for messages serialized for
 other purposes, such as persistence or distributed data.
 
 ## Additional features
 
 Additional Jackson serialization features can be enabled/disabled in configuration. The default values from
-Jackson are used aside from the the following that are changed in Akka's default configuration.
+Jackson are used aside from the following that are changed in Pekko's default configuration.
 
 @@snip [reference.conf](/serialization-jackson/src/main/resources/reference.conf) { #features }
 
diff --git a/docs/src/main/paradox/serialization.md b/docs/src/main/paradox/serialization.md
index 5ff8fcf872..326107523c 100644
--- a/docs/src/main/paradox/serialization.md
+++ b/docs/src/main/paradox/serialization.md
@@ -1,5 +1,5 @@
 ---
-project.description: Serialization APIs built into Akka.
+project.description: Serialization APIs built into Pekko.
 ---
 # Serialization
 
@@ -8,19 +8,19 @@ project.description: Serialization APIs built into Akka.
 To use Serialization, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor_$scala.binary.version$"
+  artifact="pekko-actor_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
-The messages that Akka actors send to each other are JVM objects @scala[(e.g. instances of Scala case classes)]. Message passing between actors that live on the same JVM is straightforward. It is done via reference passing. However, messages that have to escape the JVM to reach an actor running on a different host have to undergo some form of serialization (i.e. the objects have to be converted to and from byte arrays).
+The messages that Pekko actors send to each other are JVM objects @scala[(e.g. instances of Scala case classes)]. Message passing between actors that live on the same JVM is straightforward. It is done via reference passing. However, messages that have to escape the JVM to reach an actor running on a different host have to undergo some form of serialization (i.e. the objects have to be converted to and from byte arrays).
 
-The serialization mechanism in Akka allows you to write custom serializers and to define which serializer to use for what.
+The serialization mechanism in Pekko allows you to write custom serializers and to define which serializer to use for what.
 
 @ref:[Serialization with Jackson](serialization-jackson.md) is a good choice in many cases and our
 recommendation if you don't have other preference.
@@ -29,13 +29,13 @@ recommendation if you don't have other preference.
 more control over the schema evolution of your messages, but it requires more work to develop and
 maintain the mapping between serialized representation and domain representation.
 
-Akka itself uses Protocol Buffers to serialize internal messages (for example cluster gossip messages).
+Pekko itself uses Protocol Buffers to serialize internal messages (for example cluster gossip messages).
 
 ## Usage
 
 ### Configuration
 
-For Akka to know which `Serializer` to use for what, you need to edit your configuration: 
+For Pekko to know which `Serializer` to use for what, you need to edit your configuration: 
 in the `pekko.actor.serializers`-section, you bind names to implementations of the @apidoc[serialization.Serializer](Serializer)
 you wish to use, like this:
 
@@ -62,14 +62,14 @@ you would need to reference it as `Wrapper$Message` instead of `Wrapper.Message`
 
 @@@
 
-Akka provides serializers for several primitive types and [protobuf](https://github.com/protocolbuffers/protobuf)
+Pekko provides serializers for several primitive types and [protobuf](https://github.com/protocolbuffers/protobuf)
 @javadoc[com.google.protobuf.GeneratedMessage](com.google.protobuf.GeneratedMessage) (protobuf2) and @javadoc[com.google.protobuf.GeneratedMessageV3](com.google.protobuf.GeneratedMessageV3) (protobuf3) by default (the latter only if
-depending on the akka-remote module), so normally you don't need to add
+depending on the pekko-remote module), so normally you don't need to add
 configuration for that if you send raw protobuf messages as actor messages.
 
 ### Programmatic
 
-If you want to programmatically serialize/deserialize using Akka Serialization,
+If you want to programmatically serialize/deserialize using Pekko Serialization,
 here are some examples:
 
 Scala
@@ -208,14 +208,14 @@ Classic and Typed actor references have the same serialization format so they ca
 
 ### Deep serialization of Actors
 
-The recommended approach to do deep serialization of internal actor state is to use Akka @ref:[Persistence](persistence.md).
+The recommended approach to do deep serialization of internal actor state is to use Pekko @ref:[Persistence](persistence.md).
 
-## Serialization of Akka's messages
+## Serialization of Pekko's messages
 
-Akka is using a Protobuf 3 for serialization of messages defined by Akka. This dependency is
-shaded in the `akka-protobuf-v3` artifact so that applications can use another version of Protobuf.
+Pekko uses Protobuf 3 for serialization of messages defined by Pekko. This dependency is
+shaded in the `pekko-protobuf-v3` artifact so that applications can use another version of Protobuf.
 
-Applications should use standard Protobuf dependency and not `akka-protobuf-v3`.
+Applications should use the standard Protobuf dependency and not `pekko-protobuf-v3`.
 
 ## Java serialization
 
@@ -225,7 +225,7 @@ One may think that network bandwidth and latency limit the performance of remote
 
 @@@ note
 
-Akka serialization with Java serialization is disabled by default and Akka itself doesn't use Java serialization
+Java serialization is disabled by default in Pekko and Pekko itself doesn't use Java serialization
 for any of its internal messages. It is highly discouraged to enable Java serialization in production.
 
 The log messages emitted by the disabled Java serializer in production SHOULD be treated as potential
@@ -242,7 +242,7 @@ older systems that rely on Java serialization it can be enabled with the followi
 pekko.actor.allow-java-serialization = on
 ```
 
-Akka will still log warning when Java serialization is used and to silent that you may add:
+Pekko will still log a warning when Java serialization is used; to silence that you may add:
 
 ```ruby
 pekko.actor.warn-about-java-serializer-usage = off
diff --git a/docs/src/main/paradox/split-brain-resolver.md b/docs/src/main/paradox/split-brain-resolver.md
index df16673899..59a6ce9429 100644
--- a/docs/src/main/paradox/split-brain-resolver.md
+++ b/docs/src/main/paradox/split-brain-resolver.md
@@ -1,10 +1,10 @@
 # Split Brain Resolver
 
-When operating an Akka cluster you must consider how to handle
+When operating a Pekko cluster you must consider how to handle
 [network partitions](https://en.wikipedia.org/wiki/Network_partition) (a.k.a. split brain scenarios)
 and machine crashes (including JVM and hardware failures). This is crucial for correct behavior if
 you use @ref:[Cluster Singleton](typed/cluster-singleton.md) or @ref:[Cluster Sharding](typed/cluster-sharding.md),
-especially together with Akka Persistence.
+especially together with Pekko Persistence.
 
 The [Split Brain Resolver video](https://akka.io/blog/news/2020/06/08/akka-split-brain-resolver-video)
 is a good starting point for learning why it is important to use a correct downing provider and
@@ -12,7 +12,7 @@ how the Split Brain Resolver works.
 
 ## Module info
 
-To use Akka Split Brain Resolver is part of `akka-cluster` and you probably already have that
+The Pekko Split Brain Resolver is part of `pekko-cluster`, so you probably already have that
 dependency included. Otherwise, add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
@@ -68,7 +68,7 @@ Another type of problem that makes it difficult to see the "right" picture is wh
 connected and cannot communicate directly to each other but information can be disseminated between them via
 other nodes.
 
-The Akka cluster has a failure detector that will notice network partitions and machine crashes (but it
+The Pekko cluster has a failure detector that will notice network partitions and machine crashes (but it
 cannot distinguish the two). It uses periodic heartbeat messages to check if other nodes are available
 and healthy. These observations by the failure detector are referred to as a node being *unreachable*
 and it may become *reachable* again if the failure detector observes that it can communicate with it again.  
@@ -80,16 +80,16 @@ partitions. Both sides of the network partition will see the other side as unrea
 after a while remove it from its cluster membership. Since this happens on both sides the result
 is that two separate disconnected clusters have been created.
 This approach is provided by the opt-in (off by default) auto-down feature in
-Akka Cluster.
+Pekko Cluster.
 
 If you use the timeout based auto-down feature in combination with Cluster Singleton or Cluster Sharding
 that would mean that two singleton instances or two sharded entities with the same identifier would be running.
 One would be running in each cluster.
-For example when used together with Akka Persistence that could result in that two instances of a
+For example, when used together with Pekko Persistence, that could result in two instances of a
 persistent actor with the same `persistenceId` running and writing concurrently to the
 same stream of persistent events, which will have fatal consequences when replaying these events.
 
-The default setting in Akka Cluster is to not remove unreachable nodes automatically and
+The default setting in Pekko Cluster is to not remove unreachable nodes automatically and
 the recommendation is that the decision of what to
 do should be taken by a human operator or an external monitoring system. This is a valid solution,
 but not very convenient if you do not have such staff or an external system for other reasons.
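+
+If you instead want the cluster to take this decision itself, the Split Brain Resolver is enabled by
+configuring it as the downing provider (a minimal sketch; the provider class name follows the
+Pekko package scheme):
+
+```
+pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
+```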
@@ -167,7 +167,7 @@ pekko.coordinated-shutdown.exit-jvm = on
 @@@ note
 
 Some legacy containers may block calls to System.exit(..) and you may have to find an alternate
-way to shut the app down. For example, when running Akka on top of a Spring / Tomcat setup, you
+way to shut the app down. For example, when running Pekko on top of a Spring / Tomcat setup, you
 could replace the call to `System.exit(..)` with a call to Spring's `ApplicationContext.close()` method
 (or with a HTTP call to Tomcat Manager's API to un-deploy the app).
 
@@ -428,7 +428,7 @@ That could result in that members are removed from one side but are still runnin
 
 ## Multiple data centers
 
-Akka Cluster has @ref:[support for multiple data centers](cluster-dc.md), where the cluster
+Pekko Cluster has @ref:[support for multiple data centers](cluster-dc.md), where the cluster
 membership is managed by each data center separately and independently of network partitions across different
 data centers. The Split Brain Resolver is embracing that strategy and will not count nodes or down nodes in
 another data center.
diff --git a/docs/src/main/paradox/stream/actor-interop.md b/docs/src/main/paradox/stream/actor-interop.md
index 8e3e40c25e..cd1bee5a16 100644
--- a/docs/src/main/paradox/stream/actor-interop.md
+++ b/docs/src/main/paradox/stream/actor-interop.md
@@ -2,14 +2,14 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
@@ -32,7 +32,7 @@ Additionally you can use `ActorSource.actorRef`, `ActorSource.actorRefWithBackpr
 ### ask
 
 @@@ note
-  See also: @ref[Flow.ask operator reference docs](operators/Source-or-Flow/ask.md), @ref[ActorFlow.ask operator reference docs](operators/ActorFlow/ask.md) for Akka Typed
+  See also: @ref[Flow.ask operator reference docs](operators/Source-or-Flow/ask.md), @ref[ActorFlow.ask operator reference docs](operators/ActorFlow/ask.md) for Pekko Typed
 @@@
 
 A nice way to delegate some processing of elements in a stream to an actor is to use `ask`.
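+
+A minimal Scala sketch (the replying actor `ref` and the `String` reply type are illustrative):
+
+```
+import scala.concurrent.duration._
+
+import org.apache.pekko.NotUsed
+import org.apache.pekko.actor.ActorRef
+import org.apache.pekko.stream.scaladsl.Source
+import org.apache.pekko.util.Timeout
+
+// `ref` must reply to every element it receives
+def answers(ref: ActorRef): Source[String, NotUsed] = {
+  implicit val timeout: Timeout = 3.seconds
+  Source(1 to 10).ask[String](parallelism = 4)(ref)
+}
+```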
diff --git a/docs/src/main/paradox/stream/futures-interop.md b/docs/src/main/paradox/stream/futures-interop.md
index abc553eb26..673ba8b530 100644
--- a/docs/src/main/paradox/stream/futures-interop.md
+++ b/docs/src/main/paradox/stream/futures-interop.md
@@ -2,14 +2,14 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/index.md b/docs/src/main/paradox/stream/index.md
index bd9ee72ef9..96354940d4 100644
--- a/docs/src/main/paradox/stream/index.md
+++ b/docs/src/main/paradox/stream/index.md
@@ -5,7 +5,7 @@ project.description: An intuitive and safe way to do asynchronous, non-blocking
 
 ## Module info
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
diff --git a/docs/src/main/paradox/stream/operators/ActorFlow/ask.md b/docs/src/main/paradox/stream/operators/ActorFlow/ask.md
index 1a313f9f2d..7fe77db067 100644
--- a/docs/src/main/paradox/stream/operators/ActorFlow/ask.md
+++ b/docs/src/main/paradox/stream/operators/ActorFlow/ask.md
@@ -9,11 +9,11 @@ Use the "Ask Pattern" to send each stream element as an `ask` to the target acto
 This operator is included in:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream-typed_$scala.binary.version$"
+  artifact="pekko-stream-typed_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/ActorFlow/askWithContext.md b/docs/src/main/paradox/stream/operators/ActorFlow/askWithContext.md
index ec9e5580fa..bf420e0586 100644
--- a/docs/src/main/paradox/stream/operators/ActorFlow/askWithContext.md
+++ b/docs/src/main/paradox/stream/operators/ActorFlow/askWithContext.md
@@ -9,11 +9,11 @@ Use the "Ask Pattern" to send each stream element (without the context) as an `a
 This operator is included in:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream-typed_$scala.binary.version$"
+  artifact="pekko-stream-typed_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/ActorFlow/askWithStatus.md b/docs/src/main/paradox/stream/operators/ActorFlow/askWithStatus.md
index d57041e526..d23f3974b5 100644
--- a/docs/src/main/paradox/stream/operators/ActorFlow/askWithStatus.md
+++ b/docs/src/main/paradox/stream/operators/ActorFlow/askWithStatus.md
@@ -12,7 +12,7 @@ This operator is included in:
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream-typed_$scala.binary.version$"
+  artifact="pekko-stream-typed_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/ActorFlow/askWithStatusAndContext.md b/docs/src/main/paradox/stream/operators/ActorFlow/askWithStatusAndContext.md
index ca498bb4bc..de2e6d0312 100644
--- a/docs/src/main/paradox/stream/operators/ActorFlow/askWithStatusAndContext.md
+++ b/docs/src/main/paradox/stream/operators/ActorFlow/askWithStatusAndContext.md
@@ -9,11 +9,11 @@ Use the "Ask Pattern" to send each stream element (without the context) as an `a
 This operator is included in:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream-typed_$scala.binary.version$"
+  artifact="pekko-stream-typed_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/ActorSink/actorRef.md b/docs/src/main/paradox/stream/operators/ActorSink/actorRef.md
index 913620770b..04d69ec32a 100644
--- a/docs/src/main/paradox/stream/operators/ActorSink/actorRef.md
+++ b/docs/src/main/paradox/stream/operators/ActorSink/actorRef.md
@@ -9,11 +9,11 @@ Sends the elements of the stream to the given @java[`ActorRef<T>`]@scala[`ActorR
 This operator is included in:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream-typed_$scala.binary.version$"
+  artifact="pekko-stream-typed_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/ActorSink/actorRefWithBackpressure.md b/docs/src/main/paradox/stream/operators/ActorSink/actorRefWithBackpressure.md
index 4f02d8cdab..ad8203b47a 100644
--- a/docs/src/main/paradox/stream/operators/ActorSink/actorRefWithBackpressure.md
+++ b/docs/src/main/paradox/stream/operators/ActorSink/actorRefWithBackpressure.md
@@ -9,11 +9,11 @@ Sends the elements of the stream to the given @java[`ActorRef<T>`]@scala[`ActorR
 This operator is included in:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream-typed_$scala.binary.version$"
+  artifact="pekko-stream-typed_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/ActorSource/actorRef.md b/docs/src/main/paradox/stream/operators/ActorSource/actorRef.md
index 158f881c8e..35851bfea8 100644
--- a/docs/src/main/paradox/stream/operators/ActorSource/actorRef.md
+++ b/docs/src/main/paradox/stream/operators/ActorSource/actorRef.md
@@ -9,11 +9,11 @@ Materialize an @java[`ActorRef<T>`]@scala[`ActorRef[T]`] of the new actors API;
 This operator is included in:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream-typed_$scala.binary.version$"
+  artifact="pekko-stream-typed_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/ActorSource/actorRefWithBackpressure.md b/docs/src/main/paradox/stream/operators/ActorSource/actorRefWithBackpressure.md
index d47dcf7e26..149c1cb4b9 100644
--- a/docs/src/main/paradox/stream/operators/ActorSource/actorRefWithBackpressure.md
+++ b/docs/src/main/paradox/stream/operators/ActorSource/actorRefWithBackpressure.md
@@ -9,11 +9,11 @@ Materialize an @java[`ActorRef<T>`]@scala[`ActorRef[T]`] of the new actors API;
 This operator is included in:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream-typed_$scala.binary.version$"
+  artifact="pekko-stream-typed_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/Flow/fromSinkAndSource.md b/docs/src/main/paradox/stream/operators/Flow/fromSinkAndSource.md
index a502612654..ff2a028177 100644
--- a/docs/src/main/paradox/stream/operators/Flow/fromSinkAndSource.md
+++ b/docs/src/main/paradox/stream/operators/Flow/fromSinkAndSource.md
@@ -43,7 +43,7 @@ Java
 :   @@snip [FromSinkAndSource.java](/docs/src/test/java/jdocs/stream/operators/flow/FromSinkAndSource.java) { #chat }
 
 
-The same patterns can also be applied to @extref:[Akka HTTP WebSockets](akka.http:/server-side/websocket-support.html#server-api) which also have an API accepting a `Flow` of messages.
+The same patterns can also be applied to @extref:[Pekko HTTP WebSockets](pekko.http:/server-side/websocket-support.html#server-api) which also have an API accepting a `Flow` of messages.
 
 If we replaced the `fromSinkAndSource` here with `fromSinkAndSourceCoupled`, the client could close the connection by closing its outgoing stream.
 
diff --git a/docs/src/main/paradox/stream/operators/PubSub/sink.md b/docs/src/main/paradox/stream/operators/PubSub/sink.md
index a464d713c9..9612700ffd 100644
--- a/docs/src/main/paradox/stream/operators/PubSub/sink.md
+++ b/docs/src/main/paradox/stream/operators/PubSub/sink.md
@@ -14,11 +14,11 @@ If the topic does not have any subscribers when a message is published, or the t
 This operator is included in:
 
 @@dependency[sbt,Maven,Gradle] {
-bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
 symbol1=PekkoVersion
 value1="$pekko.version$"
 group="org.apache.pekko"
-artifact="akka-stream-typed_$scala.binary.version$"
+artifact="pekko-stream-typed_$scala.binary.version$"
 version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/PubSub/source.md b/docs/src/main/paradox/stream/operators/PubSub/source.md
index f694522d87..1e73dbea49 100644
--- a/docs/src/main/paradox/stream/operators/PubSub/source.md
+++ b/docs/src/main/paradox/stream/operators/PubSub/source.md
@@ -17,11 +17,11 @@ strategy.
 This operator is included in:
 
 @@dependency[sbt,Maven,Gradle] {
-bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
 symbol1=PekkoVersion
 value1="$pekko.version$"
 group="org.apache.pekko"
-artifact="akka-stream-typed_$scala.binary.version$"
+artifact="pekko-stream-typed_$scala.binary.version$"
 version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/RetryFlow/withBackoff.md b/docs/src/main/paradox/stream/operators/RetryFlow/withBackoff.md
index eb9af703a3..b27e938da1 100644
--- a/docs/src/main/paradox/stream/operators/RetryFlow/withBackoff.md
+++ b/docs/src/main/paradox/stream/operators/RetryFlow/withBackoff.md
@@ -22,7 +22,7 @@ When `decideRetry` returns @scala[`None`]@java[`Optional.empty`], no retries wil
 
 @@@ note
 
-This API was added in Akka 2.6.0 and @ref:[may be changed](../../../common/may-change.md) in further patch releases.
+This API is relatively new and @ref:[may be changed](../../../common/may-change.md) in further patch releases.
 
 @@@
 
diff --git a/docs/src/main/paradox/stream/operators/RetryFlow/withBackoffAndContext.md b/docs/src/main/paradox/stream/operators/RetryFlow/withBackoffAndContext.md
index 564c03e25e..44d0101505 100644
--- a/docs/src/main/paradox/stream/operators/RetryFlow/withBackoffAndContext.md
+++ b/docs/src/main/paradox/stream/operators/RetryFlow/withBackoffAndContext.md
@@ -23,7 +23,7 @@ When `decideRetry` returns @scala[`None`]@java[`Optional.empty`], no retries wil
 
 @@@ note
 
-This API was added in Akka 2.6.0 and @ref:[may be changed](../../../common/may-change.md) in further patch releases.
+This API is relatively new and @ref:[may be changed](../../../common/may-change.md) in further patch releases.
 
 @@@
 
diff --git a/docs/src/main/paradox/stream/operators/Sink/asPublisher.md b/docs/src/main/paradox/stream/operators/Sink/asPublisher.md
index 57769a6155..49be6139f9 100644
--- a/docs/src/main/paradox/stream/operators/Sink/asPublisher.md
+++ b/docs/src/main/paradox/stream/operators/Sink/asPublisher.md
@@ -13,8 +13,8 @@ Integration with Reactive Streams, materializes into a `org.reactivestreams.Publ
 ## Description
 
 This method gives you the capability to publish the data from the `Sink` through a Reactive Streams [Publisher](https://www.reactive-streams.org/reactive-streams-1.0.3-javadoc/org/reactivestreams/Publisher.html).
-Generally, in Akka Streams a `Sink` is considered a subscriber, which consumes the data from source. To integrate with other Reactive Stream implementations `Sink.asPublisher` provides a `Publisher` materialized value when run.
-Now, the data from this publisher can be consumed by subscribing to it. We can control if we allow more than one downstream subscriber from the single running Akka stream through the `fanout` parameter.
+Generally, in Pekko Streams a `Sink` is considered a subscriber, which consumes the data from the source. To integrate with other Reactive Streams implementations `Sink.asPublisher` provides a `Publisher` materialized value when run.
+Now, the data from this publisher can be consumed by subscribing to it. We can control whether we allow more than one downstream subscriber from the single running Pekko stream through the `fanout` parameter.
 In Java 9, the Reactive Stream API was included in the JDK, and `Publisher` is available through [Flow.Publisher](https://docs.oracle.com/javase/9/docs/api/java/util/concurrent/Flow.Publisher.html).
 Since those APIs are identical but exist at different package namespaces and do not depend on the Reactive Streams package, a separate publisher sink for those is available
 through @scala[`org.apache.pekko.stream.scaladsl.JavaFlowSupport.Sink#asPublisher`]@java[`org.apache.pekko.stream.javadsl.JavaFlowSupport.Sink#asPublisher`].
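+
+As an illustrative sketch (assuming an `ActorSystem` named `system` is in scope):
+
+```scala
+import org.apache.pekko.actor.ActorSystem
+import org.apache.pekko.stream.scaladsl.{ Sink, Source }
+import org.reactivestreams.Publisher
+
+implicit val system: ActorSystem = ActorSystem("asPublisher-sketch")
+
+// fanout = true allows more than one downstream Subscriber to attach.
+val publisher: Publisher[Int] =
+  Source(1 to 10).runWith(Sink.asPublisher(fanout = true))
+```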
diff --git a/docs/src/main/paradox/stream/operators/Sink/foreachParallel.md b/docs/src/main/paradox/stream/operators/Sink/foreachParallel.md
index 04ecb5b083..8514e770f3 100644
--- a/docs/src/main/paradox/stream/operators/Sink/foreachParallel.md
+++ b/docs/src/main/paradox/stream/operators/Sink/foreachParallel.md
@@ -4,12 +4,6 @@ Like `foreach` but allows up to `parallellism` procedure calls to happen in para
 
 @ref[Sink operators](../index.md#sink-operators)
 
-@@@warning { title="Deprecated" }
-
-Use @ref[`foreachAsync`](foreachAsync.md) instead (this is deprecated since Akka 2.5.17).
-
-@@@
-
 ## Reactive Streams semantics
 
 @@@div { .callout }
diff --git a/docs/src/main/paradox/stream/operators/Source/asSubscriber.md b/docs/src/main/paradox/stream/operators/Source/asSubscriber.md
index d417a9a2e2..5f85c2c910 100644
--- a/docs/src/main/paradox/stream/operators/Source/asSubscriber.md
+++ b/docs/src/main/paradox/stream/operators/Source/asSubscriber.md
@@ -39,7 +39,7 @@ be used for further processing, for example creating a @apidoc[Source] that cont
 rows.
 
 Note that since the database is queried for each materialization, the `rowSource` can be safely re-used.
-Because both the database driver and Akka Streams support [Reactive Streams](https://www.reactive-streams.org/),
+Because both the database driver and Pekko Streams support [Reactive Streams](https://www.reactive-streams.org/),
 backpressure is applied throughout the stream, preventing us from running out of memory when the database
 rows are consumed slower than they are produced by the database.
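+
+A minimal sketch of the idea (`Row` and `databasePublisher` are assumed stand-ins for a driver's types, not part of the snipped sources):
+
+```scala
+import org.apache.pekko.NotUsed
+import org.apache.pekko.stream.scaladsl.Source
+import org.reactivestreams.Publisher
+
+case class Row(name: String)                // assumed row type
+def databasePublisher: Publisher[Row] = ??? // assumed driver publisher
+
+// Each materialization yields a fresh Subscriber for the driver to feed.
+val rowSource: Source[Row, NotUsed] =
+  Source.asSubscriber[Row].mapMaterializedValue { sub =>
+    databasePublisher.subscribe(sub)
+    NotUsed
+  }
+```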
 
diff --git a/docs/src/main/paradox/stream/operators/Source/fromPublisher.md b/docs/src/main/paradox/stream/operators/Source/fromPublisher.md
index 95af1d28e4..00efdd822f 100644
--- a/docs/src/main/paradox/stream/operators/Source/fromPublisher.md
+++ b/docs/src/main/paradox/stream/operators/Source/fromPublisher.md
@@ -36,7 +36,7 @@ we could create a @apidoc[Source] that queries the database for its rows. That @
 be used for further processing, for example creating a @apidoc[Source] that contains the names of the
 rows.
 
-Because both the database driver and Akka Streams support [Reactive Streams](https://www.reactive-streams.org/),
+Because both the database driver and Pekko Streams support [Reactive Streams](https://www.reactive-streams.org/),
 backpressure is applied throughout the stream, preventing us from running out of memory when the database
 rows are consumed slower than they are produced by the database.
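+
+For illustration (again with assumed `Row` and `databasePublisher` stand-ins for a driver's API):
+
+```scala
+import org.apache.pekko.NotUsed
+import org.apache.pekko.stream.scaladsl.Source
+import org.reactivestreams.Publisher
+
+case class Row(name: String)                // assumed row type
+def databasePublisher: Publisher[Row] = ??? // assumed driver publisher
+
+// Back-pressure propagates from the stream back into the driver.
+val rowSource: Source[Row, NotUsed] = Source.fromPublisher(databasePublisher)
+val names: Source[String, NotUsed] = rowSource.map(_.name)
+```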
 
diff --git a/docs/src/main/paradox/stream/operators/Source/mergePrioritizedN.md b/docs/src/main/paradox/stream/operators/Source/mergePrioritizedN.md
index 189eec77a6..1f655816bd 100644
--- a/docs/src/main/paradox/stream/operators/Source/mergePrioritizedN.md
+++ b/docs/src/main/paradox/stream/operators/Source/mergePrioritizedN.md
@@ -6,7 +6,7 @@ Merge multiple sources with priorities.
 
 ## Signature
 
-@@signature [Source.scala](/akka-stream/src/main/scala/org/apache/pekko/stream/scaladsl/Source.scala) { #mergePrioritized }
+@@signature [Source.scala](/pekko-stream/src/main/scala/org/apache/pekko/stream/scaladsl/Source.scala) { #mergePrioritized }
 
 ## Description
 
diff --git a/docs/src/main/paradox/stream/operators/Source/range.md b/docs/src/main/paradox/stream/operators/Source/range.md
index 612919e188..9a852d2383 100644
--- a/docs/src/main/paradox/stream/operators/Source/range.md
+++ b/docs/src/main/paradox/stream/operators/Source/range.md
@@ -7,11 +7,11 @@ Emit each integer in a range, with an option to take bigger steps than 1.
 ## Dependency
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/operators/Source/unfoldResource.md b/docs/src/main/paradox/stream/operators/Source/unfoldResource.md
index d47becf570..df3eac69dc 100644
--- a/docs/src/main/paradox/stream/operators/Source/unfoldResource.md
+++ b/docs/src/main/paradox/stream/operators/Source/unfoldResource.md
@@ -17,13 +17,13 @@ Wrap any resource that can be opened, queried for next element (in a blocking wa
 1. `read`: Fetch the next element or signal that we reached the end of the stream by returning a @java[`Optional.empty`]@scala[`None`]
 1. `close`: Close the resource, invoked on end of stream or if the stream fails
 
-The functions are by default called on Akka's dispatcher for blocking IO to avoid interfering with other stream operations. 
+The functions are by default called on Pekko's dispatcher for blocking IO to avoid interfering with other stream operations. 
 See @ref:[Blocking Needs Careful Management](../../../typed/dispatchers.md#blocking-needs-careful-management) for an explanation on why this is important.
 
 Note that there are pre-built `unfoldResource`-like operators to wrap `java.io.InputStream`s in 
 @ref:[Additional Sink and Source converters](../index.md#additional-sink-and-source-converters), 
 `Iterator` in @ref:[fromIterator](fromIterator.md) and File IO in @ref:[File IO Sinks and Sources](../index.md#file-io-sinks-and-sources).
-Additional prebuilt technology specific connectors can also be found in the [Alpakka project](https://doc.akka.io/docs/alpakka/current/).
+Additional prebuilt technology specific connectors can also be found in [Pekko Connectors](https://doc.akka.io/docs/alpakka/current/).
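+
+As a quick sketch, wrapping a `BufferedReader` (the file name is a placeholder):
+
+```scala
+import java.io.{ BufferedReader, FileReader }
+import org.apache.pekko.stream.scaladsl.Source
+
+val lines = Source.unfoldResource[String, BufferedReader](
+  () => new BufferedReader(new FileReader("lines.txt")), // create
+  reader => Option(reader.readLine()),                   // read; None ends the stream
+  reader => reader.close())                              // close, also invoked on failure
+```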
 
 ## Examples
 
diff --git a/docs/src/main/paradox/stream/operators/Source/unfoldResourceAsync.md b/docs/src/main/paradox/stream/operators/Source/unfoldResourceAsync.md
index 04fae22493..d767ac8bda 100644
--- a/docs/src/main/paradox/stream/operators/Source/unfoldResourceAsync.md
+++ b/docs/src/main/paradox/stream/operators/Source/unfoldResourceAsync.md
@@ -26,7 +26,7 @@ fail the stream. The supervision strategy is used to handle exceptions from `rea
 Note that there are pre-built `unfoldResourceAsync`-like operators to wrap `java.io.InputStream`s in 
 @ref:[Additional Sink and Source converters](../index.md#additional-sink-and-source-converters), 
 `Iterator` in @ref:[fromIterator](fromIterator.md) and File IO in @ref:[File IO Sinks and Sources](../index.md#file-io-sinks-and-sources).
-Additional prebuilt technology specific connectors can also be found in the [Alpakka project](https://doc.akka.io/docs/alpakka/current/).
+Additional prebuilt technology specific connectors can also be found in [Pekko Connectors](https://doc.akka.io/docs/alpakka/current/).
 
 ## Examples
 
diff --git a/docs/src/main/paradox/stream/operators/StreamConverters/asJavaStream.md b/docs/src/main/paradox/stream/operators/StreamConverters/asJavaStream.md
index 33616fcab7..52fa8182c3 100644
--- a/docs/src/main/paradox/stream/operators/StreamConverters/asJavaStream.md
+++ b/docs/src/main/paradox/stream/operators/StreamConverters/asJavaStream.md
@@ -14,7 +14,7 @@ Create a sink which materializes into Java 8 `Stream` that can be run to trigger
 Elements emitted through the stream will be available for reading through the Java 8 `Stream`.
 
 The Java 8 `Stream` will be ended when the stream flowing into this `Sink` completes, and closing the Java
-`Stream` will cancel the inflow of this `Sink`. If the Java `Stream` throws an exception, the Akka stream is cancelled.
+`Stream` will cancel the inflow of this `Sink`. If the Java `Stream` throws an exception, the Pekko stream is cancelled.
 
 Be aware that Java `Stream` blocks current thread while waiting on next element from downstream.
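+
+A minimal sketch (assuming an implicit `ActorSystem` is in scope):
+
+```scala
+import org.apache.pekko.stream.scaladsl.{ Source, StreamConverters }
+
+// Running the stream materializes a java.util.stream.Stream fed by the Source.
+val jStream: java.util.stream.Stream[Int] =
+  Source(1 to 100).runWith(StreamConverters.asJavaStream[Int]())
+
+jStream.forEach(i => println(i)) // consuming blocks the calling thread
+```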
 
diff --git a/docs/src/main/paradox/stream/operators/index.md b/docs/src/main/paradox/stream/operators/index.md
index 1cabbf98b4..69712e037e 100644
--- a/docs/src/main/paradox/stream/operators/index.md
+++ b/docs/src/main/paradox/stream/operators/index.md
@@ -320,7 +320,7 @@ There is a number of fan-out operators for which currently no 'fluent' is API av
 
 ## Actor interop operators
 
-Operators meant for inter-operating between Akka Streams and Actors:
+Operators meant for inter-operating between Pekko Streams and Actors:
 
 
 | |Operator|Description|
diff --git a/docs/src/main/paradox/stream/reactive-streams-interop.md b/docs/src/main/paradox/stream/reactive-streams-interop.md
index 24419f9f96..7363dde597 100644
--- a/docs/src/main/paradox/stream/reactive-streams-interop.md
+++ b/docs/src/main/paradox/stream/reactive-streams-interop.md
@@ -2,27 +2,27 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
 <a id="reactive-streams-integration"></a>
 ## Overview
 
-Akka Streams implements the [Reactive Streams](https://www.reactive-streams.org/) standard for asynchronous stream processing with non-blocking
+Pekko Streams implements the [Reactive Streams](https://www.reactive-streams.org/) standard for asynchronous stream processing with non-blocking
 back pressure. 
 
 Since Java 9 the APIs of Reactive Streams has been included in the Java Standard library, under the  `java.util.concurrent.Flow` 
 namespace. For Java 8 there is instead a separate Reactive Streams artifact with the same APIs in the package `org.reactivestreams`.
 
-Akka streams provides interoperability for both these two API versions, the Reactive Streams interfaces directly through factories on the
+Pekko Streams provides interoperability with both of these API versions: the Reactive Streams interfaces are available directly through factories on the
 regular `Source` and `Sink` APIs. For the Java 9 and later built in interfaces there is a separate set of factories in 
 @scala[`org.apache.pekko.stream.scaladsl.JavaFlowSupport`]@java[`org.apache.pekko.stream.javadsl.JavaFlowSupport`].
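+
+For illustration, a sketch of the `java.util.concurrent.Flow` variant (`jdkPublisher` is an assumed stand-in):
+
+```scala
+import java.util.concurrent.Flow
+import org.apache.pekko.NotUsed
+import org.apache.pekko.stream.scaladsl.{ JavaFlowSupport, Source }
+
+def jdkPublisher: Flow.Publisher[String] = ??? // assumed JDK-Flow publisher
+
+// Same factories as for org.reactivestreams, but for java.util.concurrent.Flow
+val source: Source[String, NotUsed] = JavaFlowSupport.Source.fromPublisher(jdkPublisher)
+```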
 
@@ -55,7 +55,7 @@ Scala
 Java
 :   @@snip [ReactiveStreamsDocTest.java](/docs/src/test/java/jdocs/stream/ReactiveStreamsDocTest.java) { #author-storage-subscriber }
 
-Using an Akka Streams `Flow` we can transform the stream and connect those:
+Using a Pekko Streams `Flow` we can transform the stream and connect those:
 
 Scala
 :   @@snip [ReactiveStreamsDocSpec.scala](/docs/src/test/scala/docs/stream/ReactiveStreamsDocSpec.scala) { #authors #connect-all }
@@ -131,7 +131,7 @@ Please note that a factory is necessary to achieve reusability of the resulting
 
 ## Other implementations
 
-Implementing Reactive Streams makes it possible to plug Akka Streams together with other stream libraries that adhere to the standard.
+Implementing Reactive Streams makes it possible to plug Pekko Streams together with other stream libraries that adhere to the standard.
 An incomplete list of other implementations:
 
  * [Reactor (1.1+)](https://github.com/reactor/reactor)
diff --git a/docs/src/main/paradox/stream/stream-composition.md b/docs/src/main/paradox/stream/stream-composition.md
index 4f16cb1a93..e94d6b4cf7 100644
--- a/docs/src/main/paradox/stream/stream-composition.md
+++ b/docs/src/main/paradox/stream/stream-composition.md
@@ -2,26 +2,26 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
-Akka Streams provide a uniform model of stream processing graphs, which allows flexible composition of reusable
+Pekko Streams provide a uniform model of stream processing graphs, which allows flexible composition of reusable
 components. In this chapter we show how these look like from the conceptual and API perspective, demonstrating
 the modularity aspects of the library.
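+
+As a first taste of such composition (a minimal sketch):
+
+```scala
+import org.apache.pekko.NotUsed
+import org.apache.pekko.stream.scaladsl.{ Flow, RunnableGraph, Sink, Source }
+
+// A Source composed with a Flow is again a Source; adding a Sink closes the graph.
+val doubled: Source[Int, NotUsed] = Source(1 to 10).via(Flow[Int].map(_ * 2))
+val blueprint: RunnableGraph[NotUsed] = doubled.to(Sink.foreach(println))
+```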
 
 ## Basics of composition and modularity
 
-Every operator used in Akka Streams can be imagined as a "box" with input and output ports where elements to
+Every operator used in Pekko Streams can be imagined as a "box" with input and output ports where elements to
 be processed arrive and leave the operator. In this view, a `Source` is nothing else than a "box" with a single
 output port, or, a `BidiFlow` is a "box" with exactly two input and two output ports. In the figure below
 we illustrate the most commonly used operators viewed as "boxes".
@@ -33,7 +33,7 @@ and `Flow`, as these can be used to compose strict chains of operators.
 Fan-in and Fan-out operators have usually multiple input or multiple output ports, therefore they allow to build
 more complex graph layouts, not only chains. `BidiFlow` operators are usually useful in IO related tasks, where
 there are input and output channels to be handled. Due to the specific shape of `BidiFlow` it is easy to
-stack them on top of each other to build a layered protocol for example. The `TLS` support in Akka is for example
+stack them on top of each other to build a layered protocol, for example. The `TLS` support in Pekko, for instance, is
 implemented as a `BidiFlow`.
 
 These reusable components already allow the creation of complex processing networks. What we
@@ -119,7 +119,7 @@ Java
 ## Composing complex systems
 
 In the previous section we explored the possibility of composition, and hierarchy, but we stayed away from non-linear,
-generalized operators. There is nothing in Akka Streams though that enforces that stream processing layouts
+generalized operators. Nothing in Pekko Streams, however, enforces that stream processing layouts
 can only be linear. The DSL for `Source` and friends is optimized for creating such linear chains, as they are
 the most common in practice. There is a more advanced DSL for building complex graphs, that can be used if more
 flexibility is needed. We will see that the difference between the two DSLs is only on the surface: the concepts they
diff --git a/docs/src/main/paradox/stream/stream-cookbook.md b/docs/src/main/paradox/stream/stream-cookbook.md
index ac7a26eda2..708e65a185 100644
--- a/docs/src/main/paradox/stream/stream-cookbook.md
+++ b/docs/src/main/paradox/stream/stream-cookbook.md
@@ -2,25 +2,25 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
-This is a collection of patterns to demonstrate various usage of the Akka Streams API by solving small targeted
+This is a collection of patterns that demonstrate various uses of the Pekko Streams API by solving small targeted
 problems in the format of "recipes". The purpose of this page is to give inspiration and ideas how to approach
 various small tasks involving streams. The recipes in this page can be used directly as-is, but they are most powerful as
 starting points: customization of the code snippets is warmly encouraged. The recipes can be extended or can provide a 
 basis for the implementation of other [patterns](https://doc.akka.io/docs/alpakka/current/patterns.html) involving
-[Alpakka](https://doc.akka.io/docs/alpakka/current).
+[Pekko Connectors](https://doc.akka.io/docs/alpakka/current).
 
 This part also serves as supplementary material for the main body of documentation. It is a good idea to have this page
 open while reading the manual and look for examples demonstrating various streaming concepts
@@ -116,7 +116,7 @@ Java
 **Situation:** A stream of bytes is given as a stream of `ByteString` s and we want to calculate the cryptographic digest
 of the stream.
 
-This recipe uses a @ref[`GraphStage`](stream-customize.md) to define a custom Akka Stream operator, to host a mutable `MessageDigest` class (part of the Java Cryptography
+This recipe uses a @ref[`GraphStage`](stream-customize.md) to define a custom Pekko Stream operator, to host a mutable `MessageDigest` class (part of the Java Cryptography
 API) and update it with the bytes arriving from the stream. When the stream starts, the `onPull` handler of the
 operator is called, which bubbles up the `pull` event to its upstream. As a response to this pull, a ByteString
 chunk will arrive (`onPush`) which we use to update the digest, then it will pull for the next chunk.
@@ -217,7 +217,7 @@ a last step we merge back these values from the substreams into one single
 output stream.
 
 One noteworthy detail pertains to the @scala[`MaximumDistinctWords`] @java[`MAXIMUM_DISTINCT_WORDS`] parameter: this
-defines the breadth of the groupBy and merge operations. Akka Streams is
+defines the breadth of the groupBy and merge operations. Pekko Streams is
 focused on bounded resource consumption and the number of concurrently open
 inputs to the merge operator describes the amount of resources needed by the
 merge itself. Therefore only a finite number of substreams can be active at
@@ -493,7 +493,7 @@ for this actor.
 the same sequence, but capping the size of `ByteString` s. In other words we want to slice up `ByteString` s into smaller
 chunks if they exceed a size threshold.
 
-This can be achieved with a single @ref[`GraphStage`](stream-customize.md) to define a custom Akka Stream operator. The main logic of our operator is in `emitChunk()`
+This can be achieved with a single @ref[`GraphStage`](stream-customize.md) to define a custom Pekko Stream operator. The main logic of our operator is in `emitChunk()`
 which implements the following logic:
 
  * if the buffer is empty, and upstream is not closed we pull for more bytes, if it is closed we complete
diff --git a/docs/src/main/paradox/stream/stream-customize.md b/docs/src/main/paradox/stream/stream-customize.md
index 5635f2faec..f743803fac 100644
--- a/docs/src/main/paradox/stream/stream-customize.md
+++ b/docs/src/main/paradox/stream/stream-customize.md
@@ -2,20 +2,20 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
-While the processing vocabulary of Akka Streams is quite rich (see the @ref:[Streams Cookbook](stream-cookbook.md) for examples) it
+While the processing vocabulary of Pekko Streams is quite rich (see the @ref:[Streams Cookbook](stream-cookbook.md) for examples) it
 is sometimes necessary to define new transformation operators either because some functionality is missing from the
 stock operations, or for performance reasons. In this part we show how to build custom operators and graph
 junctions of various kinds.
@@ -37,7 +37,7 @@ operators by composing others. Where `GraphStage` differs is that it creates an
 smaller ones, and allows state to be maintained inside it in a safe way.
 
 As a first motivating example, we will build a new @apidoc[stream.*.Source] that will emit numbers from 1 until it is
-cancelled. To start, we need to define the "interface" of our operator, which is called *shape* in Akka Streams terminology
+cancelled. To start, we need to define the "interface" of our operator, which is called *shape* in Pekko Streams terminology
 (this is explained in more detail in the section @ref:[Modularity, Composition and Hierarchy](stream-composition.md)). This is how it looks:
 
 Scala
diff --git a/docs/src/main/paradox/stream/stream-dynamic.md b/docs/src/main/paradox/stream/stream-dynamic.md
index f9aa31dc13..18a3c24f06 100644
--- a/docs/src/main/paradox/stream/stream-dynamic.md
+++ b/docs/src/main/paradox/stream/stream-dynamic.md
@@ -2,14 +2,14 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/stream-error.md b/docs/src/main/paradox/stream/stream-error.md
index dfc5fba239..701890c74f 100644
--- a/docs/src/main/paradox/stream/stream-error.md
+++ b/docs/src/main/paradox/stream/stream-error.md
@@ -2,14 +2,14 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
@@ -106,7 +106,7 @@ Java
 
 ## Delayed restarts with a backoff operator
 
-Akka streams provides a @apidoc[stream.*.RestartSource$], @apidoc[stream.*.RestartSink$] and @apidoc[stream.*.RestartFlow$] for implementing the so-called *exponential backoff 
+Pekko Streams provides @apidoc[stream.*.RestartSource$], @apidoc[stream.*.RestartSink$] and @apidoc[stream.*.RestartFlow$] for implementing the so-called *exponential backoff 
 supervision strategy*, starting an operator again when it fails or completes, each time with a growing time delay between restarts.
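+
+A hedged sketch of the pattern (the `eventsSource` factory is an assumption, not part of the snipped sources):
+
+```scala
+import scala.concurrent.duration._
+import org.apache.pekko.NotUsed
+import org.apache.pekko.stream.RestartSettings
+import org.apache.pekko.stream.scaladsl.{ RestartSource, Source }
+
+def eventsSource(): Source[String, NotUsed] = ??? // assumed fallible source factory
+
+val settings = RestartSettings(minBackoff = 3.seconds, maxBackoff = 30.seconds, randomFactor = 0.2)
+// A fresh Source is created on every (re)start, with growing delays in between.
+val restarting: Source[String, NotUsed] = RestartSource.withBackoff(settings)(() => eventsSource())
+```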
 
 This pattern is useful when the operator fails or completes because some external resource is not available
@@ -126,7 +126,7 @@ Configurable parameters are:
 
 The following snippet shows how to create a backoff supervisor using @apidoc[stream.*.RestartSource$] 
 which will supervise the given @apidoc[stream.*.Source]. The `Source` in this case is a 
-stream of Server Sent Events, produced by akka-http. If the stream fails or completes at any point, the request will
+stream of Server Sent Events, produced by pekko-http. If the stream fails or completes at any point, the request will
 be made again, in increasing intervals of 3, 6, 12, 24 and finally 30 seconds (at which point it will remain capped due
 to the `maxBackoff` parameter):
 
diff --git a/docs/src/main/paradox/stream/stream-flows-and-basics.md b/docs/src/main/paradox/stream/stream-flows-and-basics.md
index b61e500c45..b41aacb926 100644
--- a/docs/src/main/paradox/stream/stream-flows-and-basics.md
+++ b/docs/src/main/paradox/stream/stream-flows-and-basics.md
@@ -2,14 +2,14 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
@@ -17,12 +17,12 @@ To use Akka Streams, add the module to your project:
 
 ## Core concepts
 
-Akka Streams is a library to process and transfer a sequence of elements using bounded buffer space. This
-latter property is what we refer to as *boundedness*, and it is the defining feature of Akka Streams. Translated to
+Pekko Streams is a library to process and transfer a sequence of elements using bounded buffer space. This
+latter property is what we refer to as *boundedness*, and it is the defining feature of Pekko Streams. Translated to
 everyday terms, it is possible to express a chain (or as we see later, graphs) of processing entities. Each of these
 entities executes independently (and possibly concurrently) from the others while only buffering a limited number
 of elements at any given time. This property of bounded buffers is one of the differences from the actor model, where each actor usually has
-an unbounded, or a bounded, but dropping mailbox. Akka Stream processing entities have bounded "mailboxes" that
+an unbounded, or a bounded, but dropping mailbox. Pekko Stream processing entities have bounded "mailboxes" that
 do not drop.
 
 Before we move on, let's define some basic terminology which will be used throughout the entire documentation:
@@ -37,7 +37,7 @@ downstream. Buffer sizes are always expressed as number of elements independentl
 Back-pressure
 : A means of flow-control, a way for consumers of data to notify a producer about their current availability, effectively
 slowing down the upstream producer to match their consumption speeds.
-In the context of Akka Streams back-pressure is always understood as *non-blocking* and *asynchronous*.
+In the context of Pekko Streams back-pressure is always understood as *non-blocking* and *asynchronous*.
 
 Non-Blocking
 : Means that a certain operation does not hinder the progress of the calling thread, even if it takes a long time to
@@ -53,7 +53,7 @@ Examples of operators are `map()`, `filter()`, custom ones extending @ref[`Graph
 junctions like `Merge` or `Broadcast`. For the full list of built-in operators see the @ref:[operator index](operators/index.md)
 
 
-When we talk about *asynchronous, non-blocking backpressure*, we mean that the operators available in Akka
+When we talk about *asynchronous, non-blocking backpressure*, we mean that the operators available in Pekko
 Streams will not use blocking calls but asynchronous message passing to exchange messages between each other.
 This way they can slow down a fast producer without blocking its thread. This is a thread-pool friendly
 design, since entities that need to wait (a fast producer waiting on a slow consumer) will not block the thread but
@@ -61,7 +61,7 @@ can hand it back for further use to an underlying thread-pool.
 
 ## Defining and running streams
 
-Linear processing pipelines can be expressed in Akka Streams using the following core abstractions:
+Linear processing pipelines can be expressed in Pekko Streams using the following core abstractions:
 
 Source
 : An operator with *exactly one output*, emitting data elements whenever downstream operators are
@@ -85,7 +85,7 @@ it will be represented by the @apidoc[stream.*.RunnableGraph] type, indicating t
 
 It is important to remember that even after constructing the `RunnableGraph` by connecting all the source, sink and
 different operators, no data will flow through it until it is materialized. Materialization is the process of
-allocating all resources needed to run the computation described by a Graph (in Akka Streams this will often involve
+allocating all resources needed to run the computation described by a Graph (in Pekko Streams this will often involve
 starting up Actors). Thanks to Flows being a description of the processing pipeline they are *immutable,
 thread-safe, and freely shareable*, which means that it is for example safe to share and send them between actors, to have
 one actor prepare the work, and then have it be materialized at some completely different place in the code.
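+
+For instance (a minimal sketch, assuming an implicit `ActorSystem` is in scope):
+
+```scala
+import scala.concurrent.Future
+import org.apache.pekko.stream.scaladsl.{ Keep, RunnableGraph, Sink, Source }
+
+// An immutable, shareable blueprint; nothing runs until run() is called.
+val blueprint: RunnableGraph[Future[Int]] =
+  Source(1 to 100).toMat(Sink.fold(0)(_ + _))(Keep.right)
+
+val sum: Future[Int] = blueprint.run() // materialization allocates the actors
+```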
@@ -144,7 +144,7 @@ Java
 
 @@@ note
 
-By default, Akka Streams elements support **exactly one** downstream operator.
+By default, Pekko Streams elements support **exactly one** downstream operator.
 Making fan-out (supporting multiple downstream operators) an explicit opt-in feature allows default stream elements to
 be less complex and more efficient. Also, it allows for greater flexibility on *how exactly* to handle the multicast scenarios,
 by providing named fan-out elements such as broadcast (signals all down-stream elements) or balance (signals one of available down-stream elements).
@@ -188,23 +188,23 @@ Java
 ### Illegal stream elements
 
 In accordance to the Reactive Streams specification ([Rule 2.13](https://github.com/reactive-streams/reactive-streams-jvm#2.13))
-Akka Streams do not allow `null` to be passed through the stream as an element. In case you want to model the concept
+Pekko Streams do not allow `null` to be passed through the stream as an element. In case you want to model the concept
 of absence of a value we recommend using @scala[@scaladoc[scala.Option](scala.Option) or @scaladoc[scala.util.Either](scala.util.Either)]@java[@javadoc[java.util.Optional](java.util.Optional) which is available since Java 8].
 
 ## Back-pressure explained
 
-Akka Streams implement an asynchronous non-blocking back-pressure protocol standardised by the [Reactive Streams](https://www.reactive-streams.org/)
-specification, which Akka is a founding member of.
+Pekko Streams implement an asynchronous non-blocking back-pressure protocol standardised by the [Reactive Streams](https://www.reactive-streams.org/)
+specification, which Pekko's predecessor Akka is a founding member of.
 
 The user of the library does not have to write any explicit back-pressure handling code — it is built in
-and dealt with automatically by all of the provided Akka Streams operators. It is possible however to add
+and dealt with automatically by all of the provided Pekko Streams operators. It is possible however to add
 explicit buffer operators with overflow strategies that can influence the behavior of the stream. This is especially important
 in complex processing graphs which may even contain loops (which *must* be treated with very special
 care, as explained in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles)).
 
 The back pressure protocol is defined in terms of the number of elements a downstream `Subscriber` is able to receive
 and buffer, referred to as `demand`.
-The source of data, referred to as `Publisher` in Reactive Streams terminology and implemented as @apidoc[stream.*.Source] in Akka
+The source of data, referred to as `Publisher` in Reactive Streams terminology and implemented as @apidoc[stream.*.Source] in Pekko
 Streams, guarantees that it will never emit more elements than the received total demand for any given `Subscriber`.
 
 @@@ note
@@ -213,7 +213,7 @@ The Reactive Streams specification defines its protocol in terms of `Publisher`
 These types are **not** meant to be user facing API, instead they serve as the low-level building blocks for
 different Reactive Streams implementations.
 
-Akka Streams implements these concepts as @apidoc[stream.*.Source], @apidoc[Flow](stream.*.Flow) (referred to as `Processor` in Reactive Streams)
+Pekko Streams implements these concepts as @apidoc[stream.*.Source], @apidoc[Flow](stream.*.Flow) (referred to as `Processor` in Reactive Streams)
 and @apidoc[stream.*.Sink] without exposing the Reactive Streams interfaces directly.
 If you need to integrate with other Reactive Stream libraries, read @ref:[Integrating with Reactive Streams](reactive-streams-interop.md).
 
@@ -259,9 +259,9 @@ this mode of operation is referred to as pull-based back-pressure.
 
 ## Stream Materialization
 
-When constructing flows and graphs in Akka Streams think of them as preparing a blueprint, an execution plan.
+When constructing flows and graphs in Pekko Streams, think of them as preparing a blueprint, an execution plan.
 Stream materialization is the process of taking a stream description (@apidoc[stream.*.RunnableGraph]) and allocating all the necessary resources
-it needs in order to run. In the case of Akka Streams this often means starting up Actors which power the processing,
+it needs in order to run. In the case of Pekko Streams this often means starting up Actors which power the processing,
 but is not restricted to that—it could also mean opening files or socket connections etc.—depending on what the stream needs.
 
 Materialization is triggered at so called "terminal operations". Most notably this includes the various forms of the `run()`
@@ -283,7 +283,7 @@ yet will materialize that operator multiple times.
 
 ### Operator Fusion
 
-By default, Akka Streams will fuse the stream operators. This means that the processing steps of a flow or
+By default, Pekko Streams will fuse the stream operators. This means that the processing steps of a flow or
 stream can be executed within the same Actor and has two consequences:
 
  * passing elements from one operator to the next is a lot faster between fused
@@ -327,7 +327,7 @@ is needed in order to allow the stream to run at all, you will have to insert ex
 <a id="flow-combine-mat"></a>
 ### Combining materialized values
 
-Since every operator in Akka Streams can provide a materialized value after being materialized, it is necessary
+Since every operator in Pekko Streams can provide a materialized value after being materialized, it is necessary
 to somehow express how these values should be composed to a final value when we plug these operators together. For this,
 many operator methods have variants that take an additional argument, a function, that will be used to combine the
 resulting values. Some examples of using these combiners are illustrated in the example below.
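+
+As a tiny sketch of the idea ahead of the full example (assuming an implicit `ActorSystem` is in scope):
+
+```scala
+import scala.concurrent.{ Future, Promise }
+import org.apache.pekko.stream.scaladsl.{ Keep, Sink, Source }
+
+// Keep.both combines the Promise from Source.maybe with the Future from the Sink.
+val (promise, first): (Promise[Option[Int]], Future[Option[Int]]) =
+  Source.maybe[Int].toMat(Sink.headOption)(Keep.both).run()
+```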
@@ -360,7 +360,7 @@ Java
 
 ## Stream ordering
 
-In Akka Streams almost all computation operators *preserve input order* of elements. This means that if inputs `{IA1,IA2,...,IAn}`
+In Pekko Streams almost all computation operators *preserve input order* of elements. This means that if inputs `{IA1,IA2,...,IAn}`
 "cause" outputs `{OA1,OA2,...,OAk}` and inputs `{IB1,IB2,...,IBm}` "cause" outputs `{OB1,OB2,...,OBl}` and all of
 `IAi` happened before all `IBi` then `OAi` happens before `OBi`.
 
@@ -380,7 +380,7 @@ merge is performed.
 ## Actor Materializer Lifecycle
 
 The @apidoc[stream.Materializer] is a component that is responsible for turning the stream blueprint into a running stream
-and emitting the "materialized value". An @apidoc[actor.ActorSystem] wide `Materializer` is provided by the Akka `Extension` 
+and emitting the "materialized value". An @apidoc[actor.ActorSystem]-wide `Materializer` is provided by the Pekko `Extension` 
 @apidoc[SystemMaterializer] by @scala[having an implicit `ActorSystem` in scope]@java[passing the `ActorSystem` to the 
 various `run` methods] this way there is no need to worry about the `Materializer` unless there are special requirements.
 
@@ -390,7 +390,7 @@ An important aspect of working with streams and actors is understanding a `Mater
 The materializer is bound to the lifecycle of the @apidoc[actor.ActorRefFactory] it is created from, which in practice will
 be either an @apidoc[actor.ActorSystem] or @apidoc[ActorContext](actor.ActorContext) (when the materializer is created within an @apidoc[actor.Actor]). 
 
-Tying it to the `ActorSystem` should be replaced with using the system materializer from Akka 2.6 and on.
+Creating a custom materializer tied to the `ActorSystem` should be replaced with using the system materializer, which is always available in Pekko.
 
 When run by the system materializer the streams will run until the `ActorSystem` is shut down. When the materializer is shut down
 *before* the streams have run to completion, they will be terminated abruptly. This is a little different than the
@@ -412,7 +412,7 @@ This is a very useful technique if the stream is closely related to the actor, e
 You may also cause a `Materializer` to shut down by explicitly calling @apidoc[shutdown()](stream.Materializer) {scala="#shutdown():Unit" java="#shutdown()"} on it, resulting in abruptly terminating all of the streams it has been running then. 
 
 Sometimes, however, you may want to explicitly create a stream that will out-last the actor's life.
-For example, you are using an Akka stream to push some large stream of data to an external service.
+For example, you are using a Pekko stream to push some large stream of data to an external service.
 You may want to eagerly stop the Actor since it has performed all of its duties already:
 
 Scala
diff --git a/docs/src/main/paradox/stream/stream-graphs.md b/docs/src/main/paradox/stream/stream-graphs.md
index 7a56617253..be7459d0a8 100644
--- a/docs/src/main/paradox/stream/stream-graphs.md
+++ b/docs/src/main/paradox/stream/stream-graphs.md
@@ -2,20 +2,20 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
-In Akka Streams computation graphs are not expressed using a fluent DSL like linear computations are, instead they are
+In Pekko Streams, computation graphs are not expressed using a fluent DSL like linear computations are; instead they are
 written in a more graph-resembling DSL which aims to make translating graph drawings (e.g. from notes taken
 from design discussions, or illustrations in protocol specifications) to and from code simpler. In this section we'll
 dive into the multiple ways of constructing and re-using graphs, as well as explain common pitfalls and how to avoid them.
@@ -33,7 +33,7 @@ Graphs are built from simple Flows which serve as the linear connections within
 which serve as fan-in and fan-out points for Flows. Thanks to the junctions having meaningful types based on their behavior
 and making them explicit elements these elements should be rather straightforward to use.
 
-Akka Streams currently provide these junctions (for a detailed list see the @ref[operator index](operators/index.md)):
+Pekko Streams currently provide these junctions (for a detailed list see the @ref[operator index](operators/index.md)):
 
  * **Fan-out**
 
@@ -55,7 +55,7 @@ Akka Streams currently provide these junctions (for a detailed list see the @ref
 
 One of the goals of the GraphDSL DSL is to look similar to how one would draw a graph on a whiteboard, so that it is
 simple to translate a design from whiteboard to code and be able to relate those two. Let's illustrate this by translating
-the below hand drawn graph into Akka Streams:
+the below hand drawn graph into Pekko Streams:
 
 ![simple-graph-example.png](../images/simple-graph-example.png)
 
@@ -418,7 +418,7 @@ all processing stops after some time. After some investigation we observe that:
  * through merging from `source` we increase the number of elements flowing in the cycle
  * by broadcasting back to the cycle we do not decrease the number of elements in the cycle
 
-Since Akka Streams (and Reactive Streams in general) guarantee bounded processing (see the "Buffering" section for more
+Since Pekko Streams (and Reactive Streams in general) guarantee bounded processing (see the "Buffering" section for more
 details) it means that only a bounded number of elements are buffered over any time span. Since our cycle gains more and
 more elements, eventually all of its internal buffers become full, backpressuring `source` forever. To be able
 to process more elements from `source` elements would need to leave the cycle somehow.
diff --git a/docs/src/main/paradox/stream/stream-introduction.md b/docs/src/main/paradox/stream/stream-introduction.md
index 9e4ea40388..9a3c4112bf 100644
--- a/docs/src/main/paradox/stream/stream-introduction.md
+++ b/docs/src/main/paradox/stream/stream-introduction.md
@@ -22,33 +22,33 @@ and must be retransmitted in that case. Failure to do so would lead to holes at
 the receiving side.
 
 For these reasons we decided to bundle up a solution to these problems as an
-Akka Streams API. The purpose is to offer an intuitive and safe way to
+Pekko Streams API. The purpose is to offer an intuitive and safe way to
 formulate stream processing setups such that we can then execute them
 efficiently and with bounded resource usage—no more OutOfMemoryErrors. In order
 to achieve this our streams need to be able to limit the buffering that they
 employ, they need to be able to slow down producers if the consumers cannot
 keep up. This feature is called back-pressure and is at the core of the
-[Reactive Streams](https://www.reactive-streams.org/) initiative of which Akka is a
+[Reactive Streams](https://www.reactive-streams.org/) initiative of which Pekko's predecessor Akka is a
 founding member. For you this means that the hard problem of propagating and
-reacting to back-pressure has been incorporated in the design of Akka Streams
-already, so you have one less thing to worry about; it also means that Akka
+reacting to back-pressure has been incorporated in the design of Pekko Streams
+already, so you have one less thing to worry about; it also means that Pekko
 Streams interoperate seamlessly with all other Reactive Streams implementations
 (where Reactive Streams interfaces define the interoperability SPI while
-implementations like Akka Streams offer a nice user API).
+implementations like Pekko Streams offer a nice user API).
 
 ### Relationship with Reactive Streams
 
-The Akka Streams API is completely decoupled from the Reactive Streams
-interfaces. While Akka Streams focus on the formulation of transformations on
+The Pekko Streams API is completely decoupled from the Reactive Streams
+interfaces. While Pekko Streams focus on the formulation of transformations on
 data streams the scope of Reactive Streams is to define a common mechanism
 of how to move data across an asynchronous boundary without losses, buffering
 or resource exhaustion.
 
-The relationship between these two is that the Akka Streams API is geared
-towards end-users while the Akka Streams implementation uses the Reactive
+The relationship between these two is that the Pekko Streams API is geared
+towards end-users while the Pekko Streams implementation uses the Reactive
 Streams interfaces internally to pass data between the different operators.
 For this reason you will not find any resemblance between the Reactive
-Streams interfaces and the Akka Streams API. This is in line with the
+Streams interfaces and the Pekko Streams API. This is in line with the
 expectations of the Reactive Streams project, whose primary purpose is to
 define interfaces such that different streaming implementation can
 interoperate; it is not the purpose of Reactive Streams to describe an end-user
@@ -63,7 +63,7 @@ and for best results we recommend the following approach:
 
  * Read the @ref:[Quick Start Guide](stream-quickstart.md) to get a feel for how streams
 look like and what they can do.
- * The top-down learners may want to peruse the @ref:[Design Principles behind Akka Streams](../general/stream/stream-design.md) at this
+ * The top-down learners may want to peruse the @ref:[Design Principles behind Pekko Streams](../general/stream/stream-design.md) at this
 point.
  * The bottom-up learners may feel more at home rummaging through the
 @ref:[Streams Cookbook](stream-cookbook.md).
diff --git a/docs/src/main/paradox/stream/stream-io.md b/docs/src/main/paradox/stream/stream-io.md
index 3a84683419..b53ef2efea 100644
--- a/docs/src/main/paradox/stream/stream-io.md
+++ b/docs/src/main/paradox/stream/stream-io.md
@@ -2,22 +2,22 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
-Akka Streams provides a way of handling File IO and TCP connections with Streams.
-While the general approach is very similar to the @ref:[Actor based TCP handling](../io-tcp.md) using Akka IO,
-by using Akka Streams you are freed of having to manually react to back-pressure signals,
+Pekko Streams provides a way of handling File IO and TCP connections with Streams.
+While the general approach is very similar to the @ref:[Actor based TCP handling](../io-tcp.md) using Pekko IO,
+by using Pekko Streams you are freed of having to manually react to back-pressure signals,
 as the library does it transparently for you.
 
 ## Streaming TCP
@@ -50,7 +50,7 @@ Java
 
 ![tcp-stream-run.png](../images/tcp-stream-run.png)
 
-Notice that while most building blocks in Akka Streams are reusable and freely shareable, this is *not* the case for the
+Notice that while most building blocks in Pekko Streams are reusable and freely shareable, this is *not* the case for the
 incoming connection Flow, since it directly corresponds to an existing, already accepted connection its handling can
 only ever be materialized *once*.
 
@@ -69,7 +69,7 @@ Hello World!!!
 
 In this example we implement a rather naive Read Evaluate Print Loop client over TCP.
 Let's say we know a server has exposed a simple command line interface over TCP,
-and would like to interact with it using Akka Streams over TCP. To open an outgoing connection socket we use
+and would like to interact with it using Pekko Streams over TCP. To open an outgoing connection socket we use
 the `outgoingConnection` method:
 
 Scala
@@ -161,7 +161,7 @@ The `SSLEngine` instance can then be used with the binding or outgoing connectio
 
 ## Streaming File IO
 
-Akka Streams provide simple Sources and Sinks that can work with @apidoc[util.ByteString] instances to perform IO operations
+Pekko Streams provide simple Sources and Sinks that can work with @apidoc[util.ByteString] instances to perform IO operations
 on files.
 
 Streaming data from a file is as easy as creating a *FileIO.fromPath* given a target path, and an optional
diff --git a/docs/src/main/paradox/stream/stream-parallelism.md b/docs/src/main/paradox/stream/stream-parallelism.md
index 6b7ab42d20..e1f6c0c04e 100644
--- a/docs/src/main/paradox/stream/stream-parallelism.md
+++ b/docs/src/main/paradox/stream/stream-parallelism.md
@@ -2,20 +2,20 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
-Akka Streams operators (be it simple operators on Flows and Sources or graph junctions) are "fused" together
+Pekko Streams operators (be they simple operators on Flows and Sources or graph junctions) are "fused" together
 and executed sequentially by default. This avoids the overhead of events crossing asynchronous boundaries but
 limits the flow to execute at most one operator at any given time.
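+
+A minimal sketch of opting into parallel execution (assuming an implicit `ActorSystem` is in scope):
+
+```scala
+import org.apache.pekko.stream.scaladsl.{ Sink, Source }
+
+Source(1 to 1000)
+  .map(_ * 2).async // asynchronous boundary: the two maps now run pipelined
+  .map(_ + 1)
+  .runWith(Sink.ignore)
+```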
 
diff --git a/docs/src/main/paradox/stream/stream-quickstart.md b/docs/src/main/paradox/stream/stream-quickstart.md
index bbc859feec..994c12d68c 100644
--- a/docs/src/main/paradox/stream/stream-quickstart.md
+++ b/docs/src/main/paradox/stream/stream-quickstart.md
@@ -2,26 +2,26 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
 @@@ note
 
-Both the Java and Scala DSLs of Akka Streams are bundled in the same JAR. For a smooth development experience, when using an IDE such as Eclipse or IntelliJ, you can disable the auto-importer from suggesting `javadsl` imports when working in Scala,
+Both the Java and Scala DSLs of Pekko Streams are bundled in the same JAR. For a smooth development experience, when using an IDE such as Eclipse or IntelliJ, you can disable the auto-importer from suggesting `javadsl` imports when working in Scala,
 or viceversa. See @ref:[IDE Tips](../additional/ide.md). 
 @@@
 
 ## First steps
 
-A stream usually begins at a source, so this is also how we start an Akka
+A stream usually begins at a source, so this is also how we start a Pekko
 Stream. Before we create one, we import the full complement of streaming tools:
 
 Scala
@@ -38,7 +38,7 @@ Scala
 Java
 :   @@snip [QuickStartDocTest.java](/docs/src/test/java/jdocs/stream/QuickStartDocTest.java) { #other-imports }
 
-And @scala[an object]@java[a class] to start an Akka @apidoc[actor.ActorSystem] and hold your code @scala[. Making the `ActorSystem`
+And @scala[an object]@java[a class] to start a Pekko @apidoc[actor.ActorSystem] and hold your code @scala[. Making the `ActorSystem`
 implicit makes it available to the streams without manually passing it when running them]:
 
 Scala
@@ -76,7 +76,7 @@ Java
 This line will complement the source with a consumer function—in this example
 we print out the numbers to the console—and pass this little stream
 setup to an Actor that runs it. This activation is signaled by having “run” be
-part of the method name; there are other methods that run Akka Streams, and
+part of the method name; there are other methods that run Pekko Streams, and
 they all follow this pattern.
 
 When running this @scala[source in a `scala.App`]@java[program] you might notice it does not
@@ -89,7 +89,7 @@ Scala
 Java
 :   @@snip [QuickStartDocTest.java](/docs/src/test/java/jdocs/stream/QuickStartDocTest.java) { #run-source-and-terminate }
 
-The nice thing about Akka Streams is that the `Source` is a
+The nice thing about Pekko Streams is that the `Source` is a
 description of what you want to run, and like an architect’s blueprint it can
 be reused, incorporated into a larger design. We may choose to transform the
 source of integers and write it to a file instead:
@@ -109,9 +109,9 @@ important to keep in mind that nothing is actually computed yet, this is a
 description of what we want to have computed once we run the stream. Then we
 convert the resulting series of numbers into a stream of @apidoc[util.ByteString]
 objects describing lines in a text file. This stream is then run by attaching a
-file as the receiver of the data. In the terminology of Akka Streams this is
+file as the receiver of the data. In the terminology of Pekko Streams this is
 called a @apidoc[stream.*.Sink]. @apidoc[stream.IOResult] is a type that IO operations return in
-Akka Streams in order to tell you how many bytes or elements were consumed and
+Pekko Streams in order to tell you how many bytes or elements were consumed and
 whether the stream terminated normally or exceptionally.
 
 ### Browser-embedded example
@@ -119,19 +119,19 @@ whether the stream terminated normally or exceptionally.
 <a name="here-is-another-example-that-you-can-edit-and-run-in-the-browser-"></a>
 Here is another example that you can edit and run in the browser:
 
-@@fiddle [TwitterStreamQuickstartDocSpec.scala](/docs/src/test/scala/docs/stream/TwitterStreamQuickstartDocSpec.scala) { #fiddle_code template=Akka layout=v75 minheight=400px }
+@@fiddle [TwitterStreamQuickstartDocSpec.scala](/docs/src/test/scala/docs/stream/TwitterStreamQuickstartDocSpec.scala) { #fiddle_code template=Pekko layout=v75 minheight=400px }
 
 
 ## Reusable Pieces
 
-One of the nice parts of Akka Streams—and something that other stream libraries
+One of the nice parts of Pekko Streams—and something that other stream libraries
 do not offer—is that not only sources can be reused like blueprints, all other
 elements can be as well. We can take the file-writing @apidoc[stream.*.Sink], prepend
 the processing steps necessary to get the @apidoc[util.ByteString] elements from
 incoming strings and package that up as a reusable piece as well. Since the
 language for writing these streams always flows from left to right (just like
 plain English), we need a starting point that is like a source but with an
-“open” input. In Akka Streams this is called a @apidoc[stream.*.Flow]:
+“open” input. In Pekko Streams this is called a @apidoc[stream.*.Flow]:
 
 Scala
 :   @@snip [QuickStartDocSpec.scala](/docs/src/test/scala/docs/stream/QuickStartDocSpec.scala) { #transform-sink }
@@ -162,7 +162,7 @@ Java
 ## Time-Based Processing
 
 Before we start looking at a more involved example we explore the streaming
-nature of what Akka Streams can do. Starting from the `factorials` source
+nature of what Pekko Streams can do. Starting from the `factorials` source
 we transform the stream by zipping it together with another stream,
 represented by a @apidoc[stream.*.Source] that emits the number 0 to 100: the first
 number emitted by the `factorials` source is the factorial of zero, the
@@ -187,26 +187,26 @@ the streams to produce a billion numbers each then you will notice that your
 JVM does not crash with an OutOfMemoryError, even though you will also notice
 that running the streams happens in the background, asynchronously (this is the
 reason for the auxiliary information to be provided as a @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)], in the future). The
-secret that makes this work is that Akka Streams implicitly implement pervasive
+secret that makes this work is that Pekko Streams implicitly implement pervasive
 flow control, all operators respect back-pressure. This allows the throttle
 operator to signal to all its upstream sources of data that it can only
 accept elements at a certain rate—when the incoming rate is higher than one per
 second the throttle operator will assert *back-pressure* upstream.
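+
+The corresponding sketch (assuming an implicit `ActorSystem` is in scope):
+
+```scala
+import scala.concurrent.duration._
+import org.apache.pekko.stream.scaladsl.Source
+
+Source(1L to 1000000000L)
+  .throttle(1, 1.second) // back-pressures upstream to one element per second
+  .runForeach(println)
+```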
 
-This is all there is to Akka Streams in a nutshell—glossing over the
+This is all there is to Pekko Streams in a nutshell—glossing over the
 fact that there are dozens of sources and sinks and many more stream
 transformation operators to choose from, see also @ref:[operator index](operators/index.md).
 
 # Reactive Tweets
 
 A typical use case for stream processing is consuming a live stream of data that we want to extract or aggregate some
-other data from. In this example we'll consider consuming a stream of tweets and extracting information concerning Akka from them.
+other data from. In this example we'll consider consuming a stream of tweets and extracting information concerning Pekko from them.
 
 We will also consider the problem inherent to all non-blocking streaming
 solutions: *"What if the subscriber is too slow to consume the live stream of
 data?"*. Traditionally the solution is often to buffer the elements, but this
 can—and usually will—cause eventual buffer overflows and instability of such
-systems. Instead Akka Streams depend on internal backpressure signals that
+systems. Instead Pekko Streams depend on internal backpressure signals that
 make it possible to control what should happen in such scenarios.
 
 Here's the data model we'll be working with throughout the quickstart examples:
@@ -228,7 +228,7 @@ sections of the docs, and then come back to this quickstart to see it all pieced
 ## Transforming and consuming simple streams
 
 The example application we will be looking at is a simple Twitter feed stream from which we'll want to extract certain information,
-like for example finding all twitter handles of users who tweet about `#akka`.
+like for example finding all twitter handles of users who tweet about `#pekko`.
 
 First we prepare our environment by creating an @apidoc[actor.ActorSystem] which will be responsible for running the streams we are about to create:
 
@@ -238,7 +238,7 @@ Scala
 Java
 :   @@snip [TwitterStreamQuickstartDocTest.java](/docs/src/test/java/jdocs/stream/TwitterStreamQuickstartDocTest.java) { #system-setup }
 
-Let's assume we have a stream of tweets readily available. In Akka this is expressed as a @scala[`Source[Out, M]`]@java[`Source<Out, M>`]:
+Let's assume we have a stream of tweets readily available. In Pekko this is expressed as a @scala[`Source[Out, M]`]@java[`Source<Out, M>`]:
 
 Scala
 :   @@snip [TwitterStreamQuickstartDocSpec.scala](/docs/src/test/scala/docs/stream/TwitterStreamQuickstartDocSpec.scala) { #tweet-source }
@@ -327,11 +327,11 @@ Now let's say we want to persist all hashtags, as well as all author names from
 For example we'd like to write all author handles into one file, and all hashtags into another file on disk.
 This means we have to split the source stream into two streams which will handle the writing to these different files.
 
-Elements that can be used to form such "fan-out" (or "fan-in") structures are referred to as "junctions" in Akka Streams.
+Elements that can be used to form such "fan-out" (or "fan-in") structures are referred to as "junctions" in Pekko Streams.
 One of these that we'll be using in this example is called @apidoc[stream.*.Broadcast$], and it emits elements from its
 input port to all of its output ports.
 
-Akka Streams intentionally separate the linear stream structures (Flows) from the non-linear, branching ones (Graphs)
+Pekko Streams intentionally separate the linear stream structures (Flows) from the non-linear, branching ones (Graphs)
 in order to offer the most convenient API for both of these cases. Graphs can express arbitrarily complex stream setups
 at the expense of not reading as familiarly as collection transformations.
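
A rough sketch of such a junction (the `Tweet` model and the two sinks are simplified stand-ins for the ones used in this chapter):

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.ClosedShape
import org.apache.pekko.stream.scaladsl.{ Broadcast, Flow, GraphDSL, RunnableGraph, Sink, Source }

implicit val system: ActorSystem = ActorSystem("reactive-tweets")

final case class Tweet(author: String, hashtags: Set[String])

val tweets: Source[Tweet, NotUsed] =
  Source(List(Tweet("user1", Set("#pekko")), Tweet("user2", Set("#streams"))))

val writeAuthors = Sink.foreach[String](println)  // stand-in for the author file sink
val writeHashtags = Sink.foreach[String](println) // stand-in for the hashtag file sink

val g = RunnableGraph.fromGraph(GraphDSL.create() { implicit b =>
  import GraphDSL.Implicits._
  // one input, two outputs: every tweet goes to both branches
  val bcast = b.add(Broadcast[Tweet](2))
  tweets ~> bcast.in
  bcast.out(0) ~> Flow[Tweet].map(_.author) ~> writeAuthors
  bcast.out(1) ~> Flow[Tweet].mapConcat(_.hashtags.toList) ~> writeHashtags
  ClosedShape
})
g.run()
```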
 
@@ -362,14 +362,14 @@ as Flows, Sinks or Sources, which will be explained in detail in
 
 ## Back-pressure in action
 
-One of the main advantages of Akka Streams is that they *always* propagate back-pressure information from stream Sinks
+One of the main advantages of Pekko Streams is that they *always* propagate back-pressure information from stream Sinks
 (Subscribers) to their Sources (Publishers). It is not an optional feature, and is enabled at all times. To learn more
-about the back-pressure protocol used by Akka Streams and all other Reactive Streams compatible implementations read
+about the back-pressure protocol used by Pekko Streams and all other Reactive Streams compatible implementations read
 @ref:[Back-pressure explained](stream-flows-and-basics.md#back-pressure-explained).
 
-A typical problem applications (not using Akka Streams) like this often face is that they are unable to process the incoming data fast enough,
+A typical problem that applications like this (ones not using Pekko Streams) often face is that they are unable to process the incoming data fast enough,
 either temporarily or by design, and will start buffering incoming data until there's no more space to buffer, resulting
-in either @javadoc[OutOfMemoryError](java.lang.OutOfMemoryError) s or other severe degradations of service responsiveness. With Akka Streams buffering can
+in either @javadoc[OutOfMemoryError](java.lang.OutOfMemoryError) s or other severe degradations of service responsiveness. With Pekko Streams buffering can
 and must be handled explicitly. For example, if we are only interested in the "*most recent tweets, with a buffer of 10
 elements*" this can be expressed using the @apidoc[buffer](stream.*.Source) {scala="#buffer(size:Int,overflowStrategy:org.apache.pekko.stream.OverflowStrategy):FlowOps.this.Repr[Out]" java="#buffer(int,org.apache.pekko.stream.OverflowStrategy)"} element:
 
@@ -432,7 +432,7 @@ Scala
 Java
 :   @@snip [TwitterStreamQuickstartDocTest.java](/docs/src/test/java/jdocs/stream/TwitterStreamQuickstartDocTest.java) { #tweets-runnable-flow-materialized-twice }
 
-Many elements in Akka Streams provide materialized values which can be used for obtaining either results of computation or
+Many elements in Pekko Streams provide materialized values which can be used for obtaining either results of computation or
 steering these elements which will be discussed in detail in @ref:[Stream Materialization](stream-flows-and-basics.md#stream-materialization). Summing up this section, now we know
 what happens behind the scenes when we run this one-liner, which is equivalent to the multi line version above:
 
diff --git a/docs/src/main/paradox/stream/stream-rate.md b/docs/src/main/paradox/stream/stream-rate.md
index 84b818ad73..bd570bc491 100644
--- a/docs/src/main/paradox/stream/stream-rate.md
+++ b/docs/src/main/paradox/stream/stream-rate.md
@@ -2,21 +2,21 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction
 
 When upstream and downstream rates differ, especially when the throughput has spikes, it can be useful to introduce
-buffers in a stream. In this chapter we cover how buffers are used in Akka Streams.
+buffers in a stream. In this chapter we cover how buffers are used in Pekko Streams.
 
 <a id="async-stream-buffers"></a>
 ## Buffers for asynchronous operators
@@ -52,7 +52,7 @@ execution model of flows where an element completely passes through the processi
 enters the flow. The next element is processed by an asynchronous operator as soon as it has emitted the previous one.
 
 While pipelining in general increases throughput, in practice there is a cost of passing an element through the
-asynchronous (and therefore thread crossing) boundary which is significant. To amortize this cost Akka Streams uses
+asynchronous (and therefore thread crossing) boundary which is significant. To amortize this cost Pekko Streams uses
 a *windowed*, *batching* backpressure strategy internally. It is windowed because as opposed to a [Stop-And-Wait](https://en.wikipedia.org/wiki/Stop-and-wait_ARQ)
 protocol multiple elements might be "in-flight" concurrently with requests for elements. It is also batching because
 a new element is not immediately requested once an element has been drained from the window-buffer but multiple elements
@@ -62,13 +62,13 @@ propagating the backpressure signal through the asynchronous boundary.
 While this internal protocol is mostly invisible to the user (apart from its throughput increasing effects) there are
 situations when these details get exposed. In all of our previous examples we always assumed that the rate of the
 processing chain is strictly coordinated through the backpressure signal causing all operators to process no faster than
-the throughput of the connected chain. There are tools in Akka Streams however that enable the rates of different segments
+the throughput of the connected chain. There are tools in Pekko Streams however that enable the rates of different segments
 of a processing chain to be "detached" or to define the maximum throughput of the stream through external timing sources.
 These situations are exactly those where the internal batching buffering strategy suddenly becomes non-transparent.
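
For instance, the default size of these internal buffers can be tuned globally; assuming the setting keeps its Akka name under the `pekko` prefix, it would look something like:

```
pekko.stream.materializer.max-input-buffer-size = 16
```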
 
 ### Internal buffers and their effect
 
-As we have explained, for performance reasons Akka Streams introduces a buffer for every asynchronous operator.
+As we have explained, for performance reasons Pekko Streams introduces a buffer for every asynchronous operator.
 The purpose of these buffers is solely optimization, in fact the size of 1 would be the most natural choice if there
 would be no need for throughput improvements. Therefore it is recommended to keep these buffer sizes small,
 and increase them only to a level suitable for the throughput requirements of the application. Default buffer sizes
@@ -110,7 +110,7 @@ should be to decrease the input buffer of the affected elements to 1.
 
 @@@
 
-## Buffers in Akka Streams
+## Buffers in Pekko Streams
 
 In this section we will discuss *explicit* user defined buffers that are part of the domain logic of the stream processing
 pipeline of an application.
diff --git a/docs/src/main/paradox/stream/stream-refs.md b/docs/src/main/paradox/stream/stream-refs.md
index da73bd3edc..f7a1b9af31 100644
--- a/docs/src/main/paradox/stream/stream-refs.md
+++ b/docs/src/main/paradox/stream/stream-refs.md
@@ -2,14 +2,14 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
@@ -22,29 +22,29 @@ To use Akka Streams, add the module to your project:
   this module in production just yet.
 @@@
 
-Stream references, or "stream refs" for short, allow running Akka Streams across multiple nodes within 
-an Akka Cluster. 
+Stream references, or "stream refs" for short, allow running Pekko Streams across multiple nodes within 
+a Pekko Cluster. 
 
-Unlike heavier "streaming data processing" frameworks, Akka Streams are neither "deployed" nor automatically distributed.
-Akka stream refs are, as the name implies, references to existing parts of a stream, and can be used to create a 
+Unlike heavier "streaming data processing" frameworks, Pekko Streams are neither "deployed" nor automatically distributed.
+Pekko stream refs are, as the name implies, references to existing parts of a stream, and can be used to create a 
 distributed processing framework or to introduce such capabilities in specific parts of your application.
   
-Stream refs are trivial to use in existing clustered Akka applications and require no additional configuration
-or setup. They automatically maintain flow-control / back-pressure over the network and employ Akka's failure detection
+Stream refs are trivial to use in existing clustered Pekko applications and require no additional configuration
+or setup. They automatically maintain flow-control / back-pressure over the network and employ Pekko's failure detection
 mechanisms to fail-fast ("let it crash!") in the case of failures of remote nodes. They can be seen as an implementation 
 of the [Work Pulling Pattern](https://www.michaelpollmeier.com/akka-work-pulling-pattern), which one would otherwise 
 implement manually.
 
 @@@ note
   A useful way to think about stream refs is: 
-  "like an `ActorRef`, but for Akka Streams's `Source` and `Sink`".
+  "like an `ActorRef`, but for Pekko Streams's `Source` and `Sink`".
   
   Stream refs refer to an already existing, possibly remote, `Sink` or `Source`.
   This is not to be mistaken with deploying streams remotely, which this feature is not intended for.
 @@@
 
 @@@ warning { title=IMPORTANT }
-  Use stream refs with Akka Cluster. The @ref:[failure detector can cause quarantining](../typed/cluster-concepts.md#quarantined) if plain Akka remoting is used.
+  Use stream refs with Pekko Cluster. The @ref:[failure detector can cause quarantining](../typed/cluster-concepts.md#quarantined) if plain Pekko remoting is used.
 @@@
 
 ## Stream References
@@ -60,8 +60,8 @@ It is recommended to mix and introduce stream refs in actor-messaging-based syst
 orchestrate and prepare such message flows, and later the stream refs are used to do the flow-controlled message transfer.  
 
 Stream refs are not persistent. However, it is simple to build a resumable stream by introducing such a protocol
-in the actor messaging layer. Stream refs are absolutely expected to be sent over Akka remoting to other nodes
-within a cluster using Akka Cluster, and therefore complement, instead of compete, with plain Actor messaging.
+in the actor messaging layer. Stream refs are absolutely expected to be sent over Pekko remoting to other nodes
+within a cluster using Pekko Cluster, and therefore complement, rather than compete with, plain Actor messaging.
 Actors would usually be used to establish the stream via some initial message saying, "I want to offer you many log
 elements (the stream ref)," or conversely, "if you need to send me much data, here is the stream ref you can use to do so".
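
A hedged sketch of the offering side (the `LogsOffer` message and `logs` source are illustrative):

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.SourceRef
import org.apache.pekko.stream.scaladsl.{ Source, StreamRefs }

final case class LogsOffer(streamRef: SourceRef[String])

def offerLogs()(implicit system: ActorSystem): LogsOffer = {
  val logs: Source[String, NotUsed] = Source(List("line-1", "line-2"))
  // materialize a SourceRef that the remote side can run as an ordinary Source
  LogsOffer(logs.runWith(StreamRefs.sourceRef()))
}
// the resulting LogsOffer would then be sent to the remote actor as a reply
```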
 
@@ -117,7 +117,7 @@ The dual of @scala[@scaladoc[`SourceRef`](pekko.stream.SinkRef)]@java[@javadoc[`
 They can be used to offer the other side the capability to 
 send to the *origin* side data in a streaming, flow-controlled fashion. The origin here allocates a `Sink`,
 which could be as simple as a `Sink.foreach` or as advanced as a complex `Sink` which streams the incoming data
-into various other systems (e.g., any of the Alpakka-provided `Sink`s).
+into various other systems (e.g., any of the `Sink`s provided by Pekko Connectors).
 
 @@@ note
   To form a good mental model of `SinkRef`s, you can think of them as being similar to "passive mode" in FTP.
@@ -179,7 +179,7 @@ StreamRefs require serialization, since the whole point is to send them between
 is provided when `SourceRef` and `SinkRef` are sent directly as messages; however, the recommended use is to wrap them
 into your own actor message classes. 
 
-When @ref[Akka Jackson](../serialization-jackson.md) is used, serialization of wrapped `SourceRef` and `SinkRef` 
+When @ref[Pekko Jackson](../serialization-jackson.md) is used, serialization of wrapped `SourceRef` and `SinkRef` 
 will work out of the box.
  
 If you are using some other form of serialization you will need to use the @apidoc[stream.StreamRefResolver] extension 
diff --git a/docs/src/main/paradox/stream/stream-substream.md b/docs/src/main/paradox/stream/stream-substream.md
index 456b6831a7..435b4254b9 100644
--- a/docs/src/main/paradox/stream/stream-substream.md
+++ b/docs/src/main/paradox/stream/stream-substream.md
@@ -2,14 +2,14 @@
 
 ## Dependency
 
-To use Akka Streams, add the module to your project:
+To use Pekko Streams, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream_$scala.binary.version$"
+  artifact="pekko-stream_$scala.binary.version$"
   version=PekkoVersion
 }
 
diff --git a/docs/src/main/paradox/stream/stream-testkit.md b/docs/src/main/paradox/stream/stream-testkit.md
index e8acc07f4c..bd2dd2223a 100644
--- a/docs/src/main/paradox/stream/stream-testkit.md
+++ b/docs/src/main/paradox/stream/stream-testkit.md
@@ -2,32 +2,32 @@
 
 ## Dependency
 
-To use Akka Stream TestKit, add the module to your project:
+To use Pekko Stream TestKit, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-stream-testkit_$scala.binary.version$"
+  artifact="pekko-stream-testkit_$scala.binary.version$"
   version=PekkoVersion
   scope="test"
 }
 
 ## Introduction
 
-Verifying behavior of Akka Stream sources, flows and sinks can be done using
+Verifying behavior of Pekko Stream sources, flows and sinks can be done using
 various code patterns and libraries. Here we will discuss testing these
 elements using:
 
  * simple sources, sinks and flows;
- * sources and sinks in combination with @apidoc[testkit.TestProbe] from the `akka-testkit` module;
- * sources and sinks specifically crafted for writing tests from the `akka-stream-testkit` module.
+ * sources and sinks in combination with @apidoc[testkit.TestProbe] from the `pekko-testkit` module;
+ * sources and sinks specifically crafted for writing tests from the `pekko-stream-testkit` module.
 
 It is important to keep your data processing pipeline as separate sources,
 flows and sinks. This makes them testable by wiring them up to other
-sources or sinks, or some test harnesses that `akka-testkit` or
-`akka-stream-testkit` provide.
+sources or sinks, or some test harnesses that `pekko-testkit` or
+`pekko-stream-testkit` provide.
 
 ## Built-in sources, sinks and operators
 
@@ -65,9 +65,9 @@ Java
 
 ## TestKit
 
-Akka Stream offers integration with Actors out of the box. This support can be
+Pekko Stream offers integration with Actors out of the box. This support can be
 used for writing stream tests that use familiar @apidoc[testkit.TestProbe] from the
-`akka-testkit` API.
+`pekko-testkit` API.
 
 One of the more straightforward tests would be to materialize stream to a
 @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)] and then use @scala[@scaladoc[pipe](pekko.pattern.PipeToSupport#pipe[T](future:scala.concurrent.Future[T])(implicitexecutionContext:scala.concurrent.ExecutionContext):PipeToSupport.this.PipeableFuture[T])]@java[@scaladoc[Patterns.pipe](org.apache.pekko.pattern.Patterns$#pipe[T](future:java.util.concurrent.CompletionStage[T],context:scala.concurrent.ExecutionContext):or [...]
@@ -104,7 +104,7 @@ Java
 ## Streams TestKit
 
 You may have noticed various code patterns that emerge when testing stream
-pipelines. Akka Stream has a separate `akka-stream-testkit` module that
+pipelines. Pekko Stream has a separate `pekko-stream-testkit` module that
 provides tools specifically for writing stream tests. This module comes with
 two main components that are @apidoc[stream.testkit.*.TestSource$] and @apidoc[stream.testkit.*.TestSink$] which
 provide sources and sinks that materialize to probes that allow fluent API.
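
As a small sketch of the fluent probe API (assuming an implicit `ActorSystem` in scope):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.Source
import org.apache.pekko.stream.testkit.scaladsl.TestSink

implicit val system: ActorSystem = ActorSystem("stream-testkit-demo")

Source(1 to 4)
  .filter(_ % 2 == 0) // keeps 2 and 4
  .map(_ * 2)         // emits 4 and 8
  .runWith(TestSink.probe[Int])
  .request(2)
  .expectNext(4, 8)
  .expectComplete()
```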
diff --git a/docs/src/main/paradox/supervision-classic.md b/docs/src/main/paradox/supervision-classic.md
index 5acb9c7426..6944236315 100644
--- a/docs/src/main/paradox/supervision-classic.md
+++ b/docs/src/main/paradox/supervision-classic.md
@@ -1,6 +1,6 @@
 # Classic Supervision
 
-This chapter outlines the concept behind the supervision in Akka Classic, for the
+This chapter outlines the concept behind the supervision in Pekko Classic, for the
 corresponding overview of the new APIs see @ref:[supervision](general/supervision.md)
 
 <a id="supervision-directives"></a>
@@ -39,7 +39,7 @@ supervision is about forming a recursive fault handling structure. If you try
 to do too much at one level, it will become hard to reason about, hence the
 recommended way, in this case, is to add a level of supervision.
 
-Akka implements a specific form called “parental supervision”. Actors can only
+Pekko implements a specific form called “parental supervision”. Actors can only
 be created by other actors—where the top-level actor is provided by the
 library—and each created actor is supervised by its parent. This restriction
 makes the formation of actor supervision hierarchies implicit and encourages
@@ -76,7 +76,7 @@ user-created actors, the guardian named `"/user"`. Actors created using
 `system.actorOf()` are children of this actor. This means that when this
 guardian terminates, all normal actors in the system will be shutdown, too. It
 also means that this guardian’s supervisor strategy determines how the
-top-level normal actors are supervised. Since Akka 2.1 it is possible to
+top-level normal actors are supervised. It is possible to
 configure this using the setting `pekko.actor.guardian-supervisor-strategy`,
 which takes the fully-qualified class-name of a
 `SupervisorStrategyConfigurator`. When the guardian escalates a failure,
@@ -112,7 +112,7 @@ stopped).
 
 ## One-For-One Strategy vs. All-For-One Strategy
 
-There are two classes of supervision strategies which come with Akka:
+There are two classes of supervision strategies which come with Pekko:
 `OneForOneStrategy` and `AllForOneStrategy`. Both are configured
 with a mapping from exception type to supervision directive (see
 [above](#supervision-directives)) and limits on how often a child is allowed to fail
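
For example, a classic supervisor might declare such a mapping like this (a standard sketch; the exception-to-directive choices are illustrative):

```scala
import org.apache.pekko.actor.{ Actor, OneForOneStrategy, Props }
import org.apache.pekko.actor.SupervisorStrategy._

import scala.concurrent.duration._

class Supervisor extends Actor {
  override val supervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
      case _: ArithmeticException      => Resume   // keep the child's state
      case _: NullPointerException     => Restart  // wipe the child's state
      case _: IllegalArgumentException => Stop     // terminate the child
      case _: Exception                => Escalate // fail upwards
    }

  def receive = {
    case p: Props => sender() ! context.actorOf(p) // create supervised children on request
  }
}
```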
diff --git a/docs/src/main/paradox/testing.md b/docs/src/main/paradox/testing.md
index 995ec7994c..387027cc61 100644
--- a/docs/src/main/paradox/testing.md
+++ b/docs/src/main/paradox/testing.md
@@ -5,7 +5,7 @@ For the new API see @ref[testing](typed/testing.md).
 
 ## Module info
 
-To use Akka Testkit, you must add the following dependency in your project:
+To use Pekko Testkit, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
@@ -26,7 +26,7 @@ development cycle. The actor model presents a different view on how units of
 code are delimited and how they interact, which influences how to
 perform tests.
 
-Akka comes with a dedicated module `akka-testkit` for supporting tests.
+Pekko comes with a dedicated module `pekko-testkit` for supporting tests.
 
 <a id="async-integration-testing"></a>
 ## Asynchronous Testing: `TestKit`
@@ -320,7 +320,7 @@ His full example is also available @ref:[here](testing.md#example).
 The tight timeouts you use during testing on your lightning-fast notebook will
 invariably lead to spurious test failures on the heavily loaded Jenkins server
 (or similar). To account for this situation, all maximum durations are
-internally scaled by a factor taken from the @ref:[Configuration](general/configuration-reference.md#config-akka-testkit),
+internally scaled by a factor taken from the @ref:[Configuration](general/configuration-reference.md#config-pekko-testkit),
 `pekko.test.timefactor`, which defaults to 1.
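
For example, to triple all test timeouts on a loaded CI machine (the value is illustrative):

```
pekko.test.timefactor = 3.0
```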
 
 You can scale other durations with the same factor by using the @scala[implicit conversion
@@ -705,7 +705,7 @@ exception stack traces
 The testing facilities described up to this point were aiming at formulating
 assertions about a system’s behavior. If a test fails, it is usually your job
 to find the cause, fix it and verify the test again. This process is supported
-by debuggers as well as logging, where the Akka toolkit offers the following
+by debuggers as well as logging, where the Pekko toolkit offers the following
 options:
 
 * *Logging of exceptions thrown within Actor instances*
@@ -714,13 +714,13 @@ options:
 
 @@@ div { .group-scala }
 * *Logging of message invocations on certain actors*
-   This is enabled by a setting in the @ref:[Configuration](general/configuration-reference.md#config-akka-actor) — namely
+   This is enabled by a setting in the @ref:[Configuration](general/configuration-reference.md#config-pekko-actor) — namely
 `pekko.actor.debug.receive` — which enables the `loggable`
 statement to be applied to an actor’s `receive` function:
 
 @@snip [TestkitDocSpec.scala](/docs/src/test/scala/docs/testkit/TestkitDocSpec.scala) { #logging-receive }
 
-If the aforementioned setting is not given in the @ref:[Configuration](general/configuration-reference.md#config-akka-actor), this method will
+If the aforementioned setting is not given in the @ref:[Configuration](general/configuration-reference.md#config-pekko-actor), this method will
 pass through the given `Receive` function unmodified, meaning that
 there is no runtime cost unless enabled.
 
@@ -760,7 +760,7 @@ pekko {
 
 ## Different Testing Frameworks
 
-Akka’s test suite is written using [ScalaTest](https://www.scalatest.org),
+Pekko’s test suite is written using [ScalaTest](https://www.scalatest.org),
 which also shines through in documentation examples. However, the TestKit and
 its facilities do not depend on that framework, so you can essentially use
 whichever suits your development style best.
@@ -794,8 +794,8 @@ Some [Specs2](https://etorreborre.github.io/specs2/) users have contributed exam
  beneficial also for the third point—is to apply the TestKit together
  with `org.specs2.specification.Scope`.
   * The Specification traits provide a `Duration` DSL which uses partly the same method names as `scala.concurrent.duration.Duration`, resulting in ambiguous implicits if `scala.concurrent.duration._` is imported. There are two workarounds:
-     * either use the Specification variant of Duration and supply an implicit conversion to the Akka Duration. This conversion is not supplied with the
- Akka distribution because that would mean that our JAR files would depend on
+     * either use the Specification variant of Duration and supply an implicit conversion to the Pekko Duration. This conversion is not supplied with the
+ Pekko distribution because that would mean that our JAR files would depend on
  Specs2, which is not justified by this little feature.
      * or mix `org.specs2.time.NoTimeConversions` into the Specification.
   * Specifications are by default executed concurrently, which requires some care when writing the tests or the `sequential` keyword.
@@ -805,15 +805,15 @@ Some [Specs2](https://etorreborre.github.io/specs2/) users have contributed exam
 ## Configuration
 
 There are several configuration properties for the TestKit module, please refer
-to the @ref:[reference configuration](general/configuration-reference.md#config-akka-testkit).
+to the @ref:[reference configuration](general/configuration-reference.md#config-pekko-testkit).
 
 @@@ div { .group-scala }
 
 ## Example
 
 Ray Roestenburg's example code from his blog, which unfortunately is only available on
-[web archive](https://web.archive.org/web/20180114133958/http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html),
-adapted to work with Akka 2.x.
+[web archive](https://web.archive.org/web/20180114133958/http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html),
+adapted to work with Pekko.
 
 @@snip [TestKitUsageSpec.scala](/docs/src/test/scala/docs/testkit/TestKitUsageSpec.scala) { #testkit-usage }
 
@@ -848,9 +848,9 @@ instead of using `TestActorRef` whenever possible.
 @@@ warning
 
 Due to the synchronous nature of `TestActorRef`, it will **not** work with some support
-traits that Akka provides as they require asynchronous behaviors to function properly.
+traits that Pekko provides as they require asynchronous behaviors to function properly.
 Examples of traits that do not mix well with test actor refs are @ref:[PersistentActor](persistence.md#example)
-and @ref:[AtLeastOnceDelivery](persistence.md#at-least-once-delivery) provided by @ref:[Akka Persistence](persistence.md).
+and @ref:[AtLeastOnceDelivery](persistence.md#at-least-once-delivery) provided by @ref:[Pekko Persistence](persistence.md).
 
 @@@
 
@@ -960,5 +960,5 @@ before sending the test message
 the test message
 
 Feel free to experiment with the possibilities, and if you find useful
-patterns, don't hesitate to let the Akka forums know about them! Who knows,
+patterns, don't hesitate to let the Pekko forums know about them! Who knows,
 common operations might even be worked into nice DSLs.
diff --git a/docs/src/main/paradox/typed/actor-discovery.md b/docs/src/main/paradox/typed/actor-discovery.md
index 49696f39de..43f62b94ff 100644
--- a/docs/src/main/paradox/typed/actor-discovery.md
+++ b/docs/src/main/paradox/typed/actor-discovery.md
@@ -1,13 +1,13 @@
 # Actor discovery
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Actors](../actors.md#actorselection).
+You are viewing the documentation for the new actor APIs; to view the Pekko Classic documentation, see @ref:[Classic Actors](../actors.md#actorselection).
 
 ## Dependency
 
-To use Akka Actor Typed, you must add the following dependency in your project:
+To use Pekko Actor Typed, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
diff --git a/docs/src/main/paradox/typed/actor-lifecycle.md b/docs/src/main/paradox/typed/actor-lifecycle.md
index 6ecceb8f9f..25ca60cd25 100644
--- a/docs/src/main/paradox/typed/actor-lifecycle.md
+++ b/docs/src/main/paradox/typed/actor-lifecycle.md
@@ -1,16 +1,16 @@
 ---
-project.description: The Akka Actor lifecycle.
+project.description: The Pekko Actor lifecycle.
 ---
 # Actor lifecycle
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Actors](../actors.md).
+You are viewing the documentation for the new actor APIs; to view the Pekko Classic documentation, see @ref:[Classic Actors](../actors.md).
 
 ## Dependency
 
-To use Akka Actor Typed, you must add the following dependency in your project:
+To use Pekko Actor Typed, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
diff --git a/docs/src/main/paradox/typed/actors.md b/docs/src/main/paradox/typed/actors.md
index 1aea9e221b..74d1071d49 100644
--- a/docs/src/main/paradox/typed/actors.md
+++ b/docs/src/main/paradox/typed/actors.md
@@ -1,13 +1,13 @@
 ---
-project.description: The Actor model, managing internal state and changing behavior in Akka Actors.
+project.description: The Actor model, managing internal state and changing behavior in Pekko Actors.
 ---
 # Introduction to Actors
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Actors](../actors.md).
+You are viewing the documentation for the new actor APIs; to view the Pekko Classic documentation, see @ref:[Classic Actors](../actors.md).
 
 ## Module info
 
-To use Akka Actors, add the following dependency in your project:
+To use Pekko Actors, add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
@@ -22,13 +22,13 @@ To use Akka Actors, add the following dependency in your project:
   scope2=test
 }
 
-Both the Java and Scala DSLs of Akka modules are bundled in the same JAR. For a smooth development experience,
+Both the Java and Scala DSLs of Pekko modules are bundled in the same JAR. For a smooth development experience,
 when using an IDE such as Eclipse or IntelliJ, you can disable the auto-importer from suggesting `javadsl`
 imports when working in Scala, or vice versa. See @ref:[IDE Tips](../additional/ide.md). 
 
 @@project-info{ projectId="actor-typed" }
 
-## Akka Actors
+## Pekko Actors
 
 The [Actor Model](https://en.wikipedia.org/wiki/Actor_model) provides a higher level of abstraction for writing concurrent
 and distributed systems. It alleviates the developer from having to deal with
@@ -36,13 +36,12 @@ explicit locking and thread management, making it easier to write correct
 concurrent and parallel systems. Actors were defined in the 1973 paper by Carl
 Hewitt but have been popularized by the Erlang language, and used for example at
 Ericsson with great success to build highly concurrent and reliable telecom
-systems. The API of Akka’s Actors has borrowed some of its syntax from Erlang.
+systems. The API of Pekko’s Actors has borrowed some of its syntax from Erlang.
  
 ## First example
 
-If you are new to Akka you might want to start with reading the @ref:[Getting Started Guide](guide/introduction.md)
-and then come back here to learn more. We also recommend watching the short 
-[introduction video to Akka actors](https://akka.io/blog/news/2019/12/03/akka-typed-actor-intro-video).  
+If you are new to Pekko you might want to start by reading the @ref:[Getting Started Guide](guide/introduction.md)
+and then come back here to learn more. 
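
To give a flavour of the API discussed below, a minimal greeter might look like this (a sketch consistent with the console output shown further down):

```scala
import org.apache.pekko.actor.typed.{ ActorSystem, Behavior }
import org.apache.pekko.actor.typed.scaladsl.Behaviors

object Greeter {
  final case class Greet(whom: String)

  def apply(): Behavior[Greet] =
    Behaviors.receive { (context, message) =>
      context.log.info("Hello {}!", message.whom)
      Behaviors.same
    }
}

// the guardian behavior doubles as the system's entry point
val system: ActorSystem[Greeter.Greet] = ActorSystem(Greeter(), "hello")
system ! Greeter.Greet("World")
```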
 
 It is helpful to become familiar with the foundational, external and internal
 ecosystem of your Actors, to see what you can leverage and customize as needed, see
@@ -148,18 +147,18 @@ An application normally consists of a single @apidoc[typed.ActorSystem], running
 The console output may look like this:
 
 ```
-[INFO] [03/13/2018 15:50:05.814] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/greeter] Hello World!
-[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/greeter] Hello Akka!
-[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-2] [akka://hello/user/World] Greeting 1 for World
-[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/Akka] Greeting 1 for Akka
-[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello World!
-[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello Akka!
-[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/World] Greeting 2 for World
-[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello World!
-[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/Akka] Greeting 2 for Akka
-[INFO] [03/13/2018 15:50:05.816] [hello-pekko.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello Akka!
-[INFO] [03/13/2018 15:50:05.816] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/World] Greeting 3 for World
-[INFO] [03/13/2018 15:50:05.816] [hello-pekko.actor.default-dispatcher-6] [akka://hello/user/Akka] Greeting 3 for Akka
+[INFO] [03/13/2018 15:50:05.814] [hello-pekko.actor.default-dispatcher-4] [pekko://hello/user/greeter] Hello World!
+[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [pekko://hello/user/greeter] Hello Pekko!
+[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-2] [pekko://hello/user/World] Greeting 1 for World
+[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [pekko://hello/user/Pekko] Greeting 1 for Pekko
+[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-5] [pekko://hello/user/greeter] Hello World!
+[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-5] [pekko://hello/user/greeter] Hello Pekko!
+[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [pekko://hello/user/World] Greeting 2 for World
+[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-5] [pekko://hello/user/greeter] Hello World!
+[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [pekko://hello/user/Pekko] Greeting 2 for Pekko
+[INFO] [03/13/2018 15:50:05.816] [hello-pekko.actor.default-dispatcher-5] [pekko://hello/user/greeter] Hello Pekko!
+[INFO] [03/13/2018 15:50:05.816] [hello-pekko.actor.default-dispatcher-4] [pekko://hello/user/World] Greeting 3 for World
+[INFO] [03/13/2018 15:50:05.816] [hello-pekko.actor.default-dispatcher-6] [pekko://hello/user/Pekko] Greeting 3 for Pekko
 ```
 
 You will also need to add a @ref:[logging dependency](logging.md) to see that output when running.
@@ -168,7 +167,7 @@ You will also need to add a @ref:[logging dependency](logging.md) to see that ou
 
 #### Here is another example that you can edit and run in the browser:
 
-@@fiddle [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #fiddle_code template=Akka layout=v75 minheight=400px }
+@@fiddle [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #fiddle_code template=Pekko layout=v75 minheight=400px }
 
 @@@
 
diff --git a/docs/src/main/paradox/typed/choosing-cluster.md b/docs/src/main/paradox/typed/choosing-cluster.md
index 11c925ab9b..8ecb0c99ad 100644
--- a/docs/src/main/paradox/typed/choosing-cluster.md
+++ b/docs/src/main/paradox/typed/choosing-cluster.md
@@ -1,15 +1,12 @@
-<a id="when-and-where-to-use-akka-cluster"></a>
-# Choosing Akka Cluster
+<a id="when-and-where-to-use-pekko-cluster"></a>
+# Choosing Pekko Cluster
 
 An architectural choice you have to make is if you are going to use a microservices architecture or
-a traditional distributed application. This choice will influence how you should use Akka Cluster.
-
-The [Stateful or Stateless applications: to Akka Cluster or not](https://akka.io/blog/news/2020/06/01/akka-cluster-motivation)
-video is a good starting point to understand the motivation to use Akka Cluster.
+a traditional distributed application. This choice will influence how you should use Pekko Cluster.
 
 ## Microservices
 
-Microservices has many attractive properties, such as the independent nature of microservices allows for
+A microservices architecture has many attractive properties; for example, the independent nature of microservices allows for
 multiple smaller and more focused teams that can deliver new functionality more frequently and can
 respond quicker to business opportunities. Reactive Microservices should be isolated, autonomous, and have
 a single responsibility as identified by Jonas Bonér in the book
@@ -17,7 +14,7 @@ a single responsibility as identified by Jonas Bonér in the book
 
 In a microservices architecture, you should consider communication within a service and between services.
 
-In general we recommend against using Akka Cluster and actor messaging between _different_ services because that
+In general we recommend against using Pekko Cluster and actor messaging between _different_ services because that
 would result in too tight a coupling between the services and difficulty deploying them independently of
 each other, which is one of the main reasons for using a microservices architecture.
 See the discussion on @extref[Internal and External Communication](platform-guide:concepts/internal-and-external-communication.html)
@@ -26,16 +23,16 @@ for some background on this.
 Nodes of a single service (collectively called a cluster) require less decoupling. They share the same code and
 are deployed together, as a set, by a single team or individual. There might be two versions running concurrently
 during a rolling deployment, but deployment of the entire set has a single point of control. For this reason,
-intra-service communication can take advantage of Akka Cluster, failure management and actor messaging, which
+intra-service communication can take advantage of Pekko Cluster, failure management and actor messaging, which
 is convenient to use and has great performance.
 
-Between different services [Akka HTTP](https://doc.akka.io/docs/akka-http/current/) or
-[Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/) can be used for synchronous (yet non-blocking)
-communication and [Akka Streams Kafka](https://doc.akka.io/docs/alpakka-kafka/current/) or other
-[Alpakka](https://doc.akka.io/docs/alpakka/current/) connectors for integration asynchronous communication.
+Between different services [Pekko HTTP](https://doc.akka.io/docs/akka-http/current/) or
+[Pekko gRPC](https://doc.akka.io/docs/akka-grpc/current/) can be used for synchronous (yet non-blocking)
+communication and [Pekko Streams Kafka](https://doc.akka.io/docs/alpakka-kafka/current/) or other
+[Pekko Connectors](https://doc.akka.io/docs/alpakka/current/) for asynchronous integration.
 All those communication mechanisms work well with streaming of messages with end-to-end back-pressure, and the
 synchronous communication tools can also be used for single request response interactions. It is also important
-to note that when using these tools both sides of the communication do not have to be implemented with Akka,
+to note that when using these tools both sides of the communication do not have to be implemented with Pekko,
 nor does the programming language matter.
 
 ## Traditional distributed application
@@ -43,7 +40,7 @@ nor does the programming language matter.
 We acknowledge that microservices also introduce many new challenges and it's not the only way to
 build applications. A traditional distributed application may have less complexity and work well in many cases.
 For example for a small startup, with a single team, building an application where time to market is everything.
-Akka Cluster can efficiently be used for building such distributed application.
+Pekko Cluster can efficiently be used for building such distributed application.
 
 In this case, you have a single deployment unit, built from a single code base (or using traditional binary
 dependency management to modularize) but deployed across many nodes using a single cluster.
@@ -53,7 +50,7 @@ have specialized runtime roles which means that the cluster is not totally homog
 is just a runtime behavior and doesn't cause the same kind of problems you might get from tight coupling of
 totally separate artifacts.
 
-A tightly coupled distributed application has served the industry and many Akka users well for years and is
+A tightly coupled distributed application has served the industry and many Pekko users well for years and is
 still a valid choice.
 
 ## Distributed monolith
diff --git a/docs/src/main/paradox/typed/cluster-concepts.md b/docs/src/main/paradox/typed/cluster-concepts.md
index 86fc82c447..d198df2ed0 100644
--- a/docs/src/main/paradox/typed/cluster-concepts.md
+++ b/docs/src/main/paradox/typed/cluster-concepts.md
@@ -1,18 +1,18 @@
 # Cluster Specification
 
-This document describes the design concepts of Akka Cluster. For the guide on using Akka Cluster please see either
+This document describes the design concepts of Pekko Cluster. For the guide on using Pekko Cluster please see either
 
 * @ref:[Cluster Usage](../typed/cluster.md)
-* @ref:[Cluster Usage with classic Akka APIs](../cluster-usage.md)
+* @ref:[Cluster Usage with classic Pekko APIs](../cluster-usage.md)
 * @ref:[Cluster Membership Service](cluster-membership.md)
  
 ## Introduction
 
-Akka Cluster provides a fault-tolerant decentralized peer-to-peer based
+Pekko Cluster provides a fault-tolerant decentralized peer-to-peer based
 @ref:[Cluster Membership Service](cluster-membership.md#cluster-membership-service) with no single point of failure or 
 single point of bottleneck. It does this using @ref:[gossip](#gossip) protocols and an automatic [failure detector](#failure-detector).
 
-Akka Cluster allows for building distributed applications, where one application or service spans multiple nodes
+Pekko Cluster allows for building distributed applications, where one application or service spans multiple nodes
 (in practice multiple @apidoc[typed.ActorSystem]s). 
 
 ## Terms
@@ -30,7 +30,7 @@ and membership state transitions.
 
 ### Gossip
 
-The cluster membership used in Akka is based on Amazon's [Dynamo](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf) system and
+The cluster membership used in Pekko is based on Amazon's [Dynamo](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf) system and
 particularly the approach taken in Basho's [Riak](https://en.wikipedia.org/wiki/Riak) distributed database.
 Cluster membership is communicated using a [Gossip Protocol](https://en.wikipedia.org/wiki/Gossip_protocol), where the current
 state of the cluster is gossiped randomly through the cluster, with preference to
@@ -65,7 +65,7 @@ nodes have been downed.
 
 #### Failure Detector
 
-The failure detector in Akka Cluster is responsible for trying to detect if a node is
+The failure detector in Pekko Cluster is responsible for trying to detect if a node is
 `unreachable` from the rest of the cluster. For this we are using the
 @ref:[Phi Accrual Failure Detector](failure-detector.md) implementation.
 To be able to survive sudden abnormalities, such as garbage collection pauses and
@@ -130,8 +130,8 @@ A variation of *push-pull gossip* is used to reduce the amount of gossip
 information sent around the cluster. In push-pull gossip a digest is sent
 representing current versions but not actual values; the recipient of the gossip
 can then send back any values for which it has newer versions and also request
-values for which it has outdated versions. Akka uses a single shared state with
-a vector clock for versioning, so the variant of push-pull gossip used in Akka
+values for which it has outdated versions. Pekko uses a single shared state with
+a vector clock for versioning, so the variant of push-pull gossip used in Pekko
 makes use of this version to only push the actual state as needed.
 
 Periodically, by default every second, each node chooses another random
diff --git a/docs/src/main/paradox/typed/cluster-dc.md b/docs/src/main/paradox/typed/cluster-dc.md
index 6ec80f9f15..37666c389c 100644
--- a/docs/src/main/paradox/typed/cluster-dc.md
+++ b/docs/src/main/paradox/typed/cluster-dc.md
@@ -1,11 +1,11 @@
 # Multi-DC Cluster
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Multi-DC Cluster](../cluster-dc.md)
+You are viewing the documentation for the new actor APIs; to view the Pekko Classic documentation, see @ref:[Classic Multi-DC Cluster](../cluster-dc.md)
 
-This chapter describes how @ref[Akka Cluster](cluster.md) can be used across
+This chapter describes how @ref[Pekko Cluster](cluster.md) can be used across
 multiple data centers, availability zones or regions.
 
-The reason for making the Akka Cluster aware of data center boundaries is that
+The reason for making the Pekko Cluster aware of data center boundaries is that
 communication across data centers typically has much higher latency and higher failure
 rate than communication between nodes in the same data center.
 
@@ -16,10 +16,10 @@ up a large cluster into smaller groups of nodes for better scalability.
 
 ## Dependency
 
-To use Akka Cluster add the following dependency in your project:
+To use Pekko Cluster add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -35,7 +35,7 @@ There can be many reasons for using more than one data center, such as:
 * Serve requests from a location near the user to provide better responsiveness.
 * Balance the load over many servers.
 
-It's possible to run an ordinary Akka Cluster with default settings that spans multiple
+It's possible to run an ordinary Pekko Cluster with default settings that spans multiple
 data centers but that may result in problems like:
 
 * Management of Cluster membership is stalled during network partitions as described in a 
@@ -56,23 +56,23 @@ data centers but that may result in problems like:
   that are close over distant nodes. E.g. a cluster-aware router would be more efficient
   if it preferred routing messages to nodes in its own data center. 
 
-To avoid some of these problems one can run a separate Akka Cluster per data center and use another
+To avoid some of these problems one can run a separate Pekko Cluster per data center and use another
 communication channel between the data centers, such as HTTP or an external message broker.
 However, many of the nice tools that are built on top of the Cluster membership information are lost.
 For example, it wouldn't be possible to use @ref[Distributed Data](distributed-data.md) across the separate clusters.
 
-We often recommend implementing a micro-service as one Akka Cluster. The external API of the
-service would be HTTP, gRPC or a message broker, and not Akka Remoting or Cluster (see additional discussion
-in @ref:[When and where to use Akka Cluster](choosing-cluster.md)). 
+We often recommend implementing a micro-service as one Pekko Cluster. The external API of the
+service would be HTTP, gRPC or a message broker, and not Pekko Remoting or Cluster (see additional discussion
+in @ref:[When and where to use Pekko Cluster](choosing-cluster.md)). 
  
 The internal communication within the service that is running on several nodes would use ordinary actor 
-messaging or the tools based on Akka Cluster. When deploying this service to multiple data
+messaging or the tools based on Pekko Cluster. When deploying this service to multiple data
 centers it would be inconvenient if the internal communication could not use ordinary actor 
-messaging because it was separated into several Akka Clusters. The benefit of using Akka
+messaging because it was separated into several Pekko Clusters. The benefit of using Pekko
 messaging internally is performance as well as ease of development and reasoning about
 your domain in terms of Actors.
 
-Therefore, it's possible to make the Akka Cluster aware of data centers so that one Akka
+Therefore, it's possible to make the Pekko Cluster aware of data centers so that one Pekko
 Cluster can span multiple data centers and still be tolerant to network partitions.
 
 ## Defining the data centers
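
In configuration this boils down to assigning each node a data center; assuming the setting keeps its Akka name under the `pekko` prefix, it would look something like:

```
pekko.cluster.multi-data-center.self-data-center = "dc-west"
```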
@@ -190,7 +190,7 @@ one in each data center. This is because the region/coordinator is only aware of
 and will activate the entity there. It's unaware of the existence of corresponding entities in the 
 other data centers.
 
-Especially when used together with Akka Persistence that is based on the single-writer principle
+Especially when used together with Pekko Persistence that is based on the single-writer principle
 it is important to avoid running the same entity at multiple locations at the same time with a
 shared data store. That would result in corrupt data since the events stored by different instances
 may be interleaved and would be interpreted differently in a later replay. For replicated persistent
diff --git a/docs/src/main/paradox/typed/cluster-membership.md b/docs/src/main/paradox/typed/cluster-membership.md
index 2d7eba183c..9c8f946254 100644
--- a/docs/src/main/paradox/typed/cluster-membership.md
+++ b/docs/src/main/paradox/typed/cluster-membership.md
@@ -1,9 +1,9 @@
 ---
-project.description: The Akka Cluster node membership service, manages dynamic member states and lifecycle with no external infrastructure needed.
+project.description: The Pekko Cluster node membership service, manages dynamic member states and lifecycle with no external infrastructure needed.
 ---
 # Cluster Membership Service
 
-The core of Akka Cluster is the cluster membership, to keep track of what nodes are part of the cluster and
+The core of Pekko Cluster is the cluster membership, to keep track of what nodes are part of the cluster and
 their health. Cluster membership is communicated using @ref:[gossip](cluster-concepts.md#gossip) and
 @ref:[failure detection](cluster-concepts.md#failure-detector).
 
@@ -13,14 +13,14 @@ on top of the cluster membership service.
 ## Introduction
 
 A cluster is made up of a set of member nodes. The identifier for each node is a
-`hostname:port:uid` tuple. An Akka application can be distributed over a cluster with
+`hostname:port:uid` tuple. A Pekko application can be distributed over a cluster with
 each node hosting some part of the application. Cluster membership and the actors running
 on that node of the application are decoupled. A node could be a member of a
 cluster without hosting any actors. Joining a cluster is initiated
 by issuing a `Join` command to one of the nodes in the cluster to join.
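
Programmatically, joining looks roughly like this with the typed Cluster extension (a sketch; in production the address to join usually comes from configured seed nodes or Cluster Bootstrap):

```scala
import org.apache.pekko.actor.typed.ActorSystem
import org.apache.pekko.cluster.typed.{ Cluster, Join }

def joinSelf(system: ActorSystem[_]): Unit = {
  val cluster = Cluster(system)
  // join ourselves to form a single-node cluster
  cluster.manager ! Join(cluster.selfMember.address)
}
```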
 
 The node identifier internally also contains a UID that uniquely identifies this
-actor system instance at that `hostname:port`. Akka uses the UID to be able to
+actor system instance at that `hostname:port`. Pekko uses the UID to be able to
 reliably trigger remote death watch. This means that the same actor system can never
 join a cluster again once it's been removed from that cluster. To re-join an actor
 system with the same `hostname:port` to a cluster you have to stop the actor system
@@ -136,7 +136,7 @@ In some rare cases it may be desirable to do a full cluster shutdown rather than
 For example, a protocol change where it is simpler to restart the cluster than to make the protocol change
 backward compatible.
 
-As of Akka `2.6.13` it can be signalled that a full cluster shutdown is about to happen and any expensive actions such as:
+It can be signalled that a full cluster shutdown is about to happen and any expensive actions such as:
 
 * Cluster sharding rebalances
 * Moving of Cluster singletons
diff --git a/docs/src/main/paradox/typed/cluster-sharded-daemon-process.md b/docs/src/main/paradox/typed/cluster-sharded-daemon-process.md
index 3d08b449cb..dcc31f29ce 100644
--- a/docs/src/main/paradox/typed/cluster-sharded-daemon-process.md
+++ b/docs/src/main/paradox/typed/cluster-sharded-daemon-process.md
@@ -2,7 +2,7 @@
 
 ## Module info
 
-To use Akka Sharded Daemon Process, you must add the following dependency in your project:
+To use Pekko Sharded Daemon Process, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
diff --git a/docs/src/main/paradox/typed/cluster-sharding.md b/docs/src/main/paradox/typed/cluster-sharding.md
index 99f7ca4bad..6da71c8428 100644
--- a/docs/src/main/paradox/typed/cluster-sharding.md
+++ b/docs/src/main/paradox/typed/cluster-sharding.md
@@ -1,13 +1,13 @@
 ---
-project.description: Shard a clustered compute process across the network with locationally transparent message routing using Akka Cluster Sharding.
+project.description: Shard a clustered compute process across the network with locationally transparent message routing using Pekko Cluster Sharding.
 ---
 # Cluster Sharding
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Cluster Sharding](../cluster-sharding.md)
+You are viewing the documentation for the new actor APIs; to view the Pekko Classic documentation, see @ref:[Classic Cluster Sharding](../cluster-sharding.md)
 
 ## Module info
 
-To use Akka Cluster Sharding, you must add the following dependency in your project:
+To use Pekko Cluster Sharding, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
@@ -30,9 +30,6 @@ It could for example be actors representing Aggregate Roots in Domain-Driven Des
 Here we call these actors "entities". These actors typically have persistent (durable) state,
 but this feature is not limited to actors with persistent state.
 
-The [Introduction to Akka Cluster Sharding video](https://akka.io/blog/news/2019/12/16/akka-cluster-sharding-intro-video)
-is a good starting point for learning Cluster Sharding.
-
 Cluster sharding is typically used when you have many stateful actors that together consume
 more resources (e.g. memory) than fit on one machine. If you only have a few stateful actors
 it might be easier to run them on a @ref:[Cluster Singleton](cluster-singleton.md) node. 
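+
+As a rough sketch of the core API (assuming it keeps the shape it has in Akka; `Counter` is a hypothetical entity behavior and `system` the typed `ActorSystem`):
+
+```scala
+import org.apache.pekko.cluster.sharding.typed.scaladsl.{ ClusterSharding, Entity, EntityTypeKey }
+
+val TypeKey = EntityTypeKey[Counter.Command]("Counter")
+
+// register the entity type once per node; sharding decides on which node each entity lives
+ClusterSharding(system).init(Entity(TypeKey)(ctx => Counter(ctx.entityId)))
+
+// obtain a location-transparent reference and send to it as to any actor
+val counterOne = ClusterSharding(system).entityRefFor(TypeKey, "counter-1")
+counterOne ! Counter.Increment
+```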
@@ -123,7 +120,7 @@ representation but looked up on the deserializing side.
 When using sharding, entities can be moved to different nodes in the cluster. Persistence can be used to recover the state of
 an actor after it has moved.
 
-Akka Persistence is based on the single-writer principle, for a particular @apidoc[typed.PersistenceId] only one persistent actor
+Pekko Persistence is based on the single-writer principle: for a particular @apidoc[typed.PersistenceId] only one persistent actor
 instance should be active. If multiple instances were to persist events at the same time, the events would be
 interleaved and might not be interpreted correctly on replay. Cluster Sharding is typically used together with
 persistence to ensure that there is only one active entity for each `PersistenceId` (`entityId`).
@@ -189,9 +186,9 @@ round can be limited to make it progress slower since rebalancing too many shard
 result in additional load on the system. For example, causing many Event Sourced entities to be started
 at the same time.
 
-A new rebalance algorithm was included in Akka 2.6.10. It can reach optimal balance in a few rebalance rounds
+A new rebalance algorithm is included in Pekko. It can reach optimal balance in a few rebalance rounds
 (typically 1 or 2 rounds). For backwards compatibility the new algorithm is not enabled by default.
-The new algorithm is recommended and will become the default in future versions of Akka.
+The new algorithm is recommended and will become the default in future versions of Pekko.
 You enable the new algorithm by setting `rebalance-absolute-limit` > 0, for example:
 
 ```
@@ -209,7 +206,7 @@ in one rebalance round. The lower result of `rebalance-relative-limit` and `reba
 An alternative allocation strategy is the @apidoc[ExternalShardAllocationStrategy] which allows
 explicit control over where shards are allocated via the @apidoc[ExternalShardAllocation] extension.
 
-This can be used, for example, to match up Kafka Partition consumption with shard locations. The video [How to co-locate Kafka Partitions with Akka Cluster Shards](https://akka.io/blog/news/2020/03/18/akka-sharding-kafka-video) explains a setup for it. Alpakka Kafka provides [an extension for Akka Cluster Sharding](https://doc.akka.io/docs/alpakka-kafka/current/cluster-sharding.html).
+This can be used, for example, to match up Kafka Partition consumption with shard locations. Pekko Connector Kafka provides [an extension for Pekko Cluster Sharding](https://doc.akka.io/docs/alpakka-kafka/current/cluster-sharding.html).
 
 To use it set it as the allocation strategy on your @apidoc[typed.*.Entity]:
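+
+A rough sketch, assuming the API keeps the shape it has in Akka (`TypeKey`, `behavior` and `address` are hypothetical):
+
+```scala
+import org.apache.pekko.cluster.sharding.external.{ ExternalShardAllocation, ExternalShardAllocationStrategy }
+import org.apache.pekko.cluster.sharding.typed.scaladsl.{ ClusterSharding, Entity }
+
+// let an external client decide where shards live
+ClusterSharding(system).init(
+  Entity(TypeKey)(ctx => behavior(ctx.entityId))
+    .withAllocationStrategy(new ExternalShardAllocationStrategy(system, TypeKey.name)))
+
+// elsewhere, e.g. from a Kafka rebalance listener, pin a shard to a node
+ExternalShardAllocation(system).clientFor(TypeKey.name).updateShardLocation("shard-1", address)
+```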
 
@@ -235,7 +232,7 @@ support a greater number of shards.
 
 #### Example project for external allocation strategy
 
-@extref[Kafka to Cluster Sharding](samples:akka-samples-kafka-to-sharding)
+@extref[Kafka to Cluster Sharding](samples:pekko-samples-kafka-to-sharding)
 is an example project that can be downloaded, and with instructions of how to run, that demonstrates how to use
 external sharding to co-locate Kafka partition consumption with shards.
 
@@ -332,7 +329,7 @@ Automatic passivation strategies can limit the number of active entities. Limit-
 replacement policy to determine which active entities should be passivated when the active entity limit is exceeded.
 The configurable limit is for a whole shard region and is divided evenly among the active shards in each region.
 
-A recommended passivation strategy, which will become the new default passivation strategy in future versions of Akka
+A recommended passivation strategy, which will become the new default passivation strategy in future versions of Pekko
 Cluster Sharding, can be enabled with configuration:
 
 @@snip [passivation new default strategy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #passivation-new-default-strategy type=conf }
@@ -534,8 +531,8 @@ it moves between nodes.
 
 There are two options for the state store:
 
-* @ref:[Distributed Data Mode](#distributed-data-mode) - uses Akka @ref:[Distributed Data](distributed-data.md) (CRDTs) (the default)
-* @ref:[Persistence Mode](#persistence-mode) - (deprecated) uses Akka @ref:[Persistence](persistence.md) (Event Sourcing)
+* @ref:[Distributed Data Mode](#distributed-data-mode) - uses Pekko @ref:[Distributed Data](distributed-data.md) (CRDTs) (the default)
+* @ref:[Persistence Mode](#persistence-mode) - (deprecated) uses Pekko @ref:[Persistence](persistence.md) (Event Sourcing)
 
 @@include[cluster.md](../includes/cluster.md) { #sharding-persistence-mode-deprecated }
 
@@ -704,7 +701,7 @@ for more information about `min-nr-of-members`.
 
 ## Health check
 
-An [Akka Management compatible health check](https://doc.akka.io/docs/akka-management/current/healthchecks.html) is included that returns healthy once the local shard region
+A [Pekko Management compatible health check](https://doc.akka.io/docs/akka-management/current/healthchecks.html) is included that returns healthy once the local shard region
 has registered with the coordinator. This health check should be used in cases where you don't want to receive production traffic until the local shard region is ready to retrieve locations
 for shards. For shard regions that aren't critical and therefore should not block this node becoming ready, do not include them.
 
@@ -764,7 +761,7 @@ does not run on two nodes.
 Reasons for how this can happen:
 
 * Network partitions without an appropriate downing provider
-* Mistakes in the deployment process leading to two separate Akka Clusters
+* Mistakes in the deployment process leading to two separate Pekko Clusters
 * Timing issues between removing members from the Cluster on one side of a network partition and shutting them down on the other side
 
 A lease can be a final backup that means that each shard won't create child entity actors unless it has the lease. 
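+
+If the configuration keeps the shape it has in Akka, a lease is enabled with a single setting (the lease implementation path shown here is hypothetical):
+
+```
+pekko.cluster.sharding.use-lease = "pekko.coordination.lease.kubernetes"
+```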
@@ -780,7 +777,7 @@ be buffered in the `ShardRegion`. If the lease is lost after initialization the
 
 Removal of internal Cluster Sharding data is only relevant for "Persistent Mode".
 The Cluster Sharding `ShardCoordinator` stores locations of the shards.
-This data is safely be removed when restarting the whole Akka Cluster.
+This data can safely be removed when restarting the whole Pekko Cluster.
 Note that this does not include application data.
 
 There is a utility program @apidoc[cluster.sharding.RemoveInternalClusterShardingData$]
@@ -788,7 +785,7 @@ that removes this data.
 
 @@@ warning
 
-Never use this program while there are running Akka Cluster nodes that are
+Never use this program while there are running Pekko Cluster nodes that are
 using Cluster Sharding. Stop all Cluster nodes before using this program.
 
 @@@
@@ -806,16 +803,13 @@ java -classpath <jar files, including pekko-cluster-sharding>
     -2.3 entityType1 entityType2 entityType3
 ```
 
-The program is included in the `akka-cluster-sharding` jar file. It
+The program is included in the `pekko-cluster-sharding` jar file. It
 is easiest to run it with same classpath and configuration as your ordinary
 application. It can be run from sbt or Maven in similar way.
 
 Specify the entity type names (same as you use in the `init` method
 of `ClusterSharding`) as program arguments.
 
-If you specify `-2.3` as the first program argument it will also try
-to remove data that was stored by Cluster Sharding in Akka 2.3.x using
-different persistenceId.
 
 ## Configuration
 
@@ -835,8 +829,8 @@ as described in @ref:[Shard allocation](#shard-allocation).
 
 ## Example project
 
-@java[@extref[Sharding example project](samples:akka-samples-cluster-sharding-java)]
-@scala[@extref[Sharding example project](samples:akka-samples-cluster-sharding-scala)]
+@java[@extref[Sharding example project](samples:pekko-samples-cluster-sharding-java)]
+@scala[@extref[Sharding example project](samples:pekko-samples-cluster-sharding-scala)]
 is an example project that can be downloaded, and with instructions of how to run.
 
 This project contains a KillrWeather sample illustrating how to use Cluster Sharding.
diff --git a/docs/src/main/paradox/typed/cluster-singleton.md b/docs/src/main/paradox/typed/cluster-singleton.md
index 45d02c99ca..f5aed674ca 100644
--- a/docs/src/main/paradox/typed/cluster-singleton.md
+++ b/docs/src/main/paradox/typed/cluster-singleton.md
@@ -1,6 +1,6 @@
 # Cluster Singleton
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Cluster Singleton](../cluster-singleton.md).
+You are viewing the documentation for the new actor APIs, to view the Pekko Classic documentation, see @ref:[Classic Cluster Singleton](../cluster-singleton.md).
 
 ## Module info
 
@@ -168,7 +168,7 @@ A @ref[lease](../coordination.md) can be used as an additional safety measure to
 don't run at the same time. Reasons for how this can happen:
 
 * Network partitions without an appropriate downing provider
-* Mistakes in the deployment process leading to two separate Akka Clusters
+* Mistakes in the deployment process leading to two separate Pekko Clusters
 * Timing issues between removing members from the Cluster on one side of a network partition and shutting them down on the other side
 
 A lease can be a final backup that means that the singleton actor won't be created unless
diff --git a/docs/src/main/paradox/typed/cluster.md b/docs/src/main/paradox/typed/cluster.md
index aab8a12e02..d64dec151d 100644
--- a/docs/src/main/paradox/typed/cluster.md
+++ b/docs/src/main/paradox/typed/cluster.md
@@ -1,21 +1,20 @@
 ---
-project.description: Build distributed applications that scale across the network with Akka Cluster, a fault-tolerant decentralized peer-to-peer based cluster node membership service with no single point of failure.
+project.description: Build distributed applications that scale across the network with Pekko Cluster, a fault-tolerant decentralized peer-to-peer based cluster node membership service with no single point of failure.
 ---
 # Cluster Usage
   
-This document describes how to use Akka Cluster and the Cluster APIs. 
-The [Stateful or Stateless Applications: To Akka Cluster or not](https://akka.io/blog/news/2020/06/01/akka-cluster-motivation) video is a good starting point to understand the motivation to use Akka Cluster.
+This document describes how to use Pekko Cluster and the Cluster APIs. 
 
 For specific documentation topics see: 
 
-* @ref:[When and where to use Akka Cluster](choosing-cluster.md)
+* @ref:[When and where to use Pekko Cluster](choosing-cluster.md)
 * @ref:[Cluster Specification](cluster-concepts.md)
 * @ref:[Cluster Membership Service](cluster-membership.md)
 * @ref:[Higher level Cluster tools](#higher-level-cluster-tools)
 * @ref:[Rolling Updates](../additional/rolling-updates.md)
 * @ref:[Operating, Managing, Observability](../additional/operations.md)
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Cluster](../cluster-usage.md).
+You are viewing the documentation for the new actor APIs, to view the Pekko Classic documentation, see @ref:[Classic Cluster](../cluster-usage.md).
 
 You have to enable @ref:[serialization](../serialization.md)  to send messages between ActorSystems (nodes) in the Cluster.
 @ref:[Serialization with Jackson](../serialization-jackson.md) is a good choice in many cases, and our
@@ -23,7 +22,7 @@ recommendation if you don't have other preferences or constraints.
 
 ## Module info
 
-To use Akka Cluster add the following dependency in your project:
+To use Pekko Cluster add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
@@ -142,7 +141,7 @@ way as other nodes.
 #### Joining automatically to seed nodes with Cluster Bootstrap
 
 Automatic discovery of nodes for the joining process is available
-using the open source Akka Management project's module, 
+using the open source Pekko Management project's module, 
 @ref:[Cluster Bootstrap](../additional/operations.md#cluster-bootstrap).
 Please refer to its documentation for more details.
 
@@ -156,15 +155,15 @@ You can define the seed nodes in the @ref:[configuration](#configuration) file (
 
 ```
 pekko.cluster.seed-nodes = [
-  "akka://ClusterSystem@host1:2552",
-  "akka://ClusterSystem@host2:2552"]
+  "pekko://ClusterSystem@host1:2552",
+  "pekko://ClusterSystem@host2:2552"]
 ```
 
 This can also be defined as Java system properties when starting the JVM using the following syntax:
 
 ```
--Dpekko.cluster.seed-nodes.0=akka://ClusterSystem@host1:2552
--Dpekko.cluster.seed-nodes.1=akka://ClusterSystem@host2:2552
+-Dpekko.cluster.seed-nodes.0=pekko://ClusterSystem@host1:2552
+-Dpekko.cluster.seed-nodes.1=pekko://ClusterSystem@host2:2552
 ```
 
 
@@ -260,7 +259,7 @@ Graceful leaving offers faster hand off to peer nodes during node shutdown than
 The @ref:[Coordinated Shutdown](../coordinated-shutdown.md) will also run when the cluster node sees itself as
 `Exiting`, i.e. leaving from another node will trigger the shutdown process on the leaving node.
 Tasks for graceful leaving of cluster, including graceful shutdown of Cluster Singletons and
-Cluster Sharding, are added automatically when Akka Cluster is used. For example, running the shutdown
+Cluster Sharding, are added automatically when Pekko Cluster is used. For example, running the shutdown
 process will also trigger the graceful leaving if not already in progress.
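+
+A minimal sketch of programmatically initiating graceful leaving, assuming the typed Cluster API mirrors Akka's:
+
+```scala
+import org.apache.pekko.cluster.typed.{ Cluster, Leave }
+
+val cluster = Cluster(system)
+// ask the cluster to move this node to Exiting; Coordinated Shutdown
+// then runs on this node and executes the graceful-leaving tasks
+cluster.manager ! Leave(cluster.selfMember.address)
+```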
 
 Normally this is handled automatically, but in case of network failures during this process it may still
@@ -281,7 +280,7 @@ status of the unreachable member must be changed to `Down`. Changing status to `
 can be performed automatically or manually.
 
 We recommend that you enable the @ref:[Split Brain Resolver](../split-brain-resolver.md) that is part of the
-Akka Cluster module. You enable it with configuration:
+Pekko Cluster module. You enable it with configuration:
 
 ```
 pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
@@ -352,7 +351,7 @@ depending on you environment:
 
 ## How to test
 
-Akka comes with and uses several types of testing strategies:
+Pekko comes with and uses several types of testing strategies:
 
 * @ref:[Testing](testing.md)
 * @ref:[Multi Node Testing](../multi-node-testing.md)
@@ -361,7 +360,7 @@ Akka comes with and uses several types of testing strategies:
 ## Configuration
 
 There are several configuration properties for the cluster. Refer to the 
-@ref:[reference configuration](../general/configuration-reference.md#config-akka-cluster) for full
+@ref:[reference configuration](../general/configuration-reference.md#config-pekko-cluster) for full
 configuration descriptions, default values and options.
 
 ### How To Startup when a Cluster size is reached
@@ -427,8 +426,7 @@ pekko.cluster.configuration-compatibility-check.checkers {
 Configuration Compatibility Check is enabled by default, but can be disabled by setting `pekko.cluster.configuration-compatibility-check.enforce-on-join = off`. This is especially useful when performing rolling updates. Obviously this should only be done if a complete cluster shutdown isn't an option. A cluster with nodes with different configuration settings may lead to data loss or data corruption. 
 
 This setting should only be disabled on the joining nodes. The checks are always performed on both sides, and warnings are logged. In case of incompatibilities, it is the responsibility of the joining node to decide if the process should be interrupted or not.  
-
-If you are performing a rolling update on cluster using Akka 2.5.9 or prior (thus, not supporting this feature), the checks will not be performed because the running cluster has no means to verify the configuration sent by the joining node, nor to send back its own configuration.  
 
 @@@ 
 
@@ -457,8 +455,8 @@ See @ref:[Reliable Delivery](reliable-delivery.md)
 
 ## Example project
 
-@java[@extref[Cluster example project](samples:akka-samples-cluster-java)]
-@scala[@extref[Cluster example project](samples:akka-samples-cluster-scala)]
+@java[@extref[Cluster example project](samples:pekko-samples-cluster-java)]
+@scala[@extref[Cluster example project](samples:pekko-samples-cluster-scala)]
 is an example project that can be downloaded, and with instructions of how to run.
 
 This project contains samples illustrating different Cluster features, such as
diff --git a/docs/src/main/paradox/typed/coexisting.md b/docs/src/main/paradox/typed/coexisting.md
index 8c563d6fb8..0d5d105fd3 100644
--- a/docs/src/main/paradox/typed/coexisting.md
+++ b/docs/src/main/paradox/typed/coexisting.md
@@ -2,10 +2,10 @@
 
 ## Dependency
 
-To use Akka Actor Typed, you must add the following dependency in your project:
+To use Pekko Actor Typed, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -15,7 +15,7 @@ To use Akka Actor Typed, you must add the following dependency in your project:
 
 ## Introduction
 
-We believe Akka Typed will be adopted in existing systems gradually and therefore it's important to be able to use typed
+We believe Pekko Typed will be adopted in existing systems gradually and therefore it's important to be able to use typed
 and classic actors together, within the same `ActorSystem`. Also, we will not be able to integrate with all existing modules in one big bang release and that is another reason for why these two ways of writing actors must be able to coexist.
 
 There are two different `ActorSystem`s: @apidoc[actor.ActorSystem](actor.ActorSystem) and @apidoc[actor.typed.ActorSystem](typed.ActorSystem). 
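+
+A minimal sketch of bridging the two, assuming the adapter package mirrors Akka's (`behavior` is a hypothetical typed `Behavior`):
+
+```scala
+// adds spawn/toTyped/toClassic extension methods
+import org.apache.pekko.actor.typed.scaladsl.adapter._
+
+val classicSystem = org.apache.pekko.actor.ActorSystem("app")
+// spawn a typed actor as a child of the classic user guardian
+val typedRef = classicSystem.spawn(behavior, "typed-child")
+// and view the classic system through the typed API
+val typedSystem = classicSystem.toTyped
+```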
@@ -98,7 +98,7 @@ Scala
 Java
 :  @@snip [TypedWatchingClassicTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/TypedWatchingClassicTest.java) { #create }
 
-The above classic-typed difference is further elaborated in @ref:[the `ActorSystem` section](./from-classic.md#actorsystem) of "Learning Akka Typed from Classic". 
+The above classic-typed difference is further elaborated in @ref:[the `ActorSystem` section](./from-classic.md#actorsystem) of "Learning Pekko Typed from Classic". 
 
 ## Typed to classic
 
diff --git a/docs/src/main/paradox/typed/cqrs.md b/docs/src/main/paradox/typed/cqrs.md
index 8db34c563c..30d843c7df 100644
--- a/docs/src/main/paradox/typed/cqrs.md
+++ b/docs/src/main/paradox/typed/cqrs.md
@@ -1,6 +1,6 @@
 # CQRS
 
-@ref:[EventSourcedBehavior](persistence.md)s along with [Akka Projections](https://doc.akka.io/docs/akka-projection/current/)
-can be used to implement Command Query Responsibility Segregation (CQRS). The @extref[Microservices with Akka tutorial](platform-guide:microservices-tutorial/)
+@ref:[EventSourcedBehavior](persistence.md)s along with [Pekko Projections](https://doc.akka.io/docs/akka-projection/current/)
+can be used to implement Command Query Responsibility Segregation (CQRS). The @extref[Microservices with Pekko tutorial](platform-guide:microservices-tutorial/)
 explains how to use Event Sourcing and Projections together. For implementing CQRS using @ref:[DurableStateBehavior](durable-state/persistence.md), please take a look at the corresponding @ref:[CQRS](durable-state/cqrs.md) documentation.
  
diff --git a/docs/src/main/paradox/typed/dispatchers.md b/docs/src/main/paradox/typed/dispatchers.md
index f2c9a2df97..725576be62 100644
--- a/docs/src/main/paradox/typed/dispatchers.md
+++ b/docs/src/main/paradox/typed/dispatchers.md
@@ -1,27 +1,27 @@
 ---
-project.description: Akka dispatchers and how to choose the right ones.
+project.description: Pekko dispatchers and how to choose the right ones.
 ---
 # Dispatchers
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Dispatchers](../dispatchers.md).
+You are viewing the documentation for the new actor APIs, to view the Pekko Classic documentation, see @ref:[Classic Dispatchers](../dispatchers.md).
 
 ## Dependency
 
-Dispatchers are part of core Akka, which means that they are part of the `akka-actor` dependency. This
-page describes how to use dispatchers with `akka-actor-typed`, which has dependency:
+Dispatchers are part of core Pekko, which means that they are part of the `pekko-actor` dependency. This
+page describes how to use dispatchers with `pekko-actor-typed`, which has dependency:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor-typed_$scala.binary.version$"
+  artifact="pekko-actor-typed_$scala.binary.version$"
   version=PekkoVersion
 }
 
 ## Introduction 
 
-An Akka `MessageDispatcher` is what makes Akka Actors "tick", it is the engine of the machine so to speak.
+A Pekko `MessageDispatcher` is what makes Pekko Actors "tick"; it is the engine of the machine, so to speak.
 All `MessageDispatcher` implementations are also an @scala[`ExecutionContext`]@java[`Executor`], which means that they can be used
 to execute arbitrary code, for instance @scala[`Future`s]@java[`CompletableFuture`s].
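+
+For example, a dispatcher can be looked up and used as an execution context directly; a sketch assuming a `my-dispatcher` block is defined in `application.conf` (`compute()` is a stand-in):
+
+```scala
+import org.apache.pekko.actor.typed.DispatcherSelector
+import scala.concurrent.{ ExecutionContext, Future }
+
+implicit val ec: ExecutionContext =
+  system.dispatchers.lookup(DispatcherSelector.fromConfig("my-dispatcher"))
+
+// runs on my-dispatcher rather than the calling thread
+Future {
+  compute()
+}
+```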
 
@@ -34,7 +34,7 @@ gives excellent performance in most cases.
 
 ## Internal dispatcher
 
-To protect the internal Actors that are spawned by the various Akka modules, a separate internal dispatcher is used by default.
+To protect the internal Actors that are spawned by the various Pekko modules, a separate internal dispatcher is used by default.
 The internal dispatcher can be tuned in a fine-grained way with the setting `pekko.actor.internal-dispatcher`, it can also
 be replaced by another dispatcher by making `pekko.actor.internal-dispatcher` an @ref[alias](#dispatcher-aliases).
 
@@ -148,7 +148,7 @@ is typically that (network) I/O occurs under the covers.
 
 The [Managing Blocking in Akka video](https://akka.io/blog/news/2020/01/22/managing-blocking-video)
 explains why it is bad to block inside an actor, and how you can use custom dispatchers to manage
-blocking when you cannot avoid it.
+blocking when you cannot avoid it. The same principle applies to Pekko actors.
 
 ### Problem: Blocking on default dispatcher
 
@@ -174,7 +174,7 @@ Often when integrating with existing libraries or systems it is not possible to
 avoid blocking APIs. The following solution explains how to handle blocking
 operations properly.
 
-Note that the same hints apply to managing blocking operations anywhere in Akka,
+Note that the same hints apply to managing blocking operations anywhere in Pekko,
 including Streams, HTTP and other reactive libraries built on top of it.
 
 @@@
@@ -220,20 +220,11 @@ The orange portion of the thread shows that it is idle. Idle threads are fine -
 they're ready to accept new work. However, a large number of turquoise (blocked, or sleeping as in our example) threads
 leads to thread starvation.
 
-@@@ note
-
-If you own a Lightbend subscription you can use the commercial [Thread Starvation Detector](https://doc.akka.io/docs/akka-enhancements/current/starvation-detector.html)
-which will issue warning log statements if it detects any of your dispatchers suffering from starvation and other.
-It is a helpful first step to identify the problem is occurring in a production system,
-and then you can apply the proposed solutions as explained below.
-
-@@@
-
 ![dispatcher-behaviour-on-bad-code.png](../images/dispatcher-behaviour-on-bad-code.png)
 
 In the above example we put the code under load by sending hundreds of messages to blocking actors
 which causes threads of the default dispatcher to be blocked.
-The fork join pool based dispatcher in Akka then attempts to compensate for this blocking by adding more threads to the pool
+The fork join pool based dispatcher in Pekko then attempts to compensate for this blocking by adding more threads to the pool
 (`default-pekko.actor.default-dispatcher 18,19,20,...`).
 This however is not able to help if those too will immediately get blocked,
 and eventually the blocking operations will dominate the entire dispatcher.
@@ -318,7 +309,7 @@ they were still served on the default dispatcher.
 This is the recommended way of dealing with any kind of blocking in reactive
 applications.
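+
+In code, the pattern boils down to the following sketch, assuming a `my-blocking-dispatcher` thread-pool-executor block is defined in `application.conf` (`blockingCall()` is a stand-in for the blocking API):
+
+```scala
+import org.apache.pekko.actor.typed.DispatcherSelector
+import scala.concurrent.{ ExecutionContext, Future }
+
+val blockingEc: ExecutionContext =
+  system.dispatchers.lookup(DispatcherSelector.fromConfig("my-blocking-dispatcher"))
+
+// the blocking call is confined to the dedicated pool,
+// so the default dispatcher threads stay free for message processing
+Future {
+  blockingCall()
+}(blockingEc)
+```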
 
-For a similar discussion specifically about Akka HTTP, refer to @extref[Handling blocking operations in Akka HTTP](akka.http:handling-blocking-operations-in-akka-http-routes.html).
+For a similar discussion specifically about Pekko HTTP, refer to @extref[Handling blocking operations in Pekko HTTP](pekko.http:handling-blocking-operations-in-akka-http-routes.html).
 
 ### Available solutions to blocking operations
 
@@ -349,7 +340,7 @@ on which DBMS is deployed on what hardware.
 
 @@@ note
 
-Configuring thread pools is a task best delegated to Akka, configure
+Configuring thread pools is a task best delegated to Pekko; configure
 it in `application.conf` and instantiate through an
 @ref:[`ActorSystem`](#dispatcher-lookup)
 
diff --git a/docs/src/main/paradox/typed/distributed-data.md b/docs/src/main/paradox/typed/distributed-data.md
index df77608cce..57a1712814 100644
--- a/docs/src/main/paradox/typed/distributed-data.md
+++ b/docs/src/main/paradox/typed/distributed-data.md
@@ -1,13 +1,13 @@
 ---
-project.description: Share data between nodes and perform updates without coordination in an Akka Cluster using Conflict Free Replicated Data Types CRDT.
+project.description: Share data between nodes and perform updates without coordination in a Pekko Cluster using Conflict Free Replicated Data Types (CRDTs).
 ---
 # Distributed Data
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Distributed Data](../distributed-data.md).
+You are viewing the documentation for the new actor APIs, to view the Pekko Classic documentation, see @ref:[Classic Distributed Data](../distributed-data.md).
 
 ## Module info
 
-To use Akka Cluster Distributed Data, you must add the following dependency in your project:
+To use Pekko Cluster Distributed Data, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
@@ -22,8 +22,8 @@ To use Akka Cluster Distributed Data, you must add the following dependency in y
 
 ## Introduction
 
-*Akka Distributed Data* is useful when you need to share data between nodes in an
-Akka Cluster. The data is accessed with an actor providing a key-value store like API.
+*Pekko Distributed Data* is useful when you need to share data between nodes in an
+Pekko Cluster. The data is accessed with an actor providing a key-value store like API.
 The keys are unique identifiers with type information of the data values. The values
 are *Conflict Free Replicated Data Types* (CRDTs).
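+
+The key property of a CRDT is a merge function under which replicas converge; a small local illustration with the built-in `GCounter` (the two `SelfUniqueAddress` values stand in for two different nodes):
+
+```scala
+import org.apache.pekko.cluster.ddata.{ GCounter, SelfUniqueAddress }
+
+def converged(nodeA: SelfUniqueAddress, nodeB: SelfUniqueAddress): BigInt = {
+  val onA = GCounter.empty.increment(nodeA, 3) // replica on node A
+  val onB = GCounter.empty.increment(nodeB, 2) // replica on node B
+  // merge is commutative, associative and idempotent,
+  // so both nodes end up with the same value
+  onA.merge(onB).value // 5
+}
+```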
 
@@ -195,7 +195,7 @@ Subscribers will receive `Replicator.Deleted`.
 
 As deleted keys continue to be included in the stored data on each node as well as in gossip
 messages, a continuous series of updates and deletes of top-level entities will result in
-growing memory usage until an ActorSystem runs out of memory. To use Akka Distributed Data
+growing memory usage until an ActorSystem runs out of memory. To use Pekko Distributed Data
 where frequent adds and removes are required, you should use a fixed number of top-level data
 types that support both updates and removals, for example `ORMap` or `ORSet`.
 
@@ -336,7 +336,7 @@ one via the `DistributedData` extension.
 
 ## Replicated data types
 
-Akka contains a set of useful replicated data types and it is fully possible to implement custom replicated data types. 
+Pekko contains a set of useful replicated data types and it is fully possible to implement custom replicated data types. 
 
 The data types must be convergent (stateful) CRDTs and implement the @scala[`ReplicatedData` trait]@java[`AbstractReplicatedData` interface],
 i.e. they provide a monotonic merge function and the state changes always converge.
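+
+As a sketch of what a custom type can look like (a simplified variant of the `TwoPhaseSet` referred to below, assuming the `ReplicatedData` API mirrors Akka's):
+
+```scala
+import org.apache.pekko.cluster.ddata.{ GSet, ReplicatedData }
+
+// a two-phase set: an element, once removed, can never be added back
+final case class TwoPhaseSet(
+    adds: GSet[String] = GSet.empty[String],
+    removals: GSet[String] = GSet.empty[String]) extends ReplicatedData {
+  type T = TwoPhaseSet
+
+  def add(element: String): TwoPhaseSet = copy(adds = adds.add(element))
+  def remove(element: String): TwoPhaseSet = copy(removals = removals.add(element))
+  def elements: Set[String] = adds.elements.diff(removals.elements)
+
+  // monotonic merge: both underlying grow-only sets are merged
+  override def merge(that: TwoPhaseSet): TwoPhaseSet =
+    TwoPhaseSet(adds.merge(that.adds), removals.merge(that.removals))
+}
+```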
@@ -588,7 +588,7 @@ Implement the additional methods of @scala[`DeltaReplicatedData`]@java[`Abstract
  
 #### Serialization
 
-The data types must be serializable with an @ref:[Akka Serializer](../serialization.md).
+The data types must be serializable with an @ref:[Pekko Serializer](../serialization.md).
 It is highly recommended that you implement  efficient serialization with Protobuf or similar
 for your custom data types. The built in data types are marked with `ReplicatedDataSerialization`
 and serialized with `org.apache.pekko.cluster.ddata.protobuf.ReplicatedDataSerializer`.
@@ -637,7 +637,7 @@ serializer for those types. This can be done by declaring those as bytes fields
 
 and use the methods `otherMessageToProto` and `otherMessageFromBinary` that are provided
 by the `SerializationSupport` trait to serialize and deserialize the `GSet` instances. This
-works with any type that has a registered Akka serializer. This is how such an serializer would
+works with any type that has a registered Pekko serializer. This is how such a serializer would
 look like for the `TwoPhaseSet`:
 
 Scala
@@ -779,8 +779,8 @@ The `DistributedData` extension can be configured with the following properties:
 
 ## Example project
 
-@java[@extref[Distributed Data example project](samples:akka-samples-distributed-data-java)]
-@scala[@extref[Distributed Data example project](samples:akka-samples-distributed-data-scala)]
+@java[@extref[Distributed Data example project](samples:pekko-samples-distributed-data-java)]
+@scala[@extref[Distributed Data example project](samples:pekko-samples-distributed-data-scala)]
 is an example project that can be downloaded, and with instructions of how to run.
 
 This project contains several samples illustrating how to use Distributed Data.
diff --git a/docs/src/main/paradox/typed/distributed-pub-sub.md b/docs/src/main/paradox/typed/distributed-pub-sub.md
index a1f45fcd4f..7652b3738a 100644
--- a/docs/src/main/paradox/typed/distributed-pub-sub.md
+++ b/docs/src/main/paradox/typed/distributed-pub-sub.md
@@ -1,6 +1,6 @@
 # Distributed Publish Subscribe in Cluster
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Distributed Publish Subscribe](../distributed-pub-sub.md).
+You are viewing the documentation for the new actor APIs, to view the Pekko Classic documentation, see @ref:[Classic Distributed Publish Subscribe](../distributed-pub-sub.md).
 
 ## Module info
 
@@ -61,9 +61,9 @@ for the topic will not be sent to it.
 
 ## Delivery Guarantee
 
-As in @ref:[Message Delivery Reliability](../general/message-delivery-reliability.md) of Akka, message delivery guarantee in distributed pub sub modes is **at-most-once delivery**. In other words, messages can be lost over the wire. In addition to that the registry of nodes which have subscribers is eventually consistent
+As in @ref:[Message Delivery Reliability](../general/message-delivery-reliability.md) of Pekko, message delivery guarantee in distributed pub sub modes is **at-most-once delivery**. In other words, messages can be lost over the wire. In addition to that the registry of nodes which have subscribers is eventually consistent
 meaning that subscribing an actor on one node will have a short delay before it is known on other nodes and published to.
 
-If you are looking for at-least-once delivery guarantee, we recommend [Alpakka Kafka](https://doc.akka.io/docs/alpakka-kafka/current/).
+If you are looking for an at-least-once delivery guarantee, we recommend [Pekko Connector Kafka](https://doc.akka.io/docs/alpakka-kafka/current/).
 
 
diff --git a/docs/src/main/paradox/typed/durable-state/cqrs.md b/docs/src/main/paradox/typed/durable-state/cqrs.md
index 69b1d18767..9a872dda59 100644
--- a/docs/src/main/paradox/typed/durable-state/cqrs.md
+++ b/docs/src/main/paradox/typed/durable-state/cqrs.md
@@ -1,6 +1,6 @@
 # CQRS
 
-@ref:[DurableStateBehavior](persistence.md)s along with [Akka Projections](https://doc.akka.io/docs/akka-projection/current/)
+@ref:[DurableStateBehavior](persistence.md)s along with [Pekko Projections](https://doc.akka.io/docs/akka-projection/current/)
 can be used to implement Command Query Responsibility Segregation (CQRS). For implementing CQRS using @ref:[EventSourcedBehavior](../persistence.md), please take a look at the corresponding @ref:[CQRS](../cqrs.md) documentation.
 
  
diff --git a/docs/src/main/paradox/typed/durable-state/persistence.md b/docs/src/main/paradox/typed/durable-state/persistence.md
index 80eedd9fa8..406524ee71 100644
--- a/docs/src/main/paradox/typed/durable-state/persistence.md
+++ b/docs/src/main/paradox/typed/durable-state/persistence.md
@@ -1,11 +1,11 @@
 ---
-project.description: Durable State with Akka Persistence enables actors to persist its state for recovery on failure or when migrated within a cluster.
+project.description: Durable State with Pekko Persistence enables actors to persist their state for recovery on failure or when migrated within a cluster.
 ---
 # Durable State
 
 ## Module info
 
-To use Akka Persistence, add the module to your project:
+To use Pekko Persistence, add the module to your project:
 
 @@dependency[sbt,Maven,Gradle] {
   bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
@@ -26,15 +26,15 @@ You also have to select durable state store plugin, see @ref:[Persistence Plugin
 
 ## Introduction
 
-This model of Akka Persistence enables a stateful actor / entity to store the full state after processing each command instead of using event sourcing. This reduces the conceptual complexity and can be a handy tool for simple use cases. Very much like a CRUD based operation, the API is conceptually simple - a function from current state and incoming command to the next state which replaces the current state in the database. 
+This model of Pekko Persistence enables a stateful actor / entity to store the full state after processing each command instead of using event sourcing. This reduces the conceptual complexity and can be a handy tool for simple use cases. Very much like a CRUD based operation, the API is conceptually simple - a function from current state and incoming command to the next state which replaces the current state in the database. 
 
 ```
 (State, Command) => State
 ```
 
-The current state is always stored in the database. Since only the latest state is stored, we don't have access to any of the history of changes, unlike event sourced storage. Akka Persistence would read that state and store it in memory. After processing of the command is finished, the new state will be stored in the database. The processing of the next command will not start until the state has been successfully stored in the database.
+The current state is always stored in the database. Since only the latest state is stored, we don't have access to any of the history of changes, unlike event sourced storage. Pekko Persistence would read that state and store it in memory. After processing of the command is finished, the new state will be stored in the database. The processing of the next command will not start until the state has been successfully stored in the database.
 
-Akka Persistence also supports @ref:[Event Sourcing](../persistence.md) based implementation, where only the _events_ that are persisted by the actor are stored, but not the actual state of the actor. By storing all events, using this model, 
+Pekko Persistence also supports an @ref:[Event Sourcing](../persistence.md) based implementation, where only the _events_ that are persisted by the actor are stored, but not the actual state of the actor. By storing all events, using this model, 
 a stateful actor can be recovered by replaying the stored events to the actor, which allows it to rebuild its state.
 
 Since each entity lives on one node, consistency is guaranteed and reads can be served directly from memory. For details on how this guarantee
@@ -42,7 +42,7 @@ is ensured, have a look at the @ref:[Cluster Sharding and DurableStateBehavior](
 
 ## Example and core API
 
-Let's start with a simple example that models a counter using an Akka persistent actor. The minimum required for a @apidoc[DurableStateBehavior] is:
+Let's start with a simple example that models a counter using a Pekko persistent actor. The minimum required for a @apidoc[DurableStateBehavior] is:
 
 Scala
 :  @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #structure }
@@ -51,7 +51,7 @@ Java
 :  @@snip [DurableStatePersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #structure }
 
 The first important thing to notice is the `Behavior` of a persistent actor is typed to the type of the `Command`
-because this is the type of message a persistent actor should receive. In Akka this is now enforced by the type system.
+because this is the type of message a persistent actor should receive. In Pekko this is now enforced by the type system.
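+
+As an inline illustration of the same structure, a minimal sketch (assuming the API mirrors Akka's `DurableStateBehavior`; names are illustrative):
+
+```scala
+import org.apache.pekko.actor.typed.Behavior
+import org.apache.pekko.persistence.typed.PersistenceId
+import org.apache.pekko.persistence.typed.state.scaladsl.{ DurableStateBehavior, Effect }
+
+object Counter {
+  sealed trait Command
+  case object Increment extends Command
+  final case class State(value: Int)
+
+  def apply(id: String): Behavior[Command] =
+    DurableStateBehavior[Command, State](
+      PersistenceId.ofUniqueId(id),
+      State(0),
+      (state, command) =>
+        command match {
+          // persist the *new state*, not an event
+          case Increment => Effect.persist(State(state.value + 1))
+        })
+}
+```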
 
 The components that make up a `DurableStateBehavior` are:
 
@@ -219,7 +219,7 @@ would fit in the memory of one node. Cluster sharding improves the resilience of
 the persistent actors are quickly started on a new node and can resume operations.
 
 The `DurableStateBehavior` can then be run as any plain actor as described in @ref:[actors documentation](../actors.md),
-but since Akka Persistence is based on the single-writer principle, the persistent actors are typically used together
+but since Pekko Persistence is based on the single-writer principle, the persistent actors are typically used together
 with Cluster Sharding. For a particular `persistenceId` only one persistent actor instance should be active at one time.
 Cluster Sharding ensures that there is only one active entity (or actor instance) for each id. 
 
@@ -244,7 +244,7 @@ persisted as an `Effect` by the `commandHandler`.
 The reason a new behavior can't be returned is that behavior is part of the actor's
 state and must also carefully be reconstructed during recovery from the persisted state. This would imply
 that the state needs to be encoded such that the behavior can also be restored from it. 
-That would be very prone to mistakes which is why it is not allowed in Akka Persistence.
+That would be very prone to mistakes which is why it is not allowed in Pekko Persistence.
 
 For basic actors you can use the same set of command handlers independent of what state the entity is in.
 For more complex actors it's useful to be able to change the behavior in the sense
diff --git a/docs/src/main/paradox/typed/extending.md b/docs/src/main/paradox/typed/extending.md
index 56f4ed0600..06cc2082ec 100644
--- a/docs/src/main/paradox/typed/extending.md
+++ b/docs/src/main/paradox/typed/extending.md
@@ -1,18 +1,18 @@
-# Extending Akka
+# Extending Pekko
 
-Akka extensions can be used for almost anything, they provide a way to create
+Pekko extensions can be used for almost anything; they provide a way to create
 an instance of a class only once for the whole ActorSystem and be able to access
-it from anywhere. Akka features such as Cluster, Serialization and Sharding are all
-Akka extensions. Below is the use-case of managing an expensive database connection 
+it from anywhere. Pekko features such as Cluster, Serialization and Sharding are all
+Pekko extensions. Below is the use-case of managing an expensive database connection 
 pool and accessing it from various places in your application.
 
 You can choose to have your Extension loaded on-demand or at `ActorSystem` creation 
-time through the Akka configuration.
+time through the Pekko configuration.
 Details on how to make that happen are below, in the @ref:[Loading from Configuration](extending.md#loading) section.
 
 @@@ warning
 
-Since an extension is a way to hook into Akka itself, the implementor of the extension needs to
+Since an extension is a way to hook into Pekko itself, the implementor of the extension needs to
 ensure the thread safety and that it is non-blocking.
 
 @@@
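+
+A minimal sketch of such an extension (the pool internals are left out; the typed `ExtensionId` API is assumed to mirror Akka's):
+
+```scala
+import org.apache.pekko.actor.typed.{ ActorSystem, Extension, ExtensionId }
+
+class DatabaseConnectionPool(system: ActorSystem[_]) extends Extension {
+  // create the expensive pool here; this runs only once per ActorSystem
+}
+
+object DatabaseConnectionPool extends ExtensionId[DatabaseConnectionPool] {
+  override def createExtension(system: ActorSystem[_]): DatabaseConnectionPool =
+    new DatabaseConnectionPool(system)
+}
+
+// looked up anywhere, always returning the same instance:
+// val pool = DatabaseConnectionPool(system)
+```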
@@ -59,7 +59,7 @@ The `DatabaseConnectionPool` can be looked up in this way any number of times an
 <a id="loading"></a>
 ## Loading from configuration
 
-To be able to load extensions from your Akka configuration you must add FQCNs of implementations of the `ExtensionId`
+To be able to load extensions from your Pekko configuration you must add FQCNs of implementations of the `ExtensionId`
 in the `pekko.actor.typed.extensions` section of the config you provide to your `ActorSystem`.
 
 Scala
diff --git a/docs/src/main/paradox/typed/fault-tolerance.md b/docs/src/main/paradox/typed/fault-tolerance.md
index 1b49eb143b..3108498314 100644
--- a/docs/src/main/paradox/typed/fault-tolerance.md
+++ b/docs/src/main/paradox/typed/fault-tolerance.md
@@ -1,6 +1,6 @@
 # Fault Tolerance
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Fault Tolerance](../fault-tolerance.md).
+You are viewing the documentation for the new actor APIs, to view the Pekko Classic documentation, see @ref:[Classic Fault Tolerance](../fault-tolerance.md).
 
 When an actor throws an unexpected exception, a failure, while processing a message or during initialization, the actor
 will by default be stopped.
@@ -28,7 +28,7 @@ with a fresh state that we know is valid.
 
 ## Supervision
 
-In Akka this "somewhere else" is called supervision. Supervision allows you to declaratively describe what should happen when certain types of exceptions are thrown inside an actor. 
+In Pekko this "somewhere else" is called supervision. Supervision allows you to declaratively describe what should happen when certain types of exceptions are thrown inside an actor. 
 
 The default @ref:[supervision](../general/supervision.md) strategy is to stop the actor if an exception is thrown. 
 In many cases you will want to further customize this behavior. To use supervision the actual Actor behavior is wrapped using @apidoc[Behaviors.supervise](typed.*.Behaviors$) {scala="#supervise[T](wrapped:org.apache.pekko.actor.typed.Behavior[T]):org.apache.pekko.actor.typed.scaladsl.Behaviors.Supervise[T]" java="#supervise(org.apache.pekko.actor.typed.Behavior)"}. 
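+
+A minimal sketch, where `behavior` stands for the actor's own `Behavior`:
+
+```scala
+import org.apache.pekko.actor.typed.SupervisorStrategy
+import org.apache.pekko.actor.typed.scaladsl.Behaviors
+
+// restart the actor on IllegalStateException instead of the default stop
+Behaviors
+  .supervise(behavior)
+  .onFailure[IllegalStateException](SupervisorStrategy.restart)
+```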
@@ -149,7 +149,7 @@ to cleanup resources.
 ## Bubble failures up through the hierarchy
 
 In some scenarios it may be useful to push the decision about what to do on a failure upwards in the Actor hierarchy
- and let the parent actor handle what should happen on failures (in classic Akka Actors this is how it works by default).
+ and let the parent actor handle what should happen on failures (in classic Pekko Actors this is how it works by default).
 
 For a parent to be notified when a child is terminated it has to @ref:[watch](actor-lifecycle.md#watching-actors) the
 child. If the child was stopped because of a failure the @apidoc[ChildFailed] signal will be received which will contain the
diff --git a/docs/src/main/paradox/typed/from-classic.md b/docs/src/main/paradox/typed/from-classic.md
index d07a9dc10f..0fd2f9bc79 100644
--- a/docs/src/main/paradox/typed/from-classic.md
+++ b/docs/src/main/paradox/typed/from-classic.md
@@ -1,21 +1,19 @@
-# Learning Akka Typed from Classic
+# Learning Pekko Typed from Classic
 
-Akka Classic is the original Actor APIs, which have been improved by more type safe and guided Actor APIs,
-known as Akka Typed.
+Pekko Classic refers to the original Actor APIs, which have been improved by the more type-safe and guided Actor APIs
+known as Pekko Typed.
 
-If you already know the classic Actor APIs and would like to learn Akka Typed, this reference is a good resource.
+If you already know the classic Actor APIs and would like to learn Pekko Typed, this reference is a good resource.
 Many concepts are the same and this page tries to highlight differences and how to do certain things
 in Typed compared to classic.
 
-You should probably learn some of the basics of Akka Typed to see how it looks like before diving into
+You should probably learn some of the basics of Pekko Typed to see what it looks like before diving into
 the differences and details described here. A good starting point for that is the
 @ref:[IoT example](guide/tutorial_3.md) in the Getting Started Guide or the examples shown in
 @ref:[Introduction to Actors](actors.md).
 
-Another good resource to learning Akka Typed is Manuel Bernhardt's [Tour of Akka Typed](https://manuel.bernhardt.io/articles/#akka-typed).
-
-Note that Akka Classic is still fully supported and existing applications can continue to use
-the classic APIs. It is also possible to use Akka Typed together with classic actors within the same
+Note that Pekko Classic is still fully supported and existing applications can continue to use
+the classic APIs. It is also possible to use Pekko Typed together with classic actors within the same
 ActorSystem, see @ref[coexistence](coexisting.md). For new projects we recommend using the new Actor APIs.
 
 ## Dependencies
@@ -23,10 +21,10 @@ ActorSystem, see @ref[coexistence](coexisting.md). For new projects we recommend
 The dependencies of the Typed modules are named by adding `-typed` suffix of the corresponding classic
 module, with a few exceptions.
 
-For example `akka-cluster-typed`:
+For example `pekko-cluster-typed`:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -36,26 +34,26 @@ For example `akka-cluster-typed`:
 
 Artifact names:
 
-| Classic               | Typed                       |
-|-----------------------|-----------------------------|
-| akka-actor            | akka-actor-typed            |
-| akka-cluster          | akka-cluster-typed          |
-| akka-cluster-sharding | akka-cluster-sharding-typed |
-| akka-cluster-tools    | akka-cluster-typed          |
-| akka-distributed-data | akka-cluster-typed          |
-| akka-persistence      | akka-persistence-typed      |
-| akka-stream           | akka-stream-typed           |
-| akka-testkit          | akka-actor-testkit-typed    |
+| Classic                | Typed                        |
+|------------------------|------------------------------|
+| pekko-actor            | pekko-actor-typed            |
+| pekko-cluster          | pekko-cluster-typed          |
+| pekko-cluster-sharding | pekko-cluster-sharding-typed |
+| pekko-cluster-tools    | pekko-cluster-typed          |
+| pekko-distributed-data | pekko-cluster-typed          |
+| pekko-persistence      | pekko-persistence-typed      |
+| pekko-stream           | pekko-stream-typed           |
+| pekko-testkit          | pekko-actor-testkit-typed    |
 
-Cluster Singleton and Distributed Data are included in `akka-cluster-typed`.
+Cluster Singleton and Distributed Data are included in `pekko-cluster-typed`.
 
-Artifacts not listed in above table don't have a specific API for Akka Typed.
+Artifacts not listed in the above table don't have a specific API for Pekko Typed.
 
 ## Package names
 
-The convention of the package names in Akka Typed is to add `typed.scaladsl` and `typed.javadsl` to the
-corresponding Akka classic package name. `scaladsl` and `javadsl` is the convention to separate Scala and Java
-APIs, which is familiar from Akka Streams.
+The convention of the package names in Pekko Typed is to add `typed.scaladsl` and `typed.javadsl` to the
+corresponding Pekko classic package name. `scaladsl` and `javadsl` is the convention to separate Scala and Java
+APIs, which is familiar from Pekko Streams.
 
 Examples of a few package names:
 
@@ -154,7 +152,7 @@ when creating an `ActorSystem` in Typed you give it a `Behavior` that will be us
 as the user guardian.
 
 Additional actors for an application are created from the user guardian alongside performing the initialization
-of Akka components such as Cluster Sharding. In contrast, in a classic `ActorSystem`, such initialization is
+of Pekko components such as Cluster Sharding. In contrast, in a classic `ActorSystem`, such initialization is
 typically performed from the "outside".
 
 The `actorOf` method of the classic `ActorSystem` is typically used to create a few (or many) top level actors. The
@@ -364,7 +362,7 @@ Links to reference documentation:
 ## FSM
 
 With classic actors there is explicit support for building Finite State Machines. No support is needed in
-Akka Typed as it is straightforward to represent FSMs with behaviors.
+Pekko Typed as it is straightforward to represent FSMs with behaviors.
 
 Links to reference documentation:
 
diff --git a/docs/src/main/paradox/typed/fsm.md b/docs/src/main/paradox/typed/fsm.md
index a1ebcb2644..8a2950f0c3 100644
--- a/docs/src/main/paradox/typed/fsm.md
+++ b/docs/src/main/paradox/typed/fsm.md
@@ -1,9 +1,9 @@
 ---
-project.description: Finite State Machines (FSM) with Akka Actors.
+project.description: Finite State Machines (FSM) with Pekko Actors.
 ---
 # Behaviors as finite state machines
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic FSM](../fsm.md).
+You are viewing the documentation for the new actor APIs, to view the Pekko Classic documentation, see @ref:[Classic FSM](../fsm.md).
 
 An actor can be used to model a Finite State Machine (FSM).
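+
+A minimal sketch of states as behaviors (names are illustrative):
+
+```scala
+import org.apache.pekko.actor.typed.Behavior
+import org.apache.pekko.actor.typed.scaladsl.Behaviors
+
+object Switch {
+  sealed trait Command
+  case object Toggle extends Command
+
+  // each state is just a Behavior; a transition returns the next state
+  def off: Behavior[Command] = Behaviors.receiveMessage { case Toggle => on }
+  def on: Behavior[Command] = Behaviors.receiveMessage { case Toggle => off }
+}
+```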
 
@@ -59,8 +59,8 @@ To set state timeouts use `Behaviors.withTimers` along with a `startSingleTimer`
 
 ## Example project
 
-@java[@extref[FSM example project](samples:akka-samples-fsm-java)]
-@scala[@extref[FSM example project](samples:akka-samples-fsm-scala)]
+@java[@extref[FSM example project](samples:pekko-samples-fsm-java)]
+@scala[@extref[FSM example project](samples:pekko-samples-fsm-scala)]
 is an example project that can be downloaded, and with instructions of how to run.
 
 This project contains a Dining Hakkers sample illustrating how to model a Finite State Machine (FSM) with actors.
diff --git a/docs/src/main/paradox/typed/guide/actors-intro.md b/docs/src/main/paradox/typed/guide/actors-intro.md
index 12540fc173..f837c605ec 100644
--- a/docs/src/main/paradox/typed/guide/actors-intro.md
+++ b/docs/src/main/paradox/typed/guide/actors-intro.md
@@ -85,10 +85,10 @@ error situations differently. There are two kinds of errors we need to consider:
    validation issue, like a non-existent user ID). In this case, the service encapsulated by the target actor is intact,
    it is only the task itself that is erroneous.
    The service actor should reply to the sender with a message, presenting the error case. There is nothing special here, errors are part of the domain and hence become ordinary messages.
- * The second case is when a service itself encounters an internal fault. Akka enforces that all actors are organized
+ * The second case is when a service itself encounters an internal fault. Pekko enforces that all actors are organized
    into a tree-like hierarchy, i.e. an actor that creates another actor becomes the parent of that new actor. This is very similar to how operating systems organize processes into a tree. Just like with processes, when an actor fails,
    its parent actor can decide how to react to the failure. Also, if the parent actor is stopped,
-   all of its children are recursively stopped, too. This service is called supervision and it is central to Akka.
+   all of its children are recursively stopped, too. This service is called supervision and it is central to Pekko.
 
 A supervisor strategy is typically defined by the parent actor when it is starting a child actor. It can decide
 to restart the child actor on certain types of failures or stop it completely on others. Children never go silently
@@ -96,4 +96,4 @@ dead (with the notable exception of entering an infinite loop) instead they are
 strategy can react to the fault, or they are stopped (in which case interested parties are notified).
 There is always a responsible entity for managing an actor: its parent. Restarts are not visible from the outside: collaborating actors can keep sending messages while the target actor restarts.
 
-Now, let's take a short tour of the functionality Akka provides.
+Now, let's take a short tour of the functionality Pekko provides.
diff --git a/docs/src/main/paradox/typed/guide/introduction.md b/docs/src/main/paradox/typed/guide/introduction.md
index 86576d8271..271af0bea7 100644
--- a/docs/src/main/paradox/typed/guide/introduction.md
+++ b/docs/src/main/paradox/typed/guide/introduction.md
@@ -1,6 +1,6 @@
-# Introduction to Akka
+# Introduction to Pekko
 
-Welcome to Akka, a set of open-source libraries for designing scalable, resilient systems that span processor cores and networks. Akka allows you to focus on meeting business needs instead of writing low-level code to provide reliable behavior, fault tolerance, and high performance.
+Welcome to Pekko, a set of open-source libraries for designing scalable, resilient systems that span processor cores and networks. Pekko allows you to focus on meeting business needs instead of writing low-level code to provide reliable behavior, fault tolerance, and high performance.
 
 Many common practices and accepted programming models do not address important challenges
 inherent in designing systems for modern computer architectures. To be
@@ -9,34 +9,34 @@ crash without responding, messages get lost without a trace on the wire, and
 network latency fluctuates. These problems occur regularly in carefully managed
 intra-datacenter environments - even more so in virtualized architectures.
 
-To help you deal with these realities, Akka provides:
+To help you deal with these realities, Pekko provides:
 
  * Multi-threaded behavior without the use of low-level concurrency constructs like
    atomics or locks &#8212; relieving you from even thinking about memory visibility issues.
  * Transparent remote communication between systems and their components &#8212; relieving you from writing and maintaining difficult networking code.
  * A clustered, high-availability architecture that is elastic, scales in or out, on demand &#8212; enabling you to deliver a truly reactive system.
 
-Akka's use of the actor model provides a level of abstraction that makes it
+Pekko's use of the actor model provides a level of abstraction that makes it
 easier to write correct concurrent, parallel and distributed systems. The actor
-model spans the full set of Akka libraries, providing you with a consistent way
-of understanding and using them. Thus, Akka offers a depth of integration that
+model spans the full set of Pekko libraries, providing you with a consistent way
+of understanding and using them. Thus, Pekko offers a depth of integration that
 you cannot achieve by picking libraries to solve individual problems and trying
 to piece them together.
 
-By learning Akka and how to use the actor model, you will gain access to a vast
+By learning Pekko and how to use the actor model, you will gain access to a vast
 and deep set of tools that solve difficult distributed/parallel systems problems
 in a uniform programming model where everything fits together tightly and
 efficiently.
 
 ## How to get started
 
-If this is your first experience with Akka, we recommend that you start by
+If this is your first experience with Pekko, we recommend that you start by
 running a simple Hello World project. See the @scala[[Quickstart Guide](https://developer.lightbend.com/guides/akka-quickstart-scala/)] @java[[Quickstart Guide](https://developer.lightbend.com/guides/akka-quickstart-java/)] for
 instructions on downloading and running the Hello World example. The *Quickstart* guide walks you through example code that introduces how to define actor systems, actors, and messages as well as how to use the test module and logging. Within 30 minutes, you should be able to run the Hello World example and learn how it is constructed.
 
-This *Getting Started* guide provides the next level of information. It covers why the actor model fits the needs of modern distributed systems and includes a tutorial that will help further your knowledge of Akka. Topics include:
+This *Getting Started* guide provides the next level of information. It covers why the actor model fits the needs of modern distributed systems and includes a tutorial that will help further your knowledge of Pekko. Topics include:
 
 * @ref:[Why modern systems need a new programming model](actors-motivation.md)
 * @ref:[How the actor model meets the needs of concurrent, distributed systems](actors-intro.md)
-* @ref:[Overview of Akka libraries and modules](modules.md)
-* A @ref:[more complex example](tutorial.md) that builds on the Hello World example to illustrate common Akka patterns.
+* @ref:[Overview of Pekko libraries and modules](modules.md)
+* A @ref:[more complex example](tutorial.md) that builds on the Hello World example to illustrate common Pekko patterns.
diff --git a/docs/src/main/paradox/typed/guide/modules.md b/docs/src/main/paradox/typed/guide/modules.md
index fb1cecb963..6707e24926 100644
--- a/docs/src/main/paradox/typed/guide/modules.md
+++ b/docs/src/main/paradox/typed/guide/modules.md
@@ -1,8 +1,8 @@
-# Overview of Akka libraries and modules
+# Overview of Pekko libraries and modules
 
-Before delving into some best practices for writing actors, it will be helpful to preview the most commonly used Akka libraries. This will help you start thinking about the functionality you want to use in your system. All core Akka functionality is available as Open Source Software (OSS). Lightbend sponsors Akka development but can also help you with [commercial offerings ](https://www.lightbend.com/lightbend-subscription) such as training, consulting, support, and [Enterprise capabilit [...]
+Before delving into some best practices for writing actors, it will be helpful to preview the most commonly used Pekko libraries. This will help you start thinking about the functionality you want to use in your system. All core Pekko functionality is available as Open Source Software (OSS).
 
-The following capabilities are included with Akka OSS and are introduced later on this page:
+The following capabilities are included with Pekko OSS and are introduced later on this page:
 
 * @ref:[Actor library](#actor-library)
 * @ref:[Remoting](#remoting)
@@ -13,30 +13,17 @@ The following capabilities are included with Akka OSS and are introduced later o
 * @ref:[Projections](#projections)
 * @ref:[Distributed Data](#distributed-data)
 * @ref:[Streams](#streams)
-* @ref:[Alpakka](#alpakka)
+* @ref:[Pekko Connectors](#pekko-connectors)
 * @ref:[HTTP](#http)
 * @ref:[gRPC](#grpc)
-* [Other Akka modules](https://doc.akka.io/docs/akka/current/common/other-modules.html)
+* [Other Pekko modules](https://doc.akka.io/docs/akka/current/common/other-modules.html)
 
-With a [Lightbend Platform Subscription](https://www.lightbend.com/lightbend-subscription), you can use [Akka Enhancements](https://doc.akka.io/docs/akka-enhancements/current/) that includes:
-
-[Akka Resilience Enhancements](https://doc.akka.io/docs/akka-enhancements/current/akka-resilience-enhancements.html):
-
-* [Configuration Checker](https://doc.akka.io/docs/akka-enhancements/current/config-checker.html) &#8212; Checks for potential configuration issues and logs suggestions.
-* [Diagnostics Recorder](https://doc.akka.io/docs/akka-enhancements/current/diagnostics-recorder.html) &#8212; Captures configuration and system information in a format that makes it easy to troubleshoot issues during development and production.
-* [Thread Starvation Detector](https://doc.akka.io/docs/akka-enhancements/current/starvation-detector.html) &#8212; Monitors an Akka system dispatcher and logs warnings if it becomes unresponsive.
-* [Fast Failover](https://doc.akka.io/docs/akka-enhancements/current/fast-failover.html) &#8212; Fast failover for Cluster Sharding.
-
-[Akka Persistence Enhancements](https://doc.akka.io/docs/akka-enhancements/current/akka-persistence-enhancements.html):
-
-* [GDPR for Akka Persistence](https://doc.akka.io/docs/akka-enhancements/current/gdpr/index.html) &#8212; Data shredding can be used to forget information in events.
-
-This page does not list all available modules, but overviews the main functionality and gives you an idea of the level of sophistication you can reach when you start building systems on top of Akka.
+This page does not list all available modules, but overviews the main functionality and gives you an idea of the level of sophistication you can reach when you start building systems on top of Pekko.
 
 ### Actor library
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -44,7 +31,7 @@ This page does not list all available modules, but overviews the main functional
   version=PekkoVersion
 }
 
-The core Akka library is `akka-actor-typed`, but actors are used across Akka libraries, providing a consistent, integrated model that relieves you from individually
+The core Pekko library is `pekko-actor-typed`, but actors are used across Pekko libraries, providing a consistent, integrated model that relieves you from individually
 solving the challenges that arise in concurrent or distributed system design. From a bird's-eye view,
 actors are a programming paradigm that takes encapsulation, one of the pillars of OOP, to its extreme.
 Unlike objects, actors encapsulate not only their
@@ -64,7 +51,7 @@ Challenges that actors solve include the following:
 ### Remoting
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -90,7 +77,7 @@ Challenges Remoting solves include the following:
 ### Cluster
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -116,7 +103,7 @@ Challenges the Cluster module solves include the following:
 ### Cluster Sharding
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -124,7 +111,7 @@ Challenges the Cluster module solves include the following:
   version=PekkoVersion
 }
 
-Sharding helps to solve the problem of distributing a set of actors among members of an Akka cluster.
+Sharding helps to solve the problem of distributing a set of actors among members of a Pekko cluster.
 Sharding is a pattern that is mostly used together with Persistence to balance a large set of persistent entities
 (backed by actors) across members of a cluster and to migrate them to other nodes when members crash or leave.
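
As a rough illustration of that pattern (a hedged sketch, not project code; the entity protocol and names below are invented, and the API is assumed to follow the typed Cluster Sharding `scaladsl`):

```scala
import org.apache.pekko.actor.typed.scaladsl.Behaviors
import org.apache.pekko.actor.typed.{ ActorSystem, Behavior }
import org.apache.pekko.cluster.sharding.typed.scaladsl.{ ClusterSharding, Entity, EntityTypeKey }

object ShardingSketch {
  // Hypothetical entity protocol, for illustration only
  sealed trait Command
  final case class Increment(amount: Int) extends Command

  private def counter(entityId: String, value: Int): Behavior[Command] =
    Behaviors.receiveMessage { case Increment(amount) =>
      counter(entityId, value + amount)
    }

  val CounterTypeKey: EntityTypeKey[Command] = EntityTypeKey[Command]("Counter")

  def initSharding(system: ActorSystem[_]): Unit = {
    ClusterSharding(system).init(Entity(CounterTypeKey) { entityContext =>
      counter(entityContext.entityId, 0)
    })
    // Entities are then addressed by id; the cluster decides which node hosts them:
    //   ClusterSharding(system).entityRefFor(CounterTypeKey, "counter-1") ! Increment(1)
  }
}
```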
 
@@ -138,7 +125,7 @@ Challenges that Sharding solves include the following:
 ### Cluster Singleton
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -162,7 +149,7 @@ The Singleton module can be used to solve these challenges:
 ### Persistence
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -187,7 +174,7 @@ Persistence tackles the following challenges:
 ### Projections
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -207,7 +194,7 @@ Challenges Projections solve include the following:
 ### Distributed Data
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -216,7 +203,7 @@ Challenges Projections solve include the following:
 }
 
 In situations where eventual consistency is acceptable, it is possible to share data between nodes in
-an Akka Cluster and accept both reads and writes even in the face of cluster partitions. This can be
+a Pekko Cluster and accept both reads and writes even in the face of cluster partitions. This can be
 achieved using [Conflict Free Replicated Data Types](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) (CRDTs), where writes on different nodes can
 happen concurrently and are merged in a predictable way afterward. The Distributed Data module
 provides infrastructure to share data and a number of useful data types.
@@ -229,7 +216,7 @@ Distributed Data is intended to solve the following challenges:
 ### Streams
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -253,25 +240,25 @@ Streams solve the following challenges:
 * How to connect asynchronous services in a flexible way to each other with high performance.
 * How to provide or consume Reactive Streams compliant interfaces to interface with a third party library.
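
To make the flavor of the API concrete, here is a minimal, self-contained Streams sketch (illustrative only; it assumes the typed `ActorSystem` can serve as the implicit materializer, as in the Akka 2.6 lineage this code base derives from):

```scala
import org.apache.pekko.actor.typed.ActorSystem
import org.apache.pekko.actor.typed.scaladsl.Behaviors
import org.apache.pekko.stream.scaladsl.Source

object StreamsSketch extends App {
  // The typed actor system provides the materializer for running streams
  implicit val system: ActorSystem[Nothing] = ActorSystem(Behaviors.empty, "streams-sketch")

  // A bounded source, transformed and consumed, with backpressure handled for us
  Source(1 to 100)
    .map(_ * 2)
    .filter(_ % 3 == 0)
    .runForeach(println)
    .onComplete(_ => system.terminate())(system.executionContext)
}
```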
 
-### Alpakka
+### Pekko Connectors
 
-[Alpakka](https://doc.akka.io/docs/alpakka/current/) is a separate module from Akka.
+[Pekko Connectors](https://doc.akka.io/docs/alpakka/current/) is a separate module from Pekko.
 
-Alpakka is collection of modules built upon the Streams API to provide Reactive Stream connector
+Pekko Connectors is a collection of modules built upon the Streams API to provide Reactive Stream connector
 implementations for a variety of technologies common in the cloud and infrastructure landscape.  
-See the [Alpakka overview page](https://doc.akka.io/docs/alpakka/current/overview.html) for more details on the API and the implementation modules available.
+See the [Pekko Connectors overview page](https://doc.akka.io/docs/alpakka/current/overview.html) for more details on the API and the implementation modules available.
 
-Alpakka helps solve the following challenges:
+Pekko Connectors helps solve the following challenges:
 
 * Connecting various infrastructure or persistence components to Stream based flows.
 * Connecting to legacy systems in a manner that adheres to a Reactive Streams API.
 
 ### HTTP
 
-[Akka HTTP](https://doc.akka.io/docs/akka-http/current/) is a separate module from Akka.
+[Pekko HTTP](https://doc.akka.io/docs/akka-http/current/) is a separate module from Pekko.
 
-The de facto standard for providing APIs remotely, internal or external, is [HTTP](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol). Akka provides a library to construct or consume such HTTP services by giving a set of tools to create HTTP services (and serve them) and a client that can be
-used to consume other services. These tools are particularly suited to streaming in and out a large set of data or real-time events by leveraging the underlying model of Akka Streams.
+The de facto standard for providing APIs remotely, internal or external, is [HTTP](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol). Pekko provides a library for building and consuming such HTTP services, with a set of tools to create and serve HTTP endpoints and a client that can be
+used to consume other services. These tools are particularly suited to streaming large data sets or real-time events in and out by leveraging the underlying model of Pekko Streams.
 
 Some of the challenges that HTTP tackles:
 
@@ -281,11 +268,11 @@ Some of the challenges that HTTP tackles:
 
 ### gRPC
 
-[Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/index.html) is a separate module from Akka.
+[Pekko gRPC](https://doc.akka.io/docs/akka-grpc/current/index.html) is a separate module from Pekko.
 
-This library provides an implementation of gRPC that integrates nicely with the @ref:[HTTP](#http) and @ref:[Streams](#streams) modules.  It is capable of generating both client and server-side artifacts from protobuf service definitions, which can then be exposed using Akka HTTP, and handled using Streams.
+This library provides an implementation of gRPC that integrates nicely with the @ref:[HTTP](#http) and @ref:[Streams](#streams) modules.  It is capable of generating both client and server-side artifacts from protobuf service definitions, which can then be exposed using Pekko HTTP, and handled using Streams.
 
-Some of the challenges that Akka gRPC tackles:
+Some of the challenges that Pekko gRPC tackles:
 
 * Exposing services with all the benefits of gRPC & protobuf:  
   * Schema-first contract
@@ -297,7 +284,7 @@ Some of the challenges that Akka gRPC tackles:
 
 ### Example of module use
 
-Akka modules integrate together seamlessly. For example, think of a large set of stateful business objects, such as documents or shopping carts, that website users access. If you model these as sharded entities, using Sharding and Persistence, they will be balanced across a cluster that you can scale out on-demand. They will be available during spikes that come from advertising campaigns or before holidays will be handled, even if some systems crash. You can also take the real-time strea [...]
+Pekko modules integrate together seamlessly. For example, think of a large set of stateful business objects, such as documents or shopping carts, that website users access. If you model these as sharded entities, using Sharding and Persistence, they will be balanced across a cluster that you can scale out on-demand. They will be available during spikes that come from advertising campaigns or before holidays will be handled, even if some systems crash. You can also take the real-time stre [...]
 operators and expose it as web socket connections served by a load balanced set of HTTP servers hosted by your cluster
 to power your real-time business analytics tool.
 
diff --git a/docs/src/main/paradox/typed/guide/tutorial.md b/docs/src/main/paradox/typed/guide/tutorial.md
index e2b0624eeb..d1d5f096d6 100644
--- a/docs/src/main/paradox/typed/guide/tutorial.md
+++ b/docs/src/main/paradox/typed/guide/tutorial.md
@@ -1,21 +1,21 @@
 # Introduction to the Example
 
 When writing prose, the hardest part is often composing the first few sentences. There is a similar "blank canvas" feeling
-when starting to build an Akka system. You might wonder: Which should be the first actor? Where should it live? What should it do?
-Fortunately &#8212; unlike with prose &#8212; established best practices can guide us through these initial steps. In the remainder of this guide, we examine the core logic of a simple Akka application to introduce you to actors and show you how to formulate solutions with them. The example demonstrates common patterns that will help you kickstart your Akka projects.
+when starting to build a Pekko system. You might wonder: Which should be the first actor? Where should it live? What should it do?
+Fortunately &#8212; unlike with prose &#8212; established best practices can guide us through these initial steps. In the remainder of this guide, we examine the core logic of a simple Pekko application to introduce you to actors and show you how to formulate solutions with them. The example demonstrates common patterns that will help you kickstart your Pekko projects.
 
 ## Prerequisites
-You should have already followed the instructions in the @scala[[Akka Quickstart with Scala guide](https://developer.lightbend.com/guides/akka-quickstart-scala/)] @java[[Akka Quickstart with Java guide](https://developer.lightbend.com/guides/akka-quickstart-java/)] to download and run the Hello World example. You will use this as a seed project and add the functionality described in this tutorial.
+You should have already followed the instructions in the @scala[[Pekko Quickstart with Scala guide](https://developer.lightbend.com/guides/akka-quickstart-scala/)] @java[[Pekko Quickstart with Java guide](https://developer.lightbend.com/guides/akka-quickstart-java/)] to download and run the Hello World example. You will use this as a seed project and add the functionality described in this tutorial.
 
 @@@ note
-Both the Java and Scala DSLs of Akka modules bundled in the same JAR. For a smooth development experience,
+Both the Java and Scala DSLs of Pekko modules are bundled in the same JAR. For a smooth development experience,
 when using an IDE such as Eclipse or IntelliJ, you can disable the auto-importer from suggesting `javadsl`
 imports when working in Scala, or vice versa. See @ref:[IDE Tips](../../additional/ide.md).
 @@@
 
 ## IoT example use case
 
-In this tutorial, we'll use Akka to build out part of an Internet of Things (IoT) system that reports data from sensor devices installed in customers' homes. The example focuses on temperature readings. The target use case allows customers to log in and view the last reported temperature from different areas of their homes. You can imagine that such sensors could also collect relative humidity or other interesting data and an application would likely support reading and changing device c [...]
+In this tutorial, we'll use Pekko to build out part of an Internet of Things (IoT) system that reports data from sensor devices installed in customers' homes. The example focuses on temperature readings. The target use case allows customers to log in and view the last reported temperature from different areas of their homes. You can imagine that such sensors could also collect relative humidity or other interesting data and an application would likely support reading and changing device  [...]
 
 In a real system, the application would be exposed to customers through a mobile app or browser. This guide concentrates only on the core logic for storing temperatures that would be called over a network protocol, such as HTTP. It also includes writing tests to help you get comfortable and proficient with testing actors.
 
diff --git a/docs/src/main/paradox/typed/guide/tutorial_1.md b/docs/src/main/paradox/typed/guide/tutorial_1.md
index 0a13760d21..b877e75901 100644
--- a/docs/src/main/paradox/typed/guide/tutorial_1.md
+++ b/docs/src/main/paradox/typed/guide/tutorial_1.md
@@ -5,21 +5,21 @@
 Add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group="org.apache.pekko"
-  artifact="akka-actor-typed_$scala.binary.version$"
+  artifact="pekko-actor-typed_$scala.binary.version$"
   version=PekkoVersion
 }
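
For orientation, the dependency block above corresponds roughly to the following raw sbt coordinates (a sketch; the version value is a placeholder rather than a specific Pekko release):

```scala
// build.sbt (sketch) -- substitute a real Pekko release for the placeholder
val PekkoVersion = "<pekko-version>"
libraryDependencies += "org.apache.pekko" %% "pekko-actor-typed" % PekkoVersion
```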
 
 ## Introduction
 
-Use of Akka relieves you from creating the infrastructure for an actor system and from writing the low-level code necessary to control basic behavior. To appreciate this, let's look at the relationships between actors you create in your code and those that Akka creates and manages for you internally, the actor lifecycle, and failure handling.
+Use of Pekko relieves you from creating the infrastructure for an actor system and from writing the low-level code necessary to control basic behavior. To appreciate this, let's look at the relationships between actors you create in your code and those that Pekko creates and manages for you internally, the actor lifecycle, and failure handling.
 
-## The Akka actor hierarchy
+## The Pekko actor hierarchy
 
-An actor in Akka always belongs to a parent. You create an actor by calling  @apidoc[ActorContext.spawn()](actor.typed.*.ActorContext) {scala="#spawn[U](behavior:org.apache.pekko.actor.typed.Behavior[U],name:String,props:org.apache.pekko.actor.typed.Props):org.apache.pekko.actor.typed.ActorRef[U]" java="#spawn(org.apache.pekko.actor.typed.Behavior,java.lang.String,org.apache.pekko.actor.typed.Props)"}. The creator actor becomes the
+An actor in Pekko always belongs to a parent. You create an actor by calling  @apidoc[ActorContext.spawn()](actor.typed.*.ActorContext) {scala="#spawn[U](behavior:org.apache.pekko.actor.typed.Behavior[U],name:String,props:org.apache.pekko.actor.typed.Props):org.apache.pekko.actor.typed.ActorRef[U]" java="#spawn(org.apache.pekko.actor.typed.Behavior,java.lang.String,org.apache.pekko.actor.typed.Props)"}. The creator actor becomes the
 _parent_ of the newly created _child_ actor. You might ask then, who is the parent of the _first_ actor you create?
 
 As illustrated below, all actors have a common parent, the user guardian, which is defined and created when you start the @apidoc[actor.typed.ActorSystem].
@@ -27,15 +27,15 @@ As we covered in the @scala[[Quickstart Guide](https://developer.lightbend.com/g
 
 ![actor tree diagram](diagrams/actor_top_tree.png)
 
-In fact, before your first actor is started, Akka has already created two actors in the system. The names of these built-in actors contain _guardian_. The guardian actors include:
+In fact, before your first actor is started, Pekko has already created two actors in the system. The names of these built-in actors contain _guardian_. The guardian actors include:
 
  - `/` the so-called _root guardian_. This is the parent of all actors in the system, and the last one to stop when the system itself is terminated.
- - `/system` the _system guardian_. Akka or other libraries built on top of Akka may create actors in the _system_ namespace.
+ - `/system` the _system guardian_. Pekko or other libraries built on top of Pekko may create actors in the _system_ namespace.
  - `/user` the _user guardian_. This is the top level actor that you provide to start all other actors in your application.
  
-The easiest way to see the actor hierarchy in action is to print @apidoc[actor.typed.ActorRef] instances. In this small experiment, we create an actor, print its reference, create a child of this actor, and print the child's reference. We start with the Hello World project, if you have not downloaded it, download the Quickstart project from the @scala[[Lightbend Tech Hub](https://developer.lightbend.com/start/?group=akka&amp;project=akka-quickstart-scala)]@java[[Lightbend Tech Hub](https [...]
+The easiest way to see the actor hierarchy in action is to print @apidoc[actor.typed.ActorRef] instances. In this small experiment, we create an actor, print its reference, create a child of this actor, and print the child's reference. 
 
-In your Hello World project, navigate to the `com.example` package and create @scala[a new Scala file called `ActorHierarchyExperiments.scala` here. Copy and paste the code from the snippet below to this new source file]@java[a Java file for each of the classes in the snippet below and copy the respective contents]. Save your @scala[file and run `sbt "runMain com.example.ActorHierarchyExperiments"`]@java[files and run `com.example.ActorHierarchyExperiments` from your build tool or IDE] t [...]
+In a new project, create a `com.example` package and add @scala[a new Scala file called `ActorHierarchyExperiments.scala` to it. Copy and paste the code from the snippet below into this new source file]@java[a Java file for each of the classes in the snippet below and copy the respective contents]. Save your @scala[file and run `sbt "runMain com.example.ActorHierarchyExperiments"`]@java[files and run `com.example.ActorHierarchyExperiments` from your build tool or IDE] to observe the output.
 
 Scala
 :   @@snip [ActorHierarchyExperiments.scala](/docs/src/test/scala/typed/tutorial_1/ActorHierarchyExperiments.scala) { #print-refs }
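
The referenced snippet is not reproduced in this diff; as a rough sketch of what `#print-refs` amounts to (illustrative names, not the project's exact code):

```scala
import org.apache.pekko.actor.typed.scaladsl.Behaviors
import org.apache.pekko.actor.typed.{ ActorSystem, Behavior }

object PrintMyActorRefActor {
  def apply(): Behavior[String] =
    Behaviors.receive { (context, message) =>
      message match {
        case "printit" =>
          // Spawning from inside an actor creates a child of that actor
          val secondRef = context.spawn(Behaviors.empty[String], "second-actor")
          println(s"Second: $secondRef")
          Behaviors.same
        case _ => Behaviors.same
      }
    }
}

object ActorHierarchyExperiments extends App {
  val testSystem = ActorSystem(
    Behaviors.setup[String] { context =>
      // Spawning from the guardian creates a top-level actor under /user
      val firstRef = context.spawn(PrintMyActorRefActor(), "first-actor")
      println(s"First: $firstRef")
      Behaviors.receiveMessage { message =>
        firstRef ! message
        Behaviors.same
      }
    },
    "testSystem")
  testSystem ! "printit"
}
```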
@@ -46,13 +46,13 @@ Java
 Note the way a message asked the first actor to do its work. We sent the message by using the parent's reference: @scala[`firstRef ! "printit"`]@java[`firstRef.tell("printit", ActorRef.noSender())`]. When the code executes, the output includes the references for the first actor and the child it created as part of the `printit` case. Your output should look similar to the following:
 
 ```
-First: Actor[akka://testSystem/user/first-actor#1053618476]
-Second: Actor[akka://testSystem/user/first-actor/second-actor#-1544706041]
+First: Actor[pekko://testSystem/user/first-actor#1053618476]
+Second: Actor[pekko://testSystem/user/first-actor/second-actor#-1544706041]
 ```
 
 Notice the structure of the references:
 
-* Both paths start with `akka://testSystem/`. Since all actor references are valid URLs, `akka://` is the value of the protocol field.
+* Both paths start with `pekko://testSystem/`. Since all actor references are valid URLs, `pekko://` is the value of the protocol field.
 * Next, just like on the World Wide Web, the URL identifies the system. In this example, the system is named `testSystem`, but it could be any other name. If remote communication between multiple systems is enabled, this part of the URL includes the hostname so other systems can find it on the network.
 * Because the second actor's reference includes the path `/first-actor/`, it identifies it as a child of the first.
 * The last part of the actor reference, `#1053618476` or `#-1544706041`  is a unique identifier that you can ignore in most cases.
@@ -68,7 +68,7 @@ This behavior greatly simplifies resource cleanup and helps avoid resource leaks
 
 To stop an actor, the recommended pattern is to return @apidoc[Behaviors.stopped](typed.*.Behaviors$) {scala="#stopped[T]:org.apache.pekko.actor.typed.Behavior[T]" java="#stopped()"} inside the actor to stop itself, usually as a response to some user defined stop message or when the actor is done with its job. Stopping a child actor is technically possible by calling @apidoc[context.stop(childRef)](actor.typed.*.ActorContext) {scala="#stop[U](child:org.apache.pekko.actor.typed.ActorRef[U [...]
 
-The Akka actor API exposes some lifecycle signals, for example @apidoc[actor.typed.PostStop] is sent just after the actor has been stopped. No messages are processed after this point.
+The Pekko actor API exposes some lifecycle signals, for example @apidoc[actor.typed.PostStop] is sent just after the actor has been stopped. No messages are processed after this point.
 
 Let's use the `PostStop` lifecycle signal in a simple experiment to observe the behavior when we stop an actor. First, add the following 2 actor classes to your project:
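
The two classes themselves live in snippet files elided from this diff; as a hedged stand-in, an actor that reacts to `PostStop` can be sketched like this (names are illustrative):

```scala
import org.apache.pekko.actor.typed.scaladsl.Behaviors
import org.apache.pekko.actor.typed.{ Behavior, PostStop }

object StartStopSketch {
  def apply(): Behavior[String] =
    Behaviors
      .receive[String] { (context, message) =>
        message match {
          case "stop" => Behaviors.stopped // returning stopped stops this actor
          case _      => Behaviors.same
        }
      }
      .receiveSignal {
        case (context, PostStop) =>
          context.log.info("actor stopped; no further messages will be processed")
          Behaviors.same
      }
}
```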
 
@@ -126,7 +126,7 @@ supervised actor started
 supervised actor fails now
 supervised actor will be restarted
 supervised actor started
-[ERROR] [11/12/2018 12:03:27.171] [ActorHierarchyExperiments-pekko.actor.default-dispatcher-2] [akka://ActorHierarchyExperiments/user/supervising-actor/supervised-actor] Supervisor pekko.actor.typed.internal.RestartSupervisor@1c452254 saw failure: I failed!
+[ERROR] [11/12/2018 12:03:27.171] [ActorHierarchyExperiments-pekko.actor.default-dispatcher-2] [pekko://ActorHierarchyExperiments/user/supervising-actor/supervised-actor] Supervisor pekko.actor.typed.internal.RestartSupervisor@1c452254 saw failure: I failed!
 java.lang.Exception: I failed!
 	at typed.tutorial_1.SupervisedActor.onMessage(ActorHierarchyExperiments.scala:113)
 	at typed.tutorial_1.SupervisedActor.onMessage(ActorHierarchyExperiments.scala:106)
@@ -146,5 +146,5 @@ For the impatient, we also recommend looking into the @ref:[fault tolerance refe
 details.
 
 # Summary
-We've learned about how Akka manages actors in hierarchies where parents supervise their children and handle exceptions. We saw how to create a very simple actor and child. Next, we'll apply this knowledge to our example use case by modeling the communication necessary to get information from device actors. Later, we'll deal with how to manage the actors in groups.
+We've learned about how Pekko manages actors in hierarchies where parents supervise their children and handle exceptions. We saw how to create a very simple actor and child. Next, we'll apply this knowledge to our example use case by modeling the communication necessary to get information from device actors. Later, we'll deal with how to manage the actors in groups.
 
diff --git a/docs/src/main/paradox/typed/guide/tutorial_2.md b/docs/src/main/paradox/typed/guide/tutorial_2.md
index 59a2430b9e..05df0cab02 100644
--- a/docs/src/main/paradox/typed/guide/tutorial_2.md
+++ b/docs/src/main/paradox/typed/guide/tutorial_2.md
@@ -17,7 +17,7 @@ Scala
 Java
 :   @@snip [IotSupervisor.java](/docs/src/test/java/jdocs/typed/tutorial_2/IotSupervisor.java) { #iot-supervisor }
 
-The code is similar to the actor examples we used in the previous experiments, but notice that instead of `println()` we use Akka's built in logging facility via @scala[@scaladoc[context.log](pekko.actor.typed.scaladsl.ActorContext#log:org.slf4j.Logger)]@java[@javadoc[context.getLog()](pekko.actor.typed.javadsl.ActorContext#getLog())].
+The code is similar to the actor examples we used in the previous experiments, but notice that instead of `println()` we use Pekko's built-in logging facility via @scala[@scaladoc[context.log](pekko.actor.typed.scaladsl.ActorContext#log:org.slf4j.Logger)]@java[@javadoc[context.getLog()](pekko.actor.typed.javadsl.ActorContext#getLog())].
 
 To provide the `main` entry point that creates the actor system, add the following code to the new @scala[`IotApp` object] @java[`IotMain` class].
 
diff --git a/docs/src/main/paradox/typed/guide/tutorial_3.md b/docs/src/main/paradox/typed/guide/tutorial_3.md
index cb5e388740..becc1a7eea 100644
--- a/docs/src/main/paradox/typed/guide/tutorial_3.md
+++ b/docs/src/main/paradox/typed/guide/tutorial_3.md
@@ -44,7 +44,7 @@ In addition, while sending inside the same JVM is significantly more reliable, i
 actor fails due to a programmer error while processing the message, the effect is the same as if a remote network request fails due to the remote host crashing while processing the message. Even though in both cases the service recovers after a while (the actor is restarted by its supervisor, the host is restarted by an operator or by a monitoring system), individual requests are lost during the crash. **Therefore, writing your actors such that every
 message could possibly be lost is the safe, pessimistic bet.**
 
-But to further understand the need for flexibility in the protocol, it will help to consider Akka message ordering and message delivery guarantees. Akka provides the following behavior for message sends:
+But to further understand the need for flexibility in the protocol, it will help to consider Pekko message ordering and message delivery guarantees. Pekko provides the following behavior for message sends:
 
  * At-most-once delivery, that is, no guaranteed delivery.
  * Message ordering is maintained per sender, receiver pair.
@@ -61,7 +61,7 @@ The delivery semantics provided by messaging subsystems typically fall into the
  * **At-least-once delivery** &#8212; potentially multiple attempts are made to deliver each message, until at least one succeeds; again, in more causal terms this means that messages can be duplicated but are never lost.
  * **Exactly-once delivery** &#8212; each message is delivered exactly once to the recipient; the message can neither be lost nor be duplicated.
 
-The first behavior, the one used by Akka, is the cheapest and results in the highest performance. It has the least implementation overhead because it can be done in a fire-and-forget fashion without keeping the state at the sending end or in the transport mechanism. The second, at-least-once, requires retries to counter transport losses. This adds the overhead of keeping the state at the sending end and having an acknowledgment mechanism at the receiving end. Exactly-once delivery is mos [...]
+The first behavior, the one used by Pekko, is the cheapest and results in the highest performance. It has the least implementation overhead because it can be done in a fire-and-forget fashion without keeping the state at the sending end or in the transport mechanism. The second, at-least-once, requires retries to counter transport losses. This adds the overhead of keeping the state at the sending end and having an acknowledgment mechanism at the receiving end. Exactly-once delivery is mo [...]
 duplicate deliveries.
 
 In an actor system, we need to determine the exact meaning of a guarantee &#8212; at which point does the system consider the delivery accomplished:
@@ -85,19 +85,19 @@ immediately after the API has been invoked any of the following can happen:
 
 This illustrates that the **guarantee of delivery** does not translate to the **domain level guarantee**. We only want to report success once the order has been actually fully processed and persisted. **The only entity that can report success is the application itself, since only it has any understanding of the domain guarantees required. No generalized framework can figure out the specifics of a particular domain and what is considered a success in that domain**.
 
-In this particular example, we only want to signal success after a successful database write, where the database acknowledged that the order is now safely stored. **For these reasons Akka lifts the responsibilities of guarantees to the application
-itself, i.e. you have to implement them yourself with the tools that Akka provides. This gives you full control of the guarantees that you want to provide**. Now, let's consider the message ordering that Akka provides to make it easy to reason about application logic.
+In this particular example, we only want to signal success after a successful database write, where the database acknowledged that the order is now safely stored. **For these reasons Pekko lifts the responsibilities of guarantees to the application
+itself, i.e. you have to implement them yourself with the tools that Pekko provides. This gives you full control of the guarantees that you want to provide**. Now, let's consider the message ordering that Pekko provides to make it easy to reason about application logic.
 
 ### Message Ordering
 
-In Akka, for a given pair of actors, messages sent directly from the first to the second will not be received out-of-order. The word directly emphasizes that this guarantee only applies when sending with the tell operator directly to the final destination, but not when employing mediators.
+In Pekko, for a given pair of actors, messages sent directly from the first to the second will not be received out-of-order. The word directly emphasizes that this guarantee only applies when sending with the tell operator directly to the final destination, but not when employing mediators.
 
 If:
 
  * Actor `A1` sends messages `M1`, `M2`, `M3` to `A2`.
  * Actor `A3` sends messages `M4`, `M5`, `M6` to `A2`.
 
-This means that, for Akka messages:
+This means that, for Pekko messages:
 
  * If `M1` is delivered it must be delivered before `M2` and `M3`.
  * If `M2` is delivered it must be delivered before `M3`.
@@ -139,7 +139,7 @@ Note in the code that:
 ## Testing the actor
 
 Based on the actor above, we could write a test. In the `com.example` package in the test tree of your project, add the following code to a @scala[`DeviceSpec.scala`]@java[`DeviceTest.java`] file.
-@scala[(We use ScalaTest but any other test framework can be used with the Akka Testkit)].
+@scala[(We use ScalaTest but any other test framework can be used with the Pekko Testkit)].
 
 You can run this test @scala[by running `test` at the sbt prompt]@java[by running `mvn test`].
 
@@ -161,7 +161,7 @@ Scala
 Java
 :   @@snip [Device.java](/docs/src/test/java/jdocs/typed/tutorial_3/inprogress3/Device.java) { #write-protocol-1 }
 
-However, this approach does not take into account that the sender of the record temperature message can never be sure if the message was processed or not. We have seen that Akka does not guarantee delivery of these messages and leaves it to the application to provide success notifications. In our case, we would like to send an acknowledgment to the sender once we have updated our last temperature recording, e.g. replying with a `TemperatureRecorded` message.
+However, this approach does not take into account that the sender of the record temperature message can never be sure if the message was processed or not. We have seen that Pekko does not guarantee delivery of these messages and leaves it to the application to provide success notifications. In our case, we would like to send an acknowledgment to the sender once we have updated our last temperature recording, e.g. replying with a `TemperatureRecorded` message.
 Just like in the case of temperature queries and responses, it is also a good idea to include an ID field to provide maximum flexibility.
 
 Scala
diff --git a/docs/src/main/paradox/typed/guide/tutorial_4.md b/docs/src/main/paradox/typed/guide/tutorial_4.md
index d46b2d0766..cd23975b78 100644
--- a/docs/src/main/paradox/typed/guide/tutorial_4.md
+++ b/docs/src/main/paradox/typed/guide/tutorial_4.md
@@ -13,7 +13,7 @@ Let's take a closer look at the main functionality required by our use case. In
 
 Steps 1 and 2 take place outside the boundaries of our tutorial system. In this chapter, we will start addressing steps 3-6 and create a way for sensors to register with our system and to communicate with actors. But first, we have another architectural decision &#8212; how many levels of actors should we use to represent device groups and device sensors?
 
-One of the main design challenges for Akka programmers is choosing the best granularity for actors. In practice, depending on the characteristics of the interactions between actors, there are usually several valid ways to organize a system. In our use case, for example, it would be possible to have a single actor maintain all the groups and devices  &#8212; perhaps using hash maps. It would also be reasonable to have an actor for each group that tracks the state of all devices in the same home.
+One of the main design challenges for Pekko programmers is choosing the best granularity for actors. In practice, depending on the characteristics of the interactions between actors, there are usually several valid ways to organize a system. In our use case, for example, it would be possible to have a single actor maintain all the groups and devices  &#8212; perhaps using hash maps. It would also be reasonable to have an actor for each group that tracks the state of all devices in the sa [...]
 
 The following guidelines help us choose the most appropriate actor hierarchy:
 
@@ -117,7 +117,7 @@ Java
 
 So far, we have implemented logic for registering device actors in the group. Devices come and go, however, so we will need a way to remove device actors from the @scala[`Map[String, ActorRef[DeviceMessage]]`]@java[`Map<String, ActorRef<DeviceMessage>>`]. We will assume that when a device is removed, its corresponding device actor is stopped. Supervision, as we discussed earlier, only handles error scenarios &#8212; not graceful stopping. So we need to notify the parent when one of the d [...]
 
-Akka provides a _Death Watch_ feature that allows an actor to _watch_ another actor and be notified if the other actor is stopped. Unlike supervision, watching is not limited to parent-child relationships, any actor can watch any other actor as long as it knows the @apidoc[typed.ActorRef]. After a watched actor stops, the watcher receives a @apidoc[Terminated(actorRef)](typed.Terminated) signal which also contains the reference to the watched actor. The watcher can either handle this mes [...]
+Pekko provides a _Death Watch_ feature that allows an actor to _watch_ another actor and be notified if the other actor is stopped. Unlike supervision, watching is not limited to parent-child relationships, any actor can watch any other actor as long as it knows the @apidoc[typed.ActorRef]. After a watched actor stops, the watcher receives a @apidoc[Terminated(actorRef)](typed.Terminated) signal which also contains the reference to the watched actor. The watcher can either handle this me [...]
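
A minimal sketch of watching (hypothetical names; the real device-group code follows later in the tutorial):

```scala
import org.apache.pekko.actor.typed.scaladsl.Behaviors
import org.apache.pekko.actor.typed.{ Behavior, Terminated }

// Illustrative only: watch a child and react when it stops
def watcher(): Behavior[String] =
  Behaviors.setup { context =>
    val device = context.spawn(Behaviors.receiveMessage[String](_ => Behaviors.stopped), "device")
    context.watch(device) // request a Terminated signal when `device` stops
    device ! "shutdown"   // any message stops this toy child

    Behaviors.receiveSignal {
      case (ctx, Terminated(ref)) =>
        ctx.log.info("Device actor {} has been terminated", ref.path.name)
        Behaviors.same
    }
  }
```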
 
 Our device group actor needs to include functionality that:
 
diff --git a/docs/src/main/paradox/typed/guide/tutorial_5.md b/docs/src/main/paradox/typed/guide/tutorial_5.md
index 5721709bfe..d5e8d6253b 100644
--- a/docs/src/main/paradox/typed/guide/tutorial_5.md
+++ b/docs/src/main/paradox/typed/guide/tutorial_5.md
@@ -60,7 +60,7 @@ First, we need to design the lifecycle of our query actor. This consists of iden
  * A deadline that indicates how long the query should wait for replies. Making this a parameter will simplify testing.
 
 #### Scheduling the query timeout
-Since we need a way to indicate how long we are willing to wait for responses, it is time to introduce a new Akka feature that we have
+Since we need a way to indicate how long we are willing to wait for responses, it is time to introduce a new Pekko feature that we have
 not used yet, the built-in scheduler facility. Using @apidoc[Behaviors.withTimers](typed.*.Behaviors$) {scala="#withTimers[T](factory:org.apache.pekko.actor.typed.scaladsl.TimerScheduler[T]=%3eorg.apache.pekko.actor.typed.Behavior[T]):org.apache.pekko.actor.typed.Behavior[T]" java="#withTimers(org.apache.pekko.japi.function.Function)"} and @apidoc[startSingleTimer](typed.*.TimerScheduler) {scala="#startSingleTimer(key:Any,msg:T,delay:scala.concurrent.duration.FiniteDuration):Unit" java=" [...]
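
In outline (a sketch with made-up message names, using the `withTimers`/`startSingleTimer` signatures cited above):

```scala
import scala.concurrent.duration.FiniteDuration
import org.apache.pekko.actor.typed.Behavior
import org.apache.pekko.actor.typed.scaladsl.Behaviors

sealed trait QueryCommand
case object CollectionTimeout extends QueryCommand
final case class ReplyReceived(deviceId: String, value: Double) extends QueryCommand

// Schedule a single timeout message to ourselves when the query starts
def queryBehavior(requestTimeout: FiniteDuration): Behavior[QueryCommand] =
  Behaviors.withTimers { timers =>
    timers.startSingleTimer(CollectionTimeout, CollectionTimeout, requestTimeout)
    Behaviors.receiveMessage {
      case ReplyReceived(deviceId, value) =>
        // record the reply; stay alive until all replies arrive or the timer fires
        Behaviors.same
      case CollectionTimeout =>
        // the deadline passed: report whatever was collected so far, then stop
        Behaviors.stopped
    }
  }
```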
 
 
@@ -226,15 +226,12 @@ In the context of the IoT system, this guide introduced the following concepts,
 
 ## What's Next?
 
-To continue your journey with Akka, we recommend:
+To continue your journey with Pekko, we recommend:
 
-* Start building your own applications with Akka, make sure you [get involved in our amazing community](https://akka.io/get-involved/) for help if you get stuck.
-* If you’d like some additional background, and detail, read the rest of the @ref:[reference documentation](../actors.md) and check out some of the @ref:[books and videos](../../additional/books.md) on Akka.
+* For some additional background and detail, read the rest of the @ref:[reference documentation](../actors.md) and check out some of the @ref:[books and videos](../../additional/books.md) on Pekko.
 * If you are interested in functional programming, read how actors can be defined in a @ref:[functional style](../actors.md#functional-style). In this guide the object-oriented style was used, but you can mix both as you like.
 
 To get from this guide to a complete application you would likely need to provide either a UI or an API. For this we recommend that you look at the following technologies and see what fits you:
 
- * @extref[Microservices with Akka tutorial](platform-guide:microservices-tutorial/) illustrates how to implement an Event Sourced CQRS application with Akka Persistence and Akka Projections 
- * [Akka HTTP](https://doc.akka.io/docs/akka-http/current/introduction.html) is a HTTP server and client library, making it possible to publish and consume HTTP endpoints
- * [Play Framework](https://www.playframework.com) is a full fledged web framework that is built on top of Akka HTTP, it integrates well with Akka and can be used to create a complete modern web UI
- * [Lagom](https://www.lagomframework.com) is an opinionated microservice framework built on top of Akka, encoding many best practices around Akka and Play
+ * @extref[Microservices with Pekko tutorial](platform-guide:microservices-tutorial/) illustrates how to implement an Event Sourced CQRS application with Pekko Persistence and Pekko Projections 
+ * [Pekko HTTP](https://doc.akka.io/docs/akka-http/current/introduction.html) is an HTTP server and client library, making it possible to publish and consume HTTP endpoints
\ No newline at end of file
diff --git a/docs/src/main/paradox/typed/index-cluster.md b/docs/src/main/paradox/typed/index-cluster.md
index 4712692cac..ed47a804c8 100644
--- a/docs/src/main/paradox/typed/index-cluster.md
+++ b/docs/src/main/paradox/typed/index-cluster.md
@@ -1,5 +1,5 @@
 ---
-project.description: Akka Cluster concepts, node membership service, CRDT Distributed Data, Cluster Singleton, Cluster Sharding, and Akka Cluster across multiple datacenters.
+project.description: Pekko Cluster concepts, node membership service, CRDT Distributed Data, Cluster Singleton, Cluster Sharding, and Pekko Cluster across multiple datacenters.
 ---
 # Cluster
 
diff --git a/docs/src/main/paradox/typed/index-persistence-durable-state.md b/docs/src/main/paradox/typed/index-persistence-durable-state.md
index ff05a88524..c9754a9062 100644
--- a/docs/src/main/paradox/typed/index-persistence-durable-state.md
+++ b/docs/src/main/paradox/typed/index-persistence-durable-state.md
@@ -1,5 +1,5 @@
 ---
-project.description: Durable state with Akka Persistence enables actors to persist the latest version of the state. This persistence is used for recovery on failure, or when migrating within a cluster.
+project.description: Durable state with Pekko Persistence enables actors to persist the latest version of the state. This persistence is used for recovery on failure, or when migrating within a cluster.
 ---
 
 # Persistence (Durable State)
diff --git a/docs/src/main/paradox/typed/index-persistence.md b/docs/src/main/paradox/typed/index-persistence.md
index 707dcf6e98..b2d09035fa 100644
--- a/docs/src/main/paradox/typed/index-persistence.md
+++ b/docs/src/main/paradox/typed/index-persistence.md
@@ -1,5 +1,5 @@
 ---
-project.description: Use of Akka Persistence with Event Sourcing enables actors to persist your events for recovery on failure or when migrated within a cluster.
+project.description: Use of Pekko Persistence with Event Sourcing enables actors to persist your events for recovery on failure or when migrated within a cluster.
 ---
 
 # Persistence (Event Sourcing)
diff --git a/docs/src/main/paradox/typed/index.md b/docs/src/main/paradox/typed/index.md
index 2370a487a0..0069e859e3 100644
--- a/docs/src/main/paradox/typed/index.md
+++ b/docs/src/main/paradox/typed/index.md
@@ -1,5 +1,5 @@
 ---
-project.description: Using Akka to build reliable multi-core applications distributed across a network that scale up and scale out.
+project.description: Using Pekko to build reliable multi-core applications distributed across a network that scale up and scale out.
 ---
 # Actors
 
diff --git a/docs/src/main/paradox/typed/interaction-patterns.md b/docs/src/main/paradox/typed/interaction-patterns.md
index 34eb1e74f0..acb733ee5c 100644
--- a/docs/src/main/paradox/typed/interaction-patterns.md
+++ b/docs/src/main/paradox/typed/interaction-patterns.md
@@ -1,13 +1,13 @@
 # Interaction Patterns
 
-You are viewing the documentation for the new actor APIs, to view the Akka Classic documentation, see @ref:[Classic Actors](../actors.md).
+You are viewing the documentation for the new actor APIs, to view the Pekko Classic documentation, see @ref:[Classic Actors](../actors.md).
 
 ## Dependency
 
-To use Akka Actor Typed, you must add the following dependency in your project:
+To use Pekko Actor Typed, you must add the following dependency in your project:
 
 @@dependency[sbt,Maven,Gradle] {
-  bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
   symbol1=PekkoVersion
   value1="$pekko.version$"
   group=org.apache.pekko
@@ -17,7 +17,7 @@ To use Akka Actor Typed, you must add the following dependency in your project:
 
 ## Introduction
 
-Interacting with an Actor in Akka is done through an @scala[@scaladoc[ActorRef[T]](pekko.actor.typed.ActorRef)]@java[@javadoc[ActorRef<T>](pekko.actor.typed.ActorRef)] where `T` is the type of messages the actor accepts, also known as the "protocol". This ensures that only the right kind of messages can be sent to an actor and also that no one else but the Actor itself can access the Actor instance internals.
+Interacting with an Actor in Pekko is done through an @scala[@scaladoc[ActorRef[T]](pekko.actor.typed.ActorRef)]@java[@javadoc[ActorRef<T>](pekko.actor.typed.ActorRef)] where `T` is the type of messages the actor accepts, also known as the "protocol". This ensures that only the right kind of messages can be sent to an actor and also that no one else but the Actor itself can access the Actor instance internals.
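
For example, a protocol might be modeled like this (a sketch with invented message names):

```scala
import org.apache.pekko.actor.typed.ActorRef

// The sealed trait is the actor's protocol: an ActorRef[GreeterCommand]
// statically accepts only these messages
sealed trait GreeterCommand
final case class Greet(whom: String, replyTo: ActorRef[Greeted]) extends GreeterCommand
final case class Greeted(whom: String)
```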
... 2007 lines suppressed ...


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pekko.apache.org
For additional commands, e-mail: commits-help@pekko.apache.org