Posted to commits@ignite.apache.org by sk...@apache.org on 2022/06/06 23:23:02 UTC

[ignite-3] branch main updated (091dfdd84 -> d56882b82)

This is an automated email from the ASF dual-hosted git repository.

sk0x50 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/ignite-3.git


    from 091dfdd84 IGNITE-14209 Data rebalance on partition replicas' number changes
     new 678ad5c41 Revert "IGNITE-14209 Data rebalance on partition replicas' number changes"
     new d56882b82 IGNITE-14209 Data rebalance on partition replicas' number changes

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:


[ignite-3] 01/02: Revert "IGNITE-14209 Data rebalance on partition replicas' number changes"

Posted by sk...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sk0x50 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/ignite-3.git

commit 678ad5c41cb43f95d5b4a73c6cd5b139742edd19
Author: Slava Koptilin <sl...@gmail.com>
AuthorDate: Tue Jun 7 02:08:28 2022 +0300

    Revert "IGNITE-14209 Data rebalance on partition replicas' number changes"
    
    This reverts commit 091dfdd8
---
 assembly/README.md                                 |   3 +
 docs/_docs/quick-start/getting-started-guide.adoc  |   5 +-
 docs/_docs/rebalance.adoc                          |   7 +
 examples/README.md                                 |   3 +-
 .../ignite/example/rebalance/RebalanceExample.java | 216 ++++++++
 .../src/main/java/org/apache/ignite/Ignite.java    |  24 +
 .../ignite/internal/client/TcpIgniteClient.java    |   7 +
 .../org/apache/ignite/client/fakes/FakeIgnite.java |   7 +
 .../ignite/client/fakes/FakeInternalTable.java     |   7 -
 .../ignite/internal/causality/VersionedValue.java  |   8 +-
 .../raft/client/service/RaftGroupService.java      |  28 --
 .../apache/ignite/raft/jraft/core/ItNodeTest.java  | 234 +--------
 .../java/org/apache/ignite/internal/raft/Loza.java | 151 +++---
 .../raft/server/RaftGroupEventsListener.java       |  68 ---
 .../ignite/internal/raft/server/RaftServer.java    |  12 -
 .../internal/raft/server/impl/JraftServerImpl.java |  38 --
 .../java/org/apache/ignite/raft/jraft/Node.java    |  11 -
 .../apache/ignite/raft/jraft/RaftMessageGroup.java |   6 -
 .../apache/ignite/raft/jraft/core/NodeImpl.java    |  75 +--
 .../ignite/raft/jraft/option/NodeOptions.java      |  13 -
 .../apache/ignite/raft/jraft/rpc/CliRequests.java  |  21 +-
 .../raft/jraft/rpc/impl/IgniteRpcServer.java       |   2 -
 .../raft/jraft/rpc/impl/RaftGroupServiceImpl.java  |  61 +--
 .../impl/cli/ChangePeersAsyncRequestProcessor.java |  93 ----
 .../org/apache/ignite/internal/raft/LozaTest.java  |   3 +-
 .../internal/raft/server/impl/RaftServerImpl.java  |   7 -
 .../apache/ignite/raft/jraft/core/TestCluster.java |  13 -
 .../cli/ChangePeersAsyncRequestProcessorTest.java  |  64 ---
 .../storage/ItRebalanceDistributedTest.java        | 544 ---------------------
 .../internal/runner/app/ItBaselineChangesTest.java | 174 +++++++
 .../runner/app/ItIgniteNodeRestartTest.java        |   1 -
 .../org/apache/ignite/internal/app/IgniteImpl.java |  12 +-
 .../sql/engine/exec/MockedStructuresTest.java      |  11 +-
 .../ignite/internal/table/InternalTable.java       |  10 -
 .../internal/table/distributed/TableManager.java   | 375 ++++++--------
 .../raft/RebalanceRaftGroupEventsListener.java     | 357 --------------
 .../distributed/storage/InternalTableImpl.java     |  15 -
 .../ignite/internal/utils/RebalanceUtil.java       | 172 -------
 .../ignite/internal/table/TableManagerTest.java    |  18 +-
 modules/table/tech-notes/rebalance.md              |   3 +-
 40 files changed, 716 insertions(+), 2163 deletions(-)

diff --git a/assembly/README.md b/assembly/README.md
index 489382c87..fea9c1a3f 100644
--- a/assembly/README.md
+++ b/assembly/README.md
@@ -42,6 +42,9 @@ The following examples are included:
 * `RecordViewExample` - demonstrates the usage of the `org.apache.ignite.table.RecordView` API
 * `KeyValueViewExample` - demonstrates the usage of the `org.apache.ignite.table.KeyValueView` API
 * `SqlJdbcExample` - demonstrates the usage of the Apache Ignite JDBC driver.
+* `RebalanceExample` - demonstrates the data rebalancing process.
+
+To run the `RebalanceExample`, refer to its JavaDoc for instructions.
 
 To run any other example, do the following:
 1. Import the examples project into your IDE.
diff --git a/docs/_docs/quick-start/getting-started-guide.adoc b/docs/_docs/quick-start/getting-started-guide.adoc
index 961954667..802714c79 100644
--- a/docs/_docs/quick-start/getting-started-guide.adoc
+++ b/docs/_docs/quick-start/getting-started-guide.adoc
@@ -190,8 +190,11 @@ The project includes the following examples:
 * `RecordViewExample` demonstrates the usage of the `org.apache.ignite.table.RecordView` API to create a table. It also shows how to get data from a table, or insert a line into a table.
 * `KeyValueViewExample` - demonstrates the usage of the `org.apache.ignite.table.KeyValueView` API to insert a line into a table.
 * `SqlJdbcExample` - demonstrates the usage of the Apache Ignite JDBC driver.
+* `RebalanceExample` - demonstrates the data rebalancing process.
 
-To run any example, perform the following steps:
+To run the `RebalanceExample`, refer to its link:https://github.com/apache/ignite-3/blob/3.0.0-alpha4/examples/src/main/java/org/apache/ignite/example/rebalance/RebalanceExample.java[JavaDoc,window=_blank] for instructions.
+
+To run any other example, perform the following steps:
 
 . Import the examples project into your IDE.
 
diff --git a/docs/_docs/rebalance.adoc b/docs/_docs/rebalance.adoc
index 9a897c5ae..54fe40cc7 100644
--- a/docs/_docs/rebalance.adoc
+++ b/docs/_docs/rebalance.adoc
@@ -18,3 +18,10 @@ When a new node joins the cluster, some of the partitions are relocated to the n
 If an existing node permanently leaves the cluster and backups are not configured, you lose the partitions stored on this node. When backups are configured, one of the backup copies of the lost partitions becomes a primary partition and the rebalancing process is initiated.
 
 WARNING: Data rebalancing is triggered by changes in the Baseline Topology. In pure in-memory clusters, the default behavior is to start rebalancing immediately when a node leaves or joins the cluster (the baseline topology changes automatically). In clusters with persistence, the baseline topology has to be changed manually (default behavior), or can be changed automatically when automatic baseline adjustment is enabled.
+
+== Running an Example
+
+Examples are shipped as a separate Maven project, which is located in the `examples` folder. `RebalanceExample` demonstrates the data rebalancing process.
+
+To start running `RebalanceExample`, please refer to its link:https://github.com/apache/ignite-3/blob/3.0.0-alpha3/examples/src/main/java/org/apache/ignite/example/rebalance/RebalanceExample.java[JavaDoc,window=_blank] for instructions.
+
diff --git a/examples/README.md b/examples/README.md
index 410cbf94e..890753737 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -9,9 +9,10 @@ The following examples are included:
 * `RecordViewExample` - demonstrates the usage of the `org.apache.ignite.table.RecordView` API
 * `KeyValueViewExample` - demonstrates the usage of the `org.apache.ignite.table.KeyValueView` API
 * `SqlJdbcExample` - demonstrates the usage of the Apache Ignite JDBC driver.
+* `RebalanceExample` - demonstrates the data rebalancing process.
 * `VolatilePageMemoryStorageExample` - demonstrates the usage of the PageMemory storage engine configured with an in-memory data region.
 * `PersistentPageMemoryStorageExample` - demonstrates the usage of the PageMemory storage engine configured with a persistent data region.
 
 Before running the examples, read about [cli](https://ignite.apache.org/docs/3.0.0-alpha/ignite-cli-tool).
 
-To run the examples, refer to their JavaDoc for instructions.
+To run the examples, refer to their JavaDoc for instructions.
\ No newline at end of file
diff --git a/examples/src/main/java/org/apache/ignite/example/rebalance/RebalanceExample.java b/examples/src/main/java/org/apache/ignite/example/rebalance/RebalanceExample.java
new file mode 100644
index 000000000..4a2db3991
--- /dev/null
+++ b/examples/src/main/java/org/apache/ignite/example/rebalance/RebalanceExample.java
@@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.example.rebalance;
+
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.Statement;
+import java.util.Set;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgnitionManager;
+import org.apache.ignite.client.IgniteClient;
+import org.apache.ignite.table.KeyValueView;
+import org.apache.ignite.table.Tuple;
+
+/**
+ * This example demonstrates the data rebalancing process.
+ *
+ * <p>The example emulates the basic scenario when one starts a three-node topology,
+ * inserts some data, and then scales out by adding two more nodes. After the topology is changed, the data is rebalanced and verified for
+ * correctness.
+ *
+ * <p>To run the example, do the following:
+ * <ol>
+ *     <li>Import the examples project into your IDE.</li>
+ *     <li>
+ *         Download and prepare artifacts for running an Ignite node using the CLI tool (if not done yet):<br>
+ *         {@code ignite bootstrap}
+ *     </li>
+ *     <li>
+ *         Start <b>two</b> nodes using the CLI tool:<br>
+ *         {@code ignite node start --config=$IGNITE_HOME/examples/config/ignite-config.json my-first-node}<br>
+ *         {@code ignite node start --config=$IGNITE_HOME/examples/config/ignite-config.json my-second-node}
+ *     </li>
+ *     <li>
+ *         Initialize the cluster using the CLI tool (if not done yet):<br>
+ *         {@code ignite cluster init --cluster-name=ignite-cluster --node-endpoint=localhost:10300 --meta-storage-node=my-first-node}
+ *     </li>
+ *     <li>Run the example in the IDE.</li>
+ *     <li>
+ *         When requested, start another <b>two</b> nodes using the CLI tool:<br>
+ *         {@code ignite node start --config=$IGNITE_HOME/examples/config/ignite-config.json my-first-additional-node}<br>
+ *         {@code ignite node start --config=$IGNITE_HOME/examples/config/ignite-config.json my-second-additional-node}
+ *     </li>
+ *     <li>Press {@code Enter} to resume the example.</li>
+ *     <li>
+ *         Stop <b>four</b> nodes using the CLI tool:<br>
+ *         {@code ignite node stop my-first-node}<br>
+ *         {@code ignite node stop my-second-node}<br>
+ *         {@code ignite node stop my-first-additional-node}<br>
+ *         {@code ignite node stop my-second-additional-node}
+ *     </li>
+ * </ol>
+ */
+public class RebalanceExample {
+    /**
+     * Main method of the example.
+     *
+     * @param args The command line arguments.
+     * @throws Exception If failed.
+     */
+    public static void main(String[] args) throws Exception {
+        //--------------------------------------------------------------------------------------
+        //
+        // Creating 'accounts' table.
+        //
+        //--------------------------------------------------------------------------------------
+
+        try (
+                Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800/");
+                Statement stmt = conn.createStatement()
+        ) {
+            stmt.executeUpdate(
+                    "CREATE TABLE rebalance ("
+                            + "key   INT PRIMARY KEY,"
+                            + "value VARCHAR)"
+            );
+        }
+
+        //--------------------------------------------------------------------------------------
+        //
+        // Creating a client to connect to the cluster.
+        //
+        //--------------------------------------------------------------------------------------
+
+        System.out.println("\nConnecting to server...");
+
+        try (IgniteClient client = IgniteClient.builder()
+                .addresses("127.0.0.1:10800")
+                .build()
+        ) {
+            KeyValueView<Tuple, Tuple> kvView = client.tables().table("PUBLIC.rebalance").keyValueView();
+
+            //--------------------------------------------------------------------------------------
+            //
+            // Inserting several key-value pairs into the table.
+            //
+            //--------------------------------------------------------------------------------------
+
+            System.out.println("\nInserting key-value pairs...");
+
+            for (int i = 0; i < 10; i++) {
+                Tuple key = Tuple.create().set("key", i);
+                Tuple value = Tuple.create().set("value", "test_" + i);
+
+                kvView.put(null, key, value);
+            }
+
+            //--------------------------------------------------------------------------------------
+            //
+            // Retrieving the newly inserted data.
+            //
+            //--------------------------------------------------------------------------------------
+
+            System.out.println("\nRetrieved key-value pairs:");
+
+            for (int i = 0; i < 10; i++) {
+                Tuple key = Tuple.create().set("key", i);
+                Tuple value = kvView.get(null, key);
+
+                System.out.println("    " + i + " -> " + value.stringValue("value"));
+            }
+
+            //--------------------------------------------------------------------------------------
+            //
+            // Scaling out by adding two more nodes into the topology.
+            //
+            //--------------------------------------------------------------------------------------
+
+            System.out.println("\n"
+                    + "Run the following commands using the CLI tool to start two more nodes, and then press 'Enter' to continue...\n"
+                    + "    ignite node start --config=examples/config/ignite-config.json my-first-additional-node\n"
+                    + "    ignite node start --config=examples/config/ignite-config.json my-second-additional-node");
+
+            System.in.read();
+
+            //--------------------------------------------------------------------------------------
+            //
+            // Updating baseline to initiate the data rebalancing process.
+            //
+            // New topology includes the following five nodes:
+            //     1. 'my-first-node' -- the first node started prior to running the example
+            //     2. 'my-second-node' -- the second node started prior to running the example
+            //     3. 'my-first-additional-node' -- the first node added to the topology
+            //     4. 'my-second-additional-node' -- the second node added to the topology
+            //     5. 'example-node' -- node that is embedded into the example
+            //
+            // NOTE: An embedded server node is started here for the sole purpose of setting
+            //       the baseline. In future releases, this API will be provided by the
+            //       clients as well. In addition, the process will be automated where applicable
+            //       to eliminate the need for this manual step.
+            //
+            //--------------------------------------------------------------------------------------
+
+            System.out.println("Starting a server node... Logging to file: example-node.log");
+
+            System.setProperty("java.util.logging.config.file", "config/java.util.logging.properties");
+
+            try (Ignite server = IgnitionManager.start(
+                    "example-node",
+                    Files.readString(Path.of("config", "ignite-config.json")),
+                    Path.of("work")
+            ).join()) {
+                System.out.println("\nUpdating the baseline and rebalancing the data...");
+
+                server.setBaseline(Set.of(
+                        "my-first-node",
+                        "my-second-node",
+                        "my-first-additional-node",
+                        "my-second-additional-node",
+                        "example-node"
+                ));
+
+                //--------------------------------------------------------------------------------------
+                //
+                // Retrieving data again to validate correctness.
+                //
+                //--------------------------------------------------------------------------------------
+
+                System.out.println("\nKey-value pairs retrieved after the topology change:");
+
+                for (int i = 0; i < 10; i++) {
+                    Tuple key = Tuple.create().set("key", i);
+                    Tuple value = kvView.get(null, key);
+
+                    System.out.println("    " + i + " -> " + value.stringValue("value"));
+                }
+            }
+        }
+
+        System.out.println("\nDropping the table...");
+
+        try (
+                Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800/");
+                Statement stmt = conn.createStatement()
+        ) {
+            stmt.executeUpdate("DROP TABLE rebalance");
+        }
+    }
+}
diff --git a/modules/api/src/main/java/org/apache/ignite/Ignite.java b/modules/api/src/main/java/org/apache/ignite/Ignite.java
index ad0a1434b..c530df5fc 100644
--- a/modules/api/src/main/java/org/apache/ignite/Ignite.java
+++ b/modules/api/src/main/java/org/apache/ignite/Ignite.java
@@ -18,13 +18,16 @@
 package org.apache.ignite;
 
 import java.util.Collection;
+import java.util.Set;
 import java.util.concurrent.CompletableFuture;
 import org.apache.ignite.compute.ComputeJob;
 import org.apache.ignite.compute.IgniteCompute;
+import org.apache.ignite.lang.IgniteException;
 import org.apache.ignite.network.ClusterNode;
 import org.apache.ignite.sql.IgniteSql;
 import org.apache.ignite.table.manager.IgniteTables;
 import org.apache.ignite.tx.IgniteTransactions;
+import org.jetbrains.annotations.ApiStatus.Experimental;
 
 /**
  * Ignite API entry point.
@@ -58,6 +61,27 @@ public interface Ignite extends AutoCloseable {
      */
     IgniteSql sql();
 
+    /**
+     * Sets new baseline nodes for table assignments.
+     *
+     * <p>The current implementation has significant restrictions:
+     * <ul>
+     *     <li>Only alive nodes can be part of the new baseline. If any of the passed nodes is not alive,
+     *     an {@link IgniteException} with an appropriate message is thrown.</li>
+     *     <li>The operation can potentially take a long time, and the current synchronous changePeers-based
+     *     implementation does not handle this well.</li>
+     *     <li>No recovery logic is supported: if setBaseline fails, the cluster can be left in an arbitrary state.</li>
+     * </ul>
+     * TODO: IGNITE-14209 The issues above must be fixed.
+     * TODO: IGNITE-15815 Add a test for stopping a node and an asynchronous implementation.
+     *
+     * @param baselineNodes Names of baseline nodes.
+     * @throws IgniteException If an unspecified platform exception has happened internally. It is thrown when:
+     *                         <ul>
+     *                             <li>the node is stopping,</li>
+     *                             <li>{@code baselineNodes} argument is empty or null,</li>
+     *                             <li>any node from {@code baselineNodes} is not alive.</li>
+     *                         </ul>
+     */
+    @Experimental
+    void setBaseline(Set<String> baselineNodes);
+
     /**
      * Returns {@link IgniteCompute} which can be used to execute compute jobs.
      *
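
A minimal usage sketch of the setBaseline API added above (not part of the patch): it assumes an embedded node started via IgnitionManager, exactly as in the RebalanceExample earlier in this diff, and that all listed node names refer to alive nodes; the names and paths are illustrative only.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Set;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgnitionManager;

    public class SetBaselineSketch {
        public static void main(String[] args) throws Exception {
            // Start an embedded node; the baseline can currently be changed only through
            // the embedded API (the thin client throws UnsupportedOperationException).
            try (Ignite server = IgnitionManager.start(
                    "example-node",
                    Files.readString(Path.of("config", "ignite-config.json")),
                    Path.of("work")
            ).join()) {
                // Only alive nodes may be listed; otherwise an IgniteException is thrown.
                server.setBaseline(Set.of("my-first-node", "my-second-node", "example-node"));
            }
        }
    }
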
diff --git a/modules/client/src/main/java/org/apache/ignite/internal/client/TcpIgniteClient.java b/modules/client/src/main/java/org/apache/ignite/internal/client/TcpIgniteClient.java
index c33746546..4a185b04b 100644
--- a/modules/client/src/main/java/org/apache/ignite/internal/client/TcpIgniteClient.java
+++ b/modules/client/src/main/java/org/apache/ignite/internal/client/TcpIgniteClient.java
@@ -22,6 +22,7 @@ import static org.apache.ignite.internal.client.ClientUtils.sync;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.List;
+import java.util.Set;
 import java.util.concurrent.CompletableFuture;
 import java.util.function.BiFunction;
 import org.apache.ignite.client.IgniteClient;
@@ -133,6 +134,12 @@ public class TcpIgniteClient implements IgniteClient {
         return sql;
     }
 
+    /** {@inheritDoc} */
+    @Override
+    public void setBaseline(Set<String> baselineNodes) {
+        throw new UnsupportedOperationException();
+    }
+
     /** {@inheritDoc} */
     @Override
     public IgniteCompute compute() {
diff --git a/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java b/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java
index 6341a0cf6..aa617b84d 100644
--- a/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java
+++ b/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java
@@ -18,6 +18,7 @@
 package org.apache.ignite.client.fakes;
 
 import java.util.Collection;
+import java.util.Set;
 import java.util.concurrent.CompletableFuture;
 import org.apache.ignite.Ignite;
 import org.apache.ignite.compute.IgniteCompute;
@@ -99,6 +100,12 @@ public class FakeIgnite implements Ignite {
         return new FakeIgniteSql();
     }
 
+    /** {@inheritDoc} */
+    @Override
+    public void setBaseline(Set<String> baselineNodes) {
+        throw new UnsupportedOperationException();
+    }
+
     /** {@inheritDoc} */
     @Override
     public IgniteCompute compute() {
diff --git a/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeInternalTable.java b/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeInternalTable.java
index 7da5e0bc1..77e7afad7 100644
--- a/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeInternalTable.java
+++ b/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeInternalTable.java
@@ -33,7 +33,6 @@ import org.apache.ignite.internal.table.InternalTable;
 import org.apache.ignite.internal.tx.InternalTransaction;
 import org.apache.ignite.lang.IgniteInternalException;
 import org.apache.ignite.network.ClusterNode;
-import org.apache.ignite.raft.client.service.RaftGroupService;
 import org.jetbrains.annotations.NotNull;
 import org.jetbrains.annotations.Nullable;
 
@@ -282,12 +281,6 @@ public class FakeInternalTable implements InternalTable {
         throw new IgniteInternalException(new OperationNotSupportedException());
     }
 
-    /** {@inheritDoc} */
-    @Override
-    public RaftGroupService partitionRaftGroupService(int partition) {
-        return null;
-    }
-
     /** {@inheritDoc} */
     @Override
     public int partition(BinaryRowEx keyRow) {
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/causality/VersionedValue.java b/modules/core/src/main/java/org/apache/ignite/internal/causality/VersionedValue.java
index 1eb90e152..10b3df8c3 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/causality/VersionedValue.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/causality/VersionedValue.java
@@ -591,10 +591,10 @@ public class VersionedValue<T> {
      * Check that the given causality token is correct according to the actual token.
      *
      * @param actualToken Actual token.
-     * @param candidateToken Candidate token.
+     * @param causalityToken Causality token.
      */
-    private static void checkToken(long actualToken, long candidateToken) {
-        assert actualToken == NOT_INITIALIZED || actualToken < candidateToken : IgniteStringFormatter.format(
-                "Token must be greater than actual [token={}, actual={}]", candidateToken, actualToken);
+    private static void checkToken(long actualToken, long causalityToken) {
+        assert actualToken == NOT_INITIALIZED || actualToken + 1 == causalityToken : IgniteStringFormatter.format(
+            "Token must be greater than actual by exactly 1 [token={}, actual={}]", causalityToken, actualToken);
     }
 }
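
To make the tightened assertion above concrete, here is a small self-contained sketch (not part of the patch) that mirrors the restored check: a causality token is accepted only when it directly follows the last applied token, or when nothing has been applied yet. The NOT_INITIALIZED sentinel value of -1 is an assumption used purely for illustration.

    public class TokenCheckSketch {
        private static final long NOT_INITIALIZED = -1; // assumed sentinel, for illustration only

        // Mirrors the restored checkToken() logic: the candidate must follow the actual token by exactly 1.
        static boolean isValidToken(long actualToken, long causalityToken) {
            return actualToken == NOT_INITIALIZED || actualToken + 1 == causalityToken;
        }

        public static void main(String[] args) {
            System.out.println(isValidToken(-1, 5)); // true: nothing applied yet
            System.out.println(isValidToken(4, 5));  // true: advances by exactly one
            System.out.println(isValidToken(3, 5));  // false: skipping tokens is no longer allowed
        }
    }
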
diff --git a/modules/raft-client/src/main/java/org/apache/ignite/raft/client/service/RaftGroupService.java b/modules/raft-client/src/main/java/org/apache/ignite/raft/client/service/RaftGroupService.java
index 270ea9953..9fc267422 100644
--- a/modules/raft-client/src/main/java/org/apache/ignite/raft/client/service/RaftGroupService.java
+++ b/modules/raft-client/src/main/java/org/apache/ignite/raft/client/service/RaftGroupService.java
@@ -20,7 +20,6 @@ package org.apache.ignite.raft.client.service;
 import java.util.List;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.TimeoutException;
-import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.network.ClusterService;
 import org.apache.ignite.raft.client.Command;
 import org.apache.ignite.raft.client.Peer;
@@ -92,15 +91,6 @@ public interface RaftGroupService {
      */
     CompletableFuture<Void> refreshLeader();
 
-    /**
-     * Refreshes a replication group leader and returns (leader, term) tuple.
-     *
-     * <p>This operation is executed on a group leader.
-     *
-     * @return A future, with (leader, term) tuple.
-     */
-    CompletableFuture<IgniteBiTuple<Peer, Long>> refreshAndGetLeaderWithTerm();
-
     /**
      * Refreshes replication group members.
      *
@@ -153,24 +143,6 @@ public interface RaftGroupService {
      */
     CompletableFuture<Void> changePeers(List<Peer> peers);
 
-    /**
-     * Changes peers of the replication group.
-     *
-     * <p>Asynchronous variant of the previous method.
-     * When the future completed, it just means, that changePeers process successfully started.
-     *
-     * <p>The results of rebalance itself will be processed by the listener of raft reconfiguration event
-     * (from raft/server module).
-     *
-     * <p>This operation is executed on a group leader.
-     *
-     * @param peers Peers.
-     * @param term Current known leader term.
-     *             If real raft group term will be different - changePeers will be skipped.
-     * @return A future.
-     */
-    CompletableFuture<Void> changePeersAsync(List<Peer> peers, long term);
-
     /**
      * Adds learners (non-voting members).
      *
diff --git a/modules/raft/src/integrationTest/java/org/apache/ignite/raft/jraft/core/ItNodeTest.java b/modules/raft/src/integrationTest/java/org/apache/ignite/raft/jraft/core/ItNodeTest.java
index 3087963b9..914e197e6 100644
--- a/modules/raft/src/integrationTest/java/org/apache/ignite/raft/jraft/core/ItNodeTest.java
+++ b/modules/raft/src/integrationTest/java/org/apache/ignite/raft/jraft/core/ItNodeTest.java
@@ -31,21 +31,12 @@ import static org.junit.jupiter.api.Assertions.assertNull;
 import static org.junit.jupiter.api.Assertions.assertSame;
 import static org.junit.jupiter.api.Assertions.assertTrue;
 import static org.junit.jupiter.api.Assertions.fail;
-import static org.mockito.ArgumentMatchers.any;
-import static org.mockito.ArgumentMatchers.anyLong;
-import static org.mockito.ArgumentMatchers.argThat;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.never;
-import static org.mockito.Mockito.timeout;
-import static org.mockito.Mockito.times;
-import static org.mockito.Mockito.verify;
 
 import com.codahale.metrics.ConsoleReporter;
 import java.io.File;
 import java.nio.ByteBuffer;
 import java.nio.file.Files;
 import java.nio.file.Path;
-import java.rmi.StubNotFoundException;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
@@ -65,17 +56,14 @@ import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicReference;
 import java.util.function.BiPredicate;
 import java.util.function.BooleanSupplier;
-import java.util.stream.IntStream;
 import java.util.stream.Stream;
-import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.testframework.WorkDirectory;
 import org.apache.ignite.internal.testframework.WorkDirectoryExtension;
 import org.apache.ignite.lang.IgniteLogger;
 import org.apache.ignite.network.ClusterService;
 import org.apache.ignite.network.NetworkAddress;
+import org.apache.ignite.network.NodeFinder;
 import org.apache.ignite.network.StaticNodeFinder;
-import org.apache.ignite.network.scalecube.TestScaleCubeClusterServiceFactory;
-import org.apache.ignite.raft.jraft.Closure;
 import org.apache.ignite.raft.jraft.Iterator;
 import org.apache.ignite.raft.jraft.JRaftUtils;
 import org.apache.ignite.raft.jraft.Node;
@@ -3014,15 +3002,6 @@ public class ItNodeTest {
 
     @Test
     public void testChangePeers() throws Exception {
-        changePeers(false);
-    }
-
-    @Test
-    public void testChangeAsyncPeers() throws Exception {
-        changePeers(true);
-    }
-
-    private void changePeers(boolean async) throws Exception {
         PeerId peer0 = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT);
         cluster = new TestCluster("testChangePeers", dataPath, Collections.singletonList(peer0), testInfo);
         assertTrue(cluster.start(peer0.getEndpoint()));
@@ -3037,225 +3016,22 @@ public class ItNodeTest {
         }
         for (int i = 0; i < 9; i++) {
             cluster.waitLeader();
-            leader = cluster.getLeader();
-            assertNotNull(leader);
-            PeerId leaderPeer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + i);
-            assertEquals(leaderPeer, leader.getNodeId().getPeerId());
-            PeerId newLeaderPeer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + i + 1);
-            if (async) {
-                SynchronizedClosure done = new SynchronizedClosure();
-                leader.changePeersAsync(new Configuration(Collections.singletonList(newLeaderPeer)),
-                        leader.getCurrentTerm(), done);
-                Status status = done.await();
-                assertTrue(status.isOk(), status.getRaftError().toString());
-                assertTrue(waitForCondition(() -> {
-                    if (cluster.getLeader() != null) {
-                        return newLeaderPeer.equals(cluster.getLeader().getLeaderId());
-                    }
-                    return false;
-                }, 10_000));
-            } else {
-                SynchronizedClosure done = new SynchronizedClosure();
-                leader.changePeers(new Configuration(Collections.singletonList(newLeaderPeer)), done);
-                Status status = done.await();
-                assertTrue(status.isOk(), status.getRaftError().toString());
-            }
-        }
-
-        cluster.waitLeader();
-
-        for (MockStateMachine fsm : cluster.getFsms()) {
-            assertEquals(10, fsm.getLogs().size());
-        }
-    }
-
-    @Test
-    public void testOnReconfigurationErrorListener() throws Exception {
-        PeerId peer0 = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT);
-        cluster = new TestCluster("testChangePeers", dataPath, Collections.singletonList(peer0), testInfo);
-
-        var raftGrpEvtsLsnr = mock(RaftGroupEventsListener.class);
-
-        cluster.setRaftGrpEvtsLsnr(raftGrpEvtsLsnr);
-        assertTrue(cluster.start(peer0.getEndpoint()));
-
-        cluster.waitLeader();
-
-        Node leader = cluster.getLeader();
-        sendTestTaskAndWait(leader);
-
-        verify(raftGrpEvtsLsnr, never()).onNewPeersConfigurationApplied(any());
-
-        PeerId newPeer = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT + 1);
-
-        SynchronizedClosure done = new SynchronizedClosure();
-
-        leader.changePeersAsync(new Configuration(Collections.singletonList(newPeer)),
-                leader.getCurrentTerm(), done);
-        assertEquals(done.await(), Status.OK());
-
-        verify(raftGrpEvtsLsnr, timeout(10_000))
-                .onReconfigurationError(argThat(st -> st.getRaftError() == RaftError.ECATCHUP), any(), anyLong());
-    }
-
-    @Test
-    public void testNewPeersConfigurationAppliedListener() throws Exception {
-        PeerId peer0 = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT);
-        cluster = new TestCluster("testChangePeers", dataPath, Collections.singletonList(peer0), testInfo);
-
-        var raftGrpEvtsLsnr = mock(RaftGroupEventsListener.class);
-
-        cluster.setRaftGrpEvtsLsnr(raftGrpEvtsLsnr);
-        assertTrue(cluster.start(peer0.getEndpoint()));
-
-        cluster.waitLeader();
-
-        Node leader = cluster.getLeader();
-        sendTestTaskAndWait(leader);
-
-        for (int i = 1; i < 5; i++) {
-            PeerId peer = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT + i);
-            assertTrue(cluster.start(peer.getEndpoint(), false, 300));
-        }
-
-        verify(raftGrpEvtsLsnr, never()).onNewPeersConfigurationApplied(any());
-
-        for (int i = 0; i < 4; i++) {
             leader = cluster.getLeader();
             assertNotNull(leader);
             PeerId peer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + i);
             assertEquals(peer, leader.getNodeId().getPeerId());
-            PeerId newPeer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + i + 1);
-
+            peer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + i + 1);
             SynchronizedClosure done = new SynchronizedClosure();
-            leader.changePeersAsync(new Configuration(Collections.singletonList(newPeer)),
-                    leader.getCurrentTerm(), done);
-            assertEquals(done.await(), Status.OK());
-            assertTrue(waitForCondition(() -> {
-                if (cluster.getLeader() != null) {
-                    return newPeer.equals(cluster.getLeader().getLeaderId());
-                }
-                return false;
-            }, 10_000));
-
-            verify(raftGrpEvtsLsnr, times(1)).onNewPeersConfigurationApplied(Collections.singletonList(newPeer));
-        }
-    }
-
-    @Test
-    public void testChangePeersOnLeaderElected() throws Exception {
-        List<PeerId> peers = IntStream.range(0, 6)
-                .mapToObj(i -> new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT + i))
-                .collect(toList());
-
-        cluster = new TestCluster("testChangePeers", dataPath, peers, testInfo);
-
-        var raftGrpEvtsLsnr = mock(RaftGroupEventsListener.class);
-
-        cluster.setRaftGrpEvtsLsnr(raftGrpEvtsLsnr);
-
-        for (PeerId p: peers) {
-            assertTrue(cluster.start(p.getEndpoint(), false, 300));
+            leader.changePeers(new Configuration(Collections.singletonList(peer)), done);
+            Status status = done.await();
+            assertTrue(status.isOk(), status.getRaftError().toString());
         }
 
         cluster.waitLeader();
 
-        verify(raftGrpEvtsLsnr, times(1)).onLeaderElected(anyLong());
-
-        cluster.stop(cluster.getLeader().getLeaderId().getEndpoint());
-
-        cluster.waitLeader();
-
-        verify(raftGrpEvtsLsnr, times(2)).onLeaderElected(anyLong());
-
-        cluster.stop(cluster.getLeader().getLeaderId().getEndpoint());
-
-        cluster.waitLeader();
-
-        verify(raftGrpEvtsLsnr, times(3)).onLeaderElected(anyLong());
-    }
-
-    @Test
-    public void changePeersAsyncResponses() throws Exception {
-        PeerId peer0 = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT);
-        cluster = new TestCluster("testChangePeers", dataPath, Collections.singletonList(peer0), testInfo);
-        assertTrue(cluster.start(peer0.getEndpoint()));
-
-        cluster.waitLeader();
-        Node leader = cluster.getLeader();
-        sendTestTaskAndWait(leader);
-
-        PeerId peer = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT + 1);
-        assertTrue(cluster.start(peer.getEndpoint(), false, 300));
-
-        cluster.waitLeader();
-        leader = cluster.getLeader();
-        assertNotNull(leader);
-        PeerId leaderPeer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort());
-        assertEquals(leaderPeer, leader.getNodeId().getPeerId());
-
-        PeerId newLeaderPeer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + 1);
-
-        // wrong leader term, do nothing
-        SynchronizedClosure done = new SynchronizedClosure();
-        leader.changePeersAsync(new Configuration(Collections.singletonList(newLeaderPeer)),
-                leader.getCurrentTerm() - 1, done);
-        assertEquals(done.await(), Status.OK());
-
-        // the same config, do nothing
-        done = new SynchronizedClosure();
-        leader.changePeersAsync(new Configuration(Collections.singletonList(leaderPeer)),
-                leader.getCurrentTerm(), done);
-        assertEquals(done.await(), Status.OK());
-
-        // change peer to new conf containing only new node
-        done = new SynchronizedClosure();
-        leader.changePeersAsync(new Configuration(Collections.singletonList(newLeaderPeer)),
-                leader.getCurrentTerm(), done);
-        assertEquals(done.await(), Status.OK());
-
-        assertTrue(waitForCondition(() -> {
-            if (cluster.getLeader() != null)
-                return newLeaderPeer.equals(cluster.getLeader().getLeaderId());
-            return false;
-        }, 10_000));
-
         for (MockStateMachine fsm : cluster.getFsms()) {
             assertEquals(10, fsm.getLogs().size());
         }
-
-        // check concurrent start of two async change peers.
-        Node newLeader = cluster.getLeader();
-
-        sendTestTaskAndWait(newLeader);
-
-        ExecutorService executor = Executors.newFixedThreadPool(10);
-
-        List<SynchronizedClosure> dones = new ArrayList<>();
-        List<Future> futs = new ArrayList<>();
-
-        for (int i = 0; i < 2; i++) {
-            SynchronizedClosure newDone = new SynchronizedClosure();
-            dones.add(newDone);
-            futs.add(executor.submit(() -> {
-                newLeader.changePeersAsync(new Configuration(Collections.singletonList(peer0)), 2, newDone);
-            }));
-        }
-        futs.get(0).get();
-        futs.get(1).get();
-
-        assertEquals(dones.get(0).await(), Status.OK());
-        assertEquals(dones.get(1).await().getRaftError(), RaftError.EBUSY);
-
-        assertTrue(waitForCondition(() -> {
-            if (cluster.getLeader() != null)
-                return peer0.equals(cluster.getLeader().getLeaderId());
-            return false;
-        }, 10_000));
-
-        for (MockStateMachine fsm : cluster.getFsms()) {
-            assertEquals(20, fsm.getLogs().size());
-        }
     }
 
     @Test
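
For readers unfamiliar with the closure pattern used by the restored synchronous test above, the following sketch (not part of the patch) shows a single peer change in isolation. It assumes an already started leader Node; the import locations of Configuration and SynchronizedClosure are assumed from the surrounding test code.

    import java.util.Collections;
    import org.apache.ignite.raft.jraft.Node;
    import org.apache.ignite.raft.jraft.Status;
    import org.apache.ignite.raft.jraft.closure.SynchronizedClosure;
    import org.apache.ignite.raft.jraft.conf.Configuration;
    import org.apache.ignite.raft.jraft.entity.PeerId;

    class ChangePeersSketch {
        // Moves the group to a configuration containing only newPeer and waits for the result.
        static void changeToSinglePeer(Node leader, PeerId newPeer) throws InterruptedException {
            SynchronizedClosure done = new SynchronizedClosure();

            leader.changePeers(new Configuration(Collections.singletonList(newPeer)), done);

            // Block until the reconfiguration finishes, then check its status.
            Status status = done.await();
            if (!status.isOk()) {
                throw new IllegalStateException("changePeers failed: " + status.getRaftError());
            }
        }
    }
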
diff --git a/modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java b/modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java
index 96407ad77..1e37a424c 100644
--- a/modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java
+++ b/modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java
@@ -29,7 +29,6 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.function.Supplier;
 import java.util.stream.Collectors;
 import org.apache.ignite.internal.manager.IgniteComponent;
-import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.raft.server.RaftServer;
 import org.apache.ignite.internal.raft.server.impl.JraftServerImpl;
 import org.apache.ignite.internal.thread.NamedThreadFactory;
@@ -46,6 +45,7 @@ import org.apache.ignite.raft.client.service.RaftGroupService;
 import org.apache.ignite.raft.jraft.RaftMessagesFactory;
 import org.apache.ignite.raft.jraft.rpc.impl.RaftGroupServiceImpl;
 import org.apache.ignite.raft.jraft.util.Utils;
+import org.jetbrains.annotations.ApiStatus.Experimental;
 import org.jetbrains.annotations.TestOnly;
 
 /**
@@ -145,12 +145,16 @@ public class Loza implements IgniteComponent {
      * Creates a raft group service providing operations on a raft group. If {@code nodes} contains the current node, then raft group starts
      * on the current node.
      *
+     * <p>IMPORTANT: DON'T USE. This method should be used only for long-running changePeers requests until IGNITE-14209 is fixed
+     * with a stable solution.
+     *
      * @param groupId      Raft group id.
      * @param nodes        Raft group nodes.
      * @param lsnrSupplier Raft group listener supplier.
      * @return Future representing pending completion of the operation.
      * @throws NodeStoppingException If node stopping intention was detected.
      */
+    @Experimental
     public CompletableFuture<RaftGroupService> prepareRaftGroup(
             String groupId,
             List<ClusterNode> nodes,
@@ -161,7 +165,7 @@ public class Loza implements IgniteComponent {
         }
 
         try {
-            return prepareRaftGroupInternal(groupId, nodes, lsnrSupplier, () -> RaftGroupEventsListener.noopLsnr);
+            return prepareRaftGroupInternal(groupId, nodes, lsnrSupplier);
         } finally {
             busyLock.leaveBusy();
         }
@@ -170,14 +174,13 @@ public class Loza implements IgniteComponent {
     /**
      * Internal method for raft group creation.
      *
-     * @param groupId                 Raft group id.
-     * @param nodes                   Raft group nodes.
-     * @param lsnrSupplier            Raft group listener supplier.
-     * @param raftGrpEvtsLsnrSupplier Raft group events listener supplier.
+     * @param groupId      Raft group id.
+     * @param nodes        Raft group nodes.
+     * @param lsnrSupplier Raft group listener supplier.
      * @return Future representing pending completion of the operation.
      */
     private CompletableFuture<RaftGroupService> prepareRaftGroupInternal(String groupId, List<ClusterNode> nodes,
-            Supplier<RaftGroupListener> lsnrSupplier, Supplier<RaftGroupEventsListener> raftGrpEvtsLsnrSupplier) {
+            Supplier<RaftGroupListener> lsnrSupplier) {
         assert !nodes.isEmpty();
 
         List<Peer> peers = nodes.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
@@ -187,7 +190,7 @@ public class Loza implements IgniteComponent {
         boolean hasLocalRaft = nodes.stream().anyMatch(n -> locNodeName.equals(n.name()));
 
         if (hasLocalRaft) {
-            if (!raftServer.startRaftGroup(groupId, raftGrpEvtsLsnrSupplier.get(), lsnrSupplier.get(), peers)) {
+            if (!raftServer.startRaftGroup(groupId, lsnrSupplier.get(), peers)) {
                 throw new IgniteInternalException(IgniteStringFormatter.format(
                         "Raft group on the node is already started [node={}, raftGrp={}]",
                         locNodeName,
@@ -209,72 +212,30 @@ public class Loza implements IgniteComponent {
         );
     }
 
-    /**
-     * If {@code deltaNodes} contains the current node, then raft group starts on the current node.
-     *
-     * @param grpId                   Raft group id.
-     * @param nodes                   Full set of raft group nodes.
-     * @param deltaNodes              New raft group nodes.
-     * @param lsnrSupplier            Raft group listener supplier.
-     * @param raftGrpEvtsLsnrSupplier Raft group events listener supplier.
-     * @throws NodeStoppingException If node stopping intention was detected.
-     */
-    public void startRaftGroupNode(
-            String grpId,
-            Collection<ClusterNode> nodes,
-            Collection<ClusterNode> deltaNodes,
-            Supplier<RaftGroupListener> lsnrSupplier,
-            Supplier<RaftGroupEventsListener> raftGrpEvtsLsnrSupplier) throws NodeStoppingException {
-        assert !nodes.isEmpty();
-
-        if (!busyLock.enterBusy()) {
-            throw new NodeStoppingException();
-        }
-
-        try {
-            List<Peer> peers = nodes.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
-
-            String locNodeName = clusterNetSvc.topologyService().localMember().name();
-
-            if (deltaNodes.stream().anyMatch(n -> locNodeName.equals(n.name()))) {
-                if (!raftServer.startRaftGroup(grpId, raftGrpEvtsLsnrSupplier.get(), lsnrSupplier.get(), peers)) {
-                    throw new IgniteInternalException(IgniteStringFormatter.format(
-                            "Raft group on the node is already started [node={}, raftGrp={}]",
-                            locNodeName,
-                            grpId
-                    ));
-                }
-            }
-        } finally {
-            busyLock.leaveBusy();
-        }
-    }
-
     /**
      * Creates a raft group service providing operations on a raft group. If {@code deltaNodes} contains the current node, then raft group
      * starts on the current node.
      *
-     * @param grpId                   Raft group id.
-     * @param nodes                   Full set of raft group nodes.
-     * @param deltaNodes              New raft group nodes.
-     * @param lsnrSupplier            Raft group listener supplier.
-     * @param raftGrpEvtsLsnrSupplier Raft group events listener supplier.
+     * @param groupId      Raft group id.
+     * @param nodes        Full set of raft group nodes.
+     * @param deltaNodes   New raft group nodes.
+     * @param lsnrSupplier Raft group listener supplier.
      * @return Future representing pending completion of the operation.
      * @throws NodeStoppingException If node stopping intention was detected.
      */
+    @Experimental
     public CompletableFuture<RaftGroupService> updateRaftGroup(
-            String grpId,
+            String groupId,
             Collection<ClusterNode> nodes,
             Collection<ClusterNode> deltaNodes,
-            Supplier<RaftGroupListener> lsnrSupplier,
-            Supplier<RaftGroupEventsListener> raftGrpEvtsLsnrSupplier
+            Supplier<RaftGroupListener> lsnrSupplier
     ) throws NodeStoppingException {
         if (!busyLock.enterBusy()) {
             throw new NodeStoppingException();
         }
 
         try {
-            return updateRaftGroupInternal(grpId, nodes, deltaNodes, lsnrSupplier, raftGrpEvtsLsnrSupplier);
+            return updateRaftGroupInternal(groupId, nodes, deltaNodes, lsnrSupplier);
         } finally {
             busyLock.leaveBusy();
         }
@@ -283,19 +244,14 @@ public class Loza implements IgniteComponent {
     /**
      * Internal method for updating a raft group.
      *
-     * @param grpId                   Raft group id.
-     * @param nodes                   Full set of raft group nodes.
-     * @param deltaNodes              New raft group nodes.
-     * @param lsnrSupplier            Raft group listener supplier.
-     * @param raftGrpEvtsLsnrSupplier Raft group events listener supplier.
+     * @param groupId      Raft group id.
+     * @param nodes        Full set of raft group nodes.
+     * @param deltaNodes   New raft group nodes.
+     * @param lsnrSupplier Raft group listener supplier.
      * @return Future representing pending completion of the operation.
      */
-    private CompletableFuture<RaftGroupService> updateRaftGroupInternal(
-            String grpId,
-            Collection<ClusterNode> nodes,
-            Collection<ClusterNode> deltaNodes,
-            Supplier<RaftGroupListener> lsnrSupplier,
-            Supplier<RaftGroupEventsListener> raftGrpEvtsLsnrSupplier) {
+    private CompletableFuture<RaftGroupService> updateRaftGroupInternal(String groupId, Collection<ClusterNode> nodes,
+            Collection<ClusterNode> deltaNodes, Supplier<RaftGroupListener> lsnrSupplier) {
         assert !nodes.isEmpty();
 
         List<Peer> peers = nodes.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
@@ -303,17 +259,17 @@ public class Loza implements IgniteComponent {
         String locNodeName = clusterNetSvc.topologyService().localMember().name();
 
         if (deltaNodes.stream().anyMatch(n -> locNodeName.equals(n.name()))) {
-            if (!raftServer.startRaftGroup(grpId,  raftGrpEvtsLsnrSupplier.get(), lsnrSupplier.get(), peers)) {
+            if (!raftServer.startRaftGroup(groupId, lsnrSupplier.get(), peers)) {
                 throw new IgniteInternalException(IgniteStringFormatter.format(
                         "Raft group on the node is already started [node={}, raftGrp={}]",
                         locNodeName,
-                        grpId
+                        groupId
                 ));
             }
         }
 
         return RaftGroupServiceImpl.start(
-                grpId,
+                groupId,
                 clusterNetSvc,
                 FACTORY,
                 RETRY_TIMEOUT,
@@ -325,6 +281,57 @@ public class Loza implements IgniteComponent {
         );
     }
 
+    /**
+     * Changes peers for a group from {@code expectedNodes} to {@code changedNodes}.
+     *
+     * @param groupId       Raft group id.
+     * @param expectedNodes List of nodes that contains the raft group peers.
+     * @param changedNodes  List of nodes that will contain the raft group peers after the change.
+     * @return Future that completes when the peers have been changed.
+     * @throws NodeStoppingException If node stopping intention was detected.
+     */
+    public CompletableFuture<Void> changePeers(
+            String groupId,
+            List<ClusterNode> expectedNodes,
+            List<ClusterNode> changedNodes
+    ) throws NodeStoppingException {
+        if (!busyLock.enterBusy()) {
+            throw new NodeStoppingException();
+        }
+
+        try {
+            return changePeersInternal(groupId, expectedNodes, changedNodes);
+        } finally {
+            busyLock.leaveBusy();
+        }
+    }
+
+    /**
+     * Internal method for changing peers for a RAFT group.
+     *
+     * @param groupId       Raft group id.
+     * @param expectedNodes List of nodes that contains the raft group peers.
+     * @param changedNodes  List of nodes that will contain the raft group peers after the change.
+     * @return Future that completes when the peers have been changed.
+     */
+    private CompletableFuture<Void> changePeersInternal(String groupId, List<ClusterNode> expectedNodes, List<ClusterNode> changedNodes) {
+        List<Peer> expectedPeers = expectedNodes.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
+        List<Peer> changedPeers = changedNodes.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
+
+        return RaftGroupServiceImpl.start(
+                groupId,
+                clusterNetSvc,
+                FACTORY,
+                10 * RETRY_TIMEOUT,
+                10 * RPC_TIMEOUT,
+                expectedPeers,
+                true,
+                DELAY,
+                executor
+        ).thenCompose(srvc -> srvc.changePeers(changedPeers)
+                .thenRun(() -> srvc.shutdown()));
+    }
+
     /**
      * Stops a raft group on the current node.
      *
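
A brief usage sketch of the restored Loza#changePeers method (not part of the patch), assuming a started Loza instance and ClusterNode references obtained elsewhere; the group id, variable names, and the NodeStoppingException import location are illustrative assumptions.

    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import org.apache.ignite.internal.raft.Loza;
    import org.apache.ignite.lang.NodeStoppingException;
    import org.apache.ignite.network.ClusterNode;

    class ChangePeersUsageSketch {
        // Moves the peers of a partition group from the current node set to an extended one.
        static CompletableFuture<Void> scaleOut(Loza raftMgr, List<ClusterNode> current, List<ClusterNode> extended)
                throws NodeStoppingException {
            return raftMgr.changePeers("table_T_part_0", current, extended)
                    .whenComplete((v, err) -> {
                        if (err != null) {
                            // No recovery logic exists yet (see the setBaseline javadoc earlier in this diff),
                            // so a failure here must be handled by the caller.
                            System.err.println("changePeers failed: " + err);
                        }
                    });
        }
    }
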
diff --git a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftGroupEventsListener.java b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftGroupEventsListener.java
deleted file mode 100644
index 1779846ac..000000000
--- a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftGroupEventsListener.java
+++ /dev/null
@@ -1,68 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ignite.internal.raft.server;
-
-import java.util.List;
-import org.apache.ignite.raft.jraft.Status;
-import org.apache.ignite.raft.jraft.entity.PeerId;
-
-/**
- * Listener for group membership and other events.
- */
-public interface RaftGroupEventsListener {
-    /**
-     * Invoked, when new leader is elected (if it is the first leader of group ever - will be invoked too).
-     *
-     * @param term Raft term of the current leader.
-     */
-    void onLeaderElected(long term);
-
-    /**
-     * Invoked on the leader, when new peers' configuration applied to raft group.
-     *
-     * @param peers list of peers, which was applied by raft group membership configuration.
-     */
-    void onNewPeersConfigurationApplied(List<PeerId> peers);
-
-    /**
-     * Invoked on the leader, when membership reconfiguration was failed, because of {@link Status}.
-     *
-     * @param status with description of failure.
-     * @param peers List of peers, which was tried as a target of reconfiguration.
-     * @param term Raft term of the current leader.
-     */
-    void onReconfigurationError(Status status, List<PeerId> peers, long term);
-
-    /**
-     * No-op raft group events listener.
-     */
-    RaftGroupEventsListener noopLsnr = new RaftGroupEventsListener() {
-        /** {@inheritDoc} */
-        @Override
-        public void onLeaderElected(long term) { }
-
-        /** {@inheritDoc} */
-        @Override
-        public void onNewPeersConfigurationApplied(List<PeerId> peers) { }
-
-        /** {@inheritDoc} */
-        @Override
-        public void onReconfigurationError(Status status, List<PeerId> peers, long term) {}
-    };
-
-}
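
As a point of reference for what the revert removes, here is a minimal, hypothetical implementation of the deleted RaftGroupEventsListener contract that merely logs each callback; it uses only the types shown in the deleted file above.

    import java.util.List;
    import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
    import org.apache.ignite.raft.jraft.Status;
    import org.apache.ignite.raft.jraft.entity.PeerId;

    class LoggingRaftGroupEventsListener implements RaftGroupEventsListener {
        @Override
        public void onLeaderElected(long term) {
            System.out.println("Leader elected, term=" + term);
        }

        @Override
        public void onNewPeersConfigurationApplied(List<PeerId> peers) {
            System.out.println("New peer configuration applied: " + peers);
        }

        @Override
        public void onReconfigurationError(Status status, List<PeerId> peers, long term) {
            System.out.println("Reconfiguration failed: status=" + status + ", peers=" + peers + ", term=" + term);
        }
    }
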
diff --git a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftServer.java b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftServer.java
index 3e795d73f..526cc4d50 100644
--- a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftServer.java
+++ b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftServer.java
@@ -47,18 +47,6 @@ public interface RaftServer extends IgniteComponent {
      */
     boolean startRaftGroup(String groupId, RaftGroupListener lsnr, List<Peer> initialConf);
 
-    /**
-     * Starts a raft group bound to this cluster node.
-     *
-     * @param groupId     Group id.
-     * @param evLsnr      Listener for group membership and other events.
-     * @param lsnr        Listener for state machine events.
-     * @param initialConf Inititial group configuration.
      * @return {@code True} if a group was successfully started, {@code False} when the group with the given name already exists.
-     */
-    boolean startRaftGroup(String groupId, RaftGroupEventsListener evLsnr,
-            RaftGroupListener lsnr, List<Peer> initialConf);
-
     /**
      * Synchronously stops a raft group if any.
      *
diff --git a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/impl/JraftServerImpl.java b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/impl/JraftServerImpl.java
index d15693164..1df80378e 100644
--- a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/impl/JraftServerImpl.java
+++ b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/impl/JraftServerImpl.java
@@ -30,9 +30,7 @@ import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.ExecutorService;
-import java.util.function.BiPredicate;
 import java.util.stream.Collectors;
-import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.raft.server.RaftServer;
 import org.apache.ignite.internal.thread.NamedThreadFactory;
 import org.apache.ignite.lang.IgniteInternalException;
@@ -71,9 +69,7 @@ import org.apache.ignite.raft.jraft.storage.snapshot.SnapshotWriter;
 import org.apache.ignite.raft.jraft.util.ExecutorServiceHelper;
 import org.apache.ignite.raft.jraft.util.ExponentialBackoffTimeoutStrategy;
 import org.apache.ignite.raft.jraft.util.JDKMarshaller;
-import org.jetbrains.annotations.NotNull;
 import org.jetbrains.annotations.Nullable;
-import org.jetbrains.annotations.TestOnly;
 
 /**
  * Raft server implementation on top of forked JRaft library.
@@ -317,13 +313,6 @@ public class JraftServerImpl implements RaftServer {
     /** {@inheritDoc} */
     @Override
     public synchronized boolean startRaftGroup(String groupId, RaftGroupListener lsnr, @Nullable List<Peer> initialConf) {
-        return startRaftGroup(groupId, RaftGroupEventsListener.noopLsnr, lsnr, initialConf);
-    }
-
-    /** {@inheritDoc} */
-    @Override
-    public synchronized boolean startRaftGroup(String groupId, @NotNull RaftGroupEventsListener evLsnr,
-            RaftGroupListener lsnr, @Nullable List<Peer> initialConf) {
         if (groups.containsKey(groupId)) {
             return false;
         }
@@ -344,8 +333,6 @@ public class JraftServerImpl implements RaftServer {
 
         nodeOptions.setFsm(new DelegatingStateMachine(lsnr));
 
-        nodeOptions.setRaftGrpEvtsLsnr(evLsnr);
-
         if (initialConf != null) {
             List<PeerId> mapped = initialConf.stream().map(PeerId::fromPeer).collect(Collectors.toList());
 
@@ -413,31 +400,6 @@ public class JraftServerImpl implements RaftServer {
         return groups.keySet();
     }
 
-    /**
-     * Blocks messages for raft group node according to provided predicate.
-     *
-     * @param groupId Raft group id.
-     * @param predicate Predicate to block messages.
-     */
-    @TestOnly
-    public void blockMessages(String groupId, BiPredicate<Object, String> predicate) {
-        IgniteRpcClient client = (IgniteRpcClient) groups.get(groupId).getNodeOptions().getRpcClient();
-
-        client.blockMessages(predicate);
-    }
-
-    /**
-     * Stops blocking messages for raft group node.
-     *
-     * @param groupId Raft group id.
-     */
-    @TestOnly
-    public void stopBlockMessages(String groupId) {
-        IgniteRpcClient client = (IgniteRpcClient) groups.get(groupId).getNodeOptions().getRpcClient();
-
-        client.stopBlock();
-    }
-
     /**
      * Wrapper of {@link StateMachineAdapter}.
      */
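
The two test-only hooks removed above (blockMessages / stopBlockMessages) were exercised by the ItRebalanceDistributedTest that this revert also deletes. A minimal usage sketch, assuming server is a started JraftServerImpl and groupId names one of its partition raft groups:

    // Illustrative sketch only: drop ping requests for one group, run the scenario, then unblock.
    server.blockMessages(groupId, (msg, node) -> msg instanceof RpcRequests.PingRequest);
    // ... exercise the behaviour that depends on the blocked replicator ...
    server.stopBlockMessages(groupId);
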
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/Node.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/Node.java
index cac7fb393..1ab3fd409 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/Node.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/Node.java
@@ -189,17 +189,6 @@ public interface Node extends Lifecycle<NodeOptions>, Describer {
      */
     void changePeers(final Configuration newPeers, final Closure done);
 
-    /**
-     * Asynchronously changes the configuration of the raft group to |newPeers|. If the done closure completes with {@link Status#OK()},
-     * it is guaranteed that the state of {@link org.apache.ignite.raft.jraft.core.NodeImpl.ConfigurationCtx} has switched to
-     * {@code STAGE_CATCHING_UP}.
-     *
-     * @param newPeers New peer configuration to apply.
-     * @param term Term on which this method was called.
-     * @param done Callback that is completed once the configuration change has been accepted or rejected.
-     */
-    void changePeersAsync(final Configuration newPeers, long term, final Closure done);
-
     /**
      * Reset the configuration of this node individually, without any replication to other peers before this node
      * becomes the leader. This function is supposed to be invoked when the majority of the replication group are dead
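
A minimal sketch of how the Node#changePeersAsync method removed above was invoked, assuming node is the current leader and currentTerm was read from it beforehand; the peer addresses are illustrative:

    // Illustrative sketch only: request an asynchronous configuration change on the leader.
    Configuration newConf = JRaftUtils.getConfiguration("localhost:8081,localhost:8082,localhost:8083");
    node.changePeersAsync(newConf, currentTerm, status -> {
        if (status.isOk()) {
            // Accepted: the group has entered STAGE_CATCHING_UP; the final outcome is reported
            // through RaftGroupEventsListener#onNewPeersConfigurationApplied or #onReconfigurationError.
        }
    });
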
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/RaftMessageGroup.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/RaftMessageGroup.java
index 9aae7bc93..2295b8bb1 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/RaftMessageGroup.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/RaftMessageGroup.java
@@ -83,12 +83,6 @@ public class RaftMessageGroup {
 
         /** */
         public static final short LEARNERS_OP_RESPONSE = 1016;
-
-        /** */
-        public static final short CHANGE_PEERS_ASYNC_REQUEST = 1017;
-
-        /** */
-        public static final short CHANGE_PEERS_ASYNC_RESPONSE = 1018;
     }
 
     /**
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/core/NodeImpl.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/core/NodeImpl.java
index 80bc52e72..1c00f49ef 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/core/NodeImpl.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/core/NodeImpl.java
@@ -36,7 +36,6 @@ import java.util.concurrent.locks.ReadWriteLock;
 import java.util.stream.Collectors;
 import org.apache.ignite.internal.thread.NamedThreadFactory;
 import org.apache.ignite.lang.IgniteLogger;
-import org.apache.ignite.network.NetworkAddress;
 import org.apache.ignite.raft.client.Peer;
 import org.apache.ignite.raft.jraft.Closure;
 import org.apache.ignite.raft.jraft.FSMCaller;
@@ -331,7 +330,7 @@ public class NodeImpl implements Node, RaftServerService {
         /**
          * Start change configuration.
          */
-        void start(final Configuration oldConf, final Configuration newConf, final Closure done, boolean async) {
+        void start(final Configuration oldConf, final Configuration newConf, final Closure done) {
             if (isBusy()) {
                 if (done != null) {
                     Utils.runClosureInThread(this.node.getOptions().getCommonExecutor(), done, new Status(RaftError.EBUSY, "Already in busy stage."));
@@ -346,9 +345,6 @@ public class NodeImpl implements Node, RaftServerService {
             }
             this.done = done;
             this.stage = Stage.STAGE_CATCHING_UP;
-            if (async) {
-                Utils.runClosureInThread(this.node.getOptions().getCommonExecutor(), done, Status.OK());
-            }
             this.oldPeers = oldConf.listPeers();
             this.newPeers = newConf.listPeers();
             this.oldLearners = oldConf.listLearners();
@@ -389,7 +385,7 @@ public class NodeImpl implements Node, RaftServerService {
         private void addNewLearners() {
             final Set<PeerId> addingLearners = new HashSet<>(this.newLearners);
             addingLearners.removeAll(this.oldLearners);
-            LOG.info("Adding learners: {}.", addingLearners);
+            LOG.info("Adding learners: {}.", this.addingPeers);
             for (final PeerId newLearner : addingLearners) {
                 if (!this.node.replicatorGroup.addReplicator(newLearner, ReplicatorType.Learner)) {
                     LOG.error("Node {} start the learner replicator failed, peer={}.", this.node.getNodeId(),
@@ -431,32 +427,15 @@ public class NodeImpl implements Node, RaftServerService {
                 this.node.stopReplicator(this.oldPeers, this.newPeers);
                 this.node.stopReplicator(this.oldLearners, this.newLearners);
             }
-
-            // must be copied before clearing
-            final List<PeerId> resultPeerIds = new ArrayList<>(this.newPeers);
-
             clearPeers();
             clearLearners();
 
             this.version++;
             this.stage = Stage.STAGE_NONE;
             this.nchanges = 0;
-
-            Closure oldDoneClosure = done;
-
             if (this.done != null) {
-                Closure newDone = (Status status) -> {
-                    if (status.isOk()) {
-                        node.getOptions().getRaftGrpEvtsLsnr().onNewPeersConfigurationApplied(resultPeerIds);
-                    } else {
-                        node.getOptions().getRaftGrpEvtsLsnr().onReconfigurationError(status, resultPeerIds, node.getCurrentTerm());
-                    }
-                    oldDoneClosure.run(status);
-                };
-
-                // TODO: in the case of changePeersAsync this invocation is useless, since we have already sent the OK response in the done closure.
-                Utils.runClosureInThread(this.node.getOptions().getCommonExecutor(), newDone, st != null ? st :
-                        new Status(RaftError.EPERM, "Leader stepped down."));
+                Utils.runClosureInThread(this.node.getOptions().getCommonExecutor(), this.done, st != null ? st :
+                    new Status(RaftError.EPERM, "Leader stepped down."));
                 this.done = null;
             }
         }
@@ -1373,7 +1352,6 @@ public class NodeImpl implements Node, RaftServerService {
             throw new IllegalStateException();
         }
         this.confCtx.flush(this.conf.getConf(), this.conf.getOldConf());
-
         resetElectionTimeoutToInitial();
         this.stepDownTimer.start();
     }
@@ -2467,9 +2445,6 @@ public class NodeImpl implements Node, RaftServerService {
             if (status.isOk()) {
                 onConfigurationChangeDone(this.term);
                 if (this.leaderStart) {
-                    if (getOptions().getRaftGrpEvtsLsnr() != null) {
-                        options.getRaftGrpEvtsLsnr().onLeaderElected(term);
-                    }
                     getOptions().getFsm().onLeaderStart(this.term);
                 }
             }
@@ -2502,12 +2477,8 @@ public class NodeImpl implements Node, RaftServerService {
         checkAndSetConfiguration(false);
     }
 
-    private void unsafeRegisterConfChange(final Configuration oldConf, final Configuration newConf, final Closure done) {
-        unsafeRegisterConfChange(oldConf, newConf, done, false);
-    }
-
     private void unsafeRegisterConfChange(final Configuration oldConf, final Configuration newConf,
-        final Closure done, boolean async) {
+        final Closure done) {
 
         Requires.requireTrue(newConf.isValid(), "Invalid new conf: %s", newConf);
         // The new conf entry(will be stored in log manager) should be valid
@@ -2538,16 +2509,10 @@ public class NodeImpl implements Node, RaftServerService {
         }
         // Return immediately when the new peers equals to current configuration
         if (this.conf.getConf().equals(newConf)) {
-            Closure newDone = (Status status) -> {
-                // doOnNewPeersConfigurationApplied should be called, otherwise we could lose the callback invocation.
-                    // For example, the old leader may have failed just before invoking doOnNewPeersConfigurationApplied.
-                this.getOptions().getRaftGrpEvtsLsnr().onNewPeersConfigurationApplied(newConf.getPeers());
-                done.run(status);
-            };
-            Utils.runClosureInThread(this.getOptions().getCommonExecutor(), newDone);
+            Utils.runClosureInThread(this.getOptions().getCommonExecutor(), done);
             return;
         }
-        this.confCtx.start(oldConf, newConf, done, async);
+        this.confCtx.start(oldConf, newConf, done);
     }
 
     private void afterShutdown() {
@@ -3253,32 +3218,6 @@ public class NodeImpl implements Node, RaftServerService {
         }
     }
 
-    @Override
-    public void changePeersAsync(final Configuration newPeers, long term, Closure done) {
-        Requires.requireNonNull(newPeers, "Null new peers");
-        Requires.requireTrue(!newPeers.isEmpty(), "Empty new peers");
-        this.writeLock.lock();
-        try {
-            long currentTerm = getCurrentTerm();
-
-            if (currentTerm != term) {
-                LOG.warn("Node {} refused configuration change because of mismatching terms. Current term is {}, but the provided term is {}.",
-                        getNodeId(), currentTerm, term);
-
-                Utils.runClosureInThread(this.getOptions().getCommonExecutor(), done, Status.OK());
-
-                return;
-            }
-
-            LOG.info("Node {} change peers from {} to {}.", getNodeId(), this.conf.getConf(), newPeers);
-
-            unsafeRegisterConfChange(this.conf.getConf(), newPeers, done, true);
-        }
-        finally {
-            this.writeLock.unlock();
-        }
-    }
-
     @Override
     public Status resetPeers(final Configuration newPeers) {
         Requires.requireNonNull(newPeers, "Null new peers");
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/option/NodeOptions.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/option/NodeOptions.java
index 31e151746..76279a78b 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/option/NodeOptions.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/option/NodeOptions.java
@@ -18,7 +18,6 @@ package org.apache.ignite.raft.jraft.option;
 
 import java.util.List;
 import java.util.concurrent.ExecutorService;
-import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.raft.jraft.util.TimeoutStrategy;
 import org.apache.ignite.raft.jraft.util.NoopTimeoutStrategy;
 import org.apache.ignite.raft.jraft.JRaftServiceFactory;
@@ -38,7 +37,6 @@ import org.apache.ignite.raft.jraft.util.StringUtils;
 import org.apache.ignite.raft.jraft.util.Utils;
 import org.apache.ignite.raft.jraft.util.concurrent.FixedThreadsExecutorGroup;
 import org.apache.ignite.raft.jraft.util.timer.Timer;
-import org.jetbrains.annotations.NotNull;
 
 /**
  * Node options.
@@ -106,9 +104,6 @@ public class NodeOptions extends RpcOptions implements Copiable<NodeOptions> {
     // a valid instance.
     private StateMachine fsm;
 
-    // Listener for raft group reconfiguration events.
-    private RaftGroupEventsListener raftGrpEvtsLsnr;
-
     // Describe a specific RaftMetaStorage in format ${type}://${parameters}
     private String raftMetaUri;
 
@@ -429,14 +424,6 @@ public class NodeOptions extends RpcOptions implements Copiable<NodeOptions> {
         this.initialConf = initialConf;
     }
 
-    public RaftGroupEventsListener getRaftGrpEvtsLsnr() {
-        return raftGrpEvtsLsnr;
-    }
-
-    public void setRaftGrpEvtsLsnr(@NotNull RaftGroupEventsListener raftGrpEvtsLsnr) {
-        this.raftGrpEvtsLsnr = raftGrpEvtsLsnr;
-    }
-
     public StateMachine getFsm() {
         return this.fsm;
     }
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/CliRequests.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/CliRequests.java
index ceee9a35c..f005ed22d 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/CliRequests.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/CliRequests.java
@@ -20,9 +20,8 @@
 package org.apache.ignite.raft.jraft.rpc;
 
 import java.util.Collection;
-import org.apache.ignite.raft.jraft.RaftMessageGroup;
 import org.apache.ignite.network.annotations.Transferable;
-import org.apache.ignite.raft.jraft.RaftMessageGroup.RpcClientMessageGroup;
+import org.apache.ignite.raft.jraft.RaftMessageGroup;
 
 public final class CliRequests {
     @Transferable(value = RaftMessageGroup.RpcClientMessageGroup.ADD_PEER_REQUEST)
@@ -73,24 +72,6 @@ public final class CliRequests {
         Collection<String> newPeersList();
     }
 
-    @Transferable(value = RpcClientMessageGroup.CHANGE_PEERS_ASYNC_REQUEST)
-    public interface ChangePeersAsyncRequest extends Message {
-        String groupId();
-
-        String leaderId();
-
-        Collection<String> newPeersList();
-
-        long term();
-    }
-
-    @Transferable(value = RpcClientMessageGroup.CHANGE_PEERS_ASYNC_RESPONSE)
-    public interface ChangePeersAsyncResponse extends Message {
-        Collection<String> oldPeersList();
-
-        Collection<String> newPeersList();
-    }
-
     @Transferable(value = RaftMessageGroup.RpcClientMessageGroup.SNAPSHOT_REQUEST)
     public interface SnapshotRequest extends Message {
         String groupId();
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/IgniteRpcServer.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/IgniteRpcServer.java
index 7681e7979..49de89069 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/IgniteRpcServer.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/IgniteRpcServer.java
@@ -38,7 +38,6 @@ import org.apache.ignite.raft.jraft.rpc.RpcProcessor;
 import org.apache.ignite.raft.jraft.rpc.RpcServer;
 import org.apache.ignite.raft.jraft.rpc.impl.cli.AddLearnersRequestProcessor;
 import org.apache.ignite.raft.jraft.rpc.impl.cli.AddPeerRequestProcessor;
-import org.apache.ignite.raft.jraft.rpc.impl.cli.ChangePeersAsyncRequestProcessor;
 import org.apache.ignite.raft.jraft.rpc.impl.cli.ChangePeersRequestProcessor;
 import org.apache.ignite.raft.jraft.rpc.impl.cli.GetLeaderRequestProcessor;
 import org.apache.ignite.raft.jraft.rpc.impl.cli.GetPeersRequestProcessor;
@@ -105,7 +104,6 @@ public class IgniteRpcServer implements RpcServer<Void> {
         registerProcessor(new RemovePeerRequestProcessor(rpcExecutor, raftMessagesFactory));
         registerProcessor(new ResetPeerRequestProcessor(rpcExecutor, raftMessagesFactory));
         registerProcessor(new ChangePeersRequestProcessor(rpcExecutor, raftMessagesFactory));
-        registerProcessor(new ChangePeersAsyncRequestProcessor(rpcExecutor, raftMessagesFactory));
         registerProcessor(new GetLeaderRequestProcessor(rpcExecutor, raftMessagesFactory));
         registerProcessor(new SnapshotRequestProcessor(rpcExecutor, raftMessagesFactory));
         registerProcessor(new TransferLeaderRequestProcessor(rpcExecutor, raftMessagesFactory));
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/RaftGroupServiceImpl.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/RaftGroupServiceImpl.java
index 3d634726e..5f84a35d5 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/RaftGroupServiceImpl.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/RaftGroupServiceImpl.java
@@ -51,7 +51,6 @@ import java.util.concurrent.TimeoutException;
 import java.util.function.BiConsumer;
 import java.util.stream.Collectors;
 import org.apache.ignite.internal.tostring.S;
-import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.lang.IgniteException;
 import org.apache.ignite.lang.IgniteLogger;
 import org.apache.ignite.network.ClusterService;
@@ -66,9 +65,6 @@ import org.apache.ignite.raft.jraft.entity.PeerId;
 import org.apache.ignite.raft.jraft.error.RaftError;
 import org.apache.ignite.raft.jraft.rpc.ActionRequest;
 import org.apache.ignite.raft.jraft.rpc.ActionResponse;
-import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncRequest;
-import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncResponse;
-import org.apache.ignite.raft.jraft.rpc.Message;
 import org.apache.ignite.raft.jraft.rpc.RpcRequests;
 import org.jetbrains.annotations.NotNull;
 
@@ -253,23 +249,6 @@ public class RaftGroupServiceImpl implements RaftGroupService {
         });
     }
 
-    /** {@inheritDoc} */
-    @Override public CompletableFuture<IgniteBiTuple<Peer, Long>> refreshAndGetLeaderWithTerm() {
-        GetLeaderRequest req = factory.getLeaderRequest().groupId(groupId).build();
-
-        CompletableFuture<GetLeaderResponse> fut = new CompletableFuture<>();
-
-        sendWithRetry(randomNode(), req, currentTimeMillis() + timeout, fut);
-
-        return fut.thenApply(resp -> {
-            Peer respLeader = parsePeer(resp.leaderId());
-
-            leader = respLeader;
-
-            return new IgniteBiTuple<>(respLeader, resp.currentTerm());
-        });
-    }
-
     /** {@inheritDoc} */
     @Override public CompletableFuture<Void> refreshMembers(boolean onlyAlive) {
         GetPeersRequest req = factory.getPeersRequest().onlyAlive(onlyAlive).groupId(groupId).build();
@@ -355,27 +334,6 @@ public class RaftGroupServiceImpl implements RaftGroupService {
         });
     }
 
-    /** {@inheritDoc} */
-    @Override public CompletableFuture<Void> changePeersAsync(List<Peer> peers, long term) {
-        Peer leader = this.leader;
-
-        if (leader == null)
-            return refreshLeader().thenCompose(res -> changePeersAsync(peers, term));
-
-        List<String> peersToChange = peers.stream().map(p -> PeerId.fromPeer(p).toString())
-                .collect(Collectors.toList());
-
-        ChangePeersAsyncRequest req = factory.changePeersAsyncRequest().groupId(groupId)
-                .term(term)
-                .newPeersList(peersToChange).build();
-
-        CompletableFuture<ChangePeersAsyncResponse> fut = new CompletableFuture<>();
-
-        sendWithRetry(leader, req, currentTimeMillis() + timeout, fut);
-
-        return fut.thenRun(() -> {});
-    }
-
     /** {@inheritDoc} */
     @Override public CompletableFuture<Void> addLearners(List<Peer> learners) {
         Peer leader = this.leader;
@@ -471,12 +429,23 @@ public class RaftGroupServiceImpl implements RaftGroupService {
                 .peerId(PeerId.fromPeer(newLeader).toString())
                 .build();
 
-        CompletableFuture<NetworkMessage> fut = new CompletableFuture<>();
+        CompletableFuture<NetworkMessage> fut = cluster.messagingService().invoke(leader.address(), req, rpcTimeout);
 
-        sendWithRetry(leader, req, currentTimeMillis() + timeout, fut);
+        return fut.thenCompose(resp -> {
+            if (resp != null) {
+                RpcRequests.ErrorResponse resp0 = (RpcRequests.ErrorResponse) resp;
 
-        return fut.thenRun(() -> {
-            this.leader = newLeader;
+                if (resp0.errorCode() != RaftError.SUCCESS.getNumber())
+                    return CompletableFuture.failedFuture(
+                        new RaftException(
+                            RaftError.forNumber(resp0.errorCode()), resp0.errorMsg()
+                        )
+                    );
+                else
+                    this.leader = newLeader;
+            }
+
+            return CompletableFuture.completedFuture(null);
         });
     }
 
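
The client-side counterparts removed above (refreshAndGetLeaderWithTerm and changePeersAsync) were meant to be chained, since the term returned by the former guards the latter against stale leaders (a change with a mismatching term is refused by the node). A minimal sketch under that assumption, with raftGroupService standing for the partition's RaftGroupService, newPeers for the target assignment, and IgniteBiTuple#get2() assumed to expose the term stored as the tuple's second element:

    // Illustrative sketch only: resolve the current leader term, then request the peer change with it.
    CompletableFuture<Void> changed = raftGroupService.refreshAndGetLeaderWithTerm()
            .thenCompose(leaderWithTerm -> raftGroupService.changePeersAsync(newPeers, leaderWithTerm.get2()));
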
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessor.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessor.java
deleted file mode 100644
index bd7bd409a..000000000
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessor.java
+++ /dev/null
@@ -1,93 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ignite.raft.jraft.rpc.impl.cli;
-
-import java.util.List;
-import java.util.concurrent.Executor;
-import org.apache.ignite.raft.jraft.RaftMessagesFactory;
-import org.apache.ignite.raft.jraft.conf.Configuration;
-import org.apache.ignite.raft.jraft.entity.PeerId;
-import org.apache.ignite.raft.jraft.error.RaftError;
-import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncRequest;
-import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncResponse;
-import org.apache.ignite.raft.jraft.rpc.Message;
-import org.apache.ignite.raft.jraft.rpc.RaftRpcFactory;
-
-import static java.util.stream.Collectors.toList;
-
-/**
- * Processor for asynchronous change peers requests.
- */
-public class ChangePeersAsyncRequestProcessor extends BaseCliRequestProcessor<ChangePeersAsyncRequest> {
-
-    public ChangePeersAsyncRequestProcessor(Executor executor, RaftMessagesFactory msgFactory) {
-        super(executor, msgFactory);
-    }
-
-    @Override
-    protected String getPeerId(final ChangePeersAsyncRequest request) {
-        return request.leaderId();
-    }
-
-    @Override
-    protected String getGroupId(final ChangePeersAsyncRequest request) {
-        return request.groupId();
-    }
-
-    @Override
-    protected Message processRequest0(final CliRequestContext ctx, final ChangePeersAsyncRequest request,
-            final IgniteCliRpcRequestClosure done) {
-        final List<PeerId> oldConf = ctx.node.listPeers();
-
-        final Configuration conf = new Configuration();
-        for (final String peerIdStr : request.newPeersList()) {
-            final PeerId peer = new PeerId();
-            if (peer.parse(peerIdStr)) {
-                conf.addPeer(peer);
-            }
-            else {
-                return RaftRpcFactory.DEFAULT //
-                        .newResponse(msgFactory(), RaftError.EINVAL, "Fail to parse peer id %s", peerIdStr);
-            }
-        }
-
-        long term = request.term();
-
-        LOG.info("Receive ChangePeersAsyncRequest with term {} to {} from {}, new conf is {}", term, ctx.node.getNodeId(), done.getRpcCtx()
-                .getRemoteAddress(), conf);
-
-        ctx.node.changePeersAsync(conf, term, status -> {
-            if (!status.isOk()) {
-                done.run(status);
-            }
-            else {
-                ChangePeersAsyncResponse resp = msgFactory().changePeersAsyncResponse()
-                        .oldPeersList(oldConf.stream().map(Object::toString).collect(toList()))
-                        .newPeersList(conf.getPeers().stream().map(Object::toString).collect(toList()))
-                        .build();
-
-                done.sendResponse(resp);
-            }
-        });
-        return null;
-    }
-
-    @Override
-    public String interest() {
-        return ChangePeersAsyncRequest.class.getName();
-    }
-}
diff --git a/modules/raft/src/test/java/org/apache/ignite/internal/raft/LozaTest.java b/modules/raft/src/test/java/org/apache/ignite/internal/raft/LozaTest.java
index 26857cafd..ad1dc2f08 100644
--- a/modules/raft/src/test/java/org/apache/ignite/internal/raft/LozaTest.java
+++ b/modules/raft/src/test/java/org/apache/ignite/internal/raft/LozaTest.java
@@ -77,8 +77,9 @@ public class LozaTest extends IgniteAbstractTest {
 
         Supplier<RaftGroupListener> lsnrSupplier = () -> null;
 
-        assertThrows(NodeStoppingException.class, () -> loza.updateRaftGroup(raftGroupId, nodes, newNodes, lsnrSupplier, () -> null));
+        assertThrows(NodeStoppingException.class, () -> loza.updateRaftGroup(raftGroupId, nodes, newNodes, lsnrSupplier));
         assertThrows(NodeStoppingException.class, () -> loza.stopRaftGroup(raftGroupId));
         assertThrows(NodeStoppingException.class, () -> loza.prepareRaftGroup(raftGroupId, nodes, lsnrSupplier));
+        assertThrows(NodeStoppingException.class, () -> loza.changePeers(raftGroupId, nodes, newNodes));
     }
 }
diff --git a/modules/raft/src/test/java/org/apache/ignite/internal/raft/server/impl/RaftServerImpl.java b/modules/raft/src/test/java/org/apache/ignite/internal/raft/server/impl/RaftServerImpl.java
index 296fb0157..37fc8c39b 100644
--- a/modules/raft/src/test/java/org/apache/ignite/internal/raft/server/impl/RaftServerImpl.java
+++ b/modules/raft/src/test/java/org/apache/ignite/internal/raft/server/impl/RaftServerImpl.java
@@ -27,7 +27,6 @@ import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.function.BiConsumer;
-import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.raft.server.RaftServer;
 import org.apache.ignite.lang.IgniteLogger;
 import org.apache.ignite.lang.IgniteStringFormatter;
@@ -183,12 +182,6 @@ public class RaftServerImpl implements RaftServer {
         return true;
     }
 
-    /** {@inheritDoc} */
-    @Override
-    public boolean startRaftGroup(String groupId, RaftGroupEventsListener evLsnr, RaftGroupListener lsnr, List<Peer> initialConf) {
-        return startRaftGroup(groupId, lsnr, initialConf);
-    }
-
     /** {@inheritDoc} */
     @Override
     public synchronized boolean stopRaftGroup(String groupId) {
diff --git a/modules/raft/src/test/java/org/apache/ignite/raft/jraft/core/TestCluster.java b/modules/raft/src/test/java/org/apache/ignite/raft/jraft/core/TestCluster.java
index 83b6a2b8b..36fec93f9 100644
--- a/modules/raft/src/test/java/org/apache/ignite/raft/jraft/core/TestCluster.java
+++ b/modules/raft/src/test/java/org/apache/ignite/raft/jraft/core/TestCluster.java
@@ -39,7 +39,6 @@ import java.util.concurrent.locks.ReentrantLock;
 import java.util.function.Consumer;
 import java.util.function.Predicate;
 import java.util.stream.Stream;
-import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.util.IgniteUtils;
 import org.apache.ignite.raft.jraft.util.ExponentialBackoffTimeoutStrategy;
 import org.apache.ignite.lang.IgniteLogger;
@@ -96,8 +95,6 @@ public class TestCluster {
 
     private LinkedHashSet<PeerId> learners;
 
-    private RaftGroupEventsListener raftGrpEvtsLsnr = RaftGroupEventsListener.noopLsnr;
-
     public JRaftServiceFactory getRaftServiceFactory() {
         return this.raftServiceFactory;
     }
@@ -242,8 +239,6 @@ public class TestCluster {
             MockStateMachine fsm = new MockStateMachine(listenAddr);
             nodeOptions.setFsm(fsm);
 
-            nodeOptions.setRaftGrpEvtsLsnr(raftGrpEvtsLsnr);
-
             if (!emptyPeers)
                 nodeOptions.setInitialConf(new Configuration(this.peers, this.learners));
 
@@ -356,14 +351,6 @@ public class TestCluster {
         IgniteUtils.deleteIfExists(path);
     }
 
-    public RaftGroupEventsListener getRaftGrpEvtsLsnr() {
-        return raftGrpEvtsLsnr;
-    }
-
-    public void setRaftGrpEvtsLsnr(RaftGroupEventsListener raftGrpEvtsLsnr) {
-        this.raftGrpEvtsLsnr = raftGrpEvtsLsnr;
-    }
-
     public Node getLeader() {
         this.lock.lock();
         try {
diff --git a/modules/raft/src/test/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessorTest.java b/modules/raft/src/test/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessorTest.java
deleted file mode 100644
index 6d5d2ba91..000000000
--- a/modules/raft/src/test/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessorTest.java
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ignite.raft.jraft.rpc.impl.cli;
-
-import static org.junit.jupiter.api.Assertions.assertEquals;
-import static org.junit.jupiter.api.Assertions.assertNotNull;
-import static org.mockito.ArgumentMatchers.eq;
-
-import java.util.List;
-import org.apache.ignite.raft.jraft.Closure;
-import org.apache.ignite.raft.jraft.JRaftUtils;
-import org.apache.ignite.raft.jraft.Node;
-import org.apache.ignite.raft.jraft.Status;
-import org.apache.ignite.raft.jraft.entity.PeerId;
-import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncRequest;
-import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncResponse;
-import org.mockito.ArgumentCaptor;
-import org.mockito.Mockito;
-
-public class ChangePeersAsyncRequestProcessorTest extends AbstractCliRequestProcessorTest<ChangePeersAsyncRequest>{
-    @Override
-    public ChangePeersAsyncRequest createRequest(String groupId, PeerId peerId) {
-        return msgFactory.changePeersAsyncRequest()
-                .groupId(groupId)
-                .leaderId(peerId.toString())
-                .newPeersList(List.of("localhost:8084", "localhost:8085"))
-                .term(1)
-                .build();
-    }
-
-    @Override
-    public BaseCliRequestProcessor<ChangePeersAsyncRequest> newProcessor() {
-        return new ChangePeersAsyncRequestProcessor(null, msgFactory);
-    }
-
-    @Override
-    public void verify(String interest, Node node, ArgumentCaptor<Closure> doneArg) {
-        assertEquals(ChangePeersAsyncRequest.class.getName(), interest);
-        Mockito.verify(node).changePeersAsync(eq(JRaftUtils.getConfiguration("localhost:8084,localhost:8085")),
-                eq(1L), doneArg.capture());
-        Closure done = doneArg.getValue();
-        assertNotNull(done);
-        done.run(Status.OK());
-        assertNotNull(this.asyncContext.getResponseObject());
-        assertEquals("[localhost:8081, localhost:8082, localhost:8083]", this.asyncContext
-                .as(ChangePeersAsyncResponse.class).oldPeersList().toString());
-        assertEquals("[localhost:8084, localhost:8085]", this.asyncContext.as(ChangePeersAsyncResponse.class)
-                .newPeersList().toString());
-    }
-}
diff --git a/modules/runner/src/integrationTest/java/org/apache/ignite/internal/configuration/storage/ItRebalanceDistributedTest.java b/modules/runner/src/integrationTest/java/org/apache/ignite/internal/configuration/storage/ItRebalanceDistributedTest.java
deleted file mode 100644
index 5f174f3a0..000000000
--- a/modules/runner/src/integrationTest/java/org/apache/ignite/internal/configuration/storage/ItRebalanceDistributedTest.java
+++ /dev/null
@@ -1,544 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ignite.internal.configuration.storage;
-
-import static org.apache.ignite.internal.testframework.IgniteTestUtils.testNodeName;
-import static org.junit.jupiter.api.Assertions.assertEquals;
-
-import java.io.IOException;
-import java.nio.file.Files;
-import java.nio.file.Path;
-import java.nio.file.Paths;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.CountDownLatch;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.locks.LockSupport;
-import java.util.function.Consumer;
-import java.util.function.Function;
-import java.util.stream.Collectors;
-import java.util.stream.IntStream;
-import java.util.stream.Stream;
-import org.apache.ignite.configuration.RootKey;
-import org.apache.ignite.configuration.schemas.clientconnector.ClientConnectorConfiguration;
-import org.apache.ignite.configuration.schemas.network.NetworkConfiguration;
-import org.apache.ignite.configuration.schemas.rest.RestConfiguration;
-import org.apache.ignite.configuration.schemas.store.UnknownDataStorageConfigurationSchema;
-import org.apache.ignite.configuration.schemas.table.HashIndexConfigurationSchema;
-import org.apache.ignite.configuration.schemas.table.TablesConfiguration;
-import org.apache.ignite.internal.baseline.BaselineManager;
-import org.apache.ignite.internal.cluster.management.ClusterManagementGroupManager;
-import org.apache.ignite.internal.cluster.management.raft.ConcurrentMapClusterStateStorage;
-import org.apache.ignite.internal.configuration.ConfigurationManager;
-import org.apache.ignite.internal.configuration.schema.ExtendedTableConfiguration;
-import org.apache.ignite.internal.configuration.schema.ExtendedTableConfigurationSchema;
-import org.apache.ignite.internal.manager.IgniteComponent;
-import org.apache.ignite.internal.metastorage.MetaStorageManager;
-import org.apache.ignite.internal.metastorage.server.SimpleInMemoryKeyValueStorage;
-import org.apache.ignite.internal.pagememory.configuration.schema.UnsafeMemoryAllocatorConfigurationSchema;
-import org.apache.ignite.internal.raft.Loza;
-import org.apache.ignite.internal.raft.server.impl.JraftServerImpl;
-import org.apache.ignite.internal.schema.SchemaManager;
-import org.apache.ignite.internal.schema.configuration.SchemaConfigurationConverter;
-import org.apache.ignite.internal.sql.engine.SqlQueryProcessor;
-import org.apache.ignite.internal.storage.DataStorageManager;
-import org.apache.ignite.internal.storage.DataStorageModules;
-import org.apache.ignite.internal.storage.pagememory.PageMemoryDataStorageModule;
-import org.apache.ignite.internal.storage.pagememory.configuration.schema.PageMemoryDataStorageConfigurationSchema;
-import org.apache.ignite.internal.storage.pagememory.configuration.schema.PageMemoryStorageEngineConfiguration;
-import org.apache.ignite.internal.storage.rocksdb.RocksDbDataStorageModule;
-import org.apache.ignite.internal.storage.rocksdb.configuration.schema.RocksDbDataStorageConfigurationSchema;
-import org.apache.ignite.internal.storage.rocksdb.configuration.schema.RocksDbStorageEngineConfiguration;
-import org.apache.ignite.internal.table.TableImpl;
-import org.apache.ignite.internal.table.distributed.TableManager;
-import org.apache.ignite.internal.table.distributed.TableTxManagerImpl;
-import org.apache.ignite.internal.testframework.WorkDirectory;
-import org.apache.ignite.internal.testframework.WorkDirectoryExtension;
-import org.apache.ignite.internal.tx.LockManager;
-import org.apache.ignite.internal.tx.TxManager;
-import org.apache.ignite.internal.tx.impl.HeapLockManager;
-import org.apache.ignite.internal.util.ByteUtils;
-import org.apache.ignite.internal.vault.VaultManager;
-import org.apache.ignite.internal.vault.persistence.PersistentVaultService;
-import org.apache.ignite.lang.IgniteInternalException;
-import org.apache.ignite.network.ClusterNode;
-import org.apache.ignite.network.ClusterService;
-import org.apache.ignite.network.NetworkAddress;
-import org.apache.ignite.network.StaticNodeFinder;
-import org.apache.ignite.raft.client.Peer;
-import org.apache.ignite.raft.jraft.rpc.RpcRequests;
-import org.apache.ignite.schema.SchemaBuilders;
-import org.apache.ignite.schema.definition.ColumnType;
-import org.apache.ignite.schema.definition.TableDefinition;
-import org.apache.ignite.utils.ClusterServiceTestUtils;
-import org.junit.jupiter.api.AfterEach;
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-import org.junit.jupiter.api.TestInfo;
-import org.junit.jupiter.api.extension.ExtendWith;
-
-/**
- * Test suite for the rebalance process triggered by a change in the number of replicas.
- */
-@ExtendWith(WorkDirectoryExtension.class)
-public class ItRebalanceDistributedTest {
-
-    public static final int BASE_PORT = 20_000;
-
-    public static final String HOST = "localhost";
-
-    private static StaticNodeFinder finder;
-
-    private static List<Node> nodes;
-
-    @BeforeEach
-    private void before(@WorkDirectory Path workDir, TestInfo testInfo) throws Exception {
-        nodes = new ArrayList<>();
-
-        List<NetworkAddress> nodeAddresses = new ArrayList<>();
-
-        for (int i = 0; i < 3; i++) {
-            nodeAddresses.add(new NetworkAddress(HOST, BASE_PORT + i));
-        }
-
-        finder = new StaticNodeFinder(nodeAddresses);
-
-        for (NetworkAddress addr : nodeAddresses) {
-            var node = new Node(testInfo, workDir, addr);
-
-            nodes.add(node);
-
-            node.start();
-        }
-
-        nodes.get(0).cmgManager.initCluster(List.of(nodes.get(0).name), List.of(), "cluster");
-    }
-
-    @AfterEach
-    private void after() throws Exception {
-        for (Node node : nodes) {
-            node.stop();
-        }
-    }
-
-    @Test
-    void testOneRebalance(@WorkDirectory Path workDir, TestInfo testInfo) throws Exception {
-        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
-                SchemaBuilders.column("key", ColumnType.INT64).build(),
-                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
-        ).withPrimaryKey("key").build();
-
-        nodes.get(0).tableManager.createTable(
-                "PUBLIC.tbl1",
-                tblChanger -> SchemaConfigurationConverter.convert(schTbl1, tblChanger)
-                        .changeReplicas(1)
-                        .changePartitions(1));
-
-        assertEquals(1, nodes.get(0).clusterCfgMgr.configurationRegistry().getConfiguration(TablesConfiguration.KEY)
-                .tables().get("PUBLIC.TBL1").replicas().value());
-
-        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(2));
-
-        waitPartitionAssignmentsSyncedToExpected(0, 2);
-
-        assertEquals(2, getPartitionClusterNodes(0, 0).size());
-        assertEquals(2, getPartitionClusterNodes(1, 0).size());
-        assertEquals(2, getPartitionClusterNodes(2, 0).size());
-    }
-
-    @Test
-    void testTwoQueuedRebalances(@WorkDirectory Path workDir, TestInfo testInfo) {
-        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
-                SchemaBuilders.column("key", ColumnType.INT64).build(),
-                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
-        ).withPrimaryKey("key").build();
-
-        nodes.get(0).tableManager.createTable(
-                "PUBLIC.tbl1",
-                tblChanger -> SchemaConfigurationConverter.convert(schTbl1, tblChanger)
-                        .changeReplicas(1)
-                        .changePartitions(1));
-
-        assertEquals(1, nodes.get(0).clusterCfgMgr.configurationRegistry().getConfiguration(TablesConfiguration.KEY).tables()
-                .get("PUBLIC.TBL1").replicas().value());
-
-        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(2));
-        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(3));
-
-        waitPartitionAssignmentsSyncedToExpected(0, 3);
-
-        assertEquals(3, getPartitionClusterNodes(0, 0).size());
-        assertEquals(3, getPartitionClusterNodes(1, 0).size());
-        assertEquals(3, getPartitionClusterNodes(2, 0).size());
-    }
-
-    @Test
-    void testThreeQueuedRebalances(@WorkDirectory Path workDir, TestInfo testInfo) throws Exception {
-        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
-                SchemaBuilders.column("key", ColumnType.INT64).build(),
-                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
-        ).withPrimaryKey("key").build();
-
-        nodes.get(0).tableManager.createTable(
-                "PUBLIC.tbl1",
-                tblChanger -> SchemaConfigurationConverter.convert(schTbl1, tblChanger)
-                        .changeReplicas(1)
-                        .changePartitions(1));
-
-        assertEquals(1, nodes.get(0).clusterCfgMgr.configurationRegistry().getConfiguration(TablesConfiguration.KEY).tables()
-                .get("PUBLIC.TBL1").replicas().value());
-
-        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(2));
-        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(3));
-        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(2));
-
-        waitPartitionAssignmentsSyncedToExpected(0, 2);
-
-        assertEquals(2, getPartitionClusterNodes(0, 0).size());
-        assertEquals(2, getPartitionClusterNodes(1, 0).size());
-        assertEquals(2, getPartitionClusterNodes(2, 0).size());
-    }
-
-    @Test
-    void testOnLeaderElectedRebalanceRestart(@WorkDirectory Path workDir, TestInfo testInfo) throws Exception {
-        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
-                SchemaBuilders.column("key", ColumnType.INT64).build(),
-                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
-        ).withPrimaryKey("key").build();
-
-        var table = (TableImpl) nodes.get(1).tableManager.createTable(
-                "PUBLIC.tbl1",
-                tblChanger -> SchemaConfigurationConverter.convert(schTbl1, tblChanger)
-                        .changeReplicas(2)
-                        .changePartitions(1));
-
-        Set<NetworkAddress> partitionNodesAddresses = getPartitionClusterNodes(0, 0)
-                .stream().map(ClusterNode::address).collect(Collectors.toSet());
-
-        Node newNode = nodes.stream().filter(n -> !partitionNodesAddresses.contains(n.address())).findFirst().get();
-
-        Node leaderNode = findNodeByAddress(table.leaderAssignment(0).address());
-
-        NetworkAddress nonLeaderNodeAddress = partitionNodesAddresses
-                .stream().filter(n -> !n.equals(leaderNode.address())).findFirst().get();
-
-        TableImpl nonLeaderTable = (TableImpl) findNodeByAddress(nonLeaderNodeAddress).tableManager.table("PUBLIC.TBL1");
-
-        var countDownLatch = new CountDownLatch(1);
-
-        String raftGroupNodeName = leaderNode.raftManager.server().startedGroups()
-                .stream().filter(grp -> grp.contains("part")).findFirst().get();
-
-        ((JraftServerImpl) leaderNode.raftManager.server()).blockMessages(
-                raftGroupNodeName, (msg, node) -> {
-                    if (node.equals(String.valueOf(newNode.address().toString())) && msg instanceof RpcRequests.PingRequest) {
-                        countDownLatch.countDown();
-
-                        return true;
-                    }
-                    return false;
-                });
-
-        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(3));
-
-        countDownLatch.await();
-
-        nonLeaderTable.internalTable().partitionRaftGroupService(0).transferLeadership(new Peer(nonLeaderNodeAddress)).get();
-
-        ((JraftServerImpl) leaderNode.raftManager.server()).stopBlockMessages(raftGroupNodeName);
-
-        waitPartitionAssignmentsSyncedToExpected(0, 3);
-
-        assertEquals(3, getPartitionClusterNodes(0, 0).size());
-        assertEquals(3, getPartitionClusterNodes(1, 0).size());
-        assertEquals(3, getPartitionClusterNodes(2, 0).size());
-    }
-
-    @Test
-    void testRebalanceRetryWhenCatchupFailed(@WorkDirectory Path workDir, TestInfo testInfo) throws Exception {
-        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
-                SchemaBuilders.column("key", ColumnType.INT64).build(),
-                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
-        ).withPrimaryKey("key").build();
-
-        nodes.get(0).tableManager.createTable(
-                "PUBLIC.tbl1",
-                tblChanger -> SchemaConfigurationConverter.convert(schTbl1, tblChanger)
-                        .changeReplicas(1)
-                        .changePartitions(1));
-
-        assertEquals(1, nodes.get(0).clusterCfgMgr.configurationRegistry().getConfiguration(TablesConfiguration.KEY)
-                .tables().get("PUBLIC.TBL1").replicas().value());
-
-        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(1));
-
-        waitPartitionAssignmentsSyncedToExpected(0, 1);
-
-        JraftServerImpl raftServer = (JraftServerImpl) nodes.stream()
-                .filter(n -> n.raftManager.startedGroups().stream().anyMatch(grp -> grp.contains("_part_"))).findFirst()
-                .get().raftManager.server();
-
-        AtomicInteger counter = new AtomicInteger(0);
-
-        String partGrpId = raftServer.startedGroups().stream().filter(grp -> grp.contains("_part_")).findFirst().get();
-
-        raftServer.blockMessages(partGrpId, (msg, node) -> {
-            if (msg instanceof RpcRequests.PingRequest) {
-                // We block ping requests to prevent the replicator from starting, hence catch-up fails and the rebalance fails.
-                assertEquals(1, getPartitionClusterNodes(0, 0).size());
-                assertEquals(1, getPartitionClusterNodes(1, 0).size());
-                assertEquals(1, getPartitionClusterNodes(2, 0).size());
-                return counter.incrementAndGet() <= 5;
-            }
-            return false;
-        });
-
-        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(3));
-
-        waitPartitionAssignmentsSyncedToExpected(0, 3);
-
-        assertEquals(3, getPartitionClusterNodes(0, 0).size());
-        assertEquals(3, getPartitionClusterNodes(1, 0).size());
-        assertEquals(3, getPartitionClusterNodes(2, 0).size());
-    }
-
-    private void waitPartitionAssignmentsSyncedToExpected(int partNum, int replicasNum) {
-        while (!IntStream.range(0, nodes.size()).allMatch(n -> getPartitionClusterNodes(n, partNum).size() == replicasNum)) {
-            LockSupport.parkNanos(100_000_000);
-        }
-    }
-
-    private Node findNodeByAddress(NetworkAddress addr) {
-        return nodes.stream().filter(n -> n.address().equals(addr)).findFirst().get();
-    }
-
-    private List<ClusterNode> getPartitionClusterNodes(int nodeNum, int partNum) {
-        var table = ((ExtendedTableConfiguration) nodes.get(nodeNum).clusterCfgMgr.configurationRegistry()
-                .getConfiguration(TablesConfiguration.KEY).tables().get("PUBLIC.TBL1"));
-
-        if (table != null) {
-            var assignments = table.assignments().value();
-
-            if (assignments != null) {
-                return ((List<List<ClusterNode>>) ByteUtils.fromBytes(assignments)).get(partNum);
-            }
-        }
-
-        return List.of();
-    }
-
-    private static class Node {
-        private final String name;
-
-        private final VaultManager vaultManager;
-
-        private final ClusterService clusterService;
-
-        private final LockManager lockManager;
-
-        private final TxManager txManager;
-
-        private final Loza raftManager;
-
-        private final MetaStorageManager metaStorageManager;
-
-        private final DistributedConfigurationStorage cfgStorage;
-
-        private final DataStorageManager dataStorageMgr;
-
-        private final TableManager tableManager;
-
-        private final BaselineManager baselineMgr;
-
-        private final ConfigurationManager nodeCfgMgr;
-
-        private final ConfigurationManager clusterCfgMgr;
-
-        private final ClusterManagementGroupManager cmgManager;
-
-        private final SchemaManager schemaManager;
-
-        private final SqlQueryProcessor sqlQueryProcessor;
-
-        /**
-         * Constructor that simply creates a subset of components of this node.
-         */
-        Node(TestInfo testInfo, Path workDir, NetworkAddress addr) {
-
-            name = testNodeName(testInfo, addr.port());
-
-            Path dir = workDir.resolve(name);
-
-            vaultManager = createVault(dir);
-
-            nodeCfgMgr = new ConfigurationManager(
-                    List.of(NetworkConfiguration.KEY,
-                            RestConfiguration.KEY,
-                            ClientConnectorConfiguration.KEY),
-                    Map.of(),
-                    new LocalConfigurationStorage(vaultManager),
-                    List.of(),
-                    List.of()
-            );
-
-            clusterService = ClusterServiceTestUtils.clusterService(
-                    testInfo,
-                    addr.port(),
-                    finder
-            );
-
-            lockManager = new HeapLockManager();
-
-            raftManager = new Loza(clusterService, dir);
-
-            txManager = new TableTxManagerImpl(clusterService, lockManager);
-
-            List<RootKey<?, ?>> rootKeys = List.of(
-                    TablesConfiguration.KEY);
-
-            cmgManager = new ClusterManagementGroupManager(
-                    vaultManager,
-                    clusterService,
-                    raftManager,
-                    new ConcurrentMapClusterStateStorage()
-            );
-
-            metaStorageManager = new MetaStorageManager(
-                    vaultManager,
-                    clusterService,
-                    cmgManager,
-                    raftManager,
-                    new SimpleInMemoryKeyValueStorage()
-            );
-
-            cfgStorage = new DistributedConfigurationStorage(metaStorageManager, vaultManager);
-
-            clusterCfgMgr = new ConfigurationManager(
-                    List.of(RocksDbStorageEngineConfiguration.KEY,
-                            PageMemoryStorageEngineConfiguration.KEY,
-                            TablesConfiguration.KEY),
-                    Map.of(),
-                    cfgStorage,
-                    List.of(ExtendedTableConfigurationSchema.class),
-                    List.of(UnknownDataStorageConfigurationSchema.class,
-                            PageMemoryDataStorageConfigurationSchema.class,
-                            UnsafeMemoryAllocatorConfigurationSchema.class,
-                            RocksDbDataStorageConfigurationSchema.class,
-                            HashIndexConfigurationSchema.class)
-            );
-
-            Consumer<Function<Long, CompletableFuture<?>>> registry = (Function<Long, CompletableFuture<?>> function) -> {
-                clusterCfgMgr.configurationRegistry().listenUpdateStorageRevision(
-                        newStorageRevision -> function.apply(newStorageRevision));
-            };
-
-            TablesConfiguration tablesCfg = clusterCfgMgr.configurationRegistry().getConfiguration(TablesConfiguration.KEY);
-
-            DataStorageModules dataStorageModules = new DataStorageModules(List.of(
-                    new RocksDbDataStorageModule(), new PageMemoryDataStorageModule()));
-
-            dataStorageMgr = new DataStorageManager(
-                    tablesCfg,
-                    dataStorageModules.createStorageEngines(
-                            name,
-                            clusterCfgMgr.configurationRegistry(),
-                            dir.resolve("storage"),
-                            null));
-
-            baselineMgr = new BaselineManager(
-                    clusterCfgMgr,
-                    metaStorageManager,
-                    clusterService);
-
-            schemaManager = new SchemaManager(registry, tablesCfg);
-
-            tableManager = new TableManager(
-                    registry,
-                    tablesCfg,
-                    raftManager,
-                    baselineMgr,
-                    clusterService.topologyService(),
-                    txManager,
-                    dataStorageMgr,
-                    metaStorageManager,
-                    schemaManager);
-
-            //TODO: Get rid of it after IGNITE-17062.
-            sqlQueryProcessor = new SqlQueryProcessor(registry, clusterService, tableManager, dataStorageMgr, Map::of);
-        }
-
-        /**
-         * Starts the created components.
-         */
-        void start() throws Exception {
-            vaultManager.start();
-
-            nodeCfgMgr.start();
-
-            Stream.of(clusterService, clusterCfgMgr, dataStorageMgr, raftManager, txManager, cmgManager,
-                    metaStorageManager, baselineMgr, schemaManager, tableManager, sqlQueryProcessor).forEach(IgniteComponent::start);
-
-            CompletableFuture.allOf(
-                    nodeCfgMgr.configurationRegistry().notifyCurrentConfigurationListeners(),
-                    clusterCfgMgr.configurationRegistry().notifyCurrentConfigurationListeners()
-            ).get();
-
-            // deploy watches to propagate data from the metastore into the vault
-            metaStorageManager.deployWatches();
-        }
-
-        /**
-         * Stops the created components.
-         */
-        void stop() throws Exception {
-            var components =
-                    List.of(sqlQueryProcessor, tableManager, schemaManager, baselineMgr, metaStorageManager, cmgManager, dataStorageMgr,
-                            raftManager, txManager, clusterCfgMgr, clusterService, nodeCfgMgr, vaultManager);
-
-            for (IgniteComponent igniteComponent : components) {
-                igniteComponent.beforeNodeStop();
-            }
-
-            for (IgniteComponent component : components) {
-                component.stop();
-            }
-        }
-
-        NetworkAddress address() {
-            return clusterService.topologyService().localMember().address();
-        }
-    }
-
-    /**
-     * Starts the Vault component.
-     */
-    private static VaultManager createVault(Path workDir) {
-        Path vaultPath = workDir.resolve(Paths.get("vault"));
-
-        try {
-            Files.createDirectories(vaultPath);
-        } catch (IOException e) {
-            throw new IgniteInternalException(e);
-        }
-
-        return new VaultManager(new PersistentVaultService(vaultPath));
-    }
-}
diff --git a/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItBaselineChangesTest.java b/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItBaselineChangesTest.java
new file mode 100644
index 000000000..96d891396
--- /dev/null
+++ b/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItBaselineChangesTest.java
@@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.runner.app;
+
+import static java.util.stream.Collectors.toList;
+import static org.apache.ignite.internal.testframework.IgniteTestUtils.testNodeName;
+import static org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.willCompleteSuccessfully;
+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+import java.util.stream.IntStream;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgnitionManager;
+import org.apache.ignite.internal.schema.configuration.SchemaConfigurationConverter;
+import org.apache.ignite.internal.testframework.WorkDirectory;
+import org.apache.ignite.internal.testframework.WorkDirectoryExtension;
+import org.apache.ignite.internal.util.IgniteUtils;
+import org.apache.ignite.schema.SchemaBuilders;
+import org.apache.ignite.schema.definition.ColumnType;
+import org.apache.ignite.schema.definition.TableDefinition;
+import org.apache.ignite.table.RecordView;
+import org.apache.ignite.table.Table;
+import org.apache.ignite.table.Tuple;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.TestInfo;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+/**
+ * Test for baseline changes.
+ */
+@ExtendWith(WorkDirectoryExtension.class)
+public class ItBaselineChangesTest {
+    private static final int NUM_NODES = 3;
+
+    /** Start network port for test nodes. */
+    private static final int BASE_PORT = 3344;
+
+    private final List<String> clusterNodeNames = new ArrayList<>();
+
+    private final List<Ignite> clusterNodes = new ArrayList<>();
+
+    @WorkDirectory
+    private Path workDir;
+
+    /**
+     * Before each.
+     */
+    @BeforeEach
+    void setUp(TestInfo testInfo) {
+        List<CompletableFuture<Ignite>> futures = IntStream.range(0, NUM_NODES)
+                .mapToObj(i -> startNodeAsync(testInfo, i))
+                .collect(toList());
+
+        String metaStorageNode = testNodeName(testInfo, BASE_PORT);
+
+        IgnitionManager.init(metaStorageNode, List.of(metaStorageNode), "cluster");
+
+        for (CompletableFuture<Ignite> future : futures) {
+            assertThat(future, willCompleteSuccessfully());
+
+            clusterNodes.add(future.join());
+        }
+    }
+
+    /**
+     * After each.
+     */
+    @AfterEach
+    void tearDown() throws Exception {
+        List<AutoCloseable> closeables = clusterNodeNames.stream()
+                .map(name -> (AutoCloseable) () -> IgnitionManager.stop(name))
+                .collect(toList());
+
+        IgniteUtils.closeAll(closeables);
+    }
+
+    /**
+     * Checks that data stays available after the baseline is extended with newly started nodes.
+     */
+    @Test
+    void testBaselineExtending(TestInfo testInfo) {
+        assertEquals(NUM_NODES, clusterNodes.size());
+
+        // Create table on node 0.
+        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
+                SchemaBuilders.column("key", ColumnType.INT64).build(),
+                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
+        ).withPrimaryKey("key").build();
+
+        clusterNodes.get(0).tables().createTable(schTbl1.canonicalName(), tblCh ->
+                SchemaConfigurationConverter.convert(schTbl1, tblCh)
+                        .changeReplicas(5)
+                        .changePartitions(1)
+        );
+
+        // Put data on node 1.
+        Table tbl1 = clusterNodes.get(1).tables().table(schTbl1.canonicalName());
+        RecordView<Tuple> recView1 = tbl1.recordView();
+
+        recView1.insert(null, Tuple.create().set("key", 1L).set("val", 111));
+
+        Ignite metaStoreNode = clusterNodes.get(0);
+
+        // Start 2 new nodes after the table has been created and populated.
+        Ignite node3 = startNode(testInfo);
+
+        Ignite node4 = startNode(testInfo);
+
+        // Update the baseline to the first (metastorage) node plus the two newly started nodes.
+        metaStoreNode.setBaseline(Set.of(metaStoreNode.name(), node3.name(), node4.name()));
+
+        IgnitionManager.stop(clusterNodes.get(1).name());
+        IgnitionManager.stop(clusterNodes.get(2).name());
+
+        Table tbl4 = node4.tables().table(schTbl1.canonicalName());
+
+        Tuple keyTuple1 = Tuple.create().set("key", 1L);
+
+        assertEquals(1, (Long) tbl4.recordView().get(null, keyTuple1).value("key"));
+    }
+
+    private static String buildConfig(int nodeIdx) {
+        return "{\n"
+                + "  network: {\n"
+                + "    port: " + (BASE_PORT + nodeIdx) + ",\n"
+                + "    nodeFinder: {\n"
+                + "      netClusterNodes: [ \"localhost:3344\", \"localhost:3345\", \"localhost:3346\" ] \n"
+                + "    }\n"
+                + "  }\n"
+                + "}";
+    }
+
+    private Ignite startNode(TestInfo testInfo) {
+        CompletableFuture<Ignite> future = startNodeAsync(testInfo, clusterNodes.size());
+
+        assertThat(future, willCompleteSuccessfully());
+
+        Ignite ignite = future.join();
+
+        clusterNodes.add(ignite);
+
+        return ignite;
+    }
+
+    private CompletableFuture<Ignite> startNodeAsync(TestInfo testInfo, int index) {
+        String nodeName = testNodeName(testInfo, BASE_PORT + index);
+
+        clusterNodeNames.add(nodeName);
+
+        return IgnitionManager.start(nodeName, buildConfig(index), workDir.resolve(nodeName));
+    }
+}
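
Note on the setUp() flow above: the node start futures are created first and are awaited only after IgnitionManager.init() designates the metastorage node. A minimal sketch of that ordering for a single node (the node name is hypothetical; illustration only, based on the calls shown in the test):

    // Start the node asynchronously; the future is awaited only after cluster init.
    CompletableFuture<Ignite> fut =
            IgnitionManager.start("node-3344", buildConfig(0), workDir.resolve("node-3344"));

    // Initialize the cluster, naming the metastorage node.
    IgnitionManager.init("node-3344", List.of("node-3344"), "cluster");

    Ignite node = fut.join();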
diff --git a/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItIgniteNodeRestartTest.java b/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItIgniteNodeRestartTest.java
index c6190e5b9..8f15f4fd0 100644
--- a/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItIgniteNodeRestartTest.java
+++ b/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItIgniteNodeRestartTest.java
@@ -278,7 +278,6 @@ public class ItIgniteNodeRestartTest extends IgniteAbstractTest {
                 clusterSvc.topologyService(),
                 txManager,
                 dataStorageManager,
-                metaStorageMgr,
                 schemaManager
         );
 
diff --git a/modules/runner/src/main/java/org/apache/ignite/internal/app/IgniteImpl.java b/modules/runner/src/main/java/org/apache/ignite/internal/app/IgniteImpl.java
index 80d367c3d..04b33842f 100644
--- a/modules/runner/src/main/java/org/apache/ignite/internal/app/IgniteImpl.java
+++ b/modules/runner/src/main/java/org/apache/ignite/internal/app/IgniteImpl.java
@@ -24,6 +24,7 @@ import java.nio.file.Paths;
 import java.util.Collection;
 import java.util.List;
 import java.util.ServiceLoader;
+import java.util.Set;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.CompletionException;
 import java.util.function.Consumer;
@@ -316,7 +317,6 @@ public class IgniteImpl implements Ignite {
                 clusterSvc.topologyService(),
                 txManager,
                 dataStorageMgr,
-                metaStorageMgr,
                 schemaManager
         );
 
@@ -544,6 +544,16 @@ public class IgniteImpl implements Ignite {
         return name;
     }
 
+    /** {@inheritDoc} */
+    @Override
+    public void setBaseline(Set<String> baselineNodes) {
+        try {
+            distributedTblMgr.setBaseline(baselineNodes);
+        } catch (NodeStoppingException e) {
+            throw new IgniteException(e);
+        }
+    }
+
     /** {@inheritDoc} */
     @Override
     public IgniteCompute compute() {
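
The setBaseline override restored above delegates to TableManager and rethrows NodeStoppingException as an unchecked IgniteException. A minimal caller-side sketch (node names are hypothetical; illustration only):

    void extendBaseline(Ignite node, Set<String> newBaselineNodeNames) {
        // Names must belong to nodes currently present in the physical topology;
        // otherwise TableManager rejects the request (see setBaselineInternal below).
        node.setBaseline(newBaselineNodeNames);
    }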
diff --git a/modules/sql-engine/src/test/java/org/apache/ignite/internal/sql/engine/exec/MockedStructuresTest.java b/modules/sql-engine/src/test/java/org/apache/ignite/internal/sql/engine/exec/MockedStructuresTest.java
index 5cab8e634..79195cf84 100644
--- a/modules/sql-engine/src/test/java/org/apache/ignite/internal/sql/engine/exec/MockedStructuresTest.java
+++ b/modules/sql-engine/src/test/java/org/apache/ignite/internal/sql/engine/exec/MockedStructuresTest.java
@@ -56,7 +56,6 @@ import org.apache.ignite.internal.configuration.schema.ExtendedTableConfiguratio
 import org.apache.ignite.internal.configuration.testframework.ConfigurationExtension;
 import org.apache.ignite.internal.configuration.testframework.InjectConfiguration;
 import org.apache.ignite.internal.configuration.testframework.InjectRevisionListenerHolder;
-import org.apache.ignite.internal.metastorage.MetaStorageManager;
 import org.apache.ignite.internal.raft.Loza;
 import org.apache.ignite.internal.schema.SchemaDescriptor;
 import org.apache.ignite.internal.schema.SchemaManager;
@@ -76,7 +75,6 @@ import org.apache.ignite.internal.storage.rocksdb.configuration.schema.RocksDbSt
 import org.apache.ignite.internal.table.distributed.TableManager;
 import org.apache.ignite.internal.testframework.IgniteAbstractTest;
 import org.apache.ignite.internal.tx.TxManager;
-import org.apache.ignite.lang.ByteArray;
 import org.apache.ignite.lang.ColumnAlreadyExistsException;
 import org.apache.ignite.lang.ColumnNotFoundException;
 import org.apache.ignite.lang.IgniteException;
@@ -132,10 +130,6 @@ public class MockedStructuresTest extends IgniteAbstractTest {
     @Mock(lenient = true)
     private TxManager tm;
 
-    /** Meta storage manager. */
-    @Mock
-    MetaStorageManager msm;
-
     /**
      * Revision listener holder. It uses for the test configurations:
      * <ul>
@@ -634,7 +628,7 @@ public class MockedStructuresTest extends IgniteAbstractTest {
             return completedFuture(raftGrpSrvcMock);
         });
 
-        when(rm.updateRaftGroup(any(), any(), any(), any(), any())).thenAnswer(mock -> {
+        when(rm.updateRaftGroup(any(), any(), any(), any())).thenAnswer(mock -> {
             RaftGroupService raftGrpSrvcMock = mock(RaftGroupService.class);
 
             when(raftGrpSrvcMock.leader()).thenReturn(new Peer(new NetworkAddress("localhost", 47500)));
@@ -675,8 +669,6 @@ public class MockedStructuresTest extends IgniteAbstractTest {
             return ret;
         });
 
-        when(msm.registerWatch(any(ByteArray.class), any())).thenReturn(CompletableFuture.completedFuture(1L));
-
         TableManager tableManager = createTableManager();
 
         return tableManager;
@@ -693,7 +685,6 @@ public class MockedStructuresTest extends IgniteAbstractTest {
                 ts,
                 tm,
                 dataStorageManager,
-                msm,
                 sm
         );
 
diff --git a/modules/table/src/main/java/org/apache/ignite/internal/table/InternalTable.java b/modules/table/src/main/java/org/apache/ignite/internal/table/InternalTable.java
index 10cc05936..8f11b8c03 100644
--- a/modules/table/src/main/java/org/apache/ignite/internal/table/InternalTable.java
+++ b/modules/table/src/main/java/org/apache/ignite/internal/table/InternalTable.java
@@ -28,7 +28,6 @@ import org.apache.ignite.internal.storage.engine.TableStorage;
 import org.apache.ignite.internal.tx.InternalTransaction;
 import org.apache.ignite.internal.tx.LockException;
 import org.apache.ignite.network.ClusterNode;
-import org.apache.ignite.raft.client.service.RaftGroupService;
 import org.jetbrains.annotations.NotNull;
 import org.jetbrains.annotations.Nullable;
 
@@ -239,14 +238,5 @@ public interface InternalTable extends AutoCloseable {
      */
     ClusterNode leaderAssignment(int partition);
 
-    /**
-     * Returns raft group client for corresponding partition.
-     *
-     * @param partition partition number
-     * @return raft group client for corresponding partition
-     * @throws org.apache.ignite.lang.IgniteInternalException if partition can't be found.
-     */
-    RaftGroupService partitionRaftGroupService(int partition);
-
     //TODO: IGNITE-14488. Add invoke() methods.
 }
diff --git a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
index eb8522c31..b7d57e846 100644
--- a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
+++ b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
@@ -22,35 +22,26 @@ import static java.util.concurrent.CompletableFuture.completedFuture;
 import static java.util.concurrent.CompletableFuture.failedFuture;
 import static org.apache.ignite.internal.configuration.util.ConfigurationUtil.getByInternalId;
 import static org.apache.ignite.internal.schema.SchemaManager.INITIAL_SCHEMA_VERSION;
-import static org.apache.ignite.internal.util.IgniteUtils.shutdownAndAwaitTermination;
-import static org.apache.ignite.internal.utils.RebalanceUtil.PENDING_ASSIGNMENTS_PREFIX;
-import static org.apache.ignite.internal.utils.RebalanceUtil.STABLE_ASSIGNMENTS_PREFIX;
-import static org.apache.ignite.internal.utils.RebalanceUtil.extractPartitionNumber;
-import static org.apache.ignite.internal.utils.RebalanceUtil.extractTableId;
-import static org.apache.ignite.internal.utils.RebalanceUtil.pendingPartAssignmentsKey;
-import static org.apache.ignite.internal.utils.RebalanceUtil.stablePartAssignmentsKey;
-import static org.apache.ignite.internal.utils.RebalanceUtil.updatePendingAssignmentsKeys;
 
 import it.unimi.dsi.fastutil.ints.Int2ObjectOpenHashMap;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 import java.util.NoSuchElementException;
+import java.util.Set;
 import java.util.UUID;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.CompletionException;
 import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.ScheduledThreadPoolExecutor;
-import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.function.Consumer;
 import java.util.function.Function;
 import java.util.function.Supplier;
 import java.util.stream.Collectors;
-import java.util.stream.Stream;
+import org.apache.ignite.Ignite;
 import org.apache.ignite.configuration.ConfigurationChangeException;
 import org.apache.ignite.configuration.ConfigurationProperty;
 import org.apache.ignite.configuration.NamedListView;
@@ -71,12 +62,7 @@ import org.apache.ignite.internal.configuration.util.ConfigurationUtil;
 import org.apache.ignite.internal.manager.EventListener;
 import org.apache.ignite.internal.manager.IgniteComponent;
 import org.apache.ignite.internal.manager.Producer;
-import org.apache.ignite.internal.metastorage.MetaStorageManager;
-import org.apache.ignite.internal.metastorage.client.Entry;
-import org.apache.ignite.internal.metastorage.client.WatchEvent;
-import org.apache.ignite.internal.metastorage.client.WatchListener;
 import org.apache.ignite.internal.raft.Loza;
-import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.schema.SchemaDescriptor;
 import org.apache.ignite.internal.schema.SchemaManager;
 import org.apache.ignite.internal.schema.SchemaUtils;
@@ -89,20 +75,15 @@ import org.apache.ignite.internal.table.IgniteTablesInternal;
 import org.apache.ignite.internal.table.InternalTable;
 import org.apache.ignite.internal.table.TableImpl;
 import org.apache.ignite.internal.table.distributed.raft.PartitionListener;
-import org.apache.ignite.internal.table.distributed.raft.RebalanceRaftGroupEventsListener;
 import org.apache.ignite.internal.table.distributed.storage.InternalTableImpl;
 import org.apache.ignite.internal.table.distributed.storage.VersionedRowStore;
 import org.apache.ignite.internal.table.event.TableEvent;
 import org.apache.ignite.internal.table.event.TableEventParameters;
-import org.apache.ignite.internal.thread.NamedThreadFactory;
 import org.apache.ignite.internal.tx.TxManager;
 import org.apache.ignite.internal.util.ByteUtils;
 import org.apache.ignite.internal.util.IgniteObjectName;
 import org.apache.ignite.internal.util.IgniteSpinBusyLock;
-import org.apache.ignite.lang.ByteArray;
-import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.lang.IgniteException;
-import org.apache.ignite.lang.IgniteInternalException;
 import org.apache.ignite.lang.IgniteLogger;
 import org.apache.ignite.lang.IgniteStringFormatter;
 import org.apache.ignite.lang.IgniteSystemProperties;
@@ -112,10 +93,6 @@ import org.apache.ignite.lang.TableNotFoundException;
 import org.apache.ignite.network.ClusterNode;
 import org.apache.ignite.network.NetworkAddress;
 import org.apache.ignite.network.TopologyService;
-import org.apache.ignite.raft.client.Peer;
-import org.apache.ignite.raft.client.service.RaftGroupListener;
-import org.apache.ignite.raft.client.service.RaftGroupService;
-import org.apache.ignite.raft.jraft.util.Utils;
 import org.apache.ignite.table.Table;
 import org.apache.ignite.table.manager.IgniteTables;
 import org.jetbrains.annotations.NotNull;
@@ -150,9 +127,6 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
     /** Transaction manager. */
     private final TxManager txManager;
 
-    /** Meta storage manager. */
-    private final MetaStorageManager metaStorageMgr;
-
     /** Data storage manager. */
     private final DataStorageManager dataStorageMgr;
 
@@ -177,12 +151,6 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
     /** Schema manager. */
     private final SchemaManager schemaManager;
 
-    /** Executor for scheduling retries of a rebalance. */
-    private final ScheduledExecutorService rebalanceScheduler;
-
-    /** Rebalance scheduler pool size. */
-    private static final int REBALANCE_SCHEDULER_POOL_SIZE = Math.min(Utils.cpus() * 3, 20);
-
     /**
      * Creates a new table manager.
      *
@@ -202,7 +170,6 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
             TopologyService topologyService,
             TxManager txManager,
             DataStorageManager dataStorageMgr,
-            MetaStorageManager metaStorageMgr,
             SchemaManager schemaManager
     ) {
         this.tablesCfg = tablesCfg;
@@ -210,7 +177,6 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
         this.baselineMgr = baselineMgr;
         this.txManager = txManager;
         this.dataStorageMgr = dataStorageMgr;
-        this.metaStorageMgr = metaStorageMgr;
         this.schemaManager = schemaManager;
 
         netAddrResolver = addr -> {
@@ -225,19 +191,14 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
         clusterNodeResolver = topologyService::getByAddress;
 
         tablesByIdVv = new VersionedValue<>(null, HashMap::new);
-
-        rebalanceScheduler = new ScheduledThreadPoolExecutor(REBALANCE_SCHEDULER_POOL_SIZE,
-                new NamedThreadFactory("rebalance-scheduler"));
     }
 
     /** {@inheritDoc} */
     @Override
     public void start() {
-        tablesCfg.tables().any().replicas().listen(this::onUpdateReplicas);
-
-        registerRebalanceListeners();
-
-        ((ExtendedTableConfiguration) tablesCfg.tables().any()).assignments().listen(this::onUpdateAssignments);
+        ((ExtendedTableConfiguration) tablesCfg.tables().any()).assignments().listen(assignmentsCtx -> {
+            return onUpdateAssignments(assignmentsCtx);
+        });
 
         tablesCfg.tables().listenElements(new ConfigurationNamedListListener<>() {
             @Override
@@ -349,45 +310,6 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
         return CompletableFuture.completedFuture(null);
     }
 
-    /**
-     * Listener of replicas configuration changes.
-     *
-     * @param replicasCtx Replicas configuration event context.
-     * @return A future, which will be completed, when event processed by listener.
-     */
-    private CompletableFuture<?> onUpdateReplicas(ConfigurationNotificationEvent<Integer> replicasCtx) {
-        if (!busyLock.enterBusy()) {
-            return CompletableFuture.completedFuture(new NodeStoppingException());
-        }
-
-        try {
-            if (replicasCtx.oldValue() != null && replicasCtx.oldValue() > 0) {
-                TableConfiguration tblCfg = replicasCtx.config(TableConfiguration.class);
-
-                int partCnt = tblCfg.partitions().value();
-
-                int newReplicas = replicasCtx.newValue();
-
-                CompletableFuture<?>[] futures = new CompletableFuture<?>[partCnt];
-
-                for (int i = 0; i < partCnt; i++) {
-                    String partId = partitionRaftGroupName(((ExtendedTableConfiguration) tblCfg).id().value(), i);
-
-                    futures[i] = updatePendingAssignmentsKeys(
-                            partId, baselineMgr.nodes(),
-                            partCnt, newReplicas,
-                            replicasCtx.storageRevision(), metaStorageMgr, i);
-                }
-
-                return CompletableFuture.allOf(futures);
-            } else {
-                return CompletableFuture.completedFuture(null);
-            }
-        } finally {
-            busyLock.leaveBusy();
-        }
-    }
-
     /**
      * Listener of assignment configuration changes.
      *
@@ -399,7 +321,6 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
             return failedFuture(new NodeStoppingException());
         }
 
-
         try {
             updateAssignmentInternal(assignmentsCtx);
         } finally {
@@ -440,10 +361,14 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
         for (int i = 0; i < partitions; i++) {
             int partId = i;
 
-            List<ClusterNode> oldPartAssignment = oldAssignments == null ? Collections.emptyList() :
+            List<ClusterNode> oldPartitionAssignment = oldAssignments == null ? Collections.emptyList() :
                     oldAssignments.get(partId);
 
-            List<ClusterNode> newPartAssignment = newAssignments.get(partId);
+            List<ClusterNode> newPartitionAssignment = newAssignments.get(partId);
+
+            var toAdd = new HashSet<>(newPartitionAssignment);
+
+            toAdd.removeAll(oldPartitionAssignment);
 
             // Create new raft nodes according to new assignments.
             tablesByIdVv.update(causalityToken, (tablesById, e) -> {
@@ -451,27 +376,18 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
                     return failedFuture(e);
                 }
 
-                InternalTable internalTbl = tablesById.get(tblId).internalTable();
+                InternalTable internalTable = tablesById.get(tblId).internalTable();
 
                 try {
                     futures[partId] = raftMgr.updateRaftGroup(
-                            partitionRaftGroupName(tblId, partId),
-                            newPartAssignment,
-                            // start new nodes, only if it is table creation
-                            // other cases will be covered by rebalance logic
-                            (oldPartAssignment.isEmpty()) ? newPartAssignment : Collections.emptyList(),
+                            raftGroupName(tblId, partId),
+                            newPartitionAssignment,
+                            toAdd,
                             () -> new PartitionListener(tblId,
-                                    new VersionedRowStore(internalTbl.storage().getOrCreatePartition(partId), txManager)),
-                            () -> new RebalanceRaftGroupEventsListener(
-                                    metaStorageMgr,
-                                    tablesCfg.tables().get(tablesById.get(tblId).name()),
-                                    partitionRaftGroupName(tblId, partId),
-                                    partId,
-                                    busyLock,
-                                    () -> internalTbl.partitionRaftGroupService(partId),
-                                    rebalanceScheduler)
+                                    new VersionedRowStore(internalTable.storage().getOrCreatePartition(partId),
+                                            txManager))
                     ).thenAccept(
-                            updatedRaftGroupService -> ((InternalTableImpl) internalTbl)
+                            updatedRaftGroupService -> ((InternalTableImpl) internalTable)
                                     .updateInternalTableRaftGroupService(partId, updatedRaftGroupService)
                     ).exceptionally(th -> {
                         LOG.error("Failed to update raft groups one the node", th);
@@ -506,14 +422,12 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
                 table.internalTable().close();
 
                 for (int p = 0; p < table.internalTable().partitions(); p++) {
-                    raftMgr.stopRaftGroup(partitionRaftGroupName(table.tableId(), p));
+                    raftMgr.stopRaftGroup(raftGroupName(table.tableId(), p));
                 }
             } catch (Exception e) {
                 LOG.error("Failed to stop a table {}", e, table.name());
             }
         }
-
-        shutdownAndAwaitTermination(rebalanceScheduler, 10, TimeUnit.SECONDS);
     }
 
     /**
@@ -532,7 +446,6 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
 
         tableStorage.start();
 
-
         InternalTableImpl internalTable = new InternalTableImpl(name, tblId, new Int2ObjectOpenHashMap<>(partitions),
                 partitions, netAddrResolver, clusterNodeResolver, txManager, tableStorage);
 
@@ -586,7 +499,7 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
             int partitions = assignment.size();
 
             for (int p = 0; p < partitions; p++) {
-                raftMgr.stopRaftGroup(partitionRaftGroupName(tblId, p));
+                raftMgr.stopRaftGroup(raftGroupName(tblId, p));
             }
 
             tablesByIdVv.update(causalityToken, (previousVal, e) -> {
@@ -624,7 +537,7 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
      * @return A RAFT group name.
      */
     @NotNull
-    private String partitionRaftGroupName(UUID tblId, int partition) {
+    private String raftGroupName(UUID tblId, int partition) {
         return tblId + "_part_" + partition;
     }
 
@@ -1206,160 +1119,152 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
     }
 
     /**
-     * Register the new meta storage listener for changes in the rebalance-specific keys.
+     * Sets the given nodes as the baseline for all tables created by the manager.
+     *
+     * @param nodes New baseline nodes.
+     * @throws NodeStoppingException If the manager was stopped before the method was invoked.
      */
-    private void registerRebalanceListeners() {
-        metaStorageMgr.registerWatchByPrefix(ByteArray.fromString(PENDING_ASSIGNMENTS_PREFIX), new WatchListener() {
-            @Override
-            public boolean onUpdate(@NotNull WatchEvent evt) {
-                if (!busyLock.enterBusy()) {
-                    throw new IgniteInternalException(new NodeStoppingException());
-                }
-
-                try {
-                    assert evt.single();
-
-                    Entry pendingAssignmentsWatchEvent = evt.entryEvent().newEntry();
-
-                    if (pendingAssignmentsWatchEvent.value() == null) {
-                        return true;
-                    }
+    public void setBaseline(Set<String> nodes) throws NodeStoppingException {
+        if (!busyLock.enterBusy()) {
+            throw new NodeStoppingException();
+        }
+        try {
+            setBaselineInternal(nodes);
+        } finally {
+            busyLock.leaveBusy();
+        }
+    }
 
-                    int part = extractPartitionNumber(pendingAssignmentsWatchEvent.key());
-                    UUID tblId = extractTableId(pendingAssignmentsWatchEvent.key(), PENDING_ASSIGNMENTS_PREFIX);
+    /**
+     * Internal method for setting a baseline.
+     *
+     * @param nodes Names of baseline nodes.
+     */
+    private void setBaselineInternal(Set<String> nodes) {
+        if (nodes == null || nodes.isEmpty()) {
+            throw new IgniteException("New baseline can't be null or empty");
+        }
 
-                    String partId = partitionRaftGroupName(tblId, part);
+        var currClusterMembers = new HashSet<>(baselineMgr.nodes());
 
-                    // Assignments of the pending rebalance that we received through the meta storage watch mechanism.
-                    List<ClusterNode> newPeers = ((List<ClusterNode>) ByteUtils.fromBytes(pendingAssignmentsWatchEvent.value()));
+        var currClusterMemberNames =
+                currClusterMembers.stream().map(ClusterNode::name).collect(Collectors.toSet());
 
-                    var pendingAssignments = metaStorageMgr.get(pendingPartAssignmentsKey(partId)).join();
+        for (String nodeName : nodes) {
+            if (!currClusterMemberNames.contains(nodeName)) {
+                throw new IgniteException("Node '" + nodeName + "' not in current network cluster membership. "
+                        + " Adding not alive nodes is not supported yet.");
+            }
+        }
 
-                    assert pendingAssignmentsWatchEvent.revision() <= pendingAssignments.revision()
-                            : "Meta Storage watch cannot notify about an event with the revision that is more than the actual revision.";
+        var newBaseline = currClusterMembers
+                .stream().filter(n -> nodes.contains(n.name())).collect(Collectors.toSet());
 
-                    TableImpl tbl = tablesByIdVv.latest().get(tblId);
+        updateAssignments(currClusterMembers);
 
-                    ExtendedTableConfiguration tblCfg = (ExtendedTableConfiguration) tablesCfg.tables().get(tbl.name());
+        if (!newBaseline.equals(currClusterMembers)) {
+            updateAssignments(newBaseline);
+        }
+    }
 
-                    Supplier<RaftGroupListener> raftGrpLsnrSupplier = () -> new PartitionListener(tblId,
-                            new VersionedRowStore(
-                                    tbl.internalTable().storage().getOrCreatePartition(part), txManager));
+    /**
+     * Updates assignments for all current tables according to the input node list. This approach has known issues; see {@link
+     * Ignite#setBaseline(Set)}.
+     *
+     * @param clusterNodes Set of nodes for assignment.
+     */
+    private void updateAssignments(Set<ClusterNode> clusterNodes) {
+        var setBaselineFut = new CompletableFuture<>();
 
-                    Supplier<RaftGroupEventsListener> raftGrpEvtsLsnrSupplier = () -> new RebalanceRaftGroupEventsListener(
-                            metaStorageMgr,
-                            tblCfg,
-                            partId,
-                            part,
-                            busyLock,
-                            () -> tbl.internalTable().partitionRaftGroupService(part),
-                            rebalanceScheduler);
+        var changePeersQueue = new ArrayList<Supplier<CompletableFuture<Void>>>();
 
-                    // Stable assignments from the meta store, which revision is bounded by the current pending event.
-                    byte[] stableAssignments = metaStorageMgr.get(stablePartAssignmentsKey(partId),
-                            pendingAssignmentsWatchEvent.revision()).join().value();
+        tablesCfg.tables()
+                .change(tbls -> {
+                    changePeersQueue.clear();
 
-                    List<ClusterNode> assignments = stableAssignments == null
-                            // This is for the case when the first rebalance occurs.
-                            ? ((List<List<ClusterNode>>) ByteUtils.fromBytes(tblCfg.assignments().value())).get(part)
-                            : (List<ClusterNode>) ByteUtils.fromBytes(stableAssignments);
+                    for (int i = 0; i < tbls.size(); i++) {
+                        tbls.createOrUpdate(tbls.get(i).name(), changeX -> {
+                            ExtendedTableChange change = (ExtendedTableChange) changeX;
+                            byte[] currAssignments = change.assignments();
 
-                    var deltaPeers = newPeers.stream()
-                            .filter(p -> !assignments.contains(p))
-                            .collect(Collectors.toList());
+                            List<List<ClusterNode>> recalculatedAssignments = AffinityUtils.calculateAssignments(
+                                    clusterNodes,
+                                    change.partitions(),
+                                    change.replicas());
 
-                    try {
-                        raftMgr.startRaftGroupNode(partId, assignments, deltaPeers, raftGrpLsnrSupplier,
-                                raftGrpEvtsLsnrSupplier);
-                    } catch (NodeStoppingException e) {
-                        // no-op
-                    }
+                            if (!recalculatedAssignments.equals(ByteUtils.fromBytes(currAssignments))) {
+                                change.changeAssignments(ByteUtils.toBytes(recalculatedAssignments));
 
-                    // Do not change peers of the raft group if this is a stale event.
-                    // Note that we start raft node before for the sake of the consistency in a starting and stopping raft nodes.
-                    if (pendingAssignmentsWatchEvent.revision() < pendingAssignments.revision()) {
-                        return true;
+                                changePeersQueue.add(() ->
+                                        updateRaftTopology(
+                                                (List<List<ClusterNode>>) ByteUtils.fromBytes(currAssignments),
+                                                recalculatedAssignments,
+                                                change.id()));
+                            }
+                        });
                     }
+                })
+                .thenCompose((v) -> {
+                    CompletableFuture<?>[] changePeersFutures = new CompletableFuture<?>[changePeersQueue.size()];
 
-                    var newNodes = newPeers.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
-
-                    RaftGroupService partGrpSvc = tbl.internalTable().partitionRaftGroupService(part);
+                    int i = 0;
 
-                    IgniteBiTuple<Peer, Long> leaderWithTerm = partGrpSvc.refreshAndGetLeaderWithTerm().join();
-
-                    ClusterNode localMember = raftMgr.server().clusterService().topologyService().localMember();
-
-                    // run update of raft configuration if this node is a leader
-                    if (localMember.address().equals(leaderWithTerm.get1().address())) {
-                        partGrpSvc.changePeersAsync(newNodes, leaderWithTerm.get2()).join();
+                    for (Supplier<CompletableFuture<Void>> task : changePeersQueue) {
+                        changePeersFutures[i++] = task.get();
                     }
 
-                    return true;
-                } finally {
-                    busyLock.leaveBusy();
-                }
-            }
-
-            @Override
-            public void onError(@NotNull Throwable e) {
-                LOG.error("Error while processing pending assignments event", e);
-            }
-        });
-
-        metaStorageMgr.registerWatchByPrefix(ByteArray.fromString(STABLE_ASSIGNMENTS_PREFIX), new WatchListener() {
-            @Override
-            public boolean onUpdate(@NotNull WatchEvent evt) {
-                if (!busyLock.enterBusy()) {
-                    throw new IgniteInternalException(new NodeStoppingException());
-                }
-
-                try {
-                    assert evt.single();
-
-                    Entry stableAssignmentsWatchEvent = evt.entryEvent().newEntry();
-
-                    if (stableAssignmentsWatchEvent.value() == null) {
-                        return true;
+                    return CompletableFuture.allOf(changePeersFutures);
+                })
+                .whenComplete((res, th) -> {
+                    if (th != null) {
+                        setBaselineFut.completeExceptionally(th);
+                    } else {
+                        setBaselineFut.complete(null);
                     }
+                });
 
-                    int part = extractPartitionNumber(stableAssignmentsWatchEvent.key());
-                    UUID tblId = extractTableId(stableAssignmentsWatchEvent.key(), STABLE_ASSIGNMENTS_PREFIX);
-
-                    String partId = partitionRaftGroupName(tblId, part);
-
-                    var stableAssignments = (List<ClusterNode>) ByteUtils.fromBytes(stableAssignmentsWatchEvent.value());
-
-                    byte[] pendingFromMetastorage = metaStorageMgr.get(pendingPartAssignmentsKey(partId),
-                            stableAssignmentsWatchEvent.revision()).join().value();
-
-                    List<ClusterNode> pendingAssignments = pendingFromMetastorage == null
-                            ? Collections.emptyList()
-                            : (List<ClusterNode>) ByteUtils.fromBytes(pendingFromMetastorage);
+        setBaselineFut.join();
+    }
 
-                    List<ClusterNode> appliedPeers = Stream.concat(stableAssignments.stream(), pendingAssignments.stream())
-                            .collect(Collectors.toList());
+    /**
+     * Updates raft groups of table partitions to the new peer list.
+     *
+     * @param oldAssignments Old assignment.
+     * @param newAssignments New assignment.
+     * @param tblId Table ID.
+     * @return Future that completes when the update is finished.
+     */
+    private CompletableFuture<Void> updateRaftTopology(
+            List<List<ClusterNode>> oldAssignments,
+            List<List<ClusterNode>> newAssignments,
+            UUID tblId) {
+        CompletableFuture<?>[] futures = new CompletableFuture<?>[oldAssignments.size()];
 
-                    try {
-                        ClusterNode localMember = raftMgr.server().clusterService().topologyService().localMember();
+        // TODO: IGNITE-15554 Add logic for assignment recalculation in case of partitions or replicas changes
+        // TODO: Until IGNITE-15554 is implemented it's safe to iterate over partitions and replicas because there will
+        // TODO: be the exact same number of partitions and replicas for both old and new assignments
+        for (int i = 0; i < oldAssignments.size(); i++) {
+            final int p = i;
 
-                        if (!appliedPeers.contains(localMember)) {
-                            raftMgr.stopRaftGroup(partId);
-                        }
-                    } catch (NodeStoppingException e) {
-                        // no-op
-                    }
+            List<ClusterNode> oldPartitionAssignment = oldAssignments.get(p);
+            List<ClusterNode> newPartitionAssignment = newAssignments.get(p);
 
-                    return true;
-                } finally {
-                    busyLock.leaveBusy();
-                }
+            try {
+                futures[i] = raftMgr.changePeers(
+                        raftGroupName(tblId, p),
+                        oldPartitionAssignment,
+                        newPartitionAssignment
+                ).exceptionally(th -> {
+                    LOG.error("Failed to update raft peers for group " + raftGroupName(tblId, p)
+                            + "from " + oldPartitionAssignment + " to " + newPartitionAssignment, th);
+                    return null;
+                });
+            } catch (NodeStoppingException e) {
+                throw new AssertionError("Loza was stopped before Table manager", e);
             }
+        }
 
-            @Override
-            public void onError(@NotNull Throwable e) {
-                LOG.error("Error while processing stable assignments event", e);
-            }
-        });
+        return CompletableFuture.allOf(futures);
     }
 
     /**
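
Taken together, setBaselineInternal(), updateAssignments() and updateRaftTopology() above implement the restored baseline flow: validate the requested names against the physical topology, recompute per-table assignments, persist them in the table configuration, and push the new peer lists to the partition raft groups. A condensed per-table sketch using only the signatures visible in this hunk (configuration persistence, futures chaining and error handling omitted; illustration only, not part of the patch):

    private CompletableFuture<Void> changePeersForBaseline(
            UUID tblId, int partitions, int replicas,
            List<List<ClusterNode>> oldAssignments, Set<ClusterNode> newBaseline) throws NodeStoppingException {
        // Recalculate the desired per-partition assignments for the new baseline.
        List<List<ClusterNode>> newAssignments =
                AffinityUtils.calculateAssignments(newBaseline, partitions, replicas);

        CompletableFuture<?>[] futures = new CompletableFuture<?>[partitions];

        for (int p = 0; p < partitions; p++) {
            // Ask the raft manager to move each partition group to its new peer set.
            futures[p] = raftMgr.changePeers(raftGroupName(tblId, p), oldAssignments.get(p), newAssignments.get(p));
        }

        return CompletableFuture.allOf(futures);
    }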
diff --git a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/raft/RebalanceRaftGroupEventsListener.java b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/raft/RebalanceRaftGroupEventsListener.java
deleted file mode 100644
index f625fed47..000000000
--- a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/raft/RebalanceRaftGroupEventsListener.java
+++ /dev/null
@@ -1,357 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ignite.internal.table.distributed.raft;
-
-import static org.apache.ignite.internal.metastorage.client.Conditions.notExists;
-import static org.apache.ignite.internal.metastorage.client.Conditions.revision;
-import static org.apache.ignite.internal.metastorage.client.Operations.ops;
-import static org.apache.ignite.internal.metastorage.client.Operations.put;
-import static org.apache.ignite.internal.metastorage.client.Operations.remove;
-import static org.apache.ignite.internal.utils.RebalanceUtil.pendingPartAssignmentsKey;
-import static org.apache.ignite.internal.utils.RebalanceUtil.plannedPartAssignmentsKey;
-import static org.apache.ignite.internal.utils.RebalanceUtil.stablePartAssignmentsKey;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.function.Supplier;
-import org.apache.ignite.configuration.schemas.table.TableConfiguration;
-import org.apache.ignite.internal.configuration.schema.ExtendedTableChange;
-import org.apache.ignite.internal.metastorage.MetaStorageManager;
-import org.apache.ignite.internal.metastorage.client.Entry;
-import org.apache.ignite.internal.metastorage.client.If;
-import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
-import org.apache.ignite.internal.util.ByteUtils;
-import org.apache.ignite.internal.util.IgniteSpinBusyLock;
-import org.apache.ignite.lang.ByteArray;
-import org.apache.ignite.lang.IgniteInternalException;
-import org.apache.ignite.lang.IgniteLogger;
-import org.apache.ignite.network.ClusterNode;
-import org.apache.ignite.network.NetworkAddress;
-import org.apache.ignite.raft.client.Peer;
-import org.apache.ignite.raft.client.service.RaftGroupService;
-import org.apache.ignite.raft.jraft.Status;
-import org.apache.ignite.raft.jraft.entity.PeerId;
-import org.apache.ignite.raft.jraft.error.RaftError;
-
-/**
- * Listener for the raft group events, which must provide correct error handling of rebalance process
- * and start new rebalance after the current one finished.
- */
-public class RebalanceRaftGroupEventsListener implements RaftGroupEventsListener {
-    /** Ignite logger. */
-    private static final IgniteLogger LOG = IgniteLogger.forClass(RebalanceRaftGroupEventsListener.class);
-
-    /** Meta storage manager. */
-    private final MetaStorageManager metaStorageMgr;
-
-    /** Table configuration instance. */
-    private final TableConfiguration tblConfiguration;
-
-    /** Unique partition id. */
-    private final String partId;
-
-    /** Partition number. */
-    private final int partNum;
-
-    /** Busy lock of parent component for synchronous stop. */
-    private final IgniteSpinBusyLock busyLock;
-
-    /** Executor for scheduling rebalance retries. */
-    private final ScheduledExecutorService rebalanceScheduler;
-
-    /** Supplier of client for raft group of rebalance listener. */
-    private final Supplier<RaftGroupService> raftGroupServiceSupplier;
-
-    /** Attempts to retry the current rebalance in case of errors. */
-    private final AtomicInteger rebalanceAttempts =  new AtomicInteger(0);
-
-    /** Number of retrying of the current rebalance in case of errors. */
-    private static final int REBALANCE_RETRY_THRESHOLD = 10;
-
-    /** Delay between unsuccessful trial of a rebalance and a new trial, ms. */
-    public static final int REBALANCE_RETRY_DELAY_MS = 200;
-
-    /**
-     * Constructs new listener.
-     *
-     * @param metaStorageMgr Meta storage manager.
-     * @param tblConfiguration Table configuration.
-     * @param partId Partition id.
-     * @param partNum Partition number.
-     * @param rebalanceScheduler Executor for scheduling rebalance retries.
-     */
-    public RebalanceRaftGroupEventsListener(
-            MetaStorageManager metaStorageMgr,
-            TableConfiguration tblConfiguration,
-            String partId,
-            int partNum,
-            IgniteSpinBusyLock busyLock,
-            Supplier<RaftGroupService> raftGroupServiceSupplier,
-            ScheduledExecutorService rebalanceScheduler) {
-        this.metaStorageMgr = metaStorageMgr;
-        this.tblConfiguration = tblConfiguration;
-        this.partId = partId;
-        this.partNum = partNum;
-        this.busyLock = busyLock;
-        this.raftGroupServiceSupplier = raftGroupServiceSupplier;
-        this.rebalanceScheduler = rebalanceScheduler;
-    }
-
-    /** {@inheritDoc} */
-    @Override
-    public void onLeaderElected(long term) {
-        if (!busyLock.enterBusy()) {
-            return;
-        }
-
-        try {
-            rebalanceScheduler.schedule(() -> {
-                if (!busyLock.enterBusy()) {
-                    return;
-                }
-
-                try {
-                    rebalanceAttempts.set(0);
-
-                    metaStorageMgr.get(pendingPartAssignmentsKey(partId))
-                            .thenCompose(pendingEntry -> {
-                                if (!pendingEntry.empty()) {
-                                    List<ClusterNode> pendingNodes = (List<ClusterNode>) ByteUtils.fromBytes(pendingEntry.value());
-
-                                    return raftGroupServiceSupplier.get().changePeersAsync(clusterNodesToPeers(pendingNodes), term);
-                                } else {
-                                    return CompletableFuture.completedFuture(null);
-                                }
-                            }).get();
-                } catch (InterruptedException | ExecutionException e) {
-                    // TODO: IGNITE-17013 errors during this call should be handled by retry logic
-                    LOG.error("Couldn't start rebalance for partition {} of table {} on new elected leader for term {}",
-                            e, partNum, tblConfiguration.name().value(), term);
-                } finally {
-                    busyLock.leaveBusy();
-                }
-            }, 0, TimeUnit.MILLISECONDS);
-        } finally {
-            busyLock.leaveBusy();
-        }
-    }
-
-    /** {@inheritDoc} */
-    @Override
-    public void onNewPeersConfigurationApplied(List<PeerId> peers) {
-        if (!busyLock.enterBusy()) {
-            return;
-        }
-
-        try {
-            rebalanceScheduler.schedule(() -> {
-                if (!busyLock.enterBusy()) {
-                    return;
-                }
-
-                try {
-                    doOnNewPeersConfigurationApplied(peers);
-                } finally {
-                    busyLock.leaveBusy();
-                }
-            }, 0, TimeUnit.MILLISECONDS);
-        } finally {
-            busyLock.leaveBusy();
-        }
-    }
-
-    /** {@inheritDoc} */
-    @Override
-    public void onReconfigurationError(Status status, List<PeerId> peers, long term) {
-        if (!busyLock.enterBusy()) {
-            return;
-        }
-
-        try {
-            if (status == null) {
-                // leader stepped down, so we are expecting RebalanceRaftGroupEventsListener.onLeaderElected to be called on a new leader.
-                LOG.info("Leader stepped down during the current rebalance for the partId = {}.", partId);
-
-                return;
-            }
-
-            assert status.getRaftError() == RaftError.ECATCHUP : "According to the JRaft protocol, RaftError.ECATCHUP is expected.";
-
-            LOG.warn("Error occurred during the current rebalance for partId = {}.", partId);
-
-            if (rebalanceAttempts.incrementAndGet() < REBALANCE_RETRY_THRESHOLD) {
-                scheduleChangePeers(peers, term);
-            } else {
-                LOG.error("The number of retries of the rebalance for the partId = {} exceeded the threshold = {}.", partId,
-                        REBALANCE_RETRY_THRESHOLD);
-
-                // TODO: currently we just retry intent to change peers according to the rebalance infinitely, until new leader is elected,
-                // TODO: but rebalance cancel mechanism should be implemented. https://issues.apache.org/jira/browse/IGNITE-17056
-                scheduleChangePeers(peers, term);
-            }
-        } finally {
-            busyLock.leaveBusy();
-        }
-    }
-
-    /**
-     * Schedules changing peers according to the current rebalance.
-     *
-     * @param peers Peers to change configuration for a raft group.
-     * @param term Current known leader term.
-     */
-    private void scheduleChangePeers(List<PeerId> peers, long term) {
-        rebalanceScheduler.schedule(() -> {
-            if (!busyLock.enterBusy()) {
-                return;
-            }
-
-            LOG.info("Started {} attempt to retry the current rebalance for the partId = {}.", rebalanceAttempts.get(), partId);
-
-            try {
-                raftGroupServiceSupplier.get().changePeersAsync(peerIdsToPeers(peers), term).get();
-            } catch (InterruptedException | ExecutionException e) {
-                // TODO: IGNITE-17013 errors during this call should be handled by retry logic
-                LOG.error("Error during the rebalance retry for the partId = {}", e, partId);
-            } finally {
-                busyLock.leaveBusy();
-            }
-        }, REBALANCE_RETRY_DELAY_MS, TimeUnit.MILLISECONDS);
-    }
-
-    /**
-     * Implementation of {@link RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied(List)}.
-     *
-     * @param peers Peers
-     */
-    private void doOnNewPeersConfigurationApplied(List<PeerId> peers) {
-        try {
-            Map<ByteArray, Entry> keys = metaStorageMgr.getAll(
-                    Set.of(
-                            plannedPartAssignmentsKey(partId),
-                            pendingPartAssignmentsKey(partId),
-                            stablePartAssignmentsKey(partId))).get();
-
-            Entry plannedEntry = keys.get(plannedPartAssignmentsKey(partId));
-
-            List<ClusterNode> appliedPeers = resolveClusterNodes(peers,
-                    keys.get(pendingPartAssignmentsKey(partId)).value(), keys.get(stablePartAssignmentsKey(partId)).value());
-
-            tblConfiguration.change(ch -> {
-                List<List<ClusterNode>> assignments =
-                        (List<List<ClusterNode>>) ByteUtils.fromBytes(((ExtendedTableChange) ch).assignments());
-                assignments.set(partNum, appliedPeers);
-                ((ExtendedTableChange) ch).changeAssignments(ByteUtils.toBytes(assignments));
-            }).get();
-
-            if (plannedEntry.value() != null) {
-                if (!metaStorageMgr.invoke(If.iif(
-                        revision(plannedPartAssignmentsKey(partId)).eq(plannedEntry.revision()),
-                        ops(
-                                put(stablePartAssignmentsKey(partId), ByteUtils.toBytes(appliedPeers)),
-                                put(pendingPartAssignmentsKey(partId), plannedEntry.value()),
-                                remove(plannedPartAssignmentsKey(partId)))
-                                .yield(true),
-                        ops().yield(false))).get().getAsBoolean()) {
-                    doOnNewPeersConfigurationApplied(peers);
-                }
-            } else {
-                if (!metaStorageMgr.invoke(If.iif(
-                        notExists(plannedPartAssignmentsKey(partId)),
-                        ops(put(stablePartAssignmentsKey(partId), ByteUtils.toBytes(appliedPeers)),
-                                remove(pendingPartAssignmentsKey(partId))).yield(true),
-                        ops().yield(false))).get().getAsBoolean()) {
-                    doOnNewPeersConfigurationApplied(peers);
-                }
-            }
-
-            rebalanceAttempts.set(0);
-        } catch (InterruptedException | ExecutionException e) {
-            // TODO: IGNITE-17013 errors during this call should be handled by retry logic
-            LOG.error("Couldn't commit new partition configuration to metastore for table = {}, partition = {}",
-                    e, tblConfiguration.name(), partNum);
-        }
-    }
-
-    private static List<ClusterNode> resolveClusterNodes(
-            List<PeerId> peers, byte[] pendingAssignments, byte[] stableAssignments) {
-        Map<NetworkAddress, ClusterNode> resolveRegistry = new HashMap<>();
-
-        if (pendingAssignments != null) {
-            ((List<ClusterNode>) ByteUtils.fromBytes(pendingAssignments)).forEach(n -> resolveRegistry.put(n.address(), n));
-        }
-
-        if (stableAssignments != null) {
-            ((List<ClusterNode>) ByteUtils.fromBytes(stableAssignments)).forEach(n -> resolveRegistry.put(n.address(), n));
-        }
-
-        List<ClusterNode> resolvedNodes = new ArrayList<>(peers.size());
-
-        for (PeerId p : peers) {
-            var addr = NetworkAddress.from(p.getEndpoint().getIp() + ":" + p.getEndpoint().getPort());
-
-            if (resolveRegistry.containsKey(addr)) {
-                resolvedNodes.add(resolveRegistry.get(addr));
-            } else {
-                throw new IgniteInternalException("Can't find appropriate cluster node for raft group peer: " + p);
-            }
-        }
-
-        return resolvedNodes;
-    }
-
-    /**
-     * Transforms list of cluster nodes to the list of peers.
-     *
-     * @param nodes List of cluster nodes to transform.
-     * @return List of transformed peers.
-     */
-    private static List<Peer> clusterNodesToPeers(List<ClusterNode> nodes) {
-        List<Peer> peers = new ArrayList<>(nodes.size());
-
-        for (ClusterNode node : nodes) {
-            peers.add(new Peer(node.address()));
-        }
-
-        return peers;
-    }
-
-    /**
-     * Transforms list of peerIds to list of peers.
-     *
-     * @param peerIds List of peerIds to transform.
-     * @return List of transformed peers.
-     */
-    private static List<Peer> peerIdsToPeers(List<PeerId> peerIds) {
-        List<Peer> peers = new ArrayList<>(peerIds.size());
-
-        for (PeerId peerId : peerIds) {
-            peers.add(new Peer(NetworkAddress.from(peerId.getEndpoint().toString())));
-        }
-
-        return peers;
-    }
-}
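For reference, the guarded metastore update in doOnNewPeersConfigurationApplied above follows a compare-and-retry pattern: the stable/pending/planned switch is applied only if the planned-assignments key still has the revision that was read, and the whole step is repeated otherwise. A minimal sketch of that pattern, assuming the same metaStorageMgr, partId, appliedPeers and statically imported key/condition helpers as in the listener above (a loop stands in for the recursive retry):

    byte[] newStable = ByteUtils.toBytes(appliedPeers);

    boolean applied = false;

    while (!applied) {
        // Re-read the planned assignments to learn the revision the update must be conditioned on.
        Entry planned = metaStorageMgr.get(plannedPartAssignmentsKey(partId)).get();

        // Apply the switch only if nobody changed the planned key since it was read; otherwise retry.
        applied = metaStorageMgr.invoke(If.iif(
                revision(plannedPartAssignmentsKey(partId)).eq(planned.revision()),
                ops(put(stablePartAssignmentsKey(partId), newStable)).yield(true),
                ops().yield(false))).get().getAsBoolean();
    }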
diff --git a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/storage/InternalTableImpl.java b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/storage/InternalTableImpl.java
index 14acc2932..c65eb0b46 100644
--- a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/storage/InternalTableImpl.java
+++ b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/storage/InternalTableImpl.java
@@ -414,21 +414,6 @@ public class InternalTableImpl implements InternalTable {
         return clusterNodeResolver.apply(raftGroupService.leader().address());
     }
 
-    /** {@inheritDoc} */
-    @Override
-    public RaftGroupService partitionRaftGroupService(int partition) {
-        RaftGroupService raftGroupService = partitionMap.get(partition);
-        if (raftGroupService == null) {
-            throw new IgniteInternalException("No such partition " + partition + " in table " + tableName);
-        }
-
-        if (raftGroupService.leader() == null) {
-            raftGroupService.refreshLeader().join();
-        }
-
-        return raftGroupService;
-    }
-
     private void awaitLeaderInitialization() {
         List<CompletableFuture<Void>> futs = new ArrayList<>();
 
diff --git a/modules/table/src/main/java/org/apache/ignite/internal/utils/RebalanceUtil.java b/modules/table/src/main/java/org/apache/ignite/internal/utils/RebalanceUtil.java
deleted file mode 100644
index 90caa2bdd..000000000
--- a/modules/table/src/main/java/org/apache/ignite/internal/utils/RebalanceUtil.java
+++ /dev/null
@@ -1,172 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ignite.internal.utils;
-
-import static org.apache.ignite.internal.metastorage.client.CompoundCondition.and;
-import static org.apache.ignite.internal.metastorage.client.CompoundCondition.or;
-import static org.apache.ignite.internal.metastorage.client.Conditions.notExists;
-import static org.apache.ignite.internal.metastorage.client.Conditions.value;
-import static org.apache.ignite.internal.metastorage.client.Operations.ops;
-import static org.apache.ignite.internal.metastorage.client.Operations.put;
-import static org.apache.ignite.internal.metastorage.client.Operations.remove;
-
-import java.util.Collection;
-import java.util.UUID;
-import java.util.concurrent.CompletableFuture;
-import org.apache.ignite.internal.affinity.AffinityUtils;
-import org.apache.ignite.internal.metastorage.MetaStorageManager;
-import org.apache.ignite.internal.metastorage.client.If;
-import org.apache.ignite.internal.metastorage.client.StatementResult;
-import org.apache.ignite.internal.util.ByteUtils;
-import org.apache.ignite.lang.ByteArray;
-import org.apache.ignite.network.ClusterNode;
-import org.jetbrains.annotations.NotNull;
-
-/**
- * Util class for methods needed for the rebalance process.
- */
-public class RebalanceUtil {
-
-    /**
-     * Updates the keys related to the rebalance algorithm in Meta Storage. The keys are specific to a partition.
-     *
-     * @param partId Unique identifier of a partition.
-     * @param baselineNodes Nodes in baseline.
-     * @param partitions Number of partitions in a table.
-     * @param replicas Number of replicas for a table.
-     * @param revision Revision of Meta Storage that is specific for the assignment update.
-     * @param metaStorageMgr Meta Storage manager.
-     * @return Future representing result of updating keys in {@code metaStorageMgr}
-     */
-    public static @NotNull CompletableFuture<StatementResult> updatePendingAssignmentsKeys(
-            String partId, Collection<ClusterNode> baselineNodes,
-            int partitions, int replicas, long revision, MetaStorageManager metaStorageMgr, int partNum) {
-        ByteArray partChangeTriggerKey = partChangeTriggerKey(partId);
-
-        ByteArray partAssignmentsPendingKey = pendingPartAssignmentsKey(partId);
-
-        ByteArray partAssignmentsPlannedKey = plannedPartAssignmentsKey(partId);
-
-        ByteArray partAssignmentsStableKey = stablePartAssignmentsKey(partId);
-
-        byte[] partAssignmentsBytes = ByteUtils.toBytes(
-                AffinityUtils.calculateAssignments(baselineNodes, partitions, replicas).get(partNum));
-
-        //    if empty(partition.change.trigger.revision) || partition.change.trigger.revision < event.revision:
-        //        if empty(partition.assignments.pending) && partition.assignments.stable != calcPartAssignments():
-        //            partition.assignments.pending = calcPartAssignments()
-        //            partition.change.trigger.revision = event.revision
-        //        else:
-        //            if partition.assignments.pending != calcPartAssignments
-        //                partition.assignments.planned = calcPartAssignments()
-        //                partition.change.trigger.revision = event.revision
-        //            else
-        //                remove(partition.assignments.planned)
-        //    else:
-        //        skip
-        var iif = If.iif(or(notExists(partChangeTriggerKey), value(partChangeTriggerKey).lt(ByteUtils.longToBytes(revision))),
-                If.iif(and(notExists(partAssignmentsPendingKey), value(partAssignmentsStableKey).ne(partAssignmentsBytes)),
-                        ops(
-                                put(partAssignmentsPendingKey, partAssignmentsBytes),
-                                put(partChangeTriggerKey, ByteUtils.longToBytes(revision))
-                        ).yield(),
-                        If.iif(value(partAssignmentsPendingKey).ne(partAssignmentsBytes),
-                                ops(
-                                        put(partAssignmentsPlannedKey, partAssignmentsBytes),
-                                        put(partChangeTriggerKey, ByteUtils.longToBytes(revision))
-                                ).yield(),
-                                ops(remove(partAssignmentsPlannedKey)).yield())),
-                ops().yield());
-
-        return metaStorageMgr.invoke(iif);
-    }
-
-    /** Key prefix for pending assignments. */
-    public static final String PENDING_ASSIGNMENTS_PREFIX = "assignments.pending.";
-
-    /** Key prefix for stable assignments. */
-    public static final String STABLE_ASSIGNMENTS_PREFIX = "assignments.stable.";
-
-    /**
-     * Key that is needed for the rebalance algorithm.
-     *
-     * @param partId Unique identifier of a partition.
-     * @return Key for a partition.
-     * @see <a href="https://github.com/apache/ignite-3/blob/main/modules/table/tech-notes/rebalance.md">Rebalance documentation</a>
-     */
-    public static ByteArray partChangeTriggerKey(String partId) {
-        return new ByteArray(partId + ".change.trigger");
-    }
-
-    /**
-     * Key that is needed for the rebalance algorithm.
-     *
-     * @param partId Unique identifier of a partition.
-     * @return Key for a partition.
-     * @see <a href="https://github.com/apache/ignite-3/blob/main/modules/table/tech-notes/rebalance.md">Rebalance documentation</a>
-     */
-    public static ByteArray pendingPartAssignmentsKey(String partId) {
-        return new ByteArray(PENDING_ASSIGNMENTS_PREFIX + partId);
-    }
-
-    /**
-     * Key that is needed for the rebalance algorithm.
-     *
-     * @param partId Unique identifier of a partition.
-     * @return Key for a partition.
-     * @see <a href="https://github.com/apache/ignite-3/blob/main/modules/table/tech-notes/rebalance.md">Rebalance documentation</a>
-     */
-    public static ByteArray plannedPartAssignmentsKey(String partId) {
-        return new ByteArray("assignments.planned." + partId);
-    }
-
-    /**
-     * Key that is needed for the rebalance algorithm.
-     *
-     * @param partId Unique identifier of a partition.
-     * @return Key for a partition.
-     * @see <a href="https://github.com/apache/ignite-3/blob/main/modules/table/tech-notes/rebalance.md">Rebalance documentation</a>
-     */
-    public static ByteArray stablePartAssignmentsKey(String partId) {
-        return new ByteArray(STABLE_ASSIGNMENTS_PREFIX + partId);
-    }
-
-    /**
-     * Extract table id from pending key of partition.
-     *
-     * @param key Key.
-     * @return Table id.
-     */
-    public static UUID extractTableId(ByteArray key, String prefix) {
-        var strKey = key.toString();
-
-        return UUID.fromString(strKey.substring(prefix.length(), strKey.indexOf("_part_")));
-    }
-
-    /**
-     * Extract partition number from the rebalance key of partition.
-     *
-     * @param key Key.
-     * @return Partition number.
-     */
-    public static int extractPartitionNumber(ByteArray key) {
-        var strKey = key.toString();
-
-        return Integer.parseInt(strKey.substring(strKey.indexOf("_part_") + "_part_".length()));
-    }
-}
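As a quick usage sketch of these helpers (hypothetical names: tableId, baselineNodes, partitions, revision and metaStorageMgr are assumed to come from the calling manager; the partition id follows the "<tableId>_part_<N>" convention implied by extractTableId and extractPartitionNumber):

    // Partition id in the "<tableId>_part_<N>" form expected by the key helpers.
    String partId = tableId + "_part_" + 0;

    // Queue (or start) a rebalance of partition 0 after the replica count changed to 3.
    RebalanceUtil.updatePendingAssignmentsKeys(
            partId, baselineNodes, partitions, 3, revision, metaStorageMgr, 0);

    // A watch on PENDING_ASSIGNMENTS_PREFIX can later map the updated key back to the table and partition.
    ByteArray pendingKey = RebalanceUtil.pendingPartAssignmentsKey(partId);
    UUID table = RebalanceUtil.extractTableId(pendingKey, RebalanceUtil.PENDING_ASSIGNMENTS_PREFIX);
    int partNum = RebalanceUtil.extractPartitionNumber(pendingKey);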
diff --git a/modules/table/src/test/java/org/apache/ignite/internal/table/TableManagerTest.java b/modules/table/src/test/java/org/apache/ignite/internal/table/TableManagerTest.java
index 835df74c3..60cc92552 100644
--- a/modules/table/src/test/java/org/apache/ignite/internal/table/TableManagerTest.java
+++ b/modules/table/src/test/java/org/apache/ignite/internal/table/TableManagerTest.java
@@ -67,7 +67,6 @@ import org.apache.ignite.internal.configuration.schema.ExtendedTableView;
 import org.apache.ignite.internal.configuration.testframework.ConfigurationExtension;
 import org.apache.ignite.internal.configuration.testframework.InjectConfiguration;
 import org.apache.ignite.internal.configuration.testframework.InjectRevisionListenerHolder;
-import org.apache.ignite.internal.metastorage.MetaStorageManager;
 import org.apache.ignite.internal.pagememory.configuration.schema.UnsafeMemoryAllocatorConfigurationSchema;
 import org.apache.ignite.internal.raft.Loza;
 import org.apache.ignite.internal.schema.SchemaDescriptor;
@@ -88,8 +87,8 @@ import org.apache.ignite.internal.testframework.IgniteAbstractTest;
 import org.apache.ignite.internal.tx.LockManager;
 import org.apache.ignite.internal.tx.TxManager;
 import org.apache.ignite.internal.util.ByteUtils;
-import org.apache.ignite.lang.ByteArray;
 import org.apache.ignite.lang.IgniteException;
+import org.apache.ignite.lang.NodeStoppingException;
 import org.apache.ignite.network.ClusterNode;
 import org.apache.ignite.network.NetworkAddress;
 import org.apache.ignite.network.TopologyService;
@@ -153,10 +152,6 @@ public class TableManagerTest extends IgniteAbstractTest {
     @Mock(lenient = true)
     private LockManager lm;
 
-    /** Meta storage manager. */
-    @Mock
-    MetaStorageManager msm;
-
     /**
      * Revision listener holder. It uses for the test configurations:
      * <ul>
@@ -216,8 +211,6 @@ public class TableManagerTest extends IgniteAbstractTest {
             });
         };
 
-        when(msm.registerWatch(any(ByteArray.class), any())).thenReturn(CompletableFuture.completedFuture(1L));
-
         tblManagerFut = new CompletableFuture<>();
     }
 
@@ -239,7 +232,7 @@ public class TableManagerTest extends IgniteAbstractTest {
      */
     @Test
     public void testPreconfiguredTable() throws Exception {
-        when(rm.updateRaftGroup(any(), any(), any(), any(), any())).thenAnswer(mock ->
+        when(rm.updateRaftGroup(any(), any(), any(), any())).thenAnswer(mock ->
                 CompletableFuture.completedFuture(mock(RaftGroupService.class)));
 
         TableManager tableManager = createTableManager(tblManagerFut, false);
@@ -399,6 +392,8 @@ public class TableManagerTest extends IgniteAbstractTest {
 
         assertThrows(IgniteException.class, () -> tableManager.table(fakeTblId));
         assertThrows(IgniteException.class, () -> tableManager.tableAsync(fakeTblId));
+
+        assertThrows(NodeStoppingException.class, () -> tableManager.setBaseline(Collections.singleton("fakeNode0")));
     }
 
     /**
@@ -415,7 +410,7 @@ public class TableManagerTest extends IgniteAbstractTest {
 
         mockManagersAndCreateTable(scmTbl, tblManagerFut);
 
-        verify(rm, times(PARTITIONS)).updateRaftGroup(anyString(), any(), any(), any(), any());
+        verify(rm, times(PARTITIONS)).updateRaftGroup(anyString(), any(), any(), any());
 
         TableManager tableManager = tblManagerFut.join();
 
@@ -524,7 +519,7 @@ public class TableManagerTest extends IgniteAbstractTest {
             CompletableFuture<TableManager> tblManagerFut,
             Phaser phaser
     ) throws Exception {
-        when(rm.updateRaftGroup(any(), any(), any(), any(), any())).thenAnswer(mock -> {
+        when(rm.updateRaftGroup(any(), any(), any(), any())).thenAnswer(mock -> {
             RaftGroupService raftGrpSrvcMock = mock(RaftGroupService.class);
 
             when(raftGrpSrvcMock.leader()).thenReturn(new Peer(new NetworkAddress("localhost", 47500)));
@@ -629,7 +624,6 @@ public class TableManagerTest extends IgniteAbstractTest {
                 ts,
                 tm,
                 dsm = createDataStorageManager(configRegistry, workDir, pageMemoryEngineConfig),
-                msm,
                 sm = new SchemaManager(revisionUpdater, tblsCfg)
         );
 
diff --git a/modules/table/tech-notes/rebalance.md b/modules/table/tech-notes/rebalance.md
index 2f096b52c..c0d374b33 100644
--- a/modules/table/tech-notes/rebalance.md
+++ b/modules/table/tech-notes/rebalance.md
@@ -32,7 +32,7 @@ Also, we will need the utility key:
 
 ## Operations, which can trigger rebalance
 Three types of events can trigger the rebalance:
-- Change of baseline metastore key (1 for all tables for now, but maybe it should be separate per table in future)
+- An API call to a special method like `org.apache.ignite.Ignite.setBaseline`, which changes the baseline value in the metastore (1 for all tables for now, but maybe it should be separate per table in the future)
 - Configuration change through `org.apache.ignite.configuration.schemas.table.TableChange.changeReplicas` produces a metastore update event
 - Configuration change through `org.apache.ignite.configuration.schemas.table.TableChange.changePartitions` produces a metastore update event (IMPORTANT: this type of trigger has additional difficulties because of cross-raft-group data migration and it is out of scope of this document)
 
@@ -96,7 +96,6 @@ metastoreInvoke: \\ atomic
         partition.assignments.pending = empty
     else:
         partition.assignments.pending = partition.assignments.planned
-        remove(partition.assignments.planned)
 ```
 
 Failover helpers (detailed failover scenarios must be developed in the future)
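For the replica-count trigger in the list above, the entry point is a plain configuration change; a minimal sketch, assuming a TableConfiguration handle named tblConfiguration for the table being altered (the same change(...) pattern TableManager uses elsewhere in this patch):

    // Bumping the replica count writes the new value to the distributed configuration,
    // which produces the metastore update event that the rebalance logic reacts to.
    tblConfiguration.change(ch -> ch.changeReplicas(3)).get();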


[ignite-3] 02/02: IGNITE-14209 Data rebalance on partition replicas' number changes

Posted by sk...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sk0x50 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/ignite-3.git

commit d56882b82fc8040daa1f84ed7f48239aed621774
Author: Kirill Gusakov <kg...@gmail.com>
AuthorDate: Tue Jun 7 02:18:59 2022 +0300

    IGNITE-14209 Data rebalance on partition replicas' number changes
    
    IGNITE-16012 Add change peer async functionality
    IGNITE-15554 Add changing assignments mock on replica changing
    IGNITE-16109 Raft listeners for onLeaderElected/onNewPeersConfigurationApplied
    IGNITE-16010 Move setBaseline to the cluster configuration
    IGNITE-16379 Added listener for reconfiguration errors
    IGNITE-16063 Update assignments rebalance keys in metastorage on replica number change
    IGNITE-16011 Start rebalance, when pending assignments updated
    IGNITE-16903 Stop redundant raft nodes not from the new assignments during rebalance
    IGNITE-16801 Added error handling during rebalance in case of errors from JRaft
    IGNITE-16800 Check if restart rebalance needed on the leader start
    
    Co-authored-by: Mirza Aliev <al...@gmail.com>
    Signed-off-by: Slava Koptilin <sl...@gmail.com>
---
 assembly/README.md                                 |   3 -
 docs/_docs/quick-start/getting-started-guide.adoc  |   5 +-
 docs/_docs/rebalance.adoc                          |   7 -
 examples/README.md                                 |   3 +-
 .../ignite/example/rebalance/RebalanceExample.java | 216 --------
 .../src/main/java/org/apache/ignite/Ignite.java    |  24 -
 .../ignite/internal/client/TcpIgniteClient.java    |   7 -
 .../org/apache/ignite/client/fakes/FakeIgnite.java |   7 -
 .../ignite/client/fakes/FakeInternalTable.java     |   7 +
 .../ignite/internal/causality/VersionedValue.java  |   8 +-
 .../raft/client/service/RaftGroupService.java      |  28 ++
 .../apache/ignite/raft/jraft/core/ItNodeTest.java  | 234 ++++++++-
 .../java/org/apache/ignite/internal/raft/Loza.java | 151 +++---
 .../raft/server/RaftGroupEventsListener.java       |  68 +++
 .../ignite/internal/raft/server/RaftServer.java    |  12 +
 .../internal/raft/server/impl/JraftServerImpl.java |  38 ++
 .../java/org/apache/ignite/raft/jraft/Node.java    |  11 +
 .../apache/ignite/raft/jraft/RaftMessageGroup.java |   6 +
 .../apache/ignite/raft/jraft/core/NodeImpl.java    |  75 ++-
 .../ignite/raft/jraft/option/NodeOptions.java      |  13 +
 .../apache/ignite/raft/jraft/rpc/CliRequests.java  |  21 +-
 .../raft/jraft/rpc/impl/IgniteRpcServer.java       |   2 +
 .../raft/jraft/rpc/impl/RaftGroupServiceImpl.java  |  61 ++-
 .../impl/cli/ChangePeersAsyncRequestProcessor.java |  93 ++++
 .../org/apache/ignite/internal/raft/LozaTest.java  |   3 +-
 .../internal/raft/server/impl/RaftServerImpl.java  |   7 +
 .../apache/ignite/raft/jraft/core/TestCluster.java |  13 +
 .../cli/ChangePeersAsyncRequestProcessorTest.java  |  64 +++
 .../storage/ItRebalanceDistributedTest.java        | 544 +++++++++++++++++++++
 .../internal/runner/app/ItBaselineChangesTest.java | 174 -------
 .../runner/app/ItIgniteNodeRestartTest.java        |   1 +
 .../org/apache/ignite/internal/app/IgniteImpl.java |  12 +-
 .../sql/engine/exec/MockedStructuresTest.java      |  11 +-
 .../ignite/internal/table/InternalTable.java       |  10 +
 .../internal/table/distributed/TableManager.java   | 375 ++++++++------
 .../raft/RebalanceRaftGroupEventsListener.java     | 357 ++++++++++++++
 .../distributed/storage/InternalTableImpl.java     |  15 +
 .../ignite/internal/utils/RebalanceUtil.java       | 172 +++++++
 .../ignite/internal/table/TableManagerTest.java    |  18 +-
 modules/table/tech-notes/rebalance.md              |   3 +-
 40 files changed, 2163 insertions(+), 716 deletions(-)

diff --git a/assembly/README.md b/assembly/README.md
index fea9c1a3f..489382c87 100644
--- a/assembly/README.md
+++ b/assembly/README.md
@@ -42,9 +42,6 @@ The following examples are included:
 * `RecordViewExample` - demonstrates the usage of the `org.apache.ignite.table.RecordView` API
 * `KeyValueViewExample` - demonstrates the usage of the `org.apache.ignite.table.KeyValueView` API
 * `SqlJdbcExample` - demonstrates the usage of the Apache Ignite JDBC driver.
-* `RebalanceExample` - demonstrates the data rebalancing process.
-
-To run the `RebalanceExample`, refer to its JavaDoc for instructions.
 
 To run any other example, do the following:
 1. Import the examples project into your IDE.
diff --git a/docs/_docs/quick-start/getting-started-guide.adoc b/docs/_docs/quick-start/getting-started-guide.adoc
index 802714c79..961954667 100644
--- a/docs/_docs/quick-start/getting-started-guide.adoc
+++ b/docs/_docs/quick-start/getting-started-guide.adoc
@@ -190,11 +190,8 @@ The project includes the following examples:
 * `RecordViewExample` demonstrates the usage of the `org.apache.ignite.table.RecordView` API to create a table. It also shows how to get data from a table, or insert a line into a table.
 * `KeyValueViewExample` - demonstrates the usage of the `org.apache.ignite.table.KeyValueView` API to insert a line into a table.
 * `SqlJdbcExample` - demonstrates the usage of the Apache Ignite JDBC driver.
-* `RebalanceExample` - demonstrates the data rebalancing process.
 
-To run the `RebalanceExample`, refer to its link:https://github.com/apache/ignite-3/blob/3.0.0-alpha4/examples/src/main/java/org/apache/ignite/example/rebalance/RebalanceExample.java[JavaDoc,window=_blank] for instructions.
-
-To run any other example, perform the following steps:
+To run any example, perform the following steps:
 
 . Import the examples project into your IDE.
 
diff --git a/docs/_docs/rebalance.adoc b/docs/_docs/rebalance.adoc
index 54fe40cc7..9a897c5ae 100644
--- a/docs/_docs/rebalance.adoc
+++ b/docs/_docs/rebalance.adoc
@@ -18,10 +18,3 @@ When a new node joins the cluster, some of the partitions are relocated to the n
 If an existing node permanently leaves the cluster and backups are not configured, you lose the partitions stored on this node. When backups are configured, one of the backup copies of the lost partitions becomes a primary partition and the rebalancing process is initiated.
 
 WARNING: Data rebalancing is triggered by changes in the Baseline Topology. In pure in-memory clusters, the default behavior is to start rebalancing immediately when a node leaves or joins the cluster (the baseline topology changes automatically). In clusters with persistence, the baseline topology has to be changed manually (default behavior), or can be changed automatically when automatic baseline adjustment is enabled.
-
-== Running an Example
-
-Examples are shipped as a separate Maven project, which is located in the `examples` folder. `RebalanceExample` demonstrates the data rebalancing process.
-
-To start running `RebalanceExample`, please refer to its link:https://github.com/apache/ignite-3/blob/3.0.0-alpha3/examples/src/main/java/org/apache/ignite/example/rebalance/RebalanceExample.java[JavaDoc,window=_blank] for instructions.
-
diff --git a/examples/README.md b/examples/README.md
index 890753737..410cbf94e 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -9,10 +9,9 @@ The following examples are included:
 * `RecordViewExample` - demonstrates the usage of the `org.apache.ignite.table.RecordView` API
 * `KeyValueViewExample` - demonstrates the usage of the `org.apache.ignite.table.KeyValueView` API
 * `SqlJdbcExample` - demonstrates the usage of the Apache Ignite JDBC driver.
-* `RebalanceExample` - demonstrates the data rebalancing process.
 * `VolatilePageMemoryStorageExample` - demonstrates the usage of the PageMemory storage engine configured with an in-memory data region.
 * `PersistentPageMemoryStorageExample` - demonstrates the usage of the PageMemory storage engine configured with a persistent data region.
 
 Before running the examples, read about [cli](https://ignite.apache.org/docs/3.0.0-alpha/ignite-cli-tool).
 
-To run the examples, refer to their JavaDoc for instructions.
\ No newline at end of file
+To run the examples, refer to their JavaDoc for instructions.
diff --git a/examples/src/main/java/org/apache/ignite/example/rebalance/RebalanceExample.java b/examples/src/main/java/org/apache/ignite/example/rebalance/RebalanceExample.java
deleted file mode 100644
index 4a2db3991..000000000
--- a/examples/src/main/java/org/apache/ignite/example/rebalance/RebalanceExample.java
+++ /dev/null
@@ -1,216 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ignite.example.rebalance;
-
-import java.nio.file.Files;
-import java.nio.file.Path;
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.Statement;
-import java.util.Set;
-import org.apache.ignite.Ignite;
-import org.apache.ignite.IgnitionManager;
-import org.apache.ignite.client.IgniteClient;
-import org.apache.ignite.table.KeyValueView;
-import org.apache.ignite.table.Tuple;
-
-/**
- * This example demonstrates the data rebalance process.
- *
- * <p>The example emulates the basic scenario when one starts a three-node topology,
- * inserts some data, and then scales out by adding two more nodes. After the topology is changed, the data is rebalanced and verified for
- * correctness.
- *
- * <p>To run the example, do the following:
- * <ol>
- *     <li>Import the examples project into your IDE.</li>
- *     <li>
- *         Download and prepare artifacts for running an Ignite node using the CLI tool (if not done yet):<br>
- *         {@code ignite bootstrap}
- *     </li>
- *     <li>
- *         Start <b>two</b> nodes using the CLI tool:<br>
- *         {@code ignite node start --config=$IGNITE_HOME/examples/config/ignite-config.json my-first-node}<br>
- *         {@code ignite node start --config=$IGNITE_HOME/examples/config/ignite-config.json my-second-node}
- *     </li>
- *     <li>
- *         Cluster initialization using the CLI tool (if not done yet):<br>
- *         {@code ignite cluster init --cluster-name=ignite-cluster --node-endpoint=localhost:10300 --meta-storage-node=my-first-node}
- *     </li>
- *     <li>Run the example in the IDE.</li>
- *     <li>
- *         When requested, start another <b>two</b> nodes using the CLI tool:
- *         {@code ignite node start --config=$IGNITE_HOME/examples/config/ignite-config.json my-first-additional-node}<br>
- *         {@code ignite node start --config=$IGNITE_HOME/examples/config/ignite-config.json my-second-additional-node}
- *     </li>
- *     <li>Press {@code Enter} to resume the example.</li>
- *     <li>
- *         Stop <b>four</b> nodes using the CLI tool:<br>
- *         {@code ignite node stop my-first-node}<br>
- *         {@code ignite node stop my-second-node}<br>
- *         {@code ignite node stop my-first-additional-node}<br>
- *         {@code ignite node stop my-second-additional-node}
- *     </li>
- * </ol>
- */
-public class RebalanceExample {
-    /**
-     * Main method of the example.
-     *
-     * @param args The command line arguments.
-     * @throws Exception If failed.
-     */
-    public static void main(String[] args) throws Exception {
-        //--------------------------------------------------------------------------------------
-        //
-        // Creating 'accounts' table.
-        //
-        //--------------------------------------------------------------------------------------
-
-        try (
-                Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800/");
-                Statement stmt = conn.createStatement()
-        ) {
-            stmt.executeUpdate(
-                    "CREATE TABLE rebalance ("
-                            + "key   INT PRIMARY KEY,"
-                            + "value VARCHAR)"
-            );
-        }
-
-        //--------------------------------------------------------------------------------------
-        //
-        // Creating a client to connect to the cluster.
-        //
-        //--------------------------------------------------------------------------------------
-
-        System.out.println("\nConnecting to server...");
-
-        try (IgniteClient client = IgniteClient.builder()
-                .addresses("127.0.0.1:10800")
-                .build()
-        ) {
-            KeyValueView<Tuple, Tuple> kvView = client.tables().table("PUBLIC.rebalance").keyValueView();
-
-            //--------------------------------------------------------------------------------------
-            //
-            // Inserting several key-value pairs into the table.
-            //
-            //--------------------------------------------------------------------------------------
-
-            System.out.println("\nInserting key-value pairs...");
-
-            for (int i = 0; i < 10; i++) {
-                Tuple key = Tuple.create().set("key", i);
-                Tuple value = Tuple.create().set("value", "test_" + i);
-
-                kvView.put(null, key, value);
-            }
-
-            //--------------------------------------------------------------------------------------
-            //
-            // Retrieving the newly inserted data.
-            //
-            //--------------------------------------------------------------------------------------
-
-            System.out.println("\nRetrieved key-value pairs:");
-
-            for (int i = 0; i < 10; i++) {
-                Tuple key = Tuple.create().set("key", i);
-                Tuple value = kvView.get(null, key);
-
-                System.out.println("    " + i + " -> " + value.stringValue("value"));
-            }
-
-            //--------------------------------------------------------------------------------------
-            //
-            // Scaling out by adding two more nodes into the topology.
-            //
-            //--------------------------------------------------------------------------------------
-
-            System.out.println("\n"
-                    + "Run the following commands using the CLI tool to start two more nodes, and then press 'Enter' to continue...\n"
-                    + "    ignite node start --config=examples/config/ignite-config.json my-first-additional-node\n"
-                    + "    ignite node start --config=examples/config/ignite-config.json my-second-additional-node");
-
-            System.in.read();
-
-            //--------------------------------------------------------------------------------------
-            //
-            // Updating baseline to initiate the data rebalancing process.
-            //
-            // New topology includes the following five nodes:
-            //     1. 'my-first-node' -- the first node started prior to running the example
-            //     2. 'my-second-node' -- the second node started prior to running the example
-            //     3. 'my-first-additional-node' -- the first node added to the topology
-            //     4. 'my-second-additional-node' -- the second node added to the topology
-            //     5. 'example-node' -- node that is embedded into the example
-            //
-            // NOTE: An embedded server node is started here for the sole purpose of setting
-            //       the baseline. In the future releases, this API will be provided by the
-            //       clients as well. In addition, the process will be automated where applicable
-            //       to eliminate the need for this manual step.
-            //
-            //--------------------------------------------------------------------------------------
-
-            System.out.println("Starting a server node... Logging to file: example-node.log");
-
-            System.setProperty("java.util.logging.config.file", "config/java.util.logging.properties");
-
-            try (Ignite server = IgnitionManager.start(
-                    "example-node",
-                    Files.readString(Path.of("config", "ignite-config.json")),
-                    Path.of("work")
-            ).join()) {
-                System.out.println("\nUpdating the baseline and rebalancing the data...");
-
-                server.setBaseline(Set.of(
-                        "my-first-node",
-                        "my-second-node",
-                        "my-first-additional-node",
-                        "my-second-additional-node",
-                        "example-node"
-                ));
-
-                //--------------------------------------------------------------------------------------
-                //
-                // Retrieving data again to validate correctness.
-                //
-                //--------------------------------------------------------------------------------------
-
-                System.out.println("\nKey-value pairs retrieved after the topology change:");
-
-                for (int i = 0; i < 10; i++) {
-                    Tuple key = Tuple.create().set("key", i);
-                    Tuple value = kvView.get(null, key);
-
-                    System.out.println("    " + i + " -> " + value.stringValue("value"));
-                }
-            }
-        }
-
-        System.out.println("\nDropping the table...");
-
-        try (
-                Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800/");
-                Statement stmt = conn.createStatement()
-        ) {
-            stmt.executeUpdate("DROP TABLE rebalance");
-        }
-    }
-}
diff --git a/modules/api/src/main/java/org/apache/ignite/Ignite.java b/modules/api/src/main/java/org/apache/ignite/Ignite.java
index c530df5fc..ad0a1434b 100644
--- a/modules/api/src/main/java/org/apache/ignite/Ignite.java
+++ b/modules/api/src/main/java/org/apache/ignite/Ignite.java
@@ -18,16 +18,13 @@
 package org.apache.ignite;
 
 import java.util.Collection;
-import java.util.Set;
 import java.util.concurrent.CompletableFuture;
 import org.apache.ignite.compute.ComputeJob;
 import org.apache.ignite.compute.IgniteCompute;
-import org.apache.ignite.lang.IgniteException;
 import org.apache.ignite.network.ClusterNode;
 import org.apache.ignite.sql.IgniteSql;
 import org.apache.ignite.table.manager.IgniteTables;
 import org.apache.ignite.tx.IgniteTransactions;
-import org.jetbrains.annotations.ApiStatus.Experimental;
 
 /**
  * Ignite API entry point.
@@ -61,27 +58,6 @@ public interface Ignite extends AutoCloseable {
      */
     IgniteSql sql();
 
-    /**
-     * Set new baseline nodes for table assignments.
-     *
-     * <p>The current implementation has significant restrictions: - Only alive nodes can be part of the new baseline; if any passed node is
-     * not alive, an {@link IgniteException} with an appropriate message is thrown. - It can potentially be a long operation, and the current
-     * synchronous changePeers-based implementation does not handle this well. - No recovery logic is supported; if setBaseline fails, it
-     * can leave the cluster in a random state.
-     * TODO: IGNITE-14209 issues above must be fixed.
-     * TODO: IGNITE-15815 add a test for stopping node and asynchronous implementation.
-     *
-     * @param baselineNodes Names of baseline nodes.
-     * @throws IgniteException If an unspecified platform exception has happened internally. Is thrown when:
-     *                         <ul>
-     *                             <li>the node is stopping,</li>
-     *                             <li>{@code baselineNodes} argument is empty or null,</li>
-     *                             <li>any node from {@code baselineNodes} is not alive.</li>
-     *                         </ul>
-     */
-    @Experimental
-    void setBaseline(Set<String> baselineNodes);
-
     /**
      * Returns {@link IgniteCompute} which can be used to execute compute jobs.
      *
diff --git a/modules/client/src/main/java/org/apache/ignite/internal/client/TcpIgniteClient.java b/modules/client/src/main/java/org/apache/ignite/internal/client/TcpIgniteClient.java
index 4a185b04b..c33746546 100644
--- a/modules/client/src/main/java/org/apache/ignite/internal/client/TcpIgniteClient.java
+++ b/modules/client/src/main/java/org/apache/ignite/internal/client/TcpIgniteClient.java
@@ -22,7 +22,6 @@ import static org.apache.ignite.internal.client.ClientUtils.sync;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.List;
-import java.util.Set;
 import java.util.concurrent.CompletableFuture;
 import java.util.function.BiFunction;
 import org.apache.ignite.client.IgniteClient;
@@ -134,12 +133,6 @@ public class TcpIgniteClient implements IgniteClient {
         return sql;
     }
 
-    /** {@inheritDoc} */
-    @Override
-    public void setBaseline(Set<String> baselineNodes) {
-        throw new UnsupportedOperationException();
-    }
-
     /** {@inheritDoc} */
     @Override
     public IgniteCompute compute() {
diff --git a/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java b/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java
index aa617b84d..6341a0cf6 100644
--- a/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java
+++ b/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeIgnite.java
@@ -18,7 +18,6 @@
 package org.apache.ignite.client.fakes;
 
 import java.util.Collection;
-import java.util.Set;
 import java.util.concurrent.CompletableFuture;
 import org.apache.ignite.Ignite;
 import org.apache.ignite.compute.IgniteCompute;
@@ -100,12 +99,6 @@ public class FakeIgnite implements Ignite {
         return new FakeIgniteSql();
     }
 
-    /** {@inheritDoc} */
-    @Override
-    public void setBaseline(Set<String> baselineNodes) {
-        throw new UnsupportedOperationException();
-    }
-
     /** {@inheritDoc} */
     @Override
     public IgniteCompute compute() {
diff --git a/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeInternalTable.java b/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeInternalTable.java
index 77e7afad7..7da5e0bc1 100644
--- a/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeInternalTable.java
+++ b/modules/client/src/test/java/org/apache/ignite/client/fakes/FakeInternalTable.java
@@ -33,6 +33,7 @@ import org.apache.ignite.internal.table.InternalTable;
 import org.apache.ignite.internal.tx.InternalTransaction;
 import org.apache.ignite.lang.IgniteInternalException;
 import org.apache.ignite.network.ClusterNode;
+import org.apache.ignite.raft.client.service.RaftGroupService;
 import org.jetbrains.annotations.NotNull;
 import org.jetbrains.annotations.Nullable;
 
@@ -281,6 +282,12 @@ public class FakeInternalTable implements InternalTable {
         throw new IgniteInternalException(new OperationNotSupportedException());
     }
 
+    /** {@inheritDoc} */
+    @Override
+    public RaftGroupService partitionRaftGroupService(int partition) {
+        return null;
+    }
+
     /** {@inheritDoc} */
     @Override
     public int partition(BinaryRowEx keyRow) {
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/causality/VersionedValue.java b/modules/core/src/main/java/org/apache/ignite/internal/causality/VersionedValue.java
index 10b3df8c3..1eb90e152 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/causality/VersionedValue.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/causality/VersionedValue.java
@@ -591,10 +591,10 @@ public class VersionedValue<T> {
      * Check that the given causality token is correct according to the actual token.
      *
      * @param actualToken Actual token.
-     * @param causalityToken Causality token.
+     * @param candidateToken Candidate token.
      */
-    private static void checkToken(long actualToken, long causalityToken) {
-        assert actualToken == NOT_INITIALIZED || actualToken + 1 == causalityToken : IgniteStringFormatter.format(
-            "Token must be greater than actual by exactly 1 [token={}, actual={}]", causalityToken, actualToken);
+    private static void checkToken(long actualToken, long candidateToken) {
+        assert actualToken == NOT_INITIALIZED || actualToken < candidateToken : IgniteStringFormatter.format(
+                "Token must be greater than actual [token={}, actual={}]", candidateToken, actualToken);
     }
 }
diff --git a/modules/raft-client/src/main/java/org/apache/ignite/raft/client/service/RaftGroupService.java b/modules/raft-client/src/main/java/org/apache/ignite/raft/client/service/RaftGroupService.java
index 9fc267422..270ea9953 100644
--- a/modules/raft-client/src/main/java/org/apache/ignite/raft/client/service/RaftGroupService.java
+++ b/modules/raft-client/src/main/java/org/apache/ignite/raft/client/service/RaftGroupService.java
@@ -20,6 +20,7 @@ package org.apache.ignite.raft.client.service;
 import java.util.List;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.TimeoutException;
+import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.network.ClusterService;
 import org.apache.ignite.raft.client.Command;
 import org.apache.ignite.raft.client.Peer;
@@ -91,6 +92,15 @@ public interface RaftGroupService {
      */
     CompletableFuture<Void> refreshLeader();
 
+    /**
+     * Refreshes the replication group leader and returns a (leader, term) tuple.
+     *
+     * <p>This operation is executed on a group leader.
+     *
+     * @return A future with a (leader, term) tuple.
+     */
+    CompletableFuture<IgniteBiTuple<Peer, Long>> refreshAndGetLeaderWithTerm();
+
     /**
      * Refreshes replication group members.
      *
@@ -143,6 +153,24 @@ public interface RaftGroupService {
      */
     CompletableFuture<Void> changePeers(List<Peer> peers);
 
+    /**
+     * Changes peers of the replication group.
+     *
+     * <p>Asynchronous variant of the previous method.
+     * When the future completes, it only means that the changePeers process has started successfully.
+     *
+     * <p>The result of the rebalance itself is processed by the listener of raft reconfiguration events
+     * (from the raft/server module).
+     *
+     * <p>This operation is executed on a group leader.
+     *
+     * @param peers Peers.
+     * @param term Current known leader term.
+     *             If the actual raft group term differs, the changePeers request will be skipped.
+     * @return A future.
+     */
+    CompletableFuture<Void> changePeersAsync(List<Peer> peers, long term);
+
     /**
      * Adds learners (non-voting members).
      *
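A hedged usage sketch of the two new methods (raftGroupService and newPeers are illustrative names, not part of the patch): the caller first refreshes the leader to learn the current term, then requests the asynchronous peer change; the returned future only signals that the reconfiguration was started, and the outcome is reported through the raft group events listener.

    CompletableFuture<Void> started = raftGroupService.refreshAndGetLeaderWithTerm()
            .thenCompose(leaderWithTerm ->
                    // get2() is the term half of the IgniteBiTuple<Peer, Long>.
                    raftGroupService.changePeersAsync(newPeers, leaderWithTerm.get2()));

    // Only means the change was accepted by the leader; rebalance completion is reported elsewhere.
    started.join();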
diff --git a/modules/raft/src/integrationTest/java/org/apache/ignite/raft/jraft/core/ItNodeTest.java b/modules/raft/src/integrationTest/java/org/apache/ignite/raft/jraft/core/ItNodeTest.java
index 914e197e6..3087963b9 100644
--- a/modules/raft/src/integrationTest/java/org/apache/ignite/raft/jraft/core/ItNodeTest.java
+++ b/modules/raft/src/integrationTest/java/org/apache/ignite/raft/jraft/core/ItNodeTest.java
@@ -31,12 +31,21 @@ import static org.junit.jupiter.api.Assertions.assertNull;
 import static org.junit.jupiter.api.Assertions.assertSame;
 import static org.junit.jupiter.api.Assertions.assertTrue;
 import static org.junit.jupiter.api.Assertions.fail;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.anyLong;
+import static org.mockito.ArgumentMatchers.argThat;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.timeout;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
 
 import com.codahale.metrics.ConsoleReporter;
 import java.io.File;
 import java.nio.ByteBuffer;
 import java.nio.file.Files;
 import java.nio.file.Path;
+import java.rmi.StubNotFoundException;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
@@ -56,14 +65,17 @@ import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicReference;
 import java.util.function.BiPredicate;
 import java.util.function.BooleanSupplier;
+import java.util.stream.IntStream;
 import java.util.stream.Stream;
+import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.testframework.WorkDirectory;
 import org.apache.ignite.internal.testframework.WorkDirectoryExtension;
 import org.apache.ignite.lang.IgniteLogger;
 import org.apache.ignite.network.ClusterService;
 import org.apache.ignite.network.NetworkAddress;
-import org.apache.ignite.network.NodeFinder;
 import org.apache.ignite.network.StaticNodeFinder;
+import org.apache.ignite.network.scalecube.TestScaleCubeClusterServiceFactory;
+import org.apache.ignite.raft.jraft.Closure;
 import org.apache.ignite.raft.jraft.Iterator;
 import org.apache.ignite.raft.jraft.JRaftUtils;
 import org.apache.ignite.raft.jraft.Node;
@@ -3002,6 +3014,15 @@ public class ItNodeTest {
 
     @Test
     public void testChangePeers() throws Exception {
+        changePeers(false);
+    }
+
+    @Test
+    public void testChangeAsyncPeers() throws Exception {
+        changePeers(true);
+    }
+
+    private void changePeers(boolean async) throws Exception {
         PeerId peer0 = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT);
         cluster = new TestCluster("testChangePeers", dataPath, Collections.singletonList(peer0), testInfo);
         assertTrue(cluster.start(peer0.getEndpoint()));
@@ -3016,22 +3037,225 @@ public class ItNodeTest {
         }
         for (int i = 0; i < 9; i++) {
             cluster.waitLeader();
+            leader = cluster.getLeader();
+            assertNotNull(leader);
+            PeerId leaderPeer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + i);
+            assertEquals(leaderPeer, leader.getNodeId().getPeerId());
+            PeerId newLeaderPeer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + i + 1);
+            if (async) {
+                SynchronizedClosure done = new SynchronizedClosure();
+                leader.changePeersAsync(new Configuration(Collections.singletonList(newLeaderPeer)),
+                        leader.getCurrentTerm(), done);
+                Status status = done.await();
+                assertTrue(status.isOk(), status.getRaftError().toString());
+                assertTrue(waitForCondition(() -> {
+                    if (cluster.getLeader() != null) {
+                        return newLeaderPeer.equals(cluster.getLeader().getLeaderId());
+                    }
+                    return false;
+                }, 10_000));
+            } else {
+                SynchronizedClosure done = new SynchronizedClosure();
+                leader.changePeers(new Configuration(Collections.singletonList(newLeaderPeer)), done);
+                Status status = done.await();
+                assertTrue(status.isOk(), status.getRaftError().toString());
+            }
+        }
+
+        cluster.waitLeader();
+
+        for (MockStateMachine fsm : cluster.getFsms()) {
+            assertEquals(10, fsm.getLogs().size());
+        }
+    }
+
+    @Test
+    public void testOnReconfigurationErrorListener() throws Exception {
+        PeerId peer0 = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT);
+        cluster = new TestCluster("testChangePeers", dataPath, Collections.singletonList(peer0), testInfo);
+
+        var raftGrpEvtsLsnr = mock(RaftGroupEventsListener.class);
+
+        cluster.setRaftGrpEvtsLsnr(raftGrpEvtsLsnr);
+        assertTrue(cluster.start(peer0.getEndpoint()));
+
+        cluster.waitLeader();
+
+        Node leader = cluster.getLeader();
+        sendTestTaskAndWait(leader);
+
+        verify(raftGrpEvtsLsnr, never()).onNewPeersConfigurationApplied(any());
+
+        PeerId newPeer = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT + 1);
+
+        SynchronizedClosure done = new SynchronizedClosure();
+
+        leader.changePeersAsync(new Configuration(Collections.singletonList(newPeer)),
+                leader.getCurrentTerm(), done);
+        assertEquals(done.await(), Status.OK());
+
+        verify(raftGrpEvtsLsnr, timeout(10_000))
+                .onReconfigurationError(argThat(st -> st.getRaftError() == RaftError.ECATCHUP), any(), anyLong());
+    }
+
+    @Test
+    public void testNewPeersConfigurationAppliedListener() throws Exception {
+        PeerId peer0 = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT);
+        cluster = new TestCluster("testChangePeers", dataPath, Collections.singletonList(peer0), testInfo);
+
+        var raftGrpEvtsLsnr = mock(RaftGroupEventsListener.class);
+
+        cluster.setRaftGrpEvtsLsnr(raftGrpEvtsLsnr);
+        assertTrue(cluster.start(peer0.getEndpoint()));
+
+        cluster.waitLeader();
+
+        Node leader = cluster.getLeader();
+        sendTestTaskAndWait(leader);
+
+        for (int i = 1; i < 5; i++) {
+            PeerId peer = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT + i);
+            assertTrue(cluster.start(peer.getEndpoint(), false, 300));
+        }
+
+        verify(raftGrpEvtsLsnr, never()).onNewPeersConfigurationApplied(any());
+
+        for (int i = 0; i < 4; i++) {
             leader = cluster.getLeader();
             assertNotNull(leader);
             PeerId peer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + i);
             assertEquals(peer, leader.getNodeId().getPeerId());
-            peer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + i + 1);
+            PeerId newPeer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + i + 1);
+
             SynchronizedClosure done = new SynchronizedClosure();
-            leader.changePeers(new Configuration(Collections.singletonList(peer)), done);
-            Status status = done.await();
-            assertTrue(status.isOk(), status.getRaftError().toString());
+            leader.changePeersAsync(new Configuration(Collections.singletonList(newPeer)),
+                    leader.getCurrentTerm(), done);
+            assertEquals(done.await(), Status.OK());
+            assertTrue(waitForCondition(() -> {
+                if (cluster.getLeader() != null) {
+                    return newPeer.equals(cluster.getLeader().getLeaderId());
+                }
+                return false;
+            }, 10_000));
+
+            verify(raftGrpEvtsLsnr, times(1)).onNewPeersConfigurationApplied(Collections.singletonList(newPeer));
+        }
+    }
+
+    @Test
+    public void testChangePeersOnLeaderElected() throws Exception {
+        List<PeerId> peers = IntStream.range(0, 6)
+                .mapToObj(i -> new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT + i))
+                .collect(toList());
+
+        cluster = new TestCluster("testChangePeers", dataPath, peers, testInfo);
+
+        var raftGrpEvtsLsnr = mock(RaftGroupEventsListener.class);
+
+        cluster.setRaftGrpEvtsLsnr(raftGrpEvtsLsnr);
+
+        for (PeerId p: peers) {
+            assertTrue(cluster.start(p.getEndpoint(), false, 300));
         }
 
         cluster.waitLeader();
 
+        verify(raftGrpEvtsLsnr, times(1)).onLeaderElected(anyLong());
+
+        cluster.stop(cluster.getLeader().getLeaderId().getEndpoint());
+
+        cluster.waitLeader();
+
+        verify(raftGrpEvtsLsnr, times(2)).onLeaderElected(anyLong());
+
+        cluster.stop(cluster.getLeader().getLeaderId().getEndpoint());
+
+        cluster.waitLeader();
+
+        verify(raftGrpEvtsLsnr, times(3)).onLeaderElected(anyLong());
+    }
+
+    @Test
+    public void changePeersAsyncResponses() throws Exception {
+        PeerId peer0 = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT);
+        cluster = new TestCluster("testChangePeers", dataPath, Collections.singletonList(peer0), testInfo);
+        assertTrue(cluster.start(peer0.getEndpoint()));
+
+        cluster.waitLeader();
+        Node leader = cluster.getLeader();
+        sendTestTaskAndWait(leader);
+
+        PeerId peer = new PeerId(TestUtils.getLocalAddress(), TestUtils.INIT_PORT + 1);
+        assertTrue(cluster.start(peer.getEndpoint(), false, 300));
+
+        cluster.waitLeader();
+        leader = cluster.getLeader();
+        assertNotNull(leader);
+        PeerId leaderPeer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort());
+        assertEquals(leaderPeer, leader.getNodeId().getPeerId());
+
+        PeerId newLeaderPeer = new PeerId(TestUtils.getLocalAddress(), peer0.getEndpoint().getPort() + 1);
+
+        // wrong leader term, do nothing
+        SynchronizedClosure done = new SynchronizedClosure();
+        leader.changePeersAsync(new Configuration(Collections.singletonList(newLeaderPeer)),
+                leader.getCurrentTerm() - 1, done);
+        assertEquals(done.await(), Status.OK());
+
+        // the same config, do nothing
+        done = new SynchronizedClosure();
+        leader.changePeersAsync(new Configuration(Collections.singletonList(leaderPeer)),
+                leader.getCurrentTerm(), done);
+        assertEquals(done.await(), Status.OK());
+
+        // change peer to new conf containing only new node
+        done = new SynchronizedClosure();
+        leader.changePeersAsync(new Configuration(Collections.singletonList(newLeaderPeer)),
+                leader.getCurrentTerm(), done);
+        assertEquals(done.await(), Status.OK());
+
+        assertTrue(waitForCondition(() -> {
+            if (cluster.getLeader() != null)
+                return newLeaderPeer.equals(cluster.getLeader().getLeaderId());
+            return false;
+        }, 10_000));
+
         for (MockStateMachine fsm : cluster.getFsms()) {
             assertEquals(10, fsm.getLogs().size());
         }
+
+        // check concurrent start of two async change peers.
+        Node newLeader = cluster.getLeader();
+
+        sendTestTaskAndWait(newLeader);
+
+        ExecutorService executor = Executors.newFixedThreadPool(10);
+
+        List<SynchronizedClosure> dones = new ArrayList<>();
+        List<Future> futs = new ArrayList<>();
+
+        for (int i = 0; i < 2; i++) {
+            SynchronizedClosure newDone = new SynchronizedClosure();
+            dones.add(newDone);
+            futs.add(executor.submit(() -> {
+                newLeader.changePeersAsync(new Configuration(Collections.singletonList(peer0)), 2, newDone);
+            }));
+        }
+        futs.get(0).get();
+        futs.get(1).get();
+
+        assertEquals(dones.get(0).await(), Status.OK());
+        assertEquals(dones.get(1).await().getRaftError(), RaftError.EBUSY);
+
+        assertTrue(waitForCondition(() -> {
+            if (cluster.getLeader() != null)
+                return peer0.equals(cluster.getLeader().getLeaderId());
+            return false;
+        }, 10_000));
+
+        for (MockStateMachine fsm : cluster.getFsms()) {
+            assertEquals(20, fsm.getLogs().size());
+        }
     }
 
     @Test
diff --git a/modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java b/modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java
index 1e37a424c..96407ad77 100644
--- a/modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java
+++ b/modules/raft/src/main/java/org/apache/ignite/internal/raft/Loza.java
@@ -29,6 +29,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.function.Supplier;
 import java.util.stream.Collectors;
 import org.apache.ignite.internal.manager.IgniteComponent;
+import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.raft.server.RaftServer;
 import org.apache.ignite.internal.raft.server.impl.JraftServerImpl;
 import org.apache.ignite.internal.thread.NamedThreadFactory;
@@ -45,7 +46,6 @@ import org.apache.ignite.raft.client.service.RaftGroupService;
 import org.apache.ignite.raft.jraft.RaftMessagesFactory;
 import org.apache.ignite.raft.jraft.rpc.impl.RaftGroupServiceImpl;
 import org.apache.ignite.raft.jraft.util.Utils;
-import org.jetbrains.annotations.ApiStatus.Experimental;
 import org.jetbrains.annotations.TestOnly;
 
 /**
@@ -145,16 +145,12 @@ public class Loza implements IgniteComponent {
      * Creates a raft group service providing operations on a raft group. If {@code nodes} contains the current node, then raft group starts
      * on the current node.
      *
-     * <p>IMPORTANT: DON'T USE. This method should be used only for long running changePeers requests - until IGNITE-14209 will be fixed
-     * with stable solution.
-     *
      * @param groupId      Raft group id.
      * @param nodes        Raft group nodes.
      * @param lsnrSupplier Raft group listener supplier.
      * @return Future representing pending completion of the operation.
      * @throws NodeStoppingException If node stopping intention was detected.
      */
-    @Experimental
     public CompletableFuture<RaftGroupService> prepareRaftGroup(
             String groupId,
             List<ClusterNode> nodes,
@@ -165,7 +161,7 @@ public class Loza implements IgniteComponent {
         }
 
         try {
-            return prepareRaftGroupInternal(groupId, nodes, lsnrSupplier);
+            return prepareRaftGroupInternal(groupId, nodes, lsnrSupplier, () -> RaftGroupEventsListener.noopLsnr);
         } finally {
             busyLock.leaveBusy();
         }
@@ -174,13 +170,14 @@ public class Loza implements IgniteComponent {
     /**
      * Internal method to a raft group creation.
      *
-     * @param groupId      Raft group id.
-     * @param nodes        Raft group nodes.
-     * @param lsnrSupplier Raft group listener supplier.
+     * @param groupId                 Raft group id.
+     * @param nodes                   Raft group nodes.
+     * @param lsnrSupplier            Raft group listener supplier.
+     * @param raftGrpEvtsLsnrSupplier Raft group events listener supplier.
      * @return Future representing pending completion of the operation.
      */
     private CompletableFuture<RaftGroupService> prepareRaftGroupInternal(String groupId, List<ClusterNode> nodes,
-            Supplier<RaftGroupListener> lsnrSupplier) {
+            Supplier<RaftGroupListener> lsnrSupplier, Supplier<RaftGroupEventsListener> raftGrpEvtsLsnrSupplier) {
         assert !nodes.isEmpty();
 
         List<Peer> peers = nodes.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
@@ -190,7 +187,7 @@ public class Loza implements IgniteComponent {
         boolean hasLocalRaft = nodes.stream().anyMatch(n -> locNodeName.equals(n.name()));
 
         if (hasLocalRaft) {
-            if (!raftServer.startRaftGroup(groupId, lsnrSupplier.get(), peers)) {
+            if (!raftServer.startRaftGroup(groupId, raftGrpEvtsLsnrSupplier.get(), lsnrSupplier.get(), peers)) {
                 throw new IgniteInternalException(IgniteStringFormatter.format(
                         "Raft group on the node is already started [node={}, raftGrp={}]",
                         locNodeName,
@@ -212,30 +209,72 @@ public class Loza implements IgniteComponent {
         );
     }
 
+    /**
+     * Starts a raft group node locally if {@code deltaNodes} contains the current node.
+     *
+     * @param grpId                   Raft group id.
+     * @param nodes                   Full set of raft group nodes.
+     * @param deltaNodes              New raft group nodes.
+     * @param lsnrSupplier            Raft group listener supplier.
+     * @param raftGrpEvtsLsnrSupplier Raft group events listener supplier.
+     * @throws NodeStoppingException If node stopping intention was detected.
+     */
+    public void startRaftGroupNode(
+            String grpId,
+            Collection<ClusterNode> nodes,
+            Collection<ClusterNode> deltaNodes,
+            Supplier<RaftGroupListener> lsnrSupplier,
+            Supplier<RaftGroupEventsListener> raftGrpEvtsLsnrSupplier) throws NodeStoppingException {
+        assert !nodes.isEmpty();
+
+        if (!busyLock.enterBusy()) {
+            throw new NodeStoppingException();
+        }
+
+        try {
+            List<Peer> peers = nodes.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
+
+            String locNodeName = clusterNetSvc.topologyService().localMember().name();
+
+            if (deltaNodes.stream().anyMatch(n -> locNodeName.equals(n.name()))) {
+                if (!raftServer.startRaftGroup(grpId, raftGrpEvtsLsnrSupplier.get(), lsnrSupplier.get(), peers)) {
+                    throw new IgniteInternalException(IgniteStringFormatter.format(
+                            "Raft group on the node is already started [node={}, raftGrp={}]",
+                            locNodeName,
+                            grpId
+                    ));
+                }
+            }
+        } finally {
+            busyLock.leaveBusy();
+        }
+    }
+
     /**
      * Creates a raft group service providing operations on a raft group. If {@code deltaNodes} contains the current node, then raft group
      * starts on the current node.
      *
-     * @param groupId      Raft group id.
-     * @param nodes        Full set of raft group nodes.
-     * @param deltaNodes   New raft group nodes.
-     * @param lsnrSupplier Raft group listener supplier.
+     * @param grpId                   Raft group id.
+     * @param nodes                   Full set of raft group nodes.
+     * @param deltaNodes              New raft group nodes.
+     * @param lsnrSupplier            Raft group listener supplier.
+     * @param raftGrpEvtsLsnrSupplier Raft group events listener supplier.
      * @return Future representing pending completion of the operation.
      * @throws NodeStoppingException If node stopping intention was detected.
      */
-    @Experimental
     public CompletableFuture<RaftGroupService> updateRaftGroup(
-            String groupId,
+            String grpId,
             Collection<ClusterNode> nodes,
             Collection<ClusterNode> deltaNodes,
-            Supplier<RaftGroupListener> lsnrSupplier
+            Supplier<RaftGroupListener> lsnrSupplier,
+            Supplier<RaftGroupEventsListener> raftGrpEvtsLsnrSupplier
     ) throws NodeStoppingException {
         if (!busyLock.enterBusy()) {
             throw new NodeStoppingException();
         }
 
         try {
-            return updateRaftGroupInternal(groupId, nodes, deltaNodes, lsnrSupplier);
+            return updateRaftGroupInternal(grpId, nodes, deltaNodes, lsnrSupplier, raftGrpEvtsLsnrSupplier);
         } finally {
             busyLock.leaveBusy();
         }
@@ -244,14 +283,19 @@ public class Loza implements IgniteComponent {
     /**
      * Internal method for updating a raft group.
      *
-     * @param groupId      Raft group id.
-     * @param nodes        Full set of raft group nodes.
-     * @param deltaNodes   New raft group nodes.
-     * @param lsnrSupplier Raft group listener supplier.
+     * @param grpId                   Raft group id.
+     * @param nodes                   Full set of raft group nodes.
+     * @param deltaNodes              New raft group nodes.
+     * @param lsnrSupplier            Raft group listener supplier.
+     * @param raftGrpEvtsLsnrSupplier Raft group events listener supplier.
      * @return Future representing pending completion of the operation.
      */
-    private CompletableFuture<RaftGroupService> updateRaftGroupInternal(String groupId, Collection<ClusterNode> nodes,
-            Collection<ClusterNode> deltaNodes, Supplier<RaftGroupListener> lsnrSupplier) {
+    private CompletableFuture<RaftGroupService> updateRaftGroupInternal(
+            String grpId,
+            Collection<ClusterNode> nodes,
+            Collection<ClusterNode> deltaNodes,
+            Supplier<RaftGroupListener> lsnrSupplier,
+            Supplier<RaftGroupEventsListener> raftGrpEvtsLsnrSupplier) {
         assert !nodes.isEmpty();
 
         List<Peer> peers = nodes.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
@@ -259,17 +303,17 @@ public class Loza implements IgniteComponent {
         String locNodeName = clusterNetSvc.topologyService().localMember().name();
 
         if (deltaNodes.stream().anyMatch(n -> locNodeName.equals(n.name()))) {
-            if (!raftServer.startRaftGroup(groupId, lsnrSupplier.get(), peers)) {
+            if (!raftServer.startRaftGroup(grpId, raftGrpEvtsLsnrSupplier.get(), lsnrSupplier.get(), peers)) {
                 throw new IgniteInternalException(IgniteStringFormatter.format(
                         "Raft group on the node is already started [node={}, raftGrp={}]",
                         locNodeName,
-                        groupId
+                        grpId
                 ));
             }
         }
 
         return RaftGroupServiceImpl.start(
-                groupId,
+                grpId,
                 clusterNetSvc,
                 FACTORY,
                 RETRY_TIMEOUT,
@@ -281,57 +325,6 @@ public class Loza implements IgniteComponent {
         );
     }
 
-    /**
-     * Changes peers for a group from {@code expectedNodes} to {@code changedNodes}.
-     *
-     * @param groupId       Raft group id.
-     * @param expectedNodes List of nodes that contains the raft group peers.
-     * @param changedNodes  List of nodes that will contain the raft group peers after.
-     * @return Future which will complete when peers change.
-     * @throws NodeStoppingException If node stopping intention was detected.
-     */
-    public CompletableFuture<Void> changePeers(
-            String groupId,
-            List<ClusterNode> expectedNodes,
-            List<ClusterNode> changedNodes
-    ) throws NodeStoppingException {
-        if (!busyLock.enterBusy()) {
-            throw new NodeStoppingException();
-        }
-
-        try {
-            return changePeersInternal(groupId, expectedNodes, changedNodes);
-        } finally {
-            busyLock.leaveBusy();
-        }
-    }
-
-    /**
-     * Internal method for changing peers for a RAFT group.
-     *
-     * @param groupId       Raft group id.
-     * @param expectedNodes List of nodes that contains the raft group peers.
-     * @param changedNodes  List of nodes that will contain the raft group peers after.
-     * @return Future which will complete when peers change.
-     */
-    private CompletableFuture<Void> changePeersInternal(String groupId, List<ClusterNode> expectedNodes, List<ClusterNode> changedNodes) {
-        List<Peer> expectedPeers = expectedNodes.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
-        List<Peer> changedPeers = changedNodes.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
-
-        return RaftGroupServiceImpl.start(
-                groupId,
-                clusterNetSvc,
-                FACTORY,
-                10 * RETRY_TIMEOUT,
-                10 * RPC_TIMEOUT,
-                expectedPeers,
-                true,
-                DELAY,
-                executor
-        ).thenCompose(srvc -> srvc.changePeers(changedPeers)
-                .thenRun(() -> srvc.shutdown()));
-    }
-
     /**
      * Stops a raft group on the current node.
      *
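For context, the reworked Loza API is expected to be driven roughly as in the following sketch; the group id and the PartitionListener/RebalanceRaftGroupEventsListener types are hypothetical placeholders used only for illustration:

    // Start the group on newly assigned nodes and obtain a service for further membership operations.
    CompletableFuture<RaftGroupService> svcFut = loza.updateRaftGroup(
            "table1_part_0",                               // hypothetical raft group id
            assignments,                                   // full set of raft group nodes
            addedNodes,                                    // nodes that must start the group locally
            () -> new PartitionListener(),                 // hypothetical RaftGroupListener
            () -> new RebalanceRaftGroupEventsListener()); // hypothetical RaftGroupEventsListener

    // startRaftGroupNode() covers the case where only the local node has to be started,
    // without creating a client-side RaftGroupService.
    loza.startRaftGroupNode("table1_part_0", assignments, addedNodes,
            () -> new PartitionListener(), () -> new RebalanceRaftGroupEventsListener());

Both calls throw NodeStoppingException if the node is being stopped.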
diff --git a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftGroupEventsListener.java b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftGroupEventsListener.java
new file mode 100644
index 000000000..1779846ac
--- /dev/null
+++ b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftGroupEventsListener.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.raft.server;
+
+import java.util.List;
+import org.apache.ignite.raft.jraft.Status;
+import org.apache.ignite.raft.jraft.entity.PeerId;
+
+/**
+ * Listener for group membership and other events.
+ */
+public interface RaftGroupEventsListener {
+    /**
+     * Invoked when a new leader is elected (including the very first leader of the group).
+     *
+     * @param term Raft term of the current leader.
+     */
+    void onLeaderElected(long term);
+
+    /**
+     * Invoked on the leader when a new peer configuration has been applied to the raft group.
+     *
+     * @param peers List of peers that form the newly applied raft group configuration.
+     */
+    void onNewPeersConfigurationApplied(List<PeerId> peers);
+
+    /**
+     * Invoked on the leader when membership reconfiguration has failed; the reason is described by {@link Status}.
+     *
+     * @param status Status describing the failure.
+     * @param peers List of peers that were the target of the failed reconfiguration.
+     * @param term Raft term of the current leader.
+     */
+    void onReconfigurationError(Status status, List<PeerId> peers, long term);
+
+    /**
+     * No-op raft group events listener.
+     */
+    RaftGroupEventsListener noopLsnr = new RaftGroupEventsListener() {
+        /** {@inheritDoc} */
+        @Override
+        public void onLeaderElected(long term) { }
+
+        /** {@inheritDoc} */
+        @Override
+        public void onNewPeersConfigurationApplied(List<PeerId> peers) { }
+
+        /** {@inheritDoc} */
+        @Override
+        public void onReconfigurationError(Status status, List<PeerId> peers, long term) {}
+    };
+
+}
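A minimal sketch of an implementation, assuming a hypothetical rebalance handler that reacts to the three callbacks; it only illustrates the contract defined above:

    RaftGroupEventsListener rebalanceEventsLsnr = new RaftGroupEventsListener() {
        @Override
        public void onLeaderElected(long term) {
            // A newly elected leader re-reads pending assignments and retries reconfiguration if needed.
        }

        @Override
        public void onNewPeersConfigurationApplied(List<PeerId> peers) {
            // Record the applied peers as the new stable assignments (hypothetical step).
        }

        @Override
        public void onReconfigurationError(Status status, List<PeerId> peers, long term) {
            // Log the failure and schedule a retry with the same target peers (hypothetical step).
        }
    };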
diff --git a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftServer.java b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftServer.java
index 526cc4d50..3e795d73f 100644
--- a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftServer.java
+++ b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/RaftServer.java
@@ -47,6 +47,18 @@ public interface RaftServer extends IgniteComponent {
      */
     boolean startRaftGroup(String groupId, RaftGroupListener lsnr, List<Peer> initialConf);
 
+    /**
+     * Starts a raft group bound to this cluster node.
+     *
+     * @param groupId     Group id.
+     * @param evLsnr      Listener for group membership and other events.
+     * @param lsnr        Listener for state machine events.
+     * @param initialConf Initial group configuration.
+     * @return {@code True} if the group was successfully started, {@code False} if a group with the given name already exists.
+     */
+    boolean startRaftGroup(String groupId, RaftGroupEventsListener evLsnr,
+            RaftGroupListener lsnr, List<Peer> initialConf);
+
     /**
      * Synchronously stops a raft group if any.
      *
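Usage of the new overload is a one-liner; the listener instances below are placeholders, and the boolean result distinguishes a fresh start from a group that is already running:

    boolean started = raftServer.startRaftGroup(
            "table1_part_0",                  // hypothetical group id
            RaftGroupEventsListener.noopLsnr, // or a rebalance-aware listener
            new PartitionListener(),          // hypothetical RaftGroupListener
            initialPeers);

    if (!started) {
        // A group with this id is already started on the node.
    }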
diff --git a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/impl/JraftServerImpl.java b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/impl/JraftServerImpl.java
index 1df80378e..d15693164 100644
--- a/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/impl/JraftServerImpl.java
+++ b/modules/raft/src/main/java/org/apache/ignite/internal/raft/server/impl/JraftServerImpl.java
@@ -30,7 +30,9 @@ import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.ExecutorService;
+import java.util.function.BiPredicate;
 import java.util.stream.Collectors;
+import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.raft.server.RaftServer;
 import org.apache.ignite.internal.thread.NamedThreadFactory;
 import org.apache.ignite.lang.IgniteInternalException;
@@ -69,7 +71,9 @@ import org.apache.ignite.raft.jraft.storage.snapshot.SnapshotWriter;
 import org.apache.ignite.raft.jraft.util.ExecutorServiceHelper;
 import org.apache.ignite.raft.jraft.util.ExponentialBackoffTimeoutStrategy;
 import org.apache.ignite.raft.jraft.util.JDKMarshaller;
+import org.jetbrains.annotations.NotNull;
 import org.jetbrains.annotations.Nullable;
+import org.jetbrains.annotations.TestOnly;
 
 /**
  * Raft server implementation on top of forked JRaft library.
@@ -313,6 +317,13 @@ public class JraftServerImpl implements RaftServer {
     /** {@inheritDoc} */
     @Override
     public synchronized boolean startRaftGroup(String groupId, RaftGroupListener lsnr, @Nullable List<Peer> initialConf) {
+        return startRaftGroup(groupId, RaftGroupEventsListener.noopLsnr, lsnr, initialConf);
+    }
+
+    /** {@inheritDoc} */
+    @Override
+    public synchronized boolean startRaftGroup(String groupId, @NotNull RaftGroupEventsListener evLsnr,
+            RaftGroupListener lsnr, @Nullable List<Peer> initialConf) {
         if (groups.containsKey(groupId)) {
             return false;
         }
@@ -333,6 +344,8 @@ public class JraftServerImpl implements RaftServer {
 
         nodeOptions.setFsm(new DelegatingStateMachine(lsnr));
 
+        nodeOptions.setRaftGrpEvtsLsnr(evLsnr);
+
         if (initialConf != null) {
             List<PeerId> mapped = initialConf.stream().map(PeerId::fromPeer).collect(Collectors.toList());
 
@@ -400,6 +413,31 @@ public class JraftServerImpl implements RaftServer {
         return groups.keySet();
     }
 
+    /**
+     * Blocks messages for the raft group node according to the provided predicate.
+     *
+     * @param groupId Raft group id.
+     * @param predicate Predicate that selects which messages to block.
+     */
+    @TestOnly
+    public void blockMessages(String groupId, BiPredicate<Object, String> predicate) {
+        IgniteRpcClient client = (IgniteRpcClient) groups.get(groupId).getNodeOptions().getRpcClient();
+
+        client.blockMessages(predicate);
+    }
+
+    /**
+     * Stops blocking messages for the raft group node.
+     *
+     * @param groupId Raft group id.
+     */
+    @TestOnly
+    public void stopBlockMessages(String groupId) {
+        IgniteRpcClient client = (IgniteRpcClient) groups.get(groupId).getNodeOptions().getRpcClient();
+
+        client.stopBlock();
+    }
+
     /**
      * Wrapper of {@link StateMachineAdapter}.
      */
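A sketch of how a test might use the new @TestOnly hooks; the predicate shown is only an example and assumes it should drop snapshot install requests for the group:

    static void withBlockedSnapshots(JraftServerImpl server, String grpId, Runnable scenario) {
        // Drop every InstallSnapshotRequest sent by this group's node (example predicate).
        server.blockMessages(grpId, (msg, nodeId) -> msg instanceof RpcRequests.InstallSnapshotRequest);

        try {
            scenario.run();
        } finally {
            server.stopBlockMessages(grpId);
        }
    }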
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/Node.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/Node.java
index 1ab3fd409..cac7fb393 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/Node.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/Node.java
@@ -189,6 +189,17 @@ public interface Node extends Lifecycle<NodeOptions>, Describer {
      */
     void changePeers(final Configuration newPeers, final Closure done);
 
+    /**
+     * Asynchronously changes the configuration of the raft group to |newPeers|. If the done closure completes with
+     * {@link Status#OK()}, then it is guaranteed that the state of {@link org.apache.ignite.raft.jraft.core.NodeImpl.ConfigurationCtx}
+     * has been switched to {@code STAGE_CATCHING_UP}.
+     *
+     * @param newPeers New configuration of peers.
+     * @param term Term of the leader observed by the caller; the request is ignored if it does not match the current term.
+     * @param done Callback invoked once the configuration change has been accepted.
+     */
+    void changePeersAsync(final Configuration newPeers, long term, final Closure done);
+
     /**
      * Reset the configuration of this node individually, without any replication to other peers before this node
      * becomes the leader. This function is supposed to be invoked when the majority of the replication group are dead
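For clarity, the intended call pattern looks roughly as follows; the done closure only signals that the catching-up stage has started (or that another change is in flight), not that the new configuration has been committed:

    Configuration newConf = new Configuration(Collections.singletonList(newPeer));
    SynchronizedClosure done = new SynchronizedClosure();

    // A stale term makes the node skip the change and still complete the closure with OK.
    node.changePeersAsync(newConf, node.getCurrentTerm(), done);

    Status status = done.await(); // OK: catching up has started; EBUSY: another reconfiguration is in progress.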
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/RaftMessageGroup.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/RaftMessageGroup.java
index 2295b8bb1..9aae7bc93 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/RaftMessageGroup.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/RaftMessageGroup.java
@@ -83,6 +83,12 @@ public class RaftMessageGroup {
 
         /** */
         public static final short LEARNERS_OP_RESPONSE = 1016;
+
+        /** */
+        public static final short CHANGE_PEERS_ASYNC_REQUEST = 1017;
+
+        /** */
+        public static final short CHANGE_PEERS_ASYNC_RESPONSE = 1018;
     }
 
     /**
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/core/NodeImpl.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/core/NodeImpl.java
index 1c00f49ef..80bc52e72 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/core/NodeImpl.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/core/NodeImpl.java
@@ -36,6 +36,7 @@ import java.util.concurrent.locks.ReadWriteLock;
 import java.util.stream.Collectors;
 import org.apache.ignite.internal.thread.NamedThreadFactory;
 import org.apache.ignite.lang.IgniteLogger;
+import org.apache.ignite.network.NetworkAddress;
 import org.apache.ignite.raft.client.Peer;
 import org.apache.ignite.raft.jraft.Closure;
 import org.apache.ignite.raft.jraft.FSMCaller;
@@ -330,7 +331,7 @@ public class NodeImpl implements Node, RaftServerService {
         /**
          * Start change configuration.
          */
-        void start(final Configuration oldConf, final Configuration newConf, final Closure done) {
+        void start(final Configuration oldConf, final Configuration newConf, final Closure done, boolean async) {
             if (isBusy()) {
                 if (done != null) {
                     Utils.runClosureInThread(this.node.getOptions().getCommonExecutor(), done, new Status(RaftError.EBUSY, "Already in busy stage."));
@@ -345,6 +346,9 @@ public class NodeImpl implements Node, RaftServerService {
             }
             this.done = done;
             this.stage = Stage.STAGE_CATCHING_UP;
+            if (async) {
+                Utils.runClosureInThread(this.node.getOptions().getCommonExecutor(), done, Status.OK());
+            }
             this.oldPeers = oldConf.listPeers();
             this.newPeers = newConf.listPeers();
             this.oldLearners = oldConf.listLearners();
@@ -385,7 +389,7 @@ public class NodeImpl implements Node, RaftServerService {
         private void addNewLearners() {
             final Set<PeerId> addingLearners = new HashSet<>(this.newLearners);
             addingLearners.removeAll(this.oldLearners);
-            LOG.info("Adding learners: {}.", this.addingPeers);
+            LOG.info("Adding learners: {}.", addingLearners);
             for (final PeerId newLearner : addingLearners) {
                 if (!this.node.replicatorGroup.addReplicator(newLearner, ReplicatorType.Learner)) {
                     LOG.error("Node {} start the learner replicator failed, peer={}.", this.node.getNodeId(),
@@ -427,15 +431,32 @@ public class NodeImpl implements Node, RaftServerService {
                 this.node.stopReplicator(this.oldPeers, this.newPeers);
                 this.node.stopReplicator(this.oldLearners, this.newLearners);
             }
+
+            // must be copied before clearing
+            final List<PeerId> resultPeerIds = new ArrayList<>(this.newPeers);
+
             clearPeers();
             clearLearners();
 
             this.version++;
             this.stage = Stage.STAGE_NONE;
             this.nchanges = 0;
+
+            Closure oldDoneClosure = done;
+
             if (this.done != null) {
-                Utils.runClosureInThread(this.node.getOptions().getCommonExecutor(), this.done, st != null ? st :
-                    new Status(RaftError.EPERM, "Leader stepped down."));
+                Closure newDone = (Status status) -> {
+                    if (status.isOk()) {
+                        node.getOptions().getRaftGrpEvtsLsnr().onNewPeersConfigurationApplied(resultPeerIds);
+                    } else {
+                        node.getOptions().getRaftGrpEvtsLsnr().onReconfigurationError(status, resultPeerIds, node.getCurrentTerm());
+                    }
+                    oldDoneClosure.run(status);
+                };
+
+                // TODO: in the case of changePeersAsync this invocation is redundant, because the OK response has already been sent from the done closure.
+                Utils.runClosureInThread(this.node.getOptions().getCommonExecutor(), newDone, st != null ? st :
+                        new Status(RaftError.EPERM, "Leader stepped down."));
                 this.done = null;
             }
         }
@@ -1352,6 +1373,7 @@ public class NodeImpl implements Node, RaftServerService {
             throw new IllegalStateException();
         }
         this.confCtx.flush(this.conf.getConf(), this.conf.getOldConf());
+
         resetElectionTimeoutToInitial();
         this.stepDownTimer.start();
     }
@@ -2445,6 +2467,9 @@ public class NodeImpl implements Node, RaftServerService {
             if (status.isOk()) {
                 onConfigurationChangeDone(this.term);
                 if (this.leaderStart) {
+                    if (getOptions().getRaftGrpEvtsLsnr() != null) {
+                        options.getRaftGrpEvtsLsnr().onLeaderElected(term);
+                    }
                     getOptions().getFsm().onLeaderStart(this.term);
                 }
             }
@@ -2477,8 +2502,12 @@ public class NodeImpl implements Node, RaftServerService {
         checkAndSetConfiguration(false);
     }
 
+    private void unsafeRegisterConfChange(final Configuration oldConf, final Configuration newConf, final Closure done) {
+        unsafeRegisterConfChange(oldConf, newConf, done, false);
+    }
+
     private void unsafeRegisterConfChange(final Configuration oldConf, final Configuration newConf,
-        final Closure done) {
+        final Closure done, boolean async) {
 
         Requires.requireTrue(newConf.isValid(), "Invalid new conf: %s", newConf);
         // The new conf entry(will be stored in log manager) should be valid
@@ -2509,10 +2538,16 @@ public class NodeImpl implements Node, RaftServerService {
         }
         // Return immediately when the new peers equals to current configuration
         if (this.conf.getConf().equals(newConf)) {
-            Utils.runClosureInThread(this.getOptions().getCommonExecutor(), done);
+            Closure newDone = (Status status) -> {
+                // onNewPeersConfigurationApplied must still be invoked here, otherwise the callback could be lost.
+                // For example, the old leader might have failed just before invoking onNewPeersConfigurationApplied.
+                this.getOptions().getRaftGrpEvtsLsnr().onNewPeersConfigurationApplied(newConf.getPeers());
+                done.run(status);
+            };
+            Utils.runClosureInThread(this.getOptions().getCommonExecutor(), newDone);
             return;
         }
-        this.confCtx.start(oldConf, newConf, done);
+        this.confCtx.start(oldConf, newConf, done, async);
     }
 
     private void afterShutdown() {
@@ -3218,6 +3253,32 @@ public class NodeImpl implements Node, RaftServerService {
         }
     }
 
+    @Override
+    public void changePeersAsync(final Configuration newPeers, long term, Closure done) {
+        Requires.requireNonNull(newPeers, "Null new peers");
+        Requires.requireTrue(!newPeers.isEmpty(), "Empty new peers");
+        this.writeLock.lock();
+        try {
+            long currentTerm = getCurrentTerm();
+
+            if (currentTerm != term) {
+                LOG.warn("Node {} refused configuration because of mismatching terms. Current term is {}, but provided is {}.",
+                        getNodeId(), currentTerm, term);
+
+                Utils.runClosureInThread(this.getOptions().getCommonExecutor(), done, Status.OK());
+
+                return;
+            }
+
+            LOG.info("Node {} change peers from {} to {}.", getNodeId(), this.conf.getConf(), newPeers);
+
+            unsafeRegisterConfChange(this.conf.getConf(), newPeers, done, true);
+        }
+        finally {
+            this.writeLock.unlock();
+        }
+    }
+
     @Override
     public Status resetPeers(final Configuration newPeers) {
         Requires.requireNonNull(newPeers, "Null new peers");
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/option/NodeOptions.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/option/NodeOptions.java
index 76279a78b..31e151746 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/option/NodeOptions.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/option/NodeOptions.java
@@ -18,6 +18,7 @@ package org.apache.ignite.raft.jraft.option;
 
 import java.util.List;
 import java.util.concurrent.ExecutorService;
+import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.raft.jraft.util.TimeoutStrategy;
 import org.apache.ignite.raft.jraft.util.NoopTimeoutStrategy;
 import org.apache.ignite.raft.jraft.JRaftServiceFactory;
@@ -37,6 +38,7 @@ import org.apache.ignite.raft.jraft.util.StringUtils;
 import org.apache.ignite.raft.jraft.util.Utils;
 import org.apache.ignite.raft.jraft.util.concurrent.FixedThreadsExecutorGroup;
 import org.apache.ignite.raft.jraft.util.timer.Timer;
+import org.jetbrains.annotations.NotNull;
 
 /**
  * Node options.
@@ -104,6 +106,9 @@ public class NodeOptions extends RpcOptions implements Copiable<NodeOptions> {
     // a valid instance.
     private StateMachine fsm;
 
+    // Listener for raft group reconfiguration events.
+    private RaftGroupEventsListener raftGrpEvtsLsnr;
+
     // Describe a specific RaftMetaStorage in format ${type}://${parameters}
     private String raftMetaUri;
 
@@ -424,6 +429,14 @@ public class NodeOptions extends RpcOptions implements Copiable<NodeOptions> {
         this.initialConf = initialConf;
     }
 
+    public RaftGroupEventsListener getRaftGrpEvtsLsnr() {
+        return raftGrpEvtsLsnr;
+    }
+
+    public void setRaftGrpEvtsLsnr(@NotNull RaftGroupEventsListener raftGrpEvtsLsnr) {
+        this.raftGrpEvtsLsnr = raftGrpEvtsLsnr;
+    }
+
     public StateMachine getFsm() {
         return this.fsm;
     }
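Server-side wiring then reduces to setting the listener on the node options, as JraftServerImpl does above; a short sketch:

    NodeOptions opts = new NodeOptions();
    opts.setFsm(stateMachine);                                 // the group's state machine
    opts.setRaftGrpEvtsLsnr(RaftGroupEventsListener.noopLsnr); // or a rebalance-aware listener

Note that the getter may return null if the setter was never called, and NodeImpl invokes the listener without a null check in the reconfiguration path, which is why JraftServerImpl and TestCluster fall back to noopLsnr by default.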
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/CliRequests.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/CliRequests.java
index f005ed22d..ceee9a35c 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/CliRequests.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/CliRequests.java
@@ -20,8 +20,9 @@
 package org.apache.ignite.raft.jraft.rpc;
 
 import java.util.Collection;
-import org.apache.ignite.network.annotations.Transferable;
 import org.apache.ignite.raft.jraft.RaftMessageGroup;
+import org.apache.ignite.network.annotations.Transferable;
+import org.apache.ignite.raft.jraft.RaftMessageGroup.RpcClientMessageGroup;
 
 public final class CliRequests {
     @Transferable(value = RaftMessageGroup.RpcClientMessageGroup.ADD_PEER_REQUEST)
@@ -72,6 +73,24 @@ public final class CliRequests {
         Collection<String> newPeersList();
     }
 
+    @Transferable(value = RpcClientMessageGroup.CHANGE_PEERS_ASYNC_REQUEST)
+    public interface ChangePeersAsyncRequest extends Message {
+        String groupId();
+
+        String leaderId();
+
+        Collection<String> newPeersList();
+
+        long term();
+    }
+
+    @Transferable(value = RpcClientMessageGroup.CHANGE_PEERS_ASYNC_RESPONSE)
+    public interface ChangePeersAsyncResponse extends Message {
+        Collection<String> oldPeersList();
+
+        Collection<String> newPeersList();
+    }
+
     @Transferable(value = RaftMessageGroup.RpcClientMessageGroup.SNAPSHOT_REQUEST)
     public interface SnapshotRequest extends Message {
         String groupId();
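For illustration, the new request is built through the generated message factory in the same way as in RaftGroupServiceImpl and the processor test below; the peer addresses here are placeholders:

    ChangePeersAsyncRequest req = raftMessagesFactory.changePeersAsyncRequest()
            .groupId("table1_part_0")                                   // hypothetical group id
            .leaderId(leaderPeerId.toString())
            .newPeersList(List.of("localhost:20001", "localhost:20002"))
            .term(observedLeaderTerm)
            .build();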
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/IgniteRpcServer.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/IgniteRpcServer.java
index 49de89069..7681e7979 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/IgniteRpcServer.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/IgniteRpcServer.java
@@ -38,6 +38,7 @@ import org.apache.ignite.raft.jraft.rpc.RpcProcessor;
 import org.apache.ignite.raft.jraft.rpc.RpcServer;
 import org.apache.ignite.raft.jraft.rpc.impl.cli.AddLearnersRequestProcessor;
 import org.apache.ignite.raft.jraft.rpc.impl.cli.AddPeerRequestProcessor;
+import org.apache.ignite.raft.jraft.rpc.impl.cli.ChangePeersAsyncRequestProcessor;
 import org.apache.ignite.raft.jraft.rpc.impl.cli.ChangePeersRequestProcessor;
 import org.apache.ignite.raft.jraft.rpc.impl.cli.GetLeaderRequestProcessor;
 import org.apache.ignite.raft.jraft.rpc.impl.cli.GetPeersRequestProcessor;
@@ -104,6 +105,7 @@ public class IgniteRpcServer implements RpcServer<Void> {
         registerProcessor(new RemovePeerRequestProcessor(rpcExecutor, raftMessagesFactory));
         registerProcessor(new ResetPeerRequestProcessor(rpcExecutor, raftMessagesFactory));
         registerProcessor(new ChangePeersRequestProcessor(rpcExecutor, raftMessagesFactory));
+        registerProcessor(new ChangePeersAsyncRequestProcessor(rpcExecutor, raftMessagesFactory));
         registerProcessor(new GetLeaderRequestProcessor(rpcExecutor, raftMessagesFactory));
         registerProcessor(new SnapshotRequestProcessor(rpcExecutor, raftMessagesFactory));
         registerProcessor(new TransferLeaderRequestProcessor(rpcExecutor, raftMessagesFactory));
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/RaftGroupServiceImpl.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/RaftGroupServiceImpl.java
index 5f84a35d5..3d634726e 100644
--- a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/RaftGroupServiceImpl.java
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/RaftGroupServiceImpl.java
@@ -51,6 +51,7 @@ import java.util.concurrent.TimeoutException;
 import java.util.function.BiConsumer;
 import java.util.stream.Collectors;
 import org.apache.ignite.internal.tostring.S;
+import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.lang.IgniteException;
 import org.apache.ignite.lang.IgniteLogger;
 import org.apache.ignite.network.ClusterService;
@@ -65,6 +66,9 @@ import org.apache.ignite.raft.jraft.entity.PeerId;
 import org.apache.ignite.raft.jraft.error.RaftError;
 import org.apache.ignite.raft.jraft.rpc.ActionRequest;
 import org.apache.ignite.raft.jraft.rpc.ActionResponse;
+import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncRequest;
+import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncResponse;
+import org.apache.ignite.raft.jraft.rpc.Message;
 import org.apache.ignite.raft.jraft.rpc.RpcRequests;
 import org.jetbrains.annotations.NotNull;
 
@@ -249,6 +253,23 @@ public class RaftGroupServiceImpl implements RaftGroupService {
         });
     }
 
+    /** {@inheritDoc} */
+    @Override public CompletableFuture<IgniteBiTuple<Peer, Long>> refreshAndGetLeaderWithTerm() {
+        GetLeaderRequest req = factory.getLeaderRequest().groupId(groupId).build();
+
+        CompletableFuture<GetLeaderResponse> fut = new CompletableFuture<>();
+
+        sendWithRetry(randomNode(), req, currentTimeMillis() + timeout, fut);
+
+        return fut.thenApply(resp -> {
+            Peer respLeader = parsePeer(resp.leaderId());
+
+            leader = respLeader;
+
+            return new IgniteBiTuple<>(respLeader, resp.currentTerm());
+        });
+    }
+
     /** {@inheritDoc} */
     @Override public CompletableFuture<Void> refreshMembers(boolean onlyAlive) {
         GetPeersRequest req = factory.getPeersRequest().onlyAlive(onlyAlive).groupId(groupId).build();
@@ -334,6 +355,27 @@ public class RaftGroupServiceImpl implements RaftGroupService {
         });
     }
 
+    /** {@inheritDoc} */
+    @Override public CompletableFuture<Void> changePeersAsync(List<Peer> peers, long term) {
+        Peer leader = this.leader;
+
+        if (leader == null)
+            return refreshLeader().thenCompose(res -> changePeersAsync(peers, term));
+
+        List<String> peersToChange = peers.stream().map(p -> PeerId.fromPeer(p).toString())
+                .collect(Collectors.toList());
+
+        ChangePeersAsyncRequest req = factory.changePeersAsyncRequest().groupId(groupId)
+                .term(term)
+                .newPeersList(peersToChange).build();
+
+        CompletableFuture<ChangePeersAsyncResponse> fut = new CompletableFuture<>();
+
+        sendWithRetry(leader, req, currentTimeMillis() + timeout, fut);
+
+        return fut.thenRun(() -> {});
+    }
+
     /** {@inheritDoc} */
     @Override public CompletableFuture<Void> addLearners(List<Peer> learners) {
         Peer leader = this.leader;
@@ -429,23 +471,12 @@ public class RaftGroupServiceImpl implements RaftGroupService {
                 .peerId(PeerId.fromPeer(newLeader).toString())
                 .build();
 
-        CompletableFuture<NetworkMessage> fut = cluster.messagingService().invoke(leader.address(), req, rpcTimeout);
+        CompletableFuture<NetworkMessage> fut = new CompletableFuture<>();
 
-        return fut.thenCompose(resp -> {
-            if (resp != null) {
-                RpcRequests.ErrorResponse resp0 = (RpcRequests.ErrorResponse) resp;
-
-                if (resp0.errorCode() != RaftError.SUCCESS.getNumber())
-                    return CompletableFuture.failedFuture(
-                        new RaftException(
-                            RaftError.forNumber(resp0.errorCode()), resp0.errorMsg()
-                        )
-                    );
-                else
-                    this.leader = newLeader;
-            }
+        sendWithRetry(leader, req, currentTimeMillis() + timeout, fut);
 
-            return CompletableFuture.completedFuture(null);
+        return fut.thenRun(() -> {
+            this.leader = newLeader;
         });
     }
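Put together, the two new service methods allow a term-checked reconfiguration flow; a minimal sketch with error handling omitted:

    CompletableFuture<Void> changePeersCheckingTerm(RaftGroupService svc, List<Peer> targetPeers) {
        return svc.refreshAndGetLeaderWithTerm()
                .thenCompose(leaderWithTerm ->
                        // Pass the observed term: a stale term makes the leader ignore the request.
                        svc.changePeersAsync(targetPeers, leaderWithTerm.get2()));
    }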
 
diff --git a/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessor.java b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessor.java
new file mode 100644
index 000000000..bd7bd409a
--- /dev/null
+++ b/modules/raft/src/main/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessor.java
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.raft.jraft.rpc.impl.cli;
+
+import java.util.List;
+import java.util.concurrent.Executor;
+import org.apache.ignite.raft.jraft.RaftMessagesFactory;
+import org.apache.ignite.raft.jraft.conf.Configuration;
+import org.apache.ignite.raft.jraft.entity.PeerId;
+import org.apache.ignite.raft.jraft.error.RaftError;
+import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncRequest;
+import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncResponse;
+import org.apache.ignite.raft.jraft.rpc.Message;
+import org.apache.ignite.raft.jraft.rpc.RaftRpcFactory;
+
+import static java.util.stream.Collectors.toList;
+
+/**
+ * Processor that handles {@link ChangePeersAsyncRequest}, the asynchronous variant of the change peers command.
+ */
+public class ChangePeersAsyncRequestProcessor extends BaseCliRequestProcessor<ChangePeersAsyncRequest> {
+
+    public ChangePeersAsyncRequestProcessor(Executor executor, RaftMessagesFactory msgFactory) {
+        super(executor, msgFactory);
+    }
+
+    @Override
+    protected String getPeerId(final ChangePeersAsyncRequest request) {
+        return request.leaderId();
+    }
+
+    @Override
+    protected String getGroupId(final ChangePeersAsyncRequest request) {
+        return request.groupId();
+    }
+
+    @Override
+    protected Message processRequest0(final CliRequestContext ctx, final ChangePeersAsyncRequest request,
+            final IgniteCliRpcRequestClosure done) {
+        final List<PeerId> oldConf = ctx.node.listPeers();
+
+        final Configuration conf = new Configuration();
+        for (final String peerIdStr : request.newPeersList()) {
+            final PeerId peer = new PeerId();
+            if (peer.parse(peerIdStr)) {
+                conf.addPeer(peer);
+            }
+            else {
+                return RaftRpcFactory.DEFAULT //
+                        .newResponse(msgFactory(), RaftError.EINVAL, "Fail to parse peer id %s", peerIdStr);
+            }
+        }
+
+        long term = request.term();
+
+        LOG.info("Receive ChangePeersAsyncRequest with term {} to {} from {}, new conf is {}", term, ctx.node.getNodeId(), done.getRpcCtx()
+                .getRemoteAddress(), conf);
+
+        ctx.node.changePeersAsync(conf, term, status -> {
+            if (!status.isOk()) {
+                done.run(status);
+            }
+            else {
+                ChangePeersAsyncResponse resp = msgFactory().changePeersAsyncResponse()
+                        .oldPeersList(oldConf.stream().map(Object::toString).collect(toList()))
+                        .newPeersList(conf.getPeers().stream().map(Object::toString).collect(toList()))
+                        .build();
+
+                done.sendResponse(resp);
+            }
+        });
+        return null;
+    }
+
+    @Override
+    public String interest() {
+        return ChangePeersAsyncRequest.class.getName();
+    }
+}
diff --git a/modules/raft/src/test/java/org/apache/ignite/internal/raft/LozaTest.java b/modules/raft/src/test/java/org/apache/ignite/internal/raft/LozaTest.java
index ad1dc2f08..26857cafd 100644
--- a/modules/raft/src/test/java/org/apache/ignite/internal/raft/LozaTest.java
+++ b/modules/raft/src/test/java/org/apache/ignite/internal/raft/LozaTest.java
@@ -77,9 +77,8 @@ public class LozaTest extends IgniteAbstractTest {
 
         Supplier<RaftGroupListener> lsnrSupplier = () -> null;
 
-        assertThrows(NodeStoppingException.class, () -> loza.updateRaftGroup(raftGroupId, nodes, newNodes, lsnrSupplier));
+        assertThrows(NodeStoppingException.class, () -> loza.updateRaftGroup(raftGroupId, nodes, newNodes, lsnrSupplier, () -> null));
         assertThrows(NodeStoppingException.class, () -> loza.stopRaftGroup(raftGroupId));
         assertThrows(NodeStoppingException.class, () -> loza.prepareRaftGroup(raftGroupId, nodes, lsnrSupplier));
-        assertThrows(NodeStoppingException.class, () -> loza.changePeers(raftGroupId, nodes, newNodes));
     }
 }
diff --git a/modules/raft/src/test/java/org/apache/ignite/internal/raft/server/impl/RaftServerImpl.java b/modules/raft/src/test/java/org/apache/ignite/internal/raft/server/impl/RaftServerImpl.java
index 37fc8c39b..296fb0157 100644
--- a/modules/raft/src/test/java/org/apache/ignite/internal/raft/server/impl/RaftServerImpl.java
+++ b/modules/raft/src/test/java/org/apache/ignite/internal/raft/server/impl/RaftServerImpl.java
@@ -27,6 +27,7 @@ import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.function.BiConsumer;
+import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.raft.server.RaftServer;
 import org.apache.ignite.lang.IgniteLogger;
 import org.apache.ignite.lang.IgniteStringFormatter;
@@ -182,6 +183,12 @@ public class RaftServerImpl implements RaftServer {
         return true;
     }
 
+    /** {@inheritDoc} */
+    @Override
+    public boolean startRaftGroup(String groupId, RaftGroupEventsListener evLsnr, RaftGroupListener lsnr, List<Peer> initialConf) {
+        return startRaftGroup(groupId, lsnr, initialConf);
+    }
+
     /** {@inheritDoc} */
     @Override
     public synchronized boolean stopRaftGroup(String groupId) {
diff --git a/modules/raft/src/test/java/org/apache/ignite/raft/jraft/core/TestCluster.java b/modules/raft/src/test/java/org/apache/ignite/raft/jraft/core/TestCluster.java
index 36fec93f9..83b6a2b8b 100644
--- a/modules/raft/src/test/java/org/apache/ignite/raft/jraft/core/TestCluster.java
+++ b/modules/raft/src/test/java/org/apache/ignite/raft/jraft/core/TestCluster.java
@@ -39,6 +39,7 @@ import java.util.concurrent.locks.ReentrantLock;
 import java.util.function.Consumer;
 import java.util.function.Predicate;
 import java.util.stream.Stream;
+import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.util.IgniteUtils;
 import org.apache.ignite.raft.jraft.util.ExponentialBackoffTimeoutStrategy;
 import org.apache.ignite.lang.IgniteLogger;
@@ -95,6 +96,8 @@ public class TestCluster {
 
     private LinkedHashSet<PeerId> learners;
 
+    private RaftGroupEventsListener raftGrpEvtsLsnr = RaftGroupEventsListener.noopLsnr;
+
     public JRaftServiceFactory getRaftServiceFactory() {
         return this.raftServiceFactory;
     }
@@ -239,6 +242,8 @@ public class TestCluster {
             MockStateMachine fsm = new MockStateMachine(listenAddr);
             nodeOptions.setFsm(fsm);
 
+            nodeOptions.setRaftGrpEvtsLsnr(raftGrpEvtsLsnr);
+
             if (!emptyPeers)
                 nodeOptions.setInitialConf(new Configuration(this.peers, this.learners));
 
@@ -351,6 +356,14 @@ public class TestCluster {
         IgniteUtils.deleteIfExists(path);
     }
 
+    public RaftGroupEventsListener getRaftGrpEvtsLsnr() {
+        return raftGrpEvtsLsnr;
+    }
+
+    public void setRaftGrpEvtsLsnr(RaftGroupEventsListener raftGrpEvtsLsnr) {
+        this.raftGrpEvtsLsnr = raftGrpEvtsLsnr;
+    }
+
     public Node getLeader() {
         this.lock.lock();
         try {
diff --git a/modules/raft/src/test/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessorTest.java b/modules/raft/src/test/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessorTest.java
new file mode 100644
index 000000000..6d5d2ba91
--- /dev/null
+++ b/modules/raft/src/test/java/org/apache/ignite/raft/jraft/rpc/impl/cli/ChangePeersAsyncRequestProcessorTest.java
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.raft.jraft.rpc.impl.cli;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+import static org.mockito.ArgumentMatchers.eq;
+
+import java.util.List;
+import org.apache.ignite.raft.jraft.Closure;
+import org.apache.ignite.raft.jraft.JRaftUtils;
+import org.apache.ignite.raft.jraft.Node;
+import org.apache.ignite.raft.jraft.Status;
+import org.apache.ignite.raft.jraft.entity.PeerId;
+import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncRequest;
+import org.apache.ignite.raft.jraft.rpc.CliRequests.ChangePeersAsyncResponse;
+import org.mockito.ArgumentCaptor;
+import org.mockito.Mockito;
+
+public class ChangePeersAsyncRequestProcessorTest extends AbstractCliRequestProcessorTest<ChangePeersAsyncRequest> {
+    @Override
+    public ChangePeersAsyncRequest createRequest(String groupId, PeerId peerId) {
+        return msgFactory.changePeersAsyncRequest()
+                .groupId(groupId)
+                .leaderId(peerId.toString())
+                .newPeersList(List.of("localhost:8084", "localhost:8085"))
+                .term(1)
+                .build();
+    }
+
+    @Override
+    public BaseCliRequestProcessor<ChangePeersAsyncRequest> newProcessor() {
+        return new ChangePeersAsyncRequestProcessor(null, msgFactory);
+    }
+
+    @Override
+    public void verify(String interest, Node node, ArgumentCaptor<Closure> doneArg) {
+        assertEquals(ChangePeersAsyncRequest.class.getName(), interest);
+        Mockito.verify(node).changePeersAsync(eq(JRaftUtils.getConfiguration("localhost:8084,localhost:8085")),
+                eq(1L), doneArg.capture());
+        Closure done = doneArg.getValue();
+        assertNotNull(done);
+        done.run(Status.OK());
+        assertNotNull(this.asyncContext.getResponseObject());
+        assertEquals("[localhost:8081, localhost:8082, localhost:8083]", this.asyncContext
+                .as(ChangePeersAsyncResponse.class).oldPeersList().toString());
+        assertEquals("[localhost:8084, localhost:8085]", this.asyncContext.as(ChangePeersAsyncResponse.class)
+                .newPeersList().toString());
+    }
+}
diff --git a/modules/runner/src/integrationTest/java/org/apache/ignite/internal/configuration/storage/ItRebalanceDistributedTest.java b/modules/runner/src/integrationTest/java/org/apache/ignite/internal/configuration/storage/ItRebalanceDistributedTest.java
new file mode 100644
index 000000000..5f174f3a0
--- /dev/null
+++ b/modules/runner/src/integrationTest/java/org/apache/ignite/internal/configuration/storage/ItRebalanceDistributedTest.java
@@ -0,0 +1,544 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.configuration.storage;
+
+import static org.apache.ignite.internal.testframework.IgniteTestUtils.testNodeName;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.locks.LockSupport;
+import java.util.function.Consumer;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+import java.util.stream.Stream;
+import org.apache.ignite.configuration.RootKey;
+import org.apache.ignite.configuration.schemas.clientconnector.ClientConnectorConfiguration;
+import org.apache.ignite.configuration.schemas.network.NetworkConfiguration;
+import org.apache.ignite.configuration.schemas.rest.RestConfiguration;
+import org.apache.ignite.configuration.schemas.store.UnknownDataStorageConfigurationSchema;
+import org.apache.ignite.configuration.schemas.table.HashIndexConfigurationSchema;
+import org.apache.ignite.configuration.schemas.table.TablesConfiguration;
+import org.apache.ignite.internal.baseline.BaselineManager;
+import org.apache.ignite.internal.cluster.management.ClusterManagementGroupManager;
+import org.apache.ignite.internal.cluster.management.raft.ConcurrentMapClusterStateStorage;
+import org.apache.ignite.internal.configuration.ConfigurationManager;
+import org.apache.ignite.internal.configuration.schema.ExtendedTableConfiguration;
+import org.apache.ignite.internal.configuration.schema.ExtendedTableConfigurationSchema;
+import org.apache.ignite.internal.manager.IgniteComponent;
+import org.apache.ignite.internal.metastorage.MetaStorageManager;
+import org.apache.ignite.internal.metastorage.server.SimpleInMemoryKeyValueStorage;
+import org.apache.ignite.internal.pagememory.configuration.schema.UnsafeMemoryAllocatorConfigurationSchema;
+import org.apache.ignite.internal.raft.Loza;
+import org.apache.ignite.internal.raft.server.impl.JraftServerImpl;
+import org.apache.ignite.internal.schema.SchemaManager;
+import org.apache.ignite.internal.schema.configuration.SchemaConfigurationConverter;
+import org.apache.ignite.internal.sql.engine.SqlQueryProcessor;
+import org.apache.ignite.internal.storage.DataStorageManager;
+import org.apache.ignite.internal.storage.DataStorageModules;
+import org.apache.ignite.internal.storage.pagememory.PageMemoryDataStorageModule;
+import org.apache.ignite.internal.storage.pagememory.configuration.schema.PageMemoryDataStorageConfigurationSchema;
+import org.apache.ignite.internal.storage.pagememory.configuration.schema.PageMemoryStorageEngineConfiguration;
+import org.apache.ignite.internal.storage.rocksdb.RocksDbDataStorageModule;
+import org.apache.ignite.internal.storage.rocksdb.configuration.schema.RocksDbDataStorageConfigurationSchema;
+import org.apache.ignite.internal.storage.rocksdb.configuration.schema.RocksDbStorageEngineConfiguration;
+import org.apache.ignite.internal.table.TableImpl;
+import org.apache.ignite.internal.table.distributed.TableManager;
+import org.apache.ignite.internal.table.distributed.TableTxManagerImpl;
+import org.apache.ignite.internal.testframework.WorkDirectory;
+import org.apache.ignite.internal.testframework.WorkDirectoryExtension;
+import org.apache.ignite.internal.tx.LockManager;
+import org.apache.ignite.internal.tx.TxManager;
+import org.apache.ignite.internal.tx.impl.HeapLockManager;
+import org.apache.ignite.internal.util.ByteUtils;
+import org.apache.ignite.internal.vault.VaultManager;
+import org.apache.ignite.internal.vault.persistence.PersistentVaultService;
+import org.apache.ignite.lang.IgniteInternalException;
+import org.apache.ignite.network.ClusterNode;
+import org.apache.ignite.network.ClusterService;
+import org.apache.ignite.network.NetworkAddress;
+import org.apache.ignite.network.StaticNodeFinder;
+import org.apache.ignite.raft.client.Peer;
+import org.apache.ignite.raft.jraft.rpc.RpcRequests;
+import org.apache.ignite.schema.SchemaBuilders;
+import org.apache.ignite.schema.definition.ColumnType;
+import org.apache.ignite.schema.definition.TableDefinition;
+import org.apache.ignite.utils.ClusterServiceTestUtils;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.TestInfo;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+/**
+ * Test suite for the rebalance process triggered by changes in the number of replicas.
+ */
+@ExtendWith(WorkDirectoryExtension.class)
+public class ItRebalanceDistributedTest {
+
+    public static final int BASE_PORT = 20_000;
+
+    public static final String HOST = "localhost";
+
+    private static StaticNodeFinder finder;
+
+    private static List<Node> nodes;
+
+    @BeforeEach
+    void before(@WorkDirectory Path workDir, TestInfo testInfo) throws Exception {
+        nodes = new ArrayList<>();
+
+        List<NetworkAddress> nodeAddresses = new ArrayList<>();
+
+        for (int i = 0; i < 3; i++) {
+            nodeAddresses.add(new NetworkAddress(HOST, BASE_PORT + i));
+        }
+
+        finder = new StaticNodeFinder(nodeAddresses);
+
+        for (NetworkAddress addr : nodeAddresses) {
+            var node = new Node(testInfo, workDir, addr);
+
+            nodes.add(node);
+
+            node.start();
+        }
+
+        nodes.get(0).cmgManager.initCluster(List.of(nodes.get(0).name), List.of(), "cluster");
+    }
+
+    @AfterEach
+    void after() throws Exception {
+        for (Node node : nodes) {
+            node.stop();
+        }
+    }
+
+    @Test
+    void testOneRebalance(@WorkDirectory Path workDir, TestInfo testInfo) throws Exception {
+        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
+                SchemaBuilders.column("key", ColumnType.INT64).build(),
+                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
+        ).withPrimaryKey("key").build();
+
+        nodes.get(0).tableManager.createTable(
+                "PUBLIC.tbl1",
+                tblChanger -> SchemaConfigurationConverter.convert(schTbl1, tblChanger)
+                        .changeReplicas(1)
+                        .changePartitions(1));
+
+        assertEquals(1, nodes.get(0).clusterCfgMgr.configurationRegistry().getConfiguration(TablesConfiguration.KEY)
+                .tables().get("PUBLIC.TBL1").replicas().value());
+
+        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(2));
+
+        waitPartitionAssignmentsSyncedToExpected(0, 2);
+
+        assertEquals(2, getPartitionClusterNodes(0, 0).size());
+        assertEquals(2, getPartitionClusterNodes(1, 0).size());
+        assertEquals(2, getPartitionClusterNodes(2, 0).size());
+    }
+
+    @Test
+    void testTwoQueuedRebalances(@WorkDirectory Path workDir, TestInfo testInfo) {
+        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
+                SchemaBuilders.column("key", ColumnType.INT64).build(),
+                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
+        ).withPrimaryKey("key").build();
+
+        nodes.get(0).tableManager.createTable(
+                "PUBLIC.tbl1",
+                tblChanger -> SchemaConfigurationConverter.convert(schTbl1, tblChanger)
+                        .changeReplicas(1)
+                        .changePartitions(1));
+
+        assertEquals(1, nodes.get(0).clusterCfgMgr.configurationRegistry().getConfiguration(TablesConfiguration.KEY).tables()
+                .get("PUBLIC.TBL1").replicas().value());
+
+        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(2));
+        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(3));
+
+        waitPartitionAssignmentsSyncedToExpected(0, 3);
+
+        assertEquals(3, getPartitionClusterNodes(0, 0).size());
+        assertEquals(3, getPartitionClusterNodes(1, 0).size());
+        assertEquals(3, getPartitionClusterNodes(2, 0).size());
+    }
+
+    @Test
+    void testThreeQueuedRebalances(@WorkDirectory Path workDir, TestInfo testInfo) throws Exception {
+        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
+                SchemaBuilders.column("key", ColumnType.INT64).build(),
+                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
+        ).withPrimaryKey("key").build();
+
+        nodes.get(0).tableManager.createTable(
+                "PUBLIC.tbl1",
+                tblChanger -> SchemaConfigurationConverter.convert(schTbl1, tblChanger)
+                        .changeReplicas(1)
+                        .changePartitions(1));
+
+        assertEquals(1, nodes.get(0).clusterCfgMgr.configurationRegistry().getConfiguration(TablesConfiguration.KEY).tables()
+                .get("PUBLIC.TBL1").replicas().value());
+
+        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(2));
+        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(3));
+        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(2));
+
+        waitPartitionAssignmentsSyncedToExpected(0, 2);
+
+        assertEquals(2, getPartitionClusterNodes(0, 0).size());
+        assertEquals(2, getPartitionClusterNodes(1, 0).size());
+        assertEquals(2, getPartitionClusterNodes(2, 0).size());
+    }
+
+    @Test
+    void testOnLeaderElectedRebalanceRestart(@WorkDirectory Path workDir, TestInfo testInfo) throws Exception {
+        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
+                SchemaBuilders.column("key", ColumnType.INT64).build(),
+                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
+        ).withPrimaryKey("key").build();
+
+        var table = (TableImpl) nodes.get(1).tableManager.createTable(
+                "PUBLIC.tbl1",
+                tblChanger -> SchemaConfigurationConverter.convert(schTbl1, tblChanger)
+                        .changeReplicas(2)
+                        .changePartitions(1));
+
+        Set<NetworkAddress> partitionNodesAddresses = getPartitionClusterNodes(0, 0)
+                .stream().map(ClusterNode::address).collect(Collectors.toSet());
+
+        Node newNode = nodes.stream().filter(n -> !partitionNodesAddresses.contains(n.address())).findFirst().get();
+
+        Node leaderNode = findNodeByAddress(table.leaderAssignment(0).address());
+
+        NetworkAddress nonLeaderNodeAddress = partitionNodesAddresses
+                .stream().filter(n -> !n.equals(leaderNode.address())).findFirst().get();
+
+        TableImpl nonLeaderTable = (TableImpl) findNodeByAddress(nonLeaderNodeAddress).tableManager.table("PUBLIC.TBL1");
+
+        var countDownLatch = new CountDownLatch(1);
+
+        String raftGroupNodeName = leaderNode.raftManager.server().startedGroups()
+                .stream().filter(grp -> grp.contains("part")).findFirst().get();
+
+        ((JraftServerImpl) leaderNode.raftManager.server()).blockMessages(
+                raftGroupNodeName, (msg, node) -> {
+                    if (node.equals(newNode.address().toString()) && msg instanceof RpcRequests.PingRequest) {
+                        countDownLatch.countDown();
+
+                        return true;
+                    }
+                    return false;
+                });
+
+        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(3));
+
+        countDownLatch.await();
+
+        nonLeaderTable.internalTable().partitionRaftGroupService(0).transferLeadership(new Peer(nonLeaderNodeAddress)).get();
+
+        ((JraftServerImpl) leaderNode.raftManager.server()).stopBlockMessages(raftGroupNodeName);
+
+        waitPartitionAssignmentsSyncedToExpected(0, 3);
+
+        assertEquals(3, getPartitionClusterNodes(0, 0).size());
+        assertEquals(3, getPartitionClusterNodes(1, 0).size());
+        assertEquals(3, getPartitionClusterNodes(2, 0).size());
+    }
+
+    @Test
+    void testRebalanceRetryWhenCatchupFailed(@WorkDirectory Path workDir, TestInfo testInfo) throws Exception {
+        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
+                SchemaBuilders.column("key", ColumnType.INT64).build(),
+                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
+        ).withPrimaryKey("key").build();
+
+        nodes.get(0).tableManager.createTable(
+                "PUBLIC.tbl1",
+                tblChanger -> SchemaConfigurationConverter.convert(schTbl1, tblChanger)
+                        .changeReplicas(1)
+                        .changePartitions(1));
+
+        assertEquals(1, nodes.get(0).clusterCfgMgr.configurationRegistry().getConfiguration(TablesConfiguration.KEY)
+                .tables().get("PUBLIC.TBL1").replicas().value());
+
+        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(1));
+
+        waitPartitionAssignmentsSyncedToExpected(0, 1);
+
+        JraftServerImpl raftServer = (JraftServerImpl) nodes.stream()
+                .filter(n -> n.raftManager.startedGroups().stream().anyMatch(grp -> grp.contains("_part_"))).findFirst()
+                .get().raftManager.server();
+
+        AtomicInteger counter = new AtomicInteger(0);
+
+        String partGrpId = raftServer.startedGroups().stream().filter(grp -> grp.contains("_part_")).findFirst().get();
+
+        raftServer.blockMessages(partGrpId, (msg, node) -> {
+            if (msg instanceof RpcRequests.PingRequest) {
+                // We block the ping request to prevent the replicator from starting, so catch-up fails and the rebalance fails.
+                assertEquals(1, getPartitionClusterNodes(0, 0).size());
+                assertEquals(1, getPartitionClusterNodes(1, 0).size());
+                assertEquals(1, getPartitionClusterNodes(2, 0).size());
+                return counter.incrementAndGet() <= 5;
+            }
+            return false;
+        });
+
+        nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(3));
+
+        waitPartitionAssignmentsSyncedToExpected(0, 3);
+
+        assertEquals(3, getPartitionClusterNodes(0, 0).size());
+        assertEquals(3, getPartitionClusterNodes(1, 0).size());
+        assertEquals(3, getPartitionClusterNodes(2, 0).size());
+    }
+
+    private void waitPartitionAssignmentsSyncedToExpected(int partNum, int replicasNum) {
+        while (!IntStream.range(0, nodes.size()).allMatch(n -> getPartitionClusterNodes(n, partNum).size() == replicasNum)) {
+            LockSupport.parkNanos(100_000_000);
+        }
+    }
+
+    private Node findNodeByAddress(NetworkAddress addr) {
+        return nodes.stream().filter(n -> n.address().equals(addr)).findFirst().get();
+    }
+
+    private List<ClusterNode> getPartitionClusterNodes(int nodeNum, int partNum) {
+        var table = ((ExtendedTableConfiguration) nodes.get(nodeNum).clusterCfgMgr.configurationRegistry()
+                .getConfiguration(TablesConfiguration.KEY).tables().get("PUBLIC.TBL1"));
+
+        if (table != null) {
+            var assignments = table.assignments().value();
+
+            if (assignments != null) {
+                return ((List<List<ClusterNode>>) ByteUtils.fromBytes(assignments)).get(partNum);
+            }
+        }
+
+        return List.of();
+    }
+
+    private static class Node {
+        private final String name;
+
+        private final VaultManager vaultManager;
+
+        private final ClusterService clusterService;
+
+        private final LockManager lockManager;
+
+        private final TxManager txManager;
+
+        private final Loza raftManager;
+
+        private final MetaStorageManager metaStorageManager;
+
+        private final DistributedConfigurationStorage cfgStorage;
+
+        private final DataStorageManager dataStorageMgr;
+
+        private final TableManager tableManager;
+
+        private final BaselineManager baselineMgr;
+
+        private final ConfigurationManager nodeCfgMgr;
+
+        private final ConfigurationManager clusterCfgMgr;
+
+        private final ClusterManagementGroupManager cmgManager;
+
+        private final SchemaManager schemaManager;
+
+        private final SqlQueryProcessor sqlQueryProcessor;
+
+        /**
+         * Constructor that creates the subset of this node's components needed by these tests.
+         */
+        Node(TestInfo testInfo, Path workDir, NetworkAddress addr) {
+
+            name = testNodeName(testInfo, addr.port());
+
+            Path dir = workDir.resolve(name);
+
+            vaultManager = createVault(dir);
+
+            nodeCfgMgr = new ConfigurationManager(
+                    List.of(NetworkConfiguration.KEY,
+                            RestConfiguration.KEY,
+                            ClientConnectorConfiguration.KEY),
+                    Map.of(),
+                    new LocalConfigurationStorage(vaultManager),
+                    List.of(),
+                    List.of()
+            );
+
+            clusterService = ClusterServiceTestUtils.clusterService(
+                    testInfo,
+                    addr.port(),
+                    finder
+            );
+
+            lockManager = new HeapLockManager();
+
+            raftManager = new Loza(clusterService, dir);
+
+            txManager = new TableTxManagerImpl(clusterService, lockManager);
+
+            List<RootKey<?, ?>> rootKeys = List.of(
+                    TablesConfiguration.KEY);
+
+            cmgManager = new ClusterManagementGroupManager(
+                    vaultManager,
+                    clusterService,
+                    raftManager,
+                    new ConcurrentMapClusterStateStorage()
+            );
+
+            metaStorageManager = new MetaStorageManager(
+                    vaultManager,
+                    clusterService,
+                    cmgManager,
+                    raftManager,
+                    new SimpleInMemoryKeyValueStorage()
+            );
+
+            cfgStorage = new DistributedConfigurationStorage(metaStorageManager, vaultManager);
+
+            clusterCfgMgr = new ConfigurationManager(
+                    List.of(RocksDbStorageEngineConfiguration.KEY,
+                            PageMemoryStorageEngineConfiguration.KEY,
+                            TablesConfiguration.KEY),
+                    Map.of(),
+                    cfgStorage,
+                    List.of(ExtendedTableConfigurationSchema.class),
+                    List.of(UnknownDataStorageConfigurationSchema.class,
+                            PageMemoryDataStorageConfigurationSchema.class,
+                            UnsafeMemoryAllocatorConfigurationSchema.class,
+                            RocksDbDataStorageConfigurationSchema.class,
+                            HashIndexConfigurationSchema.class)
+            );
+
+            Consumer<Function<Long, CompletableFuture<?>>> registry = (Function<Long, CompletableFuture<?>> function) -> {
+                clusterCfgMgr.configurationRegistry().listenUpdateStorageRevision(
+                        newStorageRevision -> function.apply(newStorageRevision));
+            };
+
+            TablesConfiguration tablesCfg = clusterCfgMgr.configurationRegistry().getConfiguration(TablesConfiguration.KEY);
+
+            DataStorageModules dataStorageModules = new DataStorageModules(List.of(
+                    new RocksDbDataStorageModule(), new PageMemoryDataStorageModule()));
+
+            dataStorageMgr = new DataStorageManager(
+                    tablesCfg,
+                    dataStorageModules.createStorageEngines(
+                            name,
+                            clusterCfgMgr.configurationRegistry(),
+                            dir.resolve("storage"),
+                            null));
+
+            baselineMgr = new BaselineManager(
+                    clusterCfgMgr,
+                    metaStorageManager,
+                    clusterService);
+
+            schemaManager = new SchemaManager(registry, tablesCfg);
+
+            tableManager = new TableManager(
+                    registry,
+                    tablesCfg,
+                    raftManager,
+                    baselineMgr,
+                    clusterService.topologyService(),
+                    txManager,
+                    dataStorageMgr,
+                    metaStorageManager,
+                    schemaManager);
+
+            //TODO: Get rid of it after IGNITE-17062.
+            sqlQueryProcessor = new SqlQueryProcessor(registry, clusterService, tableManager, dataStorageMgr, Map::of);
+        }
+
+        /**
+         * Starts the created components.
+         */
+        void start() throws Exception {
+            vaultManager.start();
+
+            nodeCfgMgr.start();
+
+            Stream.of(clusterService, clusterCfgMgr, dataStorageMgr, raftManager, txManager, cmgManager,
+                    metaStorageManager, baselineMgr, schemaManager, tableManager, sqlQueryProcessor).forEach(IgniteComponent::start);
+
+            CompletableFuture.allOf(
+                    nodeCfgMgr.configurationRegistry().notifyCurrentConfigurationListeners(),
+                    clusterCfgMgr.configurationRegistry().notifyCurrentConfigurationListeners()
+            ).get();
+
+            // deploy watches to propagate data from the metastore into the vault
+            metaStorageManager.deployWatches();
+        }
+
+        /**
+         * Stops the created components.
+         */
+        void stop() throws Exception {
+            var components =
+                    List.of(sqlQueryProcessor, tableManager, schemaManager, baselineMgr, metaStorageManager, cmgManager, dataStorageMgr,
+                            raftManager, txManager, clusterCfgMgr, clusterService, nodeCfgMgr, vaultManager);
+
+            for (IgniteComponent igniteComponent : components) {
+                igniteComponent.beforeNodeStop();
+            }
+
+            for (IgniteComponent component : components) {
+                component.stop();
+            }
+        }
+
+        NetworkAddress address() {
+            return clusterService.topologyService().localMember().address();
+        }
+    }
+
+    /**
+     * Starts the Vault component.
+     */
+    private static VaultManager createVault(Path workDir) {
+        Path vaultPath = workDir.resolve(Paths.get("vault"));
+
+        try {
+            Files.createDirectories(vaultPath);
+        } catch (IOException e) {
+            throw new IgniteInternalException(e);
+        }
+
+        return new VaultManager(new PersistentVaultService(vaultPath));
+    }
+}
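
The waitPartitionAssignmentsSyncedToExpected helper above polls in an unbounded loop, so a stuck
rebalance hangs the test instead of failing it. A minimal bounded variant, shown here only as a
hypothetical sketch (it is not part of this commit and assumes the same nodes list and
getPartitionClusterNodes helper are in scope), could look like this:

    // Hypothetical sketch, not part of this commit.
    // Requires java.util.concurrent.TimeUnit and java.util.concurrent.TimeoutException
    // in addition to the imports already present in ItRebalanceDistributedTest.
    private void waitPartitionAssignmentsSyncedToExpected(int partNum, int replicasNum, long timeoutMillis)
            throws InterruptedException, TimeoutException {
        long deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);

        while (!IntStream.range(0, nodes.size())
                .allMatch(n -> getPartitionClusterNodes(n, partNum).size() == replicasNum)) {
            if (System.nanoTime() - deadlineNanos > 0) {
                throw new TimeoutException("Partition " + partNum + " did not reach " + replicasNum + " replicas in time");
            }

            Thread.sleep(100);
        }
    }
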
diff --git a/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItBaselineChangesTest.java b/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItBaselineChangesTest.java
deleted file mode 100644
index 96d891396..000000000
--- a/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItBaselineChangesTest.java
+++ /dev/null
@@ -1,174 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ignite.internal.runner.app;
-
-import static java.util.stream.Collectors.toList;
-import static org.apache.ignite.internal.testframework.IgniteTestUtils.testNodeName;
-import static org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.willCompleteSuccessfully;
-import static org.hamcrest.MatcherAssert.assertThat;
-import static org.junit.jupiter.api.Assertions.assertEquals;
-
-import java.nio.file.Path;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Set;
-import java.util.concurrent.CompletableFuture;
-import java.util.stream.IntStream;
-import org.apache.ignite.Ignite;
-import org.apache.ignite.IgnitionManager;
-import org.apache.ignite.internal.schema.configuration.SchemaConfigurationConverter;
-import org.apache.ignite.internal.testframework.WorkDirectory;
-import org.apache.ignite.internal.testframework.WorkDirectoryExtension;
-import org.apache.ignite.internal.util.IgniteUtils;
-import org.apache.ignite.schema.SchemaBuilders;
-import org.apache.ignite.schema.definition.ColumnType;
-import org.apache.ignite.schema.definition.TableDefinition;
-import org.apache.ignite.table.RecordView;
-import org.apache.ignite.table.Table;
-import org.apache.ignite.table.Tuple;
-import org.junit.jupiter.api.AfterEach;
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-import org.junit.jupiter.api.TestInfo;
-import org.junit.jupiter.api.extension.ExtendWith;
-
-/**
- * Test for baseline changes.
- */
-@ExtendWith(WorkDirectoryExtension.class)
-public class ItBaselineChangesTest {
-    private static final int NUM_NODES = 3;
-
-    /** Start network port for test nodes. */
-    private static final int BASE_PORT = 3344;
-
-    private final List<String> clusterNodeNames = new ArrayList<>();
-
-    private final List<Ignite> clusterNodes = new ArrayList<>();
-
-    @WorkDirectory
-    private Path workDir;
-
-    /**
-     * Before each.
-     */
-    @BeforeEach
-    void setUp(TestInfo testInfo) {
-        List<CompletableFuture<Ignite>> futures = IntStream.range(0, NUM_NODES)
-                .mapToObj(i -> startNodeAsync(testInfo, i))
-                .collect(toList());
-
-        String metaStorageNode = testNodeName(testInfo, BASE_PORT);
-
-        IgnitionManager.init(metaStorageNode, List.of(metaStorageNode), "cluster");
-
-        for (CompletableFuture<Ignite> future : futures) {
-            assertThat(future, willCompleteSuccessfully());
-
-            clusterNodes.add(future.join());
-        }
-    }
-
-    /**
-     * After each.
-     */
-    @AfterEach
-    void tearDown() throws Exception {
-        List<AutoCloseable> closeables = clusterNodeNames.stream()
-                .map(name -> (AutoCloseable) () -> IgnitionManager.stop(name))
-                .collect(toList());
-
-        IgniteUtils.closeAll(closeables);
-    }
-
-    /**
-     * Check dynamic table creation.
-     */
-    @Test
-    void testBaselineExtending(TestInfo testInfo) {
-        assertEquals(NUM_NODES, clusterNodes.size());
-
-        // Create table on node 0.
-        TableDefinition schTbl1 = SchemaBuilders.tableBuilder("PUBLIC", "tbl1").columns(
-                SchemaBuilders.column("key", ColumnType.INT64).build(),
-                SchemaBuilders.column("val", ColumnType.INT32).asNullable(true).build()
-        ).withPrimaryKey("key").build();
-
-        clusterNodes.get(0).tables().createTable(schTbl1.canonicalName(), tblCh ->
-                SchemaConfigurationConverter.convert(schTbl1, tblCh)
-                        .changeReplicas(5)
-                        .changePartitions(1)
-        );
-
-        // Put data on node 1.
-        Table tbl1 = clusterNodes.get(1).tables().table(schTbl1.canonicalName());
-        RecordView<Tuple> recView1 = tbl1.recordView();
-
-        recView1.insert(null, Tuple.create().set("key", 1L).set("val", 111));
-
-        Ignite metaStoreNode = clusterNodes.get(0);
-
-        // Start 2 new nodes after
-        Ignite node3 = startNode(testInfo);
-
-        Ignite node4 = startNode(testInfo);
-
-        // Update baseline to nodes 1,4,5
-        metaStoreNode.setBaseline(Set.of(metaStoreNode.name(), node3.name(), node4.name()));
-
-        IgnitionManager.stop(clusterNodes.get(1).name());
-        IgnitionManager.stop(clusterNodes.get(2).name());
-
-        Table tbl4 = node4.tables().table(schTbl1.canonicalName());
-
-        Tuple keyTuple1 = Tuple.create().set("key", 1L);
-
-        assertEquals(1, (Long) tbl4.recordView().get(null, keyTuple1).value("key"));
-    }
-
-    private static String buildConfig(int nodeIdx) {
-        return "{\n"
-                + "  network: {\n"
-                + "    port: " + (BASE_PORT + nodeIdx) + ",\n"
-                + "    nodeFinder: {\n"
-                + "      netClusterNodes: [ \"localhost:3344\", \"localhost:3345\", \"localhost:3346\" ] \n"
-                + "    }\n"
-                + "  }\n"
-                + "}";
-    }
-
-    private Ignite startNode(TestInfo testInfo) {
-        CompletableFuture<Ignite> future = startNodeAsync(testInfo, clusterNodes.size());
-
-        assertThat(future, willCompleteSuccessfully());
-
-        Ignite ignite = future.join();
-
-        clusterNodes.add(ignite);
-
-        return ignite;
-    }
-
-    private CompletableFuture<Ignite> startNodeAsync(TestInfo testInfo, int index) {
-        String nodeName = testNodeName(testInfo, BASE_PORT + index);
-
-        clusterNodeNames.add(nodeName);
-
-        return IgnitionManager.start(nodeName, buildConfig(index), workDir.resolve(nodeName));
-    }
-}
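
With ItBaselineChangesTest and the setBaseline entry point removed, rebalance is now driven by
changing the replica count in the table configuration (see the onUpdateReplicas listener added to
TableManager further below). A hypothetical usage sketch, reusing only the helpers of
ItRebalanceDistributedTest above rather than anything removed here:

    // Hypothetical sketch, not part of this commit: trigger a rebalance by altering the
    // table's replica count instead of calling the removed Ignite#setBaseline(Set) API.
    nodes.get(0).tableManager.alterTable("PUBLIC.TBL1", ch -> ch.changeReplicas(3));

    // The replicas listener writes pending assignment keys to the meta storage and the
    // partition raft groups are reconfigured asynchronously, so a test has to poll the
    // assignments until every node observes three replicas for partition 0.
    waitPartitionAssignmentsSyncedToExpected(0, 3);
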
diff --git a/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItIgniteNodeRestartTest.java b/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItIgniteNodeRestartTest.java
index 8f15f4fd0..c6190e5b9 100644
--- a/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItIgniteNodeRestartTest.java
+++ b/modules/runner/src/integrationTest/java/org/apache/ignite/internal/runner/app/ItIgniteNodeRestartTest.java
@@ -278,6 +278,7 @@ public class ItIgniteNodeRestartTest extends IgniteAbstractTest {
                 clusterSvc.topologyService(),
                 txManager,
                 dataStorageManager,
+                metaStorageMgr,
                 schemaManager
         );
 
diff --git a/modules/runner/src/main/java/org/apache/ignite/internal/app/IgniteImpl.java b/modules/runner/src/main/java/org/apache/ignite/internal/app/IgniteImpl.java
index 04b33842f..80d367c3d 100644
--- a/modules/runner/src/main/java/org/apache/ignite/internal/app/IgniteImpl.java
+++ b/modules/runner/src/main/java/org/apache/ignite/internal/app/IgniteImpl.java
@@ -24,7 +24,6 @@ import java.nio.file.Paths;
 import java.util.Collection;
 import java.util.List;
 import java.util.ServiceLoader;
-import java.util.Set;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.CompletionException;
 import java.util.function.Consumer;
@@ -317,6 +316,7 @@ public class IgniteImpl implements Ignite {
                 clusterSvc.topologyService(),
                 txManager,
                 dataStorageMgr,
+                metaStorageMgr,
                 schemaManager
         );
 
@@ -544,16 +544,6 @@ public class IgniteImpl implements Ignite {
         return name;
     }
 
-    /** {@inheritDoc} */
-    @Override
-    public void setBaseline(Set<String> baselineNodes) {
-        try {
-            distributedTblMgr.setBaseline(baselineNodes);
-        } catch (NodeStoppingException e) {
-            throw new IgniteException(e);
-        }
-    }
-
     /** {@inheritDoc} */
     @Override
     public IgniteCompute compute() {
diff --git a/modules/sql-engine/src/test/java/org/apache/ignite/internal/sql/engine/exec/MockedStructuresTest.java b/modules/sql-engine/src/test/java/org/apache/ignite/internal/sql/engine/exec/MockedStructuresTest.java
index 79195cf84..5cab8e634 100644
--- a/modules/sql-engine/src/test/java/org/apache/ignite/internal/sql/engine/exec/MockedStructuresTest.java
+++ b/modules/sql-engine/src/test/java/org/apache/ignite/internal/sql/engine/exec/MockedStructuresTest.java
@@ -56,6 +56,7 @@ import org.apache.ignite.internal.configuration.schema.ExtendedTableConfiguratio
 import org.apache.ignite.internal.configuration.testframework.ConfigurationExtension;
 import org.apache.ignite.internal.configuration.testframework.InjectConfiguration;
 import org.apache.ignite.internal.configuration.testframework.InjectRevisionListenerHolder;
+import org.apache.ignite.internal.metastorage.MetaStorageManager;
 import org.apache.ignite.internal.raft.Loza;
 import org.apache.ignite.internal.schema.SchemaDescriptor;
 import org.apache.ignite.internal.schema.SchemaManager;
@@ -75,6 +76,7 @@ import org.apache.ignite.internal.storage.rocksdb.configuration.schema.RocksDbSt
 import org.apache.ignite.internal.table.distributed.TableManager;
 import org.apache.ignite.internal.testframework.IgniteAbstractTest;
 import org.apache.ignite.internal.tx.TxManager;
+import org.apache.ignite.lang.ByteArray;
 import org.apache.ignite.lang.ColumnAlreadyExistsException;
 import org.apache.ignite.lang.ColumnNotFoundException;
 import org.apache.ignite.lang.IgniteException;
@@ -130,6 +132,10 @@ public class MockedStructuresTest extends IgniteAbstractTest {
     @Mock(lenient = true)
     private TxManager tm;
 
+    /** Meta storage manager. */
+    @Mock
+    MetaStorageManager msm;
+
     /**
      * Revision listener holder. It uses for the test configurations:
      * <ul>
@@ -628,7 +634,7 @@ public class MockedStructuresTest extends IgniteAbstractTest {
             return completedFuture(raftGrpSrvcMock);
         });
 
-        when(rm.updateRaftGroup(any(), any(), any(), any())).thenAnswer(mock -> {
+        when(rm.updateRaftGroup(any(), any(), any(), any(), any())).thenAnswer(mock -> {
             RaftGroupService raftGrpSrvcMock = mock(RaftGroupService.class);
 
             when(raftGrpSrvcMock.leader()).thenReturn(new Peer(new NetworkAddress("localhost", 47500)));
@@ -669,6 +675,8 @@ public class MockedStructuresTest extends IgniteAbstractTest {
             return ret;
         });
 
+        when(msm.registerWatch(any(ByteArray.class), any())).thenReturn(CompletableFuture.completedFuture(1L));
+
         TableManager tableManager = createTableManager();
 
         return tableManager;
@@ -685,6 +693,7 @@ public class MockedStructuresTest extends IgniteAbstractTest {
                 ts,
                 tm,
                 dataStorageManager,
+                msm,
                 sm
         );
 
diff --git a/modules/table/src/main/java/org/apache/ignite/internal/table/InternalTable.java b/modules/table/src/main/java/org/apache/ignite/internal/table/InternalTable.java
index 8f11b8c03..10cc05936 100644
--- a/modules/table/src/main/java/org/apache/ignite/internal/table/InternalTable.java
+++ b/modules/table/src/main/java/org/apache/ignite/internal/table/InternalTable.java
@@ -28,6 +28,7 @@ import org.apache.ignite.internal.storage.engine.TableStorage;
 import org.apache.ignite.internal.tx.InternalTransaction;
 import org.apache.ignite.internal.tx.LockException;
 import org.apache.ignite.network.ClusterNode;
+import org.apache.ignite.raft.client.service.RaftGroupService;
 import org.jetbrains.annotations.NotNull;
 import org.jetbrains.annotations.Nullable;
 
@@ -238,5 +239,14 @@ public interface InternalTable extends AutoCloseable {
      */
     ClusterNode leaderAssignment(int partition);
 
+    /**
+     * Returns the raft group client for the corresponding partition.
+     *
+     * @param partition Partition number.
+     * @return Raft group client for the corresponding partition.
+     * @throws org.apache.ignite.lang.IgniteInternalException If the partition can't be found.
+     */
+    RaftGroupService partitionRaftGroupService(int partition);
+
     //TODO: IGNITE-14488. Add invoke() methods.
 }
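
For illustration, a hypothetical snippet showing how the new partitionRaftGroupService accessor can
be used, mirroring the leadership transfer performed in ItRebalanceDistributedTest above (the table
and nonLeaderNodeAddress variables are assumed to be in scope):

    // Hypothetical sketch, not part of this commit. Obtain the raft client of partition 0
    // and ask it to transfer leadership to another peer of that partition's group.
    RaftGroupService partitionClient = table.internalTable().partitionRaftGroupService(0);

    // nonLeaderNodeAddress is assumed to be the NetworkAddress of another replica of partition 0.
    partitionClient.transferLeadership(new Peer(nonLeaderNodeAddress)).get();
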
diff --git a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
index b7d57e846..eb8522c31 100644
--- a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
+++ b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/TableManager.java
@@ -22,26 +22,35 @@ import static java.util.concurrent.CompletableFuture.completedFuture;
 import static java.util.concurrent.CompletableFuture.failedFuture;
 import static org.apache.ignite.internal.configuration.util.ConfigurationUtil.getByInternalId;
 import static org.apache.ignite.internal.schema.SchemaManager.INITIAL_SCHEMA_VERSION;
+import static org.apache.ignite.internal.util.IgniteUtils.shutdownAndAwaitTermination;
+import static org.apache.ignite.internal.utils.RebalanceUtil.PENDING_ASSIGNMENTS_PREFIX;
+import static org.apache.ignite.internal.utils.RebalanceUtil.STABLE_ASSIGNMENTS_PREFIX;
+import static org.apache.ignite.internal.utils.RebalanceUtil.extractPartitionNumber;
+import static org.apache.ignite.internal.utils.RebalanceUtil.extractTableId;
+import static org.apache.ignite.internal.utils.RebalanceUtil.pendingPartAssignmentsKey;
+import static org.apache.ignite.internal.utils.RebalanceUtil.stablePartAssignmentsKey;
+import static org.apache.ignite.internal.utils.RebalanceUtil.updatePendingAssignmentsKeys;
 
 import it.unimi.dsi.fastutil.ints.Int2ObjectOpenHashMap;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.HashMap;
-import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 import java.util.NoSuchElementException;
-import java.util.Set;
 import java.util.UUID;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.CompletionException;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.function.Consumer;
 import java.util.function.Function;
 import java.util.function.Supplier;
 import java.util.stream.Collectors;
-import org.apache.ignite.Ignite;
+import java.util.stream.Stream;
 import org.apache.ignite.configuration.ConfigurationChangeException;
 import org.apache.ignite.configuration.ConfigurationProperty;
 import org.apache.ignite.configuration.NamedListView;
@@ -62,7 +71,12 @@ import org.apache.ignite.internal.configuration.util.ConfigurationUtil;
 import org.apache.ignite.internal.manager.EventListener;
 import org.apache.ignite.internal.manager.IgniteComponent;
 import org.apache.ignite.internal.manager.Producer;
+import org.apache.ignite.internal.metastorage.MetaStorageManager;
+import org.apache.ignite.internal.metastorage.client.Entry;
+import org.apache.ignite.internal.metastorage.client.WatchEvent;
+import org.apache.ignite.internal.metastorage.client.WatchListener;
 import org.apache.ignite.internal.raft.Loza;
+import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
 import org.apache.ignite.internal.schema.SchemaDescriptor;
 import org.apache.ignite.internal.schema.SchemaManager;
 import org.apache.ignite.internal.schema.SchemaUtils;
@@ -75,15 +89,20 @@ import org.apache.ignite.internal.table.IgniteTablesInternal;
 import org.apache.ignite.internal.table.InternalTable;
 import org.apache.ignite.internal.table.TableImpl;
 import org.apache.ignite.internal.table.distributed.raft.PartitionListener;
+import org.apache.ignite.internal.table.distributed.raft.RebalanceRaftGroupEventsListener;
 import org.apache.ignite.internal.table.distributed.storage.InternalTableImpl;
 import org.apache.ignite.internal.table.distributed.storage.VersionedRowStore;
 import org.apache.ignite.internal.table.event.TableEvent;
 import org.apache.ignite.internal.table.event.TableEventParameters;
+import org.apache.ignite.internal.thread.NamedThreadFactory;
 import org.apache.ignite.internal.tx.TxManager;
 import org.apache.ignite.internal.util.ByteUtils;
 import org.apache.ignite.internal.util.IgniteObjectName;
 import org.apache.ignite.internal.util.IgniteSpinBusyLock;
+import org.apache.ignite.lang.ByteArray;
+import org.apache.ignite.lang.IgniteBiTuple;
 import org.apache.ignite.lang.IgniteException;
+import org.apache.ignite.lang.IgniteInternalException;
 import org.apache.ignite.lang.IgniteLogger;
 import org.apache.ignite.lang.IgniteStringFormatter;
 import org.apache.ignite.lang.IgniteSystemProperties;
@@ -93,6 +112,10 @@ import org.apache.ignite.lang.TableNotFoundException;
 import org.apache.ignite.network.ClusterNode;
 import org.apache.ignite.network.NetworkAddress;
 import org.apache.ignite.network.TopologyService;
+import org.apache.ignite.raft.client.Peer;
+import org.apache.ignite.raft.client.service.RaftGroupListener;
+import org.apache.ignite.raft.client.service.RaftGroupService;
+import org.apache.ignite.raft.jraft.util.Utils;
 import org.apache.ignite.table.Table;
 import org.apache.ignite.table.manager.IgniteTables;
 import org.jetbrains.annotations.NotNull;
@@ -127,6 +150,9 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
     /** Transaction manager. */
     private final TxManager txManager;
 
+    /** Meta storage manager. */
+    private final MetaStorageManager metaStorageMgr;
+
     /** Data storage manager. */
     private final DataStorageManager dataStorageMgr;
 
@@ -151,6 +177,12 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
     /** Schema manager. */
     private final SchemaManager schemaManager;
 
+    /** Executor for scheduling retries of a rebalance. */
+    private final ScheduledExecutorService rebalanceScheduler;
+
+    /** Rebalance scheduler pool size. */
+    private static final int REBALANCE_SCHEDULER_POOL_SIZE = Math.min(Utils.cpus() * 3, 20);
+
     /**
      * Creates a new table manager.
      *
@@ -170,6 +202,7 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
             TopologyService topologyService,
             TxManager txManager,
             DataStorageManager dataStorageMgr,
+            MetaStorageManager metaStorageMgr,
             SchemaManager schemaManager
     ) {
         this.tablesCfg = tablesCfg;
@@ -177,6 +210,7 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
         this.baselineMgr = baselineMgr;
         this.txManager = txManager;
         this.dataStorageMgr = dataStorageMgr;
+        this.metaStorageMgr = metaStorageMgr;
         this.schemaManager = schemaManager;
 
         netAddrResolver = addr -> {
@@ -191,14 +225,19 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
         clusterNodeResolver = topologyService::getByAddress;
 
         tablesByIdVv = new VersionedValue<>(null, HashMap::new);
+
+        rebalanceScheduler = new ScheduledThreadPoolExecutor(REBALANCE_SCHEDULER_POOL_SIZE,
+                new NamedThreadFactory("rebalance-scheduler"));
     }
 
     /** {@inheritDoc} */
     @Override
     public void start() {
-        ((ExtendedTableConfiguration) tablesCfg.tables().any()).assignments().listen(assignmentsCtx -> {
-            return onUpdateAssignments(assignmentsCtx);
-        });
+        tablesCfg.tables().any().replicas().listen(this::onUpdateReplicas);
+
+        registerRebalanceListeners();
+
+        ((ExtendedTableConfiguration) tablesCfg.tables().any()).assignments().listen(this::onUpdateAssignments);
 
         tablesCfg.tables().listenElements(new ConfigurationNamedListListener<>() {
             @Override
@@ -310,6 +349,45 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
         return CompletableFuture.completedFuture(null);
     }
 
+    /**
+     * Listener of replicas configuration changes.
+     *
+     * @param replicasCtx Replicas configuration event context.
+     * @return A future, which will be completed, when event processed by listener.
+     */
+    private CompletableFuture<?> onUpdateReplicas(ConfigurationNotificationEvent<Integer> replicasCtx) {
+        if (!busyLock.enterBusy()) {
+            return CompletableFuture.completedFuture(new NodeStoppingException());
+        }
+
+        try {
+            if (replicasCtx.oldValue() != null && replicasCtx.oldValue() > 0) {
+                TableConfiguration tblCfg = replicasCtx.config(TableConfiguration.class);
+
+                int partCnt = tblCfg.partitions().value();
+
+                int newReplicas = replicasCtx.newValue();
+
+                CompletableFuture<?>[] futures = new CompletableFuture<?>[partCnt];
+
+                for (int i = 0; i < partCnt; i++) {
+                    String partId = partitionRaftGroupName(((ExtendedTableConfiguration) tblCfg).id().value(), i);
+
+                    futures[i] = updatePendingAssignmentsKeys(
+                            partId, baselineMgr.nodes(),
+                            partCnt, newReplicas,
+                            replicasCtx.storageRevision(), metaStorageMgr, i);
+                }
+
+                return CompletableFuture.allOf(futures);
+            } else {
+                return CompletableFuture.completedFuture(null);
+            }
+        } finally {
+            busyLock.leaveBusy();
+        }
+    }
+
     /**
      * Listener of assignment configuration changes.
      *
@@ -321,6 +399,7 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
             return failedFuture(new NodeStoppingException());
         }
 
+
         try {
             updateAssignmentInternal(assignmentsCtx);
         } finally {
@@ -361,14 +440,10 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
         for (int i = 0; i < partitions; i++) {
             int partId = i;
 
-            List<ClusterNode> oldPartitionAssignment = oldAssignments == null ? Collections.emptyList() :
+            List<ClusterNode> oldPartAssignment = oldAssignments == null ? Collections.emptyList() :
                     oldAssignments.get(partId);
 
-            List<ClusterNode> newPartitionAssignment = newAssignments.get(partId);
-
-            var toAdd = new HashSet<>(newPartitionAssignment);
-
-            toAdd.removeAll(oldPartitionAssignment);
+            List<ClusterNode> newPartAssignment = newAssignments.get(partId);
 
             // Create new raft nodes according to new assignments.
             tablesByIdVv.update(causalityToken, (tablesById, e) -> {
@@ -376,18 +451,27 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
                     return failedFuture(e);
                 }
 
-                InternalTable internalTable = tablesById.get(tblId).internalTable();
+                InternalTable internalTbl = tablesById.get(tblId).internalTable();
 
                 try {
                     futures[partId] = raftMgr.updateRaftGroup(
-                            raftGroupName(tblId, partId),
-                            newPartitionAssignment,
-                            toAdd,
+                            partitionRaftGroupName(tblId, partId),
+                            newPartAssignment,
+                            // Start new nodes only if this is a table creation;
+                            // other cases are covered by the rebalance logic.
+                            (oldPartAssignment.isEmpty()) ? newPartAssignment : Collections.emptyList(),
                             () -> new PartitionListener(tblId,
-                                    new VersionedRowStore(internalTable.storage().getOrCreatePartition(partId),
-                                            txManager))
+                                    new VersionedRowStore(internalTbl.storage().getOrCreatePartition(partId), txManager)),
+                            () -> new RebalanceRaftGroupEventsListener(
+                                    metaStorageMgr,
+                                    tablesCfg.tables().get(tablesById.get(tblId).name()),
+                                    partitionRaftGroupName(tblId, partId),
+                                    partId,
+                                    busyLock,
+                                    () -> internalTbl.partitionRaftGroupService(partId),
+                                    rebalanceScheduler)
                     ).thenAccept(
-                            updatedRaftGroupService -> ((InternalTableImpl) internalTable)
+                            updatedRaftGroupService -> ((InternalTableImpl) internalTbl)
                                     .updateInternalTableRaftGroupService(partId, updatedRaftGroupService)
                     ).exceptionally(th -> {
                         LOG.error("Failed to update raft groups one the node", th);
@@ -422,12 +506,14 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
                 table.internalTable().close();
 
                 for (int p = 0; p < table.internalTable().partitions(); p++) {
-                    raftMgr.stopRaftGroup(raftGroupName(table.tableId(), p));
+                    raftMgr.stopRaftGroup(partitionRaftGroupName(table.tableId(), p));
                 }
             } catch (Exception e) {
                 LOG.error("Failed to stop a table {}", e, table.name());
             }
         }
+
+        shutdownAndAwaitTermination(rebalanceScheduler, 10, TimeUnit.SECONDS);
     }
 
     /**
@@ -446,6 +532,7 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
 
         tableStorage.start();
 
+
         InternalTableImpl internalTable = new InternalTableImpl(name, tblId, new Int2ObjectOpenHashMap<>(partitions),
                 partitions, netAddrResolver, clusterNodeResolver, txManager, tableStorage);
 
@@ -499,7 +586,7 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
             int partitions = assignment.size();
 
             for (int p = 0; p < partitions; p++) {
-                raftMgr.stopRaftGroup(raftGroupName(tblId, p));
+                raftMgr.stopRaftGroup(partitionRaftGroupName(tblId, p));
             }
 
             tablesByIdVv.update(causalityToken, (previousVal, e) -> {
@@ -537,7 +624,7 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
      * @return A RAFT group name.
      */
     @NotNull
-    private String raftGroupName(UUID tblId, int partition) {
+    private String partitionRaftGroupName(UUID tblId, int partition) {
         return tblId + "_part_" + partition;
     }
 
@@ -1119,152 +1206,160 @@ public class TableManager extends Producer<TableEvent, TableEventParameters> imp
     }
 
     /**
-     * Sets the nodes as baseline for all tables created by the manager.
-     *
-     * @param nodes New baseline nodes.
-     * @throws NodeStoppingException If an implementation stopped before the method was invoked.
+     * Register the new meta storage listener for changes in the rebalance-specific keys.
      */
-    public void setBaseline(Set<String> nodes) throws NodeStoppingException {
-        if (!busyLock.enterBusy()) {
-            throw new NodeStoppingException();
-        }
-        try {
-            setBaselineInternal(nodes);
-        } finally {
-            busyLock.leaveBusy();
-        }
-    }
+    private void registerRebalanceListeners() {
+        metaStorageMgr.registerWatchByPrefix(ByteArray.fromString(PENDING_ASSIGNMENTS_PREFIX), new WatchListener() {
+            @Override
+            public boolean onUpdate(@NotNull WatchEvent evt) {
+                if (!busyLock.enterBusy()) {
+                    throw new IgniteInternalException(new NodeStoppingException());
+                }
 
-    /**
-     * Internal method for setting a baseline.
-     *
-     * @param nodes Names of baseline nodes.
-     */
-    private void setBaselineInternal(Set<String> nodes) {
-        if (nodes == null || nodes.isEmpty()) {
-            throw new IgniteException("New baseline can't be null or empty");
-        }
+                try {
+                    assert evt.single();
 
-        var currClusterMembers = new HashSet<>(baselineMgr.nodes());
+                    Entry pendingAssignmentsWatchEvent = evt.entryEvent().newEntry();
 
-        var currClusterMemberNames =
-                currClusterMembers.stream().map(ClusterNode::name).collect(Collectors.toSet());
+                    if (pendingAssignmentsWatchEvent.value() == null) {
+                        return true;
+                    }
 
-        for (String nodeName : nodes) {
-            if (!currClusterMemberNames.contains(nodeName)) {
-                throw new IgniteException("Node '" + nodeName + "' not in current network cluster membership. "
-                        + " Adding not alive nodes is not supported yet.");
-            }
-        }
+                    int part = extractPartitionNumber(pendingAssignmentsWatchEvent.key());
+                    UUID tblId = extractTableId(pendingAssignmentsWatchEvent.key(), PENDING_ASSIGNMENTS_PREFIX);
 
-        var newBaseline = currClusterMembers
-                .stream().filter(n -> nodes.contains(n.name())).collect(Collectors.toSet());
+                    String partId = partitionRaftGroupName(tblId, part);
 
-        updateAssignments(currClusterMembers);
+                    // Assignments of the pending rebalance that we received through the meta storage watch mechanism.
+                    List<ClusterNode> newPeers = ((List<ClusterNode>) ByteUtils.fromBytes(pendingAssignmentsWatchEvent.value()));
 
-        if (!newBaseline.equals(currClusterMembers)) {
-            updateAssignments(newBaseline);
-        }
-    }
+                    var pendingAssignments = metaStorageMgr.get(pendingPartAssignmentsKey(partId)).join();
 
-    /**
-     * Update assignments for all current tables according to input nodes list. These approach has known issues {@link
-     * Ignite#setBaseline(Set)}.
-     *
-     * @param clusterNodes Set of nodes for assignment.
-     */
-    private void updateAssignments(Set<ClusterNode> clusterNodes) {
-        var setBaselineFut = new CompletableFuture<>();
+                    assert pendingAssignmentsWatchEvent.revision() <= pendingAssignments.revision()
+                            : "Meta Storage watch cannot notify about an event with the revision that is more than the actual revision.";
 
-        var changePeersQueue = new ArrayList<Supplier<CompletableFuture<Void>>>();
+                    TableImpl tbl = tablesByIdVv.latest().get(tblId);
 
-        tablesCfg.tables()
-                .change(tbls -> {
-                    changePeersQueue.clear();
+                    ExtendedTableConfiguration tblCfg = (ExtendedTableConfiguration) tablesCfg.tables().get(tbl.name());
 
-                    for (int i = 0; i < tbls.size(); i++) {
-                        tbls.createOrUpdate(tbls.get(i).name(), changeX -> {
-                            ExtendedTableChange change = (ExtendedTableChange) changeX;
-                            byte[] currAssignments = change.assignments();
+                    Supplier<RaftGroupListener> raftGrpLsnrSupplier = () -> new PartitionListener(tblId,
+                            new VersionedRowStore(
+                                    tbl.internalTable().storage().getOrCreatePartition(part), txManager));
 
-                            List<List<ClusterNode>> recalculatedAssignments = AffinityUtils.calculateAssignments(
-                                    clusterNodes,
-                                    change.partitions(),
-                                    change.replicas());
+                    Supplier<RaftGroupEventsListener> raftGrpEvtsLsnrSupplier = () -> new RebalanceRaftGroupEventsListener(
+                            metaStorageMgr,
+                            tblCfg,
+                            partId,
+                            part,
+                            busyLock,
+                            () -> tbl.internalTable().partitionRaftGroupService(part),
+                            rebalanceScheduler);
 
-                            if (!recalculatedAssignments.equals(ByteUtils.fromBytes(currAssignments))) {
-                                change.changeAssignments(ByteUtils.toBytes(recalculatedAssignments));
+                    // Stable assignments from the meta storage, whose revision is bounded by the current pending event.
+                    byte[] stableAssignments = metaStorageMgr.get(stablePartAssignmentsKey(partId),
+                            pendingAssignmentsWatchEvent.revision()).join().value();
 
-                                changePeersQueue.add(() ->
-                                        updateRaftTopology(
-                                                (List<List<ClusterNode>>) ByteUtils.fromBytes(currAssignments),
-                                                recalculatedAssignments,
-                                                change.id()));
-                            }
-                        });
+                    List<ClusterNode> assignments = stableAssignments == null
+                            // This is for the case when the first rebalance occurs.
+                            ? ((List<List<ClusterNode>>) ByteUtils.fromBytes(tblCfg.assignments().value())).get(part)
+                            : (List<ClusterNode>) ByteUtils.fromBytes(stableAssignments);
+
+                    var deltaPeers = newPeers.stream()
+                            .filter(p -> !assignments.contains(p))
+                            .collect(Collectors.toList());
+
+                    try {
+                        raftMgr.startRaftGroupNode(partId, assignments, deltaPeers, raftGrpLsnrSupplier,
+                                raftGrpEvtsLsnrSupplier);
+                    } catch (NodeStoppingException e) {
+                        // no-op
                     }
-                })
-                .thenCompose((v) -> {
-                    CompletableFuture<?>[] changePeersFutures = new CompletableFuture<?>[changePeersQueue.size()];
 
-                    int i = 0;
+                    // Do not change peers of the raft group if this is a stale event.
+                    // Note that the raft node is started above to keep the starting and stopping of raft nodes consistent.
+                    if (pendingAssignmentsWatchEvent.revision() < pendingAssignments.revision()) {
+                        return true;
+                    }
+
+                    var newNodes = newPeers.stream().map(n -> new Peer(n.address())).collect(Collectors.toList());
+
+                    RaftGroupService partGrpSvc = tbl.internalTable().partitionRaftGroupService(part);
 
-                    for (Supplier<CompletableFuture<Void>> task : changePeersQueue) {
-                        changePeersFutures[i++] = task.get();
+                    IgniteBiTuple<Peer, Long> leaderWithTerm = partGrpSvc.refreshAndGetLeaderWithTerm().join();
+
+                    ClusterNode localMember = raftMgr.server().clusterService().topologyService().localMember();
+
+                    // Run the raft configuration update only if this node is the leader.
+                    if (localMember.address().equals(leaderWithTerm.get1().address())) {
+                        partGrpSvc.changePeersAsync(newNodes, leaderWithTerm.get2()).join();
                     }
 
-                    return CompletableFuture.allOf(changePeersFutures);
-                })
-                .whenComplete((res, th) -> {
-                    if (th != null) {
-                        setBaselineFut.completeExceptionally(th);
-                    } else {
-                        setBaselineFut.complete(null);
+                    return true;
+                } finally {
+                    busyLock.leaveBusy();
+                }
+            }
+
+            @Override
+            public void onError(@NotNull Throwable e) {
+                LOG.error("Error while processing pending assignments event", e);
+            }
+        });
+
+        metaStorageMgr.registerWatchByPrefix(ByteArray.fromString(STABLE_ASSIGNMENTS_PREFIX), new WatchListener() {
+            @Override
+            public boolean onUpdate(@NotNull WatchEvent evt) {
+                if (!busyLock.enterBusy()) {
+                    throw new IgniteInternalException(new NodeStoppingException());
+                }
+
+                try {
+                    assert evt.single();
+
+                    Entry stableAssignmentsWatchEvent = evt.entryEvent().newEntry();
+
+                    if (stableAssignmentsWatchEvent.value() == null) {
+                        return true;
                     }
-                });
 
-        setBaselineFut.join();
-    }
+                    int part = extractPartitionNumber(stableAssignmentsWatchEvent.key());
+                    UUID tblId = extractTableId(stableAssignmentsWatchEvent.key(), STABLE_ASSIGNMENTS_PREFIX);
 
-    /**
-     * Update raft groups of table partitions to new peers list.
-     *
-     * @param oldAssignments Old assignment.
-     * @param newAssignments New assignment.
-     * @param tblId Table ID.
-     * @return Future, which completes, when update finished.
-     */
-    private CompletableFuture<Void> updateRaftTopology(
-            List<List<ClusterNode>> oldAssignments,
-            List<List<ClusterNode>> newAssignments,
-            UUID tblId) {
-        CompletableFuture<?>[] futures = new CompletableFuture<?>[oldAssignments.size()];
+                    String partId = partitionRaftGroupName(tblId, part);
 
-        // TODO: IGNITE-15554 Add logic for assignment recalculation in case of partitions or replicas changes
-        // TODO: Until IGNITE-15554 is implemented it's safe to iterate over partitions and replicas cause there will
-        // TODO: be exact same amount of partitions and replicas for both old and new assignments
-        for (int i = 0; i < oldAssignments.size(); i++) {
-            final int p = i;
+                    var stableAssignments = (List<ClusterNode>) ByteUtils.fromBytes(stableAssignmentsWatchEvent.value());
 
-            List<ClusterNode> oldPartitionAssignment = oldAssignments.get(p);
-            List<ClusterNode> newPartitionAssignment = newAssignments.get(p);
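+                    // Pending assignments are read at the revision of the stable-assignments event to get a consistent view.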
+                    byte[] pendingFromMetastorage = metaStorageMgr.get(pendingPartAssignmentsKey(partId),
+                            stableAssignmentsWatchEvent.revision()).join().value();
 
-            try {
-                futures[i] = raftMgr.changePeers(
-                        raftGroupName(tblId, p),
-                        oldPartitionAssignment,
-                        newPartitionAssignment
-                ).exceptionally(th -> {
-                    LOG.error("Failed to update raft peers for group " + raftGroupName(tblId, p)
-                            + "from " + oldPartitionAssignment + " to " + newPartitionAssignment, th);
-                    return null;
-                });
-            } catch (NodeStoppingException e) {
-                throw new AssertionError("Loza was stopped before Table manager", e);
+                    List<ClusterNode> pendingAssignments = pendingFromMetastorage == null
+                            ? Collections.emptyList()
+                            : (List<ClusterNode>) ByteUtils.fromBytes(pendingFromMetastorage);
+
+                    List<ClusterNode> appliedPeers = Stream.concat(stableAssignments.stream(), pendingAssignments.stream())
+                            .collect(Collectors.toList());
+
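+                    // If the local node is no longer part of either the stable or the pending assignments,
+                    // its raft group node for this partition is not needed anymore and is stopped.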
+                    try {
+                        ClusterNode localMember = raftMgr.server().clusterService().topologyService().localMember();
+
+                        if (!appliedPeers.contains(localMember)) {
+                            raftMgr.stopRaftGroup(partId);
+                        }
+                    } catch (NodeStoppingException e) {
+                        // no-op
+                    }
+
+                    return true;
+                } finally {
+                    busyLock.leaveBusy();
+                }
             }
-        }
 
-        return CompletableFuture.allOf(futures);
+            @Override
+            public void onError(@NotNull Throwable e) {
+                LOG.error("Error while processing stable assignments event", e);
+            }
+        });
     }
 
     /**
diff --git a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/raft/RebalanceRaftGroupEventsListener.java b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/raft/RebalanceRaftGroupEventsListener.java
new file mode 100644
index 000000000..f625fed47
--- /dev/null
+++ b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/raft/RebalanceRaftGroupEventsListener.java
@@ -0,0 +1,357 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.table.distributed.raft;
+
+import static org.apache.ignite.internal.metastorage.client.Conditions.notExists;
+import static org.apache.ignite.internal.metastorage.client.Conditions.revision;
+import static org.apache.ignite.internal.metastorage.client.Operations.ops;
+import static org.apache.ignite.internal.metastorage.client.Operations.put;
+import static org.apache.ignite.internal.metastorage.client.Operations.remove;
+import static org.apache.ignite.internal.utils.RebalanceUtil.pendingPartAssignmentsKey;
+import static org.apache.ignite.internal.utils.RebalanceUtil.plannedPartAssignmentsKey;
+import static org.apache.ignite.internal.utils.RebalanceUtil.stablePartAssignmentsKey;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.Supplier;
+import org.apache.ignite.configuration.schemas.table.TableConfiguration;
+import org.apache.ignite.internal.configuration.schema.ExtendedTableChange;
+import org.apache.ignite.internal.metastorage.MetaStorageManager;
+import org.apache.ignite.internal.metastorage.client.Entry;
+import org.apache.ignite.internal.metastorage.client.If;
+import org.apache.ignite.internal.raft.server.RaftGroupEventsListener;
+import org.apache.ignite.internal.util.ByteUtils;
+import org.apache.ignite.internal.util.IgniteSpinBusyLock;
+import org.apache.ignite.lang.ByteArray;
+import org.apache.ignite.lang.IgniteInternalException;
+import org.apache.ignite.lang.IgniteLogger;
+import org.apache.ignite.network.ClusterNode;
+import org.apache.ignite.network.NetworkAddress;
+import org.apache.ignite.raft.client.Peer;
+import org.apache.ignite.raft.client.service.RaftGroupService;
+import org.apache.ignite.raft.jraft.Status;
+import org.apache.ignite.raft.jraft.entity.PeerId;
+import org.apache.ignite.raft.jraft.error.RaftError;
+
+/**
+ * Listener for raft group events that provides error handling for the rebalance process
+ * and starts a new rebalance after the current one has finished.
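+ *
+ * <p>A rough wiring sketch (names are illustrative, not the exact TableManager code):
+ * <pre>{@code
+ * RaftGroupEventsListener rebalanceLsnr = new RebalanceRaftGroupEventsListener(
+ *         metaStorageMgr, tblConfiguration, partId, partNum, busyLock,
+ *         () -> table.internalTable().partitionRaftGroupService(partNum), rebalanceScheduler);
+ * }</pre>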
+ */
+public class RebalanceRaftGroupEventsListener implements RaftGroupEventsListener {
+    /** Ignite logger. */
+    private static final IgniteLogger LOG = IgniteLogger.forClass(RebalanceRaftGroupEventsListener.class);
+
+    /** Meta storage manager. */
+    private final MetaStorageManager metaStorageMgr;
+
+    /** Table configuration instance. */
+    private final TableConfiguration tblConfiguration;
+
+    /** Unique partition id. */
+    private final String partId;
+
+    /** Partition number. */
+    private final int partNum;
+
+    /** Busy lock of parent component for synchronous stop. */
+    private final IgniteSpinBusyLock busyLock;
+
+    /** Executor for scheduling rebalance retries. */
+    private final ScheduledExecutorService rebalanceScheduler;
+
+    /** Supplier of the raft group client for the rebalanced partition. */
+    private final Supplier<RaftGroupService> raftGroupServiceSupplier;
+
+    /** Number of attempts so far to retry the current rebalance in case of errors. */
+    private final AtomicInteger rebalanceAttempts = new AtomicInteger(0);
+
+    /** Maximum number of retries of the current rebalance in case of errors. */
+    private static final int REBALANCE_RETRY_THRESHOLD = 10;
+
+    /** Delay between an unsuccessful rebalance attempt and the next retry, in milliseconds. */
+    public static final int REBALANCE_RETRY_DELAY_MS = 200;
+
+    /**
+     * Constructs new listener.
+     *
+     * @param metaStorageMgr Meta storage manager.
+     * @param tblConfiguration Table configuration.
+     * @param partId Partition id.
+     * @param partNum Partition number.
+     * @param busyLock Busy lock of the parent component for synchronous stop.
+     * @param raftGroupServiceSupplier Supplier of the raft group client for the rebalanced partition.
+     * @param rebalanceScheduler Executor for scheduling rebalance retries.
+     */
+    public RebalanceRaftGroupEventsListener(
+            MetaStorageManager metaStorageMgr,
+            TableConfiguration tblConfiguration,
+            String partId,
+            int partNum,
+            IgniteSpinBusyLock busyLock,
+            Supplier<RaftGroupService> raftGroupServiceSupplier,
+            ScheduledExecutorService rebalanceScheduler) {
+        this.metaStorageMgr = metaStorageMgr;
+        this.tblConfiguration = tblConfiguration;
+        this.partId = partId;
+        this.partNum = partNum;
+        this.busyLock = busyLock;
+        this.raftGroupServiceSupplier = raftGroupServiceSupplier;
+        this.rebalanceScheduler = rebalanceScheduler;
+    }
+
+    /** {@inheritDoc} */
+    @Override
+    public void onLeaderElected(long term) {
+        if (!busyLock.enterBusy()) {
+            return;
+        }
+
+        try {
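+            // A newly elected leader checks whether a rebalance is already in progress (the pending key is not empty)
+            // and, if so, resumes it by asking the raft group to switch to the pending peers.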
+            rebalanceScheduler.schedule(() -> {
+                if (!busyLock.enterBusy()) {
+                    return;
+                }
+
+                try {
+                    rebalanceAttempts.set(0);
+
+                    metaStorageMgr.get(pendingPartAssignmentsKey(partId))
+                            .thenCompose(pendingEntry -> {
+                                if (!pendingEntry.empty()) {
+                                    List<ClusterNode> pendingNodes = (List<ClusterNode>) ByteUtils.fromBytes(pendingEntry.value());
+
+                                    return raftGroupServiceSupplier.get().changePeersAsync(clusterNodesToPeers(pendingNodes), term);
+                                } else {
+                                    return CompletableFuture.completedFuture(null);
+                                }
+                            }).get();
+                } catch (InterruptedException | ExecutionException e) {
+                    // TODO: IGNITE-17013 errors during this call should be handled by retry logic
+                    LOG.error("Couldn't start rebalance for partition {} of table {} on the newly elected leader for term {}",
+                            e, partNum, tblConfiguration.name().value(), term);
+                } finally {
+                    busyLock.leaveBusy();
+                }
+            }, 0, TimeUnit.MILLISECONDS);
+        } finally {
+            busyLock.leaveBusy();
+        }
+    }
+
+    /** {@inheritDoc} */
+    @Override
+    public void onNewPeersConfigurationApplied(List<PeerId> peers) {
+        if (!busyLock.enterBusy()) {
+            return;
+        }
+
+        try {
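+            // The new configuration is processed asynchronously on the rebalance scheduler rather than on the calling thread.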
+            rebalanceScheduler.schedule(() -> {
+                if (!busyLock.enterBusy()) {
+                    return;
+                }
+
+                try {
+                    doOnNewPeersConfigurationApplied(peers);
+                } finally {
+                    busyLock.leaveBusy();
+                }
+            }, 0, TimeUnit.MILLISECONDS);
+        } finally {
+            busyLock.leaveBusy();
+        }
+    }
+
+    /** {@inheritDoc} */
+    @Override
+    public void onReconfigurationError(Status status, List<PeerId> peers, long term) {
+        if (!busyLock.enterBusy()) {
+            return;
+        }
+
+        try {
+            if (status == null) {
+                // Leader stepped down, so RebalanceRaftGroupEventsListener.onLeaderElected is expected to be called on the new leader.
+                LOG.info("Leader stepped down during the current rebalance for the partId = {}.", partId);
+
+                return;
+            }
+
+            assert status.getRaftError() == RaftError.ECATCHUP : "According to the JRaft protocol, RaftError.ECATCHUP is expected.";
+
+            LOG.warn("Error occurred during the current rebalance for partId = {}.", partId);
+
+            if (rebalanceAttempts.incrementAndGet() < REBALANCE_RETRY_THRESHOLD) {
+                scheduleChangePeers(peers, term);
+            } else {
+                LOG.error("The number of retries of the rebalance for the partId = {} exceeded the threshold = {}.", partId,
+                        REBALANCE_RETRY_THRESHOLD);
+
+                // TODO: currently we just retry the changePeers intent for the rebalance indefinitely, until a new leader is elected,
+                // TODO: but a rebalance cancel mechanism should be implemented. https://issues.apache.org/jira/browse/IGNITE-17056
+                scheduleChangePeers(peers, term);
+            }
+        } finally {
+            busyLock.leaveBusy();
+        }
+    }
+
+    /**
+     * Schedules changing peers according to the current rebalance.
+     *
+     * @param peers Peers to change configuration for a raft group.
+     * @param term Current known leader term.
+     */
+    private void scheduleChangePeers(List<PeerId> peers, long term) {
+        rebalanceScheduler.schedule(() -> {
+            if (!busyLock.enterBusy()) {
+                return;
+            }
+
+            LOG.info("Starting attempt {} to retry the current rebalance for the partId = {}.", rebalanceAttempts.get(), partId);
+
+            try {
+                raftGroupServiceSupplier.get().changePeersAsync(peerIdsToPeers(peers), term).get();
+            } catch (InterruptedException | ExecutionException e) {
+                // TODO: IGNITE-17013 errors during this call should be handled by retry logic
+                LOG.error("Error during the rebalance retry for the partId = {}", e, partId);
+            } finally {
+                busyLock.leaveBusy();
+            }
+        }, REBALANCE_RETRY_DELAY_MS, TimeUnit.MILLISECONDS);
+    }
+
+    /**
+     * Implementation of {@link RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied(List)}.
+     *
+     * @param peers Peers that were applied to the new raft group configuration.
+     */
+    private void doOnNewPeersConfigurationApplied(List<PeerId> peers) {
+        try {
+            Map<ByteArray, Entry> keys = metaStorageMgr.getAll(
+                    Set.of(
+                            plannedPartAssignmentsKey(partId),
+                            pendingPartAssignmentsKey(partId),
+                            stablePartAssignmentsKey(partId))).get();
+
+            Entry plannedEntry = keys.get(plannedPartAssignmentsKey(partId));
+
+            List<ClusterNode> appliedPeers = resolveClusterNodes(peers,
+                    keys.get(pendingPartAssignmentsKey(partId)).value(), keys.get(stablePartAssignmentsKey(partId)).value());
+
+            tblConfiguration.change(ch -> {
+                List<List<ClusterNode>> assignments =
+                        (List<List<ClusterNode>>) ByteUtils.fromBytes(((ExtendedTableChange) ch).assignments());
+                assignments.set(partNum, appliedPeers);
+                ((ExtendedTableChange) ch).changeAssignments(ByteUtils.toBytes(assignments));
+            }).get();
+
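+            // If a planned rebalance is queued, promote it: the applied peers become the stable assignments,
+            // the planned assignments become pending, and the planned key is removed. The update is retried
+            // if the planned key was changed concurrently (the revision check fails).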
+            if (plannedEntry.value() != null) {
+                if (!metaStorageMgr.invoke(If.iif(
+                        revision(plannedPartAssignmentsKey(partId)).eq(plannedEntry.revision()),
+                        ops(
+                                put(stablePartAssignmentsKey(partId), ByteUtils.toBytes(appliedPeers)),
+                                put(pendingPartAssignmentsKey(partId), plannedEntry.value()),
+                                remove(plannedPartAssignmentsKey(partId)))
+                                .yield(true),
+                        ops().yield(false))).get().getAsBoolean()) {
+                    doOnNewPeersConfigurationApplied(peers);
+                }
+            } else {
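+                // No planned rebalance is queued: write the applied peers as the stable assignments and remove
+                // the pending key. The update is retried if a planned key appeared concurrently.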
+                if (!metaStorageMgr.invoke(If.iif(
+                        notExists(plannedPartAssignmentsKey(partId)),
+                        ops(put(stablePartAssignmentsKey(partId), ByteUtils.toBytes(appliedPeers)),
+                                remove(pendingPartAssignmentsKey(partId))).yield(true),
+                        ops().yield(false))).get().getAsBoolean()) {
+                    doOnNewPeersConfigurationApplied(peers);
+                }
+            }
+
+            rebalanceAttempts.set(0);
+        } catch (InterruptedException | ExecutionException e) {
+            // TODO: IGNITE-17013 errors during this call should be handled by retry logic
+            LOG.error("Couldn't commit new partition configuration to metastore for table = {}, partition = {}",
+                    e, tblConfiguration.name(), partNum);
+        }
+    }
+
+    private static List<ClusterNode> resolveClusterNodes(
+            List<PeerId> peers, byte[] pendingAssignments, byte[] stableAssignments) {
+        Map<NetworkAddress, ClusterNode> resolveRegistry = new HashMap<>();
+
+        if (pendingAssignments != null) {
+            ((List<ClusterNode>) ByteUtils.fromBytes(pendingAssignments)).forEach(n -> resolveRegistry.put(n.address(), n));
+        }
+
+        if (stableAssignments != null) {
+            ((List<ClusterNode>) ByteUtils.fromBytes(stableAssignments)).forEach(n -> resolveRegistry.put(n.address(), n));
+        }
+
+        List<ClusterNode> resolvedNodes = new ArrayList<>(peers.size());
+
+        for (PeerId p : peers) {
+            var addr = NetworkAddress.from(p.getEndpoint().getIp() + ":" + p.getEndpoint().getPort());
+
+            if (resolveRegistry.containsKey(addr)) {
+                resolvedNodes.add(resolveRegistry.get(addr));
+            } else {
+                throw new IgniteInternalException("Can't find appropriate cluster node for raft group peer: " + p);
+            }
+        }
+
+        return resolvedNodes;
+    }
+
+    /**
+     * Transforms list of cluster nodes to the list of peers.
+     *
+     * @param nodes List of cluster nodes to transform.
+     * @return List of transformed peers.
+     */
+    private static List<Peer> clusterNodesToPeers(List<ClusterNode> nodes) {
+        List<Peer> peers = new ArrayList<>(nodes.size());
+
+        for (ClusterNode node : nodes) {
+            peers.add(new Peer(node.address()));
+        }
+
+        return peers;
+    }
+
+    /**
+     * Transforms list of peerIds to list of peers.
+     *
+     * @param peerIds List of peerIds to transform.
+     * @return List of transformed peers.
+     */
+    private static List<Peer> peerIdsToPeers(List<PeerId> peerIds) {
+        List<Peer> peers = new ArrayList<>(peerIds.size());
+
+        for (PeerId peerId : peerIds) {
+            peers.add(new Peer(NetworkAddress.from(peerId.getEndpoint().toString())));
+        }
+
+        return peers;
+    }
+}
diff --git a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/storage/InternalTableImpl.java b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/storage/InternalTableImpl.java
index c65eb0b46..14acc2932 100644
--- a/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/storage/InternalTableImpl.java
+++ b/modules/table/src/main/java/org/apache/ignite/internal/table/distributed/storage/InternalTableImpl.java
@@ -414,6 +414,21 @@ public class InternalTableImpl implements InternalTable {
         return clusterNodeResolver.apply(raftGroupService.leader().address());
     }
 
+    /** {@inheritDoc} */
+    @Override
+    public RaftGroupService partitionRaftGroupService(int partition) {
+        RaftGroupService raftGroupService = partitionMap.get(partition);
+        if (raftGroupService == null) {
+            throw new IgniteInternalException("No such partition " + partition + " in table " + tableName);
+        }
+
+        if (raftGroupService.leader() == null) {
+            raftGroupService.refreshLeader().join();
+        }
+
+        return raftGroupService;
+    }
+
     private void awaitLeaderInitialization() {
         List<CompletableFuture<Void>> futs = new ArrayList<>();
 
diff --git a/modules/table/src/main/java/org/apache/ignite/internal/utils/RebalanceUtil.java b/modules/table/src/main/java/org/apache/ignite/internal/utils/RebalanceUtil.java
new file mode 100644
index 000000000..90caa2bdd
--- /dev/null
+++ b/modules/table/src/main/java/org/apache/ignite/internal/utils/RebalanceUtil.java
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ignite.internal.utils;
+
+import static org.apache.ignite.internal.metastorage.client.CompoundCondition.and;
+import static org.apache.ignite.internal.metastorage.client.CompoundCondition.or;
+import static org.apache.ignite.internal.metastorage.client.Conditions.notExists;
+import static org.apache.ignite.internal.metastorage.client.Conditions.value;
+import static org.apache.ignite.internal.metastorage.client.Operations.ops;
+import static org.apache.ignite.internal.metastorage.client.Operations.put;
+import static org.apache.ignite.internal.metastorage.client.Operations.remove;
+
+import java.util.Collection;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import org.apache.ignite.internal.affinity.AffinityUtils;
+import org.apache.ignite.internal.metastorage.MetaStorageManager;
+import org.apache.ignite.internal.metastorage.client.If;
+import org.apache.ignite.internal.metastorage.client.StatementResult;
+import org.apache.ignite.internal.util.ByteUtils;
+import org.apache.ignite.lang.ByteArray;
+import org.apache.ignite.network.ClusterNode;
+import org.jetbrains.annotations.NotNull;
+
+/**
+ * Utility class with methods used by the rebalance process.
+ */
+public class RebalanceUtil {
+
+    /**
+     * Updates the keys related to the rebalance algorithm in the Meta Storage. Keys are specific to the partition.
+     *
+     * @param partId Unique identifier of a partition.
+     * @param baselineNodes Nodes in baseline.
+     * @param partitions Number of partitions in a table.
+     * @param replicas Number of replicas for a table.
+     * @param revision Revision of Meta Storage that is specific for the assignment update.
+     * @param metaStorageMgr Meta Storage manager.
+     * @param partNum Partition number.
+     * @return Future representing the result of updating keys in {@code metaStorageMgr}.
+     */
+    public static @NotNull CompletableFuture<StatementResult> updatePendingAssignmentsKeys(
+            String partId, Collection<ClusterNode> baselineNodes,
+            int partitions, int replicas, long revision, MetaStorageManager metaStorageMgr, int partNum) {
+        ByteArray partChangeTriggerKey = partChangeTriggerKey(partId);
+
+        ByteArray partAssignmentsPendingKey = pendingPartAssignmentsKey(partId);
+
+        ByteArray partAssignmentsPlannedKey = plannedPartAssignmentsKey(partId);
+
+        ByteArray partAssignmentsStableKey = stablePartAssignmentsKey(partId);
+
+        byte[] partAssignmentsBytes = ByteUtils.toBytes(
+                AffinityUtils.calculateAssignments(baselineNodes, partitions, replicas).get(partNum));
+
+        //    if empty(partition.change.trigger.revision) || partition.change.trigger.revision < event.revision:
+        //        if empty(partition.assignments.pending) && partition.assignments.stable != calcPartAssignments():
+        //            partition.assignments.pending = calcPartAssignments()
+        //            partition.change.trigger.revision = event.revision
+        //        else:
+        //            if partition.assignments.pending != calcPartAssignments
+        //                partition.assignments.planned = calcPartAssignments()
+        //                partition.change.trigger.revision = event.revision
+        //            else
+        //                remove(partition.assignments.planned)
+        //    else:
+        //        skip
+        var iif = If.iif(or(notExists(partChangeTriggerKey), value(partChangeTriggerKey).lt(ByteUtils.longToBytes(revision))),
+                If.iif(and(notExists(partAssignmentsPendingKey), value(partAssignmentsStableKey).ne(partAssignmentsBytes)),
+                        ops(
+                                put(partAssignmentsPendingKey, partAssignmentsBytes),
+                                put(partChangeTriggerKey, ByteUtils.longToBytes(revision))
+                        ).yield(),
+                        If.iif(value(partAssignmentsPendingKey).ne(partAssignmentsBytes),
+                                ops(
+                                        put(partAssignmentsPlannedKey, partAssignmentsBytes),
+                                        put(partChangeTriggerKey, ByteUtils.longToBytes(revision))
+                                ).yield(),
+                                ops(remove(partAssignmentsPlannedKey)).yield())),
+                ops().yield());
+
+        return metaStorageMgr.invoke(iif);
+    }
+
+    /** Key prefix for pending assignments. */
+    public static final String PENDING_ASSIGNMENTS_PREFIX = "assignments.pending.";
+
+    /** Key prefix for stable assignments. */
+    public static final String STABLE_ASSIGNMENTS_PREFIX = "assignments.stable.";
+
+    /**
+     * Key for the partition change trigger revision, used by the rebalance algorithm.
+     *
+     * @param partId Unique identifier of a partition.
+     * @return Key for a partition.
+     * @see <a href="https://github.com/apache/ignite-3/blob/main/modules/table/tech-notes/rebalance.md">Rebalance documentation</a>
+     */
+    public static ByteArray partChangeTriggerKey(String partId) {
+        return new ByteArray(partId + ".change.trigger");
+    }
+
+    /**
+     * Key for the pending assignments of a partition, used by the rebalance algorithm.
+     *
+     * @param partId Unique identifier of a partition.
+     * @return Key for a partition.
+     * @see <a href="https://github.com/apache/ignite-3/blob/main/modules/table/tech-notes/rebalance.md">Rebalance documentation</a>
+     */
+    public static ByteArray pendingPartAssignmentsKey(String partId) {
+        return new ByteArray(PENDING_ASSIGNMENTS_PREFIX + partId);
+    }
+
+    /**
+     * Key for the planned assignments of a partition, used by the rebalance algorithm.
+     *
+     * @param partId Unique identifier of a partition.
+     * @return Key for a partition.
+     * @see <a href="https://github.com/apache/ignite-3/blob/main/modules/table/tech-notes/rebalance.md">Rebalance documentation</a>
+     */
+    public static ByteArray plannedPartAssignmentsKey(String partId) {
+        return new ByteArray("assignments.planned." + partId);
+    }
+
+    /**
+     * Key for the stable assignments of a partition, used by the rebalance algorithm.
+     *
+     * @param partId Unique identifier of a partition.
+     * @return Key for a partition.
+     * @see <a href="https://github.com/apache/ignite-3/blob/main/modules/table/tech-notes/rebalance.md">Rebalance documentation</a>
+     */
+    public static ByteArray stablePartAssignmentsKey(String partId) {
+        return new ByteArray(STABLE_ASSIGNMENTS_PREFIX + partId);
+    }
+
+    /**
+     * Extracts the table id from a rebalance key of a partition.
+     *
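+     * For example (assuming the partition key layout {@code <prefix><tableId>_part_<N>}), the key
+     * {@code assignments.pending.<tableId>_part_1} yields the {@code <tableId>} part.
+     *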
+     * @param key Key.
+     * @param prefix Key prefix.
+     * @return Table id.
+     */
+    public static UUID extractTableId(ByteArray key, String prefix) {
+        var strKey = key.toString();
+
+        return UUID.fromString(strKey.substring(prefix.length(), strKey.indexOf("_part_")));
+    }
+
+    /**
+     * Extracts the partition number from a rebalance key of a partition.
+     *
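+     * For example (assuming the partition key layout {@code <prefix><tableId>_part_<N>}), the key
+     * {@code assignments.pending.<tableId>_part_1} yields {@code 1}.
+     *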
+     * @param key Key.
+     * @return Partition number.
+     */
+    public static int extractPartitionNumber(ByteArray key) {
+        var strKey = key.toString();
+
+        return Integer.parseInt(strKey.substring(strKey.indexOf("_part_") + "_part_".length()));
+    }
+}
diff --git a/modules/table/src/test/java/org/apache/ignite/internal/table/TableManagerTest.java b/modules/table/src/test/java/org/apache/ignite/internal/table/TableManagerTest.java
index 60cc92552..835df74c3 100644
--- a/modules/table/src/test/java/org/apache/ignite/internal/table/TableManagerTest.java
+++ b/modules/table/src/test/java/org/apache/ignite/internal/table/TableManagerTest.java
@@ -67,6 +67,7 @@ import org.apache.ignite.internal.configuration.schema.ExtendedTableView;
 import org.apache.ignite.internal.configuration.testframework.ConfigurationExtension;
 import org.apache.ignite.internal.configuration.testframework.InjectConfiguration;
 import org.apache.ignite.internal.configuration.testframework.InjectRevisionListenerHolder;
+import org.apache.ignite.internal.metastorage.MetaStorageManager;
 import org.apache.ignite.internal.pagememory.configuration.schema.UnsafeMemoryAllocatorConfigurationSchema;
 import org.apache.ignite.internal.raft.Loza;
 import org.apache.ignite.internal.schema.SchemaDescriptor;
@@ -87,8 +88,8 @@ import org.apache.ignite.internal.testframework.IgniteAbstractTest;
 import org.apache.ignite.internal.tx.LockManager;
 import org.apache.ignite.internal.tx.TxManager;
 import org.apache.ignite.internal.util.ByteUtils;
+import org.apache.ignite.lang.ByteArray;
 import org.apache.ignite.lang.IgniteException;
-import org.apache.ignite.lang.NodeStoppingException;
 import org.apache.ignite.network.ClusterNode;
 import org.apache.ignite.network.NetworkAddress;
 import org.apache.ignite.network.TopologyService;
@@ -152,6 +153,10 @@ public class TableManagerTest extends IgniteAbstractTest {
     @Mock(lenient = true)
     private LockManager lm;
 
+    /** Meta storage manager. */
+    @Mock
+    MetaStorageManager msm;
+
     /**
      * Revision listener holder. It uses for the test configurations:
      * <ul>
@@ -211,6 +216,8 @@ public class TableManagerTest extends IgniteAbstractTest {
             });
         };
 
+        when(msm.registerWatch(any(ByteArray.class), any())).thenReturn(CompletableFuture.completedFuture(1L));
+
         tblManagerFut = new CompletableFuture<>();
     }
 
@@ -232,7 +239,7 @@ public class TableManagerTest extends IgniteAbstractTest {
      */
     @Test
     public void testPreconfiguredTable() throws Exception {
-        when(rm.updateRaftGroup(any(), any(), any(), any())).thenAnswer(mock ->
+        when(rm.updateRaftGroup(any(), any(), any(), any(), any())).thenAnswer(mock ->
                 CompletableFuture.completedFuture(mock(RaftGroupService.class)));
 
         TableManager tableManager = createTableManager(tblManagerFut, false);
@@ -392,8 +399,6 @@ public class TableManagerTest extends IgniteAbstractTest {
 
         assertThrows(IgniteException.class, () -> tableManager.table(fakeTblId));
         assertThrows(IgniteException.class, () -> tableManager.tableAsync(fakeTblId));
-
-        assertThrows(NodeStoppingException.class, () -> tableManager.setBaseline(Collections.singleton("fakeNode0")));
     }
 
     /**
@@ -410,7 +415,7 @@ public class TableManagerTest extends IgniteAbstractTest {
 
         mockManagersAndCreateTable(scmTbl, tblManagerFut);
 
-        verify(rm, times(PARTITIONS)).updateRaftGroup(anyString(), any(), any(), any());
+        verify(rm, times(PARTITIONS)).updateRaftGroup(anyString(), any(), any(), any(), any());
 
         TableManager tableManager = tblManagerFut.join();
 
@@ -519,7 +524,7 @@ public class TableManagerTest extends IgniteAbstractTest {
             CompletableFuture<TableManager> tblManagerFut,
             Phaser phaser
     ) throws Exception {
-        when(rm.updateRaftGroup(any(), any(), any(), any())).thenAnswer(mock -> {
+        when(rm.updateRaftGroup(any(), any(), any(), any(), any())).thenAnswer(mock -> {
             RaftGroupService raftGrpSrvcMock = mock(RaftGroupService.class);
 
             when(raftGrpSrvcMock.leader()).thenReturn(new Peer(new NetworkAddress("localhost", 47500)));
@@ -624,6 +629,7 @@ public class TableManagerTest extends IgniteAbstractTest {
                 ts,
                 tm,
                 dsm = createDataStorageManager(configRegistry, workDir, pageMemoryEngineConfig),
+                msm,
                 sm = new SchemaManager(revisionUpdater, tblsCfg)
         );
 
diff --git a/modules/table/tech-notes/rebalance.md b/modules/table/tech-notes/rebalance.md
index c0d374b33..2f096b52c 100644
--- a/modules/table/tech-notes/rebalance.md
+++ b/modules/table/tech-notes/rebalance.md
@@ -32,7 +32,7 @@ Also, we will need the utility key:
 
 ## Operations, which can trigger rebalance
 Three types of events can trigger the rebalance:
-- API call of any special method like `org.apache.ignite.Ignite.setBaseline`, which will change baseline value in metastore (1 for all tables for now, but maybe it should be separate per table in future)
+- Change of the baseline metastore key (one key for all tables for now, but maybe it should be separate per table in the future)
 - Configuration change through `org.apache.ignite.configuration.schemas.table.TableChange.changeReplicas` produce metastore update event
 - Configuration change through `org.apache.ignite.configuration.schemas.table.TableChange.changePartitions` produce metastore update event (IMPORTANT: this type of trigger has additional difficulties because of cross raft group data migration and it is out of scope of this document)
 
@@ -96,6 +96,7 @@ metastoreInvoke: \\ atomic
         partition.assignments.pending = empty
     else:
         partition.assignments.pending = partition.assignments.planned
+        remove(partition.assignments.planned)
 ```
 
 Failover helpers (detailed failover scenarios must be developed in the future)