Posted to notifications@ignite.apache.org by GitBox <gi...@apache.org> on 2023/01/11 15:26:37 UTC

[GitHub] [ignite-3] denis-chudov commented on a diff in pull request #1493: IGNITE-18456 Explain threading model in corresponding README.md file f…

denis-chudov commented on code in PR #1493:
URL: https://github.com/apache/ignite-3/pull/1493#discussion_r1067125861


##########
modules/metastorage/README.md:
##########
@@ -0,0 +1,45 @@
+# Metastorage
+
+The module for storing and accessing metadata. It is linked to one other module:
+
+- metastorage-api - contains the classes that other components use to access the metastorage service.
+
+To avoid data loss, the storage is replicated using the RAFT consensus algorithm. Every cluster node has access to this distributed
+storage, but in terms of RAFT most of the members are learners and only a small number are voters. Typically, a small subset of the
+cluster's nodes makes up the raft-group of voters (the number of voters must be odd, usually 3 or 5). The remaining nodes listen to
+metadata updates but do not vote (in RAFT terms, they are called learners).
+
+## Threading model
+
+There is a single thread dedicated to notifying watchers. It is created in KeyValueStorage by the following executor:
+
+```java
+private ExecutorService watchExecutor = Executors.newSingleThreadExecutor(new NamedThreadFactory(
+        NamedThreadFactory.threadPrefix(nodeName, "metastorage-watch-executor"), LOG));
+```
+
+Additionally, the storage internally uses an executor with two threads for creating snapshots:
+
+```java
+private ExecutorService snapshotExecutor = Executors.newFixedThreadPool(2, new NamedThreadFactory(
+        NamedThreadFactory.threadPrefix(nodeName, "metastorage-snapshot-executor"), LOG));
+```
+
+To make it clear which node a particular thread belongs to, each executor includes the node name in its thread prefix.
+
+### Interface methods
+
+A number of Metastorage Manager operations, such as get(), getAll(), and invoke(), return futures. Those futures are completed when
+the corresponding RAFT command is completed in the Metastorage group. Although the entire replication process took place in the RAFT
+threads, this result appears in the RAFT client executor with prefix <NODE_NAME>%Raft-Group-Client. See RAFT module for more information

Review Comment:
   ```suggestion
   the corresponding RAFT command is completed in the Metastorage group. As the entire replication process takes place in RAFT
   threads, this result appears in the RAFT client executor with prefix <NODE_NAME>%Raft-Group-Client. See RAFT module for more information
   ```
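
For context on the thread-hopping described above, here is a self-contained, JDK-only sketch (an illustration, not the actual ignite-3 code) showing that a callback attached to a still-pending future runs on the thread that eventually completes it. The executor and its thread name merely mimic the <NODE_NAME>%Raft-Group-Client prefix from the README:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionThreadDemo {
    public static void main(String[] args) {
        // Stand-in for the RAFT client executor; only the thread name mimics
        // the <NODE_NAME>%Raft-Group-Client prefix, nothing here is ignite-3 wiring.
        ExecutorService raftClientExecutor = Executors.newSingleThreadExecutor(
                r -> new Thread(r, "node1%Raft-Group-Client-0"));

        CountDownLatch latch = new CountDownLatch(1);

        // The "replication" result is produced on the executor's thread.
        CompletableFuture<String> result = CompletableFuture.supplyAsync(() -> {
            awaitQuietly(latch); // keep the future pending until the callback is attached
            return "value";
        }, raftClientExecutor);

        // The future is still pending here, so this callback will run on the
        // completing thread, i.e. the Raft-Group-Client stand-in above.
        CompletableFuture<Void> done = result.thenAccept(v ->
                System.out.println("completed on: " + Thread.currentThread().getName()));

        latch.countDown();
        done.join();
        raftClientExecutor.shutdown();
    }

    private static void awaitQuietly(CountDownLatch latch) {
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```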



##########
modules/metastorage/README.md:
##########
@@ -0,0 +1,45 @@
+# Metastorage
+
+The module for storing and accessing metadata. It is linked to one other module:
+
+- metastorage-api - contains the classes that other components use to access the metastorage service.
+
+To avoid data loss, the storage is replicated using the RAFT consensus algorithm. Every cluster node has access to this distributed
+storage, but in terms of RAFT most of the members are learners and only a small number are voters. Typically, a small subset of the
+cluster's nodes makes up the raft-group of voters (the number of voters must be odd, usually 3 or 5). The remaining nodes listen to
+metadata updates but do not vote (in RAFT terms, they are called learners).
+
+## Threading model
+
+There is a single thread dedicated to notifying watchers. It is created in KeyValueStorage by the following executor:
+
+```java
+private ExecutorService watchExecutor = Executors.newSingleThreadExecutor(new NamedThreadFactory(
+        NamedThreadFactory.threadPrefix(nodeName, "metastorage-watch-executor"), LOG));
+```
+
+Additionally, the storage internally uses an executor with two threads for creating snapshots:
+
+```java
+private ExecutorService snapshotExecutor = Executors.newFixedThreadPool(2, new NamedThreadFactory(
+        NamedThreadFactory.threadPrefix(nodeName, "metastorage-snapshot-executor"), LOG));
+```
+
+To make it clear which node a particular thread belongs to, each executor includes the node name in its thread prefix.
+
+### Interface methods
+
+A number of Metastorage Manager operations, such as get(), getAll(), and invoke(), return futures. Those futures are completed when
+the corresponding RAFT command is completed in the Metastorage group. Although the entire replication process took place in the RAFT
+threads, this result appears in the RAFT client executor with prefix <NODE_NAME>%Raft-Group-Client. See RAFT module for more information
+about its threading model.
+
+Although some methods return futures, they are often run synchronously. Futures are dependent on asynchronous Metastorage initialization,

Review Comment:
   shouldn't we specify these methods? "some methods" sounds too uncertain
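
For reference, the future-returning operations named earlier in the diff are get(), getAll(), invoke(), and the like. The pattern that makes them "often run synchronously" can be sketched with plain CompletableFuture; the class and method names below are illustrative, not the real Metastorage code:

```java
import java.util.concurrent.CompletableFuture;

// Illustrative pattern only: every operation chains onto an initialization
// future. Before startup finishes, callers receive a pending future; after
// it, the chained stage executes immediately on the calling thread, so the
// call behaves synchronously despite its asynchronous signature.
public class MetaServiceSketch {
    private final CompletableFuture<Void> initFuture = new CompletableFuture<>();

    CompletableFuture<String> get(String key) {
        return initFuture.thenApply(unused -> "value-for-" + key);
    }

    void onStartupComplete() {
        initFuture.complete(null);
    }

    public static void main(String[] args) {
        MetaServiceSketch service = new MetaServiceSketch();

        // Called before startup completes: the returned future is pending.
        CompletableFuture<String> early = service.get("k");
        System.out.println("before init, done = " + early.isDone());

        service.onStartupComplete();

        // Called after startup: thenApply runs right away on this thread,
        // so the future is already complete when it is returned.
        System.out.println("after init: " + service.get("k").join());
    }
}
```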



##########
modules/metastorage/README.md:
##########
@@ -0,0 +1,45 @@
+# Metastorage
+
+The module for storing and accessing metadata. It is linked to one other module:
+
+- metastorage-api - contains the classes that other components use to access the metastorage service.
+
+To avoid data loss, the storage is replicated using the RAFT consensus algorithm. Every cluster node has access to this distributed
+storage, but in terms of RAFT most of the members are learners and only a small number are voters. Typically, a small subset of the
+cluster's nodes makes up the raft-group of voters (the number of voters must be odd, usually 3 or 5). The remaining nodes listen to
+metadata updates but do not vote (in RAFT terms, they are called learners).
+
+## Threading model
+
+There is a single thread dedicated to notifying watchers. It is created in KeyValueStorage by the following executor:
+
+```java
+private ExecutorService watchExecutor = Executors.newSingleThreadExecutor(new NamedThreadFactory(
+        NamedThreadFactory.threadPrefix(nodeName, "metastorage-watch-executor"), LOG));
+```
+
+Additionally, the storage internally uses an executor with two threads for creating snapshots:
+
+```java
+private ExecutorService snapshotExecutor = Executors.newFixedThreadPool(2, new NamedThreadFactory(
+        NamedThreadFactory.threadPrefix(nodeName, "metastorage-snapshot-executor"), LOG));
+```
+
+To make it clear which node a particular thread belongs to, each executor includes the node name in its thread prefix.
+
+### Interface methods
+
+A number of Metastorage Manager operations, such as get(), getAll(), and invoke(), return futures. Those futures are completed when
+the corresponding RAFT command is completed in the Metastorage group. Although the entire replication process took place in the RAFT
+threads, this result appears in the RAFT client executor with prefix <NODE_NAME>%Raft-Group-Client. See RAFT module for more information
+about its threading model.
+
+Although some methods return futures, they are often run synchronously. Futures are dependent on asynchronous Metastorage initialization,
+since starting a RAFT group takes time. In other words, those futures may be completed in the RAFT client executor thread (prefix
+<NODE_NAME>%Raft-Group-Client).
+
+### Using common pool
+
+The component uses common ForkJoinPool on start (in fact, it is not necessary, because all components starts asynchronously in the same

Review Comment:
   ```suggestion
   The component uses common ForkJoinPool on start (in fact, it is not necessary, because all components start asynchronously in the same
   ```
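
As background for the sentence being corrected: when no executor is supplied, CompletableFuture async stages run on ForkJoinPool.commonPool(), which is what "all components start asynchronously in the same ForkJoinPool" refers to. A small JDK-only sketch contrasting the default pool with a dedicated one (the thread name is illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StartupPoolDemo {
    public static void main(String[] args) {
        // No executor supplied: the task runs on ForkJoinPool.commonPool()
        // (assuming common-pool parallelism of at least two; otherwise the
        // JDK falls back to a new thread per task).
        CompletableFuture.runAsync(() ->
                System.out.println("default: " + Thread.currentThread().getName())).join();

        // A dedicated executor keeps component startup off the shared pool.
        ExecutorService startPool = Executors.newSingleThreadExecutor(
                r -> new Thread(r, "component-start-0"));
        CompletableFuture.runAsync(() ->
                System.out.println("dedicated: " + Thread.currentThread().getName()), startPool).join();
        startPool.shutdown();
    }
}
```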



##########
modules/metastorage/README.md:
##########
@@ -0,0 +1,45 @@
+# Metastorage
+
+The module for storing and accessing metadata. It is linked to one other module:
+
+- metastorage-api - contains the classes that other components use to access the metastorage service.
+
+To avoid data loss, the storage is replicated using the RAFT consensus algorithm. Every cluster node has access to this distributed
+storage, but in terms of RAFT most of the members are learners and only a small number are voters. Typically, a small subset of the
+cluster's nodes makes up the raft-group of voters (the number of voters must be odd, usually 3 or 5). The remaining nodes listen to
+metadata updates but do not vote (in RAFT terms, they are called learners).
+
+## Threading model
+
+There is a single thread dedicated to notifying watchers. It is created in KeyValueStorage by the following executor:
+
+```java
+private ExecutorService watchExecutor = Executors.newSingleThreadExecutor(new NamedThreadFactory(
+        NamedThreadFactory.threadPrefix(nodeName, "metastorage-watch-executor"), LOG));
+```
+
+Additionally, the storage internally uses an executor with two threads for creating snapshots:
+
+```java
+private ExecutorService snapshotExecutor = Executors.newFixedThreadPool(2, new NamedThreadFactory(
+        NamedThreadFactory.threadPrefix(nodeName, "metastorage-snapshot-executor"), LOG));
+```
+
+To make it clear which node a particular thread belongs to, each executor includes the node name in its thread prefix.
+
+### Interface methods
+
+A number of Metastorage Manager operations, such as get(), getAll(), and invoke(), return futures. Those futures are completed when
+the corresponding RAFT command is completed in the Metastorage group. Although the entire replication process took place in the RAFT
+threads, this result appears in the RAFT client executor with prefix <NODE_NAME>%Raft-Group-Client. See RAFT module for more information
+about its threading model.
+
+Although some methods return futures, they are often run synchronously. Futures are dependent on asynchronous Metastorage initialization,
+since starting a RAFT group takes time. In other words, those futures may be completed in the RAFT client executor thread (prefix
+<NODE_NAME>%Raft-Group-Client).
+
+### Using common pool
+
+The component uses common ForkJoinPool on start (in fact, it is not necessary, because all components starts asynchronously in the same
+ForkJoinPool). The using of the common pool is dangerous, because the pool can be busy by another threads that hosted on the same JVM 

Review Comment:
   ```suggestion
   ForkJoinPool). The using of the common pool is dangerous, because the pool can be occupied by another threads that hosted on the same JVM 
   ```
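
The hazard described above is easy to reproduce with the JDK alone: saturate the common pool with blocking tasks, the way an unrelated component hosted on the same JVM might, and a latecomer must wait for a free worker. A self-contained sketch (timings are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class BusyCommonPoolDemo {
    public static void main(String[] args) {
        // Assumes common-pool parallelism >= 2; on a single-CPU JVM the JDK
        // uses a thread-per-task fallback and the delay below disappears.
        int parallelism = ForkJoinPool.commonPool().getParallelism();

        // Saturate the shared pool with blocking work, as another component
        // hosted on the same JVM might do.
        for (int i = 0; i < parallelism; i++) {
            CompletableFuture.runAsync(() -> sleep(2_000));
        }

        long start = System.nanoTime();
        // This latecomer has to wait for a free common-pool worker.
        CompletableFuture.runAsync(() -> { }).join();
        System.out.printf("waited ~%d ms for the common pool%n",
                TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```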



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
