Posted to reviews@iotdb.apache.org by GitBox <gi...@apache.org> on 2021/10/05 09:30:53 UTC

[GitHub] [iotdb] LebronAl opened a new pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

LebronAl opened a new pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079


   See [IOTDB-1639](https://issues.apache.org/jira/browse/IOTDB-1639).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   Coverage report: https://coveralls.io/builds/43601374
   
   Coverage increased (+0.01%) to 67.457% when pulling **0d399cec91919541811ac7891a1de6c47a68c5c2 on cluster-** into **e4b7f64deb54b3fc186424cf969a68bff23a6fc7 on master**.
   





[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-948446247


   SonarCloud Quality Gate failed.
   
   - 1 Bug (rating C): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG
   - 0 Vulnerabilities (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY
   - 0 Security Hotspots (rating A): https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT
   - 368 Code Smells (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL
   - 41.8% Coverage on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list
   - 2.5% Duplication on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list
   
   





[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   Coverage report: https://coveralls.io/builds/43663165
   
   Coverage increased (+0.02%) to 67.271% when pulling **f4b9e99d8d74d2bc826c4c4403462b93ef63acbe on cluster-** into **1dcc82aad34bfc0820ac28f6a2e70757fef7d219 on master**.
   





[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-949246737


   SonarCloud Quality Gate failed.
   
   - 0 Bugs (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG
   - 0 Vulnerabilities (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY
   - 0 Security Hotspots (rating A): https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT
   - 364 Code Smells (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL
   - 41.2% Coverage on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list
   - 2.5% Duplication on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list
   
   





[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952134463


   SonarCloud Quality Gate failed.
   
   - 0 Bugs (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG
   - 0 Vulnerabilities (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY
   - 0 Security Hotspots (rating A): https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT
   - 365 Code Smells (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL
   - 41.3% Coverage on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list
   - 2.5% Duplication on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list
   
   





[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   Coverage report: https://coveralls.io/builds/43803596
   
   Coverage increased (+0.002%) to 67.05% when pulling **57a73f23517fe993c493644b71de2ccc19219b09 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   





[GitHub] [iotdb] jixuan1989 commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
jixuan1989 commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952567799


   I see that both write and query show better performance.
   Could we have some analysis of the reason?





[GitHub] [iotdb] chengjianyun commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952504612


   # Performance Test
   ## Environment 
   3 nodes, 3 replicas, multi-factor=1
   
   ## Benchmark config
   > CLIENT_NUMBER=20
   > GROUP_NUMBER=20
   > DEVICE_NUMBER=100000
   > SENSOR_NUMBER=10
   > BATCH_SIZE=100
   > LOOP=100
   
   
   ## cluster- branch
   commit id: 6541b80
   ### write 
   ![cluster-write-2](https://user-images.githubusercontent.com/6150814/138994315-7b1b676d-3656-458e-81d2-2be81a90baf7.jpg)
   
   ### query
   ![cluster-read-2-1-1](https://user-images.githubusercontent.com/6150814/138994743-64a69f02-acc8-434e-90fa-da95c4e94329.jpg)
   
   ## master branch
   commit id: 87e1ae4
   ### write
   ![master-write](https://user-images.githubusercontent.com/6150814/138994534-cf5f8ab3-4d7c-468c-9fde-7e020969a19c.jpg)
   
   ### query
   ![master-read](https://user-images.githubusercontent.com/6150814/138994478-d9cb5484-2055-4c9d-8192-8ba163d37e42.jpg)
   





[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737203575



##########
File path: server/src/test/java/org/apache/iotdb/db/integration/IoTDBCheckConfigIT.java
##########
@@ -67,15 +67,15 @@ public void setUp() {
     EnvironmentUtils.closeStatMonitor();
     EnvironmentUtils.envSetUp();
 
-    final SecurityManager securityManager =
-        new SecurityManager() {
-          public void checkPermission(Permission permission) {
-            if (permission.getName().startsWith("exitVM")) {
-              throw new AccessControlException("Wrong system config");
-            }
-          }
-        };
-    System.setSecurityManager(securityManager);
+    //    final SecurityManager securityManager =
+    //        new SecurityManager() {
+    //          public void checkPermission(Permission permission) {
+    //            if (permission.getName().startsWith("exitVM")) {
+    //              throw new AccessControlException("Wrong system config");
+    //            }
+    //          }
+    //        };
+    //    System.setSecurityManager(securityManager);

Review comment:
       removed







[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737210184



##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/log/applier/DataLogApplierTest.java
##########
@@ -179,75 +180,101 @@ public void setUp()
     IoTDBDescriptor.getInstance().getConfig().setEnablePartialInsert(false);
     isPartitionEnabled = IoTDBDescriptor.getInstance().getConfig().isEnablePartition();
     IoTDBDescriptor.getInstance().getConfig().setEnablePartition(true);
-    testMetaGroupMember.setClientProvider(
-        new DataClientProvider(new Factory()) {
-          @Override
-          public AsyncDataClient getAsyncDataClient(Node node, int timeout) throws IOException {
-            return new AsyncDataClient(null, null, node, null) {
+    // TODO fixme: restore normal provider
+    ClusterIoTDB.getInstance()
+        .setClientManager(
+            new IClientManager() {

Review comment:
       done







[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738899513



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/log/LogDispatcher.java
##########
@@ -66,16 +65,19 @@
   private RaftMember member;
   private boolean useBatchInLogCatchUp =
       ClusterDescriptor.getInstance().getConfig().isUseBatchInLogCatchUp();
+  // each follower has a queue and a dispatch thread is attached in executorService.
   private List<BlockingQueue<SendLogRequest>> nodeLogQueues = new ArrayList<>();
   private ExecutorService executorService;
+
+  // TODO we have no way to close this pool. should change later.
   private static ExecutorService serializationService =
-      Executors.newFixedThreadPool(
-          Runtime.getRuntime().availableProcessors(),
-          new ThreadFactoryBuilder().setDaemon(true).setNameFormat("DispatcherEncoder-%d").build());
+      IoTDBThreadPoolFactory.newFixedThreadPoolWithDaemonThread(
+          Runtime.getRuntime().availableProcessors(), "DispatcherEncoder");
 
   public LogDispatcher(RaftMember member) {
     this.member = member;
-    executorService = Executors.newCachedThreadPool();
+    executorService =
+        IoTDBThreadPoolFactory.newCachedThreadPool("LogDispatcher-" + member.getName());

Review comment:
       I didn't understand. Could you please clarify? 
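
   For context on the pool change in the hunk above: the diff swaps raw `Executors` calls for pools obtained from `IoTDBThreadPoolFactory`, presumably so every pool gets a recognizable thread-name prefix and, for the encoder pool, daemon threads. Below is a minimal sketch of what such a named daemon pool amounts to in plain JDK terms; the class and method names are illustrative only and are not IoTDB APIs.

```java
// A sketch only, assuming newFixedThreadPoolWithDaemonThread(n, "DispatcherEncoder") is
// roughly equivalent to a JDK fixed pool built with a naming, daemon-setting ThreadFactory.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedDaemonPoolSketch {

  static ExecutorService newFixedDaemonPool(int threads, String namePrefix) {
    ThreadFactory factory = new ThreadFactory() {
      private final AtomicInteger counter = new AtomicInteger();

      @Override
      public Thread newThread(Runnable r) {
        // name threads so they are identifiable in thread dumps
        Thread t = new Thread(r, namePrefix + "-" + counter.getAndIncrement());
        t.setDaemon(true); // daemon threads do not keep the JVM alive on shutdown
        return t;
      }
    };
    return Executors.newFixedThreadPool(threads, factory);
  }

  public static void main(String[] args) {
    ExecutorService pool =
        newFixedDaemonPool(Runtime.getRuntime().availableProcessors(), "DispatcherEncoder");
    pool.submit(() -> System.out.println("running on " + Thread.currentThread().getName()));
    pool.shutdown();
  }
}
```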







[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   Coverage report: https://coveralls.io/builds/43868069
   
   Coverage decreased (-0.04%) to 67.006% when pulling **c3ee7b66ebd5548c3ecc06a346bd1ce92148f1d3 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   





[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-958629531


   SonarCloud Quality Gate failed.
   
   - 0 Bugs (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG
   - 0 Vulnerabilities (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY
   - 0 Security Hotspots (rating A): https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT
   - 210 Code Smells (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL
   - 42.7% Coverage on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list
   - 2.1% Duplication on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list
   
   





[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-956059060


   SonarCloud Quality Gate failed.
   
   - 0 Bugs (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG
   - 0 Vulnerabilities (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY
   - 0 Security Hotspots (rating A): https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT
   - 210 Code Smells (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL
   - 42.7% Coverage on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list
   - 2.1% Duplication on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list
   
   





[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-962799582


   SonarCloud Quality Gate failed.
   
   - 1 Bug (rating C): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG
   - 0 Vulnerabilities (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY
   - 0 Security Hotspots (rating A): https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT
   - 197 Code Smells (rating A): https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL
   - 41.8% Coverage on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list
   - 2.0% Duplication on new code: https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list
   
   





[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   Coverage report: https://coveralls.io/builds/43330827
   
   Coverage decreased (-0.01%) to 67.767% when pulling **5282fb9ca9c5d11431e1eb90d3abdbd1d71c0554 on cluster-** into **cbbdc6caf51660e5817a4e9c854831d820315b72 on master**.
   





[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r736708499



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherit the IoTDB instance, as that may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can
+  // remove this field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances that do not belong to any
+  // DataGroup.
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine, it should follow the singleton pattern;
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the config of iotdb, and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    cluster.initLocalEngines();
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto-create schema is taken over by the cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO duplicate code, consider solving it later
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position as "StorageEngine" in the
+    // cluster module.

Review comment:
       OK
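
   As a quick worked example of the seed/quorum check in the `clusterConfigCheck()` hunk quoted above (the numbers below are illustrative, not the PR's defaults): with a replication number of 3, the quorum is `3 / 2 + 1 = 2` by integer division, so at least two distinct seed nodes must be configured or startup is refused.

```java
// Illustrative only: mirrors the quorum arithmetic shown in clusterConfigCheck() above.
public class QuorumCheckExample {
  public static void main(String[] args) {
    int replicationNum = 3; // assumed value for the example
    int seedNodeCount = 1;  // fewer seeds than the quorum
    int quorum = replicationNum / 2 + 1; // integer division: 3 / 2 + 1 = 2
    if (seedNodeCount < quorum) {
      System.out.printf(
          "Seed number less than quorum, seed number: %s, quorum: %s.%n", seedNodeCount, quorum);
    } else {
      System.out.println("Startup check passed.");
    }
  }
}
```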







[GitHub] [iotdb] neuyilan commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
neuyilan commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r735503715



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDBMBean.java
##########
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+// we do not inherit the IoTDB instance, as that may break the singleton pattern of IoTDB.
+public interface ClusterIoTDBMBean {
+  /** @return true only if the log degree is DEBUG and the report is enabled */
+  boolean startRaftInfoReport();
+

Review comment:
       I think the description should be modified to match the code implementation, e.g.:
   "try to enable the raft report; if the log level is lower than debug, return true, otherwise return false"
   
   BTW, I cannot find where this method is used.
   
   

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw exception if the client can not be get. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;

Review comment:
       Should the `MetaGroupMember` class be renamed to `MetaGroupEngine`, to match the field name `metaGroupEngine`?

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw exception if the client can not be get. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses a individual registerManager with its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine,it should be singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();

Review comment:
       ```suggestion
         logger.error("Failed to check cluster config.", e);
         stop();
         return;
   ```

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw exception if the client can not be get. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses a individual registerManager with its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine,it should be singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check config of iotdb,and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    cluster.initLocalEngines();
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto create schema is took over by cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO duplicate code,consider to solve it later
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position with "StorageEngine" in the

Review comment:
       Why is the `stopRaftInfoReport` method called first here?

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw exception if the client can not be get. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses a individual registerManager with its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+

Review comment:
       For code style, it's better to keep the form of the comments consistent.
   Since `mvn spotless` automatically re-wraps comments with `/* */` when they exceed one line, I prefer to use `/* */` block comments when documenting attributes or fields.
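   To make the comparison concrete, a small hypothetical example (class and field names are illustrative, not from the PR) of the two comment forms:
   
   ```java
   // Illustrative only: contrasting line comments with a block comment on fields.
   public class CommentStyleExample {
   
     // consecutive line comments like these are kept as separate lines by the
     // formatter and have to be re-balanced by hand when the text changes
     private Object clientManager;
   
     /**
      * A block comment is re-wrapped automatically by spotless when it grows past one
      * line, which keeps field documentation consistent across the file.
      */
     private Object reportThread;
   }
   ```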

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw exception if the client can not be get. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses a individual registerManager with its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine,it should be singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check config of iotdb,and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    cluster.initLocalEngines();
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto create schema is took over by cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO duplicate code,consider to solve it later
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position with "StorageEngine" in the
+    // cluster moduel.

Review comment:
       ```suggestion
       // cluster module.
   ```

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw exception if the client can not be get. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses a individual registerManager with its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine,it should be singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check config of iotdb,and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    cluster.initLocalEngines();
+

Review comment:
       I think we should check the result of `initLocalEngines()`: the initialization may fail, and in that case the startup should not continue.
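       A minimal sketch of one option, assuming `initLocalEngines()` is changed to return a boolean (the signature is an assumption for illustration; throwing a `StartupException` to the caller would work equally well):

       ```java
       // Hypothetical sketch: let initLocalEngines() signal failure instead of continuing silently.
       public boolean initLocalEngines() {
         // ... build thisNode, coordinator, metaGroupEngine, dataGroupEngine, clientManager as above ...
         initTasks();
         try {
           // we need to check config after initLocalEngines.
           startServerCheck();
         } catch (StartupException e) {
           logger.error("Failed to check cluster config.", e);
           stop();
           return false;
         }
         JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
         return true;
       }

       // In main(), abort the startup when initialization fails:
       if (!cluster.initLocalEngines()) {
         logger.error("Failed to initialize the local engines, stopping the startup.");
         return;
       }
       ```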

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// We do not inherit the IoTDB instance, as that may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can
+  // remove this field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses its own registerManager, separate from the one of its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool: every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status
+   * of all Raft members on this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // Currently, clientManager is only used by instances that do not belong to any
+  // DataGroup.
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better modification in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);

Review comment:
       Can we create the report thread only when reporting is enabled? This would avoid creating threads that are not needed by default.
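       A possible sketch: gate the creation of the report thread on the same condition used in `generateNodeReport()` (using `logger.isDebugEnabled()` and `allowReport` here is an assumption; a dedicated config flag may be preferable). Shutdown code would then need a null check on `reportThread`:

       ```java
       private void initTasks() {
         // Only create the report thread when a node report can actually be emitted (assumed condition).
         if (logger.isDebugEnabled() && allowReport) {
           reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
           reportThread.scheduleAtFixedRate(
               this::generateNodeReport,
               ClusterConstant.REPORT_INTERVAL_SEC,
               ClusterConstant.REPORT_INTERVAL_SEC,
               TimeUnit.SECONDS);
         }
         hardLinkCleanerThread =
             IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
         hardLinkCleanerThread.scheduleAtFixedRate(
             new HardLinkCleaner(),
             ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
             ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
             TimeUnit.SECONDS);
       }
       ```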




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952846892


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [365 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.2% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.6%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.6%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.6% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-948446247


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [1 Bug](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [368 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.8%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.8%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.8% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-948522212


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [365 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.8%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.8%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.8% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-947301528


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [21 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [361 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.8%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.8%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.8% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.3%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.3%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.3% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43658315/badge)](https://coveralls.io/builds/43658315)
   
   Coverage decreased (-0.2%) to 67.248% when pulling **5b19d94f058e8c96968f555bde2fb078605c8eda on cluster-** into **e4b7f64deb54b3fc186424cf969a68bff23a6fc7 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43738063/badge)](https://coveralls.io/builds/43738063)
   
   Coverage decreased (-0.006%) to 67.042% when pulling **6541b80b64b772a90f8db4d8fc46198c776860bb on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-960594864


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-962403112


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-958629531


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-947286295


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [21 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [361 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.8%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.8%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.8% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.3%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.3%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.3% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737958552



##########
File path: .github/workflows/client-go.yml
##########
@@ -13,6 +13,8 @@ on:
     branches:
       - master
       - 'rel/*'
+      #remove me when cluster- branch is merged

Review comment:
       fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43840522/badge)](https://coveralls.io/builds/43840522)
   
   Coverage decreased (-0.06%) to 66.992% when pulling **8724db97e968c0025d3b77f55ad988213d621170 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952575510


   > I see both the write and query have a better performance. Could we have some analysis about the reason?
   
   Of course. Write performance should be on par with master, given the small difference. For query, I double-checked that the test and environment configs are the same. I will try to analyze what causes the improvement in query performance.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952575510


   > I see both the write and query have a better performance. Could we have some analysis about the reason?
   
   Of course. Write performance should be on par with master, given the small difference. For query, I double-checked that the test and environment configs are the same, and I ran the benchmark on master twice with the same result. I will try to analyze what causes the improvement in query performance.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-956059060


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43493667/badge)](https://coveralls.io/builds/43493667)
   
   Coverage decreased (-0.005%) to 67.739% when pulling **2f15139530ebec2a675ab44844b484d58aa67e83 on cluster-** into **c662a3e86de46aecc56236f0c2b693a2c479f38d on master**.
   





[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r740726480



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,685 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton mode of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  /**
+   * TODO: fix me: better to throw an exception if the client cannot be obtained. Then we can
+   * remove this field.
+   */
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances who do not belong to any DataGroup or
+   * MetaGroup
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better approach in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);

Review comment:
       The design diagram for the new classes is shown [here](https://github.com/apache/iotdb/issues/3881). Since multiple dataMembers exist on a node, we abstracted a dataEngine to manage them. But since a node has only one metaMember, I don't think it is necessary to abstract a metaEngine that manages a single metaMember. I have changed every metaEngine reference in ClusterIoTDB back to metaMember, so the metaEngine name no longer appears.
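   To make the asymmetry concrete, a minimal sketch of the resulting ownership is below. Every type name in it is an illustrative stand-in rather than the PR's real code (the real classes live under org.apache.iotdb.cluster.*):

   ```java
   // Sketch only: one meta member per node, one engine owning the node's many data members.
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;

   class MetaMember {}

   class DataMember {}

   class GroupHeader {}

   /** One node holds exactly one meta member, so no extra "meta engine" wrapper is needed. */
   class NodeLayoutSketch {
     final MetaMember metaMember = new MetaMember();
     final DataEngineSketch dataEngine = new DataEngineSketch(); // manages N data members
   }

   class DataEngineSketch {
     // a node may join several data groups, hence a collection keyed by the group header
     final Map<GroupHeader, DataMember> members = new ConcurrentHashMap<>();

     DataMember getOrCreate(GroupHeader header) {
       return members.computeIfAbsent(header, h -> new DataMember());
     }
   }
   ```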







[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r740726937



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,687 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton mode of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO: better to throw an exception if the client cannot be obtained. Then we can remove this field.
+  private static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances who do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  /** initialize the current node and its services */
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better approach in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+      JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+      return false;
+    }
+    return true;
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check config of iotdb, and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    if (!cluster.initLocalEngines()) {
+      logger.error("initLocalEngines error, stop process!");
+      return;
+    }
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto create schema is taken over by the cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO: duplicate code
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  /** Start as a seed node */
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position with "StorageEngine" in the
+    // cluster module.
+    // TODO: it is better to remove coordinator out of metaGroupEngine
+
+    registerManager.register(metaGroupEngine);
+    registerManager.register(dataGroupEngine);
+
+    // rpc service initialize
+    DataGroupServiceImpls dataGroupServiceImpls = new DataGroupServiceImpls();
+    if (ClusterDescriptor.getInstance().getConfig().isUseAsyncServer()) {
+      MetaAsyncService metaAsyncService = new MetaAsyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      MetaRaftService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      DataRaftService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+    } else {
+      MetaSyncService syncService = new MetaSyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initSyncedServiceImpl(syncService);
+      MetaRaftService.getInstance().initSyncedServiceImpl(syncService);
+      DataRaftService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+    }
+    // start RPC service
+    logger.info("start Meta Heartbeat RPC service... ");
+    registerManager.register(MetaRaftHeartBeatService.getInstance());
+    /* TODO: better to start the Meta RPC service until the heartbeatService has elected the leader and quorum of followers have caught up. */
+    logger.info("start Meta RPC service... ");

Review comment:
       This is basically the same logic as before; maybe we can address the TODO in a future PR. What do you think?
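   If someone does pick up that TODO later, one possible shape is to gate the Meta RPC registration on a "leader ready" signal from the heartbeat layer. This is only a sketch under that assumption; none of the names below exist in IoTDB:

   ```java
   import java.util.concurrent.CountDownLatch;
   import java.util.concurrent.TimeUnit;

   // Hypothetical ordering helper: start heartbeat first, expose the Meta RPC service only after
   // a leader is elected and a quorum of followers has caught up.
   class MetaRpcStartupSketch {
     private final CountDownLatch leaderReady = new CountDownLatch(1);

     // the raft/heartbeat layer would call this once election has finished
     void onLeaderReady() {
       leaderReady.countDown();
     }

     void registerMetaRpcWhenReady(Runnable registerMetaRpcService) throws InterruptedException {
       if (!leaderReady.await(60, TimeUnit.SECONDS)) {
         throw new IllegalStateException("no leader elected in time, refusing to expose Meta RPC");
       }
       registerMetaRpcService.run();
     }
   }
   ```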







[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-954551921


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [351 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.3%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.3%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.3% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.2% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   





[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-958629531


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   





[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-956028483


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [240 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.5% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   





[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-953725777


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [402 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.9%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.9%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.9% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   





[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737957403



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/client/async/AsyncDataClient.java
##########
@@ -58,104 +62,161 @@ public AsyncDataClient(
 
   public AsyncDataClient(
       TProtocolFactory protocolFactory,
-      TAsyncClientManager clientManager,
+      TAsyncClientManager tClientManager,
       Node node,
-      AsyncClientPool pool)
+      ClientCategory category)
       throws IOException {
     // the difference of the two clients lies in the port
     super(
         protocolFactory,
-        clientManager,
+        tClientManager,
         TNonblockingSocketWrapper.wrap(
-            node.getInternalIp(), node.getDataPort(), RaftServer.getConnectionTimeoutInMS()));
+            node.getInternalIp(),
+            ClientUtils.getPort(node, category),
+            ClusterConstant.getConnectionTimeoutInMS()));
     this.node = node;
-    this.pool = pool;
+    this.category = category;
+  }
+
+  public AsyncDataClient(
+      TProtocolFactory protocolFactory,
+      TAsyncClientManager tClientManager,
+      Node node,
+      ClientCategory category,
+      IClientManager manager)
+      throws IOException {
+    this(protocolFactory, tClientManager, node, category);
+    this.clientManager = manager;
+  }
+
+  public void close() {
+    ___transport.close();
+    ___currentMethod = null;
+  }
+
+  public boolean isValid() {
+    return ___transport != null;
+  }
+
+  /**
+   * Return self to the clientManager if it is not null. The method doesn't need to be called by
+   * the user; it is triggered once the client transport completes.
+   */
+  private void returnSelf() {
+    logger.debug("return client: ", toString());
+    if (clientManager != null) clientManager.returnAsyncClient(this, node, category);

Review comment:
       Fixed
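   For readers of the thread: the excerpt above has two small issues that the fix presumably touches: the SLF4J call has no "{}" placeholder (so the argument is never rendered), and the single-statement if is brace-less. A hedged guess at the cleaned-up method, reusing the fields from the excerpt rather than the actual committed code:

   ```java
   // Guess only; see the merged PR for the real change.
   private void returnSelf() {
     logger.debug("return client: {}", this); // placeholder added so the client is actually logged
     if (clientManager != null) {
       clientManager.returnAsyncClient(this, node, category);
     }
   }
   ```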







[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-958629531


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   





[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/44063721/badge)](https://coveralls.io/builds/44063721)
   
   Coverage increased (+0.03%) to 67.078% when pulling **9d8f695647f370a3bc096f1adf7a1d4079640b47 on cluster-** into **5e1f7809dc0ad1e21bc18f53ab0a6b2e2b30091a on master**.
   





[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738175734



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/query/fill/ClusterPreviousFill.java
##########
@@ -120,7 +121,9 @@ private TimeValuePair performPreviousFill(
     }
     CountDownLatch latch = new CountDownLatch(partitionGroups.size());
     PreviousFillHandler handler = new PreviousFillHandler(latch);
-
+    // TODO it is not suitable for register and deregister an Object to JMX to such a frequent
+    // function call.
+    // BUT is it suitable to create a thread pool for each calling??
     ExecutorService fillService = Executors.newFixedThreadPool(partitionGroups.size());

Review comment:
       The comment was left here by @jixuan1989 as a reminder to consider whether there is any optimization. It is hard to say whether the current approach is good or bad without careful performance profiling and comparison, and this is a performance optimization rather than a bug fix, so I don't think it is a good idea to do it in this PR.
   
   We could create an issue to track the work. What do you think?
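   For that tracking issue, the alternative being weighed could be sketched like this: one shared executor reused across previous-fill calls instead of a new fixed pool per invocation. The class, the pool name, and the core-count sizing are assumptions for illustration, not existing IoTDB code:

   ```java
   import java.util.List;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   // Sketch of the alternative: share one pool across calls rather than
   // Executors.newFixedThreadPool(partitionGroups.size()) per query.
   class PreviousFillPoolSketch {
     private static final ExecutorService SHARED_FILL_POOL =
         Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

     void submitFillTasks(List<Runnable> perGroupFillTasks) {
       // each partition group's fill request becomes one task on the shared pool
       perGroupFillTasks.forEach(SHARED_FILL_POOL::submit);
     }
   }
   ```

   Whether it actually wins still needs the profiling mentioned above.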







[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952712066


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [365 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.2% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.6%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.6%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.6% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   





[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737971204



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/client/ClientCategory.java
##########
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+public enum ClientCategory {
+  META("MetaClient"),
+  META_HEARTBEAT("MetaHeartbeatClient"),
+  DATA("DataClient"),
+  DATA_HEARTBEAT("DataHeartbeatClient"),
+  DATA_ASYNC_APPEND_CLIENT("DataAsyncAppendClient");

Review comment:
       This is left over from @jt2594838's testing of Thrift asynchronism, to see whether there is a performance benefit to using multiple selectors. We may test it again later to decide whether to delete it, but this PR keeps the same logic as master.
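   For anyone reading along: "multiple selectors" here means spreading async clients across several TAsyncClientManager instances, since each manager runs its own selector thread. A rough sketch of that idea follows; only TAsyncClientManager is real Thrift API, the wrapper class is hypothetical:

   ```java
   import java.io.IOException;
   import java.util.concurrent.atomic.AtomicInteger;

   import org.apache.thrift.async.TAsyncClientManager;

   // Hypothetical round-robin over several selector threads; not code from this PR.
   class SelectorPoolSketch {
     private final TAsyncClientManager[] managers;
     private final AtomicInteger next = new AtomicInteger();

     SelectorPoolSketch(int selectorCount) throws IOException {
       managers = new TAsyncClientManager[selectorCount];
       for (int i = 0; i < selectorCount; i++) {
         managers[i] = new TAsyncClientManager(); // each one owns a selector thread
       }
     }

     TAsyncClientManager nextManager() {
       // pick managers in round-robin order when constructing new async clients
       return managers[Math.floorMod(next.getAndIncrement(), managers.length)];
     }
   }
   ```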




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737973634



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/client/async/AsyncDataClient.java
##########
@@ -58,104 +62,161 @@ public AsyncDataClient(
 
   public AsyncDataClient(
       TProtocolFactory protocolFactory,
-      TAsyncClientManager clientManager,
+      TAsyncClientManager tClientManager,
       Node node,
-      AsyncClientPool pool)
+      ClientCategory category)
       throws IOException {
     // the difference of the two clients lies in the port
     super(
         protocolFactory,
-        clientManager,
+        tClientManager,
         TNonblockingSocketWrapper.wrap(
-            node.getInternalIp(), node.getDataPort(), RaftServer.getConnectionTimeoutInMS()));
+            node.getInternalIp(),
+            ClientUtils.getPort(node, category),
+            ClusterConstant.getConnectionTimeoutInMS()));
     this.node = node;
-    this.pool = pool;
+    this.category = category;
+  }
+
+  public AsyncDataClient(
+      TProtocolFactory protocolFactory,
+      TAsyncClientManager tClientManager,
+      Node node,
+      ClientCategory category,
+      IClientManager manager)
+      throws IOException {
+    this(protocolFactory, tClientManager, node, category);
+    this.clientManager = manager;
+  }
+
+  public void close() {
+    ___transport.close();
+    ___currentMethod = null;
+  }
+
+  public boolean isValid() {
+    return ___transport != null;
+  }
+
+  /**
+   * return self if clientPool is not null, the method doesn't need to call by user, it will trigger
+   * once client transport complete
+   */
+  private void returnSelf() {
+    logger.debug("return client: ", toString());
+    if (clientManager != null) clientManager.returnAsyncClient(this, node, category);
   }
 
   @Override
   public void onComplete() {
     super.onComplete();
-    // return itself to the pool if the job is done
-    if (pool != null) {
-      pool.putClient(node, this);
-      pool.onComplete(node);
-    }
+    returnSelf();
+    // TODO: active node status

Review comment:
       removed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-953462064


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [362 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.2% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.6%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.6%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.6% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-953725777


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [402 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.9%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.9%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.9% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738258905



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/server/member/MetaGroupMember.java
##########
@@ -582,8 +508,9 @@ private boolean joinCluster(Node node, StartUpStatus startUpStatus)
     } else if (resp.getRespNum() == Response.RESPONSE_AGREE) {
       logger.info("Node {} admitted this node into the cluster", node);
       ByteBuffer partitionTableBuffer = resp.partitionTableBytes;
-      acceptPartitionTable(partitionTableBuffer, true);
-      getDataClusterServer().pullSnapshots();
+      acceptVerifiedPartitionTable(partitionTableBuffer, true);
+      // this should be called in ClusterIoTDB TODO
+      // getDataGroupEngine().pullSnapshots();
       return true;

Review comment:
       The method is already called in `ClusterIoTDB`, so the stale code here has been removed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737206310



##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/server/member/MetaGroupMemberTest.java
##########
@@ -338,19 +343,20 @@ public void applyRemoveNode(RemoveNodeLog removeNodeLog) {
           }
 
           @Override
-          public DataClusterServer getDataClusterServer() {
+          public DataGroupEngine getDataGroupEngine() {
             return mockDataClusterServer
-                ? MetaGroupMemberTest.this.dataClusterServer
-                : super.getDataClusterServer();
+                ? MetaGroupMemberTest.this.dataGroupEngine
+                : ClusterIoTDB.getInstance().getDataGroupEngine();
           }
 
-          @Override
-          public DataHeartbeatServer getDataHeartbeatServer() {
-            return new DataHeartbeatServer(thisNode, dataClusterServer) {
-              @Override
-              public void start() {}
-            };
-          }
+          // TODO we remove a do-nothing DataHeartbeat here.
+          //          @Override
+          //          public DataHeartbeatServer getDataHeartbeatServer() {
+          //            return new DataHeartbeatServer(thisNode, dataGroupServiceImpls) {
+          //              @Override
+          //              public void start() {}
+          //            };
+          //          }

Review comment:
       done




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737322546



##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/integration/BaseSingleNodeTest.java
##########
@@ -44,23 +45,27 @@
   @Before
   public void setUp() throws Exception {
     initConfigs();
-    metaServer = new MetaClusterServer();
-    metaServer.start();
-    metaServer.buildCluster();
+    daemon = ClusterIoTDB.getInstance();
+    daemon.initLocalEngines();
+    DataGroupEngine.getInstance().resetFactory();
+    daemon.activeStartNodeMode();
   }
 
   @After
   public void tearDown() throws Exception {
-    metaServer.stop();
+    // TODO fixme
+    daemon.stop();

Review comment:
       Removed the comment; it was left over as a temporary marker. All UT cases now pass, so it is no longer needed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-948262197


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [20 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [361 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.9%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.9%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.9% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.3%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.3%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.3% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43659078/badge)](https://coveralls.io/builds/43659078)
   
   Coverage decreased (-0.02%) to 67.233% when pulling **638eb9dec0352f4d78de9b665f929bed3f1a2792 on cluster-** into **516bf6588d18a58deb2088524cd042f1e938cc64 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-948262197


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [20 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [361 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.9%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.9%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.9% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.3%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.3%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.3% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43435673/badge)](https://coveralls.io/builds/43435673)
   
   Coverage decreased (-0.01%) to 67.746% when pulling **326f43eb2db867e92383d40e3218c37a1fc6f007 on cluster-** into **edbb3612ae4b1615547929177048944400e48b9b on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-947286295


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [21 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [361 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.8%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.8%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.8% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.3%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.3%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.3% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43627679/badge)](https://coveralls.io/builds/43627679)
   
   Coverage decreased (-0.2%) to 67.271% when pulling **15cd94c3ce263991c54083f58405a610d9d5a753 on cluster-** into **e4b7f64deb54b3fc186424cf969a68bff23a6fc7 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43664422/badge)](https://coveralls.io/builds/43664422)
   
   Coverage increased (+0.005%) to 67.257% when pulling **3574f9fc6437f4a5144239b700ce56e5dc3c1f0b on cluster-** into **1dcc82aad34bfc0820ac28f6a2e70757fef7d219 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] jt2594838 commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
jt2594838 commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-954352103


   Much appreciation for your hard work, but there seem to be a lot of code smells, and I think we'd better not let them enter the master branch.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737205800



##########
File path: server/src/test/java/org/apache/iotdb/db/integration/IoTDBCheckConfigIT.java
##########
@@ -145,9 +140,7 @@ public void testSameTimeEncoderAfterStartService() throws Exception {
     try {
       IoTDBConfigCheck.getInstance().checkConfig();
     } catch (Throwable t) {
-      assertTrue(false);
-    } finally {
-      System.setSecurityManager(null);
+      fail("should have no configuration errors");

Review comment:
       fixed




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43785393/badge)](https://coveralls.io/builds/43785393)
   
   Coverage increased (+0.03%) to 67.08% when pulling **c76201c82c844d3d925f0aeff1b2628c672b0ca4 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] wangchao316 commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
wangchao316 commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737146801



##########
File path: cluster/pom.xml
##########
@@ -125,6 +125,10 @@
             <artifactId>powermock-api-mockito2</artifactId>
             <scope>test</scope>
         </dependency>
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-pool2</artifactId>
+        </dependency>

Review comment:
       Thanks for your contribution. Good.
   However, this dependency also needs a `<version></version>` element.

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/metadata/CMManager.java
##########
@@ -1049,11 +1050,11 @@ public void setCoordinator(Coordinator coordinator) {
           // a non-null result contains correct result even if it is empty, so query next group
           return paths;
         }
-      } catch (IOException | TException e) {
-        throw new MetadataException(e);
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         throw new MetadataException(e);
+      } catch (Exception e) {

Review comment:
       Checked exceptions are caught so that callers can recover from specific faults, which is impossible if all exceptions are caught indiscriminately.
   Please distinguish the specific exception types and catch them separately.
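
   A minimal sketch of the catch structure being asked for here, assuming the surrounding method wraps failures into a domain exception; `SpecificCatchSketch`, `LookupException`, and `queryRemoteGroup` below are placeholders, not the actual CMManager types:

       import java.io.IOException;

       // Illustrative only: catch the specific checked exceptions and wrap them, restore the
       // interrupt flag for InterruptedException, and let unexpected runtime errors propagate
       // instead of swallowing them with a blanket catch (Exception e).
       public class SpecificCatchSketch {

         static class LookupException extends Exception { // hypothetical wrapper type
           LookupException(Throwable cause) {
             super(cause);
           }
         }

         static String queryRemoteGroup() throws IOException, InterruptedException {
           return "ok"; // stand-in for the real remote call
         }

         static String getPaths() throws LookupException {
           try {
             return queryRemoteGroup();
           } catch (IOException e) { // transport/RPC failure: wrap and rethrow
             throw new LookupException(e);
           } catch (InterruptedException e) { // restore the interrupt flag before wrapping
             Thread.currentThread().interrupt();
             throw new LookupException(e);
           }
         }

         public static void main(String[] args) throws LookupException {
           System.out.println(getPaths());
         }
       }

   In the actual CMManager hunk above, the specific types were `IOException` and `TException`, wrapped into `MetadataException`.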

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/coordinator/Coordinator.java
##########
@@ -738,7 +739,7 @@ private TSStatus forwardPlan(PhysicalPlan plan, PartitionGroup group) {
         } else {
           status = forwardDataPlanSync(plan, node, group.getHeader());
         }
-      } catch (IOException e) {
+      } catch (Exception e) {
         status = StatusUtils.getStatus(StatusUtils.EXECUTE_STATEMENT_ERROR, e.getMessage());

Review comment:
       Why is the parent class `Exception` caught here? This changes which exceptions the upper layer can catch and handle.

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/client/async/AsyncDataClient.java
##########
@@ -58,104 +62,161 @@ public AsyncDataClient(
 
   public AsyncDataClient(
       TProtocolFactory protocolFactory,
-      TAsyncClientManager clientManager,
+      TAsyncClientManager tClientManager,
       Node node,
-      AsyncClientPool pool)
+      ClientCategory category)
       throws IOException {
     // the difference of the two clients lies in the port
     super(
         protocolFactory,
-        clientManager,
+        tClientManager,
         TNonblockingSocketWrapper.wrap(
-            node.getInternalIp(), node.getDataPort(), RaftServer.getConnectionTimeoutInMS()));
+            node.getInternalIp(),
+            ClientUtils.getPort(node, category),
+            ClusterConstant.getConnectionTimeoutInMS()));
     this.node = node;
-    this.pool = pool;
+    this.category = category;
+  }
+
+  public AsyncDataClient(
+      TProtocolFactory protocolFactory,
+      TAsyncClientManager tClientManager,
+      Node node,
+      ClientCategory category,
+      IClientManager manager)
+      throws IOException {
+    this(protocolFactory, tClientManager, node, category);
+    this.clientManager = manager;
+  }
+
+  public void close() {
+    ___transport.close();
+    ___currentMethod = null;
+  }
+
+  public boolean isValid() {
+    return ___transport != null;
+  }
+
+  /**
+   * return self if clientPool is not null, the method doesn't need to call by user, it will trigger
+   * once client transport complete
+   */
+  private void returnSelf() {
+    logger.debug("return client: ", toString());
+    if (clientManager != null) clientManager.returnAsyncClient(this, node, category);
   }
 
   @Override
   public void onComplete() {
     super.onComplete();
-    // return itself to the pool if the job is done
-    if (pool != null) {
-      pool.putClient(node, this);
-      pool.onComplete(node);
-    }
+    returnSelf();
+    // TODO: active node status

Review comment:
       TODO ?

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/metadata/CMManager.java
##########
@@ -1180,38 +1180,40 @@ public void setCoordinator(Coordinator coordinator) {
           }
           return partialPaths;
         }
-      } catch (IOException | TException e) {
-        throw new MetadataException(e);
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         throw new MetadataException(e);
+      } catch (Exception e) {
+        throw new MetadataException(e);

Review comment:
       the same as above.

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/log/LogDispatcher.java
##########
@@ -66,16 +65,19 @@
   private RaftMember member;
   private boolean useBatchInLogCatchUp =
       ClusterDescriptor.getInstance().getConfig().isUseBatchInLogCatchUp();
+  // each follower has a queue and a dispatch thread is attached in executorService.
   private List<BlockingQueue<SendLogRequest>> nodeLogQueues = new ArrayList<>();
   private ExecutorService executorService;
+
+  // TODO we have no way to close this pool. should change later.
   private static ExecutorService serializationService =
-      Executors.newFixedThreadPool(
-          Runtime.getRuntime().availableProcessors(),
-          new ThreadFactoryBuilder().setDaemon(true).setNameFormat("DispatcherEncoder-%d").build());
+      IoTDBThreadPoolFactory.newFixedThreadPoolWithDaemonThread(
+          Runtime.getRuntime().availableProcessors(), "DispatcherEncoder");
 
   public LogDispatcher(RaftMember member) {
     this.member = member;
-    executorService = Executors.newCachedThreadPool();
+    executorService =
+        IoTDBThreadPoolFactory.newCachedThreadPool("LogDispatcher-" + member.getName());

Review comment:
       Thread pool names are defined in the `ThreadName` class; please use the constants from there instead of ad-hoc strings.
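
       A small sketch of the convention being requested, assuming pool name prefixes live in one constants holder and are applied by a factory; `NamedPoolSketch` and the enum values below are placeholders, not the real `ThreadName` entries or `IoTDBThreadPoolFactory` methods.

       import java.util.concurrent.ExecutorService;
       import java.util.concurrent.Executors;
       import java.util.concurrent.ThreadFactory;
       import java.util.concurrent.TimeUnit;
       import java.util.concurrent.atomic.AtomicInteger;

       // Illustrative only: a central place for pool name prefixes plus a factory that applies
       // them, instead of hard-coding name strings at each call site.
       public class NamedPoolSketch {

         enum PoolName {
           LOG_DISPATCHER("LogDispatcher"),
           DISPATCHER_ENCODER("DispatcherEncoder");

           private final String prefix;

           PoolName(String prefix) {
             this.prefix = prefix;
           }

           String prefix() {
             return prefix;
           }
         }

         static ExecutorService newCachedPool(PoolName name, String suffix) {
           AtomicInteger id = new AtomicInteger();
           ThreadFactory factory =
               r -> {
                 Thread t = new Thread(r, name.prefix() + "-" + suffix + "-" + id.getAndIncrement());
                 t.setDaemon(true);
                 return t;
               };
           return Executors.newCachedThreadPool(factory);
         }

         public static void main(String[] args) throws InterruptedException {
           ExecutorService pool = newCachedPool(PoolName.LOG_DISPATCHER, "member1");
           pool.submit(() -> System.out.println(Thread.currentThread().getName()));
           pool.shutdown();
           pool.awaitTermination(1, TimeUnit.SECONDS);
         }
       }

       Keeping the prefix in one place is what referencing the `ThreadName` constants achieves in the real code.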




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737360401



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/server/member/MetaGroupMember.java
##########
@@ -326,59 +294,31 @@ public void start() {
   @Override
   void startBackGroundThreads() {
     super.startBackGroundThreads();
-    reportThread =
-        Executors.newSingleThreadScheduledExecutor(n -> new Thread(n, "NodeReportThread"));
-    hardLinkCleanerThread =
-        Executors.newSingleThreadScheduledExecutor(n -> new Thread(n, "HardLinkCleaner"));
   }
 
   /**
-   * Stop the heartbeat and catch-up thread pool, DataClusterServer, ClientServer and reportThread.
-   * Calling the method twice does not induce side effects.
+   * Stop the heartbeat and catch-up thread pool, DataClusterServer, ClusterTSServiceImpl and
+   * reportThread. Calling the method twice does not induce side effects.
    */
   @Override
   public void stop() {
     super.stop();
-    if (getDataClusterServer() != null) {
-      getDataClusterServer().stop();
-    }
-    if (getDataHeartbeatServer() != null) {
-      getDataHeartbeatServer().stop();
-    }
-    if (clientServer != null) {
-      clientServer.stop();
-    }
-    if (reportThread != null) {
-      reportThread.shutdownNow();
-      try {
-        reportThread.awaitTermination(THREAD_POLL_WAIT_TERMINATION_TIME_S, TimeUnit.SECONDS);
-      } catch (InterruptedException e) {
-        Thread.currentThread().interrupt();
-        logger.error("Unexpected interruption when waiting for reportThread to end", e);
-      }
-    }
-    if (hardLinkCleanerThread != null) {
-      hardLinkCleanerThread.shutdownNow();
-      try {
-        hardLinkCleanerThread.awaitTermination(
-            THREAD_POLL_WAIT_TERMINATION_TIME_S, TimeUnit.SECONDS);
-      } catch (InterruptedException e) {
-        Thread.currentThread().interrupt();
-        logger.error("Unexpected interruption when waiting for hardlinkCleaner to end", e);
-      }
-    }
     logger.info("{}: stopped", name);
   }
 
+  @Override
+  public ServiceType getID() {
+    return ServiceType.CLUSTER_META_ENGINE;
+  }
+
   /**
-   * Start DataClusterServer and ClientServer so this node will be able to respond to other nodes
-   * and clients.
+   * Start DataClusterServer and ClusterTSServiceImpl so this node will be able to respond to other
+   * nodes and clients.
    */
   protected void initSubServers() throws TTransportException, StartupException {
-    getDataClusterServer().start();
-    getDataHeartbeatServer().start();
-    clientServer.setCoordinator(this.coordinator);
-    clientServer.start();
+    //    getDataClusterServer().start();
+    //    getDataHeartbeatServer().start();
+    // TODO FIXME
   }

Review comment:
       Removed the empty method.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737366413



##########
File path: .github/workflows/client-go.yml
##########
@@ -13,6 +13,8 @@ on:
     branches:
       - master
       - 'rel/*'
+      #remove me when cluster- branch is merged
+      - cluster-

Review comment:
       Fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-956059060


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] mychaow commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
mychaow commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r740668533



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,687 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit from IoTDB, as that may break IoTDB's singleton mode. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO: better to throw an exception if the client cannot be obtained. Then we can remove this field.
+  private static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances who do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  /** initialize the current node and its services */
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need to be improved in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+      JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+      return false;
+    }
+    return true;
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This helps to check whether the node is in a consistent and correct state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the config of IoTDB, and set some configs for cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    if (!cluster.initLocalEngines()) {
+      logger.error("initLocalEngines error, stop process!");
+      return;
+    }
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto-create schema is taken over by the cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO: duplicate code
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  /** Start as a seed node */
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position with "StorageEngine" in the
+    // cluster module.
+    // TODO: it is better to remove coordinator out of metaGroupEngine
+
+    registerManager.register(metaGroupEngine);
+    registerManager.register(dataGroupEngine);
+
+    // rpc service initialize
+    DataGroupServiceImpls dataGroupServiceImpls = new DataGroupServiceImpls();
+    if (ClusterDescriptor.getInstance().getConfig().isUseAsyncServer()) {
+      MetaAsyncService metaAsyncService = new MetaAsyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      MetaRaftService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      DataRaftService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+    } else {
+      MetaSyncService syncService = new MetaSyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initSyncedServiceImpl(syncService);
+      MetaRaftService.getInstance().initSyncedServiceImpl(syncService);
+      DataRaftService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+    }
+    // start RPC service
+    logger.info("start Meta Heartbeat RPC service... ");
+    registerManager.register(MetaRaftHeartBeatService.getInstance());
+    /* TODO: better to delay starting the Meta RPC service until the heartbeatService has elected the leader and a quorum of followers have caught up. */
+    logger.info("start Meta RPC service... ");

Review comment:
       Would it be better to wait for a few seconds here?
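       A minimal sketch of what such a wait could look like, namely a bounded poll until a leader is
       observed, assuming a hypothetical awaitOrFail helper and a MetaGroupMember leader accessor
       (none of these names are taken from the PR):

           import java.util.function.BooleanSupplier;

           /** Hypothetical helper: poll a condition until it holds or a timeout elapses. */
           static void awaitOrFail(BooleanSupplier condition, long timeoutMs, String what) {
             long deadline = System.currentTimeMillis() + timeoutMs;
             while (!condition.getAsBoolean()) {
               if (System.currentTimeMillis() > deadline) {
                 throw new IllegalStateException(what + " did not complete within " + timeoutMs + " ms");
               }
               try {
                 Thread.sleep(1000); // re-check once per second
               } catch (InterruptedException e) {
                 Thread.currentThread().interrupt();
                 throw new IllegalStateException("interrupted while waiting for " + what, e);
               }
             }
           }

           // Possible call site before registering MetaRaftService, assuming MetaGroupMember
           // exposes a way to observe the elected leader (the accessor name is an assumption):
           // awaitOrFail(() -> metaGroupEngine.getLeader() != null, 30_000, "meta leader election");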

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,685 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  /**
+   * TODO: fix me: better to throw an exception if the client cannot be obtained. Then we can
+   * remove this field.
+   */
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool: every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status of
+   * all raft members on this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need to be improved in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);

Review comment:
       Why not change the method name?

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,687 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO: better to throw an exception if the client cannot be obtained. Then we can remove this field.
+  private static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool: every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status of
+   * all raft members on this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  /** initialize the current node and its services */
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need to be improved in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+      JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+      return false;
+    }
+    return true;
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This helps to check whether the node is in a consistent and correct state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the config of IoTDB, and set some configs for cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    if (!cluster.initLocalEngines()) {
+      logger.error("initLocalEngines error, stop process!");
+      return;
+    }
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto-create schema is taken over by the cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO: duplicate code
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  /** Start as a seed node */
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position with "StorageEngine" in the
+    // cluster module.
+    // TODO: it is better to remove coordinator out of metaGroupEngine
+
+    registerManager.register(metaGroupEngine);
+    registerManager.register(dataGroupEngine);
+
+    // rpc service initialize
+    DataGroupServiceImpls dataGroupServiceImpls = new DataGroupServiceImpls();
+    if (ClusterDescriptor.getInstance().getConfig().isUseAsyncServer()) {
+      MetaAsyncService metaAsyncService = new MetaAsyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      MetaRaftService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      DataRaftService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+    } else {
+      MetaSyncService syncService = new MetaSyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initSyncedServiceImpl(syncService);
+      MetaRaftService.getInstance().initSyncedServiceImpl(syncService);
+      DataRaftService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+    }
+    // start RPC service
+    logger.info("start Meta Heartbeat RPC service... ");
+    registerManager.register(MetaRaftHeartBeatService.getInstance());
+    /* TODO: better to delay starting the Meta RPC service until the heartbeatService has elected the leader and a quorum of followers have caught up. */
+    logger.info("start Meta RPC service... ");

Review comment:
       ok

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,685 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  /**
+   * TODO: fix me: better to throw an exception if the client cannot be obtained. Then we can
+   * remove this field.
+   */
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool: every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status of
+   * all raft members on this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need to be improved in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);

Review comment:
       ok




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] mychaow commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
mychaow commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r740758267



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,685 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  /**
+   * TODO: fix me: better to throw an exception if the client cannot be obtained. Then we can
+   * remove this field.
+   */
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool: every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status of
+   * all raft members on this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need to be improved in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);

Review comment:
       ok




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43970962/badge)](https://coveralls.io/builds/43970962)
   
   Coverage increased (+0.01%) to 66.97% when pulling **67d8875cdda0032a63c4bbc00a993391ac896fdf on cluster-** into **b05e21c078debcbb020d62ecd6d8a00a932863bd on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737200483



##########
File path: server/src/test/java/org/apache/iotdb/db/integration/IoTDBJMXTest.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.integration;
+
+import org.apache.iotdb.db.utils.EnvironmentUtils;
+import org.apache.iotdb.jdbc.Config;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+public class IoTDBJMXTest {
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+    EnvironmentUtils.envSetUp();
+    Class.forName(Config.JDBC_DRIVER_NAME);
+  }
+
+  @AfterClass
+  public static void tearDown() throws Exception {
+    EnvironmentUtils.cleanEnv();
+  }
+
+  @Test
+  public void testThreadPool() {
+    try (Connection connection =
+            DriverManager.getConnection(
+                Config.IOTDB_URL_PREFIX + "127.0.0.1:6667/", "root", "root");
+        Statement statement = connection.createStatement(); ) {
+      // make sure two storage groups have no conflicts when registering their JMX info (for their
+      // thread pools)
+      statement.execute("set storage group to root.sg1");
+      statement.execute("set storage group to root.sg2");
+      statement.execute("insert into root.sg1.d1 (time, s1) values (1, 1)");
+      statement.execute("insert into root.sg2.d1 (time, s1) values (1, 1)");
+    } catch (SQLException throwables) {
+      throwables.printStackTrace();
+    }
+  }

Review comment:
       I confirmed with @jixuan1989 that this is a test class he created at that time to register JMX for the test, but it is no longer needed, so I have deleted it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r733288235



##########
File path: .github/workflows/e2e.yml
##########
@@ -16,6 +16,8 @@ on:
       - master
       - 'rel/*'
       - cluster_new
+      #remove me when cluster- branch is merged

Review comment:
       ditto

##########
File path: .github/workflows/sonar-coveralls.yml
##########
@@ -15,6 +15,8 @@ on:
       - master
       - "rel/*"
       - cluster_new
+      #remove me when cluster- branch is merged

Review comment:
       ditto

##########
File path: .github/workflows/main-unix.yml
##########
@@ -16,6 +16,8 @@ on:
       - master
       - 'rel/*'
       - cluster_new
+      #remove me when cluster- branch is merged

Review comment:
       ditto

##########
File path: .github/workflows/main-win.yml
##########
@@ -15,6 +15,8 @@ on:
       - master
       - 'rel/*'
       - cluster_new
+      #remove me when cluster- branch is merged

Review comment:
       ditto

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDBMBean.java
##########
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+// We do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB.
+public interface ClusterIoTDBMBean {
+  /** @return true only if the log level is DEBUG and the report is enabled */
+  boolean startRaftInfoReport();
+

Review comment:
       The interface is registered with the JMX framework, so users can access these methods through a JMX console.
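       For readers unfamiliar with that mechanism, here is a minimal, self-contained sketch of how a
       standard MBean is registered and then reached from a JMX console such as jconsole. The class
       and interface names below are illustrative only; they are not the PR's ClusterIoTDB classes.

           // File: DemoReportMBean.java (a standard MBean interface must be public and follow
           // the <ImplementationClassName>MBean naming convention).
           public interface DemoReportMBean {
             boolean startRaftInfoReport();

             void stopRaftInfoReport();
           }

           // File: DemoReport.java
           import java.lang.management.ManagementFactory;

           import javax.management.MBeanServer;
           import javax.management.ObjectName;

           public class DemoReport implements DemoReportMBean {
             private volatile boolean allowReport = true;

             public boolean startRaftInfoReport() {
               allowReport = true;
               return allowReport;
             }

             public void stopRaftInfoReport() {
               allowReport = false;
             }

             public static void main(String[] args) throws Exception {
               MBeanServer server = ManagementFactory.getPlatformMBeanServer();
               // The object name follows the same "domain:type=Name" pattern as the mbeanName field above.
               ObjectName name = new ObjectName("org.apache.iotdb.cluster.service:type=DemoReport");
               server.registerMBean(new DemoReport(), name);
               // Attach a JMX console to this JVM and invoke the two operations from the MBeans tab.
               Thread.sleep(Long.MAX_VALUE);
             }
           }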

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherit from the IoTDB class, as it may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;

Review comment:
       This work is tracked in issue: https://github.com/apache/iotdb/issues/3881

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDBMBean.java
##########
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+// we do not inherit from the IoTDB class, as it may break the singleton pattern of IoTDB.
+public interface ClusterIoTDBMBean {
+  /** @return true only if the log level is DEBUG and the report is enabled */
+  boolean startRaftInfoReport();
+

Review comment:
       Fixed.

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientPoolFactoryTest.java
##########
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+import org.apache.iotdb.cluster.utils.ClientUtils;
+
+import org.apache.commons.pool2.impl.GenericKeyedObjectPool;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.net.ServerSocket;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NoSuchElementException;
+
+public class ClientPoolFactoryTest {
+  private ClusterConfig clusterConfig = ClusterDescriptor.getInstance().getConfig();
+
+  private long mockMaxWaitTimeoutMs = 10 * 1000L;
+  private int mockMaxClientPerMember = 10;
+
+  private int maxClientPerNodePerMember = clusterConfig.getMaxClientPerNodePerMember();
+  private long waitClientTimeoutMS = clusterConfig.getWaitClientTimeoutMS();
+
+  private ClientPoolFactory clientPoolFactory;
+  private MockClientManager mockClientManager;
+
+  @Before
+  public void setUp() {
+    clusterConfig.setMaxClientPerNodePerMember(mockMaxClientPerMember);
+    clusterConfig.setWaitClientTimeoutMS(mockMaxWaitTimeoutMs);
+    clientPoolFactory = new ClientPoolFactory();
+    mockClientManager =
+        new MockClientManager() {
+          @Override
+          public void returnAsyncClient(
+              RaftService.AsyncClient client, Node node, ClientCategory category) {
+            assert (client == asyncClient);

Review comment:
       Done.

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientManagerTest.java
##########
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ClientManagerTest extends BaseClientTest {
+
+  @Before
+  public void setUp() throws IOException {
+    startDataServer();
+    startMetaServer();
+    startDataHeartbeatServer();
+    startMetaHeartbeatServer();
+  }
+
+  @After
+  public void tearDown() throws IOException, InterruptedException {
+    stopDataServer();
+    stopMetaServer();
+    stopDataHeartbeatServer();
+    stopMetaHeartbeatServer();
+  }
+
+  @Test
+  public void syncClientManagersTest() throws Exception {
+    // ---------Sync cluster clients manager test------------
+    ClientManager clusterManager =
+        new ClientManager(false, ClientManager.Type.RequestForwardClient);
+    RaftService.Client syncClusterClient =
+        clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(syncClusterClient);
+    Assert.assertTrue(syncClusterClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) syncClusterClient).getNode(), defaultNode);
+    Assert.assertTrue(syncClusterClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) syncClusterClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync meta(meta heartbeat) clients manager test------------
+    ClientManager metaManager = new ClientManager(false, ClientManager.Type.MetaGroupClient);
+    RaftService.Client metaClient = metaManager.borrowSyncClient(defaultNode, ClientCategory.META);
+    Assert.assertNotNull(metaClient);
+    Assert.assertTrue(metaClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaClient).getNode(), defaultNode);
+    Assert.assertTrue(metaClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaClient).returnSelf();
+
+    RaftService.Client metaHeartClient =
+        metaManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT);
+    Assert.assertNotNull(metaHeartClient);
+    Assert.assertTrue(metaHeartClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(metaHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync data(data heartbeat) clients manager test------------
+    ClientManager dataManager = new ClientManager(false, ClientManager.Type.DataGroupClient);
+
+    RaftService.Client dataClient = dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+    Assert.assertNotNull(dataClient);
+    Assert.assertTrue(dataClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataClient).getNode(), defaultNode);
+    Assert.assertTrue(dataClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataClient).returnSelf();
+
+    RaftService.Client dataHeartClient =
+        dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT);
+    Assert.assertNotNull(dataHeartClient);
+    Assert.assertTrue(dataHeartClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(dataHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+  }
+
+  @Test
+  public void asyncClientManagersTest() throws Exception {
+    // ---------async cluster clients manager test------------
+    ClientManager clusterManager = new ClientManager(true, ClientManager.Type.RequestForwardClient);
+    RaftService.AsyncClient clusterClient =
+        clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(clusterClient);
+    Assert.assertTrue(clusterClient instanceof AsyncDataClient);
+    Assert.assertEquals(((AsyncDataClient) clusterClient).getNode(), defaultNode);
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isValid());
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isReady());

Review comment:
       Sure, let me try to do it.

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherit from the IoTDB class, as it may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine,it should be singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();

Review comment:
       Fixed.

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherit from the IoTDB class, as it may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine,it should be singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check config of iotdb,and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    cluster.initLocalEngines();
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto create schema is took over by cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO duplicate code,consider to solve it later
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position with "StorageEngine" in the

Review comment:
       In my opinion, the interfaces in ClusterIoTDBMBean only give users a way to look into the system status, and that can be enabled through the interface exposed via JMX. Is that right? @jixuan1989 

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherit from the IoTDB class, as it may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses a individual registerManager with its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine,it should be singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();

Review comment:
       Moving `JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());` into the `try` block looks clearer.
   
   What do you think?
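   
   A rough sketch of that arrangement, rearranging the lines quoted above (only a suggestion; the final shape is up to the author):
   
   ```java
   // sketch: the registration joins the startup check in the same try block,
   // so a failure in either step goes through the same stop() path
   initTasks();
   try {
     // we need to check config after initLocalEngines.
     startServerCheck();
     JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
   } catch (StartupException e) {
     logger.error("Failed to check cluster config.", e);
     stop();
   }
   ```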

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientPoolFactoryTest.java
##########
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+import org.apache.iotdb.cluster.utils.ClientUtils;
+
+import org.apache.commons.pool2.impl.GenericKeyedObjectPool;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.net.ServerSocket;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NoSuchElementException;
+
+public class ClientPoolFactoryTest {
+  private ClusterConfig clusterConfig = ClusterDescriptor.getInstance().getConfig();
+
+  private long mockMaxWaitTimeoutMs = 10 * 1000L;
+  private int mockMaxClientPerMember = 10;
+
+  private int maxClientPerNodePerMember = clusterConfig.getMaxClientPerNodePerMember();
+  private long waitClientTimeoutMS = clusterConfig.getWaitClientTimeoutMS();
+
+  private ClientPoolFactory clientPoolFactory;
+  private MockClientManager mockClientManager;
+
+  @Before
+  public void setUp() {
+    clusterConfig.setMaxClientPerNodePerMember(mockMaxClientPerMember);
+    clusterConfig.setWaitClientTimeoutMS(mockMaxWaitTimeoutMs);
+    clientPoolFactory = new ClientPoolFactory();
+    mockClientManager =
+        new MockClientManager() {
+          @Override
+          public void returnAsyncClient(
+              RaftService.AsyncClient client, Node node, ClientCategory category) {
+            assert (client == asyncClient);
+          }
+
+          @Override
+          public void returnSyncClient(
+              RaftService.Client client, Node node, ClientCategory category) {
+            Assert.assertTrue(client == syncClient);
+          }
+        };
+    clientPoolFactory.setClientManager(mockClientManager);
+  }
+
+  @After
+  public void tearDown() {
+    clusterConfig.setMaxClientPerNodePerMember(maxClientPerNodePerMember);
+    clusterConfig.setWaitClientTimeoutMS(waitClientTimeoutMS);
+  }
+
+  @Test
+  public void poolConfigTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+    Node node = constructDefaultNode();
+
+    for (int i = 0; i < mockMaxClientPerMember; i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+    }
+
+    long timeStart = System.currentTimeMillis();
+    try {
+      pool.borrowObject(node);
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof NoSuchElementException);
+    } finally {
+      Assert.assertTrue(System.currentTimeMillis() - timeStart + 10 > mockMaxWaitTimeoutMs);
+    }
+  }
+
+  @Test
+  public void poolRecycleTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+
+    Node node = constructDefaultNode();
+    List<RaftService.AsyncClient> clientList = new ArrayList<>();
+    for (int i = 0; i < pool.getMaxIdlePerKey(); i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+      clientList.add(client);
+    }
+
+    for (RaftService.AsyncClient client : clientList) {
+      pool.returnObject(node, client);
+    }
+
+    for (int i = 0; i < pool.getMaxIdlePerKey(); i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+      Assert.assertTrue(clientList.contains(client));
+    }
+  }
+
+  @Test
+  public void createAsyncDataClientTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+
+    Assert.assertEquals(pool.getMaxTotalPerKey(), mockMaxClientPerMember);
+    Assert.assertEquals(pool.getMaxWaitDuration(), Duration.ofMillis(mockMaxWaitTimeoutMs));
+
+    RaftService.AsyncClient asyncClient = null;
+
+    Node node = constructDefaultNode();
+
+    asyncClient = pool.borrowObject(node);
+    mockClientManager.setAsyncClient(asyncClient);
+    Assert.assertNotNull(asyncClient);
+    Assert.assertTrue(asyncClient instanceof AsyncDataClient);

Review comment:
       Useless code. Remove it. 

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/sync/SyncDataClientTest.java
##########
@@ -4,123 +4,107 @@
 
 package org.apache.iotdb.cluster.client.sync;
 
-import org.apache.iotdb.cluster.client.sync.SyncDataClient.FactorySync;
-import org.apache.iotdb.cluster.rpc.thrift.Node;
-import org.apache.iotdb.cluster.rpc.thrift.RaftService.Client;
-import org.apache.iotdb.rpc.TSocketWrapper;
+import org.apache.iotdb.cluster.client.BaseClientTest;
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
 
 import org.apache.thrift.protocol.TBinaryProtocol;
-import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.apache.thrift.transport.TTransportException;
+import org.junit.Assert;
+import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
-import java.net.ServerSocket;
+import java.net.SocketException;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
 
-public class SyncDataClientTest {
+public class SyncDataClientTest extends BaseClientTest {
 
-  @Test
-  public void test() throws IOException, InterruptedException {
-    Node node = new Node();
-    node.setDataPort(40010).setInternalIp("localhost").setClientIp("localhost");
-    ServerSocket serverSocket = new ServerSocket(node.getDataPort());
-    Thread listenThread =
-        new Thread(
-            () -> {
-              while (!Thread.interrupted()) {
-                try {
-                  serverSocket.accept();
-                } catch (IOException e) {
-                  return;
-                }
-              }
-            });
-    listenThread.start();
+  private TProtocolFactory protocolFactory;
+
+  @Before
+  public void setUp() {
+    protocolFactory =
+        ClusterDescriptor.getInstance().getConfig().isRpcThriftCompressionEnabled()
+            ? new TCompactProtocol.Factory()
+            : new TBinaryProtocol.Factory();
+  }
 
+  @Test
+  public void testDataClient() throws IOException, InterruptedException, TTransportException {
     try {
-      SyncClientPool syncClientPool = new SyncClientPool(new FactorySync(new Factory()));
-      SyncDataClient client;
-      client = (SyncDataClient) syncClientPool.getClient(node);
+      startDataServer();
+      SyncDataClient dataClient =
+          new SyncDataClient(protocolFactory, defaultNode, ClientCategory.DATA);
 
-      assertEquals(node, client.getNode());
+      assertEquals(
+          "SyncDataClient{node=Node(internalIp:localhost, metaPort:9003, nodeIdentifier:0, "
+              + "dataPort:40010, clientPort:0, clientIp:localhost),port=40010}",
+          dataClient.toString());
 
-      client.setTimeout(1000);
-      assertEquals(1000, client.getTimeout());
+      assertCheck(dataClient);
 
-      client.putBack();
-      Client newClient = syncClientPool.getClient(node);
-      assertEquals(client, newClient);
-      assertTrue(client.getInputProtocol().getTransport().isOpen());
+      dataClient =
+          new SyncDataClient.SyncDataClientFactory(protocolFactory, ClientCategory.DATA)
+              .makeObject(defaultNode)
+              .getObject();
 
       assertEquals(
-          "DataClient{node=ClusterNode{ internalIp='localhost', metaPort=0, nodeIdentifier=0,"
-              + " dataPort=40010, clientPort=0, clientIp='localhost'}}",
-          client.toString());
-
-      client =
-          new SyncDataClient(
-              new TBinaryProtocol(TSocketWrapper.wrap(node.getInternalIp(), node.getDataPort())));
-      // client without a belong pool will be closed after putBack()
-      client.putBack();
-      assertFalse(client.getInputProtocol().getTransport().isOpen());
+          "SyncDataClient{node=Node(internalIp:localhost, metaPort:9003, nodeIdentifier:0, "
+              + "dataPort:40010, clientPort:0, clientIp:localhost),port=40010}",
+          dataClient.toString());
+
+      assertCheck(dataClient);
+    } catch (Exception e) {
+      e.printStackTrace();

Review comment:
       Added a condition check.
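
       For illustration, a minimal sketch of what such a check could look like (my reading of the thread: the goal is to fail the test on unexpected exceptions instead of only printing the stack trace; `doClientCalls()` is a hypothetical stand-in for the client interactions quoted above):

       ```java
       import org.junit.Assert;
       import org.junit.Test;

       public class SyncDataClientTestSketch {

         @Test
         public void testDataClient() {
           try {
             doClientCalls(); // placeholder for starting the data server and exercising SyncDataClient
           } catch (Exception e) {
             e.printStackTrace();
             // the added condition: an unexpected exception now fails the test
             Assert.fail("unexpected exception: " + e.getMessage());
           }
         }

         private void doClientCalls() throws Exception {
           // stand-in for the calls shown in the diff
         }
       }
       ```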

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances that do not belong to any
+  // DataGroup.
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine, it should follow the singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the config of iotdb, and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    cluster.initLocalEngines();
+

Review comment:
       Good idea.

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;

Review comment:
       Good catch. We didn't rename the class in this PR to avoid too many conflicts when merging with master. We'll start a follow-up PR for the renaming work.

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances that do not belong to any
+  // DataGroup.
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine, it should follow the singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);

Review comment:
       Yes. I'd like to check `logger.isDebugEnabled()` here to decide whether the thread needs to be created at all. `allowReport` can be changed dynamically, so we won't check it when starting the thread. @jixuan1989
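
       A rough sketch of that idea (illustrative only, not the final code; it reuses the names from the diff above):

       ```java
       // Only schedule the node-report task when DEBUG logging is enabled;
       // the dynamically changeable allowReport flag is still checked inside
       // generateNodeReport() on every run.
       private void initReportTask() {
         if (!logger.isDebugEnabled()) {
           return; // no report thread is needed if the report would never be logged
         }
         reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
         reportThread.scheduleAtFixedRate(
             this::generateNodeReport,
             ClusterConstant.REPORT_INTERVAL_SEC,
             ClusterConstant.REPORT_INTERVAL_SEC,
             TimeUnit.SECONDS);
       }
       ```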

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientPoolFactoryTest.java
##########
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+import org.apache.iotdb.cluster.utils.ClientUtils;
+
+import org.apache.commons.pool2.impl.GenericKeyedObjectPool;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.net.ServerSocket;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NoSuchElementException;
+
+public class ClientPoolFactoryTest {
+  private ClusterConfig clusterConfig = ClusterDescriptor.getInstance().getConfig();
+
+  private long mockMaxWaitTimeoutMs = 10 * 1000L;
+  private int mockMaxClientPerMember = 10;
+
+  private int maxClientPerNodePerMember = clusterConfig.getMaxClientPerNodePerMember();
+  private long waitClientTimeoutMS = clusterConfig.getWaitClientTimeoutMS();
+
+  private ClientPoolFactory clientPoolFactory;
+  private MockClientManager mockClientManager;
+
+  @Before
+  public void setUp() {
+    clusterConfig.setMaxClientPerNodePerMember(mockMaxClientPerMember);
+    clusterConfig.setWaitClientTimeoutMS(mockMaxWaitTimeoutMs);
+    clientPoolFactory = new ClientPoolFactory();
+    mockClientManager =
+        new MockClientManager() {
+          @Override
+          public void returnAsyncClient(
+              RaftService.AsyncClient client, Node node, ClientCategory category) {
+            assert (client == asyncClient);
+          }
+
+          @Override
+          public void returnSyncClient(
+              RaftService.Client client, Node node, ClientCategory category) {
+            Assert.assertTrue(client == syncClient);
+          }
+        };
+    clientPoolFactory.setClientManager(mockClientManager);
+  }
+
+  @After
+  public void tearDown() {
+    clusterConfig.setMaxClientPerNodePerMember(maxClientPerNodePerMember);
+    clusterConfig.setWaitClientTimeoutMS(waitClientTimeoutMS);
+  }
+
+  @Test
+  public void poolConfigTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+    Node node = constructDefaultNode();
+
+    for (int i = 0; i < mockMaxClientPerMember; i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+    }
+
+    long timeStart = System.currentTimeMillis();
+    try {
+      pool.borrowObject(node);
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof NoSuchElementException);
+    } finally {
+      Assert.assertTrue(System.currentTimeMillis() - timeStart + 10 > mockMaxWaitTimeoutMs);
+    }

Review comment:
       Added a null check for the borrowed object.
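
       For example, the timeout branch could look roughly like this (a sketch only; `pool`, `node`, and `mockMaxWaitTimeoutMs` are the fields from the test above):

       ```java
       RaftService.AsyncClient extraClient = null;
       long timeStart = System.currentTimeMillis();
       try {
         // the pool is already exhausted, so this call should block and then time out
         extraClient = pool.borrowObject(node);
       } catch (NoSuchElementException e) {
         // expected when the wait for a free client times out
       }
       // the added null check: nothing may have been handed out after the timeout
       Assert.assertNull(extraClient);
       Assert.assertTrue(System.currentTimeMillis() - timeStart + 10 >= mockMaxWaitTimeoutMs);
       ```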

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances that do not belong to any
+  // DataGroup.
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine, it should follow the singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the config of iotdb, and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    cluster.initLocalEngines();
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto create schema is taken over by the cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO duplicate code,consider to solve it later
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position with "StorageEngine" in the

Review comment:
       @jixuan1989, any ideas?

##########
File path: .github/workflows/client-go.yml
##########
@@ -13,6 +13,8 @@ on:
     branches:
       - master
       - 'rel/*'
+      #remove me when cluster- branch is merged

Review comment:
       remove?

##########
File path: .github/workflows/client.yml
##########
@@ -14,6 +14,8 @@ on:
     branches:
       - master
       - "rel/*"
+      #remove me when cluster- branch is merged

Review comment:
       ditto

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw an exception if the client cannot be obtained. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances that do not belong to any
+  // DataGroup.
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine, it should follow the singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);

Review comment:
       After discussion: the log level (and therefore the result of `logger.isDebugEnabled()`) can also be updated at runtime via the JMX interface, so I think we won't fix this one.
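
       For context, with logback as the slf4j backend (which is an assumption here) the effective level of a logger can be changed while the process runs, for example through logback's JMXConfigurator, so the `isDebugEnabled()` check has to stay inside `generateNodeReport()` anyway. Roughly, illustrative only:

       ```java
       // Raise the cluster logger to DEBUG at runtime; afterwards,
       // logger.isDebugEnabled() inside generateNodeReport() returns true.
       ch.qos.logback.classic.Logger clusterLogger =
           (ch.qos.logback.classic.Logger)
               org.slf4j.LoggerFactory.getLogger(org.apache.iotdb.cluster.ClusterIoTDB.class);
       clusterLogger.setLevel(ch.qos.logback.classic.Level.DEBUG);
       ```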

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientManagerTest.java
##########
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ClientManagerTest extends BaseClientTest {
+
+  @Before
+  public void setUp() throws IOException {
+    startDataServer();
+    startMetaServer();
+    startDataHeartbeatServer();
+    startMetaHeartbeatServer();
+  }
+
+  @After
+  public void tearDown() throws IOException, InterruptedException {
+    stopDataServer();
+    stopMetaServer();
+    stopDataHeartbeatServer();
+    stopMetaHeartbeatServer();
+  }
+
+  @Test
+  public void syncClientManagersTest() throws Exception {
+    // ---------Sync cluster clients manager test------------
+    ClientManager clusterManager =
+        new ClientManager(false, ClientManager.Type.RequestForwardClient);
+    RaftService.Client syncClusterClient =
+        clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(syncClusterClient);
+    Assert.assertTrue(syncClusterClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) syncClusterClient).getNode(), defaultNode);
+    Assert.assertTrue(syncClusterClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) syncClusterClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync meta(meta heartbeat) clients manager test------------
+    ClientManager metaManager = new ClientManager(false, ClientManager.Type.MetaGroupClient);
+    RaftService.Client metaClient = metaManager.borrowSyncClient(defaultNode, ClientCategory.META);
+    Assert.assertNotNull(metaClient);
+    Assert.assertTrue(metaClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaClient).getNode(), defaultNode);
+    Assert.assertTrue(metaClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaClient).returnSelf();
+
+    RaftService.Client metaHeartClient =
+        metaManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT);
+    Assert.assertNotNull(metaHeartClient);
+    Assert.assertTrue(metaHeartClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(metaHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync data(data heartbeat) clients manager test------------
+    ClientManager dataManager = new ClientManager(false, ClientManager.Type.DataGroupClient);
+
+    RaftService.Client dataClient = dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+    Assert.assertNotNull(dataClient);
+    Assert.assertTrue(dataClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataClient).getNode(), defaultNode);
+    Assert.assertTrue(dataClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataClient).returnSelf();
+
+    RaftService.Client dataHeartClient =
+        dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT);
+    Assert.assertNotNull(dataHeartClient);
+    Assert.assertTrue(dataHeartClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(dataHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+  }
+
+  @Test
+  public void asyncClientManagersTest() throws Exception {
+    // ---------async cluster clients manager test------------
+    ClientManager clusterManager = new ClientManager(true, ClientManager.Type.RequestForwardClient);
+    RaftService.AsyncClient clusterClient =
+        clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(clusterClient);
+    Assert.assertTrue(clusterClient instanceof AsyncDataClient);
+    Assert.assertEquals(((AsyncDataClient) clusterClient).getNode(), defaultNode);
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isValid());
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isReady());

Review comment:
       There is no interface to invalidate an async client, and I think this case is already covered by verifying that every client obtained from the manager is valid. Could we skip the branch test? What do you think?

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw exception if the client can not be get. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses a individual registerManager with its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine,it should be singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check config of iotdb,and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    cluster.initLocalEngines();
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto create schema is took over by cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO duplicate code,consider to solve it later
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position with "StorageEngine" in the

Review comment:
       Won't fix.

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientPoolFactoryTest.java
##########
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+import org.apache.iotdb.cluster.utils.ClientUtils;
+
+import org.apache.commons.pool2.impl.GenericKeyedObjectPool;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.net.ServerSocket;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NoSuchElementException;
+
+public class ClientPoolFactoryTest {
+  private ClusterConfig clusterConfig = ClusterDescriptor.getInstance().getConfig();
+
+  private long mockMaxWaitTimeoutMs = 10 * 1000L;
+  private int mockMaxClientPerMember = 10;
+
+  private int maxClientPerNodePerMember = clusterConfig.getMaxClientPerNodePerMember();
+  private long waitClientTimeoutMS = clusterConfig.getWaitClientTimeoutMS();
+
+  private ClientPoolFactory clientPoolFactory;
+  private MockClientManager mockClientManager;
+
+  @Before
+  public void setUp() {
+    clusterConfig.setMaxClientPerNodePerMember(mockMaxClientPerMember);
+    clusterConfig.setWaitClientTimeoutMS(mockMaxWaitTimeoutMs);
+    clientPoolFactory = new ClientPoolFactory();
+    mockClientManager =
+        new MockClientManager() {
+          @Override
+          public void returnAsyncClient(
+              RaftService.AsyncClient client, Node node, ClientCategory category) {
+            assert (client == asyncClient);
+          }
+
+          @Override
+          public void returnSyncClient(
+              RaftService.Client client, Node node, ClientCategory category) {
+            Assert.assertTrue(client == syncClient);
+          }
+        };
+    clientPoolFactory.setClientManager(mockClientManager);
+  }
+
+  @After
+  public void tearDown() {
+    clusterConfig.setMaxClientPerNodePerMember(maxClientPerNodePerMember);
+    clusterConfig.setWaitClientTimeoutMS(waitClientTimeoutMS);
+  }
+
+  @Test
+  public void poolConfigTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+    Node node = constructDefaultNode();
+
+    for (int i = 0; i < mockMaxClientPerMember; i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+    }
+
+    long timeStart = System.currentTimeMillis();
+    try {
+      pool.borrowObject(node);
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof NoSuchElementException);
+    } finally {
+      Assert.assertTrue(System.currentTimeMillis() - timeStart + 10 > mockMaxWaitTimeoutMs);
+    }
+  }
+
+  @Test
+  public void poolRecycleTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+
+    Node node = constructDefaultNode();
+    List<RaftService.AsyncClient> clientList = new ArrayList<>();
+    for (int i = 0; i < pool.getMaxIdlePerKey(); i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+      clientList.add(client);
+    }
+
+    for (RaftService.AsyncClient client : clientList) {
+      pool.returnObject(node, client);
+    }
+
+    for (int i = 0; i < pool.getMaxIdlePerKey(); i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+      Assert.assertTrue(clientList.contains(client));
+    }
+  }

Review comment:
       Yes, the idle ones will be destroyed periodically by the eviction thread.
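
       For context, a minimal sketch of the pool settings that drive this eviction, using the commons-pool2 API; the concrete values configured in `ClientPoolFactory` may differ:

           import org.apache.commons.pool2.impl.GenericKeyedObjectPoolConfig;

           class PoolEvictionConfigSketch {
             // Illustrative values only; ClientPoolFactory may use different numbers.
             static GenericKeyedObjectPoolConfig evictionConfig() {
               GenericKeyedObjectPoolConfig config = new GenericKeyedObjectPoolConfig();
               config.setTimeBetweenEvictionRunsMillis(60_000L); // run the evictor every 60 s
               config.setMinEvictableIdleTimeMillis(600_000L); // destroy clients idle for > 10 min
               config.setTestWhileIdle(true); // also drop broken idle connections
               return config;
             }
           }

       The returned config is what gets passed to `new GenericKeyedObjectPool<>(factory, config)`, so any client that sits idle past the threshold is closed and removed on the next evictor run.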

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/integration/BaseSingleNodeTest.java
##########
@@ -44,23 +45,27 @@
   @Before
   public void setUp() throws Exception {
     initConfigs();
-    metaServer = new MetaClusterServer();
-    metaServer.start();
-    metaServer.buildCluster();
+    daemon = ClusterIoTDB.getInstance();
+    daemon.initLocalEngines();
+    DataGroupEngine.getInstance().resetFactory();
+    daemon.activeStartNodeMode();
   }
 
   @After
   public void tearDown() throws Exception {
-    metaServer.stop();
+    // TODO fixme
+    daemon.stop();

Review comment:
       Removed the comment; it was only left as a marker during development. All UT cases now pass, so it is not needed anymore.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737363330



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/server/member/MetaGroupMember.java
##########
@@ -200,37 +181,22 @@
   private PartitionTable partitionTable;
   /** router calculates the partition groups that a partitioned plan should be sent to */
   private ClusterPlanRouter router;
-  /**
-   * each node contains multiple DataGroupMembers and they are managed by a DataClusterServer acting
-   * as a broker
-   */
-  private DataClusterServer dataClusterServer;
 
-  /** each node starts a data heartbeat server to transfer heartbeat requests */
-  private DataHeartbeatServer dataHeartbeatServer;
-
-  /**
-   * an override of TSServiceImpl, which redirect JDBC and Session requests to the MetaGroupMember
-   * so they can be processed cluster-wide
-   */
-  private ClientServer clientServer;
-
-  private DataClientProvider dataClientProvider;
-
-  /**
-   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
-   * of all raft members in this node
-   */
-  private ScheduledExecutorService reportThread;
+  //  /**
+  //   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the
+  // status
+  //   * of all raft members in this node
+  //   */
+  //  private ScheduledExecutorService reportThread;

Review comment:
       Fixed




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737957643



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw exception if the client can not be get. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses a individual registerManager with its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+

Review comment:
       Fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738265018



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/server/member/MetaGroupMember.java
##########
@@ -1807,15 +1821,16 @@ public void applyRemoveNode(RemoveNodeLog removeNodeLog) {
         new Thread(
                 () -> {
                   try {
-                    Thread.sleep(RaftServer.getHeartbeatIntervalMs());
+                    Thread.sleep(ClusterConstant.getHeartbeatIntervalMs());
                   } catch (InterruptedException e) {
                     Thread.currentThread().interrupt();
                     // ignore
                   }
                   super.stop();
-                  if (clientServer != null) {
-                    clientServer.stop();
-                  }
+                  // TODO FIXME
+                  //                  if (clusterTSServiceImpl != null) {
+                  //                    clusterTSServiceImpl.stop();
+                  //                  }

Review comment:
       Done.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738251568



##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientManagerTest.java
##########
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ClientManagerTest extends BaseClientTest {
+
+  @Before
+  public void setUp() throws IOException {
+    startDataServer();
+    startMetaServer();
+    startDataHeartbeatServer();
+    startMetaHeartbeatServer();
+  }
+
+  @After
+  public void tearDown() throws IOException, InterruptedException {
+    stopDataServer();
+    stopMetaServer();
+    stopDataHeartbeatServer();
+    stopMetaHeartbeatServer();
+  }
+
+  @Test
+  public void syncClientManagersTest() throws Exception {
+    // ---------Sync cluster clients manager test------------
+    ClientManager clusterManager =
+        new ClientManager(false, ClientManager.Type.RequestForwardClient);
+    RaftService.Client syncClusterClient =
+        clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(syncClusterClient);
+    Assert.assertTrue(syncClusterClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) syncClusterClient).getNode(), defaultNode);
+    Assert.assertTrue(syncClusterClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) syncClusterClient).returnSelf();
+

Review comment:
       The recycle logic is tested by `ClientPoolFactoryTest.poolRecycleTest()`.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43838706/badge)](https://coveralls.io/builds/43838706)
   
   Coverage decreased (-0.07%) to 66.98% when pulling **99c1da3907030ff79a3509b09891dd90248f9df1 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] jt2594838 commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
jt2594838 commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r735328532



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/client/async/AsyncDataClient.java
##########
@@ -58,104 +62,161 @@ public AsyncDataClient(
 
   public AsyncDataClient(
       TProtocolFactory protocolFactory,
-      TAsyncClientManager clientManager,
+      TAsyncClientManager tClientManager,
       Node node,
-      AsyncClientPool pool)
+      ClientCategory category)
       throws IOException {
     // the difference of the two clients lies in the port
     super(
         protocolFactory,
-        clientManager,
+        tClientManager,
         TNonblockingSocketWrapper.wrap(
-            node.getInternalIp(), node.getDataPort(), RaftServer.getConnectionTimeoutInMS()));
+            node.getInternalIp(),
+            ClientUtils.getPort(node, category),
+            ClusterConstant.getConnectionTimeoutInMS()));
     this.node = node;
-    this.pool = pool;
+    this.category = category;
+  }
+
+  public AsyncDataClient(
+      TProtocolFactory protocolFactory,
+      TAsyncClientManager tClientManager,
+      Node node,
+      ClientCategory category,
+      IClientManager manager)
+      throws IOException {
+    this(protocolFactory, tClientManager, node, category);
+    this.clientManager = manager;
+  }
+
+  public void close() {
+    ___transport.close();
+    ___currentMethod = null;
+  }
+
+  public boolean isValid() {
+    return ___transport != null;
+  }
+
+  /**
+   * return self if clientPool is not null, the method doesn't need to call by user, it will trigger
+   * once client transport complete
+   */
+  private void returnSelf() {
+    logger.debug("return client: ", toString());
+    if (clientManager != null) clientManager.returnAsyncClient(this, node, category);

Review comment:
       Mind the code style here.
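
       For clarity, the flagged line with the braces the project code style expects would read:

           if (clientManager != null) {
             clientManager.returnAsyncClient(this, node, category);
           }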

##########
File path: .github/workflows/client-go.yml
##########
@@ -13,6 +13,8 @@ on:
     branches:
       - master
       - 'rel/*'
+      #remove me when cluster- branch is merged
+      - cluster-

Review comment:
       It says "remove me" here.

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientPoolFactoryTest.java
##########
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+import org.apache.iotdb.cluster.utils.ClientUtils;
+
+import org.apache.commons.pool2.impl.GenericKeyedObjectPool;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.net.ServerSocket;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NoSuchElementException;
+
+public class ClientPoolFactoryTest {
+  private ClusterConfig clusterConfig = ClusterDescriptor.getInstance().getConfig();
+
+  private long mockMaxWaitTimeoutMs = 10 * 1000L;
+  private int mockMaxClientPerMember = 10;
+
+  private int maxClientPerNodePerMember = clusterConfig.getMaxClientPerNodePerMember();
+  private long waitClientTimeoutMS = clusterConfig.getWaitClientTimeoutMS();
+
+  private ClientPoolFactory clientPoolFactory;
+  private MockClientManager mockClientManager;
+
+  @Before
+  public void setUp() {
+    clusterConfig.setMaxClientPerNodePerMember(mockMaxClientPerMember);
+    clusterConfig.setWaitClientTimeoutMS(mockMaxWaitTimeoutMs);
+    clientPoolFactory = new ClientPoolFactory();
+    mockClientManager =
+        new MockClientManager() {
+          @Override
+          public void returnAsyncClient(
+              RaftService.AsyncClient client, Node node, ClientCategory category) {
+            assert (client == asyncClient);
+          }
+
+          @Override
+          public void returnSyncClient(
+              RaftService.Client client, Node node, ClientCategory category) {
+            Assert.assertTrue(client == syncClient);
+          }
+        };
+    clientPoolFactory.setClientManager(mockClientManager);
+  }
+
+  @After
+  public void tearDown() {
+    clusterConfig.setMaxClientPerNodePerMember(maxClientPerNodePerMember);
+    clusterConfig.setWaitClientTimeoutMS(waitClientTimeoutMS);
+  }
+
+  @Test
+  public void poolConfigTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+    Node node = constructDefaultNode();
+
+    for (int i = 0; i < mockMaxClientPerMember; i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+    }
+
+    long timeStart = System.currentTimeMillis();
+    try {
+      pool.borrowObject(node);
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof NoSuchElementException);
+    } finally {
+      Assert.assertTrue(System.currentTimeMillis() - timeStart + 10 > mockMaxWaitTimeoutMs);
+    }

Review comment:
       What if there is no exception thrown?
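
       One way to make the test fail when no exception is thrown is an explicit `Assert.fail()` right after the borrow, roughly like this (a sketch against the same `pool`, `node` and `mockMaxWaitTimeoutMs` as above):

           long timeStart = System.currentTimeMillis();
           try {
             pool.borrowObject(node);
             Assert.fail("expected borrowObject to time out with NoSuchElementException");
           } catch (NoSuchElementException e) {
             // expected: the pool is exhausted, so the borrow waits out the timeout
             Assert.assertTrue(System.currentTimeMillis() - timeStart + 10 > mockMaxWaitTimeoutMs);
           }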

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/async/AsyncDataClientTest.java
##########
@@ -4,85 +4,76 @@
 
 package org.apache.iotdb.cluster.client.async;
 
-import org.apache.iotdb.cluster.client.async.AsyncDataClient.SingleManagerFactory;
-import org.apache.iotdb.cluster.common.TestUtils;
+import org.apache.iotdb.cluster.client.BaseClientTest;
+import org.apache.iotdb.cluster.client.ClientCategory;
 import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
 import org.apache.iotdb.cluster.config.ClusterDescriptor;
-import org.apache.iotdb.cluster.rpc.thrift.Node;
-import org.apache.iotdb.cluster.server.RaftServer;
-
-import org.apache.thrift.TException;
-import org.apache.thrift.async.AsyncMethodCallback;
-import org.apache.thrift.async.TAsyncClientManager;
-import org.apache.thrift.protocol.TBinaryProtocol.Factory;
-import org.apache.thrift.transport.TNonblockingSocket;
-import org.junit.After;
+
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
-import java.io.IOException;
-
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
-public class AsyncDataClientTest {
+public class AsyncDataClientTest extends BaseClientTest {
 
   private final ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
-  private boolean isAsyncServer;
+  private TProtocolFactory protocolFactory;
 
   @Before
   public void setUp() {
-    isAsyncServer = config.isUseAsyncServer();
     config.setUseAsyncServer(true);
+    protocolFactory =
+        config.isRpcThriftCompressionEnabled()
+            ? new TCompactProtocol.Factory()
+            : new TBinaryProtocol.Factory();
   }
 
-  @After
-  public void tearDown() {
-    config.setUseAsyncServer(isAsyncServer);
+  @Test
+  public void testDataClient() throws Exception {
+
+    AsyncDataClient.AsyncDataClientFactory factory =
+        new AsyncDataClient.AsyncDataClientFactory(protocolFactory, ClientCategory.DATA);
+
+    AsyncDataClient dataClient = factory.makeObject(defaultNode).getObject();
+
+    assertEquals(
+        "AsyncDataClient{node=Node(internalIp:localhost, metaPort:9003, nodeIdentifier:0, "
+            + "dataPort:40010, clientPort:0, clientIp:localhost),port=40010}",
+        dataClient.toString());
+    assertCheck(dataClient);
   }
 
   @Test
-  public void test() throws IOException, TException {
-    AsyncClientPool asyncClientPool = new AsyncClientPool(new SingleManagerFactory(new Factory()));
-    AsyncDataClient client;
-    Node node = TestUtils.getNode(0);
-    client =
-        new AsyncDataClient(
-            new Factory(),
-            new TAsyncClientManager(),
-            new TNonblockingSocket(
-                node.getInternalIp(), node.getDataPort(), RaftServer.getConnectionTimeoutInMS()));
-    assertTrue(client.isReady());
-
-    client = (AsyncDataClient) asyncClientPool.getClient(TestUtils.getNode(0));
-
-    assertEquals(TestUtils.getNode(0), client.getNode());
-
-    client.matchTerm(
-        0,
-        0,
-        TestUtils.getRaftNode(0, 0),
-        new AsyncMethodCallback<Boolean>() {
-          @Override
-          public void onComplete(Boolean aBoolean) {
-            // do nothing
-          }
-
-          @Override
-          public void onError(Exception e) {
-            // do nothing
-          }
-        });
-    assertFalse(client.isReady());
-
-    client.onError(new Exception());
-    assertNull(client.getCurrMethod());
-    assertFalse(client.isReady());
+  public void testMetaHeartbeatClient() throws Exception {
+
+    AsyncDataClient.AsyncDataClientFactory factory =
+        new AsyncDataClient.AsyncDataClientFactory(protocolFactory, ClientCategory.DATA_HEARTBEAT);
+
+    AsyncDataClient dataClient = factory.makeObject(defaultNode).getObject();
 
     assertEquals(
-        "DataClient{node=ClusterNode{ internalIp='192.168.0.0', metaPort=9003, nodeIdentifier=0, dataPort=40010, clientPort=6667, clientIp='0.0.0.0'}}",
-        client.toString());
+        "AsyncDataHeartbeatClient{node=Node(internalIp:localhost, metaPort:9003, nodeIdentifier:0, "
+            + "dataPort:40010, clientPort:0, clientIp:localhost),port=40011}",
+        dataClient.toString());
+    assertCheck(dataClient);
+  }

Review comment:
       The test name is inconsistent with its content.

##########
File path: server/src/test/java/org/apache/iotdb/db/utils/EnvironmentUtils.java
##########
@@ -241,7 +241,9 @@ public static void cleanAllDir() throws IOException {
   }
 
   public static void cleanDir(String dir) throws IOException {
-    FileUtils.deleteDirectory(new File(dir));
+    synchronized (EnvironmentUtils.class) {
+      FileUtils.deleteDirectory(new File(dir));
+    }
   }

Review comment:
       How will `cleanDir` be called concurrently?

##########
File path: server/src/test/java/org/apache/iotdb/db/integration/IoTDBJMXTest.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.integration;
+
+import org.apache.iotdb.db.utils.EnvironmentUtils;
+import org.apache.iotdb.jdbc.Config;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+public class IoTDBJMXTest {
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+    EnvironmentUtils.envSetUp();
+    Class.forName(Config.JDBC_DRIVER_NAME);
+  }
+
+  @AfterClass
+  public static void tearDown() throws Exception {
+    EnvironmentUtils.cleanEnv();
+  }
+
+  @Test
+  public void testThreadPool() {
+    try (Connection connection =
+            DriverManager.getConnection(
+                Config.IOTDB_URL_PREFIX + "127.0.0.1:6667/", "root", "root");
+        Statement statement = connection.createStatement(); ) {
+      // make sure two storage groups having no conflict when registering their JMX info (for their
+      // thread pools)
+      statement.execute("set storage group to root.sg1");
+      statement.execute("set storage group to root.sg2");
+      statement.execute("insert into root.sg1.d1 (time, s1) values (1, 1)");
+      statement.execute("insert into root.sg2.d1 (time, s1) values (1, 1)");
+    } catch (SQLException throwables) {
+      throwables.printStackTrace();
+    }
+  }

Review comment:
       How is the result checked?
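
       A minimal sketch of an explicit check, assuming it is added inside the same try-with-resources as the inserts (it needs `java.sql.ResultSet` and `org.junit.Assert` imported), and that the catch block is changed to fail the test instead of only printing the stack trace:

           try (ResultSet resultSet = statement.executeQuery("select s1 from root.sg1.d1")) {
             // at least the point inserted above should come back
             Assert.assertTrue(resultSet.next());
           }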

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/server/member/MetaGroupMember.java
##########
@@ -1441,31 +1446,33 @@ public TSStatus processNonPartitionedMetaPlan(PhysicalPlan plan) {
     return result;
   }
 
-  /**
-   * Forward a non-query plan to the data port of "receiver"
-   *
-   * @param plan a non-query plan
-   * @param header to determine which DataGroupMember of "receiver" will process the request.
-   * @return a TSStatus indicating if the forwarding is successful.
-   */
-  private TSStatus forwardDataPlanAsync(PhysicalPlan plan, Node receiver, RaftNode header)
-      throws IOException {
-    RaftService.AsyncClient client =
-        getClientProvider().getAsyncDataClient(receiver, RaftServer.getWriteOperationTimeoutMS());
-    return forwardPlanAsync(plan, receiver, header, client);
-  }
-
-  private TSStatus forwardDataPlanSync(PhysicalPlan plan, Node receiver, RaftNode header)
-      throws IOException {
-    Client client;
-    try {
-      client =
-          getClientProvider().getSyncDataClient(receiver, RaftServer.getWriteOperationTimeoutMS());
-    } catch (TException e) {
-      throw new IOException(e);
-    }
-    return forwardPlanSync(plan, receiver, header, client);
-  }
+  //  /**
+  //   * Forward a non-query plan to the data port of "receiver"
+  //   *
+  //   * @param plan a non-query plan
+  //   * @param header to determine which DataGroupMember of "receiver" will process the request.
+  //   * @return a TSStatus indicating if the forwarding is successful.
+  //   */
+  //  private TSStatus forwardDataPlanAsync(PhysicalPlan plan, Node receiver, RaftNode header)
+  //      throws IOException {
+  //    RaftService.AsyncClient client =
+  //        getClientProvider()
+  //            .getAsyncDataClient(receiver, ClusterConstant.getWriteOperationTimeoutMS());
+  //    return forwardPlanAsync(plan, receiver, header, client);
+  //  }
+  //
+  //  private TSStatus forwardDataPlanSync(PhysicalPlan plan, Node receiver, RaftNode header)
+  //      throws IOException {
+  //    Client client;
+  //    try {
+  //      client =
+  //          getClientProvider()
+  //              .getSyncDataClient(receiver, ClusterConstant.getWriteOperationTimeoutMS());
+  //    } catch (TException e) {
+  //      throw new IOException(e);
+  //    }
+  //    return forwardPlanSync(plan, receiver, header, client);
+  //  }
 

Review comment:
       Remove this.

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/server/member/MetaGroupMember.java
##########
@@ -326,59 +294,31 @@ public void start() {
   @Override
   void startBackGroundThreads() {
     super.startBackGroundThreads();
-    reportThread =
-        Executors.newSingleThreadScheduledExecutor(n -> new Thread(n, "NodeReportThread"));
-    hardLinkCleanerThread =
-        Executors.newSingleThreadScheduledExecutor(n -> new Thread(n, "HardLinkCleaner"));
   }
 
   /**
-   * Stop the heartbeat and catch-up thread pool, DataClusterServer, ClientServer and reportThread.
-   * Calling the method twice does not induce side effects.
+   * Stop the heartbeat and catch-up thread pool, DataClusterServer, ClusterTSServiceImpl and
+   * reportThread. Calling the method twice does not induce side effects.
    */
   @Override
   public void stop() {
     super.stop();
-    if (getDataClusterServer() != null) {
-      getDataClusterServer().stop();
-    }
-    if (getDataHeartbeatServer() != null) {
-      getDataHeartbeatServer().stop();
-    }
-    if (clientServer != null) {
-      clientServer.stop();
-    }
-    if (reportThread != null) {
-      reportThread.shutdownNow();
-      try {
-        reportThread.awaitTermination(THREAD_POLL_WAIT_TERMINATION_TIME_S, TimeUnit.SECONDS);
-      } catch (InterruptedException e) {
-        Thread.currentThread().interrupt();
-        logger.error("Unexpected interruption when waiting for reportThread to end", e);
-      }
-    }
-    if (hardLinkCleanerThread != null) {
-      hardLinkCleanerThread.shutdownNow();
-      try {
-        hardLinkCleanerThread.awaitTermination(
-            THREAD_POLL_WAIT_TERMINATION_TIME_S, TimeUnit.SECONDS);
-      } catch (InterruptedException e) {
-        Thread.currentThread().interrupt();
-        logger.error("Unexpected interruption when waiting for hardlinkCleaner to end", e);
-      }
-    }
     logger.info("{}: stopped", name);
   }
 
+  @Override
+  public ServiceType getID() {
+    return ServiceType.CLUSTER_META_ENGINE;
+  }
+
   /**
-   * Start DataClusterServer and ClientServer so this node will be able to respond to other nodes
-   * and clients.
+   * Start DataClusterServer and ClusterTSServiceImpl so this node will be able to respond to other
+   * nodes and clients.
    */
   protected void initSubServers() throws TTransportException, StartupException {
-    getDataClusterServer().start();
-    getDataHeartbeatServer().start();
-    clientServer.setCoordinator(this.coordinator);
-    clientServer.start();
+    //    getDataClusterServer().start();
+    //    getDataHeartbeatServer().start();
+    // TODO FIXME
   }

Review comment:
       Maybe this should be removed?

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/server/member/MetaGroupMember.java
##########
@@ -582,8 +508,9 @@ private boolean joinCluster(Node node, StartUpStatus startUpStatus)
     } else if (resp.getRespNum() == Response.RESPONSE_AGREE) {
       logger.info("Node {} admitted this node into the cluster", node);
       ByteBuffer partitionTableBuffer = resp.partitionTableBytes;
-      acceptPartitionTable(partitionTableBuffer, true);
-      getDataClusterServer().pullSnapshots();
+      acceptVerifiedPartitionTable(partitionTableBuffer, true);
+      // this should be called in ClusterIoTDB TODO
+      // getDataGroupEngine().pullSnapshots();
       return true;

Review comment:
       Check and remove this.

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientPoolFactoryTest.java
##########
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+import org.apache.iotdb.cluster.utils.ClientUtils;
+
+import org.apache.commons.pool2.impl.GenericKeyedObjectPool;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.net.ServerSocket;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NoSuchElementException;
+
+public class ClientPoolFactoryTest {
+  private ClusterConfig clusterConfig = ClusterDescriptor.getInstance().getConfig();
+
+  private long mockMaxWaitTimeoutMs = 10 * 1000L;
+  private int mockMaxClientPerMember = 10;
+
+  private int maxClientPerNodePerMember = clusterConfig.getMaxClientPerNodePerMember();
+  private long waitClientTimeoutMS = clusterConfig.getWaitClientTimeoutMS();
+
+  private ClientPoolFactory clientPoolFactory;
+  private MockClientManager mockClientManager;
+
+  @Before
+  public void setUp() {
+    clusterConfig.setMaxClientPerNodePerMember(mockMaxClientPerMember);
+    clusterConfig.setWaitClientTimeoutMS(mockMaxWaitTimeoutMs);
+    clientPoolFactory = new ClientPoolFactory();
+    mockClientManager =
+        new MockClientManager() {
+          @Override
+          public void returnAsyncClient(
+              RaftService.AsyncClient client, Node node, ClientCategory category) {
+            assert (client == asyncClient);

Review comment:
       Change to JUnit assertion.
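       The Java `assert` keyword is skipped unless the JVM runs with `-ea`, so a JUnit assertion is safer here. A minimal sketch of the change:

       ```java
       @Override
       public void returnAsyncClient(
           RaftService.AsyncClient client, Node node, ClientCategory category) {
         // JUnit assertion: always evaluated, unlike the `assert` keyword
         Assert.assertSame(asyncClient, client);
       }
       ```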

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientPoolFactoryTest.java
##########
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+import org.apache.iotdb.cluster.utils.ClientUtils;
+
+import org.apache.commons.pool2.impl.GenericKeyedObjectPool;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.net.ServerSocket;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NoSuchElementException;
+
+public class ClientPoolFactoryTest {
+  private ClusterConfig clusterConfig = ClusterDescriptor.getInstance().getConfig();
+
+  private long mockMaxWaitTimeoutMs = 10 * 1000L;
+  private int mockMaxClientPerMember = 10;
+
+  private int maxClientPerNodePerMember = clusterConfig.getMaxClientPerNodePerMember();
+  private long waitClientTimeoutMS = clusterConfig.getWaitClientTimeoutMS();
+
+  private ClientPoolFactory clientPoolFactory;
+  private MockClientManager mockClientManager;
+
+  @Before
+  public void setUp() {
+    clusterConfig.setMaxClientPerNodePerMember(mockMaxClientPerMember);
+    clusterConfig.setWaitClientTimeoutMS(mockMaxWaitTimeoutMs);
+    clientPoolFactory = new ClientPoolFactory();
+    mockClientManager =
+        new MockClientManager() {
+          @Override
+          public void returnAsyncClient(
+              RaftService.AsyncClient client, Node node, ClientCategory category) {
+            assert (client == asyncClient);
+          }
+
+          @Override
+          public void returnSyncClient(
+              RaftService.Client client, Node node, ClientCategory category) {
+            Assert.assertTrue(client == syncClient);
+          }
+        };
+    clientPoolFactory.setClientManager(mockClientManager);
+  }
+
+  @After
+  public void tearDown() {
+    clusterConfig.setMaxClientPerNodePerMember(maxClientPerNodePerMember);
+    clusterConfig.setWaitClientTimeoutMS(waitClientTimeoutMS);
+  }
+
+  @Test
+  public void poolConfigTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+    Node node = constructDefaultNode();
+
+    for (int i = 0; i < mockMaxClientPerMember; i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+    }
+
+    long timeStart = System.currentTimeMillis();
+    try {
+      pool.borrowObject(node);
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof NoSuchElementException);
+    } finally {
+      Assert.assertTrue(System.currentTimeMillis() - timeStart + 10 > mockMaxWaitTimeoutMs);
+    }
+  }
+
+  @Test
+  public void poolRecycleTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+
+    Node node = constructDefaultNode();
+    List<RaftService.AsyncClient> clientList = new ArrayList<>();
+    for (int i = 0; i < pool.getMaxIdlePerKey(); i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+      clientList.add(client);
+    }
+
+    for (RaftService.AsyncClient client : clientList) {
+      pool.returnObject(node, client);
+    }
+
+    for (int i = 0; i < pool.getMaxIdlePerKey(); i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+      Assert.assertTrue(clientList.contains(client));
+    }
+  }

Review comment:
       If more clients than idles are borrowed and returned, will the pool destroy the overflowing ones?
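       A test along these lines could answer that (a sketch; it assumes commons-pool2's default behavior of destroying returned objects above `maxIdlePerKey`, which is visible through `getDestroyedCount()`):

       ```java
       @Test
       public void poolOverflowRecycleTest() throws Exception {
         GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
             clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
         Node node = constructDefaultNode();

         // borrow as many clients as the pool allows for this key
         int borrowed = pool.getMaxTotalPerKey();
         List<RaftService.AsyncClient> clients = new ArrayList<>();
         for (int i = 0; i < borrowed; i++) {
           clients.add(pool.borrowObject(node));
         }
         for (RaftService.AsyncClient client : clients) {
           pool.returnObject(node, client);
         }

         // only maxIdlePerKey instances should be kept; the overflow should be destroyed
         Assert.assertTrue(pool.getNumIdle(node) <= pool.getMaxIdlePerKey());
         Assert.assertEquals(borrowed - pool.getNumIdle(node), pool.getDestroyedCount());
       }
       ```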

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/integration/BaseSingleNodeTest.java
##########
@@ -44,23 +45,27 @@
   @Before
   public void setUp() throws Exception {
     initConfigs();
-    metaServer = new MetaClusterServer();
-    metaServer.start();
-    metaServer.buildCluster();
+    daemon = ClusterIoTDB.getInstance();
+    daemon.initLocalEngines();
+    DataGroupEngine.getInstance().resetFactory();
+    daemon.activeStartNodeMode();
   }
 
   @After
   public void tearDown() throws Exception {
-    metaServer.stop();
+    // TODO fixme
+    daemon.stop();

Review comment:
       Please make it clear what should be fixed.

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/server/member/MetaGroupMember.java
##########
@@ -1807,15 +1821,16 @@ public void applyRemoveNode(RemoveNodeLog removeNodeLog) {
         new Thread(
                 () -> {
                   try {
-                    Thread.sleep(RaftServer.getHeartbeatIntervalMs());
+                    Thread.sleep(ClusterConstant.getHeartbeatIntervalMs());
                   } catch (InterruptedException e) {
                     Thread.currentThread().interrupt();
                     // ignore
                   }
                   super.stop();
-                  if (clientServer != null) {
-                    clientServer.stop();
-                  }
+                  // TODO FIXME
+                  //                  if (clusterTSServiceImpl != null) {
+                  //                    clusterTSServiceImpl.stop();
+                  //                  }

Review comment:
       Check the TODOs.

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/server/member/MetaGroupMember.java
##########
@@ -200,37 +181,22 @@
   private PartitionTable partitionTable;
   /** router calculates the partition groups that a partitioned plan should be sent to */
   private ClusterPlanRouter router;
-  /**
-   * each node contains multiple DataGroupMembers and they are managed by a DataClusterServer acting
-   * as a broker
-   */
-  private DataClusterServer dataClusterServer;
 
-  /** each node starts a data heartbeat server to transfer heartbeat requests */
-  private DataHeartbeatServer dataHeartbeatServer;
-
-  /**
-   * an override of TSServiceImpl, which redirect JDBC and Session requests to the MetaGroupMember
-   * so they can be processed cluster-wide
-   */
-  private ClientServer clientServer;
-
-  private DataClientProvider dataClientProvider;
-
-  /**
-   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
-   * of all raft members in this node
-   */
-  private ScheduledExecutorService reportThread;
+  //  /**
+  //   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the
+  // status
+  //   * of all raft members in this node
+  //   */
+  //  private ScheduledExecutorService reportThread;

Review comment:
       Remove them.

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientManagerTest.java
##########
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ClientManagerTest extends BaseClientTest {
+
+  @Before
+  public void setUp() throws IOException {
+    startDataServer();
+    startMetaServer();
+    startDataHeartbeatServer();
+    startMetaHeartbeatServer();
+  }
+
+  @After
+  public void tearDown() throws IOException, InterruptedException {
+    stopDataServer();
+    stopMetaServer();
+    stopDataHeartbeatServer();
+    stopMetaHeartbeatServer();
+  }
+
+  @Test
+  public void syncClientManagersTest() throws Exception {
+    // ---------Sync cluster clients manager test------------
+    ClientManager clusterManager =
+        new ClientManager(false, ClientManager.Type.RequestForwardClient);
+    RaftService.Client syncClusterClient =
+        clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(syncClusterClient);
+    Assert.assertTrue(syncClusterClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) syncClusterClient).getNode(), defaultNode);
+    Assert.assertTrue(syncClusterClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) syncClusterClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync meta(meta heartbeat) clients manager test------------
+    ClientManager metaManager = new ClientManager(false, ClientManager.Type.MetaGroupClient);
+    RaftService.Client metaClient = metaManager.borrowSyncClient(defaultNode, ClientCategory.META);
+    Assert.assertNotNull(metaClient);
+    Assert.assertTrue(metaClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaClient).getNode(), defaultNode);
+    Assert.assertTrue(metaClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaClient).returnSelf();
+
+    RaftService.Client metaHeartClient =
+        metaManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT);
+    Assert.assertNotNull(metaHeartClient);
+    Assert.assertTrue(metaHeartClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(metaHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync data(data heartbeat) clients manager test------------
+    ClientManager dataManager = new ClientManager(false, ClientManager.Type.DataGroupClient);
+
+    RaftService.Client dataClient = dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+    Assert.assertNotNull(dataClient);
+    Assert.assertTrue(dataClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataClient).getNode(), defaultNode);
+    Assert.assertTrue(dataClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataClient).returnSelf();
+
+    RaftService.Client dataHeartClient =
+        dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT);
+    Assert.assertNotNull(dataHeartClient);
+    Assert.assertTrue(dataHeartClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(dataHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+  }
+
+  @Test
+  public void asyncClientManagersTest() throws Exception {
+    // ---------async cluster clients manager test------------
+    ClientManager clusterManager = new ClientManager(true, ClientManager.Type.RequestForwardClient);
+    RaftService.AsyncClient clusterClient =
+        clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(clusterClient);
+    Assert.assertTrue(clusterClient instanceof AsyncDataClient);
+    Assert.assertEquals(((AsyncDataClient) clusterClient).getNode(), defaultNode);
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isValid());
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isReady());

Review comment:
       Maybe you can invalidate the client and test that we can no longer get it from the manager.
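       A sketch of what such a check might look like (the `onError(...)` call is only an assumed way to mark the client invalid; the real invalidation hook may differ):

       ```java
       AsyncDataClient asyncClient = (AsyncDataClient) clusterClient;

       // assumption: simulate a transport error so the client is no longer valid
       asyncClient.onError(new Exception("simulated failure"));
       Assert.assertFalse(asyncClient.isValid());
       asyncClient.returnSelf();

       // an invalidated instance should not be handed out again
       RaftService.AsyncClient reborrowed =
           clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA);
       Assert.assertNotNull(reborrowed);
       Assert.assertNotSame(asyncClient, reborrowed);
       ```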

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/query/fill/ClusterPreviousFill.java
##########
@@ -120,7 +121,9 @@ private TimeValuePair performPreviousFill(
     }
     CountDownLatch latch = new CountDownLatch(partitionGroups.size());
     PreviousFillHandler handler = new PreviousFillHandler(latch);
-
+    // TODO it is not suitable for register and deregister an Object to JMX to such a frequent
+    // function call.
+    // BUT is it suitable to create a thread pool for each calling??
     ExecutorService fillService = Executors.newFixedThreadPool(partitionGroups.size());

Review comment:
       These were temporary solutions, and I expected you to fix them within this PR. However, they were originally written for full concurrency: we did not want a few failing connections to block the others.
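       If the concern is mainly the per-call JMX registration, one option (a sketch, not necessarily the final fix) is to create the executor once and share it across calls, which still lets every partition group run concurrently:

       ```java
       // Sketch (field name and placement are assumptions): a single shared cached
       // pool keeps the per-group concurrency, so a few failing connections do not
       // block the others, while avoiding creating a new pool (and a new JMX
       // registration) on every previous-fill call. Needs java.util.concurrent imports.
       private static final ExecutorService PREVIOUS_FILL_POOL =
           Executors.newCachedThreadPool();
       ```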

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/sync/SyncDataClientTest.java
##########
@@ -4,123 +4,107 @@
 
 package org.apache.iotdb.cluster.client.sync;
 
-import org.apache.iotdb.cluster.client.sync.SyncDataClient.FactorySync;
-import org.apache.iotdb.cluster.rpc.thrift.Node;
-import org.apache.iotdb.cluster.rpc.thrift.RaftService.Client;
-import org.apache.iotdb.rpc.TSocketWrapper;
+import org.apache.iotdb.cluster.client.BaseClientTest;
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
 
 import org.apache.thrift.protocol.TBinaryProtocol;
-import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.apache.thrift.transport.TTransportException;
+import org.junit.Assert;
+import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
-import java.net.ServerSocket;
+import java.net.SocketException;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
 
-public class SyncDataClientTest {
+public class SyncDataClientTest extends BaseClientTest {
 
-  @Test
-  public void test() throws IOException, InterruptedException {
-    Node node = new Node();
-    node.setDataPort(40010).setInternalIp("localhost").setClientIp("localhost");
-    ServerSocket serverSocket = new ServerSocket(node.getDataPort());
-    Thread listenThread =
-        new Thread(
-            () -> {
-              while (!Thread.interrupted()) {
-                try {
-                  serverSocket.accept();
-                } catch (IOException e) {
-                  return;
-                }
-              }
-            });
-    listenThread.start();
+  private TProtocolFactory protocolFactory;
+
+  @Before
+  public void setUp() {
+    protocolFactory =
+        ClusterDescriptor.getInstance().getConfig().isRpcThriftCompressionEnabled()
+            ? new TCompactProtocol.Factory()
+            : new TBinaryProtocol.Factory();
+  }
 
+  @Test
+  public void testDataClient() throws IOException, InterruptedException, TTransportException {
     try {
-      SyncClientPool syncClientPool = new SyncClientPool(new FactorySync(new Factory()));
-      SyncDataClient client;
-      client = (SyncDataClient) syncClientPool.getClient(node);
+      startDataServer();
+      SyncDataClient dataClient =
+          new SyncDataClient(protocolFactory, defaultNode, ClientCategory.DATA);
 
-      assertEquals(node, client.getNode());
+      assertEquals(
+          "SyncDataClient{node=Node(internalIp:localhost, metaPort:9003, nodeIdentifier:0, "
+              + "dataPort:40010, clientPort:0, clientIp:localhost),port=40010}",
+          dataClient.toString());
 
-      client.setTimeout(1000);
-      assertEquals(1000, client.getTimeout());
+      assertCheck(dataClient);
 
-      client.putBack();
-      Client newClient = syncClientPool.getClient(node);
-      assertEquals(client, newClient);
-      assertTrue(client.getInputProtocol().getTransport().isOpen());
+      dataClient =
+          new SyncDataClient.SyncDataClientFactory(protocolFactory, ClientCategory.DATA)
+              .makeObject(defaultNode)
+              .getObject();
 
       assertEquals(
-          "DataClient{node=ClusterNode{ internalIp='localhost', metaPort=0, nodeIdentifier=0,"
-              + " dataPort=40010, clientPort=0, clientIp='localhost'}}",
-          client.toString());
-
-      client =
-          new SyncDataClient(
-              new TBinaryProtocol(TSocketWrapper.wrap(node.getInternalIp(), node.getDataPort())));
-      // client without a belong pool will be closed after putBack()
-      client.putBack();
-      assertFalse(client.getInputProtocol().getTransport().isOpen());
+          "SyncDataClient{node=Node(internalIp:localhost, metaPort:9003, nodeIdentifier:0, "
+              + "dataPort:40010, clientPort:0, clientIp:localhost),port=40010}",
+          dataClient.toString());
+
+      assertCheck(dataClient);
+    } catch (Exception e) {
+      e.printStackTrace();

Review comment:
       Maybe the test should fail here instead of only printing the stack trace.
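       A minimal sketch of the suggested change to the catch block:

       ```java
       } catch (Exception e) {
         // fail explicitly so a client construction error cannot pass silently
         Assert.fail("testDataClient failed: " + e);
       }
       ```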

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientManagerTest.java
##########
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ClientManagerTest extends BaseClientTest {
+
+  @Before
+  public void setUp() throws IOException {
+    startDataServer();
+    startMetaServer();
+    startDataHeartbeatServer();
+    startMetaHeartbeatServer();
+  }
+
+  @After
+  public void tearDown() throws IOException, InterruptedException {
+    stopDataServer();
+    stopMetaServer();
+    stopDataHeartbeatServer();
+    stopMetaHeartbeatServer();
+  }
+
+  @Test
+  public void syncClientManagersTest() throws Exception {
+    // ---------Sync cluster clients manager test------------
+    ClientManager clusterManager =
+        new ClientManager(false, ClientManager.Type.RequestForwardClient);
+    RaftService.Client syncClusterClient =
+        clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(syncClusterClient);
+    Assert.assertTrue(syncClusterClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) syncClusterClient).getNode(), defaultNode);
+    Assert.assertTrue(syncClusterClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) syncClusterClient).returnSelf();
+

Review comment:
       Is there any method to check whether the client is returned? Maybe you can get the client again after it is returned and check that they are the same reference.
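       For example (a sketch; it assumes the underlying pool hands the idle instance back on the next borrow, which is the commons-pool2 LIFO default):

       ```java
       SyncDataClient borrowedClient = (SyncDataClient) syncClusterClient;
       borrowedClient.returnSelf();

       // if returnSelf() really put the client back, borrowing again for the same
       // node and category should yield the very same instance
       RaftService.Client reborrowed =
           clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
       Assert.assertSame(borrowedClient, reborrowed);
       ```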

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/query/reader/DatasourceInfoTest.java
##########
@@ -48,20 +50,35 @@
   @Before
   public void setUp() {
     metaGroupMember = new TestMetaGroupMember();
-    metaGroupMember.setClientProvider(
-        new DataClientProvider(new Factory()) {
-          @Override
-          public AsyncDataClient getAsyncDataClient(Node node, int timeout) throws IOException {
-            return new AsyncDataClient(null, null, TestUtils.getNode(0), null) {
+    ClusterIoTDB.getInstance()
+        .setClientManager(
+            new IClientManager() {

Review comment:
       Static fields shared between tests should be restored after each test, or unexpected results may occur when further tests are added.
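       A sketch of the restore pattern (the `getClientManager()` accessor and the `stubClientManager` name are assumptions for illustration):

       ```java
       private IClientManager previousClientManager;

       @Before
       public void setUp() {
         // remember the shared manager before replacing it with the test stub
         previousClientManager = ClusterIoTDB.getInstance().getClientManager();
         ClusterIoTDB.getInstance().setClientManager(stubClientManager);
       }

       @After
       public void tearDown() {
         // restore the static state so later tests see the original manager
         ClusterIoTDB.getInstance().setClientManager(previousClientManager);
       }
       ```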

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,671 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+// we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB.
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO fix me: better to throw exception if the client can not be get. Then we can remove this
+  // field.
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  // split DataGroupServiceImpls into engine and impls
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses a individual registerManager with its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  // currently, clientManager is only used for those instances who do not belong to any
+  // DataGroup..
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public void initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine,it should be singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+    }
+    JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);

Review comment:
       I guess it is because the report can be re-enabled at runtime, and it is not convenient to perform concurrency control when re-enabling it.

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/log/applier/DataLogApplierTest.java
##########
@@ -179,75 +180,101 @@ public void setUp()
     IoTDBDescriptor.getInstance().getConfig().setEnablePartialInsert(false);
     isPartitionEnabled = IoTDBDescriptor.getInstance().getConfig().isEnablePartition();
     IoTDBDescriptor.getInstance().getConfig().setEnablePartition(true);
-    testMetaGroupMember.setClientProvider(
-        new DataClientProvider(new Factory()) {
-          @Override
-          public AsyncDataClient getAsyncDataClient(Node node, int timeout) throws IOException {
-            return new AsyncDataClient(null, null, node, null) {
+    // TODO fixme: restore normal provider
+    ClusterIoTDB.getInstance()
+        .setClientManager(
+            new IClientManager() {

Review comment:
       I suggest fixing it now; otherwise, further tests may be interfered with.

##########
File path: server/src/test/java/org/apache/iotdb/db/integration/IoTDBCheckConfigIT.java
##########
@@ -67,15 +67,15 @@ public void setUp() {
     EnvironmentUtils.closeStatMonitor();
     EnvironmentUtils.envSetUp();
 
-    final SecurityManager securityManager =
-        new SecurityManager() {
-          public void checkPermission(Permission permission) {
-            if (permission.getName().startsWith("exitVM")) {
-              throw new AccessControlException("Wrong system config");
-            }
-          }
-        };
-    System.setSecurityManager(securityManager);
+    //    final SecurityManager securityManager =
+    //        new SecurityManager() {
+    //          public void checkPermission(Permission permission) {
+    //            if (permission.getName().startsWith("exitVM")) {
+    //              throw new AccessControlException("Wrong system config");
+    //            }
+    //          }
+    //        };
+    //    System.setSecurityManager(securityManager);

Review comment:
       Remove commented code blocks.

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/server/member/MetaGroupMemberTest.java
##########
@@ -338,19 +343,20 @@ public void applyRemoveNode(RemoveNodeLog removeNodeLog) {
           }
 
           @Override
-          public DataClusterServer getDataClusterServer() {
+          public DataGroupEngine getDataGroupEngine() {
             return mockDataClusterServer
-                ? MetaGroupMemberTest.this.dataClusterServer
-                : super.getDataClusterServer();
+                ? MetaGroupMemberTest.this.dataGroupEngine
+                : ClusterIoTDB.getInstance().getDataGroupEngine();
           }
 
-          @Override
-          public DataHeartbeatServer getDataHeartbeatServer() {
-            return new DataHeartbeatServer(thisNode, dataClusterServer) {
-              @Override
-              public void start() {}
-            };
-          }
+          // TODO we remove a do-nothing DataHeartbeat here.
+          //          @Override
+          //          public DataHeartbeatServer getDataHeartbeatServer() {
+          //            return new DataHeartbeatServer(thisNode, dataGroupServiceImpls) {
+          //              @Override
+          //              public void start() {}
+          //            };
+          //          }

Review comment:
       Remove it if necessary.

##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientPoolFactoryTest.java
##########
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+import org.apache.iotdb.cluster.utils.ClientUtils;
+
+import org.apache.commons.pool2.impl.GenericKeyedObjectPool;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.net.ServerSocket;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NoSuchElementException;
+
+public class ClientPoolFactoryTest {
+  private ClusterConfig clusterConfig = ClusterDescriptor.getInstance().getConfig();
+
+  private long mockMaxWaitTimeoutMs = 10 * 1000L;
+  private int mockMaxClientPerMember = 10;
+
+  private int maxClientPerNodePerMember = clusterConfig.getMaxClientPerNodePerMember();
+  private long waitClientTimeoutMS = clusterConfig.getWaitClientTimeoutMS();
+
+  private ClientPoolFactory clientPoolFactory;
+  private MockClientManager mockClientManager;
+
+  @Before
+  public void setUp() {
+    clusterConfig.setMaxClientPerNodePerMember(mockMaxClientPerMember);
+    clusterConfig.setWaitClientTimeoutMS(mockMaxWaitTimeoutMs);
+    clientPoolFactory = new ClientPoolFactory();
+    mockClientManager =
+        new MockClientManager() {
+          @Override
+          public void returnAsyncClient(
+              RaftService.AsyncClient client, Node node, ClientCategory category) {
+            assert (client == asyncClient);
+          }
+
+          @Override
+          public void returnSyncClient(
+              RaftService.Client client, Node node, ClientCategory category) {
+            Assert.assertTrue(client == syncClient);
+          }
+        };
+    clientPoolFactory.setClientManager(mockClientManager);
+  }
+
+  @After
+  public void tearDown() {
+    clusterConfig.setMaxClientPerNodePerMember(maxClientPerNodePerMember);
+    clusterConfig.setWaitClientTimeoutMS(waitClientTimeoutMS);
+  }
+
+  @Test
+  public void poolConfigTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+    Node node = constructDefaultNode();
+
+    for (int i = 0; i < mockMaxClientPerMember; i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+    }
+
+    long timeStart = System.currentTimeMillis();
+    try {
+      pool.borrowObject(node);
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof NoSuchElementException);
+    } finally {
+      Assert.assertTrue(System.currentTimeMillis() - timeStart + 10 > mockMaxWaitTimeoutMs);
+    }
+  }
+
+  @Test
+  public void poolRecycleTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+
+    Node node = constructDefaultNode();
+    List<RaftService.AsyncClient> clientList = new ArrayList<>();
+    for (int i = 0; i < pool.getMaxIdlePerKey(); i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+      clientList.add(client);
+    }
+
+    for (RaftService.AsyncClient client : clientList) {
+      pool.returnObject(node, client);
+    }
+
+    for (int i = 0; i < pool.getMaxIdlePerKey(); i++) {
+      RaftService.AsyncClient client = pool.borrowObject(node);
+      Assert.assertNotNull(client);
+      Assert.assertTrue(clientList.contains(client));
+    }
+  }
+
+  @Test
+  public void createAsyncDataClientTest() throws Exception {
+    GenericKeyedObjectPool<Node, RaftService.AsyncClient> pool =
+        clientPoolFactory.createAsyncDataPool(ClientCategory.DATA);
+
+    Assert.assertEquals(pool.getMaxTotalPerKey(), mockMaxClientPerMember);
+    Assert.assertEquals(pool.getMaxWaitDuration(), Duration.ofMillis(mockMaxWaitTimeoutMs));
+
+    RaftService.AsyncClient asyncClient = null;
+
+    Node node = constructDefaultNode();
+
+    asyncClient = pool.borrowObject(node);
+    mockClientManager.setAsyncClient(asyncClient);
+    Assert.assertNotNull(asyncClient);
+    Assert.assertTrue(asyncClient instanceof AsyncDataClient);

Review comment:
       What is the meaning of `mockClientManager.setAsyncClient(asyncClient);` here?

##########
File path: server/src/test/java/org/apache/iotdb/db/integration/IoTDBCheckConfigIT.java
##########
@@ -145,9 +140,7 @@ public void testSameTimeEncoderAfterStartService() throws Exception {
     try {
       IoTDBConfigCheck.getInstance().checkConfig();
     } catch (Throwable t) {
-      assertTrue(false);
-    } finally {
-      System.setSecurityManager(null);
+      fail("should have no configration errors");

Review comment:
       It would be better to include the caught exception in the failure message.
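       A minimal sketch:

       ```java
       } catch (Throwable t) {
         // include the cause so a failure here is diagnosable from the report
         fail("should have no configuration errors, but got: " + t);
       }
       ```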




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738051240



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/coordinator/Coordinator.java
##########
@@ -738,7 +739,7 @@ private TSStatus forwardPlan(PhysicalPlan plan, PartitionGroup group) {
         } else {
           status = forwardDataPlanSync(plan, node, group.getHeader());
         }
-      } catch (IOException e) {
+      } catch (Exception e) {
         status = StatusUtils.getStatus(StatusUtils.EXECUTE_STATEMENT_ERROR, e.getMessage());

Review comment:
       Fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43662740/badge)](https://coveralls.io/builds/43662740)
   
   Coverage decreased (-0.004%) to 67.249% when pulling **f4b9e99d8d74d2bc826c4c4403462b93ef63acbe on cluster-** into **1dcc82aad34bfc0820ac28f6a2e70757fef7d219 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-958629531


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/44009134/badge)](https://coveralls.io/builds/44009134)
   
   Coverage decreased (-0.01%) to 67.036% when pulling **42ee3a31de4af4187ba8ae227bcdd7120d16304f on cluster-** into **5e1f7809dc0ad1e21bc18f53ab0a6b2e2b30091a on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43804457/badge)](https://coveralls.io/builds/43804457)
   
   Coverage decreased (-0.1%) to 66.918% when pulling **57a73f23517fe993c493644b71de2ccc19219b09 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952661331


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [366 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.2% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] LebronAl merged pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl merged pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-948438296


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [1 Bug](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [368 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.8%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.8%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.8% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-948393757


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [6 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [361 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.0%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.0%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.0% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.4%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.4%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.4% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43808674/badge)](https://coveralls.io/builds/43808674)
   
   Coverage decreased (-0.01%) to 67.035% when pulling **31947d8836589be84f34c1ce6f9131e60072412d on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738252253



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/server/member/MetaGroupMember.java
##########
@@ -1441,31 +1446,33 @@ public TSStatus processNonPartitionedMetaPlan(PhysicalPlan plan) {
     return result;
   }
 
-  /**
-   * Forward a non-query plan to the data port of "receiver"
-   *
-   * @param plan a non-query plan
-   * @param header to determine which DataGroupMember of "receiver" will process the request.
-   * @return a TSStatus indicating if the forwarding is successful.
-   */
-  private TSStatus forwardDataPlanAsync(PhysicalPlan plan, Node receiver, RaftNode header)
-      throws IOException {
-    RaftService.AsyncClient client =
-        getClientProvider().getAsyncDataClient(receiver, RaftServer.getWriteOperationTimeoutMS());
-    return forwardPlanAsync(plan, receiver, header, client);
-  }
-
-  private TSStatus forwardDataPlanSync(PhysicalPlan plan, Node receiver, RaftNode header)
-      throws IOException {
-    Client client;
-    try {
-      client =
-          getClientProvider().getSyncDataClient(receiver, RaftServer.getWriteOperationTimeoutMS());
-    } catch (TException e) {
-      throw new IOException(e);
-    }
-    return forwardPlanSync(plan, receiver, header, client);
-  }
+  //  /**
+  //   * Forward a non-query plan to the data port of "receiver"
+  //   *
+  //   * @param plan a non-query plan
+  //   * @param header to determine which DataGroupMember of "receiver" will process the request.
+  //   * @return a TSStatus indicating if the forwarding is successful.
+  //   */
+  //  private TSStatus forwardDataPlanAsync(PhysicalPlan plan, Node receiver, RaftNode header)
+  //      throws IOException {
+  //    RaftService.AsyncClient client =
+  //        getClientProvider()
+  //            .getAsyncDataClient(receiver, ClusterConstant.getWriteOperationTimeoutMS());
+  //    return forwardPlanAsync(plan, receiver, header, client);
+  //  }
+  //
+  //  private TSStatus forwardDataPlanSync(PhysicalPlan plan, Node receiver, RaftNode header)
+  //      throws IOException {
+  //    Client client;
+  //    try {
+  //      client =
+  //          getClientProvider()
+  //              .getSyncDataClient(receiver, ClusterConstant.getWriteOperationTimeoutMS());
+  //    } catch (TException e) {
+  //      throw new IOException(e);
+  //    }
+  //    return forwardPlanSync(plan, receiver, header, client);
+  //  }
 

Review comment:
       Done.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43830329/badge)](https://coveralls.io/builds/43830329)
   
   Coverage decreased (-0.04%) to 67.011% when pulling **ceecd3250b7b6ebcdb2f0c16bcfbc2030795836b on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43907718/badge)](https://coveralls.io/builds/43907718)
   
   Coverage decreased (-0.04%) to 67.005% when pulling **de70ac037bf8178428528d7370c31f48fae84f99 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] mychaow commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
mychaow commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r740668533



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,687 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as that may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO: better to throw an exception if the client cannot be obtained. Then we can remove this field.
+  private static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances who do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  /** initialize the current node and its services */
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better modification in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+      JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+      return false;
+    }
+    return true;
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the config of IoTDB, and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    if (!cluster.initLocalEngines()) {
+      logger.error("initLocalEngines error, stop process!");
+      return;
+    }
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto create schema is taken over by the cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO: duplicate code
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  /** Start as a seed node */
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position with "StorageEngine" in the
+    // cluster module.
+    // TODO: it is better to remove coordinator out of metaGroupEngine
+
+    registerManager.register(metaGroupEngine);
+    registerManager.register(dataGroupEngine);
+
+    // rpc service initialize
+    DataGroupServiceImpls dataGroupServiceImpls = new DataGroupServiceImpls();
+    if (ClusterDescriptor.getInstance().getConfig().isUseAsyncServer()) {
+      MetaAsyncService metaAsyncService = new MetaAsyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      MetaRaftService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      DataRaftService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+    } else {
+      MetaSyncService syncService = new MetaSyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initSyncedServiceImpl(syncService);
+      MetaRaftService.getInstance().initSyncedServiceImpl(syncService);
+      DataRaftService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+    }
+    // start RPC service
+    logger.info("start Meta Heartbeat RPC service... ");
+    registerManager.register(MetaRaftHeartBeatService.getInstance());
+    /* TODO: better to delay starting the Meta RPC service until the heartbeatService has elected the leader and a quorum of followers have caught up. */
+    logger.info("start Meta RPC service... ");

Review comment:
       Would it be better to wait for a few seconds here before starting the Meta RPC service?
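
       A minimal sketch of what such a bounded wait could look like before registering the Meta RPC service; the `isLeaderElected()` helper and the 30-second bound are assumptions for illustration, not part of this PR:

       ```java
       // Sketch only: wait up to an assumed 30 seconds for the meta group to elect a leader,
       // then register the Meta RPC service either way.
       long deadline = System.currentTimeMillis() + 30_000L;
       while (System.currentTimeMillis() < deadline
           && !metaGroupEngine.isLeaderElected()) { // hypothetical helper, not in this PR
         try {
           Thread.sleep(1_000L); // poll once per second
         } catch (InterruptedException e) {
           Thread.currentThread().interrupt();
           break;
         }
       }
       logger.info("start Meta RPC service... ");
       registerManager.register(MetaRaftService.getInstance());
       ```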

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,685 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as that may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  /**
+   * TODO: fix me: better to throw an exception if the client cannot be obtained. Then we can
+   * remove this field.
+   */
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances who do not belong to any DataGroup or
+   * MetaGroup
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better modification in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);

Review comment:
       Why not change the method name?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] jt2594838 commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
jt2594838 commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738887159



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/log/snapshot/FileSnapshot.java
##########
@@ -350,7 +350,7 @@ private void removeRemoteHardLink(RemoteTsFileResource resource) {
         try {
           client.removeHardLink(resource.getTsFile().getAbsolutePath());
         } catch (TException te) {
-          client.close();
+          if (client != null) client.close();

Review comment:
       As the Google Java Style Guide suggests, we should use braces (`{}`) even for single-line code blocks.
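
       For reference, the braced form of that block (same behavior, just matching the style guide) would be:

       ```java
       } catch (TException te) {
         if (client != null) {
           client.close();
         }
       }
       ```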




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-954551921


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [351 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.3%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.3%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.3% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.2% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] jt2594838 commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
jt2594838 commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738869581



##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientManagerTest.java
##########
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ClientManagerTest extends BaseClientTest {
+
+  @Before
+  public void setUp() throws IOException {
+    startDataServer();
+    startMetaServer();
+    startDataHeartbeatServer();
+    startMetaHeartbeatServer();
+  }
+
+  @After
+  public void tearDown() throws IOException, InterruptedException {
+    stopDataServer();
+    stopMetaServer();
+    stopDataHeartbeatServer();
+    stopMetaHeartbeatServer();
+  }
+
+  @Test
+  public void syncClientManagersTest() throws Exception {
+    // ---------Sync cluster clients manager test------------
+    ClientManager clusterManager =
+        new ClientManager(false, ClientManager.Type.RequestForwardClient);
+    RaftService.Client syncClusterClient =
+        clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(syncClusterClient);
+    Assert.assertTrue(syncClusterClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) syncClusterClient).getNode(), defaultNode);
+    Assert.assertTrue(syncClusterClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) syncClusterClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync meta(meta heartbeat) clients manager test------------
+    ClientManager metaManager = new ClientManager(false, ClientManager.Type.MetaGroupClient);
+    RaftService.Client metaClient = metaManager.borrowSyncClient(defaultNode, ClientCategory.META);
+    Assert.assertNotNull(metaClient);
+    Assert.assertTrue(metaClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaClient).getNode(), defaultNode);
+    Assert.assertTrue(metaClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaClient).returnSelf();
+
+    RaftService.Client metaHeartClient =
+        metaManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT);
+    Assert.assertNotNull(metaHeartClient);
+    Assert.assertTrue(metaHeartClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(metaHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync data(data heartbeat) clients manager test------------
+    ClientManager dataManager = new ClientManager(false, ClientManager.Type.DataGroupClient);
+
+    RaftService.Client dataClient = dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+    Assert.assertNotNull(dataClient);
+    Assert.assertTrue(dataClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataClient).getNode(), defaultNode);
+    Assert.assertTrue(dataClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataClient).returnSelf();
+
+    RaftService.Client dataHeartClient =
+        dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT);
+    Assert.assertNotNull(dataHeartClient);
+    Assert.assertTrue(dataHeartClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(dataHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+  }
+
+  @Test
+  public void asyncClientManagersTest() throws Exception {
+    // ---------async cluster clients manager test------------
+    ClientManager clusterManager = new ClientManager(true, ClientManager.Type.RequestForwardClient);
+    RaftService.AsyncClient clusterClient =
+        clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(clusterClient);
+    Assert.assertTrue(clusterClient instanceof AsyncDataClient);
+    Assert.assertEquals(((AsyncDataClient) clusterClient).getNode(), defaultNode);
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isValid());
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isReady());

Review comment:
       Sure, but you'd better leave a comment in the test.
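
       For example, a short note above those assertions (the wording here is just a suggestion) would record the expectation:

       ```java
       // A freshly borrowed async client has not issued any call yet, so it is expected
       // to be both valid and ready straight out of the pool.
       Assert.assertTrue(((AsyncDataClient) clusterClient).isValid());
       Assert.assertTrue(((AsyncDataClient) clusterClient).isReady());
       ```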




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] jt2594838 commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
jt2594838 commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738871690



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/client/ClientCategory.java
##########
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+public enum ClientCategory {
+  META("MetaClient"),
+  META_HEARTBEAT("MetaHeartbeatClient"),
+  DATA("DataClient"),
+  DATA_HEARTBEAT("DataHeartbeatClient"),
+  DATA_ASYNC_APPEND_CLIENT("DataAsyncAppendClient");

Review comment:
       Yes. Async clients use additional selector threads, so we assume the number of selectors may affect performance. Since packet disorder influences write performance but not read performance, we added a separate pool so that the number of selectors used for writes can be controlled and observed independently.
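       To make the rationale concrete, here is a minimal sketch of a dedicated selector pool for the append (write) path. It only illustrates the idea of reserving selector threads; the class name, pool size, and round-robin policy are assumptions, not the PR's actual implementation.

```java
import org.apache.thrift.async.TAsyncClientManager;

import java.io.IOException;

/** Reserves a small, fixed set of Thrift selector threads for write (append) clients. */
public class AppendSelectorPool {

  private final TAsyncClientManager[] managers;
  private int index = 0;

  public AppendSelectorPool(int selectorNum) throws IOException {
    managers = new TAsyncClientManager[selectorNum];
    for (int i = 0; i < selectorNum; i++) {
      // each TAsyncClientManager owns one selector thread
      managers[i] = new TAsyncClientManager();
    }
  }

  /** Hands out the reserved selector threads in round-robin order. */
  public synchronized TAsyncClientManager next() {
    TAsyncClientManager manager = managers[index];
    index = (index + 1) % managers.length;
    return manager;
  }
}
```

       With such a pool, the number of selector threads serving writes can be tuned and measured independently of the read path.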




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-950746251


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [365 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.1% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43804248/badge)](https://coveralls.io/builds/43804248)
   
   Coverage decreased (-0.03%) to 67.018% when pulling **57a73f23517fe993c493644b71de2ccc19219b09 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952712066


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [365 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.2% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.6%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.6%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.6% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-953768117


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [399 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.9%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.9%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.9% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.2% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/44082221/badge)](https://coveralls.io/builds/44082221)
   
   Coverage decreased (-0.03%) to 67.075% when pulling **a3aa2792e171aa32a477b1ccedd3070291b7fc08 on cluster-** into **f799e3cb71218a4dda970002d9ca4500651d5f35 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43606647/badge)](https://coveralls.io/builds/43606647)
   
   Coverage decreased (-0.006%) to 67.437% when pulling **9bda5b3e6f9b46c74021a5f1b61643e4a2b3dddd on cluster-** into **e4b7f64deb54b3fc186424cf969a68bff23a6fc7 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43494122/badge)](https://coveralls.io/builds/43494122)
   
   Coverage decreased (-0.02%) to 67.728% when pulling **2f15139530ebec2a675ab44844b484d58aa67e83 on cluster-** into **c662a3e86de46aecc56236f0c2b693a2c479f38d on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737209150



##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/query/reader/DatasourceInfoTest.java
##########
@@ -48,20 +50,35 @@
   @Before
   public void setUp() {
     metaGroupMember = new TestMetaGroupMember();
-    metaGroupMember.setClientProvider(
-        new DataClientProvider(new Factory()) {
-          @Override
-          public AsyncDataClient getAsyncDataClient(Node node, int timeout) throws IOException {
-            return new AsyncDataClient(null, null, TestUtils.getNode(0), null) {
+    ClusterIoTDB.getInstance()
+        .setClientManager(
+            new IClientManager() {

Review comment:
       done




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952846892


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [365 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.2% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.6%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.6%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.6% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43812240/badge)](https://coveralls.io/builds/43812240)
   
   Coverage decreased (-0.05%) to 67.001% when pulling **e52b6d7c7765bf4f70b8d6392c56903a4b4eccb0 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738243744



##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/async/AsyncDataClientTest.java
##########
@@ -4,85 +4,76 @@
 
 package org.apache.iotdb.cluster.client.async;
 
-import org.apache.iotdb.cluster.client.async.AsyncDataClient.SingleManagerFactory;
-import org.apache.iotdb.cluster.common.TestUtils;
+import org.apache.iotdb.cluster.client.BaseClientTest;
+import org.apache.iotdb.cluster.client.ClientCategory;
 import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
 import org.apache.iotdb.cluster.config.ClusterDescriptor;
-import org.apache.iotdb.cluster.rpc.thrift.Node;
-import org.apache.iotdb.cluster.server.RaftServer;
-
-import org.apache.thrift.TException;
-import org.apache.thrift.async.AsyncMethodCallback;
-import org.apache.thrift.async.TAsyncClientManager;
-import org.apache.thrift.protocol.TBinaryProtocol.Factory;
-import org.apache.thrift.transport.TNonblockingSocket;
-import org.junit.After;
+
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
-import java.io.IOException;
-
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
-public class AsyncDataClientTest {
+public class AsyncDataClientTest extends BaseClientTest {
 
   private final ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
-  private boolean isAsyncServer;
+  private TProtocolFactory protocolFactory;
 
   @Before
   public void setUp() {
-    isAsyncServer = config.isUseAsyncServer();
     config.setUseAsyncServer(true);
+    protocolFactory =
+        config.isRpcThriftCompressionEnabled()
+            ? new TCompactProtocol.Factory()
+            : new TBinaryProtocol.Factory();
   }
 
-  @After
-  public void tearDown() {
-    config.setUseAsyncServer(isAsyncServer);
+  @Test
+  public void testDataClient() throws Exception {
+
+    AsyncDataClient.AsyncDataClientFactory factory =
+        new AsyncDataClient.AsyncDataClientFactory(protocolFactory, ClientCategory.DATA);
+
+    AsyncDataClient dataClient = factory.makeObject(defaultNode).getObject();
+
+    assertEquals(
+        "AsyncDataClient{node=Node(internalIp:localhost, metaPort:9003, nodeIdentifier:0, "
+            + "dataPort:40010, clientPort:0, clientIp:localhost),port=40010}",
+        dataClient.toString());
+    assertCheck(dataClient);
   }
 
   @Test
-  public void test() throws IOException, TException {
-    AsyncClientPool asyncClientPool = new AsyncClientPool(new SingleManagerFactory(new Factory()));
-    AsyncDataClient client;
-    Node node = TestUtils.getNode(0);
-    client =
-        new AsyncDataClient(
-            new Factory(),
-            new TAsyncClientManager(),
-            new TNonblockingSocket(
-                node.getInternalIp(), node.getDataPort(), RaftServer.getConnectionTimeoutInMS()));
-    assertTrue(client.isReady());
-
-    client = (AsyncDataClient) asyncClientPool.getClient(TestUtils.getNode(0));
-
-    assertEquals(TestUtils.getNode(0), client.getNode());
-
-    client.matchTerm(
-        0,
-        0,
-        TestUtils.getRaftNode(0, 0),
-        new AsyncMethodCallback<Boolean>() {
-          @Override
-          public void onComplete(Boolean aBoolean) {
-            // do nothing
-          }
-
-          @Override
-          public void onError(Exception e) {
-            // do nothing
-          }
-        });
-    assertFalse(client.isReady());
-
-    client.onError(new Exception());
-    assertNull(client.getCurrMethod());
-    assertFalse(client.isReady());
+  public void testMetaHeartbeatClient() throws Exception {
+
+    AsyncDataClient.AsyncDataClientFactory factory =
+        new AsyncDataClient.AsyncDataClientFactory(protocolFactory, ClientCategory.DATA_HEARTBEAT);
+
+    AsyncDataClient dataClient = factory.makeObject(defaultNode).getObject();
 
     assertEquals(
-        "DataClient{node=ClusterNode{ internalIp='192.168.0.0', metaPort=9003, nodeIdentifier=0, dataPort=40010, clientPort=6667, clientIp='0.0.0.0'}}",
-        client.toString());
+        "AsyncDataHeartbeatClient{node=Node(internalIp:localhost, metaPort:9003, nodeIdentifier:0, "
+            + "dataPort:40010, clientPort:0, clientIp:localhost),port=40011}",
+        dataClient.toString());
+    assertCheck(dataClient);
+  }

Review comment:
       fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737984178



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/metadata/CMManager.java
##########
@@ -1049,11 +1050,11 @@ public void setCoordinator(Coordinator coordinator) {
           // a non-null result contains correct result even if it is empty, so query next group
           return paths;
         }
-      } catch (IOException | TException e) {
-        throw new MetadataException(e);
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         throw new MetadataException(e);
+      } catch (Exception e) {

Review comment:
       Fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43830182/badge)](https://coveralls.io/builds/43830182)
   
   Coverage decreased (-0.03%) to 67.017% when pulling **ceecd3250b7b6ebcdb2f0c16bcfbc2030795836b on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-960594864


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r740726480



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,685 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  /**
+   * TODO: fix me: better to throw an exception if the client cannot be obtained. Then we can
+   * remove this field.
+   */
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single-thread pool; every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status
+   * of all raft members in this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances that do not belong to any DataGroup or
+   * MetaGroup
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better approach in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);

Review comment:
       The design diagram for the new class structure is shown [here](https://github.com/apache/iotdb/issues/3881). Since multiple dataMembers exist on a node, we abstracted a dataEngine to manage them. But since a node only has one metaMember, I don't think it's necessary to abstract a metaEngine that manages just one metaMember. I have changed all metaEngine references in ClusterIoTDB back to metaMember, so the metaEngine name no longer appears.
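       To summarize the asymmetry in code form, here is a minimal sketch (the two member types are taken from the imports in this diff; the holder class and field names are illustrative, not the PR's actual code):

```java
import org.apache.iotdb.cluster.server.member.MetaGroupMember;
import org.apache.iotdb.cluster.server.service.DataGroupEngine;

/** Illustrative holder showing why only the data side gets an "engine" wrapper. */
public class ClusterNodeLayoutSketch {
  // a node runs exactly one meta member, so it is referenced directly
  private MetaGroupMember metaMember;
  // a node runs many data group members, so they sit behind a single engine
  private DataGroupEngine dataGroupEngine;
}
```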

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,687 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO: better to throw an exception if the client cannot be obtained. Then we can remove this field.
+  private static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single-thread pool; every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status
+   * of all raft members in this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  /** initialize the current node and its services */
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better approach in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+      JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+      return false;
+    }
+    return true;
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the config of iotdb, and set some configs for cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    if (!cluster.initLocalEngines()) {
+      logger.error("initLocalEngines error, stop process!");
+      return;
+    }
+
+    // we start IoTDB kernel first. then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto create schema is taken over by the cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO: duplicate code
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  /** Start as a seed node */
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position as "StorageEngine" in the
+    // cluster module.
+    // TODO: it is better to remove coordinator out of metaGroupEngine
+
+    registerManager.register(metaGroupEngine);
+    registerManager.register(dataGroupEngine);
+
+    // rpc service initialize
+    DataGroupServiceImpls dataGroupServiceImpls = new DataGroupServiceImpls();
+    if (ClusterDescriptor.getInstance().getConfig().isUseAsyncServer()) {
+      MetaAsyncService metaAsyncService = new MetaAsyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      MetaRaftService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      DataRaftService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+    } else {
+      MetaSyncService syncService = new MetaSyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initSyncedServiceImpl(syncService);
+      MetaRaftService.getInstance().initSyncedServiceImpl(syncService);
+      DataRaftService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+    }
+    // start RPC service
+    logger.info("start Meta Heartbeat RPC service... ");
+    registerManager.register(MetaRaftHeartBeatService.getInstance());
+    /* TODO: better to start the Meta RPC service until the heartbeatService has elected the leader and quorum of followers have caught up. */
+    logger.info("start Meta RPC service... ");

Review comment:
       This is basically the same logic as before; maybe we can address the TODO in a future PR. What do you think?
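
For reference, a minimal sketch (not part of this PR) of what resolving that TODO could look like: poll, with a timeout, until the heartbeat service has elected a meta leader, and only then register the Meta RPC service. The leader check is passed in as a BooleanSupplier because the accessor on MetaGroupMember is not shown in this diff; all names below are illustrative assumptions.

import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

/** Sketch only: bounded wait for meta leader election before registering the Meta RPC service. */
public final class MetaLeaderAwait {

  private MetaLeaderAwait() {}

  /**
   * Polls the supplied leader check until it returns true or the timeout expires.
   * Returns true if a leader was observed in time; the caller decides whether a
   * timeout should fail startup or fall back to the current behavior.
   */
  public static boolean awaitLeader(BooleanSupplier leaderElected, long timeoutMs)
      throws InterruptedException {
    long deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (!leaderElected.getAsBoolean()) {
      if (System.nanoTime() >= deadlineNanos) {
        return false;
      }
      // Simple polling; a latch triggered by the election callback would avoid the sleep.
      Thread.sleep(100);
    }
    return true;
  }
}

A call site could then look like awaitLeader(() -> metaGroupEngine.getLeader() != null, 30_000) before registerManager.register(MetaRaftService.getInstance()); getLeader() here is an assumption about the member API, not something this diff defines.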




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43970962/badge)](https://coveralls.io/builds/43970962)
   
   Coverage increased (+0.01%) to 66.97% when pulling **67d8875cdda0032a63c4bbc00a993391ac896fdf on cluster-** into **b05e21c078debcbb020d62ecd6d8a00a932863bd on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43598955/badge)](https://coveralls.io/builds/43598955)
   
   Coverage decreased (-0.3%) to 67.434% when pulling **50fb186faa9f54e94ffe50bd6d9bfec0adc92d78 on cluster-** into **c662a3e86de46aecc56236f0c2b693a2c479f38d on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-948522212


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [365 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.8%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.8%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.8% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-948438296


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [1 Bug](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [368 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.8%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.8%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.8% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43666649/badge)](https://coveralls.io/builds/43666649)
   
   Coverage increased (+0.02%) to 67.275% when pulling **c184107fc9bb42bf84f9cb6c898188f4206f9344 on cluster-** into **1dcc82aad34bfc0820ac28f6a2e70757fef7d219 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-962799582


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [1 Bug](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [197 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.8%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.8%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.8% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.0%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.0%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.0% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-950746251


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [365 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.1% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737199305



##########
File path: server/src/test/java/org/apache/iotdb/db/utils/EnvironmentUtils.java
##########
@@ -241,7 +241,9 @@ public static void cleanAllDir() throws IOException {
   }
 
   public static void cleanDir(String dir) throws IOException {
-    FileUtils.deleteDirectory(new File(dir));
+    synchronized (EnvironmentUtils.class) {
+      FileUtils.deleteDirectory(new File(dir));
+    }
   }

Review comment:
       Has been restored




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43687754/badge)](https://coveralls.io/builds/43687754)
   
   Coverage decreased (-0.02%) to 67.234% when pulling **d14bdeb8cf6ee0eff0e0d458c53701bd7401d988 on cluster-** into **1dcc82aad34bfc0820ac28f6a2e70757fef7d219 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-949246737


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [364 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.2% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-947301528


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [21 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [361 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.8%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.8%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.8% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.3%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.3%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.3% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43626980/badge)](https://coveralls.io/builds/43626980)
   
   Coverage decreased (-0.2%) to 67.259% when pulling **72e0c37cced330d08cfc792c3d845de64543d3da on cluster-** into **e4b7f64deb54b3fc186424cf969a68bff23a6fc7 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-953768117


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [399 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.9%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.9%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.9% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.2% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-956059060


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] mychaow commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
mychaow commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r740668533



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,687 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton mode of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO: better to throw an exception if the client cannot be obtained. Then we can remove this field.
+  private static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses an individual registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool; every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status
+   * of all raft members on this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  /** initialize the current node and its services */
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should be a singleton.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better modification in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+      JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+      return false;
+    }
+    return true;
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see if the node is in a consistent and right state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the config of IoTDB, and set some configs in cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("Meet error when doing the start-up check", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    if (!cluster.initLocalEngines()) {
+      logger.error("initLocalEngines error, stop process!");
+      return;
+    }
+
+    // We start the IoTDB kernel first, then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto create schema is taken over by the cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO: duplicate code
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  /** Start as a seed node */
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position as "StorageEngine" in the
+    // cluster module.
+    // TODO: it is better to remove coordinator out of metaGroupEngine
+
+    registerManager.register(metaGroupEngine);
+    registerManager.register(dataGroupEngine);
+
+    // rpc service initialize
+    DataGroupServiceImpls dataGroupServiceImpls = new DataGroupServiceImpls();
+    if (ClusterDescriptor.getInstance().getConfig().isUseAsyncServer()) {
+      MetaAsyncService metaAsyncService = new MetaAsyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      MetaRaftService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      DataRaftService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+    } else {
+      MetaSyncService syncService = new MetaSyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initSyncedServiceImpl(syncService);
+      MetaRaftService.getInstance().initSyncedServiceImpl(syncService);
+      DataRaftService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+    }
+    // start RPC service
+    logger.info("start Meta Heartbeat RPC service... ");
+    registerManager.register(MetaRaftHeartBeatService.getInstance());
+    /* TODO: better to delay starting the Meta RPC service until the heartbeatService has elected the leader and a quorum of followers have caught up. */
+    logger.info("start Meta RPC service... ");

Review comment:
       Would it be better to wait for a few seconds here?
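       A minimal sketch of such a delayed start, assuming a hypothetical isLeaderElected() check on the meta group member (the real member class may expose this differently):

       private void waitForMetaLeader(long timeoutMs) throws InterruptedException {
         // Sketch only: poll until the meta group reports an elected leader, or give up
         // once timeoutMs has elapsed. isLeaderElected() is an assumed helper, not an existing API.
         long deadline = System.currentTimeMillis() + timeoutMs;
         while (!metaGroupEngine.isLeaderElected() && System.currentTimeMillis() < deadline) {
           Thread.sleep(100);
         }
       }

       The Meta RPC service would then be registered only after this wait returns.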

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,685 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  /**
+   * TODO: fix me: better to throw an exception if the client cannot be obtained. Then we can
+   * remove this field.
+   */
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses its own registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool; every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status
+   * of all raft members in this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better modification in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);

Review comment:
       Why not change the method name?

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,687 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO: better to throw an exception if the client cannot be obtained. Then we can remove this field.
+  private static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses its own registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool; every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status
+   * of all raft members in this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  /** initialize the current node and its services */
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better modification in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+      JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+      return false;
+    }
+    return true;
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This helps to check whether the node is in a consistent and correct state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the config of iotdb, and set some configs for cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("meet error when doing start checking", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    if (!cluster.initLocalEngines()) {
+      logger.error("initLocalEngines error, stop process!");
+      return;
+    }
+
+    // We start the IoTDB kernel first, then we start the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Fail to remove node in cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto create schema is took over by cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("replace hostname with ip failed, %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0 instead of %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
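+    // e.g., replication_num = 3 gives quorum = 3 / 2 + 1 = 2, so at least two seed node URLs are required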
+    int quorum = config.getReplicationNum() / 2 + 1;
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: " + "%s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO: duplicate code
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not repeat each other. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  /** Start as a seed node */
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // some work about cluster
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register service after cluster build
+      postInitCluster();
+      // init ServiceImpl to handle request of client
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Fail to start  server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember has the same position as "StorageEngine" in the
+    // cluster module.
+    // TODO: it is better to remove coordinator out of metaGroupEngine
+
+    registerManager.register(metaGroupEngine);
+    registerManager.register(dataGroupEngine);
+
+    // rpc service initialize
+    DataGroupServiceImpls dataGroupServiceImpls = new DataGroupServiceImpls();
+    if (ClusterDescriptor.getInstance().getConfig().isUseAsyncServer()) {
+      MetaAsyncService metaAsyncService = new MetaAsyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      MetaRaftService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      DataRaftService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+    } else {
+      MetaSyncService syncService = new MetaSyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initSyncedServiceImpl(syncService);
+      MetaRaftService.getInstance().initSyncedServiceImpl(syncService);
+      DataRaftService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+    }
+    // start RPC service
+    logger.info("start Meta Heartbeat RPC service... ");
+    registerManager.register(MetaRaftHeartBeatService.getInstance());
+    /* TODO: better to delay starting the Meta RPC service until the heartbeatService has elected the leader and a quorum of followers have caught up. */
+    logger.info("start Meta RPC service... ");

Review comment:
       ok

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,685 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit the IoTDB instance, as it may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  /**
+   * TODO: fix me: better to throw an exception if the client cannot be obtained. Then we can
+   * remove this field.
+   */
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses its own registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool; every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status
+   * of all raft members in this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way of setting MetaGroupMember in DataGroupEngine may need a better modification in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);

Review comment:
       ok




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738050891



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/metadata/CMManager.java
##########
@@ -1180,38 +1180,40 @@ public void setCoordinator(Coordinator coordinator) {
           }
           return partialPaths;
         }
-      } catch (IOException | TException e) {
-        throw new MetadataException(e);
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         throw new MetadataException(e);
+      } catch (Exception e) {
+        throw new MetadataException(e);

Review comment:
       Fixed all related issues. See the PR: https://github.com/apache/iotdb/pull/4252
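       For reference, a standalone sketch of the interrupt-handling idiom the new catch blocks follow; WrappedException here is a placeholder standing in for MetadataException:

       class WrappedException extends Exception {
         WrappedException(Throwable cause) {
           super(cause);
         }
       }

       class InterruptIdiomExample {
         // Catch InterruptedException first so the interrupt flag can be restored before
         // rethrowing; every other failure falls through to the broad catch and is wrapped.
         static <T> T run(java.util.concurrent.Callable<T> task) throws WrappedException {
           try {
             return task.call();
           } catch (InterruptedException e) {
             Thread.currentThread().interrupt(); // preserve the interrupt status for callers
             throw new WrappedException(e);
           } catch (Exception e) {
             throw new WrappedException(e);
           }
         }
       }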




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-953462064


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [362 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.2% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.6%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.6%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.6% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43830411/badge)](https://coveralls.io/builds/43830411)
   
   Coverage decreased (-0.05%) to 66.997% when pulling **ceecd3250b7b6ebcdb2f0c16bcfbc2030795836b on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737985281



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/metadata/CMManager.java
##########
@@ -1180,38 +1180,40 @@ public void setCoordinator(Coordinator coordinator) {
           }
           return partialPaths;
         }
-      } catch (IOException | TException e) {
-        throw new MetadataException(e);
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         throw new MetadataException(e);
+      } catch (Exception e) {
+        throw new MetadataException(e);

Review comment:
       Fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738896492



##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientManagerTest.java
##########
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ClientManagerTest extends BaseClientTest {
+
+  @Before
+  public void setUp() throws IOException {
+    startDataServer();
+    startMetaServer();
+    startDataHeartbeatServer();
+    startMetaHeartbeatServer();
+  }
+
+  @After
+  public void tearDown() throws IOException, InterruptedException {
+    stopDataServer();
+    stopMetaServer();
+    stopDataHeartbeatServer();
+    stopMetaHeartbeatServer();
+  }
+
+  @Test
+  public void syncClientManagersTest() throws Exception {
+    // ---------Sync cluster clients manager test------------
+    ClientManager clusterManager =
+        new ClientManager(false, ClientManager.Type.RequestForwardClient);
+    RaftService.Client syncClusterClient =
+        clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(syncClusterClient);
+    Assert.assertTrue(syncClusterClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) syncClusterClient).getNode(), defaultNode);
+    Assert.assertTrue(syncClusterClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) syncClusterClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync meta(meta heartbeat) clients manager test------------
+    ClientManager metaManager = new ClientManager(false, ClientManager.Type.MetaGroupClient);
+    RaftService.Client metaClient = metaManager.borrowSyncClient(defaultNode, ClientCategory.META);
+    Assert.assertNotNull(metaClient);
+    Assert.assertTrue(metaClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaClient).getNode(), defaultNode);
+    Assert.assertTrue(metaClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaClient).returnSelf();
+
+    RaftService.Client metaHeartClient =
+        metaManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT);
+    Assert.assertNotNull(metaHeartClient);
+    Assert.assertTrue(metaHeartClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(metaHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync data(data heartbeat) clients manager test------------
+    ClientManager dataManager = new ClientManager(false, ClientManager.Type.DataGroupClient);
+
+    RaftService.Client dataClient = dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+    Assert.assertNotNull(dataClient);
+    Assert.assertTrue(dataClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataClient).getNode(), defaultNode);
+    Assert.assertTrue(dataClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataClient).returnSelf();
+
+    RaftService.Client dataHeartClient =
+        dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT);
+    Assert.assertNotNull(dataHeartClient);
+    Assert.assertTrue(dataHeartClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(dataHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+  }
+
+  @Test
+  public void asyncClientManagersTest() throws Exception {
+    // ---------async cluster clients manager test------------
+    ClientManager clusterManager = new ClientManager(true, ClientManager.Type.RequestForwardClient);
+    RaftService.AsyncClient clusterClient =
+        clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(clusterClient);
+    Assert.assertTrue(clusterClient instanceof AsyncDataClient);
+    Assert.assertEquals(((AsyncDataClient) clusterClient).getNode(), defaultNode);
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isValid());
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isReady());

Review comment:
       Leave comments in `ClientPoolFactoryTest.java`. Thanks.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738887546



##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientManagerTest.java
##########
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ClientManagerTest extends BaseClientTest {
+
+  @Before
+  public void setUp() throws IOException {
+    startDataServer();
+    startMetaServer();
+    startDataHeartbeatServer();
+    startMetaHeartbeatServer();
+  }
+
+  @After
+  public void tearDown() throws IOException, InterruptedException {
+    stopDataServer();
+    stopMetaServer();
+    stopDataHeartbeatServer();
+    stopMetaHeartbeatServer();
+  }
+
+  @Test
+  public void syncClientManagersTest() throws Exception {
+    // ---------Sync cluster clients manager test------------
+    ClientManager clusterManager =
+        new ClientManager(false, ClientManager.Type.RequestForwardClient);
+    RaftService.Client syncClusterClient =
+        clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(syncClusterClient);
+    Assert.assertTrue(syncClusterClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) syncClusterClient).getNode(), defaultNode);
+    Assert.assertTrue(syncClusterClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) syncClusterClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync meta(meta heartbeat) clients manager test------------
+    ClientManager metaManager = new ClientManager(false, ClientManager.Type.MetaGroupClient);
+    RaftService.Client metaClient = metaManager.borrowSyncClient(defaultNode, ClientCategory.META);
+    Assert.assertNotNull(metaClient);
+    Assert.assertTrue(metaClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaClient).getNode(), defaultNode);
+    Assert.assertTrue(metaClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaClient).returnSelf();
+
+    RaftService.Client metaHeartClient =
+        metaManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT);
+    Assert.assertNotNull(metaHeartClient);
+    Assert.assertTrue(metaHeartClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(metaHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync data(data heartbeat) clients manager test------------
+    ClientManager dataManager = new ClientManager(false, ClientManager.Type.DataGroupClient);
+
+    RaftService.Client dataClient = dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+    Assert.assertNotNull(dataClient);
+    Assert.assertTrue(dataClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataClient).getNode(), defaultNode);
+    Assert.assertTrue(dataClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataClient).returnSelf();
+
+    RaftService.Client dataHeartClient =
+        dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT);
+    Assert.assertNotNull(dataHeartClient);
+    Assert.assertTrue(dataHeartClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(dataHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+  }
+
+  @Test
+  public void asyncClientManagersTest() throws Exception {
+    // ---------async cluster clients manager test------------
+    ClientManager clusterManager = new ClientManager(true, ClientManager.Type.RequestForwardClient);
+    RaftService.AsyncClient clusterClient =
+        clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(clusterClient);
+    Assert.assertTrue(clusterClient instanceof AsyncDataClient);
+    Assert.assertEquals(((AsyncDataClient) clusterClient).getNode(), defaultNode);
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isValid());
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isReady());

Review comment:
       Created JIRA issue: https://issues.apache.org/jira/browse/IOTDB-1904




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43906967/badge)](https://coveralls.io/builds/43906967)
   
   Coverage decreased (-0.1%) to 66.901% when pulling **de70ac037bf8178428528d7370c31f48fae84f99 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] neuyilan commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
neuyilan commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r737189033



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/client/ClientCategory.java
##########
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+public enum ClientCategory {
+  META("MetaClient"),
+  META_HEARTBEAT("MetaHeartbeatClient"),
+  DATA("DataClient"),
+  DATA_HEARTBEAT("DataHeartbeatClient"),
+  DATA_ASYNC_APPEND_CLIENT("DataAsyncAppendClient");

Review comment:
       I don't quite understand why a separate connection pool is created for the `appendEntry/appendEntries` calls of the data raft group alone. Maybe I missed something...




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] jt2594838 commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
jt2594838 commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738869147



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/query/fill/ClusterPreviousFill.java
##########
@@ -120,7 +121,9 @@ private TimeValuePair performPreviousFill(
     }
     CountDownLatch latch = new CountDownLatch(partitionGroups.size());
     PreviousFillHandler handler = new PreviousFillHandler(latch);
-
+    // TODO it is not suitable for register and deregister an Object to JMX to such a frequent
+    // function call.
+    // BUT is it suitable to create a thread pool for each calling??
     ExecutorService fillService = Executors.newFixedThreadPool(partitionGroups.size());

Review comment:
       Certainly. I brought it up because I thought the focus of this PR is pool refactoring, so this is also part of it. Please create a related issue for this.
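   A minimal sketch of the alternative being weighed here, assuming the fix is simply to share one pool across calls instead of building a fixed pool (and a JMX registration) on every `performPreviousFill` invocation. The class and field names below are hypothetical illustrations, not the actual IoTDB change:

   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   // Hypothetical illustration: a single pool shared by all previous-fill calls,
   // created (and JMX-registered) once at startup rather than once per query.
   public class PreviousFillExecutorHolder {

     private static final ExecutorService FILL_POOL =
         Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

     public static ExecutorService getPool() {
       return FILL_POOL;
     }
   }
   ```

   With a shared pool, each call would submit one task per partition group and wait on the existing CountDownLatch, instead of creating and shutting down its own ExecutorService.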




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] jt2594838 commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
jt2594838 commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738869581



##########
File path: cluster/src/test/java/org/apache/iotdb/cluster/client/ClientManagerTest.java
##########
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster.client;
+
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.client.sync.SyncMetaClient;
+import org.apache.iotdb.cluster.rpc.thrift.RaftService;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ClientManagerTest extends BaseClientTest {
+
+  @Before
+  public void setUp() throws IOException {
+    startDataServer();
+    startMetaServer();
+    startDataHeartbeatServer();
+    startMetaHeartbeatServer();
+  }
+
+  @After
+  public void tearDown() throws IOException, InterruptedException {
+    stopDataServer();
+    stopMetaServer();
+    stopDataHeartbeatServer();
+    stopMetaHeartbeatServer();
+  }
+
+  @Test
+  public void syncClientManagersTest() throws Exception {
+    // ---------Sync cluster clients manager test------------
+    ClientManager clusterManager =
+        new ClientManager(false, ClientManager.Type.RequestForwardClient);
+    RaftService.Client syncClusterClient =
+        clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(syncClusterClient);
+    Assert.assertTrue(syncClusterClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) syncClusterClient).getNode(), defaultNode);
+    Assert.assertTrue(syncClusterClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) syncClusterClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(clusterManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync meta(meta heartbeat) clients manager test------------
+    ClientManager metaManager = new ClientManager(false, ClientManager.Type.MetaGroupClient);
+    RaftService.Client metaClient = metaManager.borrowSyncClient(defaultNode, ClientCategory.META);
+    Assert.assertNotNull(metaClient);
+    Assert.assertTrue(metaClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaClient).getNode(), defaultNode);
+    Assert.assertTrue(metaClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaClient).returnSelf();
+
+    RaftService.Client metaHeartClient =
+        metaManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT);
+    Assert.assertNotNull(metaHeartClient);
+    Assert.assertTrue(metaHeartClient instanceof SyncMetaClient);
+    Assert.assertEquals(((SyncMetaClient) metaHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(metaHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncMetaClient) metaHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(metaManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    // ---------Sync data(data heartbeat) clients manager test------------
+    ClientManager dataManager = new ClientManager(false, ClientManager.Type.DataGroupClient);
+
+    RaftService.Client dataClient = dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA);
+    Assert.assertNotNull(dataClient);
+    Assert.assertTrue(dataClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataClient).getNode(), defaultNode);
+    Assert.assertTrue(dataClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataClient).returnSelf();
+
+    RaftService.Client dataHeartClient =
+        dataManager.borrowSyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT);
+    Assert.assertNotNull(dataHeartClient);
+    Assert.assertTrue(dataHeartClient instanceof SyncDataClient);
+    Assert.assertEquals(((SyncDataClient) dataHeartClient).getNode(), defaultNode);
+    Assert.assertTrue(dataHeartClient.getInputProtocol().getTransport().isOpen());
+    ((SyncDataClient) dataHeartClient).returnSelf();
+
+    // cluster test
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowSyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.DATA_HEARTBEAT));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META));
+    Assert.assertNull(dataManager.borrowAsyncClient(defaultNode, ClientCategory.META_HEARTBEAT));
+  }
+
+  @Test
+  public void asyncClientManagersTest() throws Exception {
+    // ---------async cluster clients manager test------------
+    ClientManager clusterManager = new ClientManager(true, ClientManager.Type.RequestForwardClient);
+    RaftService.AsyncClient clusterClient =
+        clusterManager.borrowAsyncClient(defaultNode, ClientCategory.DATA);
+
+    Assert.assertNotNull(clusterClient);
+    Assert.assertTrue(clusterClient instanceof AsyncDataClient);
+    Assert.assertEquals(((AsyncDataClient) clusterClient).getNode(), defaultNode);
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isValid());
+    Assert.assertTrue(((AsyncDataClient) clusterClient).isReady());

Review comment:
       You may call some method through the client, but since the socket it connects to is not served by a real server, the call will fail and leave the client broken. Sure, you can leave it as is, but you'd better leave a comment in the test.
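   For reference, the suggested note could look roughly like the sketch below; the wording is only an illustration, not committed test code:

   ```java
   // NOTE: the test servers only accept connections and never answer RPCs.
   // Do not invoke any service method on the borrowed client here, because
   // a real call would fail and leave the pooled client in a broken state.
   Assert.assertTrue(((AsyncDataClient) clusterClient).isReady());
   ```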




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/44081678/badge)](https://coveralls.io/builds/44081678)
   
   Coverage decreased (-0.01%) to 67.097% when pulling **a3aa2792e171aa32a477b1ccedd3070291b7fc08 on cluster-** into **f799e3cb71218a4dda970002d9ca4500651d5f35 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-962403112


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [210 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.7%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.7%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.7% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952134463


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [365 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.3%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.3%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.3% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-952661331


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [366 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.2%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.2%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.2% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.5% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43299939/badge)](https://coveralls.io/builds/43299939)
   
   Coverage increased (+0.009%) to 67.789% when pulling **cc4cb83f7bf636a6bdc13d385abc759adab16085 on cluster-** into **cbbdc6caf51660e5817a4e9c854831d820315b72 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43276028/badge)](https://coveralls.io/builds/43276028)
   
   Coverage increased (+0.02%) to 67.804% when pulling **22e7743302ef533f967e25c1990fc8933dd6b605 on cluster-** into **cbbdc6caf51660e5817a4e9c854831d820315b72 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43330827/badge)](https://coveralls.io/builds/43330827)
   
   Coverage decreased (-0.01%) to 67.767% when pulling **5282fb9ca9c5d11431e1eb90d3abdbd1d71c0554 on cluster-** into **cbbdc6caf51660e5817a4e9c854831d820315b72 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/44077936/badge)](https://coveralls.io/builds/44077936)
   
   Coverage increased (+0.03%) to 67.078% when pulling **25571b881c307adea7204fad64a4d28ad80c0a83 on cluster-** into **5e1f7809dc0ad1e21bc18f53ab0a6b2e2b30091a on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-962949166


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [180 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![41.6%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '41.6%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [41.6% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.0%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.0%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.0% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] commented on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-948393757


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png 'C')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [6 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [361 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.0%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.0%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.0% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.4%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.4%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.4% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43664648/badge)](https://coveralls.io/builds/43664648)
   
   Coverage decreased (-0.01%) to 67.24% when pulling **3574f9fc6437f4a5144239b700ce56e5dc3c1f0b on cluster-** into **1dcc82aad34bfc0820ac28f6a2e70757fef7d219 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43627401/badge)](https://coveralls.io/builds/43627401)
   
   Coverage decreased (-0.2%) to 67.269% when pulling **15cd94c3ce263991c54083f58405a610d9d5a753 on cluster-** into **e4b7f64deb54b3fc186424cf969a68bff23a6fc7 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738887977



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/query/fill/ClusterPreviousFill.java
##########
@@ -120,7 +121,9 @@ private TimeValuePair performPreviousFill(
     }
     CountDownLatch latch = new CountDownLatch(partitionGroups.size());
     PreviousFillHandler handler = new PreviousFillHandler(latch);
-
+    // TODO it is not suitable for register and deregister an Object to JMX to such a frequent
+    // function call.
+    // BUT is it suitable to create a thread pool for each calling??
     ExecutorService fillService = Executors.newFixedThreadPool(partitionGroups.size());

Review comment:
       created JIRA issue: https://issues.apache.org/jira/browse/IOTDB-1904 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] chengjianyun commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
chengjianyun commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r738888636



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/log/snapshot/FileSnapshot.java
##########
@@ -350,7 +350,7 @@ private void removeRemoteHardLink(RemoteTsFileResource resource) {
         try {
           client.removeHardLink(resource.getTsFile().getAbsolutePath());
         } catch (TException te) {
-          client.close();
+          if (client != null) client.close();

Review comment:
       Sure, let me fix all the related code-style issues.
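   Assuming the style issue is the single-statement `if` without braces, the fixed guard would look like this (sketch only):

   ```java
   if (client != null) {
     client.close();
   }
   ```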




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43970962/badge)](https://coveralls.io/builds/43970962)
   
   Coverage increased (+0.01%) to 66.97% when pulling **67d8875cdda0032a63c4bbc00a993391ac896fdf on cluster-** into **b05e21c078debcbb020d62ecd6d8a00a932863bd on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] LebronAl commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
LebronAl commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r740726480



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,685 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  /**
+   * TODO: fix me: better to throw exception if the client can not be get. Then we can remove this
+   * field.
+   */
+  public static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses a individual registerManager with its parent.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * a single thread pool, every "REPORT_INTERVAL_SEC" seconds, "reportThread" will print the status
+   * of all raft members in this node
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by those instances who do not belong to any DataGroup or
+   * MetaGroup
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not init anything here, so that we can re-initialize the instance in IT.
+  }
+
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // from the scope of the DataGroupEngine,it should be singleton pattern
+    // the way of setting MetaGroupMember in DataGroupEngine may need a better modification in
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);

Review comment:
       The design diagram for the new class structure is shown [here](https://github.com/apache/iotdb/issues/3881). Since multiple dataMembers exist on a node, we abstracted a dataEngine to manage them. But since a node has only one metaMember, I don't think it is necessary to abstract a metaEngine that manages a single metaMember. I have changed all metaEngine references in ClusterIoTDB back to metaMember, so the metaEngine name no longer appears.
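   A minimal structural sketch of the relationship described above; the class and field names are placeholders for illustration, not the exact ones in the PR:

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // One engine per node manages every data group the node participates in,
   // while the single meta group member is held directly, with no extra
   // "metaEngine" wrapper around it.
   class DataGroupMemberSketch {}

   class MetaGroupMemberSketch {}

   class DataGroupEngineSketch {
     // keyed by raft group id; one member per data group on this node
     private final Map<String, DataGroupMemberSketch> dataMembers = new HashMap<>();
   }

   class ClusterIoTDBSketch {
     private final MetaGroupMemberSketch metaGroupMember = new MetaGroupMemberSketch();
     private final DataGroupEngineSketch dataGroupEngine = new DataGroupEngineSketch();
   }
   ```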

##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,687 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** we do not inherent IoTDB instance, as it may break the singleton mode of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO: better to throw exception if the client can not be get. Then we can remove this field.
+  private static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses its own registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool; every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status
+   * of all raft members on this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not initialize anything here, so that the instance can be re-initialized in ITs.
+  }
+
+  /** initialize the current node and its services */
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way MetaGroupMember is set in DataGroupEngine may need a better design in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+      JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+      return false;
+    }
+    return true;
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see whether the node is in a consistent and correct state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the IoTDB config, and set some configs for cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("Error during the startup check", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    if (!cluster.initLocalEngines()) {
+      logger.error("initLocalEngines error, stop process!");
+      return;
+    }
+
+    // We start the IoTDB kernel first, then the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Failed to remove the node from the cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto-create schema is taken over by the cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("Replacing hostname with IP failed: %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0, but is %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
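+    // e.g. a replication number of 3 gives quorum = 3 / 2 + 1 = 2 (integer division),
+    // so at least two seed node URLs must be configured before the node will start.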
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: %s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO: duplicate code
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not contain duplicates. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  /** Start as a seed node */
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // do cluster pre-initialization work
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register services after the cluster is built
+      postInitCluster();
+      // init ServiceImpl to handle client requests
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Failed to start server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember plays the same role as "StorageEngine" in the
+    // cluster module.
+    // TODO: it would be better to move the coordinator out of metaGroupEngine
+
+    registerManager.register(metaGroupEngine);
+    registerManager.register(dataGroupEngine);
+
+    // rpc service initialize
+    DataGroupServiceImpls dataGroupServiceImpls = new DataGroupServiceImpls();
+    if (ClusterDescriptor.getInstance().getConfig().isUseAsyncServer()) {
+      MetaAsyncService metaAsyncService = new MetaAsyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      MetaRaftService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      DataRaftService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+    } else {
+      MetaSyncService syncService = new MetaSyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initSyncedServiceImpl(syncService);
+      MetaRaftService.getInstance().initSyncedServiceImpl(syncService);
+      DataRaftService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+    }
+    // start RPC service
+    logger.info("start Meta Heartbeat RPC service... ");
+    registerManager.register(MetaRaftHeartBeatService.getInstance());
+    /* TODO: better not to start the Meta RPC service until the heartbeatService has elected the leader and a quorum of followers has caught up. */
+    logger.info("start Meta RPC service... ");

Review comment:
       This is basically the same logic as before; maybe we can address the remaining TODO in a future PR. What do you think?
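
       For reference, a rough sketch of how that TODO might be addressed (the
       waitUntilLeaderElected()/waitUntilQuorumCaughtUp() helpers are hypothetical and do
       not exist on MetaGroupMember in this PR):

           // Hypothetical sketch: delay exposing the Meta RPC service until the heartbeat
           // service has elected a leader and a quorum of followers has caught up, so that
           // clients never reach a member that cannot serve requests yet.
           registerManager.register(MetaRaftHeartBeatService.getInstance());
           metaGroupEngine.waitUntilLeaderElected();   // assumed helper, not an existing API
           metaGroupEngine.waitUntilQuorumCaughtUp();  // assumed helper, not an existing API
           logger.info("start Meta RPC service... ");
           registerManager.register(MetaRaftService.getInstance());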




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] sonarcloud[bot] removed a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-956028483


   SonarCloud Quality Gate failed.&nbsp; &nbsp; ![Quality Gate failed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/failed-16px.png 'Quality Gate failed')
   
   [![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png 'Bug')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG) [0 Bugs](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=BUG)  
   [![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png 'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=VULNERABILITY)  
   [![Security Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png 'Security Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT) [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=SECURITY_HOTSPOT)  
   [![Code Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png 'Code Smell')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png 'A')](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL) [240 Code Smells](https://sonarcloud.io/project/issues?id=apache_incubator-iotdb&pullRequest=4079&resolved=false&types=CODE_SMELL)
   
   [![42.5%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/40-16px.png '42.5%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list) [42.5% Coverage](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_coverage&view=list)  
   [![2.1%](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/3-16px.png '2.1%')](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list) [2.1% Duplication](https://sonarcloud.io/component_measures?id=apache_incubator-iotdb&pullRequest=4079&metric=new_duplicated_lines_density&view=list)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] mychaow commented on a change in pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
mychaow commented on a change in pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#discussion_r740758059



##########
File path: cluster/src/main/java/org/apache/iotdb/cluster/ClusterIoTDB.java
##########
@@ -0,0 +1,687 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.cluster;
+
+import org.apache.iotdb.cluster.client.ClientCategory;
+import org.apache.iotdb.cluster.client.ClientManager;
+import org.apache.iotdb.cluster.client.IClientManager;
+import org.apache.iotdb.cluster.client.async.AsyncDataClient;
+import org.apache.iotdb.cluster.client.async.AsyncMetaClient;
+import org.apache.iotdb.cluster.client.sync.SyncClientAdaptor;
+import org.apache.iotdb.cluster.client.sync.SyncDataClient;
+import org.apache.iotdb.cluster.config.ClusterConfig;
+import org.apache.iotdb.cluster.config.ClusterConstant;
+import org.apache.iotdb.cluster.config.ClusterDescriptor;
+import org.apache.iotdb.cluster.coordinator.Coordinator;
+import org.apache.iotdb.cluster.exception.ConfigInconsistentException;
+import org.apache.iotdb.cluster.exception.StartUpCheckFailureException;
+import org.apache.iotdb.cluster.metadata.CMManager;
+import org.apache.iotdb.cluster.metadata.MetaPuller;
+import org.apache.iotdb.cluster.partition.slot.SlotPartitionTable;
+import org.apache.iotdb.cluster.partition.slot.SlotStrategy;
+import org.apache.iotdb.cluster.rpc.thrift.Node;
+import org.apache.iotdb.cluster.server.ClusterRPCService;
+import org.apache.iotdb.cluster.server.ClusterTSServiceImpl;
+import org.apache.iotdb.cluster.server.HardLinkCleaner;
+import org.apache.iotdb.cluster.server.Response;
+import org.apache.iotdb.cluster.server.clusterinfo.ClusterInfoServer;
+import org.apache.iotdb.cluster.server.member.MetaGroupMember;
+import org.apache.iotdb.cluster.server.monitor.NodeReport;
+import org.apache.iotdb.cluster.server.raft.DataRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.DataRaftService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftHeartBeatService;
+import org.apache.iotdb.cluster.server.raft.MetaRaftService;
+import org.apache.iotdb.cluster.server.service.DataGroupEngine;
+import org.apache.iotdb.cluster.server.service.DataGroupServiceImpls;
+import org.apache.iotdb.cluster.server.service.MetaAsyncService;
+import org.apache.iotdb.cluster.server.service.MetaSyncService;
+import org.apache.iotdb.cluster.utils.ClusterUtils;
+import org.apache.iotdb.cluster.utils.nodetool.ClusterMonitor;
+import org.apache.iotdb.db.concurrent.IoTDBThreadPoolFactory;
+import org.apache.iotdb.db.conf.IoTDBConfigCheck;
+import org.apache.iotdb.db.conf.IoTDBConstant;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.ConfigurationException;
+import org.apache.iotdb.db.exception.StartupException;
+import org.apache.iotdb.db.exception.query.QueryProcessException;
+import org.apache.iotdb.db.service.IoTDB;
+import org.apache.iotdb.db.service.JMXService;
+import org.apache.iotdb.db.service.RegisterManager;
+import org.apache.iotdb.db.service.thrift.ThriftServiceThread;
+import org.apache.iotdb.db.utils.TestOnly;
+
+import org.apache.thrift.TException;
+import org.apache.thrift.async.TAsyncClientManager;
+import org.apache.thrift.protocol.TBinaryProtocol.Factory;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.iotdb.cluster.config.ClusterConstant.THREAD_POLL_WAIT_TERMINATION_TIME_S;
+import static org.apache.iotdb.cluster.utils.ClusterUtils.UNKNOWN_CLIENT_IP;
+
+/** We do not inherit from IoTDB, as that may break the singleton pattern of IoTDB. */
+public class ClusterIoTDB implements ClusterIoTDBMBean {
+
+  private static final Logger logger = LoggerFactory.getLogger(ClusterIoTDB.class);
+  private final String mbeanName =
+      String.format(
+          "%s:%s=%s", "org.apache.iotdb.cluster.service", IoTDBConstant.JMX_TYPE, "ClusterIoTDB");
+
+  // TODO: better to throw an exception if the client cannot be obtained. Then we can remove this field.
+  private static boolean printClientConnectionErrorStack = false;
+
+  // establish the cluster as a seed
+  private static final String MODE_START = "-s";
+  // join an established cluster
+  private static final String MODE_ADD = "-a";
+  // send a request to remove a node, more arguments: ip-of-removed-node
+  // metaport-of-removed-node
+  private static final String MODE_REMOVE = "-r";
+
+  private MetaGroupMember metaGroupEngine;
+
+  private DataGroupEngine dataGroupEngine;
+
+  private Node thisNode;
+  private Coordinator coordinator;
+
+  private final IoTDB iotdb = IoTDB.getInstance();
+
+  // Cluster IoTDB uses its own registerManager, separate from its parent's.
+  private RegisterManager registerManager = new RegisterManager();
+
+  /**
+   * A single-thread pool; every "REPORT_INTERVAL_SEC" seconds, "reportThread" prints the status
+   * of all raft members on this node.
+   */
+  private ScheduledExecutorService reportThread;
+
+  private boolean allowReport = true;
+
+  /** hardLinkCleaner will periodically clean expired hardlinks created during snapshots. */
+  private ScheduledExecutorService hardLinkCleanerThread;
+
+  /**
+   * The clientManager is only used by instances that do not belong to any DataGroup or
+   * MetaGroup.
+   */
+  private IClientManager clientManager;
+
+  private ClusterIoTDB() {
+    // we do not initialize anything here, so that the instance can be re-initialized in ITs.
+  }
+
+  /** initialize the current node and its services */
+  public boolean initLocalEngines() {
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    thisNode = new Node();
+    // set internal rpc ip and ports
+    thisNode.setInternalIp(config.getInternalIp());
+    thisNode.setMetaPort(config.getInternalMetaPort());
+    thisNode.setDataPort(config.getInternalDataPort());
+    // set client rpc ip and ports
+    thisNode.setClientPort(config.getClusterRpcPort());
+    thisNode.setClientIp(IoTDBDescriptor.getInstance().getConfig().getRpcAddress());
+    coordinator = new Coordinator();
+    // local engine
+    TProtocolFactory protocolFactory =
+        ThriftServiceThread.getProtocolFactory(
+            IoTDBDescriptor.getInstance().getConfig().isRpcThriftCompressionEnable());
+    metaGroupEngine = new MetaGroupMember(protocolFactory, thisNode, coordinator);
+    IoTDB.setClusterMode();
+    IoTDB.setMetaManager(CMManager.getInstance());
+    ((CMManager) IoTDB.metaManager).setMetaGroupMember(metaGroupEngine);
+    ((CMManager) IoTDB.metaManager).setCoordinator(coordinator);
+    MetaPuller.getInstance().init(metaGroupEngine);
+
+    // From the scope of the DataGroupEngine, it should follow the singleton pattern.
+    // The way MetaGroupMember is set in DataGroupEngine may need a better design in a
+    // future commit.
+    DataGroupEngine.setProtocolFactory(protocolFactory);
+    DataGroupEngine.setMetaGroupMember(metaGroupEngine);
+    dataGroupEngine = DataGroupEngine.getInstance();
+    clientManager =
+        new ClientManager(
+            ClusterDescriptor.getInstance().getConfig().isUseAsyncServer(),
+            ClientManager.Type.RequestForwardClient);
+    initTasks();
+    try {
+      // we need to check config after initLocalEngines.
+      startServerCheck();
+      JMXService.registerMBean(metaGroupEngine, metaGroupEngine.getMBeanName());
+    } catch (StartupException e) {
+      logger.error("Failed to check cluster config.", e);
+      stop();
+      return false;
+    }
+    return true;
+  }
+
+  private void initTasks() {
+    reportThread = IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("NodeReportThread");
+    reportThread.scheduleAtFixedRate(
+        this::generateNodeReport,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        ClusterConstant.REPORT_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+    hardLinkCleanerThread =
+        IoTDBThreadPoolFactory.newSingleThreadScheduledExecutor("HardLinkCleaner");
+    hardLinkCleanerThread.scheduleAtFixedRate(
+        new HardLinkCleaner(),
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        ClusterConstant.CLEAN_HARDLINK_INTERVAL_SEC,
+        TimeUnit.SECONDS);
+  }
+
+  /**
+   * Generate a report containing the status of both MetaGroupMember and DataGroupMembers of this
+   * node. This will help to see whether the node is in a consistent and correct state during debugging.
+   */
+  private void generateNodeReport() {
+    if (logger.isDebugEnabled() && allowReport) {
+      try {
+        NodeReport report = new NodeReport(thisNode);
+        report.setMetaMemberReport(metaGroupEngine.genMemberReport());
+        report.setDataMemberReportList(dataGroupEngine.genMemberReports());
+        logger.debug(report.toString());
+      } catch (Exception e) {
+        logger.error("exception occurred when generating node report", e);
+      }
+    }
+  }
+
+  public static void main(String[] args) {
+    if (args.length < 1) {
+      logger.error(
+          "Usage: <-s|-a|-r> "
+              + "[-D{} <configure folder>] \n"
+              + "-s: start the node as a seed\n"
+              + "-a: start the node as a new node\n"
+              + "-r: remove the node out of the cluster\n",
+          IoTDBConstant.IOTDB_CONF);
+      return;
+    }
+
+    ClusterIoTDB cluster = ClusterIoTDBHolder.INSTANCE;
+    // check the IoTDB config, and set some configs for cluster mode
+    try {
+      if (!cluster.serverCheckAndInit()) {
+        return;
+      }
+    } catch (ConfigurationException | IOException e) {
+      logger.error("Error during the startup check", e);
+      return;
+    }
+    String mode = args[0];
+    logger.info("Running mode {}", mode);
+
+    // initialize the current node and its services
+    if (!cluster.initLocalEngines()) {
+      logger.error("initLocalEngines error, stop process!");
+      return;
+    }
+
+    // We start the IoTDB kernel first, then the cluster module.
+    if (MODE_START.equals(mode)) {
+      cluster.activeStartNodeMode();
+    } else if (MODE_ADD.equals(mode)) {
+      cluster.activeAddNodeMode();
+    } else if (MODE_REMOVE.equals(mode)) {
+      try {
+        cluster.doRemoveNode(args);
+      } catch (IOException e) {
+        logger.error("Failed to remove the node from the cluster", e);
+      }
+    } else {
+      logger.error("Unrecognized mode {}", mode);
+    }
+  }
+
+  private boolean serverCheckAndInit() throws ConfigurationException, IOException {
+    IoTDBConfigCheck.getInstance().checkConfig();
+    // init server's configuration first, because the cluster configuration may read settings from
+    // the server's configuration.
+    IoTDBDescriptor.getInstance().getConfig().setSyncEnable(false);
+    // auto-create schema is taken over by the cluster module, so we disable it in the server module.
+    IoTDBDescriptor.getInstance().getConfig().setAutoCreateSchemaEnabled(false);
+    // check cluster config
+    String checkResult = clusterConfigCheck();
+    if (checkResult != null) {
+      logger.error(checkResult);
+      return false;
+    }
+    return true;
+  }
+
+  private String clusterConfigCheck() {
+    try {
+      ClusterDescriptor.getInstance().replaceHostnameWithIp();
+    } catch (Exception e) {
+      return String.format("Replacing hostname with IP failed: %s", e.getMessage());
+    }
+    ClusterConfig config = ClusterDescriptor.getInstance().getConfig();
+    // check the initial replicateNum and refuse to start when the replicateNum <= 0
+    if (config.getReplicationNum() <= 0) {
+      return String.format(
+          "ReplicateNum should be greater than 0, but is %d.", config.getReplicationNum());
+    }
+    // check the initial cluster size and refuse to start when the size < quorum
+    int quorum = config.getReplicationNum() / 2 + 1;
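+    // e.g. a replication number of 3 gives quorum = 3 / 2 + 1 = 2 (integer division),
+    // so at least two seed node URLs must be configured before the node will start.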
+    if (config.getSeedNodeUrls().size() < quorum) {
+      return String.format(
+          "Seed number less than quorum, seed number: %s, quorum: %s.",
+          config.getSeedNodeUrls().size(), quorum);
+    }
+    // TODO: duplicate code
+    Set<Node> seedNodes = new HashSet<>();
+    for (String url : config.getSeedNodeUrls()) {
+      Node node = ClusterUtils.parseNode(url);
+      if (seedNodes.contains(node)) {
+        return String.format(
+            "SeedNodes must not contain duplicates. SeedNodes: %s", config.getSeedNodeUrls());
+      }
+      seedNodes.add(node);
+    }
+    return null;
+  }
+
+  /** Start as a seed node */
+  public void activeStartNodeMode() {
+    try {
+      // start iotdb server first
+      IoTDB.getInstance().active();
+      // do cluster pre-initialization work
+      preInitCluster();
+      // try to build cluster
+      metaGroupEngine.buildCluster();
+      // register services after the cluster is built
+      postInitCluster();
+      // init ServiceImpl to handle client requests
+      startClientRPC();
+    } catch (StartupException
+        | StartUpCheckFailureException
+        | ConfigInconsistentException
+        | QueryProcessException e) {
+      logger.error("Failed to start server", e);
+      stop();
+    }
+  }
+
+  private void preInitCluster() throws StartupException {
+    stopRaftInfoReport();
+    JMXService.registerMBean(this, mbeanName);
+    // register MetaGroupMember. MetaGroupMember plays the same role as "StorageEngine" in the
+    // cluster module.
+    // TODO: it would be better to move the coordinator out of metaGroupEngine
+
+    registerManager.register(metaGroupEngine);
+    registerManager.register(dataGroupEngine);
+
+    // rpc service initialize
+    DataGroupServiceImpls dataGroupServiceImpls = new DataGroupServiceImpls();
+    if (ClusterDescriptor.getInstance().getConfig().isUseAsyncServer()) {
+      MetaAsyncService metaAsyncService = new MetaAsyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      MetaRaftService.getInstance().initAsyncedServiceImpl(metaAsyncService);
+      DataRaftService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initAsyncedServiceImpl(dataGroupServiceImpls);
+    } else {
+      MetaSyncService syncService = new MetaSyncService(metaGroupEngine);
+      MetaRaftHeartBeatService.getInstance().initSyncedServiceImpl(syncService);
+      MetaRaftService.getInstance().initSyncedServiceImpl(syncService);
+      DataRaftService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+      DataRaftHeartBeatService.getInstance().initSyncedServiceImpl(dataGroupServiceImpls);
+    }
+    // start RPC service
+    logger.info("start Meta Heartbeat RPC service... ");
+    registerManager.register(MetaRaftHeartBeatService.getInstance());
+    /* TODO: better not to start the Meta RPC service until the heartbeatService has elected the leader and a quorum of followers has caught up. */
+    logger.info("start Meta RPC service... ");

Review comment:
       ok




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [iotdb] coveralls edited a comment on pull request #4079: [IOTDB-1639] Refactoring the cluster class structure to make it consistent with the server module

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #4079:
URL: https://github.com/apache/iotdb/pull/4079#issuecomment-934266411


   
   [![Coverage Status](https://coveralls.io/builds/43736368/badge)](https://coveralls.io/builds/43736368)
   
   Coverage increased (+0.04%) to 67.088% when pulling **48614f12eb4107e8eab3dab405cf833a6a07fea9 on cluster-** into **955278a8ef82292e8e1d08bef1da6cd083558650 on master**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@iotdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org