Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/05/06 01:07:06 UTC

[GitHub] [hadoop] tomscut commented on a diff in pull request #4264: HDFS-16568. dfsadmin -reconfig option to start/query reconfig on all live datanodes

tomscut commented on code in PR #4264:
URL: https://github.com/apache/hadoop/pull/4264#discussion_r866422527


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:
##########
@@ -1915,12 +1919,57 @@ public int reconfig(String[] argv, int i) throws IOException {
   }
 
   int startReconfiguration(final String nodeThpe, final String address)
-      throws IOException {
-    return startReconfiguration(nodeThpe, address, System.out, System.err);
+      throws IOException, InterruptedException {
+    return startReconfigurationUtil(nodeThpe, address, System.out, System.err);
+  }
+
+  int startReconfigurationUtil(final String nodeType, final String address, final PrintStream out,
+      final PrintStream err) throws IOException, InterruptedException {
+    if (!"livenodes".equals(address)) {
+      return startReconfiguration(nodeType, address, out, err);
+    }
+    if (!"datanode".equals(nodeType)) {
+      err.println("Only datanode type supports reconfiguration in bulk.");
+      return 1;
+    }
+    ExecutorService executorService = Executors.newFixedThreadPool(5);
+    DistributedFileSystem dfs = getDFS();
+    DatanodeInfo[] nodes = dfs.getDataNodeStats(DatanodeReportType.LIVE);
+    AtomicInteger successCount = new AtomicInteger();
+    AtomicInteger failCount = new AtomicInteger();
+    if (nodes != null) {
+      for (DatanodeInfo node : nodes) {
+        executorService.submit(() -> {
+          int status = startReconfiguration(nodeType, node.getIpcAddr(false), out, err);
+          if (status == 0) {
+            successCount.incrementAndGet();
+          } else {
+            failCount.incrementAndGet();
+          }
+        });
+      }
+      while ((successCount.get() + failCount.get()) < nodes.length) {

Review Comment:
   This feature makes sense to me. It looks good overall.
   
   Maybe it's better to use a CountDownLatch here instead of polling the counters in a busy-wait loop.
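   
   For illustration only, a minimal sketch of how the polling loop could
   be replaced with a CountDownLatch. It reuses the names from the diff
   above (executorService, nodes, nodeType, successCount, failCount, out,
   err) and assumes the enclosing method keeps the declared
   throws InterruptedException; treat it as a sketch, not the exact change:
   
       // requires: import java.util.concurrent.CountDownLatch;
       // One count per live datanode; the main thread blocks in await()
       // until every submitted task has called countDown().
       final CountDownLatch latch = new CountDownLatch(nodes.length);
       for (DatanodeInfo node : nodes) {
         executorService.submit(() -> {
           try {
             int status = startReconfiguration(nodeType, node.getIpcAddr(false), out, err);
             if (status == 0) {
               successCount.incrementAndGet();
             } else {
               failCount.incrementAndGet();
             }
           } finally {
             // Count down even if the task fails, so await() cannot hang.
             latch.countDown();
           }
         });
       }
       latch.await();            // replaces the while-loop polling on the counters
       executorService.shutdown();
   
   The try/finally keeps the latch accurate when a task fails, which the
   counter-polling version would also need to guard against.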



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

