Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2021/08/27 20:58:11 UTC

[GitHub] [hadoop] LeonGao91 opened a new pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

LeonGao91 opened a new pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346


   <!--
     Thanks for sending a pull request!
       1. If this is your first time, please read our contributor guidelines: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
       2. Make sure your PR title starts with JIRA issue id, e.g., 'HADOOP-17799. Your PR title ...'.
   -->
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
   
   




[GitHub] [hadoop] LeonGao91 commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
LeonGao91 commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r699903225



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +117,93 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId          Identifier of the nameservice.
+   * @param nnId          Identifier of the namenode in HA.
+   * @param resolvedHost  Resolved hostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(getNnHeartBeatServiceName(nsId, nnId));
+
+    this.resolver = resolver;
+
+    this.nameserviceId = nsId;
+    // Concatenate a unique id from the original nnId and resolvedHost.
+    this.namenodeId = nnId + "-" + resolvedHost;
+    this.resolvedHost = resolvedHost;
+    // Save the original nnId to get the ports from config.
+    this.originalNnId = nnId;
+
+  }
+
   @Override
   protected void serviceInit(Configuration configuration) throws Exception {
 
     this.conf = DFSHAAdmin.addSecurityConfiguration(configuration);
 
     String nnDesc = nameserviceId;
     if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
-      this.localTarget = new NNHAServiceTarget(
-          conf, nameserviceId, namenodeId);
       nnDesc += "-" + namenodeId;
     } else {
       this.localTarget = null;
     }
 
+    if (originalNnId == null) {
+      originalNnId = namenodeId;
+    }
+
     // Get the RPC address for the clients to connect
-    this.rpcAddress = getRpcAddress(conf, nameserviceId, namenodeId);
+    this.rpcAddress = getRpcAddress(conf, nameserviceId, originalNnId);
+    if (resolvedHost != null) {
+      rpcAddress = resolvedHost + ":"
+          + NetUtils.createSocketAddr(rpcAddress).getPort();
+    }
     LOG.info("{} RPC address: {}", nnDesc, rpcAddress);
 
     // Get the Service RPC address for monitoring
     this.serviceAddress =
-        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, originalNnId);
     if (this.serviceAddress == null) {
       LOG.error("Cannot locate RPC service address for NN {}, " +
           "using RPC address {}", nnDesc, this.rpcAddress);
       this.serviceAddress = this.rpcAddress;
     }
+    if (resolvedHost != null) {

Review comment:
       Sure, that will be clearer.
   Maybe I will just do a String split on ":"? The format coming from config should be OK.
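
   For example, a minimal sketch of that approach (assuming a plain host:port value with no IPv6 brackets):

       // Replace the configured host with the DNS-resolved one, keeping the port.
       serviceAddress = resolvedHost + ":" + serviceAddress.split(":")[1];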






[GitHub] [hadoop] LeonGao91 commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
LeonGao91 commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r700598481



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -120,40 +146,59 @@ protected void serviceInit(Configuration configuration) throws Exception {
 
     String nnDesc = nameserviceId;
     if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
-      this.localTarget = new NNHAServiceTarget(
-          conf, nameserviceId, namenodeId);
       nnDesc += "-" + namenodeId;
     } else {
       this.localTarget = null;
     }
 
+    if (originalNnId == null) {
+      originalNnId = namenodeId;
+    }
     // Get the RPC address for the clients to connect
-    this.rpcAddress = getRpcAddress(conf, nameserviceId, namenodeId);
-    LOG.info("{} RPC address: {}", nnDesc, rpcAddress);
+    this.rpcAddress = getRpcAddress(conf, nameserviceId, originalNnId);
 
     // Get the Service RPC address for monitoring
     this.serviceAddress =
-        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, originalNnId);
     if (this.serviceAddress == null) {
       LOG.error("Cannot locate RPC service address for NN {}, " +
           "using RPC address {}", nnDesc, this.rpcAddress);
       this.serviceAddress = this.rpcAddress;
     }
-    LOG.info("{} Service RPC address: {}", nnDesc, serviceAddress);
 
     // Get the Lifeline RPC address for faster monitoring
     this.lifelineAddress =
-        DFSUtil.getNamenodeLifelineAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeLifelineAddr(conf, nameserviceId, originalNnId);
     if (this.lifelineAddress == null) {
       this.lifelineAddress = this.serviceAddress;
     }
-    LOG.info("{} Lifeline RPC address: {}", nnDesc, lifelineAddress);
 
     // Get the Web address for UI
     this.webAddress =
-        DFSUtil.getNamenodeWebAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeWebAddr(conf, nameserviceId, originalNnId);
+
+    if (resolvedHost != null) {
+      // Get the addresses from resolvedHost plus the configured ports.
+      rpcAddress = resolvedHost + ":"
+          + rpcAddress.split(":")[1];

Review comment:
       Sure, I will add a simple method there to parse the host:port string.






[GitHub] [hadoop] goiri commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
goiri commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r698849492



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +116,91 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId          Identifier of the nameservice.
+   * @param nnId          Identifier of the namenode in HA.
+   * @param resolvedHost  Resolved hostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(NamenodeHeartbeatService.class.getSimpleName() +

Review comment:
       We probably want a getNnName() to make this more readable.
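
   A minimal sketch of such a helper, mirroring the expression currently passed to super() (the patch later names it getNnHeartBeatServiceName):

       private static String getNnHeartBeatServiceName(String nsId, String nnId) {
         // Build a readable service name, skipping null parts.
         return NamenodeHeartbeatService.class.getSimpleName() +
             (nsId == null ? "" : " " + nsId) +
             (nnId == null ? "" : " " + nnId);
       }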

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,35 +426,42 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
+      getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, ret, keys);

Review comment:
       I think it is better not to have "ret" as a parameter.
   We should return it and use addAll().
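
   A sketch of that shape (putAll() being the Map counterpart of addAll()):

       Map<String, InetSocketAddress> resolved =
           getResolvedAddressesForNnId(conf, nsId, nnId, dnr, defaultValue, keys);
       ret.putAll(resolved);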

##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterNamenodeHeartbeat.java
##########
@@ -203,4 +210,64 @@ public void testHearbeat() throws InterruptedException, IOException {
     standby = normalNss.get(1);
     assertEquals(NAMENODES[1], standby.getNamenodeId());
   }
+
+  @Test
+  public void testNamenodeHeartbeatServiceNNResolution() {
+    String nsId = "test-ns";
+    String nnId = "nn";
+    String rpcPort = "1000";
+    String servicePort = "1001";
+    String lifelinePort = "1002";
+    String webAddressPort = "1003";
+    Configuration conf = generateNamenodeConfiguration(nsId, nnId,
+        rpcPort, servicePort, lifelinePort, webAddressPort);
+
+    Router testRouter = new Router();
+    testRouter.setConf(conf);
+
+    Collection<NamenodeHeartbeatService> heartbeatServices =
+        testRouter.createNamenodeHeartbeatServices();
+
+    assertEquals(2, heartbeatServices.size());
+
+    Iterator<NamenodeHeartbeatService> iterator = heartbeatServices.iterator();
+    NamenodeHeartbeatService service = iterator.next();
+    service.init(conf);
+    assertEquals("test-ns-nn-host01.test:host01.test:1001",
+        service.getNamenodeDesc());
+
+    service = iterator.next();
+    service.init(conf);
+    assertEquals("test-ns-nn-host02.test:host02.test:1001",
+        service.getNamenodeDesc());
+
+  }
+
+  private Configuration generateNamenodeConfiguration(
+      String nsId, String nnId,
+      String rpcPort, String servicePort,

Review comment:
       Make all the ports ints.
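
   A sketch of the helper with int ports (the config keys follow the standard per-NN HDFS naming; the host value is illustrative only):

       private Configuration generateNamenodeConfiguration(
           String nsId, String nnId, int rpcPort, int servicePort,
           int lifelinePort, int webAddressPort) {
         Configuration conf = new Configuration();
         String suffix = nsId + "." + nnId;
         conf.set("dfs.namenode.rpc-address." + suffix,
             "host.test:" + rpcPort);
         conf.set("dfs.namenode.servicerpc-address." + suffix,
             "host.test:" + servicePort);
         conf.set("dfs.namenode.lifeline.rpc-address." + suffix,
             "host.test:" + lifelinePort);
         conf.set("dfs.namenode.http-address." + suffix,
             "host.test:" + webAddressPort);
         return conf;
       }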

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,35 +426,42 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
+      getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, ret, keys);
+    }
+    return ret;
+  }
+
+  public static void getResolvedAddressesForNnId(
+      Configuration conf, String nsId, String nnId,
+      DomainNameResolver dnr, String defaultValue,
+      Map<String, InetSocketAddress> ret, String... keys) {
+    String suffix = concatSuffixes(nsId, nnId);
+    String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
+    if (address != null) {
+      InetSocketAddress isa = NetUtils.createSocketAddr(address);
+      try {
+        String[] resolvedHostNames = dnr
+            .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
+        int port = isa.getPort();
+        for (String hostname : resolvedHostNames) {
+          InetSocketAddress inetSocketAddress = new InetSocketAddress(
+              hostname, port);
+          // Concat nn info with host info to make uniq ID
+          String concatId;
+          if (nnId == null || nnId.isEmpty()) {
+            concatId = String
+                .join("-", nsId, hostname, String.valueOf(port));
+          } else {
+            concatId = String
+                .join("-", nsId, nnId, hostname, String.valueOf(port));
           }
-        } catch (UnknownHostException e) {
-          LOG.error("Failed to resolve address: " + address);
+          ret.put(concatId, inetSocketAddress);
         }
+      } catch (UnknownHostException e) {
+        LOG.error("Failed to resolve address: " + address);

Review comment:
       Use the logger's {} placeholder now that we are moving this.
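
   That is, roughly (passing the exception as the last argument also logs the stack trace):

       LOG.error("Failed to resolve address: {}", address, e);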

##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +116,91 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId          Identifier of the nameservice.
+   * @param nnId          Identifier of the namenode in HA.
+   * @param resolvedHost  Resolved hostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(NamenodeHeartbeatService.class.getSimpleName() +
+        (nsId == null ? "" : " " + nsId) +
+        (nnId == null ? "" : " " + nnId));
+
+    this.resolver = resolver;
+
+    this.nameserviceId = nsId;
+    // Concatenate a unique id from the original nnId and resolvedHost.
+    this.namenodeId = nnId + "-" + resolvedHost;
+    this.resolvedHost = resolvedHost;
+    // Save the original nnId to get the ports from config.
+    this.originalNnId = nnId;
+
+  }
+
   @Override
   protected void serviceInit(Configuration configuration) throws Exception {
 
     this.conf = DFSHAAdmin.addSecurityConfiguration(configuration);
 
     String nnDesc = nameserviceId;
     if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
-      this.localTarget = new NNHAServiceTarget(
-          conf, nameserviceId, namenodeId);
       nnDesc += "-" + namenodeId;
     } else {
       this.localTarget = null;
     }
 
+    if (originalNnId == null) {
+      originalNnId = namenodeId;
+    }
+
     // Get the RPC address for the clients to connect
-    this.rpcAddress = getRpcAddress(conf, nameserviceId, namenodeId);
+    this.rpcAddress = getRpcAddress(conf, nameserviceId, originalNnId);
+    if (resolvedHost != null) {
+      rpcAddress = resolvedHost + ":" + rpcAddress.split(":")[1];
+    }
     LOG.info("{} RPC address: {}", nnDesc, rpcAddress);
 
     // Get the Service RPC address for monitoring
     this.serviceAddress =
-        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, originalNnId);
     if (this.serviceAddress == null) {
       LOG.error("Cannot locate RPC service address for NN {}, " +
           "using RPC address {}", nnDesc, this.rpcAddress);
       this.serviceAddress = this.rpcAddress;
     }
+    if (resolvedHost != null) {
+      serviceAddress = resolvedHost + ":" + serviceAddress.split(":")[1];
+    }
     LOG.info("{} Service RPC address: {}", nnDesc, serviceAddress);
 
     // Get the Lifeline RPC address for faster monitoring
     this.lifelineAddress =
-        DFSUtil.getNamenodeLifelineAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeLifelineAddr(conf, nameserviceId, originalNnId);
     if (this.lifelineAddress == null) {
       this.lifelineAddress = this.serviceAddress;
     }
+    if (resolvedHost != null) {
+      lifelineAddress = resolvedHost + ":" + lifelineAddress.split(":")[1];
+    }
     LOG.info("{} Lifeline RPC address: {}", nnDesc, lifelineAddress);
 
     // Get the Web address for UI
     this.webAddress =
-        DFSUtil.getNamenodeWebAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeWebAddr(conf, nameserviceId, originalNnId);
+    if (resolvedHost != null) {
+      webAddress = resolvedHost + ":" + webAddress.split(":")[1];

Review comment:
       Extract the port to a variable.

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,35 +426,42 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
+      getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, ret, keys);
+    }
+    return ret;
+  }
+
+  public static void getResolvedAddressesForNnId(
+      Configuration conf, String nsId, String nnId,
+      DomainNameResolver dnr, String defaultValue,
+      Map<String, InetSocketAddress> ret, String... keys) {
+    String suffix = concatSuffixes(nsId, nnId);
+    String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
+    if (address != null) {
+      InetSocketAddress isa = NetUtils.createSocketAddr(address);
+      try {
+        String[] resolvedHostNames = dnr
+            .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
+        int port = isa.getPort();
+        for (String hostname : resolvedHostNames) {
+          InetSocketAddress inetSocketAddress = new InetSocketAddress(
+              hostname, port);
+          // Concat nn info with host info to make uniq ID

Review comment:
       getConcatId() could be a static method.
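
   For example, a static helper extracted from the loop body above:

       private static String getConcatId(String nsId, String nnId,
           String hostname, int port) {
         // Concatenate nn info with host info to make a unique ID.
         return (nnId == null || nnId.isEmpty())
             ? String.join("-", nsId, hostname, String.valueOf(port))
             : String.join("-", nsId, nnId, hostname, String.valueOf(port));
       }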






[GitHub] [hadoop] LeonGao91 commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
LeonGao91 commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r698915014



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
##########
@@ -56,9 +56,25 @@
   private final String nnId;
   private final String nsId;
   private final boolean autoFailoverEnabled;
-  
+
   public NNHAServiceTarget(Configuration conf,
       String nsId, String nnId) {
+    this(conf, nsId, nnId, null, null, null);
+  }
+
+  /**
+   * Create a NNHAServiceTarget for a namenode.
+   *
+   * @param conf          HDFS configuration.
+   * @param nsId          nsId of this nn.
+   * @param nnId          nnId of this nn.
+   * @param serviceAddr   Provided service address.
+   * @param addr          Provided RPC address.
+   * @param lifelineAddr  Provided lifeline address.
+   */
+  public NNHAServiceTarget(Configuration conf,

Review comment:
       Yeah, this will look cleaner. I will need to remove the final keyword from some vars since the assignment moves out of the constructor.







[GitHub] [hadoop] fengnanli merged pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
fengnanli merged pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346


   




[GitHub] [hadoop] fengnanli commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
fengnanli commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-917291163


   Thanks for the contribution from @LeonGao91 and the review from @goiri!




[GitHub] [hadoop] goiri commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
goiri commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r700547922



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +117,93 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId          Identifier of the nameservice.
+   * @param nnId          Identifier of the namenode in HA.
+   * @param resolvedHost  Resolved hostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(getNnHeartBeatServiceName(nsId, nnId));
+
+    this.resolver = resolver;
+
+    this.nameserviceId = nsId;
+    // Concatenate a unique id from the original nnId and resolvedHost.
+    this.namenodeId = nnId + "-" + resolvedHost;
+    this.resolvedHost = resolvedHost;
+    // Save the original nnId to get the ports from config.
+    this.originalNnId = nnId;
+
+  }
+
   @Override
   protected void serviceInit(Configuration configuration) throws Exception {
 
     this.conf = DFSHAAdmin.addSecurityConfiguration(configuration);
 
     String nnDesc = nameserviceId;
     if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
-      this.localTarget = new NNHAServiceTarget(
-          conf, nameserviceId, namenodeId);
       nnDesc += "-" + namenodeId;
     } else {
       this.localTarget = null;
     }
 
+    if (originalNnId == null) {
+      originalNnId = namenodeId;
+    }
+
     // Get the RPC address for the clients to connect
-    this.rpcAddress = getRpcAddress(conf, nameserviceId, namenodeId);
+    this.rpcAddress = getRpcAddress(conf, nameserviceId, originalNnId);
+    if (resolvedHost != null) {
+      rpcAddress = resolvedHost + ":"
+          + NetUtils.createSocketAddr(rpcAddress).getPort();
+    }
     LOG.info("{} RPC address: {}", nnDesc, rpcAddress);
 
     // Get the Service RPC address for monitoring
     this.serviceAddress =
-        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, originalNnId);
     if (this.serviceAddress == null) {
       LOG.error("Cannot locate RPC service address for NN {}, " +
           "using RPC address {}", nnDesc, this.rpcAddress);
       this.serviceAddress = this.rpcAddress;
     }
+    if (resolvedHost != null) {

Review comment:
       There might already be a function that does the split and makes it resilient, without having to create a socket address.
   Take a look at NetUtils to see if there is something.
   If there is not, please create a static function that does the split and adds some checks.
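
   A sketch of such a static helper (the name and checks are illustrative only):

       private static String replaceHost(String address, String newHost) {
         // Keep the port from "host:port" and swap in the resolved host.
         int sep = address.lastIndexOf(':');
         if (sep <= 0 || sep == address.length() - 1) {
           throw new IllegalArgumentException(
               "Expected host:port format but got " + address);
         }
         return newHost + ":" + address.substring(sep + 1);
       }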






[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-912694724


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 4 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 28s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 32s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 40s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 54s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   9m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 26s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 22s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 22s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 52s |  |  root: The patch generated 0 new + 50 unchanged - 1 fixed = 50 total (was 51)  |
   | +1 :green_heart: |  mvnsite  |   5m  0s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   3m 50s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   5m 19s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  11m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 54s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 13s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 37s |  |  hadoop-hdfs-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 345m 45s |  |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  37m 25s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 624m 43s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/10/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 296d15a8f07c 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fc1ebc30c7f0349a3fe9e449e700d9ca01fd80d3 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/10/testReport/ |
   | Max. process+thread count | 3083 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-909081414


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  17m 51s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 12s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 48s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 53s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  12m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 58s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 11s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 45s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 45s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   1m  7s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) |  hadoop-hdfs-project: The patch generated 3 new + 15 unchanged - 1 fixed = 18 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 36s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 57s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-hdfs-client in the patch passed.  |
   | -1 :x: |  unit  | 332m 26s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  unit  |  37m 34s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 511m  4s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSHAAdmin |
   |   | hadoop.hdfs.tools.TestDFSZKFailoverController |
   |   | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux ee7005f7440b 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 78708da4c8215e69fee40fd0eaf4763b08702b22 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/4/testReport/ |
   | Max. process+thread count | 2222 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-912197186


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 4 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 38s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 53s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 31s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 40s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 58s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   9m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 29s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 21s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 24s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 45s |  |  root: The patch generated 0 new + 50 unchanged - 1 fixed = 50 total (was 51)  |
   | +1 :green_heart: |  mvnsite  |   4m 54s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   3m 38s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   5m  0s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  10m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 19s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 52s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-hdfs-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 337m 17s |  |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |   1m  8s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 576m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/9/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 084ae065fefe 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 94f7c80648989e769750edad2cf7f9324eed6987 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/9/testReport/ |
   | Max. process+thread count | 1907 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/9/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] fengnanli commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
fengnanli commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r697812403



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
##########
@@ -56,9 +56,25 @@
   private final String nnId;
   private final String nsId;
   private final boolean autoFailoverEnabled;
-  
+
   public NNHAServiceTarget(Configuration conf,
       String nsId, String nnId) {
+    this(conf, nsId, nnId, null, null, null);
+  }
+
+  /**
+   * Create a NNHAServiceTarget for a namenode.
+   *
+   * @param conf          HDFS configuration.
+   * @param nsId          nsId of this nn.
+   * @param nnId          nnId of this nn.
+   * @param serviceAddr   Provided service address.
+   * @param addr          Provided RPC address.
+   * @param lifelineAddr  Provided lifeline address.
+   */
+  public NNHAServiceTarget(Configuration conf,

Review comment:
       Instead of overloading the constructor like this, can we have the common part extracted as a function and call it inside? Having a lot of null checks for parameters does not feel ideal:
   NNHAServiceTarget(Configuration conf, String nsId, String nnId) {
      // do addrs assignment from config
      .....
      assignAutofailoverAndFencer();
   }
   
   NNHAServiceTarget(Configuration conf, String nsId, String nnId, String serviceAddr, addr, lifelineAddr) {
      // do assignment from param
      ....
      assignAutofailoverAndFencer();
   }
   
   Also please add a simple test for it.
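
   Concretely, the shared tail could look something like this (a sketch; autoFailoverEnabled would need to lose its final modifier, as noted elsewhere in this review):

       private void assignAutofailoverAndFencer(Configuration conf) {
         // Shared tail of both constructors; fencer setup would follow
         // the same pattern.
         this.autoFailoverEnabled = conf.getBoolean(
             DFSConfigKeys.DFS_HA_AUTO_FAILOVER_ENABLED_KEY,
             DFSConfigKeys.DFS_HA_AUTO_FAILOVER_ENABLED_DEFAULT);
       }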






[GitHub] [hadoop] LeonGao91 commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
LeonGao91 commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r698897540



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +116,91 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId          Identifier of the nameservice.
+   * @param nnId          Identifier of the namenode in HA.
+   * @param resolvedHost  Resolved hostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(NamenodeHeartbeatService.class.getSimpleName() +

Review comment:
       Sure, I will make a static method for this since it is used in the super() call.






[GitHub] [hadoop] goiri commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
goiri commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r698849492



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +116,91 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId          Identifier of the nameservice.
+   * @param nnId          Identifier of the namenode in HA.
+   * @param resolvedHost  resolvedHostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(NamenodeHeartbeatService.class.getSimpleName() +

Review comment:
       We probably want a getNnName() to make this more readable.

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,35 +426,42 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
+      getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, ret, keys);

Review comment:
       I think it is better not to have the "ret" as a parameter.
   We should return it and use putAll().
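   
   E.g., a sketch of the suggested shape (returning a fresh map instead of filling a caller-provided one):
   
   public static Map<String, InetSocketAddress> getResolvedAddressesForNnId(
       Configuration conf, String nsId, String nnId,
       DomainNameResolver dnr, String defaultValue, String... keys) {
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     // ... resolve the hostnames and populate ret as above ...
     return ret;
   }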

##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterNamenodeHeartbeat.java
##########
@@ -203,4 +210,64 @@ public void testHearbeat() throws InterruptedException, IOException {
     standby = normalNss.get(1);
     assertEquals(NAMENODES[1], standby.getNamenodeId());
   }
+
+  @Test
+  public void testNamenodeHeartbeatServiceNNResolution() {
+    String nsId = "test-ns";
+    String nnId = "nn";
+    String rpcPort = "1000";
+    String servicePort = "1001";
+    String lifelinePort = "1002";
+    String webAddressPort = "1003";
+    Configuration conf = generateNamenodeConfiguration(nsId, nnId,
+        rpcPort, servicePort, lifelinePort, webAddressPort);
+
+    Router testRouter = new Router();
+    testRouter.setConf(conf);
+
+    Collection<NamenodeHeartbeatService> heartbeatServices =
+        testRouter.createNamenodeHeartbeatServices();
+
+    assertEquals(2, heartbeatServices.size());
+
+    Iterator<NamenodeHeartbeatService> iterator = heartbeatServices.iterator();
+    NamenodeHeartbeatService service = iterator.next();
+    service.init(conf);
+    assertEquals("test-ns-nn-host01.test:host01.test:1001",
+        service.getNamenodeDesc());
+
+    service = iterator.next();
+    service.init(conf);
+    assertEquals("test-ns-nn-host02.test:host02.test:1001",
+        service.getNamenodeDesc());
+
+  }
+
+  private Configuration generateNamenodeConfiguration(
+      String nsId, String nnId,
+      String rpcPort, String servicePort,

Review comment:
       Make all the ports ints.
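   
   I.e., take the ports as ints and build the "host:port" strings only where the config keys are set (a sketch):
   
   private Configuration generateNamenodeConfiguration(
       String nsId, String nnId,
       int rpcPort, int servicePort, int lifelinePort, int webAddressPort) {
     Configuration conf = new HdfsConfiguration();
     // ... set the address keys using host + ":" + rpcPort etc. ...
     return conf;
   }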

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,35 +426,42 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
+      getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, ret, keys);
+    }
+    return ret;
+  }
+
+  public static void getResolvedAddressesForNnId(
+      Configuration conf, String nsId, String nnId,
+      DomainNameResolver dnr, String defaultValue,
+      Map<String, InetSocketAddress> ret, String... keys) {
+    String suffix = concatSuffixes(nsId, nnId);
+    String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
+    if (address != null) {
+      InetSocketAddress isa = NetUtils.createSocketAddr(address);
+      try {
+        String[] resolvedHostNames = dnr
+            .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
+        int port = isa.getPort();
+        for (String hostname : resolvedHostNames) {
+          InetSocketAddress inetSocketAddress = new InetSocketAddress(
+              hostname, port);
+          // Concat nn info with host info to make uniq ID
+          String concatId;
+          if (nnId == null || nnId.isEmpty()) {
+            concatId = String
+                .join("-", nsId, hostname, String.valueOf(port));
+          } else {
+            concatId = String
+                .join("-", nsId, nnId, hostname, String.valueOf(port));
           }
-        } catch (UnknownHostException e) {
-          LOG.error("Failed to resolve address: " + address);
+          ret.put(concatId, inetSocketAddress);
         }
+      } catch (UnknownHostException e) {
+        LOG.error("Failed to resolve address: " + address);

Review comment:
       Use the logger {} placeholder now that we are moving this.
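   
   I.e., the parameterized form:
   
   LOG.error("Failed to resolve address: {}", address);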

##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +116,91 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId          Identifier of the nameservice.
+   * @param nnId          Identifier of the namenode in HA.
+   * @param resolvedHost  resolvedHostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(NamenodeHeartbeatService.class.getSimpleName() +
+        (nsId == null ? "" : " " + nsId) +
+        (nnId == null ? "" : " " + nnId));
+
+    this.resolver = resolver;
+
+    this.nameserviceId = nsId;
+    // Concat a uniq id from original nnId and resolvedHost
+    this.namenodeId = nnId + "-" + resolvedHost;
+    this.resolvedHost = resolvedHost;
+    // Save the original nnId to get the ports from config.
+    this.originalNnId = nnId;
+
+  }
+
   @Override
   protected void serviceInit(Configuration configuration) throws Exception {
 
     this.conf = DFSHAAdmin.addSecurityConfiguration(configuration);
 
     String nnDesc = nameserviceId;
     if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
-      this.localTarget = new NNHAServiceTarget(
-          conf, nameserviceId, namenodeId);
       nnDesc += "-" + namenodeId;
     } else {
       this.localTarget = null;
     }
 
+    if (originalNnId == null) {
+      originalNnId = namenodeId;
+    }
+
     // Get the RPC address for the clients to connect
-    this.rpcAddress = getRpcAddress(conf, nameserviceId, namenodeId);
+    this.rpcAddress = getRpcAddress(conf, nameserviceId, originalNnId);
+    if (resolvedHost != null) {
+      rpcAddress = resolvedHost + ":" + rpcAddress.split(":")[1];
+    }
     LOG.info("{} RPC address: {}", nnDesc, rpcAddress);
 
     // Get the Service RPC address for monitoring
     this.serviceAddress =
-        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, originalNnId);
     if (this.serviceAddress == null) {
       LOG.error("Cannot locate RPC service address for NN {}, " +
           "using RPC address {}", nnDesc, this.rpcAddress);
       this.serviceAddress = this.rpcAddress;
     }
+    if (resolvedHost != null) {
+      serviceAddress = resolvedHost + ":" + serviceAddress.split(":")[1];
+    }
     LOG.info("{} Service RPC address: {}", nnDesc, serviceAddress);
 
     // Get the Lifeline RPC address for faster monitoring
     this.lifelineAddress =
-        DFSUtil.getNamenodeLifelineAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeLifelineAddr(conf, nameserviceId, originalNnId);
     if (this.lifelineAddress == null) {
       this.lifelineAddress = this.serviceAddress;
     }
+    if (resolvedHost != null) {
+      lifelineAddress = resolvedHost + ":" + lifelineAddress.split(":")[1];
+    }
     LOG.info("{} Lifeline RPC address: {}", nnDesc, lifelineAddress);
 
     // Get the Web address for UI
     this.webAddress =
-        DFSUtil.getNamenodeWebAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeWebAddr(conf, nameserviceId, originalNnId);
+    if (resolvedHost != null) {
+      webAddress = resolvedHost + ":" + webAddress.split(":")[1];

Review comment:
       Extract the port into a variable.

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,35 +426,42 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
+      getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, ret, keys);
+    }
+    return ret;
+  }
+
+  public static void getResolvedAddressesForNnId(
+      Configuration conf, String nsId, String nnId,
+      DomainNameResolver dnr, String defaultValue,
+      Map<String, InetSocketAddress> ret, String... keys) {
+    String suffix = concatSuffixes(nsId, nnId);
+    String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
+    if (address != null) {
+      InetSocketAddress isa = NetUtils.createSocketAddr(address);
+      try {
+        String[] resolvedHostNames = dnr
+            .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
+        int port = isa.getPort();
+        for (String hostname : resolvedHostNames) {
+          InetSocketAddress inetSocketAddress = new InetSocketAddress(
+              hostname, port);
+          // Concat nn info with host info to make uniq ID

Review comment:
       getConcatId() could be a static method.
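   
   E.g., a sketch matching the getConcatNnId() that shows up in a later revision of this patch:
   
   static String getConcatNnId(String nsId, String nnId, String hostname, int port) {
     if (nnId == null || nnId.isEmpty()) {
       return String.join("-", nsId, hostname, String.valueOf(port));
     }
     return String.join("-", nsId, nnId, hostname, String.valueOf(port));
   }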






[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-911363763


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 4 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 34s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  5s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 28s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 41s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 56s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   9m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 10s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 51s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 14s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 29s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 51s |  |  root: The patch generated 0 new + 50 unchanged - 1 fixed = 50 total (was 51)  |
   | +1 :green_heart: |  mvnsite  |   4m 53s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   1m  0s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/7/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) |  hadoop-common in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | +1 :green_heart: |  javadoc  |   4m 55s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  10m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 22s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 47s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-hdfs-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 340m 31s |  |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  37m 32s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 616m 10s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/7/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 1ee8609ded00 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 02c5a99d59f09d70c84842b7290ae00405ac76e7 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/7/testReport/ |
   | Max. process+thread count | 2209 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-911496747


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 4 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 48s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 17s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 38s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 53s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   9m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 21s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  5s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 29s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 48s |  |  root: The patch generated 0 new + 50 unchanged - 1 fixed = 50 total (was 51)  |
   | +1 :green_heart: |  mvnsite  |   4m 50s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   1m  0s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/8/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) |  hadoop-common in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | +1 :green_heart: |  javadoc  |   5m  1s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  10m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 37s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 56s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-hdfs-client in the patch passed.  |
   | -1 :x: |  unit  | 361m 21s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  unit  |  32m 46s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 631m 39s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | hadoop.hdfs.TestViewDistributedFileSystemContract |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/8/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 335f35a9b61d 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 844283ca979717450d8134f575ad2dc27fadb660 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/8/testReport/ |
   | Max. process+thread count | 2223 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] goiri commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
goiri commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r700548545



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -120,40 +146,59 @@ protected void serviceInit(Configuration configuration) throws Exception {
 
     String nnDesc = nameserviceId;
     if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
-      this.localTarget = new NNHAServiceTarget(
-          conf, nameserviceId, namenodeId);
       nnDesc += "-" + namenodeId;
     } else {
       this.localTarget = null;
     }
 
+    if (originalNnId == null) {
+      originalNnId = namenodeId;
+    }
     // Get the RPC address for the clients to connect
-    this.rpcAddress = getRpcAddress(conf, nameserviceId, namenodeId);
-    LOG.info("{} RPC address: {}", nnDesc, rpcAddress);
+    this.rpcAddress = getRpcAddress(conf, nameserviceId, originalNnId);
 
     // Get the Service RPC address for monitoring
     this.serviceAddress =
-        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, originalNnId);
     if (this.serviceAddress == null) {
       LOG.error("Cannot locate RPC service address for NN {}, " +
           "using RPC address {}", nnDesc, this.rpcAddress);
       this.serviceAddress = this.rpcAddress;
     }
-    LOG.info("{} Service RPC address: {}", nnDesc, serviceAddress);
 
     // Get the Lifeline RPC address for faster monitoring
     this.lifelineAddress =
-        DFSUtil.getNamenodeLifelineAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeLifelineAddr(conf, nameserviceId, originalNnId);
     if (this.lifelineAddress == null) {
       this.lifelineAddress = this.serviceAddress;
     }
-    LOG.info("{} Lifeline RPC address: {}", nnDesc, lifelineAddress);
 
     // Get the Web address for UI
     this.webAddress =
-        DFSUtil.getNamenodeWebAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeWebAddr(conf, nameserviceId, originalNnId);
+
+    if (resolvedHost != null) {
+      // Get the addresses from resolvedHost plus the configured ports.
+      rpcAddress = resolvedHost + ":"
+          + rpcAddress.split(":")[1];

Review comment:
       Try to use something from NetUtils that doesn't open a socket, and if it's not there, create a method like extractPort() or something.
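   
   A minimal sketch of such a helper, assuming "host:port" formatted addresses (extractPort is the suggested name here, not an existing NetUtils API):
   
   private static int extractPort(String address) {
     // Parse the port straight from the string; no InetSocketAddress,
     // so no resolution cost.
     return Integer.parseInt(address.substring(address.lastIndexOf(':') + 1));
   }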






[GitHub] [hadoop] goiri commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
goiri commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r699774160



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +117,93 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId          Identifier of the nameservice.
+   * @param nnId          Identifier of the namenode in HA.
+   * @param resolvedHost  resolvedHostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(getNnHeartBeatServiceName(nsId, nnId));
+
+    this.resolver = resolver;
+
+    this.nameserviceId = nsId;
+    // Concat a uniq id from original nnId and resolvedHost
+    this.namenodeId = nnId + "-" + resolvedHost;
+    this.resolvedHost = resolvedHost;
+    // Save the original nnId to get the ports from config.
+    this.originalNnId = nnId;
+
+  }
+
   @Override
   protected void serviceInit(Configuration configuration) throws Exception {
 
     this.conf = DFSHAAdmin.addSecurityConfiguration(configuration);
 
     String nnDesc = nameserviceId;
     if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
-      this.localTarget = new NNHAServiceTarget(
-          conf, nameserviceId, namenodeId);
       nnDesc += "-" + namenodeId;
     } else {
       this.localTarget = null;
     }
 
+    if (originalNnId == null) {
+      originalNnId = namenodeId;
+    }
+
     // Get the RPC address for the clients to connect
-    this.rpcAddress = getRpcAddress(conf, nameserviceId, namenodeId);
+    this.rpcAddress = getRpcAddress(conf, nameserviceId, originalNnId);
+    if (resolvedHost != null) {
+      rpcAddress = resolvedHost + ":"
+          + NetUtils.createSocketAddr(rpcAddress).getPort();
+    }
     LOG.info("{} RPC address: {}", nnDesc, rpcAddress);
 
     // Get the Service RPC address for monitoring
     this.serviceAddress =
-        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, originalNnId);
     if (this.serviceAddress == null) {
       LOG.error("Cannot locate RPC service address for NN {}, " +
           "using RPC address {}", nnDesc, this.rpcAddress);
       this.serviceAddress = this.rpcAddress;
     }
+    if (resolvedHost != null) {

Review comment:
       We do the same thing over and over for the lifeline and the others.
   Maybe do all of them in a single shot?
   The way to extract the port might also be expensive to be honest; creating a socket address is usually bad.
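   
   E.g., one block guarded by a single null check (a sketch; replaceHost is a hypothetical helper, not existing code):
   
   if (resolvedHost != null) {
     // Keep the configured ports but swap in the resolved host.
     rpcAddress = replaceHost(rpcAddress, resolvedHost);
     serviceAddress = replaceHost(serviceAddress, resolvedHost);
     lifelineAddress = replaceHost(lifelineAddress, resolvedHost);
     webAddress = replaceHost(webAddress, resolvedHost);
   }
   
   private static String replaceHost(String address, String host) {
     return host + ":" + address.split(":")[1];
   }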

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,37 +426,53 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
-          }
-        } catch (UnknownHostException e) {
-          LOG.error("Failed to resolve address: " + address);
+      ret.putAll(getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, keys));
+    }
+    return ret;
+  }
+
+  public static Map<String, InetSocketAddress> getResolvedAddressesForNnId(
+      Configuration conf, String nsId, String nnId,
+      DomainNameResolver dnr, String defaultValue,
+      String... keys) {
+    String suffix = concatSuffixes(nsId, nnId);
+    String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
+    Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
+    if (address != null) {
+      InetSocketAddress isa = NetUtils.createSocketAddr(address);
+      try {
+        String[] resolvedHostNames = dnr
+            .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
+        int port = isa.getPort();
+        for (String hostname : resolvedHostNames) {
+          InetSocketAddress inetSocketAddress = new InetSocketAddress(
+              hostname, port);
+          // Concat nn info with host info to make uniq ID
+          String concatId = getConcatNnId(nsId, nnId, hostname, port);
+          ret.put(concatId, inetSocketAddress);
         }
+      } catch (UnknownHostException e) {
+        LOG.error("Failed to resolve address: {}", address);
       }
     }
     return ret;
   }
 
+  /**
+   * Concat nn info with host info to make uniq ID.
+   * This is mainly used when configured nn is
+   * a domain record that has multiple hosts behind it.
+   */
+  static String getConcatNnId(String nsId, String nnId, String hostname, int port) {

Review comment:
       Add the args to the javadoc too.
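   
   E.g., one way to write it:
   
   /**
    * Concat nn info with host info to make a unique ID.
    * This is mainly used when the configured nn is
    * a domain record that has multiple hosts behind it.
    *
    * @param nsId     Identifier of the nameservice.
    * @param nnId     Identifier of the namenode; may be null or empty.
    * @param hostname Resolved host name.
    * @param port     Port of the namenode.
    * @return Concatenated unique ID for this namenode.
    */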

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,37 +426,53 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
-          }
-        } catch (UnknownHostException e) {
-          LOG.error("Failed to resolve address: " + address);
+      ret.putAll(getResolvedAddressesForNnId(

Review comment:
       I would extract the output of the method before doing putAll().
   If something breaks, it is easier to debug if it points to the exact line too.
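   
   I.e. (a sketch):
   
   Map<String, InetSocketAddress> resolved = getResolvedAddressesForNnId(
       conf, nsId, nnId, dnr, defaultValue, keys);
   ret.putAll(resolved);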

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,37 +426,53 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
-          }
-        } catch (UnknownHostException e) {
-          LOG.error("Failed to resolve address: " + address);
+      ret.putAll(getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, keys));
+    }
+    return ret;
+  }
+
+  public static Map<String, InetSocketAddress> getResolvedAddressesForNnId(
+      Configuration conf, String nsId, String nnId,
+      DomainNameResolver dnr, String defaultValue,
+      String... keys) {
+    String suffix = concatSuffixes(nsId, nnId);
+    String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
+    Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
+    if (address != null) {
+      InetSocketAddress isa = NetUtils.createSocketAddr(address);
+      try {
+        String[] resolvedHostNames = dnr
+            .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
+        int port = isa.getPort();
+        for (String hostname : resolvedHostNames) {
+          InetSocketAddress inetSocketAddress = new InetSocketAddress(
+              hostname, port);
+          // Concat nn info with host info to make uniq ID
+          String concatId = getConcatNnId(nsId, nnId, hostname, port);
+          ret.put(concatId, inetSocketAddress);
         }
+      } catch (UnknownHostException e) {
+        LOG.error("Failed to resolve address: {}", address);
       }
     }
     return ret;
   }
 
+  /**
+   * Concat nn info with host info to make uniq ID.
+   * This is mainly used when configured nn is
+   * a domain record that has multiple hosts behind it.
+   */
+  static String getConcatNnId(String nsId, String nnId, String hostname, int port) {

Review comment:
       private?






[GitHub] [hadoop] LeonGao91 commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
LeonGao91 commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-917291530


   thanks for the reviews @goiri @fengnanli !




[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-909762383


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 54s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 30s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 50s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  3s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 31s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 44s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   4m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 23s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   1m  4s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/5/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) |  hadoop-hdfs-project: The patch generated 3 new + 15 unchanged - 1 fixed = 18 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m  1s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 11s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-hdfs-client in the patch passed.  |
   | -1 :x: |  unit  | 243m  3s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  unit  |  22m 49s |  |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 390m 28s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux c00376752a94 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4a95e997979f6af4b04d4c0b8ae73108a3edd3e4 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/5/testReport/ |
   | Max. process+thread count | 3638 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] LeonGao91 commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
LeonGao91 commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r698915443



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
##########
@@ -56,9 +56,25 @@
   private final String nnId;
   private final String nsId;
   private final boolean autoFailoverEnabled;
-  
+
   public NNHAServiceTarget(Configuration conf,
       String nsId, String nnId) {
+    this(conf, nsId, nnId, null, null, null);
+  }
+
+  /**
+   * Create an NNHAServiceTarget for a namenode.
+   *
+   * @param conf          HDFS configuration.
+   * @param nsId          nsId of this nn.
+   * @param nnId          nnId of this nn.
+   * @param serviceAddr   Provided service RPC address.
+   * @param addr          Provided RPC address.
+   * @param lifelineAddr  Provided lifeline RPC address.
+   */
+  public NNHAServiceTarget(Configuration conf,

Review comment:
       Added a simple test per comment.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] goiri commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
goiri commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r699774160



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +117,93 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId          Identifier of the nameservice.
+   * @param nnId          Identifier of the namenode in HA.
+   * @param resolvedHost  resolvedHostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(getNnHeartBeatServiceName(nsId, nnId));
+
+    this.resolver = resolver;
+
+    this.nameserviceId = nsId;
+    // Concat a uniq id from original nnId and resolvedHost
+    this.namenodeId = nnId + "-" + resolvedHost;
+    this.resolvedHost = resolvedHost;
+    // Same the original nnid to get the ports from config.
+    this.originalNnId = nnId;
+
+  }
+
   @Override
   protected void serviceInit(Configuration configuration) throws Exception {
 
     this.conf = DFSHAAdmin.addSecurityConfiguration(configuration);
 
     String nnDesc = nameserviceId;
     if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
-      this.localTarget = new NNHAServiceTarget(
-          conf, nameserviceId, namenodeId);
       nnDesc += "-" + namenodeId;
     } else {
       this.localTarget = null;
     }
 
+    if (originalNnId == null) {
+      originalNnId = namenodeId;
+    }
+
     // Get the RPC address for the clients to connect
-    this.rpcAddress = getRpcAddress(conf, nameserviceId, namenodeId);
+    this.rpcAddress = getRpcAddress(conf, nameserviceId, originalNnId);
+    if (resolvedHost != null) {
+      rpcAddress = resolvedHost + ":"
+          + NetUtils.createSocketAddr(rpcAddress).getPort();
+    }
     LOG.info("{} RPC address: {}", nnDesc, rpcAddress);
 
     // Get the Service RPC address for monitoring
     this.serviceAddress =
-        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, originalNnId);
     if (this.serviceAddress == null) {
       LOG.error("Cannot locate RPC service address for NN {}, " +
           "using RPC address {}", nnDesc, this.rpcAddress);
       this.serviceAddress = this.rpcAddress;
     }
+    if (resolvedHost != null) {

Review comment:
       We do the same thing over and over for the lifeline and the others.
   Maybe do all of them in a single shot?
   The way to extract the port might also be expensive to be honest; creating a socket address is usually bad.

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,37 +426,53 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
-          }
-        } catch (UnknownHostException e) {
-          LOG.error("Failed to resolve address: " + address);
+      ret.putAll(getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, keys));
+    }
+    return ret;
+  }
+
+  public static Map<String, InetSocketAddress> getResolvedAddressesForNnId(
+      Configuration conf, String nsId, String nnId,
+      DomainNameResolver dnr, String defaultValue,
+      String... keys) {
+    String suffix = concatSuffixes(nsId, nnId);
+    String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
+    Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
+    if (address != null) {
+      InetSocketAddress isa = NetUtils.createSocketAddr(address);
+      try {
+        String[] resolvedHostNames = dnr
+            .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
+        int port = isa.getPort();
+        for (String hostname : resolvedHostNames) {
+          InetSocketAddress inetSocketAddress = new InetSocketAddress(
+              hostname, port);
+          // Concat nn info with host info to make uniq ID
+          String concatId = getConcatNnId(nsId, nnId, hostname, port);
+          ret.put(concatId, inetSocketAddress);
         }
+      } catch (UnknownHostException e) {
+        LOG.error("Failed to resolve address: {}", address);
       }
     }
     return ret;
   }
 
+  /**
+   * Concat NN info with host info to make a unique ID.
+   * This is mainly used when the configured NN address is
+   * a domain record that has multiple hosts behind it.
+   */
+  static String getConcatNnId(String nsId, String nnId, String hostname, int port) {

Review comment:
       Add the args to the javadoc too.
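
   For instance, something along these lines (param wording is only a suggestion, not the committed javadoc):

   ```java
   /**
    * Concat NN info with host info to make a unique ID.
    * This is mainly used when the configured NN address is a domain
    * record that has multiple hosts behind it.
    *
    * @param nsId     Nameservice ID.
    * @param nnId     Namenode ID; may be null or empty.
    * @param hostname Resolved hostname for this namenode.
    * @param port     Port of the namenode.
    * @return Concatenated unique ID for the resolved namenode.
    */
   ```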

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,37 +426,53 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
-          }
-        } catch (UnknownHostException e) {
-          LOG.error("Failed to resolve address: " + address);
+      ret.putAll(getResolvedAddressesForNnId(

Review comment:
       I would extract the output of the method before doing putAll().
   If something breaks, it is easier to debug if it points to the exact line too.
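
   i.e., a shape like this (sketch only):

   ```java
   // Keep the resolved map on its own line so a failure or stack trace
   // points at the resolution step rather than at putAll().
   Map<String, InetSocketAddress> resolved = getResolvedAddressesForNnId(
       conf, nsId, nnId, dnr, defaultValue, keys);
   ret.putAll(resolved);
   ```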

##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,37 +426,53 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
-          }
-        } catch (UnknownHostException e) {
-          LOG.error("Failed to resolve address: " + address);
+      ret.putAll(getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, keys));
+    }
+    return ret;
+  }
+
+  public static Map<String, InetSocketAddress> getResolvedAddressesForNnId(
+      Configuration conf, String nsId, String nnId,
+      DomainNameResolver dnr, String defaultValue,
+      String... keys) {
+    String suffix = concatSuffixes(nsId, nnId);
+    String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
+    Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
+    if (address != null) {
+      InetSocketAddress isa = NetUtils.createSocketAddr(address);
+      try {
+        String[] resolvedHostNames = dnr
+            .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
+        int port = isa.getPort();
+        for (String hostname : resolvedHostNames) {
+          InetSocketAddress inetSocketAddress = new InetSocketAddress(
+              hostname, port);
+          // Concat NN info with host info to make a unique ID
+          String concatId = getConcatNnId(nsId, nnId, hostname, port);
+          ret.put(concatId, inetSocketAddress);
         }
+      } catch (UnknownHostException e) {
+        LOG.error("Failed to resolve address: {}", address);
       }
     }
     return ret;
   }
 
+  /**
+   * Concat NN info with host info to make a unique ID.
+   * This is mainly used when the configured NN address is
+   * a domain record that has multiple hosts behind it.
+   */
+  static String getConcatNnId(String nsId, String nnId, String hostname, int port) {

Review comment:
       private?






[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-907572887


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 38s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 15s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 49s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 56s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 19s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 16s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 49s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   1m  8s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/1/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) |  hadoop-hdfs-project: The patch generated 1 new + 15 unchanged - 1 fixed = 16 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 57s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 16s |  |  hadoop-hdfs-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 340m 45s |  |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  32m 24s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 503m 59s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.federation.router.TestRBFConfigFields |
   |   | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux d22a9233a11f 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ee19f148cf481afd5f06f191c89724e5bbd86960 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/1/testReport/ |
   | Max. process+thread count | 2213 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] goiri commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
goiri commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r700608519



##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
##########
@@ -765,6 +765,25 @@ public void testTrimCreateSocketAddress() {
     assertEquals(defaultAddr.trim(), NetUtils.getHostPortString(addr));
   }
 
+  @Test
+  public void testGetPortFromHostPortString() {
+
+    assertEquals(1002, NetUtils.getPortFromHostPortString("testHost:1002"));
+    try {
+      NetUtils.getPortFromHostPortString("testHost");
+      fail("Should throw exception for wrong format");

Review comment:
       LambdaTestUtils#intercept does this.
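
   e.g., a sketch of the same check (assuming the malformed input fails with IllegalArgumentException, which is what the try/fail block above implies):

   ```java
   // Requires: import static org.apache.hadoop.test.LambdaTestUtils.intercept;
   // (and the test method declaring `throws Exception`)
   intercept(IllegalArgumentException.class,
       () -> NetUtils.getPortFromHostPortString("testHost"));
   ```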






[GitHub] [hadoop] LeonGao91 commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
LeonGao91 commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r700643861



##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
##########
@@ -765,6 +765,25 @@ public void testTrimCreateSocketAddress() {
     assertEquals(defaultAddr.trim(), NetUtils.getHostPortString(addr));
   }
 
+  @Test
+  public void testGetPortFromHostPortString() {
+
+    assertEquals(1002, NetUtils.getPortFromHostPortString("testHost:1002"));
+    try {
+      NetUtils.getPortFromHostPortString("testHost");
+      fail("Should throw exception for wrong format");

Review comment:
      Thanks for the pointer, it's pretty handy!








[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-910391558


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 33s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 25s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 51s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 51s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  7s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 14s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 44s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   1m  7s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/6/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) |  hadoop-hdfs-project: The patch generated 1 new + 15 unchanged - 1 fixed = 16 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 38s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 43s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 16s |  |  hadoop-hdfs-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 330m 27s |  |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  36m 41s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 497m 57s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux c94215f09482 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ef82f1a5b5f59ab853997f81e1b57a068c009e7a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/6/testReport/ |
   | Max. process+thread count | 2210 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-908382301


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 25s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 14s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 49s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 52s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 53s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 10s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 46s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  hadoop-hdfs-project: The patch generated 0 new + 15 unchanged - 1 fixed = 15 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m  8s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 14s |  |  hadoop-hdfs-client in the patch passed.  |
   | -1 :x: |  unit  | 359m 14s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  32m  6s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 521m 24s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux febf02ee97d1 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 524fb26a1eac7824af7da67094d41fb94433aa44 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/2/testReport/ |
   | Max. process+thread count | 2098 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] LeonGao91 commented on a change in pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
LeonGao91 commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r698897540



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +116,91 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId          Identifier of the nameservice.
+   * @param nnId          Identifier of the namenode in HA.
+   * @param resolvedHost  resolvedHostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(NamenodeHeartbeatService.class.getSimpleName() +

Review comment:
      Sure, will make a static method for this, since the expression is inside the super() call.
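
   A sketch of what that helper could look like (the body and formatting are assumptions; only the name getNnHeartBeatServiceName comes from the patch):

   ```java
   private static String getNnHeartBeatServiceName(String nsId, String nnId) {
     // Build the service name up front so it can be passed to super(...).
     return NamenodeHeartbeatService.class.getSimpleName()
         + (nsId == null ? "" : " " + nsId)
         + (nnId == null ? "" : " " + nnId);
   }
   ```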

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
##########
@@ -56,9 +56,25 @@
   private final String nnId;
   private final String nsId;
   private final boolean autoFailoverEnabled;
-  
+
   public NNHAServiceTarget(Configuration conf,
       String nsId, String nnId) {
+    this(conf, nsId, nnId, null, null, null);
+  }
+
+  /**
+   * Create a NNHAServiceTarget for a namenode.
+   *
+   * @param conf          HDFS configuration.
+   * @param nsId          nsId of this nn.
+   * @param nnId          nnId of this nn.
+   * @param serviceAddr   Provided service address.
+   * @param addr          Provided RPC address.
+   * @param lifelineAddr  Provided lifeline address.
+   */
+  public NNHAServiceTarget(Configuration conf,

Review comment:
      Yeah, this will look cleaner. I will need to remove the final keyword from some vars, since the assignment moves out of the constructor.
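
   Illustration of that trade-off (hypothetical names, not the actual NNHAServiceTarget fields):

   ```java
   // Once a shared init helper performs the assignment instead of the
   // constructor body itself, javac no longer allows the field to be final.
   class HaTargetSketch {
     private String serviceAddr;  // was: private final String serviceAddr;

     HaTargetSketch(String nsId, String nnId, String serviceAddr) {
       initAddresses(serviceAddr);
     }

     private void initAddresses(String serviceAddr) {
       this.serviceAddr = serviceAddr;
     }
   }
   ```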

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
##########
@@ -56,9 +56,25 @@
   private final String nnId;
   private final String nsId;
   private final boolean autoFailoverEnabled;
-  
+
   public NNHAServiceTarget(Configuration conf,
       String nsId, String nnId) {
+    this(conf, nsId, nnId, null, null, null);
+  }
+
+  /**
+   * Create a NNHAServiceTarget for a namenode.
+   *
+   * @param conf          HDFS configuration.
+   * @param nsId          nsId of this nn.
+   * @param nnId          nnId of this nn.
+   * @param serviceAddr   Provided service address.
+   * @param addr          Provided RPC address.
+   * @param lifelineAddr  Provided lifeline address.
+   */
+  public NNHAServiceTarget(Configuration conf,

Review comment:
       Added a simple test per comment.






[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-908343565


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 48s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m 10s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   6m 44s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m  8s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 35s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   9m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  8s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 50s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   6m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 21s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   6m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 30s |  |  hadoop-hdfs-project: The patch generated 0 new + 15 unchanged - 1 fixed = 15 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 25s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   9m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 27s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  2s |  |  hadoop-hdfs-client in the patch passed.  |
   | -1 :x: |  unit  | 262m 26s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  22m 25s |  |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 442m 44s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.TestDFSStorageStateRecovery |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 13c7e2376ade 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bf30b1ed03519580cfa8f19a142fa7a17f232d7c |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/3/testReport/ |
   | Max. process+thread count | 2830 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

