Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/04/19 07:56:31 UTC

[GitHub] [hadoop] kokonguyen191 opened a new pull request, #4199: HDFS-14750. Support dynamic handler allocation in routers

kokonguyen191 opened a new pull request, #4199:
URL: https://github.com/apache/hadoop/pull/4199

   ### Description of PR
   Add a `DynamicRouterRpcFairnessPolicyController` class that resizes permit capacity periodically based on traffic to namespaces.
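   
   As an illustrative aside (not part of the original PR text), a minimal sketch of how the controller could be wired up, assuming the existing RBF fairness prefix resolves to `dfs.federation.router.fairness.` and reusing the refresh-interval key introduced by this patch; the wrapper class name is made up:
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.hdfs.HdfsConfiguration;
   
   public class DynamicFairnessConfigExample {
     public static Configuration build() {
       Configuration conf = new HdfsConfiguration();
       // Assumed key of the existing RBF fairness controller-class setting.
       conf.set("dfs.federation.router.fairness.policy.controller.class",
           "org.apache.hadoop.hdfs.server.federation.fairness."
               + "DynamicRouterRpcFairnessPolicyController");
       // Refresh-interval key added by this patch; getTimeDuration accepts suffixes such as "10m".
       conf.set("dfs.federation.router.fairness.policy.controller.dynamic.refresh.interval", "10m");
       return conf;
     }
   }
   ```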
   
   ### How was this patch tested?
   Unit tests and local deployment.
   
   ### For code changes:
   - [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] hadoop-yetus commented on pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1120674224

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 47s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 22s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 9 new + 1 unchanged - 0 fixed = 10 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 38s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) |  hadoop-hdfs-rbf in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 34s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  30m 44s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 135m 49s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.federation.security.TestRouterSecurityManager |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4199 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 7e1ed08abaf8 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0f020905d0189fd612b81f9463856ae595c570ec |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/2/testReport/ |
   | Max. process+thread count | 2225 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] ferhui commented on pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
ferhui commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1112074429

   @kokonguyen191 Thanks. It's a good feature.
   @fengnanli could you please review this PR?




[GitHub] [hadoop] hadoop-yetus commented on pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1122125318

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  9s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 27s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   1m 37s | [/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/4/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html) |  hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  22m 14s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  23m 34s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 130m 28s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
   |  |  org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_FAIR_MINIMUM_HANDLER_COUNT_DEFAULT isn't final but should be.  At RBFConfigKeys.java:[line 359] |
   | Failed junit tests | hadoop.hdfs.server.federation.security.TestRouterSecurityManager |
   |   | hadoop.hdfs.server.federation.router.TestRBFConfigFields |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4199 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 4da96cee138f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0f83aa37fc8e24752323ffb701fac94964e4c096 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/4/testReport/ |
   | Max. process+thread count | 2753 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] ferhui commented on a diff in pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
ferhui commented on code in PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#discussion_r867328607


##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java:
##########
@@ -354,6 +354,10 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
       NoRouterRpcFairnessPolicyController.class;
   public static final String DFS_ROUTER_FAIR_HANDLER_COUNT_KEY_PREFIX =
       FEDERATION_ROUTER_FAIRNESS_PREFIX + "handler.count.";
+  public static final long DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_DEFAULT =
+      600000;
+  public static final String DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_KEY =
+      FEDERATION_ROUTER_FAIRNESS_PREFIX + "policy.controller.dynamic.refresh.interval";

Review Comment:
   Would it be better for the time unit to be seconds?
   Also, this configuration should be added to hdfs-rbf-default.xml.
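   
   As a hedged sketch of the seconds-based reading (editorial illustration, not the patch's code; it reuses the patch's key constant and assumes a 600-second default to match the current 600000 ms):
   
   ```java
   // Read the interval with seconds as the default unit, then convert to milliseconds
   // for scheduleWithFixedDelay. Suffixed values such as "600s" or "10m" are still
   // honoured by Configuration.getTimeDuration.
   long refreshIntervalSec = conf.getTimeDuration(
       DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_KEY,
       600L,                // assumed default of 600 seconds
       TimeUnit.SECONDS);
   long refreshIntervalMs = TimeUnit.SECONDS.toMillis(refreshIntervalSec);
   ```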





[GitHub] [hadoop] ferhui commented on pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
ferhui commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1120142853

   It seems that we cannot know how many handlers are assigned to each nameservice after resizing. It would be better to update the configuration, right?
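   
   A minimal sketch of one way to surface the allocation after each resize (illustrative only; it assumes the patch's permitSizes map holds the per-nameservice caps as integers and that the class logger is available):
   
   ```java
   // Log the new per-nameservice permit caps after resizing so operators can see
   // the current allocation without having to re-read or regenerate configuration.
   for (Map.Entry<String, Integer> e : permitSizes.entrySet()) {
     LOG.info("Dynamic fairness resize: nameservice {} now has {} handlers",
         e.getKey(), e.getValue());
   }
   ```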




[GitHub] [hadoop] ferhui commented on a diff in pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
ferhui commented on code in PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#discussion_r866622590


##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/fairness/DynamicRouterRpcFairnessPolicyController.java:
##########
@@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.fairness;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.federation.utils.AdjustableSemaphore;
+import org.apache.hadoop.util.concurrent.HadoopExecutors;
+
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_DEFAULT;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_DEFAULT;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY;
+
+/**
+ * Dynamic fairness policy extending {@link StaticRouterRpcFairnessPolicyController}
+ * and fetching handlers from configuration for all available name services.
+ * The handler counts change according to the traffic to each namespace.
+ * Total handlers might NOT strictly add up to the value defined by DFS_ROUTER_HANDLER_COUNT_KEY.
+ */
+public class DynamicRouterRpcFairnessPolicyController
+    extends StaticRouterRpcFairnessPolicyController {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(DynamicRouterRpcFairnessPolicyController.class);
+
+  private static final ScheduledExecutorService scheduledExecutor = HadoopExecutors
+      .newSingleThreadScheduledExecutor(new ThreadFactoryBuilder().setDaemon(true)
+          .setNameFormat("DynamicRouterRpcFairnessPolicyControllerPermitsResizer").build());
+  private PermitsResizerService permitsResizerService;
+  private ScheduledFuture<?> refreshTask;
+  private int handlerCount;
+
+  /**
+   * Initializes using the same logic as {@link StaticRouterRpcFairnessPolicyController}
+   * and starts a periodic semaphore resizer thread
+   *
+   * @param conf configuration
+   */
+  public DynamicRouterRpcFairnessPolicyController(Configuration conf) {
+    super(conf);
+    handlerCount = conf.getInt(DFS_ROUTER_HANDLER_COUNT_KEY, DFS_ROUTER_HANDLER_COUNT_DEFAULT);
+    long refreshInterval =
+        conf.getTimeDuration(DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_KEY,
+            DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_DEFAULT, TimeUnit.MILLISECONDS);
+    permitsResizerService = new PermitsResizerService();
+    refreshTask = scheduledExecutor
+        .scheduleWithFixedDelay(permitsResizerService, refreshInterval, refreshInterval,
+            TimeUnit.MILLISECONDS);
+  }
+
+  @VisibleForTesting
+  public DynamicRouterRpcFairnessPolicyController(Configuration conf, long refreshInterval) {
+    super(conf);
+    handlerCount = conf.getInt(DFS_ROUTER_HANDLER_COUNT_KEY, DFS_ROUTER_HANDLER_COUNT_DEFAULT);
+    permitsResizerService = new PermitsResizerService();
+    refreshTask = scheduledExecutor
+        .scheduleWithFixedDelay(permitsResizerService, refreshInterval, refreshInterval,
+            TimeUnit.MILLISECONDS);
+  }
+
+  @VisibleForTesting
+  public void refreshPermitsCap() {
+    permitsResizerService.run();
+  }
+
+  @Override
+  public void shutdown() {
+    super.shutdown();
+    if (refreshTask != null) {
+      refreshTask.cancel(true);
+    }
+    if (scheduledExecutor != null) {
+      scheduledExecutor.shutdown();
+    }
+  }
+
+  class PermitsResizerService implements Runnable {
+
+    @Override
+    public synchronized void run() {
+      long totalOps = 0;
+      Map<String, Long> nsOps = new HashMap<>();
+      for (Map.Entry<String, AdjustableSemaphore> entry : permits.entrySet()) {
+        long ops = (rejectedPermitsPerNs.containsKey(entry.getKey()) ?
+            rejectedPermitsPerNs.get(entry.getKey()).longValue() :
+            0) + (acceptedPermitsPerNs.containsKey(entry.getKey()) ?
+            acceptedPermitsPerNs.get(entry.getKey()).longValue() :
+            0);
+        nsOps.put(entry.getKey(), ops);
+        totalOps += ops;
+      }
+
+      for (Map.Entry<String, AdjustableSemaphore> entry : permits.entrySet()) {
+        String ns = entry.getKey();
+        AdjustableSemaphore semaphore = entry.getValue();
+        int oldPermitCap = permitSizes.get(ns);
+        int newPermitCap = (int) Math.ceil((float) nsOps.get(ns) / totalOps * handlerCount);
+        // Leave at least 1 handler even if there's no traffic
+        if (newPermitCap == 0) {
+          newPermitCap = 1;

Review Comment:
   How about adding a new configuration for this,
   e.g. xxx.min.handler.count?
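   
   A hedged sketch of that suggestion (the key name and default below are placeholders, not from the patch); the later SpotBugs report's mention of DFS_ROUTER_FAIR_MINIMUM_HANDLER_COUNT_DEFAULT suggests a floor along these lines was added in a subsequent revision:
   
   ```java
   // Placeholder key/default for a configurable floor instead of the hard-coded 1.
   int minHandlerCount = conf.getInt(
       FEDERATION_ROUTER_FAIRNESS_PREFIX + "min.handler.count", 1);
   // Leave at least the configured minimum even if a namespace has no traffic.
   if (newPermitCap < minHandlerCount) {
     newPermitCap = minHandlerCount;
   }
   ```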



##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/fairness/DynamicRouterRpcFairnessPolicyController.java:
##########
@@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.fairness;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.federation.utils.AdjustableSemaphore;
+import org.apache.hadoop.util.concurrent.HadoopExecutors;
+
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_DEFAULT;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_DEFAULT;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY;
+
+/**
+ * Dynamic fairness policy extending {@link StaticRouterRpcFairnessPolicyController}
+ * and fetching handlers from configuration for all available name services.
+ * The handler counts change according to the traffic to each namespace.
+ * Total handlers might NOT strictly add up to the value defined by DFS_ROUTER_HANDLER_COUNT_KEY.
+ */
+public class DynamicRouterRpcFairnessPolicyController
+    extends StaticRouterRpcFairnessPolicyController {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(DynamicRouterRpcFairnessPolicyController.class);
+
+  private static final ScheduledExecutorService scheduledExecutor = HadoopExecutors
+      .newSingleThreadScheduledExecutor(new ThreadFactoryBuilder().setDaemon(true)
+          .setNameFormat("DynamicRouterRpcFairnessPolicyControllerPermitsResizer").build());
+  private PermitsResizerService permitsResizerService;
+  private ScheduledFuture<?> refreshTask;
+  private int handlerCount;
+
+  /**
+   * Initializes using the same logic as {@link StaticRouterRpcFairnessPolicyController}
+   * and starts a periodic semaphore resizer thread
+   *
+   * @param conf configuration
+   */
+  public DynamicRouterRpcFairnessPolicyController(Configuration conf) {
+    super(conf);
+    handlerCount = conf.getInt(DFS_ROUTER_HANDLER_COUNT_KEY, DFS_ROUTER_HANDLER_COUNT_DEFAULT);
+    long refreshInterval =
+        conf.getTimeDuration(DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_KEY,
+            DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_DEFAULT, TimeUnit.MILLISECONDS);
+    permitsResizerService = new PermitsResizerService();
+    refreshTask = scheduledExecutor
+        .scheduleWithFixedDelay(permitsResizerService, refreshInterval, refreshInterval,
+            TimeUnit.MILLISECONDS);
+  }
+
+  @VisibleForTesting
+  public DynamicRouterRpcFairnessPolicyController(Configuration conf, long refreshInterval) {
+    super(conf);
+    handlerCount = conf.getInt(DFS_ROUTER_HANDLER_COUNT_KEY, DFS_ROUTER_HANDLER_COUNT_DEFAULT);
+    permitsResizerService = new PermitsResizerService();
+    refreshTask = scheduledExecutor
+        .scheduleWithFixedDelay(permitsResizerService, refreshInterval, refreshInterval,
+            TimeUnit.MILLISECONDS);
+  }
+
+  @VisibleForTesting
+  public void refreshPermitsCap() {
+    permitsResizerService.run();
+  }
+
+  @Override
+  public void shutdown() {
+    super.shutdown();
+    if (refreshTask != null) {
+      refreshTask.cancel(true);
+    }
+    if (scheduledExecutor != null) {
+      scheduledExecutor.shutdown();
+    }
+  }
+
+  class PermitsResizerService implements Runnable {
+
+    @Override
+    public synchronized void run() {
+      long totalOps = 0;
+      Map<String, Long> nsOps = new HashMap<>();
+      for (Map.Entry<String, AdjustableSemaphore> entry : permits.entrySet()) {
+        long ops = (rejectedPermitsPerNs.containsKey(entry.getKey()) ?
+            rejectedPermitsPerNs.get(entry.getKey()).longValue() :
+            0) + (acceptedPermitsPerNs.containsKey(entry.getKey()) ?
+            acceptedPermitsPerNs.get(entry.getKey()).longValue() :
+            0);
+        nsOps.put(entry.getKey(), ops);
+        totalOps += ops;
+      }
+
+      for (Map.Entry<String, AdjustableSemaphore> entry : permits.entrySet()) {

Review Comment:
   It seems that handlerCount may not equal the total newPermitCap, right?



##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/fairness/TestDynamicRouterRpcFairnessPolicyController.java:
##########
@@ -0,0 +1,177 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.fairness;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.LongAdder;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
+
+import static org.apache.hadoop.hdfs.server.federation.fairness.RouterRpcFairnessConstants.CONCURRENT_NS;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_MONITOR_NAMENODE;
+
+/**
+ * Test functionality of {@link DynamicRouterRpcFairnessPolicyController}.
+ */
+public class TestDynamicRouterRpcFairnessPolicyController {
+
+  private static String nameServices = "ns1.nn1, ns1.nn2, ns2.nn1, ns2.nn2";
+
+  @Test
+  public void testDynamicControllerSimple() throws InterruptedException {
+    verifyDynamicControllerSimple(true);
+    verifyDynamicControllerSimple(false);
+  }
+
+  @Test
+  public void testDynamicControllerAllPermitsAcquired() throws InterruptedException {
+    verifyDynamicControllerAllPermitsAcquired(true);
+    verifyDynamicControllerAllPermitsAcquired(false);
+  }
+
+  private void verifyDynamicControllerSimple(boolean manualRefresh)
+      throws InterruptedException {
+    // 3 permits each ns
+    DynamicRouterRpcFairnessPolicyController controller;
+    if (manualRefresh) {
+      controller = getFairnessPolicyController(9);
+    } else {
+      controller = getFairnessPolicyController(9, 4000);
+    }
+    for (int i = 0; i < 3; i++) {
+      Assert.assertTrue(controller.acquirePermit("ns1"));
+      Assert.assertTrue(controller.acquirePermit("ns2"));
+      Assert.assertTrue(controller.acquirePermit(CONCURRENT_NS));
+    }
+    Assert.assertFalse(controller.acquirePermit("ns1"));
+    Assert.assertFalse(controller.acquirePermit("ns2"));
+    Assert.assertFalse(controller.acquirePermit(CONCURRENT_NS));
+
+    // Release all permits
+    for (int i = 0; i < 3; i++) {
+      controller.releasePermit("ns1");
+      controller.releasePermit("ns2");
+      controller.releasePermit(CONCURRENT_NS);
+    }
+
+    // Inject dummy metrics
+    // Split half half for ns1 and concurrent
+    Map<String, LongAdder> rejectedPermitsPerNs = new HashMap<>();
+    Map<String, LongAdder> acceptedPermitsPerNs = new HashMap<>();
+    injectDummyMetrics(rejectedPermitsPerNs, "ns1", 10);
+    injectDummyMetrics(rejectedPermitsPerNs, "ns2", 0);
+    injectDummyMetrics(rejectedPermitsPerNs, CONCURRENT_NS, 10);
+    controller.setMetrics(rejectedPermitsPerNs, acceptedPermitsPerNs);
+
+    if (manualRefresh) {
+      controller.refreshPermitsCap();
+    } else {
+      Thread.sleep(5000);

Review Comment:
   Better to use GenericTestUtils.waitFor instead of sleep.
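   
   A hedged sketch of that suggestion; GenericTestUtils.waitFor is from org.apache.hadoop.test, while getAvailablePermits is a hypothetical accessor standing in for however the test actually reads the current cap:
   
   ```java
   // Poll for the resized cap instead of sleeping a fixed 5 seconds.
   // (The test method would declare TimeoutException/InterruptedException.)
   GenericTestUtils.waitFor(
       () -> controller.getAvailablePermits("ns1") == 5,  // hypothetical accessor
       500,      // re-check every 500 ms
       10000);   // give up after 10 s
   ```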



##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/fairness/TestDynamicRouterRpcFairnessPolicyController.java:
##########
@@ -0,0 +1,177 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.fairness;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.LongAdder;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
+
+import static org.apache.hadoop.hdfs.server.federation.fairness.RouterRpcFairnessConstants.CONCURRENT_NS;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_MONITOR_NAMENODE;
+
+/**
+ * Test functionality of {@link DynamicRouterRpcFairnessPolicyController}.
+ */
+public class TestDynamicRouterRpcFairnessPolicyController {
+
+  private static String nameServices = "ns1.nn1, ns1.nn2, ns2.nn1, ns2.nn2";
+
+  @Test
+  public void testDynamicControllerSimple() throws InterruptedException {
+    verifyDynamicControllerSimple(true);
+    verifyDynamicControllerSimple(false);
+  }
+
+  @Test
+  public void testDynamicControllerAllPermitsAcquired() throws InterruptedException {
+    verifyDynamicControllerAllPermitsAcquired(true);
+    verifyDynamicControllerAllPermitsAcquired(false);
+  }
+
+  private void verifyDynamicControllerSimple(boolean manualRefresh)
+      throws InterruptedException {
+    // 3 permits each ns
+    DynamicRouterRpcFairnessPolicyController controller;
+    if (manualRefresh) {
+      controller = getFairnessPolicyController(9);
+    } else {
+      controller = getFairnessPolicyController(9, 4000);
+    }
+    for (int i = 0; i < 3; i++) {
+      Assert.assertTrue(controller.acquirePermit("ns1"));
+      Assert.assertTrue(controller.acquirePermit("ns2"));
+      Assert.assertTrue(controller.acquirePermit(CONCURRENT_NS));
+    }
+    Assert.assertFalse(controller.acquirePermit("ns1"));
+    Assert.assertFalse(controller.acquirePermit("ns2"));
+    Assert.assertFalse(controller.acquirePermit(CONCURRENT_NS));
+
+    // Release all permits
+    for (int i = 0; i < 3; i++) {
+      controller.releasePermit("ns1");
+      controller.releasePermit("ns2");
+      controller.releasePermit(CONCURRENT_NS);
+    }
+
+    // Inject dummy metrics
+    // Split half half for ns1 and concurrent
+    Map<String, LongAdder> rejectedPermitsPerNs = new HashMap<>();
+    Map<String, LongAdder> acceptedPermitsPerNs = new HashMap<>();
+    injectDummyMetrics(rejectedPermitsPerNs, "ns1", 10);
+    injectDummyMetrics(rejectedPermitsPerNs, "ns2", 0);
+    injectDummyMetrics(rejectedPermitsPerNs, CONCURRENT_NS, 10);
+    controller.setMetrics(rejectedPermitsPerNs, acceptedPermitsPerNs);
+
+    if (manualRefresh) {
+      controller.refreshPermitsCap();
+    } else {
+      Thread.sleep(5000);
+    }
+
+    // Current permits count should be 5:1:5

Review Comment:
   The total of the old permits doesn't equal the total of the new permits.
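   
   To make that concrete with the numbers this test appears to use (9 total handlers; injected rejected counts of 10 : 0 : 10 for ns1 : ns2 : concurrent): ns1 gets ceil(10/20 * 9) = 5, concurrent gets 5, and ns2 is bumped from 0 up to the 1-permit floor, so the new total is 5 + 1 + 5 = 11 permits versus the original 9.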





[GitHub] [hadoop] kokonguyen191 commented on a diff in pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
kokonguyen191 commented on code in PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#discussion_r866709948


##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/fairness/TestDynamicRouterRpcFairnessPolicyController.java:
##########
@@ -0,0 +1,177 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.fairness;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.LongAdder;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
+
+import static org.apache.hadoop.hdfs.server.federation.fairness.RouterRpcFairnessConstants.CONCURRENT_NS;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_MONITOR_NAMENODE;
+
+/**
+ * Test functionality of {@link DynamicRouterRpcFairnessPolicyController}.
+ */
+public class TestDynamicRouterRpcFairnessPolicyController {
+
+  private static String nameServices = "ns1.nn1, ns1.nn2, ns2.nn1, ns2.nn2";
+
+  @Test
+  public void testDynamicControllerSimple() throws InterruptedException {
+    verifyDynamicControllerSimple(true);
+    verifyDynamicControllerSimple(false);
+  }
+
+  @Test
+  public void testDynamicControllerAllPermitsAcquired() throws InterruptedException {
+    verifyDynamicControllerAllPermitsAcquired(true);
+    verifyDynamicControllerAllPermitsAcquired(false);
+  }
+
+  private void verifyDynamicControllerSimple(boolean manualRefresh)
+      throws InterruptedException {
+    // 3 permits each ns
+    DynamicRouterRpcFairnessPolicyController controller;
+    if (manualRefresh) {
+      controller = getFairnessPolicyController(9);
+    } else {
+      controller = getFairnessPolicyController(9, 4000);
+    }
+    for (int i = 0; i < 3; i++) {
+      Assert.assertTrue(controller.acquirePermit("ns1"));
+      Assert.assertTrue(controller.acquirePermit("ns2"));
+      Assert.assertTrue(controller.acquirePermit(CONCURRENT_NS));
+    }
+    Assert.assertFalse(controller.acquirePermit("ns1"));
+    Assert.assertFalse(controller.acquirePermit("ns2"));
+    Assert.assertFalse(controller.acquirePermit(CONCURRENT_NS));
+
+    // Release all permits
+    for (int i = 0; i < 3; i++) {
+      controller.releasePermit("ns1");
+      controller.releasePermit("ns2");
+      controller.releasePermit(CONCURRENT_NS);
+    }
+
+    // Inject dummy metrics
+    // Split half half for ns1 and concurrent
+    Map<String, LongAdder> rejectedPermitsPerNs = new HashMap<>();
+    Map<String, LongAdder> acceptedPermitsPerNs = new HashMap<>();
+    injectDummyMetrics(rejectedPermitsPerNs, "ns1", 10);
+    injectDummyMetrics(rejectedPermitsPerNs, "ns2", 0);
+    injectDummyMetrics(rejectedPermitsPerNs, CONCURRENT_NS, 10);
+    controller.setMetrics(rejectedPermitsPerNs, acceptedPermitsPerNs);
+
+    if (manualRefresh) {
+      controller.refreshPermitsCap();
+    } else {
+      Thread.sleep(5000);
+    }
+
+    // Current permits count should be 5:1:5

Review Comment:
   Yes, it's an inherent issue with the approach when the permit count scales with traffic.
   
   For example, say the initial handlers assigned to ns0, ns1, and concurrent are 5, 5, and 30 (40 total). After a while, all three namespaces see the same traffic and the 40 handlers have to be spread evenly between them, leading to a tiebreaking situation. I'm not sure what approach is best for handling these kinds of situations.
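   
   Concretely, with those illustrative numbers: once traffic is equal, each of the three namespaces gets ceil(40 / 3) = 14 permits, so the resized total is 42 rather than the configured 40, because every namespace is rounded up independently.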





[GitHub] [hadoop] kokonguyen191 commented on a diff in pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
kokonguyen191 commented on code in PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#discussion_r867599248


##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/fairness/DynamicRouterRpcFairnessPolicyController.java:
##########
@@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.fairness;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.federation.utils.AdjustableSemaphore;
+import org.apache.hadoop.util.concurrent.HadoopExecutors;
+
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_DEFAULT;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_DEFAULT;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY;
+
+/**
+ * Dynamic fairness policy extending {@link StaticRouterRpcFairnessPolicyController}
+ * and fetching handlers from configuration for all available name services.
+ * The handler counts change according to the traffic to each namespace.
+ * Total handlers might NOT strictly add up to the value defined by DFS_ROUTER_HANDLER_COUNT_KEY.
+ */
+public class DynamicRouterRpcFairnessPolicyController
+    extends StaticRouterRpcFairnessPolicyController {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(DynamicRouterRpcFairnessPolicyController.class);
+
+  private static final ScheduledExecutorService scheduledExecutor = HadoopExecutors
+      .newSingleThreadScheduledExecutor(new ThreadFactoryBuilder().setDaemon(true)
+          .setNameFormat("DynamicRouterRpcFairnessPolicyControllerPermitsResizer").build());
+  private PermitsResizerService permitsResizerService;
+  private ScheduledFuture<?> refreshTask;
+  private int handlerCount;
+
+  /**
+   * Initializes using the same logic as {@link StaticRouterRpcFairnessPolicyController}
+   * and starts a periodic semaphore resizer thread
+   *
+   * @param conf configuration
+   */
+  public DynamicRouterRpcFairnessPolicyController(Configuration conf) {
+    super(conf);
+    handlerCount = conf.getInt(DFS_ROUTER_HANDLER_COUNT_KEY, DFS_ROUTER_HANDLER_COUNT_DEFAULT);
+    long refreshInterval =
+        conf.getTimeDuration(DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_KEY,
+            DFS_ROUTER_DYNAMIC_FAIRNESS_CONTROLLER_REFRESH_INTERVAL_DEFAULT, TimeUnit.MILLISECONDS);
+    permitsResizerService = new PermitsResizerService();
+    refreshTask = scheduledExecutor
+        .scheduleWithFixedDelay(permitsResizerService, refreshInterval, refreshInterval,
+            TimeUnit.MILLISECONDS);
+  }
+
+  @VisibleForTesting
+  public DynamicRouterRpcFairnessPolicyController(Configuration conf, long refreshInterval) {
+    super(conf);
+    handlerCount = conf.getInt(DFS_ROUTER_HANDLER_COUNT_KEY, DFS_ROUTER_HANDLER_COUNT_DEFAULT);
+    permitsResizerService = new PermitsResizerService();
+    refreshTask = scheduledExecutor
+        .scheduleWithFixedDelay(permitsResizerService, refreshInterval, refreshInterval,
+            TimeUnit.MILLISECONDS);
+  }
+
+  @VisibleForTesting
+  public void refreshPermitsCap() {
+    permitsResizerService.run();
+  }
+
+  @Override
+  public void shutdown() {
+    super.shutdown();
+    if (refreshTask != null) {
+      refreshTask.cancel(true);
+    }
+    if (scheduledExecutor != null) {
+      scheduledExecutor.shutdown();
+    }
+  }
+
+  class PermitsResizerService implements Runnable {
+
+    @Override
+    public synchronized void run() {
+      long totalOps = 0;
+      Map<String, Long> nsOps = new HashMap<>();
+      for (Map.Entry<String, AdjustableSemaphore> entry : permits.entrySet()) {
+        long ops = (rejectedPermitsPerNs.containsKey(entry.getKey()) ?
+            rejectedPermitsPerNs.get(entry.getKey()).longValue() :
+            0) + (acceptedPermitsPerNs.containsKey(entry.getKey()) ?
+            acceptedPermitsPerNs.get(entry.getKey()).longValue() :
+            0);
+        nsOps.put(entry.getKey(), ops);
+        totalOps += ops;
+      }
+
+      for (Map.Entry<String, AdjustableSemaphore> entry : permits.entrySet()) {
+        String ns = entry.getKey();
+        AdjustableSemaphore semaphore = entry.getValue();
+        int oldPermitCap = permitSizes.get(ns);
+        int newPermitCap = (int) Math.ceil((float) nsOps.get(ns) / totalOps * handlerCount);
+        // Leave at least 1 handler even if there's no traffic
+        if (newPermitCap == 0) {
+          newPermitCap = 1;

Review Comment:
   I think it's better to create another ticket for this change later, since there's already handler count verification logic in `StaticRouterRpcFairnessPolicyController.java` and handling this is not trivial enough to solve in a few lines.





[GitHub] [hadoop] ferhui commented on pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
ferhui commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1123247825

   @kokonguyen191 Thanks for the updates. Please take a look at the checkstyle issues; you can also raise a new ticket to add documentation, referring to the static controller's docs.
   If there are no other comments, I will merge it.




[GitHub] [hadoop] ferhui merged pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
ferhui merged PR #4199:
URL: https://github.com/apache/hadoop/pull/4199




[GitHub] [hadoop] hadoop-yetus commented on pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1123407075

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 11s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 34s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  30m 43s |  |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 137m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4199 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 6eb815a6f090 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8f31e0806dc88220c534a15bf74cfd599c6dbeca |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/6/testReport/ |
   | Max. process+thread count | 2223 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1123204055

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  8s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/5/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 22s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  32m 10s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 138m 31s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4199 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 7c6c5c98801b 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 60f7b03ce490200cfeafec292c1138b96b319cff |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/5/testReport/ |
   | Max. process+thread count | 2221 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] kokonguyen191 commented on pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
kokonguyen191 commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1123208770

   The failed test is not related to this change. As for the total number of permits not staying constant after a resize: the total stays within the range initial total handler count <= total number of permits <= initial total handler count + number of namespaces, which seems to be an acceptable margin of error for performance purposes.
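   
   To make that bound concrete, here is a minimal, self-contained Java sketch of traffic-proportional permit allocation with ceiling rounding and a per-nameservice floor of one permit. It is not the actual `DynamicRouterRpcFairnessPolicyController` code; the class and method names are hypothetical, and it only models why the post-resize total can exceed the configured handler count by at most the number of namespaces.
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   // Hypothetical model of the resize step: each nameservice gets a traffic-proportional
   // share of the handlers, rounded up, with a floor of one permit so no nameservice starves.
   public class PermitBoundSketch {
   
     static Map<String, Integer> resize(Map<String, Long> trafficPerNs, int totalHandlers) {
       long totalTraffic = trafficPerNs.values().stream().mapToLong(Long::longValue).sum();
       Map<String, Integer> permits = new HashMap<>();
       for (Map.Entry<String, Long> e : trafficPerNs.entrySet()) {
         int share = (int) Math.ceil(
             (double) totalHandlers * e.getValue() / Math.max(totalTraffic, 1));
         permits.put(e.getKey(), Math.max(share, 1));
       }
       return permits;
     }
   
     public static void main(String[] args) {
       Map<String, Long> traffic = new HashMap<>();
       traffic.put("ns0", 333L);
       traffic.put("ns1", 333L);
       traffic.put("ns2", 334L);
       Map<String, Integer> permits = resize(traffic, 100);
       int total = permits.values().stream().mapToInt(Integer::intValue).sum();
       // Each ceiling adds strictly less than one extra permit per nameservice, so
       // 100 <= total <= 100 + 3. Here every nameservice gets 34 permits and total is 102.
       System.out.println(permits + " total=" + total);
     }
   }
   ```
   
   Under these assumptions the extra permits come purely from rounding up, which is exactly the slack described by the range above.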



[GitHub] [hadoop] hadoop-yetus commented on pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1120872757

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  5s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 31s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  31m 48s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 137m 16s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.federation.security.TestRouterSecurityManager |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4199 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 979564f749bf 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 80aa59d44cbdb3ef5df9d2f799d771f95198e990 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/3/testReport/ |
   | Max. process+thread count | 2223 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] kokonguyen191 commented on pull request #4199: HDFS-14750. RBF: Support dynamic handler allocation in routers

Posted by GitBox <gi...@apache.org>.
kokonguyen191 commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1120607395

   > It seems that we cannot know how many handlers are assigned to each nameservice after resizing. It would be better to update the configuration or add metrics, right?
   
   That information is already exposed by `getAvailableHandlerOnPerNs`.
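   
   For readers who have not looked at the fairness controller, below is a hedged sketch of how a test or metrics consumer might surface that per-nameservice view. The thread only confirms that a `getAvailableHandlerOnPerNs` accessor exists; the `FairnessController` interface, the example output format, and the logging wiring are assumptions made for illustration.
   
   ```java
   import java.util.logging.Logger;
   
   // Hypothetical wrapper: log per-nameservice handler availability so operators can see
   // how permits were redistributed after a resize. Only getAvailableHandlerOnPerNs() is
   // taken from this thread; the interface and the example string are assumptions.
   public class HandlerAvailabilityLogger {
     private static final Logger LOG =
         Logger.getLogger(HandlerAvailabilityLogger.class.getName());
   
     /** Minimal stand-in for the controller discussed in this thread. */
     interface FairnessController {
       String getAvailableHandlerOnPerNs();
     }
   
     static void logAvailability(FairnessController controller) {
       // Expected to be a per-nameservice summary, e.g. {"ns0":30,"ns1":70}.
       LOG.info("Available handlers per nameservice: " + controller.getAvailableHandlerOnPerNs());
     }
   
     public static void main(String[] args) {
       // Stub controller standing in for the real implementation.
       logAvailability(() -> "{\"ns0\":30,\"ns1\":70}");
     }
   }
   ```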

