Posted to common-commits@hadoop.apache.org by st...@apache.org on 2022/04/13 16:32:47 UTC

[hadoop] branch branch-3.3.3 updated (329fd7a60d1 -> d709000fb2b)

This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


    from 329fd7a60d1 HADOOP-18198. Preparing for 3.3.3 release
     new 37a2bd88769 Make upstream aware of 3.3.2 release
     new 6534f0d4fde HDFS-16437 ReverseXML processor doesn't accept XML files without the … (#3926)
     new 089a754fec6 Fix thread safety of EC decoding during concurrent preads (#3881)
     new 2be5a902dcf HADOOP-18109. Ensure that default permissions of directories under internal ViewFS directories are the same as directories on target filesystems. Contributed by Chentao Yu. (#3953)
     new 53ea32dd076 HADOOP-18125. Utility to identify git commit / Jira fixVersion discrepancies for RC preparation (#3991)
     new a7fd8a62ff5 HDFS-11041. Unable to unregister FsDatasetState MBean if DataNode is shutdown twice. Contributed by Wei-Chiu Chuang.
     new 8399b25ff8c YARN-11075. Explicitly declare serialVersionUID in LogMutation class. Contributed by Benjamin Teke
     new 728b49b9f38 YARN-11014. YARN incorrectly validates maximum capacity resources on the validation API. Contributed by Benjamin Teke
     new f7f630b7132 HDFS-16428. Source path with storagePolicy cause wrong typeConsumed while rename (#3898). Contributed by lei w.
     new 5e2c30091a5 HADOOP-18155. Refactor tests in TestFileUtil (#4063)
     new 6606da9500b HDFS-16501. Print the exception when reporting a bad block (#4062)
     new ecc1019b38f YARN-10720. YARN WebAppProxyServlet should support connection timeout to prevent proxy server from hanging. Contributed by Qi Zhu.
     new 25ad7aacdea HDFS-16355. Improve the description of dfs.block.scanner.volume.bytes.per.second (#3724)
     new 581ca342e51 MAPREDUCE-7373. Building MapReduce NativeTask fails on Fedora 34+ (#4120)
     new b43333f7070 HDFS-16507. [SBN read] Avoid purging edit log which is in progress (#4082)
     new d709000fb2b HADOOP-18088. Replace log4j 1.x with reload4j. (#4052)

The 16 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 LICENSE-binary                                     |   9 +-
 dev-support/git-jira-validation/README.md          | 134 +++++++
 .../git_jira_fix_version_check.py                  | 118 ++++++
 .../git-jira-validation/requirements.txt           |   3 +-
 .../resources/assemblies/hadoop-dynamometer.xml    |   2 +-
 .../resources/assemblies/hadoop-hdfs-nfs-dist.xml  |   2 +-
 .../resources/assemblies/hadoop-httpfs-dist.xml    |   2 +-
 .../main/resources/assemblies/hadoop-kms-dist.xml  |   2 +-
 .../resources/assemblies/hadoop-mapreduce-dist.xml |   2 +-
 .../main/resources/assemblies/hadoop-nfs-dist.xml  |   2 +-
 .../src/main/resources/assemblies/hadoop-tools.xml |   2 +-
 .../main/resources/assemblies/hadoop-yarn-dist.xml |   2 +-
 .../hadoop-client-check-invariants/pom.xml         |   4 +-
 .../hadoop-client-check-test-invariants/pom.xml    |   4 +-
 .../hadoop-client-integration-tests/pom.xml        |   9 +-
 .../hadoop-client-minicluster/pom.xml              |  10 +-
 .../hadoop-client-runtime/pom.xml                  |   8 +-
 hadoop-client-modules/hadoop-client/pom.xml        |  14 +-
 hadoop-common-project/hadoop-auth-examples/pom.xml |   6 +-
 hadoop-common-project/hadoop-auth/pom.xml          |  12 +-
 hadoop-common-project/hadoop-common/pom.xml        |   6 +-
 .../main/java/org/apache/hadoop/fs/FileUtil.java   |  36 +-
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java    |   5 -
 .../io/erasurecode/rawcoder/RawErasureDecoder.java |   6 +-
 .../java/org/apache/hadoop/util/GenericsUtil.java  |   2 +-
 .../site/markdown/release/3.3.2/CHANGELOG.3.3.2.md | 350 ++++++++++++++++++
 .../markdown/release/3.3.2/RELEASENOTES.3.3.2.md   |  93 +++++
 .../java/org/apache/hadoop/fs/TestFileUtil.java    | 394 +++++++++++++--------
 .../java/org/apache/hadoop/util/TestClassUtil.java |   2 +-
 hadoop-common-project/hadoop-kms/pom.xml           |   6 +-
 hadoop-common-project/hadoop-minikdc/pom.xml       |   2 +-
 hadoop-common-project/hadoop-nfs/pom.xml           |   6 +-
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml     |   4 +-
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml     |   6 +-
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml        |   6 +-
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml        |   6 +-
 ...HDFS_3.3.1.xml => Apache_Hadoop_HDFS_3.3.2.xml} |   6 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml            |   6 +-
 .../hadoop/hdfs/server/datanode/VolumeScanner.java |   2 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java     |   1 +
 .../hadoop/hdfs/server/namenode/FSDirRenameOp.java |   6 +-
 .../hadoop/hdfs/server/namenode/FSDirectory.java   |   6 +-
 .../hadoop/hdfs/server/namenode/FSEditLog.java     |  11 +-
 .../apache/hadoop/hdfs/server/namenode/INode.java  |  10 +
 .../OfflineImageReconstructor.java                 |   4 +
 .../src/main/resources/hdfs-default.xml            |   2 +-
 .../hadoop/fs/viewfs/TestViewFileSystemHdfs.java   |  19 +
 .../java/org/apache/hadoop/hdfs/TestQuota.java     |  39 ++
 .../hdfs/server/datanode/SimulatedFSDataset.java   |   5 +-
 .../hdfs/server/datanode/TestBlockScanner.java     |  16 +-
 .../offlineImageViewer/TestOfflineImageViewer.java |  42 ++-
 .../src/CMakeLists.txt                             |   1 +
 .../hadoop-mapreduce-client/pom.xml                |   2 +-
 hadoop-mapreduce-project/pom.xml                   |   2 +-
 hadoop-project-dist/pom.xml                        |   2 +-
 hadoop-project/pom.xml                             | 117 +++++-
 hadoop-tools/hadoop-azure/pom.xml                  |   4 +-
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |  14 +
 .../pom.xml                                        |   4 +-
 .../hadoop-yarn-services-core/pom.xml              |   4 +-
 .../hadoop-yarn/hadoop-yarn-client/pom.xml         |   4 +-
 .../hadoop-yarn/hadoop-yarn-common/pom.xml         |   4 +-
 .../src/main/resources/yarn-default.xml            |  12 +
 .../hadoop-yarn-server-resourcemanager/pom.xml     |   4 +-
 .../scheduler/capacity/CapacityScheduler.java      |  16 +
 .../capacity/CapacitySchedulerConfigValidator.java |   2 +
 .../capacity/conf/YarnConfigurationStore.java      |   1 +
 .../TestCapacitySchedulerConfigValidator.java      | 270 +++++++++++++-
 .../yarn/server/webproxy/WebAppProxyServlet.java   |  28 +-
 .../server/webproxy/TestWebAppProxyServlet.java    |  79 ++++-
 70 files changed, 1725 insertions(+), 297 deletions(-)
 create mode 100644 dev-support/git-jira-validation/README.md
 create mode 100644 dev-support/git-jira-validation/git_jira_fix_version_check.py
 copy hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/bindings/CMakeLists.txt => dev-support/git-jira-validation/requirements.txt (97%)
 create mode 100644 hadoop-common-project/hadoop-common/src/site/markdown/release/3.3.2/CHANGELOG.3.3.2.md
 create mode 100644 hadoop-common-project/hadoop-common/src/site/markdown/release/3.3.2/RELEASENOTES.3.3.2.md
 copy hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/{Apache_Hadoop_HDFS_3.3.1.xml => Apache_Hadoop_HDFS_3.3.2.xml} (88%)


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[hadoop] 08/16: YARN-11014. YARN incorrectly validates maximum capacity resources on the validation API. Contributed by Benjamin Teke

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 728b49b9f389d0c212be595cf059bc8c5d2d2e10
Author: Szilard Nemeth <sn...@apache.org>
AuthorDate: Wed Mar 2 14:23:00 2022 +0100

    YARN-11014. YARN incorrectly validates maximum capacity resources on the validation API. Contributed by Benjamin Teke
    
    Change-Id: I5505e1b8aaa394dfac31dade7aed6013e0279adc
---
 .../scheduler/capacity/CapacityScheduler.java      |  16 ++
 .../capacity/CapacitySchedulerConfigValidator.java |   2 +
 .../TestCapacitySchedulerConfigValidator.java      | 270 ++++++++++++++++++++-
 3 files changed, 284 insertions(+), 4 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 69e775f84e5..d0d95c388a6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -2056,6 +2056,22 @@ public class CapacityScheduler extends
     }
   }
 
+  /**
+   * Add nodes to nodeTracker. Used when validating CS configuration by instantiating a new
+   * CS instance.
+   * @param nodesToAdd nodes to be added
+   */
+  public void addNodes(List<FiCaSchedulerNode> nodesToAdd) {
+    writeLock.lock();
+    try {
+      for (FiCaSchedulerNode node : nodesToAdd) {
+        nodeTracker.addNode(node);
+      }
+    } finally {
+      writeLock.unlock();
+    }
+  }
+
   private void addNode(RMNode nodeManager) {
     writeLock.lock();
     try {
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java
index c3b4df4efdf..d180ffb64ba 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java
@@ -42,6 +42,7 @@ public final class CapacitySchedulerConfigValidator {
   public static boolean validateCSConfiguration(
           final Configuration oldConf, final Configuration newConf,
           final RMContext rmContext) throws IOException {
+    CapacityScheduler liveScheduler = (CapacityScheduler) rmContext.getScheduler();
     CapacityScheduler newCs = new CapacityScheduler();
     try {
       //TODO: extract all the validation steps and replace reinitialize with
@@ -49,6 +50,7 @@ public final class CapacitySchedulerConfigValidator {
       newCs.setConf(oldConf);
       newCs.setRMContext(rmContext);
       newCs.init(oldConf);
+      newCs.addNodes(liveScheduler.getAllNodes());
       newCs.reinitialize(newConf, rmContext, true);
       return true;
     } finally {
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerConfigValidator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerConfigValidator.java
index 04f4349db1d..ad114d901cf 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerConfigValidator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerConfigValidator.java
@@ -19,13 +19,23 @@
 package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap;
 import org.apache.hadoop.yarn.LocalConfigurationProvider;
+import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.ResourceInformation;
 import org.apache.hadoop.yarn.api.records.impl.LightWeightResource;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
+import org.apache.hadoop.yarn.server.resourcemanager.MockNM;
+import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
 import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.util.YarnVersionInfo;
+import org.apache.hadoop.yarn.util.resource.DominantResourceCalculator;
+import org.apache.hadoop.yarn.util.resource.ResourceUtils;
 import org.junit.Assert;
 import org.junit.Test;
 import org.mockito.Mockito;
@@ -34,9 +44,71 @@ import java.io.IOException;
 import java.util.HashMap;
 import java.util.Map;
 
+import static org.apache.hadoop.yarn.api.records.ResourceInformation.GPU_URI;
 import static org.junit.Assert.fail;
 
 public class TestCapacitySchedulerConfigValidator {
+  public static final int NODE_MEMORY = 16;
+  public static final int NODE1_VCORES = 8;
+  public static final int NODE2_VCORES = 10;
+  public static final int NODE3_VCORES = 12;
+  public static final Map<String, Long> NODE_GPU = ImmutableMap.of(GPU_URI, 2L);
+  public static final int GB = 1024;
+
+  private static final String PARENT_A = "parentA";
+  private static final String PARENT_B = "parentB";
+  private static final String LEAF_A = "leafA";
+  private static final String LEAF_B = "leafB";
+
+  private static final String PARENT_A_FULL_PATH = CapacitySchedulerConfiguration.ROOT
+      + "." + PARENT_A;
+  private static final String LEAF_A_FULL_PATH = PARENT_A_FULL_PATH
+      + "." + LEAF_A;
+  private static final String PARENT_B_FULL_PATH = CapacitySchedulerConfiguration.ROOT
+      + "." + PARENT_B;
+  private static final String LEAF_B_FULL_PATH = PARENT_B_FULL_PATH
+      + "." + LEAF_B;
+
+  private final Resource A_MINRES = Resource.newInstance(16 * GB, 10);
+  private final Resource B_MINRES = Resource.newInstance(32 * GB, 5);
+  private final Resource FULL_MAXRES = Resource.newInstance(48 * GB, 30);
+  private final Resource PARTIAL_MAXRES = Resource.newInstance(16 * GB, 10);
+  private final Resource VCORE_EXCEEDED_MAXRES = Resource.newInstance(16 * GB, 50);
+  private Resource A_MINRES_GPU;
+  private Resource B_MINRES_GPU;
+  private Resource FULL_MAXRES_GPU;
+  private Resource PARTIAL_MAXRES_GPU;
+  private Resource GPU_EXCEEDED_MAXRES_GPU;
+
+  protected MockRM mockRM = null;
+  protected MockNM nm1 = null;
+  protected MockNM nm2 = null;
+  protected MockNM nm3 = null;
+  protected CapacityScheduler cs;
+
+  public static void setupResources(boolean useGpu) {
+    Map<String, ResourceInformation> riMap = new HashMap<>();
+
+    ResourceInformation memory = ResourceInformation.newInstance(
+        ResourceInformation.MEMORY_MB.getName(),
+        ResourceInformation.MEMORY_MB.getUnits(),
+        YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB,
+        YarnConfiguration.DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_MB);
+    ResourceInformation vcores = ResourceInformation.newInstance(
+        ResourceInformation.VCORES.getName(),
+        ResourceInformation.VCORES.getUnits(),
+        YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES,
+        YarnConfiguration.DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES);
+    riMap.put(ResourceInformation.MEMORY_URI, memory);
+    riMap.put(ResourceInformation.VCORES_URI, vcores);
+    if (useGpu) {
+      riMap.put(ResourceInformation.GPU_URI,
+          ResourceInformation.newInstance(ResourceInformation.GPU_URI, "", 0,
+              ResourceTypes.COUNTABLE, 0, 10L));
+    }
+
+    ResourceUtils.initializeResourcesFromResourceInformationMap(riMap);
+  }
 
   /**
    * Test for the case when the scheduler.minimum-allocation-mb == 0.
@@ -69,7 +141,6 @@ public class TestCapacitySchedulerConfigValidator {
 
   }
 
-
   @Test
   public void testValidateMemoryAllocation() {
     Map<String, String> configs = new HashMap();
@@ -115,7 +186,6 @@ public class TestCapacitySchedulerConfigValidator {
 
   }
 
-
   @Test
   public void testValidateVCores() {
     Map<String, String> configs = new HashMap();
@@ -147,6 +217,106 @@ public class TestCapacitySchedulerConfigValidator {
     }
   }
 
+  @Test
+  public void testValidateCSConfigDefaultRCAbsoluteModeParentMaxMemoryExceeded()
+      throws Exception {
+    setUpMockRM(false);
+    RMContext rmContext = mockRM.getRMContext();
+    CapacitySchedulerConfiguration oldConfiguration = cs.getConfiguration();
+    CapacitySchedulerConfiguration newConfiguration =
+        new CapacitySchedulerConfiguration(cs.getConfiguration());
+    newConfiguration.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, FULL_MAXRES);
+    try {
+      CapacitySchedulerConfigValidator
+          .validateCSConfiguration(oldConfiguration, newConfiguration, rmContext);
+      fail("Parent maximum capacity exceeded");
+    } catch (IOException e) {
+      Assert.assertTrue(e.getCause().getMessage()
+          .startsWith("Max resource configuration"));
+    } finally {
+      mockRM.stop();
+    }
+  }
+
+  @Test
+  public void testValidateCSConfigDefaultRCAbsoluteModeParentMaxVcoreExceeded() throws Exception {
+    setUpMockRM(false);
+    RMContext rmContext = mockRM.getRMContext();
+    CapacitySchedulerConfiguration oldConfiguration = cs.getConfiguration();
+    CapacitySchedulerConfiguration newConfiguration =
+        new CapacitySchedulerConfiguration(cs.getConfiguration());
+    newConfiguration.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, VCORE_EXCEEDED_MAXRES);
+    try {
+      CapacitySchedulerConfigValidator
+          .validateCSConfiguration(oldConfiguration, newConfiguration, rmContext);
+    } catch (IOException e) {
+      fail("In DefaultResourceCalculator vcore limits are not enforced");
+    } finally {
+      mockRM.stop();
+    }
+  }
+
+  @Test
+  public void testValidateCSConfigDominantRCAbsoluteModeParentMaxMemoryExceeded()
+      throws Exception {
+    setUpMockRM(true);
+    RMContext rmContext = mockRM.getRMContext();
+    CapacitySchedulerConfiguration oldConfiguration = cs.getConfiguration();
+    CapacitySchedulerConfiguration newConfiguration =
+        new CapacitySchedulerConfiguration(cs.getConfiguration());
+    newConfiguration.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, FULL_MAXRES);
+    try {
+      CapacitySchedulerConfigValidator
+          .validateCSConfiguration(oldConfiguration, newConfiguration, rmContext);
+      fail("Parent maximum capacity exceeded");
+    } catch (IOException e) {
+      Assert.assertTrue(e.getCause().getMessage()
+          .startsWith("Max resource configuration"));
+    } finally {
+      mockRM.stop();
+    }
+  }
+
+  @Test
+  public void testValidateCSConfigDominantRCAbsoluteModeParentMaxVcoreExceeded() throws Exception {
+    setUpMockRM(true);
+    RMContext rmContext = mockRM.getRMContext();
+    CapacitySchedulerConfiguration oldConfiguration = cs.getConfiguration();
+    CapacitySchedulerConfiguration newConfiguration =
+        new CapacitySchedulerConfiguration(cs.getConfiguration());
+    newConfiguration.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, VCORE_EXCEEDED_MAXRES);
+    try {
+      CapacitySchedulerConfigValidator
+          .validateCSConfiguration(oldConfiguration, newConfiguration, rmContext);
+      fail("Parent maximum capacity exceeded");
+    } catch (IOException e) {
+      Assert.assertTrue(e.getCause().getMessage()
+          .startsWith("Max resource configuration"));
+    } finally {
+      mockRM.stop();
+    }
+  }
+
+  @Test
+  public void testValidateCSConfigDominantRCAbsoluteModeParentMaxGPUExceeded() throws Exception {
+    setUpMockRM(true);
+    RMContext rmContext = mockRM.getRMContext();
+    CapacitySchedulerConfiguration oldConfiguration = cs.getConfiguration();
+    CapacitySchedulerConfiguration newConfiguration =
+        new CapacitySchedulerConfiguration(cs.getConfiguration());
+    newConfiguration.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, GPU_EXCEEDED_MAXRES_GPU);
+    try {
+      CapacitySchedulerConfigValidator
+          .validateCSConfiguration(oldConfiguration, newConfiguration, rmContext);
+      fail("Parent maximum capacity exceeded");
+    } catch (IOException e) {
+      Assert.assertTrue(e.getCause().getMessage()
+          .startsWith("Max resource configuration"));
+    } finally {
+      mockRM.stop();
+    }
+  }
+
   @Test
   public void testValidateCSConfigStopALeafQueue() throws IOException {
     Configuration oldConfig = CapacitySchedulerConfigGeneratorForTest
@@ -155,7 +325,7 @@ public class TestCapacitySchedulerConfigValidator {
     newConfig
             .set("yarn.scheduler.capacity.root.test1.state", "STOPPED");
     RMContext rmContext = prepareRMContext();
-    Boolean isValidConfig = CapacitySchedulerConfigValidator
+    boolean isValidConfig = CapacitySchedulerConfigValidator
             .validateCSConfiguration(oldConfig, newConfig, rmContext);
     Assert.assertTrue(isValidConfig);
   }
@@ -340,9 +510,11 @@ public class TestCapacitySchedulerConfigValidator {
     Assert.assertTrue(isValidConfig);
   }
 
-
   public static RMContext prepareRMContext() {
+    setupResources(false);
     RMContext rmContext = Mockito.mock(RMContext.class);
+    CapacityScheduler mockCs = Mockito.mock(CapacityScheduler.class);
+    Mockito.when(rmContext.getScheduler()).thenReturn(mockCs);
     LocalConfigurationProvider configProvider = Mockito
             .mock(LocalConfigurationProvider.class);
     Mockito.when(rmContext.getConfigurationProvider())
@@ -361,4 +533,94 @@ public class TestCapacitySchedulerConfigValidator {
             .thenReturn(queuePlacementManager);
     return rmContext;
   }
+
+  private void setUpMockRM(boolean useDominantRC) throws Exception {
+    YarnConfiguration conf = new YarnConfiguration();
+    conf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
+        ResourceScheduler.class);
+    setupResources(useDominantRC);
+    CapacitySchedulerConfiguration csConf = setupCSConfiguration(conf, useDominantRC);
+
+    mockRM = new MockRM(csConf);
+
+    cs = (CapacityScheduler) mockRM.getResourceScheduler();
+    mockRM.start();
+    cs.start();
+
+    setupNodes(mockRM);
+  }
+
+  private void setupNodes(MockRM newMockRM) throws Exception {
+      nm1 = new MockNM("h1:1234",
+          Resource.newInstance(NODE_MEMORY * GB, NODE1_VCORES, NODE_GPU),
+          newMockRM.getResourceTrackerService(),
+          YarnVersionInfo.getVersion());
+
+      nm1.registerNode();
+
+      nm2 = new MockNM("h2:1234",
+          Resource.newInstance(NODE_MEMORY * GB, NODE2_VCORES, NODE_GPU),
+          newMockRM.getResourceTrackerService(),
+          YarnVersionInfo.getVersion());
+      nm2.registerNode();
+
+      nm3 = new MockNM("h3:1234",
+          Resource.newInstance(NODE_MEMORY * GB, NODE3_VCORES, NODE_GPU),
+          newMockRM.getResourceTrackerService(),
+          YarnVersionInfo.getVersion());
+      nm3.registerNode();
+  }
+
+  private void setupGpuResourceValues() {
+    A_MINRES_GPU = Resource.newInstance(A_MINRES.getMemorySize(), A_MINRES.getVirtualCores(),
+        ImmutableMap.of(GPU_URI, 2L));
+    B_MINRES_GPU =  Resource.newInstance(B_MINRES.getMemorySize(), B_MINRES.getVirtualCores(),
+        ImmutableMap.of(GPU_URI, 2L));
+    FULL_MAXRES_GPU = Resource.newInstance(FULL_MAXRES.getMemorySize(),
+        FULL_MAXRES.getVirtualCores(), ImmutableMap.of(GPU_URI, 6L));
+    PARTIAL_MAXRES_GPU = Resource.newInstance(PARTIAL_MAXRES.getMemorySize(),
+        PARTIAL_MAXRES.getVirtualCores(), ImmutableMap.of(GPU_URI, 4L));
+    GPU_EXCEEDED_MAXRES_GPU = Resource.newInstance(PARTIAL_MAXRES.getMemorySize(),
+        PARTIAL_MAXRES.getVirtualCores(), ImmutableMap.of(GPU_URI, 50L));
+  }
+
+  private CapacitySchedulerConfiguration setupCSConfiguration(YarnConfiguration configuration,
+                                                              boolean useDominantRC) {
+    CapacitySchedulerConfiguration csConf = new CapacitySchedulerConfiguration(configuration);
+    if (useDominantRC) {
+      csConf.set(CapacitySchedulerConfiguration.RESOURCE_CALCULATOR_CLASS,
+          DominantResourceCalculator.class.getName());
+      csConf.set(YarnConfiguration.RESOURCE_TYPES, ResourceInformation.GPU_URI);
+    }
+
+    csConf.setQueues(CapacitySchedulerConfiguration.ROOT,
+        new String[]{PARENT_A, PARENT_B});
+    csConf.setQueues(PARENT_A_FULL_PATH, new String[]{LEAF_A});
+    csConf.setQueues(PARENT_B_FULL_PATH, new String[]{LEAF_B});
+
+    if (useDominantRC) {
+      setupGpuResourceValues();
+      csConf.setMinimumResourceRequirement("", PARENT_A_FULL_PATH, A_MINRES_GPU);
+      csConf.setMinimumResourceRequirement("", PARENT_B_FULL_PATH, B_MINRES_GPU);
+      csConf.setMinimumResourceRequirement("", LEAF_A_FULL_PATH, A_MINRES_GPU);
+      csConf.setMinimumResourceRequirement("", LEAF_B_FULL_PATH, B_MINRES_GPU);
+
+      csConf.setMaximumResourceRequirement("", PARENT_A_FULL_PATH, PARTIAL_MAXRES_GPU);
+      csConf.setMaximumResourceRequirement("", PARENT_B_FULL_PATH, FULL_MAXRES_GPU);
+      csConf.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, PARTIAL_MAXRES_GPU);
+      csConf.setMaximumResourceRequirement("", LEAF_B_FULL_PATH, FULL_MAXRES_GPU);
+    } else {
+      csConf.setMinimumResourceRequirement("", PARENT_A_FULL_PATH, A_MINRES);
+      csConf.setMinimumResourceRequirement("", PARENT_B_FULL_PATH, B_MINRES);
+      csConf.setMinimumResourceRequirement("", LEAF_A_FULL_PATH, A_MINRES);
+      csConf.setMinimumResourceRequirement("", LEAF_B_FULL_PATH, B_MINRES);
+
+      csConf.setMaximumResourceRequirement("", PARENT_A_FULL_PATH, PARTIAL_MAXRES);
+      csConf.setMaximumResourceRequirement("", PARENT_B_FULL_PATH, FULL_MAXRES);
+      csConf.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, PARTIAL_MAXRES);
+      csConf.setMaximumResourceRequirement("", LEAF_B_FULL_PATH, FULL_MAXRES);
+    }
+
+    return csConf;
+  }
 }
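
To make the behavioral change concrete: before this patch, validateCSConfiguration built its throwaway CapacityScheduler over an empty nodeTracker, so absolute-resource maximum checks compared against zero cluster capacity and silently passed. A hedged usage sketch follows (queue path and resource values are invented for illustration; cs and rmContext come from a live cluster, as in the tests above):

    // Hedged sketch, not committed code: request a queue maximum larger than
    // the total capacity of the registered NodeManagers.
    CapacitySchedulerConfiguration newConf =
        new CapacitySchedulerConfiguration(cs.getConfiguration());
    newConf.setMaximumResourceRequirement("", "root.parentA.leafA",
        Resource.newInstance(48 * 1024, 30));
    try {
      CapacitySchedulerConfigValidator.validateCSConfiguration(
          cs.getConfiguration(), newConf, rmContext);
      // Before the patch this passed: the validating scheduler saw no nodes.
    } catch (IOException e) {
      // After the patch the validator sees live node capacity and rejects
      // the update with "Max resource configuration ...".
    }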




[hadoop] 13/16: HDFS-16355. Improve the description of dfs.block.scanner.volume.bytes.per.second (#3724)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 25ad7aacdeabf0ca37e69082fb631027665d1ca4
Author: GuoPhilipse <46...@users.noreply.github.com>
AuthorDate: Sun Mar 27 21:23:48 2022 +0800

    HDFS-16355. Improve the description of dfs.block.scanner.volume.bytes.per.second (#3724)
    
    Co-authored-by: gf13871 <gf...@ly.com>
    Signed-off-by: Akira Ajisaka <aa...@apache.org>
    (cherry picked from commit 046a6204b4a895b98ccd41dde1c9524a6bb0ea31)
    
    Change-Id: I2cae5d1c27a492d896da5338a92c7a86f88a8b43
---
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml      |  2 +-
 .../hadoop/hdfs/server/datanode/TestBlockScanner.java    | 16 +++++++++++-----
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 80c481886d7..78af86b0a3c 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -1591,7 +1591,7 @@
   <name>dfs.block.scanner.volume.bytes.per.second</name>
   <value>1048576</value>
   <description>
-        If this is 0, the DataNode's block scanner will be disabled.  If this
+        If this is configured to be less than or equal to zero, the DataNode's block scanner will be disabled.  If this
         is positive, this is the number of bytes per second that the DataNode's
         block scanner will try to scan from each volume.
   </description>
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
index c74785923a7..2086e15348e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
@@ -282,11 +282,17 @@ public class TestBlockScanner {
   public void testDisableVolumeScanner() throws Exception {
     Configuration conf = new Configuration();
     disableBlockScanner(conf);
-    TestContext ctx = new TestContext(conf, 1);
-    try {
-      Assert.assertFalse(ctx.datanode.getBlockScanner().isEnabled());
-    } finally {
-      ctx.close();
+    try(TestContext ctx = new TestContext(conf, 1)) {
+      assertFalse(ctx.datanode.getBlockScanner().isEnabled());
+    }
+  }
+
+  @Test(timeout=60000)
+  public void testDisableVolumeScanner2() throws Exception {
+    Configuration conf = new Configuration();
+    conf.setLong(DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND, -1L);
+    try(TestContext ctx = new TestContext(conf, 1)) {
+      assertFalse(ctx.datanode.getBlockScanner().isEnabled());
     }
   }
 
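Note the widened semantics: the scanner is now documented (and tested) to be disabled for any non-positive rate, not only an exact 0. A hedged client-side sketch, using the same key the test references via DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND:

    // Hedged sketch: any value <= 0 disables the DataNode volume scanner.
    Configuration conf = new Configuration();
    conf.setLong("dfs.block.scanner.volume.bytes.per.second", -1L); // disabled
    // A positive value is the per-volume scan rate in bytes per second:
    conf.setLong("dfs.block.scanner.volume.bytes.per.second", 1048576L); // 1 MiB/s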




[hadoop] 12/16: YARN-10720. YARN WebAppProxyServlet should support connection timeout to prevent proxy server from hanging. Contributed by Qi Zhu.

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ecc1019b38f937fe9eeff0b6fe294e5aec5e3d7b
Author: Peter Bacsko <pb...@cloudera.com>
AuthorDate: Thu Apr 1 09:21:15 2021 +0200

    YARN-10720. YARN WebAppProxyServlet should support connection timeout to prevent proxy server from hanging. Contributed by Qi Zhu.
    
    (cherry picked from commit a0deda1a777d8967fb8c08ac976543cda895773d)
    
    Change-Id: I935725ba094d2c35fdc91dd42883bf5b0d506d56
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java | 14 ++++
 .../src/main/resources/yarn-default.xml            | 12 ++++
 .../yarn/server/webproxy/WebAppProxyServlet.java   | 28 ++++++--
 .../server/webproxy/TestWebAppProxyServlet.java    | 79 +++++++++++++++++++++-
 4 files changed, 126 insertions(+), 7 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index df482c18598..c1bb6aa68d2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2672,6 +2672,20 @@ public class YarnConfiguration extends Configuration {
 
   public static final String DEFAULT_RM_APPLICATION_HTTPS_POLICY = "NONE";
 
+
+  // Whether the proxy connection timeout is enabled.
+  public static final String RM_PROXY_TIMEOUT_ENABLED =
+      RM_PREFIX + "proxy.timeout.enabled";
+
+  public static final boolean DEFALUT_RM_PROXY_TIMEOUT_ENABLED =
+      true;
+
+  public static final String RM_PROXY_CONNECTION_TIMEOUT =
+      RM_PREFIX + "proxy.connection.timeout";
+
+  public static final int DEFAULT_RM_PROXY_CONNECTION_TIMEOUT =
+      60000;
+
   /**
    * Interval of time the linux container executor should try cleaning up
    * cgroups entry when cleaning up a container. This is required due to what 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index ff3a8179132..4be357b78a6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -2601,6 +2601,18 @@
     <value/>
   </property>
 
+  <property>
+    <description>Whether the web proxy connection timeout is enabled. Enabled by default.</description>
+    <name>yarn.resourcemanager.proxy.timeout.enabled</name>
+    <value>true</value>
+  </property>
+
+  <property>
+    <description>The web proxy connection timeout in milliseconds.</description>
+    <name>yarn.resourcemanager.proxy.connection.timeout</name>
+    <value>60000</value>
+  </property>
+
   <!-- Applications' Configuration -->
 
   <property>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java
index 0b6bb65d8db..03b7077bc16 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java
@@ -122,6 +122,9 @@ public class WebAppProxyServlet extends HttpServlet {
     }
   }
 
+  protected void setConf(YarnConfiguration conf){
+    this.conf = conf;
+  }
   /**
    * Default constructor
    */
@@ -230,6 +233,14 @@ public class WebAppProxyServlet extends HttpServlet {
 
     String httpsPolicy = conf.get(YarnConfiguration.RM_APPLICATION_HTTPS_POLICY,
         YarnConfiguration.DEFAULT_RM_APPLICATION_HTTPS_POLICY);
+
+    boolean connectionTimeoutEnabled =
+        conf.getBoolean(YarnConfiguration.RM_PROXY_TIMEOUT_ENABLED,
+        YarnConfiguration.DEFALUT_RM_PROXY_TIMEOUT_ENABLED);
+    int connectionTimeout =
+        conf.getInt(YarnConfiguration.RM_PROXY_CONNECTION_TIMEOUT,
+            YarnConfiguration.DEFAULT_RM_PROXY_CONNECTION_TIMEOUT);
+
     if (httpsPolicy.equals("LENIENT") || httpsPolicy.equals("STRICT")) {
       ProxyCA proxyCA = getProxyCA();
       // ProxyCA could be null when the Proxy is run outside the RM
@@ -250,10 +261,18 @@ public class WebAppProxyServlet extends HttpServlet {
     InetAddress localAddress = InetAddress.getByName(proxyHost);
     LOG.debug("local InetAddress for proxy host: {}", localAddress);
     httpClientBuilder.setDefaultRequestConfig(
-        RequestConfig.custom()
-        .setCircularRedirectsAllowed(true)
-        .setLocalAddress(localAddress)
-        .build());
+        connectionTimeoutEnabled ?
+            RequestConfig.custom()
+                .setCircularRedirectsAllowed(true)
+                .setLocalAddress(localAddress)
+                .setConnectionRequestTimeout(connectionTimeout)
+                .setSocketTimeout(connectionTimeout)
+                .setConnectTimeout(connectionTimeout)
+                .build() :
+            RequestConfig.custom()
+                .setCircularRedirectsAllowed(true)
+                .setLocalAddress(localAddress)
+                .build());
 
     HttpRequestBase base = null;
     if (method.equals(HTTP.GET)) {
@@ -621,7 +640,6 @@ public class WebAppProxyServlet extends HttpServlet {
    * again... If this method returns true, there was a redirect, and
    * it was handled by redirecting the current request to an error page.
    *
-   * @param path the part of the request path after the app id
    * @param id the app id
    * @param req the request object
    * @param resp the response object
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
index f05e05a2d63..6c8993f6e80 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
@@ -23,6 +23,8 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
 
 import java.io.ByteArrayOutputStream;
 import java.io.IOException;
@@ -35,10 +37,14 @@ import java.net.HttpCookie;
 import java.net.HttpURLConnection;
 import java.net.URI;
 import java.net.URL;
+import java.net.SocketTimeoutException;
+import java.util.Collections;
 import java.util.Enumeration;
 import java.util.List;
 import java.util.Map;
 
+import javax.servlet.ServletConfig;
+import javax.servlet.ServletContext;
 import javax.servlet.ServletException;
 import javax.servlet.http.HttpServlet;
 import javax.servlet.http.HttpServletRequest;
@@ -98,6 +104,7 @@ public class TestWebAppProxyServlet {
     context.setContextPath("/foo");
     server.setHandler(context);
     context.addServlet(new ServletHolder(TestServlet.class), "/bar");
+    context.addServlet(new ServletHolder(TimeOutTestServlet.class), "/timeout");
     ((ServerConnector)server.getConnectors()[0]).setHost("localhost");
     server.start();
     originalPort = ((ServerConnector)server.getConnectors()[0]).getLocalPort();
@@ -145,6 +152,29 @@ public class TestWebAppProxyServlet {
     }
   }
 
+  @SuppressWarnings("serial")
+  public static class TimeOutTestServlet extends HttpServlet {
+
+    @Override
+    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
+        throws ServletException, IOException {
+      try {
+        Thread.sleep(10 * 1000);
+      } catch (InterruptedException e) {
+        LOG.warn("doGet() interrupted", e);
+        resp.setStatus(HttpServletResponse.SC_BAD_REQUEST);
+        return;
+      }
+      resp.setStatus(HttpServletResponse.SC_OK);
+    }
+
+    @Override
+    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
+        throws ServletException, IOException {
+      resp.setStatus(HttpServletResponse.SC_OK);
+    }
+  }
+
   @Test(timeout=5000)
   public void testWebAppProxyServlet() throws Exception {
     configuration.set(YarnConfiguration.PROXY_ADDRESS, "localhost:9090");
@@ -256,6 +286,45 @@ public class TestWebAppProxyServlet {
     }
   }
 
+  @Test(expected = SocketTimeoutException.class)
+  public void testWebAppProxyConnectionTimeout()
+      throws IOException, ServletException{
+    HttpServletRequest request = mock(HttpServletRequest.class);
+    when(request.getMethod()).thenReturn("GET");
+    when(request.getRemoteUser()).thenReturn("dr.who");
+    when(request.getPathInfo()).thenReturn("/application_00_0");
+    when(request.getHeaderNames()).thenReturn(Collections.emptyEnumeration());
+
+    HttpServletResponse response = mock(HttpServletResponse.class);
+    when(response.getOutputStream()).thenReturn(null);
+
+    WebAppProxyServlet servlet = new WebAppProxyServlet();
+    YarnConfiguration conf = new YarnConfiguration();
+    conf.setBoolean(YarnConfiguration.RM_PROXY_TIMEOUT_ENABLED,
+        true);
+    conf.setInt(YarnConfiguration.RM_PROXY_CONNECTION_TIMEOUT,
+        1000);
+
+    servlet.setConf(conf);
+
+    ServletConfig config = mock(ServletConfig.class);
+    ServletContext context = mock(ServletContext.class);
+    when(config.getServletContext()).thenReturn(context);
+
+    AppReportFetcherForTest appReportFetcher =
+        new AppReportFetcherForTest(new YarnConfiguration());
+
+    when(config.getServletContext()
+        .getAttribute(WebAppProxy.FETCHER_ATTRIBUTE))
+        .thenReturn(appReportFetcher);
+
+    appReportFetcher.answer = 7;
+
+    servlet.init(config);
+    servlet.doGet(request, response);
+
+  }
+
   @Test(timeout=5000)
   public void testAppReportForEmptyTrackingUrl() throws Exception {
     configuration.set(YarnConfiguration.PROXY_ADDRESS, "localhost:9090");
@@ -391,9 +460,9 @@ public class TestWebAppProxyServlet {
 
   @Test(timeout=5000)
   public void testCheckHttpsStrictAndNotProvided() throws Exception {
-    HttpServletResponse resp = Mockito.mock(HttpServletResponse.class);
+    HttpServletResponse resp = mock(HttpServletResponse.class);
     StringWriter sw = new StringWriter();
-    Mockito.when(resp.getWriter()).thenReturn(new PrintWriter(sw));
+    when(resp.getWriter()).thenReturn(new PrintWriter(sw));
     YarnConfiguration conf = new YarnConfiguration();
     final URI httpLink = new URI("http://foo.com");
     final URI httpsLink = new URI("https://foo.com");
@@ -566,6 +635,12 @@ public class TestWebAppProxyServlet {
         return result;
       } else if (answer == 6) {
         return getDefaultApplicationReport(appId, false);
+      } else if (answer == 7) {
+        // test connection timeout
+        FetchedAppReport result = getDefaultApplicationReport(appId);
+        result.getApplicationReport().setOriginalTrackingUrl("localhost:"
+            + originalPort + "/foo/timeout?a=b#main");
+        return result;
       }
       return null;
     }

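The new knobs are ordinary ResourceManager configuration. A hedged sketch of setting them programmatically (the constants are exactly the ones added to YarnConfiguration above; the 30-second value is only an example):

    // Hedged example: enable the proxy timeout and shorten it from the
    // 60000 ms default. The keys resolve to
    // yarn.resourcemanager.proxy.timeout.enabled and
    // yarn.resourcemanager.proxy.connection.timeout.
    YarnConfiguration conf = new YarnConfiguration();
    conf.setBoolean(YarnConfiguration.RM_PROXY_TIMEOUT_ENABLED, true);
    conf.setInt(YarnConfiguration.RM_PROXY_CONNECTION_TIMEOUT, 30000); // milliseconds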



[hadoop] 06/16: HDFS-11041. Unable to unregister FsDatasetState MBean if DataNode is shutdown twice. Contributed by Wei-Chiu Chuang.

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a7fd8a62ff5df20f764e7c1bbfd18c971851a0cc
Author: Ayush Saxena <ay...@apache.org>
AuthorDate: Wed Jun 3 12:47:15 2020 +0530

    HDFS-11041. Unable to unregister FsDatasetState MBean if DataNode is shutdown twice. Contributed by Wei-Chiu Chuang.
    
    (cherry picked from commit e8cb2ae409bc1d62f23efef485d1c6f1ff21e86c)
    
    Change-Id: I9f04082d650628bc1b8b62dacaaf472f8a578742
---
 .../hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java    | 1 +
 .../org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java   | 5 ++++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 2ab4b83a3d2..d263d7dfd35 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -2353,6 +2353,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 
     if (mbeanName != null) {
       MBeans.unregister(mbeanName);
+      mbeanName = null;
     }
     
     if (asyncDiskService != null) {
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
index 113da585c9e..417ad3ce74c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
@@ -1367,7 +1367,10 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
 
   @Override
   public void shutdown() {
-    if (mbeanName != null) MBeans.unregister(mbeanName);
+    if (mbeanName != null) {
+      MBeans.unregister(mbeanName);
+      mbeanName = null;
+    }
   }
 
   @Override
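
The fix is an instance of idempotent resource cleanup: clear the reference after unregistering, so a repeated shutdown() is a no-op rather than an attempt to unregister an already-removed MBean. A minimal hedged sketch (field and method names mirror the diff; the enclosing class is assumed):

    import javax.management.ObjectName;
    import org.apache.hadoop.metrics2.util.MBeans;

    private ObjectName mbeanName; // set when the MBean is registered

    public void shutdown() {
      if (mbeanName != null) {
        MBeans.unregister(mbeanName);
        mbeanName = null; // a second call now falls through harmlessly
      }
    }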




[hadoop] 10/16: HADOOP-18155. Refactor tests in TestFileUtil (#4063)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5e2c30091a54e4c94a0369d25a041784e81b354f
Author: Wei-Chiu Chuang <we...@apache.org>
AuthorDate: Mon Mar 14 08:40:17 2022 +0800

    HADOOP-18155. Refactor tests in TestFileUtil (#4063)
    
    (cherry picked from commit d0fa9b5775185bd83e4a767a7dfc13ef89c5154a)
    
     Conflicts:
            hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
            hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
    
    Change-Id: I2bba28c56dd08da315856066b58b1778b67bfb45
    Co-authored-by: Gautham B A <ga...@gmail.com>
---
 .../main/java/org/apache/hadoop/fs/FileUtil.java   |  36 +-
 .../java/org/apache/hadoop/fs/TestFileUtil.java    | 394 +++++++++++++--------
 2 files changed, 271 insertions(+), 159 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
index 5e2d6c5badb..13c9d857379 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
@@ -38,6 +38,7 @@ import java.nio.charset.StandardCharsets;
 import java.nio.file.AccessDeniedException;
 import java.nio.file.FileSystems;
 import java.nio.file.Files;
+import java.nio.file.Paths;
 import java.util.ArrayList;
 import java.util.Enumeration;
 import java.util.List;
@@ -970,6 +971,14 @@ public class FileUtil {
           + " would create entry outside of " + outputDir);
     }
 
+    if (entry.isSymbolicLink() || entry.isLink()) {
+      String canonicalTargetPath = getCanonicalPath(entry.getLinkName(), outputDir);
+      if (!canonicalTargetPath.startsWith(targetDirPath)) {
+        throw new IOException(
+            "expanding " + entry.getName() + " would create entry outside of " + outputDir);
+      }
+    }
+
     if (entry.isDirectory()) {
       File subDir = new File(outputDir, entry.getName());
       if (!subDir.mkdirs() && !subDir.isDirectory()) {
@@ -985,10 +994,12 @@ public class FileUtil {
     }
 
     if (entry.isSymbolicLink()) {
-      // Create symbolic link relative to tar parent dir
-      Files.createSymbolicLink(FileSystems.getDefault()
-              .getPath(outputDir.getPath(), entry.getName()),
-          FileSystems.getDefault().getPath(entry.getLinkName()));
+      // Create symlink with canonical target path to ensure that we don't extract
+      // outside targetDirPath
+      String canonicalTargetPath = getCanonicalPath(entry.getLinkName(), outputDir);
+      Files.createSymbolicLink(
+          FileSystems.getDefault().getPath(outputDir.getPath(), entry.getName()),
+          FileSystems.getDefault().getPath(canonicalTargetPath));
       return;
     }
 
@@ -1000,7 +1011,8 @@ public class FileUtil {
     }
 
     if (entry.isLink()) {
-      File src = new File(outputDir, entry.getLinkName());
+      String canonicalTargetPath = getCanonicalPath(entry.getLinkName(), outputDir);
+      File src = new File(canonicalTargetPath);
       HardLink.createHardLink(src, outputFile);
       return;
     }
@@ -1008,6 +1020,20 @@ public class FileUtil {
     org.apache.commons.io.FileUtils.copyToFile(tis, outputFile);
   }
 
+  /**
+   * Gets the canonical path for the given path.
+   *
+   * @param path      The path for which the canonical path needs to be computed.
+   * @param parentDir The parent directory to use if the path is a relative path.
+   * @return The canonical path of the given path.
+   */
+  private static String getCanonicalPath(String path, File parentDir) throws IOException {
+    java.nio.file.Path targetPath = Paths.get(path);
+    return (targetPath.isAbsolute() ?
+        new File(path) :
+        new File(parentDir, path)).getCanonicalPath();
+  }
+
   /**
    * Class for creating hardlinks.
    * Supports Unix, WindXP.
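
The FileUtil side of this change closes a path-traversal hole ("tar-slip" through link entries): a symlink or hard link whose target resolves outside the extraction directory is now rejected, and symlinks are created against the canonicalized target. A hedged, self-contained sketch of the containment check (the paths are invented for illustration):

    // Hedged illustration of the link-target containment check added above.
    File outputDir = new File("/tmp/extract");
    String linkName = "../../etc/passwd"; // hypothetical malicious link target
    java.nio.file.Path targetPath = java.nio.file.Paths.get(linkName);
    String canonicalTarget = (targetPath.isAbsolute()
        ? new File(linkName)
        : new File(outputDir, linkName)).getCanonicalPath();
    if (!canonicalTarget.startsWith(outputDir.getCanonicalPath())) {
      throw new IOException("expanding " + linkName
          + " would create entry outside of " + outputDir);
    }
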
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
index e84d23c058a..03b9d22b98d 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
@@ -42,13 +42,14 @@ import java.net.URISyntaxException;
 import java.net.URL;
 import java.net.UnknownHostException;
 import java.nio.charset.StandardCharsets;
-import java.nio.file.FileSystems;
 import java.nio.file.Files;
+import java.nio.file.Paths;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.List;
+import java.util.Objects;
 import java.util.jar.Attributes;
 import java.util.jar.JarFile;
 import java.util.jar.Manifest;
@@ -60,9 +61,12 @@ import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.tools.tar.TarEntry;
 import org.apache.tools.tar.TarOutputStream;
+
+import org.assertj.core.api.Assertions;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -158,13 +162,12 @@ public class TestFileUtil {
     FileUtils.forceMkdir(dir1);
     FileUtils.forceMkdir(dir2);
 
-    new File(del, FILE).createNewFile();
-    File tmpFile = new File(tmp, FILE);
-    tmpFile.createNewFile();
+    Verify.createNewFile(new File(del, FILE));
+    File tmpFile = Verify.createNewFile(new File(tmp, FILE));
 
     // create files
-    new File(dir1, FILE).createNewFile();
-    new File(dir2, FILE).createNewFile();
+    Verify.createNewFile(new File(dir1, FILE));
+    Verify.createNewFile(new File(dir2, FILE));
 
     // create a symlink to file
     File link = new File(del, LINK);
@@ -173,7 +176,7 @@ public class TestFileUtil {
     // create a symlink to dir
     File linkDir = new File(del, "tmpDir");
     FileUtil.symLink(tmp.toString(), linkDir.toString());
-    Assert.assertEquals(5, del.listFiles().length);
+    Assert.assertEquals(5, Objects.requireNonNull(del.listFiles()).length);
 
     // create files in partitioned directories
     createFile(partitioned, "part-r-00000", "foo");
@@ -200,13 +203,9 @@ public class TestFileUtil {
   private File createFile(File directory, String name, String contents)
       throws IOException {
     File newFile = new File(directory, name);
-    PrintWriter pw = new PrintWriter(newFile);
-    try {
+    try (PrintWriter pw = new PrintWriter(newFile)) {
       pw.println(contents);
     }
-    finally {
-      pw.close();
-    }
     return newFile;
   }
 
@@ -218,11 +217,11 @@ public class TestFileUtil {
 
     //Test existing directory with no files case 
     File newDir = new File(tmp.getPath(),"test");
-    newDir.mkdir();
+    Verify.mkdir(newDir);
     Assert.assertTrue("Failed to create test dir", newDir.exists());
     files = FileUtil.listFiles(newDir);
     Assert.assertEquals(0, files.length);
-    newDir.delete();
+    assertTrue(newDir.delete());
     Assert.assertFalse("Failed to delete test dir", newDir.exists());
     
     //Test non-existing directory case, this throws 
@@ -244,11 +243,11 @@ public class TestFileUtil {
 
     //Test existing directory with no files case 
     File newDir = new File(tmp.getPath(),"test");
-    newDir.mkdir();
+    Verify.mkdir(newDir);
     Assert.assertTrue("Failed to create test dir", newDir.exists());
     files = FileUtil.list(newDir);
     Assert.assertEquals("New directory unexpectedly contains files", 0, files.length);
-    newDir.delete();
+    assertTrue(newDir.delete());
     Assert.assertFalse("Failed to delete test dir", newDir.exists());
     
     //Test non-existing directory case, this throws 
@@ -266,7 +265,7 @@ public class TestFileUtil {
   public void testFullyDelete() throws IOException {
     boolean ret = FileUtil.fullyDelete(del);
     Assert.assertTrue(ret);
-    Assert.assertFalse(del.exists());
+    Verify.notExists(del);
     validateTmpDir();
   }
 
@@ -279,13 +278,13 @@ public class TestFileUtil {
   @Test (timeout = 30000)
   public void testFullyDeleteSymlinks() throws IOException {
     File link = new File(del, LINK);
-    Assert.assertEquals(5, del.list().length);
+    assertDelListLength(5);
     // Since tmpDir is symlink to tmp, fullyDelete(tmpDir) should not
     // delete contents of tmp. See setupDirs for details.
     boolean ret = FileUtil.fullyDelete(link);
     Assert.assertTrue(ret);
-    Assert.assertFalse(link.exists());
-    Assert.assertEquals(4, del.list().length);
+    Verify.notExists(link);
+    assertDelListLength(4);
     validateTmpDir();
 
     File linkDir = new File(del, "tmpDir");
@@ -293,8 +292,8 @@ public class TestFileUtil {
     // delete contents of tmp. See setupDirs for details.
     ret = FileUtil.fullyDelete(linkDir);
     Assert.assertTrue(ret);
-    Assert.assertFalse(linkDir.exists());
-    Assert.assertEquals(3, del.list().length);
+    Verify.notExists(linkDir);
+    assertDelListLength(3);
     validateTmpDir();
   }
 
@@ -310,16 +309,16 @@ public class TestFileUtil {
     // to make y as a dangling link to file tmp/x
     boolean ret = FileUtil.fullyDelete(tmp);
     Assert.assertTrue(ret);
-    Assert.assertFalse(tmp.exists());
+    Verify.notExists(tmp);
 
     // dangling symlink to file
     File link = new File(del, LINK);
-    Assert.assertEquals(5, del.list().length);
+    assertDelListLength(5);
     // Even though 'y' is dangling symlink to file tmp/x, fullyDelete(y)
     // should delete 'y' properly.
     ret = FileUtil.fullyDelete(link);
     Assert.assertTrue(ret);
-    Assert.assertEquals(4, del.list().length);
+    assertDelListLength(4);
 
     // dangling symlink to directory
     File linkDir = new File(del, "tmpDir");
@@ -327,22 +326,22 @@ public class TestFileUtil {
     // delete tmpDir properly.
     ret = FileUtil.fullyDelete(linkDir);
     Assert.assertTrue(ret);
-    Assert.assertEquals(3, del.list().length);
+    assertDelListLength(3);
   }
 
   @Test (timeout = 30000)
   public void testFullyDeleteContents() throws IOException {
     boolean ret = FileUtil.fullyDeleteContents(del);
     Assert.assertTrue(ret);
-    Assert.assertTrue(del.exists());
-    Assert.assertEquals(0, del.listFiles().length);
+    Verify.exists(del);
+    Assert.assertEquals(0, Objects.requireNonNull(del.listFiles()).length);
     validateTmpDir();
   }
 
   private void validateTmpDir() {
-    Assert.assertTrue(tmp.exists());
-    Assert.assertEquals(1, tmp.listFiles().length);
-    Assert.assertTrue(new File(tmp, FILE).exists());
+    Verify.exists(tmp);
+    Assert.assertEquals(1, Objects.requireNonNull(tmp.listFiles()).length);
+    Verify.exists(new File(tmp, FILE));
   }
 
   /**
@@ -366,15 +365,15 @@ public class TestFileUtil {
    * @throws IOException
    */
   private void setupDirsAndNonWritablePermissions() throws IOException {
-    new MyFile(del, FILE_1_NAME).createNewFile();
+    Verify.createNewFile(new MyFile(del, FILE_1_NAME));
 
     // "file1" is non-deletable by default, see MyFile.delete().
 
-    xSubDir.mkdirs();
-    file2.createNewFile();
+    Verify.mkdirs(xSubDir);
+    Verify.createNewFile(file2);
 
-    xSubSubDir.mkdirs();
-    file22.createNewFile();
+    Verify.mkdirs(xSubSubDir);
+    Verify.createNewFile(file22);
 
     revokePermissions(file22);
     revokePermissions(xSubSubDir);
@@ -382,8 +381,8 @@ public class TestFileUtil {
     revokePermissions(file2);
     revokePermissions(xSubDir);
 
-    ySubDir.mkdirs();
-    file3.createNewFile();
+    Verify.mkdirs(ySubDir);
+    Verify.createNewFile(file3);
 
     File tmpFile = new File(tmp, FILE);
     tmpFile.createNewFile();
@@ -448,6 +447,88 @@ public class TestFileUtil {
     validateAndSetWritablePermissions(false, ret);
   }
 
+  /**
+   * Asserts that the listing of {@link TestFileUtil#del} has the given expected length.
+   *
+   * @param expectedLength The expected number of entries in {@link TestFileUtil#del}.
+   */
+  private void assertDelListLength(int expectedLength) {
+    Assertions.assertThat(del.list()).describedAs("del list").isNotNull().hasSize(expectedLength);
+  }
+
+  /**
+   * Helper class to perform {@link File} operations and verify them immediately.
+   */
+  public static class Verify {
+    /**
+     * Invokes {@link File#createNewFile()} on the given {@link File} instance.
+     *
+     * @param file The file to call {@link File#createNewFile()} on.
+     * @return The given file, to allow call chaining.
+     * @throws IOException As per {@link File#createNewFile()}.
+     */
+    public static File createNewFile(File file) throws IOException {
+      assertTrue("Unable to create new file " + file, file.createNewFile());
+      return file;
+    }
+
+    /**
+     * Invokes {@link File#mkdir()} on the given {@link File} instance.
+     *
+     * @param file The file to call {@link File#mkdir()} on.
+     * @return The given file, to allow call chaining.
+     */
+    public static File mkdir(File file) {
+      assertTrue("Unable to mkdir for " + file, file.mkdir());
+      return file;
+    }
+
+    /**
+     * Invokes {@link File#mkdirs()} on the given {@link File} instance.
+     *
+     * @param file The file to call {@link File#mkdirs()} on.
+     * @return The given file, to allow call chaining.
+     */
+    public static File mkdirs(File file) {
+      assertTrue("Unable to mkdirs for " + file, file.mkdirs());
+      return file;
+    }
+
+    /**
+     * Invokes {@link File#delete()} on the given {@link File} instance.
+     *
+     * @param file The file to call {@link File#delete()} on.
+     * @return The given file, to allow call chaining.
+     */
+    public static File delete(File file) {
+      assertTrue("Unable to delete " + file, file.delete());
+      return file;
+    }
+
+    /**
+     * Invokes {@link File#exists()} on the given {@link File} instance.
+     *
+     * @param file The file to call {@link File#exists()} on.
+     * @return The given file, to allow call chaining.
+     */
+    public static File exists(File file) {
+      assertTrue("Expected file " + file + " doesn't exist", file.exists());
+      return file;
+    }
+
+    /**
+     * Invokes {@link File#exists()} on the given {@link File} instance to check
+     * that the {@link File} does not exist.
+     *
+     * @param file The file to call {@link File#exists()} on.
+     * @return The given file, to allow call chaining.
+     */
+    public static File notExists(File file) {
+      assertFalse("Expected file " + file + " must not exist", file.exists());
+      return file;
+    }
+  }
+
   /**
    * Extend {@link File}. Same as {@link File} except for two things: (1) This
    * treats file1Name as a very special file which is not delete-able
@@ -580,14 +661,13 @@ public class TestFileUtil {
       FileUtil.chmod(partitioned.getAbsolutePath(), "0777", true/*recursive*/);
     }
   }
-  
+
   @Test (timeout = 30000)
-  public void testUnTar() throws IOException {
+  public void testUnTar() throws Exception {
     // make a simple tar:
     final File simpleTar = new File(del, FILE);
-    OutputStream os = new FileOutputStream(simpleTar); 
-    TarOutputStream tos = new TarOutputStream(os);
-    try {
+    OutputStream os = new FileOutputStream(simpleTar);
+    try (TarOutputStream tos = new TarOutputStream(os)) {
       TarEntry te = new TarEntry("/bar/foo");
       byte[] data = "some-content".getBytes("UTF-8");
       te.setSize(data.length);
@@ -596,55 +676,42 @@ public class TestFileUtil {
       tos.closeEntry();
       tos.flush();
       tos.finish();
-    } finally {
-      tos.close();
     }
 
     // successfully untar it into an existing dir:
     FileUtil.unTar(simpleTar, tmp);
     // check result:
-    assertTrue(new File(tmp, "/bar/foo").exists());
+    Verify.exists(new File(tmp, "/bar/foo"));
     assertEquals(12, new File(tmp, "/bar/foo").length());
-    
-    final File regularFile = new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog");
-    regularFile.createNewFile();
-    assertTrue(regularFile.exists());
-    try {
-      FileUtil.unTar(simpleTar, regularFile);
-      assertTrue("An IOException expected.", false);
-    } catch (IOException ioe) {
-      // okay
-    }
+
+    final File regularFile =
+        Verify.createNewFile(new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog"));
+    LambdaTestUtils.intercept(IOException.class, () -> FileUtil.unTar(simpleTar, regularFile));
   }
   
   @Test (timeout = 30000)
   public void testReplaceFile() throws IOException {
-    final File srcFile = new File(tmp, "src");
-    
     // src exists, and target does not exist:
-    srcFile.createNewFile();
-    assertTrue(srcFile.exists());
+    final File srcFile = Verify.createNewFile(new File(tmp, "src"));
     final File targetFile = new File(tmp, "target");
-    assertTrue(!targetFile.exists());
+    Verify.notExists(targetFile);
     FileUtil.replaceFile(srcFile, targetFile);
-    assertTrue(!srcFile.exists());
-    assertTrue(targetFile.exists());
+    Verify.notExists(srcFile);
+    Verify.exists(targetFile);
 
     // src exists and target is a regular file: 
-    srcFile.createNewFile();
-    assertTrue(srcFile.exists());
+    Verify.createNewFile(srcFile);
+    Verify.exists(srcFile);
     FileUtil.replaceFile(srcFile, targetFile);
-    assertTrue(!srcFile.exists());
-    assertTrue(targetFile.exists());
+    Verify.notExists(srcFile);
+    Verify.exists(targetFile);
     
     // src exists, and target is a non-empty directory: 
-    srcFile.createNewFile();
-    assertTrue(srcFile.exists());
-    targetFile.delete();
-    targetFile.mkdirs();
-    File obstacle = new File(targetFile, "obstacle");
-    obstacle.createNewFile();
-    assertTrue(obstacle.exists());
+    Verify.createNewFile(srcFile);
+    Verify.exists(srcFile);
+    Verify.delete(targetFile);
+    Verify.mkdirs(targetFile);
+    File obstacle = Verify.createNewFile(new File(targetFile, "obstacle"));
     assertTrue(targetFile.exists() && targetFile.isDirectory());
     try {
       FileUtil.replaceFile(srcFile, targetFile);
@@ -653,9 +720,9 @@ public class TestFileUtil {
       // okay
     }
     // check up the post-condition: nothing is deleted:
-    assertTrue(srcFile.exists());
+    Verify.exists(srcFile);
     assertTrue(targetFile.exists() && targetFile.isDirectory());
-    assertTrue(obstacle.exists());
+    Verify.exists(obstacle);
   }
   
   @Test (timeout = 30000)
@@ -668,13 +735,13 @@ public class TestFileUtil {
     assertTrue(tmp1.exists() && tmp2.exists());
     assertTrue(tmp1.canWrite() && tmp2.canWrite());
     assertTrue(tmp1.canRead() && tmp2.canRead());
-    tmp1.delete();
-    tmp2.delete();
+    Verify.delete(tmp1);
+    Verify.delete(tmp2);
     assertTrue(!tmp1.exists() && !tmp2.exists());
   }
   
   @Test (timeout = 30000)
-  public void testUnZip() throws IOException {
+  public void testUnZip() throws Exception {
     // make a simple zip
     final File simpleZip = new File(del, FILE);
     OutputStream os = new FileOutputStream(simpleZip); 
@@ -695,18 +762,12 @@ public class TestFileUtil {
     // successfully unzip it into an existing dir:
     FileUtil.unZip(simpleZip, tmp);
     // check result:
-    assertTrue(new File(tmp, "foo").exists());
+    Verify.exists(new File(tmp, "foo"));
     assertEquals(12, new File(tmp, "foo").length());
-    
-    final File regularFile = new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog");
-    regularFile.createNewFile();
-    assertTrue(regularFile.exists());
-    try {
-      FileUtil.unZip(simpleZip, regularFile);
-      assertTrue("An IOException expected.", false);
-    } catch (IOException ioe) {
-      // okay
-    }
+
+    final File regularFile =
+        Verify.createNewFile(new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog"));
+    LambdaTestUtils.intercept(IOException.class, () -> FileUtil.unZip(simpleZip, regularFile));
   }
 
   @Test (timeout = 30000)
@@ -752,24 +813,24 @@ public class TestFileUtil {
     final File dest = new File(del, "dest");
     boolean result = FileUtil.copy(fs, srcPath, dest, false, conf);
     assertTrue(result);
-    assertTrue(dest.exists());
+    Verify.exists(dest);
     assertEquals(content.getBytes().length 
         + System.getProperty("line.separator").getBytes().length, dest.length());
-    assertTrue(srcFile.exists()); // should not be deleted
+    Verify.exists(srcFile); // should not be deleted
     
     // copy regular file, delete src:
-    dest.delete();
-    assertTrue(!dest.exists());
+    Verify.delete(dest);
+    Verify.notExists(dest);
     result = FileUtil.copy(fs, srcPath, dest, true, conf);
     assertTrue(result);
-    assertTrue(dest.exists());
+    Verify.exists(dest);
     assertEquals(content.getBytes().length 
         + System.getProperty("line.separator").getBytes().length, dest.length());
-    assertTrue(!srcFile.exists()); // should be deleted
+    Verify.notExists(srcFile); // should be deleted
     
     // copy a dir:
-    dest.delete();
-    assertTrue(!dest.exists());
+    Verify.delete(dest);
+    Verify.notExists(dest);
     srcPath = new Path(partitioned.toURI());
     result = FileUtil.copy(fs, srcPath, dest, true, conf);
     assertTrue(result);
@@ -781,7 +842,7 @@ public class TestFileUtil {
       assertEquals(3 
           + System.getProperty("line.separator").getBytes().length, f.length());
     }
-    assertTrue(!partitioned.exists()); // should be deleted
+    Verify.notExists(partitioned); // should be deleted
   }  
 
   @Test (timeout = 30000)
@@ -869,8 +930,8 @@ public class TestFileUtil {
     // create the symlink
     FileUtil.symLink(file.getAbsolutePath(), link.getAbsolutePath());
 
-    Assert.assertTrue(file.exists());
-    Assert.assertTrue(link.exists());
+    Verify.exists(file);
+    Verify.exists(link);
 
     File link2 = new File(del, "_link2");
 
@@ -880,10 +941,10 @@ public class TestFileUtil {
     // Make sure the file still exists
     // (NOTE: this would fail on Java6 on Windows if we didn't
     // copy the file in FileUtil#symlink)
-    Assert.assertTrue(file.exists());
+    Verify.exists(file);
 
-    Assert.assertTrue(link2.exists());
-    Assert.assertFalse(link.exists());
+    Verify.exists(link2);
+    Verify.notExists(link);
   }
 
   /**
@@ -898,13 +959,13 @@ public class TestFileUtil {
     // create the symlink
     FileUtil.symLink(file.getAbsolutePath(), link.getAbsolutePath());
 
-    Assert.assertTrue(file.exists());
-    Assert.assertTrue(link.exists());
+    Verify.exists(file);
+    Verify.exists(link);
 
     // make sure that deleting a symlink works properly
-    Assert.assertTrue(link.delete());
-    Assert.assertFalse(link.exists());
-    Assert.assertTrue(file.exists());
+    Verify.delete(link);
+    Verify.notExists(link);
+    Verify.exists(file);
   }
 
   /**
@@ -931,13 +992,13 @@ public class TestFileUtil {
     Assert.assertEquals(data.length, file.length());
     Assert.assertEquals(data.length, link.length());
 
-    file.delete();
-    Assert.assertFalse(file.exists());
+    Verify.delete(file);
+    Verify.notExists(file);
 
     Assert.assertEquals(0, link.length());
 
-    link.delete();
-    Assert.assertFalse(link.exists());
+    Verify.delete(link);
+    Verify.notExists(link);
   }
 
   /**
@@ -1003,7 +1064,7 @@ public class TestFileUtil {
   public void testSymlinkSameFile() throws IOException {
     File file = new File(del, FILE);
 
-    file.delete();
+    Verify.delete(file);
 
     // Create a symbolic link
     // The operation should succeed
@@ -1076,21 +1137,21 @@ public class TestFileUtil {
 
     String parentDir = untarDir.getCanonicalPath() + Path.SEPARATOR + "name";
     File testFile = new File(parentDir + Path.SEPARATOR + "version");
-    Assert.assertTrue(testFile.exists());
+    Verify.exists(testFile);
     Assert.assertTrue(testFile.length() == 0);
     String imageDir = parentDir + Path.SEPARATOR + "image";
     testFile = new File(imageDir + Path.SEPARATOR + "fsimage");
-    Assert.assertTrue(testFile.exists());
+    Verify.exists(testFile);
     Assert.assertTrue(testFile.length() == 157);
     String currentDir = parentDir + Path.SEPARATOR + "current";
     testFile = new File(currentDir + Path.SEPARATOR + "fsimage");
-    Assert.assertTrue(testFile.exists());
+    Verify.exists(testFile);
     Assert.assertTrue(testFile.length() == 4331);
     testFile = new File(currentDir + Path.SEPARATOR + "edits");
-    Assert.assertTrue(testFile.exists());
+    Verify.exists(testFile);
     Assert.assertTrue(testFile.length() == 1033);
     testFile = new File(currentDir + Path.SEPARATOR + "fstime");
-    Assert.assertTrue(testFile.exists());
+    Verify.exists(testFile);
     Assert.assertTrue(testFile.length() == 8);
   }
 
@@ -1151,9 +1212,9 @@ public class TestFileUtil {
     }
 
     // create non-jar files, which we expect to not be included in the classpath
-    Assert.assertTrue(new File(tmp, "text.txt").createNewFile());
-    Assert.assertTrue(new File(tmp, "executable.exe").createNewFile());
-    Assert.assertTrue(new File(tmp, "README").createNewFile());
+    Verify.createNewFile(new File(tmp, "text.txt"));
+    Verify.createNewFile(new File(tmp, "executable.exe"));
+    Verify.createNewFile(new File(tmp, "README"));
 
     // create classpath jar
     String wildcardPath = tmp.getCanonicalPath() + File.separator + "*";
@@ -1239,9 +1300,9 @@ public class TestFileUtil {
     }
 
     // create non-jar files, which we expect to not be included in the result
-    assertTrue(new File(tmp, "text.txt").createNewFile());
-    assertTrue(new File(tmp, "executable.exe").createNewFile());
-    assertTrue(new File(tmp, "README").createNewFile());
+    Verify.createNewFile(new File(tmp, "text.txt"));
+    Verify.createNewFile(new File(tmp, "executable.exe"));
+    Verify.createNewFile(new File(tmp, "README"));
 
     // pass in the directory
     String directory = tmp.getCanonicalPath();
@@ -1275,7 +1336,7 @@ public class TestFileUtil {
       uri4 = new URI(uris4);
       uri5 = new URI(uris5);
       uri6 = new URI(uris6);
-    } catch (URISyntaxException use) {
+    } catch (URISyntaxException ignored) {
     }
     // Set up InetAddress
     inet1 = mock(InetAddress.class);
@@ -1298,7 +1359,7 @@ public class TestFileUtil {
       when(InetAddress.getByName(uris3)).thenReturn(inet3);
       when(InetAddress.getByName(uris4)).thenReturn(inet4);
       when(InetAddress.getByName(uris5)).thenReturn(inet5);
-    } catch (UnknownHostException ue) {
+    } catch (UnknownHostException ignored) {
     }
 
     fs1 = mock(FileSystem.class);
@@ -1318,62 +1379,87 @@ public class TestFileUtil {
   @Test
   public void testCompareFsNull() throws Exception {
     setupCompareFs();
-    assertEquals(FileUtil.compareFs(null,fs1),false);
-    assertEquals(FileUtil.compareFs(fs1,null),false);
+    assertFalse(FileUtil.compareFs(null, fs1));
+    assertFalse(FileUtil.compareFs(fs1, null));
   }
 
   @Test
   public void testCompareFsDirectories() throws Exception {
     setupCompareFs();
-    assertEquals(FileUtil.compareFs(fs1,fs1),true);
-    assertEquals(FileUtil.compareFs(fs1,fs2),false);
-    assertEquals(FileUtil.compareFs(fs1,fs5),false);
-    assertEquals(FileUtil.compareFs(fs3,fs4),true);
-    assertEquals(FileUtil.compareFs(fs1,fs6),false);
+    assertTrue(FileUtil.compareFs(fs1, fs1));
+    assertFalse(FileUtil.compareFs(fs1, fs2));
+    assertFalse(FileUtil.compareFs(fs1, fs5));
+    assertTrue(FileUtil.compareFs(fs3, fs4));
+    assertFalse(FileUtil.compareFs(fs1, fs6));
   }
 
   @Test(timeout = 8000)
   public void testCreateSymbolicLinkUsingJava() throws IOException {
     final File simpleTar = new File(del, FILE);
     OutputStream os = new FileOutputStream(simpleTar);
-    TarArchiveOutputStream tos = new TarArchiveOutputStream(os);
-    File untarFile = null;
-    try {
+    try (TarArchiveOutputStream tos = new TarArchiveOutputStream(os)) {
       // Files to tar
       final String tmpDir = "tmp/test";
       File tmpDir1 = new File(tmpDir, "dir1/");
       File tmpDir2 = new File(tmpDir, "dir2/");
-      // Delete the directories if they already exist
-      tmpDir1.mkdirs();
-      tmpDir2.mkdirs();
+      Verify.mkdirs(tmpDir1);
+      Verify.mkdirs(tmpDir2);
 
-      java.nio.file.Path symLink = FileSystems
-          .getDefault().getPath(tmpDir1.getPath() + "/sl");
+      java.nio.file.Path symLink = Paths.get(tmpDir1.getPath(), "sl");
 
       // Create Symbolic Link
-      Files.createSymbolicLink(symLink,
-          FileSystems.getDefault().getPath(tmpDir2.getPath())).toString();
+      Files.createSymbolicLink(symLink, Paths.get(tmpDir2.getPath()));
       assertTrue(Files.isSymbolicLink(symLink.toAbsolutePath()));
-      // put entries in tar file
+      // Put entries in tar file
       putEntriesInTar(tos, tmpDir1.getParentFile());
       tos.close();
 
-      untarFile = new File(tmpDir, "2");
-      // Untar using java
+      File untarFile = new File(tmpDir, "2");
+      // Untar using Java
       FileUtil.unTarUsingJava(simpleTar, untarFile, false);
 
       // Check symbolic link and other directories are there in untar file
       assertTrue(Files.exists(untarFile.toPath()));
-      assertTrue(Files.exists(FileSystems.getDefault().getPath(untarFile
-          .getPath(), tmpDir)));
-      assertTrue(Files.isSymbolicLink(FileSystems.getDefault().getPath(untarFile
-          .getPath().toString(), symLink.toString())));
-
+      assertTrue(Files.exists(Paths.get(untarFile.getPath(), tmpDir)));
+      assertTrue(Files.isSymbolicLink(Paths.get(untarFile.getPath(), symLink.toString())));
     } finally {
       FileUtils.deleteDirectory(new File("tmp"));
-      tos.close();
     }
+  }
+
+  @Test(expected = IOException.class)
+  public void testCreateArbitrarySymlinkUsingJava() throws IOException {
+    final File simpleTar = new File(del, FILE);
+    OutputStream os = new FileOutputStream(simpleTar);
 
+    File rootDir = new File("tmp");
+    try (TarArchiveOutputStream tos = new TarArchiveOutputStream(os)) {
+      tos.setLongFileMode(TarArchiveOutputStream.LONGFILE_GNU);
+
+      // Create arbitrary dir
+      File arbitraryDir = new File(rootDir, "arbitrary-dir/");
+      Verify.mkdirs(arbitraryDir);
+
+      // We will tar from the tar-root lineage
+      File tarRoot = new File(rootDir, "tar-root/");
+      File symlinkRoot = new File(tarRoot, "dir1/");
+      Verify.mkdirs(symlinkRoot);
+
+      // Create Symbolic Link to an arbitrary dir
+      java.nio.file.Path symLink = Paths.get(symlinkRoot.getPath(), "sl");
+      Files.createSymbolicLink(symLink, arbitraryDir.toPath().toAbsolutePath());
+
+      // Put entries in tar file
+      putEntriesInTar(tos, tarRoot);
+      putEntriesInTar(tos, new File(symLink.toFile(), "dir-outside-tar-root/"));
+      tos.close();
+
+      // Untar using Java
+      File untarFile = new File(rootDir, "extracted");
+      FileUtil.unTarUsingJava(simpleTar, untarFile, false);
+    } finally {
+      FileUtils.deleteDirectory(rootDir);
+    }
   }
 
   private void putEntriesInTar(TarArchiveOutputStream tos, File f)
@@ -1450,7 +1536,7 @@ public class TestFileUtil {
     String result = FileUtil.readLink(file);
     Assert.assertEquals("", result);
 
-    file.delete();
+    Verify.delete(file);
   }
 
   /**
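
A minimal usage sketch of the new Verify helper (illustrative only, not part
of the patch; it assumes org.apache.hadoop.fs.TestFileUtil and JUnit are on
the classpath, and the scratch directory name is made up):

    import java.io.File;
    import java.io.IOException;
    import org.apache.hadoop.fs.TestFileUtil;

    public class VerifyUsageSketch {
      public static void main(String[] args) throws IOException {
        File scratch = new File(System.getProperty("java.io.tmpdir"), "verify-demo");
        // Each Verify call performs the File operation, asserts it succeeded,
        // and returns the File so calls can be nested.
        File dir = TestFileUtil.Verify.mkdirs(new File(scratch, "case1"));
        File data = TestFileUtil.Verify.createNewFile(new File(dir, "data"));
        TestFileUtil.Verify.exists(data);
        TestFileUtil.Verify.delete(data);
        TestFileUtil.Verify.notExists(data);
        TestFileUtil.Verify.delete(dir); // clean up so the sketch can be rerun
      }
    }

The same refactor replaces the old try/catch-then-fail blocks with
LambdaTestUtils.intercept(IOException.class, ...), which fails the test if
the expected exception is not raised.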




[hadoop] 09/16: HDFS-16428. Source path with storagePolicy cause wrong typeConsumed while rename (#3898). Contributed by lei w.


stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f7f630b7132d70a13ffbc69a7a3dd01ccfa91471
Author: Thinker313 <47...@users.noreply.github.com>
AuthorDate: Tue Jan 25 15:26:18 2022 +0800

    HDFS-16428. Source path with storagePolicy cause wrong typeConsumed while rename (#3898). Contributed by lei w.
    
    Signed-off-by: Ayush Saxena <ay...@apache.org>
    Signed-off-by: He Xiaoqiao <he...@apache.org>
---
 .../hadoop/hdfs/server/namenode/FSDirRenameOp.java |  6 +++-
 .../hadoop/hdfs/server/namenode/FSDirectory.java   |  6 +++-
 .../apache/hadoop/hdfs/server/namenode/INode.java  | 10 ++++++
 .../java/org/apache/hadoop/hdfs/TestQuota.java     | 39 ++++++++++++++++++++++
 4 files changed, 59 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
index c60acaa0031..ee0bf8a5fb1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
@@ -80,8 +80,12 @@ class FSDirRenameOp {
     // Assume dstParent existence check done by callers.
     INode dstParent = dst.getINode(-2);
     // Use the destination parent's storage policy for quota delta verify.
+    final boolean isSrcSetSp = src.getLastINode().isSetStoragePolicy();
+    final byte storagePolicyID = isSrcSetSp ?
+        src.getLastINode().getLocalStoragePolicyID() :
+        dstParent.getStoragePolicyID();
     final QuotaCounts delta = src.getLastINode()
-        .computeQuotaUsage(bsps, dstParent.getStoragePolicyID(), false,
+        .computeQuotaUsage(bsps, storagePolicyID, false,
             Snapshot.CURRENT_STATE_ID);
 
     // Reduce the required quota by dst that is being removed
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 7b902d5ff1b..fc17eaebf7f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -1363,9 +1363,13 @@ public class FSDirectory implements Closeable {
     // always verify inode name
     verifyINodeName(inode.getLocalNameBytes());
 
+    final boolean isSrcSetSp = inode.isSetStoragePolicy();
+    final byte storagePolicyID = isSrcSetSp ?
+        inode.getLocalStoragePolicyID() :
+        parent.getStoragePolicyID();
     final QuotaCounts counts = inode
         .computeQuotaUsage(getBlockStoragePolicySuite(),
-            parent.getStoragePolicyID(), false, Snapshot.CURRENT_STATE_ID);
+            storagePolicyID, false, Snapshot.CURRENT_STATE_ID);
     updateCount(existing, pos, counts, checkQuota);
 
     boolean isRename = (inode.getParent() != null);
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
index 03f01eb32ee..8e417fe43aa 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
@@ -340,6 +340,16 @@ public abstract class INode implements INodeAttributes, Diff.Element<byte[]> {
     return false;
   }
 
+  /**
+   * Check if this inode itself has a storage policy set.
+   */
+  public boolean isSetStoragePolicy() {
+    if (isSymlink()) {
+      return false;
+    }
+    return getLocalStoragePolicyID() != HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED;
+  }
+
   /** Cast this inode to an {@link INodeFile}.  */
   public INodeFile asFile() {
     throw new IllegalStateException("Current inode is not a file: "
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
index 79088d3be85..e14ea4dc265 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
@@ -44,6 +44,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.QuotaUsage;
 import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.client.impl.LeaseRenewer;
 import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
@@ -958,6 +959,44 @@ public class TestQuota {
         6 * fileSpace);
   }
 
+  @Test
+  public void testRenameInodeWithStorageType() throws IOException {
+    final int size = 64;
+    final short repl = 1;
+    final Path foo = new Path("/foo");
+    final Path bs1 = new Path(foo, "bs1");
+    final Path wow = new Path(bs1, "wow");
+    final Path bs2 = new Path(foo, "bs2");
+    final Path wow2 = new Path(bs2, "wow2");
+    final Path wow3 = new Path(bs2, "wow3");
+
+    dfs.mkdirs(bs1, FsPermission.getDirDefault());
+    dfs.mkdirs(bs2, FsPermission.getDirDefault());
+    dfs.setQuota(bs1, 1000, 434217728);
+    dfs.setQuota(bs2, 1000, 434217728);
+    // file wow3 without storage policy
+    DFSTestUtil.createFile(dfs, wow3, size, repl, 0);
+
+    dfs.setStoragePolicy(bs2, HdfsConstants.ONESSD_STORAGE_POLICY_NAME);
+
+    DFSTestUtil.createFile(dfs, wow, size, repl, 0);
+    DFSTestUtil.createFile(dfs, wow2, size, repl, 0);
+    assertTrue("Without storage policy, typeConsumed should be 0.",
+        dfs.getQuotaUsage(bs1).getTypeConsumed(StorageType.SSD) == 0);
+    assertTrue("With storage policy, typeConsumed should not be 0.",
+        dfs.getQuotaUsage(bs2).getTypeConsumed(StorageType.SSD) != 0);
+    // wow3 without storage policy , rename will not change typeConsumed
+    dfs.rename(wow3, bs1);
+    assertTrue("Rename src without storagePolicy, dst typeConsumed should not be changed.",
+        dfs.getQuotaUsage(bs2).getTypeConsumed(StorageType.SSD) == 0);
+
+    long srcTypeQuota = dfs.getQuotaUsage(bs2).getTypeQuota(StorageType.SSD);
+    dfs.rename(bs2, bs1);
+    long dstTypeQuota = dfs.getQuotaUsage(bs1).getTypeConsumed(StorageType.SSD);
+    assertTrue("Rename with storage policy, typeConsumed should not be 0.",
+        dstTypeQuota != srcTypeQuota);
+  }
+
   private static void checkContentSummary(final ContentSummary expected,
       final ContentSummary computed) {
     assertEquals(expected.toString(), computed.toString());
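
The heart of the fix is the policy-selection rule added to FSDirRenameOp and
FSDirectory above: when the source inode carries its own storage policy, use
it for the quota delta; otherwise fall back to the destination parent's
policy. A standalone sketch of that rule (illustrative names and values, not
code from the patch; 0 stands in for BLOCK_STORAGE_POLICY_ID_UNSPECIFIED):

    public class PolicyForQuotaSketch {
      static final byte UNSPECIFIED = 0; // stand-in for the HDFS constant

      // Mirrors the patched logic: a source inode that set its own policy keeps
      // it across the rename; an inheriting inode adopts the destination parent's.
      static byte policyForQuotaCheck(byte srcLocalPolicy, byte dstParentPolicy) {
        return srcLocalPolicy != UNSPECIFIED ? srcLocalPolicy : dstParentPolicy;
      }

      public static void main(String[] args) {
        final byte oneSsd = 10, hot = 7; // illustrative policy ids
        System.out.println(policyForQuotaCheck(oneSsd, hot));      // 10 - source policy wins
        System.out.println(policyForQuotaCheck(UNSPECIFIED, hot)); // 7  - parent fallback
      }
    }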




[hadoop] 04/16: HADOOP-18109. Ensure that default permissions of directories under internal ViewFS directories are the same as directories on target filesystems. Contributed by Chentao Yu. (3953)


stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 2be5a902dcf3dad59c09935a29129f1f861cb8cc
Author: Chentao Yu <ch...@linkedin.com>
AuthorDate: Thu Apr 15 17:46:40 2021 -0700

    HADOOP-18109. Ensure that default permissions of directories under internal ViewFS directories are the same as directories on target filesystems. Contributed by Chentao Yu. (3953)
    
    (cherry picked from commit 19d90e62fb28539f8c79bbb24f703301489825a6)
---
 .../org/apache/hadoop/fs/viewfs/ViewFileSystem.java   |  5 -----
 .../hadoop/fs/viewfs/TestViewFileSystemHdfs.java      | 19 +++++++++++++++++++
 2 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 7503edd45f4..8f333d1506b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1579,11 +1579,6 @@ public class ViewFileSystem extends FileSystem {
       throw readOnlyMountTable("mkdirs",  dir);
     }
 
-    @Override
-    public boolean mkdirs(Path dir) throws IOException {
-      return mkdirs(dir, null);
-    }
-
     @Override
     public FSDataInputStream open(Path f, int bufferSize)
         throws AccessControlException, FileNotFoundException, IOException {
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
index fcb52577d99..fdc746464f4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
@@ -479,4 +479,23 @@ public class TestViewFileSystemHdfs extends ViewFileSystemBaseTest {
     assertEquals("The owner did not match ", owner, userUgi.getShortUserName());
     otherfs.delete(user1Path, false);
   }
+
+  @Test
+  public void testInternalDirectoryPermissions() throws IOException {
+    LOG.info("Starting testInternalDirectoryPermissions!");
+    Configuration localConf = new Configuration(conf);
+    ConfigUtil.addLinkFallback(
+        localConf, new Path(targetTestRoot, "fallbackDir").toUri());
+    FileSystem fs = FileSystem.get(FsConstants.VIEWFS_URI, localConf);
+    // check that the default permissions on a sub-folder of an internal
+    // directory are the same as those created on non-internal directories.
+    Path subDirOfInternalDir = new Path("/internalDir/dir1");
+    fs.mkdirs(subDirOfInternalDir);
+
+    Path subDirOfRealDir = new Path("/internalDir/linkToDir2/dir1");
+    fs.mkdirs(subDirOfRealDir);
+
+    assertEquals(fs.getFileStatus(subDirOfInternalDir).getPermission(),
+        fs.getFileStatus(subDirOfRealDir).getPermission());
+  }
 }
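
With the mkdirs(Path) override gone, the default FileSystem#mkdirs(Path)
path supplies FsPermission.getDirDefault() and the target filesystem applies
the configured umask, so a directory created under an internal ViewFS dir
gets the same mode as one created directly on the target. A sketch of that
arithmetic (a standalone illustration under the usual client-side umask
handling, not code from the patch):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class DirDefaultModeSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // fs.permissions.umask-mode, 022 unless overridden
        FsPermission umask = FsPermission.getUMask(conf);
        // directories start from 0777 before the umask is applied
        FsPermission effective = FsPermission.getDirDefault().applyUMask(umask);
        System.out.println("default mkdirs mode: " + effective); // typically rwxr-xr-x
      }
    }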




[hadoop] 01/16: Make upstream aware of 3.3.2 release


stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 37a2bd88769c46358118b4384b0b1bf9ce621804
Author: Chao Sun <su...@apple.com>
AuthorDate: Wed Mar 2 17:22:56 2022 -0800

    Make upstream aware of 3.3.2 release
---
 .../site/markdown/release/3.3.2/CHANGELOG.3.3.2.md | 350 +++++++++
 .../markdown/release/3.3.2/RELEASENOTES.3.3.2.md   |  93 +++
 .../dev-support/jdiff/Apache_Hadoop_HDFS_3.3.2.xml | 835 +++++++++++++++++++++
 hadoop-project-dist/pom.xml                        |   2 +-
 4 files changed, 1279 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/3.3.2/CHANGELOG.3.3.2.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.3.2/CHANGELOG.3.3.2.md
new file mode 100644
index 00000000000..162f9928489
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.3.2/CHANGELOG.3.3.2.md
@@ -0,0 +1,350 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop Changelog
+
+## Release 3.3.2 - 2022-02-21
+
+
+
+### IMPORTANT ISSUES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HDFS-15814](https://issues.apache.org/jira/browse/HDFS-15814) | Make some parameters configurable for DataNodeDiskMetrics |  Major | hdfs | tomscut | tomscut |
+
+
+### NEW FEATURES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HDFS-15288](https://issues.apache.org/jira/browse/HDFS-15288) | Add Available Space Rack Fault Tolerant BPP |  Major | . | Ayush Saxena | Ayush Saxena |
+| [HDFS-16048](https://issues.apache.org/jira/browse/HDFS-16048) | RBF: Print network topology on the router web |  Minor | . | tomscut | tomscut |
+| [HDFS-16337](https://issues.apache.org/jira/browse/HDFS-16337) | Show start time of Datanode on Web |  Minor | . | tomscut | tomscut |
+| [HADOOP-17979](https://issues.apache.org/jira/browse/HADOOP-17979) | Interface EtagSource to allow FileStatus subclasses to provide etags |  Major | fs, fs/azure, fs/s3 | Steve Loughran | Steve Loughran |
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [YARN-10123](https://issues.apache.org/jira/browse/YARN-10123) | Error message around yarn app -stop/start can be improved to highlight that an implementation at framework level is needed for the stop/start functionality to work |  Minor | client, documentation | Siddharth Ahuja | Siddharth Ahuja |
+| [HADOOP-17756](https://issues.apache.org/jira/browse/HADOOP-17756) | Increase precommit job timeout from 20 hours to 24 hours. |  Major | build | Takanobu Asanuma | Takanobu Asanuma |
+| [HDFS-16073](https://issues.apache.org/jira/browse/HDFS-16073) | Remove redundant RPC requests for getFileLinkInfo in ClientNamenodeProtocolTranslatorPB |  Minor | . | lei w | lei w |
+| [HDFS-16074](https://issues.apache.org/jira/browse/HDFS-16074) | Remove an expensive debug string concatenation |  Major | . | Wei-Chiu Chuang | Wei-Chiu Chuang |
+| [HDFS-16080](https://issues.apache.org/jira/browse/HDFS-16080) | RBF: Invoking method in all locations should break the loop after successful result |  Minor | . | Viraj Jasani | Viraj Jasani |
+| [HDFS-16075](https://issues.apache.org/jira/browse/HDFS-16075) | Use empty array constants present in StorageType and DatanodeInfo to avoid creating redundant objects |  Major | . | Viraj Jasani | Viraj Jasani |
+| [MAPREDUCE-7354](https://issues.apache.org/jira/browse/MAPREDUCE-7354) | Use empty array constants present in TaskCompletionEvent to avoid creating redundant objects |  Minor | . | Viraj Jasani | Viraj Jasani |
+| [HDFS-16082](https://issues.apache.org/jira/browse/HDFS-16082) | Avoid non-atomic operations on exceptionsSinceLastBalance and failedTimesSinceLastSuccessfulBalance in Balancer |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HDFS-16076](https://issues.apache.org/jira/browse/HDFS-16076) | Avoid using slow DataNodes for reading by sorting locations |  Major | hdfs | tomscut | tomscut |
+| [HDFS-16085](https://issues.apache.org/jira/browse/HDFS-16085) | Move the getPermissionChecker out of the read lock |  Minor | . | tomscut | tomscut |
+| [YARN-10834](https://issues.apache.org/jira/browse/YARN-10834) | Intra-queue preemption: apps that don't use defined custom resource won't be preempted. |  Major | . | Eric Payne | Eric Payne |
+| [HADOOP-17777](https://issues.apache.org/jira/browse/HADOOP-17777) | Update clover-maven-plugin version from 3.3.0 to 4.4.1 |  Major | . | Wanqiang Ji | Wanqiang Ji |
+| [HDFS-16090](https://issues.apache.org/jira/browse/HDFS-16090) | Fine grained locking for datanodeNetworkCounts |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-17749](https://issues.apache.org/jira/browse/HADOOP-17749) | Remove lock contention in SelectorPool of SocketIOWithTimeout |  Major | common | Xuesen Liang | Xuesen Liang |
+| [HADOOP-17775](https://issues.apache.org/jira/browse/HADOOP-17775) | Remove JavaScript package from Docker environment |  Major | build | Masatake Iwasaki | Masatake Iwasaki |
+| [HADOOP-17402](https://issues.apache.org/jira/browse/HADOOP-17402) | Add GCS FS impl reference to core-default.xml |  Major | fs | Rafal Wojdyla | Rafal Wojdyla |
+| [HADOOP-17794](https://issues.apache.org/jira/browse/HADOOP-17794) | Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS |  Major | documentation, kms, security | Akira Ajisaka | Akira Ajisaka |
+| [HDFS-16122](https://issues.apache.org/jira/browse/HDFS-16122) | Fix DistCpContext#toString() |  Minor | . | tomscut | tomscut |
+| [HADOOP-12665](https://issues.apache.org/jira/browse/HADOOP-12665) | Document hadoop.security.token.service.use\_ip |  Major | documentation | Arpit Agarwal | Akira Ajisaka |
+| [YARN-10456](https://issues.apache.org/jira/browse/YARN-10456) | RM PartitionQueueMetrics records are named QueueMetrics in Simon metrics registry |  Major | resourcemanager | Eric Payne | Eric Payne |
+| [HDFS-15650](https://issues.apache.org/jira/browse/HDFS-15650) | Make the socket timeout for computing checksum of striped blocks configurable |  Minor | datanode, ec, erasure-coding | Yushi Hayasaka | Yushi Hayasaka |
+| [YARN-10858](https://issues.apache.org/jira/browse/YARN-10858) | [UI2] YARN-10826 breaks Queue view |  Major | yarn-ui-v2 | Andras Gyori | Masatake Iwasaki |
+| [HADOOP-16290](https://issues.apache.org/jira/browse/HADOOP-16290) | Enable RpcMetrics units to be configurable |  Major | ipc, metrics | Erik Krogen | Viraj Jasani |
+| [YARN-10860](https://issues.apache.org/jira/browse/YARN-10860) | Make max container per heartbeat configs refreshable |  Major | . | Eric Badger | Eric Badger |
+| [HADOOP-17813](https://issues.apache.org/jira/browse/HADOOP-17813) | Checkstyle - Allow line length: 100 |  Major | . | Akira Ajisaka | Viraj Jasani |
+| [HADOOP-17811](https://issues.apache.org/jira/browse/HADOOP-17811) | ABFS ExponentialRetryPolicy doesn't pick up configuration values |  Minor | documentation, fs/azure | Brian Frank Loss | Brian Frank Loss |
+| [HADOOP-17819](https://issues.apache.org/jira/browse/HADOOP-17819) | Add extensions to ProtobufRpcEngine RequestHeaderProto |  Major | common | Hector Sandoval Chaverri | Hector Sandoval Chaverri |
+| [HDFS-15936](https://issues.apache.org/jira/browse/HDFS-15936) | Solve BlockSender#sendPacket() does not record SocketTimeout exception |  Minor | . | JiangHua Zhu | JiangHua Zhu |
+| [HDFS-16153](https://issues.apache.org/jira/browse/HDFS-16153) | Avoid evaluation of LOG.debug statement in QuorumJournalManager |  Trivial | . | wangzhaohui | wangzhaohui |
+| [HDFS-16154](https://issues.apache.org/jira/browse/HDFS-16154) | TestMiniJournalCluster failing intermittently because of not reseting UserGroupInformation completely |  Minor | . | wangzhaohui | wangzhaohui |
+| [HADOOP-17837](https://issues.apache.org/jira/browse/HADOOP-17837) | Make it easier to debug UnknownHostExceptions from NetUtils.connect |  Minor | . | Bryan Beaudreault | Bryan Beaudreault |
+| [HDFS-16175](https://issues.apache.org/jira/browse/HDFS-16175) | Improve the configurable value of Server #PURGE\_INTERVAL\_NANOS |  Major | ipc | JiangHua Zhu | JiangHua Zhu |
+| [HDFS-16173](https://issues.apache.org/jira/browse/HDFS-16173) | Improve CopyCommands#Put#executor queue configurability |  Major | fs | JiangHua Zhu | JiangHua Zhu |
+| [HADOOP-17897](https://issues.apache.org/jira/browse/HADOOP-17897) | Allow nested blocks in switch case in checkstyle settings |  Minor | build | Masatake Iwasaki | Masatake Iwasaki |
+| [HADOOP-17857](https://issues.apache.org/jira/browse/HADOOP-17857) | Check real user ACLs in addition to proxied user ACLs |  Major | . | Eric Payne | Eric Payne |
+| [HDFS-16210](https://issues.apache.org/jira/browse/HDFS-16210) | RBF: Add the option of refreshCallQueue to RouterAdmin |  Major | . | Janus Chow | Janus Chow |
+| [HDFS-16221](https://issues.apache.org/jira/browse/HDFS-16221) | RBF: Add usage of refreshCallQueue for Router |  Major | . | Janus Chow | Janus Chow |
+| [HDFS-16223](https://issues.apache.org/jira/browse/HDFS-16223) | AvailableSpaceRackFaultTolerantBlockPlacementPolicy should use chooseRandomWithStorageTypeTwoTrial() for better performance. |  Major | . | Ayush Saxena | Ayush Saxena |
+| [HADOOP-17893](https://issues.apache.org/jira/browse/HADOOP-17893) | Improve PrometheusSink for Namenode TopMetrics |  Major | metrics | Max  Xie | Max  Xie |
+| [HADOOP-17926](https://issues.apache.org/jira/browse/HADOOP-17926) | Maven-eclipse-plugin is no longer needed since Eclipse can import Maven projects by itself. |  Minor | documentation | Rintaro Ikeda | Rintaro Ikeda |
+| [YARN-10935](https://issues.apache.org/jira/browse/YARN-10935) | AM Total Queue Limit goes below per-user AM Limit if parent is full. |  Major | capacity scheduler, capacityscheduler | Eric Payne | Eric Payne |
+| [HADOOP-17939](https://issues.apache.org/jira/browse/HADOOP-17939) | Support building on Apple Silicon |  Major | build, common | Dongjoon Hyun | Dongjoon Hyun |
+| [HADOOP-17941](https://issues.apache.org/jira/browse/HADOOP-17941) | Update xerces to 2.12.1 |  Minor | . | Zhongwei Zhu | Zhongwei Zhu |
+| [HDFS-16246](https://issues.apache.org/jira/browse/HDFS-16246) | Print lockWarningThreshold in InstrumentedLock#logWarning and InstrumentedLock#logWaitWarning |  Minor | . | tomscut | tomscut |
+| [HDFS-16252](https://issues.apache.org/jira/browse/HDFS-16252) | Correct docs for dfs.http.client.retry.policy.spec |  Major | . | Stephen O'Donnell | Stephen O'Donnell |
+| [HDFS-16241](https://issues.apache.org/jira/browse/HDFS-16241) | Standby close reconstruction thread |  Major | . | zhanghuazong | zhanghuazong |
+| [HADOOP-17974](https://issues.apache.org/jira/browse/HADOOP-17974) | Fix the import statements in hadoop-aws module |  Minor | build, fs/azure | Tamas Domok |  |
+| [HDFS-16277](https://issues.apache.org/jira/browse/HDFS-16277) | Improve decision in AvailableSpaceBlockPlacementPolicy |  Major | block placement | guophilipse | guophilipse |
+| [HADOOP-17770](https://issues.apache.org/jira/browse/HADOOP-17770) | WASB : Support disabling buffered reads in positional reads |  Major | . | Anoop Sam John | Anoop Sam John |
+| [HDFS-16282](https://issues.apache.org/jira/browse/HDFS-16282) | Duplicate generic usage information to hdfs debug command |  Minor | tools | daimin | daimin |
+| [YARN-1115](https://issues.apache.org/jira/browse/YARN-1115) | Provide optional means for a scheduler to check real user ACLs |  Major | capacity scheduler, scheduler | Eric Payne |  |
+| [HDFS-16279](https://issues.apache.org/jira/browse/HDFS-16279) | Print detail datanode info when process first storage report |  Minor | . | tomscut | tomscut |
+| [HDFS-16286](https://issues.apache.org/jira/browse/HDFS-16286) | Debug tool to verify the correctness of erasure coding on file |  Minor | erasure-coding, tools | daimin | daimin |
+| [HDFS-16294](https://issues.apache.org/jira/browse/HDFS-16294) | Remove invalid DataNode#CONFIG\_PROPERTY\_SIMULATED |  Major | datanode | JiangHua Zhu | JiangHua Zhu |
+| [HDFS-16299](https://issues.apache.org/jira/browse/HDFS-16299) | Fix bug for TestDataNodeVolumeMetrics#verifyDataNodeVolumeMetrics |  Minor | . | tomscut | tomscut |
+| [HDFS-16301](https://issues.apache.org/jira/browse/HDFS-16301) | Improve BenchmarkThroughput#SIZE naming standardization |  Minor | benchmarks, test | JiangHua Zhu | JiangHua Zhu |
+| [HDFS-16287](https://issues.apache.org/jira/browse/HDFS-16287) | Support to make dfs.namenode.avoid.read.slow.datanode  reconfigurable |  Major | . | Haiyang Hu | Haiyang Hu |
+| [HDFS-16321](https://issues.apache.org/jira/browse/HDFS-16321) | Fix invalid config in TestAvailableSpaceRackFaultTolerantBPP |  Minor | test | guophilipse | guophilipse |
+| [HDFS-16315](https://issues.apache.org/jira/browse/HDFS-16315) | Add metrics related to Transfer and NativeCopy for DataNode |  Major | . | tomscut | tomscut |
+| [HADOOP-17998](https://issues.apache.org/jira/browse/HADOOP-17998) | Allow get command to run with multi threads. |  Major | fs | Chengwei Wang | Chengwei Wang |
+| [HDFS-16344](https://issues.apache.org/jira/browse/HDFS-16344) | Improve DirectoryScanner.Stats#toString |  Major | . | tomscut | tomscut |
+| [HADOOP-18023](https://issues.apache.org/jira/browse/HADOOP-18023) | Allow cp command to run with multi threads. |  Major | fs | Chengwei Wang | Chengwei Wang |
+| [HDFS-16314](https://issues.apache.org/jira/browse/HDFS-16314) | Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable |  Major | . | Haiyang Hu | Haiyang Hu |
+| [HADOOP-18026](https://issues.apache.org/jira/browse/HADOOP-18026) | Fix default value of Magic committer |  Minor | common | guophilipse | guophilipse |
+| [HDFS-16345](https://issues.apache.org/jira/browse/HDFS-16345) | Fix test cases fail in TestBlockStoragePolicy |  Major | build | guophilipse | guophilipse |
+| [HADOOP-18040](https://issues.apache.org/jira/browse/HADOOP-18040) | Use maven.test.failure.ignore instead of ignoreTestFailure |  Major | build | Akira Ajisaka | Akira Ajisaka |
+| [HADOOP-17643](https://issues.apache.org/jira/browse/HADOOP-17643) | WASB : Make metadata checks case insensitive |  Major | . | Anoop Sam John | Anoop Sam John |
+| [HADOOP-18033](https://issues.apache.org/jira/browse/HADOOP-18033) | Upgrade fasterxml Jackson to 2.13.0 |  Major | build | Akira Ajisaka | Viraj Jasani |
+| [HDFS-16327](https://issues.apache.org/jira/browse/HDFS-16327) | Make dfs.namenode.max.slowpeer.collect.nodes reconfigurable |  Major | . | tomscut | tomscut |
+| [HDFS-16375](https://issues.apache.org/jira/browse/HDFS-16375) | The FBR lease ID should be exposed to the log |  Major | . | tomscut | tomscut |
+| [HDFS-16386](https://issues.apache.org/jira/browse/HDFS-16386) | Reduce DataNode load when FsDatasetAsyncDiskService is working |  Major | datanode | JiangHua Zhu | JiangHua Zhu |
+| [HDFS-16391](https://issues.apache.org/jira/browse/HDFS-16391) | Avoid evaluation of LOG.debug statement in NameNodeHeartbeatService |  Trivial | . | wangzhaohui | wangzhaohui |
+| [YARN-8234](https://issues.apache.org/jira/browse/YARN-8234) | Improve RM system metrics publisher's performance by pushing events to timeline server in batch |  Critical | resourcemanager, timelineserver | Hu Ziqian | Ashutosh Gupta |
+| [HADOOP-18052](https://issues.apache.org/jira/browse/HADOOP-18052) | Support Apple Silicon in start-build-env.sh |  Major | build | Akira Ajisaka | Akira Ajisaka |
+| [HADOOP-18056](https://issues.apache.org/jira/browse/HADOOP-18056) | DistCp: Filter duplicates in the source paths |  Major | . | Ayush Saxena | Ayush Saxena |
+| [HADOOP-18065](https://issues.apache.org/jira/browse/HADOOP-18065) | ExecutorHelper.logThrowableFromAfterExecute() is too noisy. |  Minor | . | Mukund Thakur | Mukund Thakur |
+| [HDFS-16043](https://issues.apache.org/jira/browse/HDFS-16043) | Add markedDeleteBlockScrubberThread to delete blocks asynchronously |  Major | hdfs, namanode | Xiangyi Zhu | Xiangyi Zhu |
+| [HADOOP-18094](https://issues.apache.org/jira/browse/HADOOP-18094) | Disable S3A auditing by default. |  Blocker | fs/s3 | Steve Loughran | Steve Loughran |
+
+
+### BUG FIXES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [YARN-10438](https://issues.apache.org/jira/browse/YARN-10438) | Handle null containerId in ClientRMService#getContainerReport() |  Major | resourcemanager | Raghvendra Singh | Shubham Gupta |
+| [YARN-10428](https://issues.apache.org/jira/browse/YARN-10428) | Zombie applications in the YARN queue using FAIR + sizebasedweight |  Critical | capacityscheduler | Guang Yang | Andras Gyori |
+| [HDFS-15916](https://issues.apache.org/jira/browse/HDFS-15916) | DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff |  Major | distcp | Srinivasu Majeti | Ayush Saxena |
+| [HDFS-15977](https://issues.apache.org/jira/browse/HDFS-15977) | Call explicit\_bzero only if it is available |  Major | libhdfs++ | Akira Ajisaka | Akira Ajisaka |
+| [HADOOP-14922](https://issues.apache.org/jira/browse/HADOOP-14922) | Build of Mapreduce Native Task module fails with unknown opcode "bswap" |  Major | . | Anup Halarnkar | Anup Halarnkar |
+| [HADOOP-17700](https://issues.apache.org/jira/browse/HADOOP-17700) | ExitUtil#halt info log should log HaltException |  Major | . | Viraj Jasani | Viraj Jasani |
+| [YARN-10770](https://issues.apache.org/jira/browse/YARN-10770) | container-executor permission is wrong in SecureContainer.md |  Major | documentation | Akira Ajisaka | Siddharth Ahuja |
+| [YARN-10691](https://issues.apache.org/jira/browse/YARN-10691) | DominantResourceCalculator isInvalidDivisor should consider only countable resource types |  Major | . | Bilwa S T | Bilwa S T |
+| [HDFS-16031](https://issues.apache.org/jira/browse/HDFS-16031) | Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap |  Major | . | Narges Shadab | Narges Shadab |
+| [MAPREDUCE-7348](https://issues.apache.org/jira/browse/MAPREDUCE-7348) | TestFrameworkUploader#testNativeIO fails |  Major | test | Akira Ajisaka | Akira Ajisaka |
+| [HDFS-15915](https://issues.apache.org/jira/browse/HDFS-15915) | Race condition with async edits logging due to updating txId outside of the namesystem log |  Major | hdfs, namenode | Konstantin Shvachko | Konstantin Shvachko |
+| [HDFS-16040](https://issues.apache.org/jira/browse/HDFS-16040) | RpcQueueTime metric counts requeued calls as unique events. |  Major | hdfs | Simbarashe Dzinamarira | Simbarashe Dzinamarira |
+| [MAPREDUCE-7287](https://issues.apache.org/jira/browse/MAPREDUCE-7287) | Distcp will delete existing file ,  If we use "-delete and -update" options and distcp file. |  Major | distcp | zhengchenyu | zhengchenyu |
+| [HDFS-15998](https://issues.apache.org/jira/browse/HDFS-15998) | Fix NullPointException In listOpenFiles |  Major | . | Haiyang Hu | Haiyang Hu |
+| [HDFS-16050](https://issues.apache.org/jira/browse/HDFS-16050) | Some dynamometer tests fail |  Major | test | Akira Ajisaka | Akira Ajisaka |
+| [HADOOP-17631](https://issues.apache.org/jira/browse/HADOOP-17631) | Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when restrictSystemProps=true |  Minor | common | Steve Loughran | Steve Loughran |
+| [YARN-10809](https://issues.apache.org/jira/browse/YARN-10809) | testWithHbaseConfAtHdfsFileSystem consistently failing |  Major | . | Viraj Jasani | Viraj Jasani |
+| [YARN-10803](https://issues.apache.org/jira/browse/YARN-10803) | [JDK 11] TestRMFailoverProxyProvider and TestNoHaRMFailoverProxyProvider fails by ClassCastException |  Major | test | Akira Ajisaka | Akira Ajisaka |
+| [HDFS-16057](https://issues.apache.org/jira/browse/HDFS-16057) | Make sure the order for location in ENTERING\_MAINTENANCE state |  Minor | . | tomscut | tomscut |
+| [HDFS-16055](https://issues.apache.org/jira/browse/HDFS-16055) | Quota is not preserved in snapshot INode |  Major | hdfs | Siyao Meng | Siyao Meng |
+| [HDFS-16068](https://issues.apache.org/jira/browse/HDFS-16068) | WebHdfsFileSystem has a possible connection leak in connection with HttpFS |  Major | . | Takanobu Asanuma | Takanobu Asanuma |
+| [YARN-10767](https://issues.apache.org/jira/browse/YARN-10767) | Yarn Logs Command retrying on Standby RM for 30 times |  Major | . | D M Murali Krishna Reddy | D M Murali Krishna Reddy |
+| [HADOOP-17760](https://issues.apache.org/jira/browse/HADOOP-17760) | Delete hadoop.ssl.enabled and dfs.https.enable from docs and core-default.xml |  Major | documentation | Takanobu Asanuma | Takanobu Asanuma |
+| [HDFS-13671](https://issues.apache.org/jira/browse/HDFS-13671) | Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet |  Major | . | Yiqun Lin | Haibin Huang |
+| [HDFS-16061](https://issues.apache.org/jira/browse/HDFS-16061) | DFTestUtil.waitReplication can produce false positives |  Major | hdfs | Ahmed Hussein | Ahmed Hussein |
+| [HDFS-14575](https://issues.apache.org/jira/browse/HDFS-14575) | LeaseRenewer#daemon threads leak in DFSClient |  Major | . | Tao Yang | Renukaprasad C |
+| [YARN-10826](https://issues.apache.org/jira/browse/YARN-10826) | [UI2] Upgrade Node.js to at least v12.22.1 |  Major | yarn-ui-v2 | Akira Ajisaka | Masatake Iwasaki |
+| [HADOOP-17769](https://issues.apache.org/jira/browse/HADOOP-17769) | Upgrade JUnit to 4.13.2 |  Major | . | Ahmed Hussein | Ahmed Hussein |
+| [YARN-10824](https://issues.apache.org/jira/browse/YARN-10824) | Title not set for JHS and NM webpages |  Major | . | Rajshree Mishra | Bilwa S T |
+| [HDFS-16092](https://issues.apache.org/jira/browse/HDFS-16092) | Avoid creating LayoutFlags redundant objects |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-17764](https://issues.apache.org/jira/browse/HADOOP-17764) | S3AInputStream read does not re-open the input stream on the second read retry attempt |  Major | fs/s3 | Zamil Majdy | Zamil Majdy |
+| [HDFS-16109](https://issues.apache.org/jira/browse/HDFS-16109) | Fix flaky some unit tests since they offen timeout |  Minor | test | tomscut | tomscut |
+| [HDFS-16108](https://issues.apache.org/jira/browse/HDFS-16108) | Incorrect log placeholders used in JournalNodeSyncer |  Minor | . | Viraj Jasani | Viraj Jasani |
+| [MAPREDUCE-7353](https://issues.apache.org/jira/browse/MAPREDUCE-7353) | Mapreduce job fails when NM is stopped |  Major | . | Bilwa S T | Bilwa S T |
+| [HDFS-16121](https://issues.apache.org/jira/browse/HDFS-16121) | Iterative snapshot diff report can generate duplicate records for creates, deletes and Renames |  Major | snapshots | Srinivasu Majeti | Shashikant Banerjee |
+| [HDFS-15796](https://issues.apache.org/jira/browse/HDFS-15796) | ConcurrentModificationException error happens on NameNode occasionally |  Critical | hdfs | Daniel Ma | Daniel Ma |
+| [HADOOP-17793](https://issues.apache.org/jira/browse/HADOOP-17793) | Better token validation |  Major | . | Artem Smotrakov | Artem Smotrakov |
+| [HDFS-16042](https://issues.apache.org/jira/browse/HDFS-16042) | DatanodeAdminMonitor scan should be delay based |  Major | datanode | Ahmed Hussein | Ahmed Hussein |
+| [HADOOP-17803](https://issues.apache.org/jira/browse/HADOOP-17803) | Remove WARN logging from LoggingAuditor when executing a request outside an audit span |  Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh |
+| [HDFS-16127](https://issues.apache.org/jira/browse/HDFS-16127) | Improper pipeline close recovery causes a permanent write failure or data loss. |  Major | . | Kihwal Lee | Kihwal Lee |
+| [HADOOP-17028](https://issues.apache.org/jira/browse/HADOOP-17028) | ViewFS should initialize target filesystems lazily |  Major | client-mounts, fs, viewfs | Uma Maheswara Rao G | Abhishek Das |
+| [HADOOP-17801](https://issues.apache.org/jira/browse/HADOOP-17801) | No error message reported when bucket doesn't exist in S3AFS |  Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh |
+| [HADOOP-17796](https://issues.apache.org/jira/browse/HADOOP-17796) | Upgrade jetty version to 9.4.43 |  Major | . | Wei-Chiu Chuang | Renukaprasad C |
+| [HDFS-12920](https://issues.apache.org/jira/browse/HDFS-12920) | HDFS default value change (with adding time unit) breaks old version MR tarball work with Hadoop 3.x |  Critical | configuration, hdfs | Junping Du | Akira Ajisaka |
+| [HDFS-16145](https://issues.apache.org/jira/browse/HDFS-16145) | CopyListing fails with FNF exception with snapshot diff |  Major | distcp | Shashikant Banerjee | Shashikant Banerjee |
+| [YARN-10813](https://issues.apache.org/jira/browse/YARN-10813) | Set default capacity of root for node labels |  Major | . | Andras Gyori | Andras Gyori |
+| [HDFS-16144](https://issues.apache.org/jira/browse/HDFS-16144) | Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions) |  Major | . | Stephen O'Donnell | Stephen O'Donnell |
+| [HADOOP-17817](https://issues.apache.org/jira/browse/HADOOP-17817) | S3A to raise IOE if both S3-CSE and S3Guard enabled |  Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh |
+| [YARN-9551](https://issues.apache.org/jira/browse/YARN-9551) | TestTimelineClientV2Impl.testSyncCall fails intermittently |  Minor | ATSv2, test | Prabhu Joseph | Andras Gyori |
+| [HDFS-15175](https://issues.apache.org/jira/browse/HDFS-15175) | Multiple CloseOp shared block instance causes the standby namenode to crash when rolling editlog |  Critical | . | Yicong Cai | Wan Chang |
+| [YARN-10869](https://issues.apache.org/jira/browse/YARN-10869) | CS considers only the default maximum-allocation-mb/vcore property as a maximum when it creates dynamic queues |  Major | capacity scheduler | Benjamin Teke | Benjamin Teke |
+| [YARN-10789](https://issues.apache.org/jira/browse/YARN-10789) | RM HA startup can fail due to race conditions in ZKConfigurationStore |  Major | . | Tarun Parimi | Tarun Parimi |
+| [HADOOP-17812](https://issues.apache.org/jira/browse/HADOOP-17812) | NPE in S3AInputStream read() after failure to reconnect to store |  Major | fs/s3 | Bobby Wang | Bobby Wang |
+| [YARN-6221](https://issues.apache.org/jira/browse/YARN-6221) | Entities missing from ATS when summary log file info got returned to the ATS before the domain log |  Critical | yarn | Sushmitha Sreenivasan | Xiaomin Zhang |
+| [MAPREDUCE-7258](https://issues.apache.org/jira/browse/MAPREDUCE-7258) | HistoryServerRest.html#Task\_Counters\_API, modify the jobTaskCounters's itemName from "taskcounterGroup" to "taskCounterGroup". |  Minor | documentation | jenny | jenny |
+| [HADOOP-17370](https://issues.apache.org/jira/browse/HADOOP-17370) | Upgrade commons-compress to 1.21 |  Major | common | Dongjoon Hyun | Akira Ajisaka |
+| [HDFS-16151](https://issues.apache.org/jira/browse/HDFS-16151) | Improve the parameter comments related to ProtobufRpcEngine2#Server() |  Minor | documentation | JiangHua Zhu | JiangHua Zhu |
+| [HADOOP-17844](https://issues.apache.org/jira/browse/HADOOP-17844) | Upgrade JSON smart to 2.4.7 |  Major | . | Renukaprasad C | Renukaprasad C |
+| [HDFS-16177](https://issues.apache.org/jira/browse/HDFS-16177) | Bug fix for Util#receiveFile |  Minor | . | tomscut | tomscut |
+| [YARN-10814](https://issues.apache.org/jira/browse/YARN-10814) | YARN shouldn't start with empty hadoop.http.authentication.signature.secret.file |  Major | . | Benjamin Teke | Tamas Domok |
+| [HADOOP-17858](https://issues.apache.org/jira/browse/HADOOP-17858) | Avoid possible class loading deadlock with VerifierNone initialization |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-17869](https://issues.apache.org/jira/browse/HADOOP-17869) | fs.s3a.connection.maximum should be bigger than fs.s3a.threads.max |  Major | common | Dongjoon Hyun | Dongjoon Hyun |
+| [HADOOP-17886](https://issues.apache.org/jira/browse/HADOOP-17886) | Upgrade ant to 1.10.11 |  Major | . | Ahmed Hussein | Ahmed Hussein |
+| [HADOOP-17874](https://issues.apache.org/jira/browse/HADOOP-17874) | ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-15129](https://issues.apache.org/jira/browse/HADOOP-15129) | Datanode caches namenode DNS lookup failure and cannot startup |  Minor | ipc | Karthik Palaniappan | Chris Nauroth |
+| [HADOOP-17870](https://issues.apache.org/jira/browse/HADOOP-17870) | HTTP Filesystem to qualify paths in open()/getFileStatus() |  Minor | fs | VinothKumar Raman | VinothKumar Raman |
+| [HADOOP-17899](https://issues.apache.org/jira/browse/HADOOP-17899) | Avoid using implicit dependency on junit-jupiter-api |  Major | test | Masatake Iwasaki | Masatake Iwasaki |
+| [YARN-10901](https://issues.apache.org/jira/browse/YARN-10901) | Permission checking error on an existing directory in LogAggregationFileController#verifyAndCreateRemoteLogDir |  Major | nodemanager | Tamas Domok | Tamas Domok |
+| [HADOOP-17804](https://issues.apache.org/jira/browse/HADOOP-17804) | Prometheus metrics only include the last set of labels |  Major | common | Adam Binford | Adam Binford |
+| [HDFS-16207](https://issues.apache.org/jira/browse/HDFS-16207) | Remove NN logs stack trace for non-existent xattr query |  Major | namenode | Ahmed Hussein | Ahmed Hussein |
+| [HDFS-16187](https://issues.apache.org/jira/browse/HDFS-16187) | SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing |  Major | snapshots | Srinivasu Majeti | Shashikant Banerjee |
+| [HDFS-16198](https://issues.apache.org/jira/browse/HDFS-16198) | Short circuit read leaks Slot objects when InvalidToken exception is thrown |  Major | . | Eungsop Yoo | Eungsop Yoo |
+| [YARN-10870](https://issues.apache.org/jira/browse/YARN-10870) | Missing user filtering check -\> yarn.webapp.filter-entity-list-by-user for RM Scheduler page |  Major | yarn | Siddharth Ahuja | Gergely Pollák |
+| [HADOOP-17891](https://issues.apache.org/jira/browse/HADOOP-17891) | lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries |  Major | . | L. C. Hsieh | L. C. Hsieh |
+| [HADOOP-17919](https://issues.apache.org/jira/browse/HADOOP-17919) | Fix command line example in Hadoop Cluster Setup documentation |  Minor | documentation | Rintaro Ikeda | Rintaro Ikeda |
+| [YARN-9606](https://issues.apache.org/jira/browse/YARN-9606) | Set sslfactory for AuthenticatedURL() while creating LogsCLI#webServiceClient |  Major | . | Bilwa S T | Bilwa S T |
+| [HDFS-16233](https://issues.apache.org/jira/browse/HDFS-16233) | Do not use exception handler to implement copy-on-write for EnumCounters |  Major | namenode | Wei-Chiu Chuang | Wei-Chiu Chuang |
+| [HDFS-16235](https://issues.apache.org/jira/browse/HDFS-16235) | Deadlock in LeaseRenewer for static remove method |  Major | hdfs | angerszhu | angerszhu |
+| [HADOOP-17940](https://issues.apache.org/jira/browse/HADOOP-17940) | Upgrade Kafka to 2.8.1 |  Major | . | Takanobu Asanuma | Takanobu Asanuma |
+| [YARN-10970](https://issues.apache.org/jira/browse/YARN-10970) | Standby RM should expose prom endpoint |  Major | resourcemanager | Max Xie | Max Xie |
+| [HADOOP-17934](https://issues.apache.org/jira/browse/HADOOP-17934) | NullPointerException when no HTTP response set on AbfsRestOperation |  Major | fs/azure | Josh Elser | Josh Elser |
+| [HDFS-16181](https://issues.apache.org/jira/browse/HDFS-16181) | [SBN Read] Fix metric of RpcRequestCacheMissAmount that can't be displayed when tailEditLog from JN |  Critical | . | wangzhaohui | wangzhaohui |
+| [HADOOP-17922](https://issues.apache.org/jira/browse/HADOOP-17922) | Lookup old S3 encryption configs for JCEKS |  Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh |
+| [HADOOP-17925](https://issues.apache.org/jira/browse/HADOOP-17925) | BUILDING.txt should not encourage to activate docs profile on building binary artifacts |  Minor | documentation | Rintaro Ikeda | Masatake Iwasaki |
+| [HADOOP-16532](https://issues.apache.org/jira/browse/HADOOP-16532) | Fix TestViewFsTrash to use the correct homeDir. |  Minor | test, viewfs | Steve Loughran | Xing Lin |
+| [HDFS-16268](https://issues.apache.org/jira/browse/HDFS-16268) | Balancer stuck when moving striped blocks due to NPE |  Major | balancer & mover, erasure-coding | Leon Gao | Leon Gao |
+| [HDFS-16271](https://issues.apache.org/jira/browse/HDFS-16271) | RBF: NullPointerException when setQuota through routers with quota disabled |  Major | . | Chengwei Wang | Chengwei Wang |
+| [YARN-10976](https://issues.apache.org/jira/browse/YARN-10976) | Fix resource leak due to Files.walk |  Minor | . | lujie | lujie |
+| [HADOOP-17932](https://issues.apache.org/jira/browse/HADOOP-17932) | Distcp file length comparison has no effect |  Major | common, tools, tools/distcp | yinan zhan | yinan zhan |
+| [HDFS-16272](https://issues.apache.org/jira/browse/HDFS-16272) | Int overflow in computing safe length during EC block recovery |  Critical | 3.1.1 | daimin | daimin |
+| [HADOOP-17953](https://issues.apache.org/jira/browse/HADOOP-17953) | S3A: ITestS3AFileContextStatistics test to lookup global or per-bucket configuration for encryption algorithm |  Minor | fs/s3 | Mehakmeet Singh | Mehakmeet Singh |
+| [HADOOP-17971](https://issues.apache.org/jira/browse/HADOOP-17971) | Exclude IBM Java security classes from being shaded/relocated |  Major | build | Nicholas Marion | Nicholas Marion |
+| [HDFS-7612](https://issues.apache.org/jira/browse/HDFS-7612) | TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir |  Major | test | Konstantin Shvachko | Michael Kuchenbecker |
+| [HDFS-16269](https://issues.apache.org/jira/browse/HDFS-16269) | [Fix] Improve NNThroughputBenchmark#blockReport operation |  Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu |
+| [HADOOP-17945](https://issues.apache.org/jira/browse/HADOOP-17945) | JsonSerialization raises EOFException reading JSON data stored on google GCS |  Major | fs | Steve Loughran | Steve Loughran |
+| [HDFS-16259](https://issues.apache.org/jira/browse/HDFS-16259) | Catch and re-throw sub-classes of AccessControlException thrown by any permission provider plugins (eg Ranger) |  Major | namenode | Stephen O'Donnell | Stephen O'Donnell |
+| [HADOOP-17988](https://issues.apache.org/jira/browse/HADOOP-17988) | Disable JIRA plugin for YETUS on Hadoop |  Critical | build | Gautham Banasandra | Gautham Banasandra |
+| [HDFS-16311](https://issues.apache.org/jira/browse/HDFS-16311) | Metric metadataOperationRate calculation error in DataNodeVolumeMetrics |  Major | . | tomscut | tomscut |
+| [HADOOP-18002](https://issues.apache.org/jira/browse/HADOOP-18002) | abfs rename idempotency broken -remove recovery |  Major | fs/azure | Steve Loughran | Steve Loughran |
+| [HDFS-16182](https://issues.apache.org/jira/browse/HDFS-16182) | numOfReplicas is given the wrong value in BlockPlacementPolicyDefault$chooseTarget, which can cause DataStreamer to fail with Heterogeneous Storage |  Major | namenode | Max Xie | Max Xie |
+| [HADOOP-17999](https://issues.apache.org/jira/browse/HADOOP-17999) | No-op implementation of setWriteChecksum and setVerifyChecksum in ViewFileSystem |  Major | . | Abhishek Das | Abhishek Das |
+| [HDFS-16329](https://issues.apache.org/jira/browse/HDFS-16329) | Fix log format for BlockManager |  Minor | . | tomscut | tomscut |
+| [HDFS-16330](https://issues.apache.org/jira/browse/HDFS-16330) | Fix incorrect placeholder for Exception logs in DiskBalancer |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HDFS-16328](https://issues.apache.org/jira/browse/HDFS-16328) | Correct disk balancer param desc |  Minor | documentation, hdfs | guophilipse | guophilipse |
+| [HDFS-16334](https://issues.apache.org/jira/browse/HDFS-16334) | Correct NameNode ACL description |  Minor | documentation | guophilipse | guophilipse |
+| [HDFS-16343](https://issues.apache.org/jira/browse/HDFS-16343) | Add some debug logs when the dfsUsed are not used during Datanode startup |  Major | datanode | Mukul Kumar Singh | Mukul Kumar Singh |
+| [YARN-10991](https://issues.apache.org/jira/browse/YARN-10991) | Fix to ignore the grouping "[]" for resourcesStr in parseResourcesString method |  Minor | distributed-shell | Ashutosh Gupta | Ashutosh Gupta |
+| [HADOOP-17975](https://issues.apache.org/jira/browse/HADOOP-17975) | Fallback to simple auth does not work for a secondary DistributedFileSystem instance |  Major | ipc | István Fajth | István Fajth |
+| [HDFS-16350](https://issues.apache.org/jira/browse/HDFS-16350) | Datanode start time should be set after RPC server starts successfully |  Minor | . | Viraj Jasani | Viraj Jasani |
+| [YARN-11007](https://issues.apache.org/jira/browse/YARN-11007) | Correct words in YARN documents |  Minor | documentation | guophilipse | guophilipse |
+| [YARN-10975](https://issues.apache.org/jira/browse/YARN-10975) | EntityGroupFSTimelineStore#ActiveLogParser parses already processed files |  Major | timelineserver | Prabhu Joseph | Ravuri Sushma sree |
+| [HDFS-16332](https://issues.apache.org/jira/browse/HDFS-16332) | Expired block token causes slow read due to missing handling in sasl handshake |  Major | datanode, dfs, dfsclient | Shinya Yoshida | Shinya Yoshida |
+| [HDFS-16293](https://issues.apache.org/jira/browse/HDFS-16293) | Client sleeps and holds 'dataQueue' when DataNodes are congested |  Major | hdfs-client | Yuanxin Zhu | Yuanxin Zhu |
+| [YARN-9063](https://issues.apache.org/jira/browse/YARN-9063) | ATS 1.5 fails to start if RollingLevelDb files are corrupt or missing |  Major | timelineserver, timelineservice | Tarun Parimi | Ashutosh Gupta |
+| [HDFS-16333](https://issues.apache.org/jira/browse/HDFS-16333) | Fix balancer bug when transferring an EC block |  Major | balancer & mover, erasure-coding | qinyuren | qinyuren |
+| [YARN-11020](https://issues.apache.org/jira/browse/YARN-11020) | [UI2] No container is found for an application attempt with a single AM container |  Major | yarn-ui-v2 | Andras Gyori | Andras Gyori |
+| [HDFS-16373](https://issues.apache.org/jira/browse/HDFS-16373) | Fix MiniDFSCluster restart in case of multiple namenodes |  Major | . | Ayush Saxena | Ayush Saxena |
+| [HADOOP-18048](https://issues.apache.org/jira/browse/HADOOP-18048) | [branch-3.3] Dockerfile\_aarch64 build fails with fatal error: Python.h: No such file or directory |  Major | . | Siyao Meng | Siyao Meng |
+| [HDFS-16377](https://issues.apache.org/jira/browse/HDFS-16377) | Should CheckNotNull before accessing FsDatasetSpi |  Major | . | tomscut | tomscut |
+| [YARN-6862](https://issues.apache.org/jira/browse/YARN-6862) | Nodemanager resource usage metrics sometimes are negative |  Major | nodemanager | YunFan Zhou | Benjamin Teke |
+| [HADOOP-13500](https://issues.apache.org/jira/browse/HADOOP-13500) | Synchronizing iteration of Configuration properties object |  Major | conf | Jason Darrell Lowe | Dhananjay Badaya |
+| [YARN-10178](https://issues.apache.org/jira/browse/YARN-10178) | Global Scheduler async thread crash caused by 'Comparison method violates its general contract |  Major | capacity scheduler | tuyu | Andras Gyori |
+| [YARN-11053](https://issues.apache.org/jira/browse/YARN-11053) | AuxService should not use class name as default system classes |  Major | auxservices | Cheng Pan | Cheng Pan |
+| [HDFS-16395](https://issues.apache.org/jira/browse/HDFS-16395) | Remove useless NNThroughputBenchmark#dummyActionNoSynch() |  Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu |
+| [HADOOP-18045](https://issues.apache.org/jira/browse/HADOOP-18045) | Disable TestDynamometerInfra |  Major | test | Akira Ajisaka | Akira Ajisaka |
+| [HDFS-14099](https://issues.apache.org/jira/browse/HDFS-14099) | Unknown frame descriptor when decompressing multiple frames in ZStandardDecompressor |  Major | . | xuzq | xuzq |
+| [HADOOP-18063](https://issues.apache.org/jira/browse/HADOOP-18063) | Remove unused import AbstractJavaKeyStoreProvider in Shell class |  Minor | . | JiangHua Zhu | JiangHua Zhu |
+| [HDFS-16409](https://issues.apache.org/jira/browse/HDFS-16409) | Fix typo: testHasExeceptionsReturnsCorrectValue -\> testHasExceptionsReturnsCorrectValue |  Trivial | . | Ashutosh Gupta | Ashutosh Gupta |
+| [HDFS-16408](https://issues.apache.org/jira/browse/HDFS-16408) | Ensure LeaseRecheckIntervalMs is greater than zero |  Major | namenode | Jingxuan Fu | Jingxuan Fu |
+| [HDFS-16410](https://issues.apache.org/jira/browse/HDFS-16410) | Insecure Xml parsing in OfflineEditsXmlLoader |  Minor | . | Ashutosh Gupta | Ashutosh Gupta |
+| [HDFS-16420](https://issues.apache.org/jira/browse/HDFS-16420) | Avoid deleting unique data blocks when deleting redundancy striped blocks |  Critical | ec, erasure-coding | qinyuren | Jackson Wang |
+| [YARN-10561](https://issues.apache.org/jira/browse/YARN-10561) | Upgrade node.js to 12.22.1 and yarn to 1.22.5 in YARN application catalog webapp |  Critical | webapp | Akira Ajisaka | Akira Ajisaka |
+| [HADOOP-18096](https://issues.apache.org/jira/browse/HADOOP-18096) | Distcp: Sync moves filtered file to home directory rather than deleting |  Critical | . | Ayush Saxena | Ayush Saxena |
+
+
+### TESTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [MAPREDUCE-7342](https://issues.apache.org/jira/browse/MAPREDUCE-7342) | Stop RMService in TestClientRedirect.testRedirect() |  Minor | . | Zhengxi Li | Zhengxi Li |
+| [MAPREDUCE-7311](https://issues.apache.org/jira/browse/MAPREDUCE-7311) | Fix non-idempotent test in TestTaskProgressReporter |  Minor | . | Zhengxi Li | Zhengxi Li |
+| [HADOOP-17936](https://issues.apache.org/jira/browse/HADOOP-17936) | TestLocalFSCopyFromLocal.testDestinationFileIsToParentDirectory failure after reverting HADOOP-16878 |  Major | . | Chao Sun | Chao Sun |
+| [HDFS-15862](https://issues.apache.org/jira/browse/HDFS-15862) | Make TestViewfsWithNfs3.testNfsRenameSingleNN() idempotent |  Minor | nfs | Zhengxi Li | Zhengxi Li |
+
+
+### SUB-TASKS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [YARN-10337](https://issues.apache.org/jira/browse/YARN-10337) | TestRMHATimelineCollectors fails on hadoop trunk |  Major | test, yarn | Ahmed Hussein | Bilwa S T |
+| [HDFS-15457](https://issues.apache.org/jira/browse/HDFS-15457) | TestFsDatasetImpl fails intermittently |  Major | hdfs | Ahmed Hussein | Ahmed Hussein |
+| [HADOOP-17424](https://issues.apache.org/jira/browse/HADOOP-17424) | Replace HTrace with No-Op tracer |  Major | . | Siyao Meng | Siyao Meng |
+| [HADOOP-17705](https://issues.apache.org/jira/browse/HADOOP-17705) | S3A to add option fs.s3a.endpoint.region to set AWS region |  Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh |
+| [HADOOP-17670](https://issues.apache.org/jira/browse/HADOOP-17670) | S3AFS and ABFS to log IOStats at DEBUG mode or optionally at INFO level in close() |  Minor | fs/azure, fs/s3 | Mehakmeet Singh | Mehakmeet Singh |
+| [HADOOP-17511](https://issues.apache.org/jira/browse/HADOOP-17511) | Add an Audit plugin point for S3A auditing/context |  Major | . | Steve Loughran | Steve Loughran |
+| [HADOOP-17470](https://issues.apache.org/jira/browse/HADOOP-17470) | Collect more S3A IOStatistics |  Major | fs/s3 | Steve Loughran | Steve Loughran |
+| [HADOOP-17735](https://issues.apache.org/jira/browse/HADOOP-17735) | Upgrade aws-java-sdk to 1.11.1026 |  Major | build, fs/s3 | Steve Loughran | Steve Loughran |
+| [HADOOP-17547](https://issues.apache.org/jira/browse/HADOOP-17547) | Magic committer to downgrade abort in cleanup if list uploads fails with access denied |  Major | fs/s3 | Steve Loughran | Bogdan Stolojan |
+| [HADOOP-17771](https://issues.apache.org/jira/browse/HADOOP-17771) | S3AFS creation fails "Unable to find a region via the region provider chain." |  Blocker | fs/s3 | Steve Loughran | Steve Loughran |
+| [HDFS-15659](https://issues.apache.org/jira/browse/HDFS-15659) | Set dfs.namenode.redundancy.considerLoad to false in MiniDFSCluster |  Major | test | Akira Ajisaka | Ahmed Hussein |
+| [HADOOP-17774](https://issues.apache.org/jira/browse/HADOOP-17774) | bytesRead FS statistic showing twice the correct value in S3A |  Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh |
+| [HADOOP-17290](https://issues.apache.org/jira/browse/HADOOP-17290) | ABFS: Add Identifiers to Client Request Header |  Major | fs/azure | Sumangala Patki | Sumangala Patki |
+| [HADOOP-17250](https://issues.apache.org/jira/browse/HADOOP-17250) | ABFS: Random read perf improvement |  Major | fs/azure | Sneha Vijayarajan | Mukund Thakur |
+| [HADOOP-17596](https://issues.apache.org/jira/browse/HADOOP-17596) | ABFS: Change default Readahead Queue Depth from num(processors) to const |  Major | fs/azure | Sumangala Patki | Sumangala Patki |
+| [HADOOP-17715](https://issues.apache.org/jira/browse/HADOOP-17715) | ABFS: Append blob tests with non HNS accounts fail |  Minor | . | Sneha Varma | Sneha Varma |
+| [HADOOP-17714](https://issues.apache.org/jira/browse/HADOOP-17714) | ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail when triggered with default configs |  Minor | test | Sneha Varma | Sneha Varma |
+| [HDFS-16140](https://issues.apache.org/jira/browse/HDFS-16140) | TestBootstrapAliasmap fails by BindException |  Major | test | Akira Ajisaka | Akira Ajisaka |
+| [HADOOP-13887](https://issues.apache.org/jira/browse/HADOOP-13887) | Encrypt S3A data client-side with AWS SDK (S3-CSE) |  Minor | fs/s3 | Jeeyoung Kim | Mehakmeet Singh |
+| [HADOOP-17458](https://issues.apache.org/jira/browse/HADOOP-17458) | S3A to treat "SdkClientException: Data read has a different length than the expected" as EOFException |  Minor | fs/s3 | Steve Loughran | Bogdan Stolojan |
+| [HADOOP-17628](https://issues.apache.org/jira/browse/HADOOP-17628) | Distcp contract test is really slow with ABFS and S3A; timing out |  Minor | fs/azure, fs/s3, test, tools/distcp | Bilahari T H | Steve Loughran |
+| [HADOOP-17822](https://issues.apache.org/jira/browse/HADOOP-17822) | fs.s3a.acl.default not working after S3A Audit feature added |  Major | fs/s3 | Steve Loughran | Steve Loughran |
+| [HADOOP-17139](https://issues.apache.org/jira/browse/HADOOP-17139) | Re-enable optimized copyFromLocal implementation in S3AFileSystem |  Minor | fs/s3 | Sahil Takiar | Bogdan Stolojan |
+| [HADOOP-17823](https://issues.apache.org/jira/browse/HADOOP-17823) | S3A Tests to skip if S3Guard and S3-CSE are enabled. |  Major | build, fs/s3 | Mehakmeet Singh | Mehakmeet Singh |
+| [HDFS-16184](https://issues.apache.org/jira/browse/HDFS-16184) | De-flake TestBlockScanner#testSkipRecentAccessFile |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-17677](https://issues.apache.org/jira/browse/HADOOP-17677) | Distcp is unable to determine region with S3 PrivateLink endpoints |  Major | fs/s3, tools/distcp | KJ |  |
+| [HDFS-16192](https://issues.apache.org/jira/browse/HDFS-16192) | ViewDistributedFileSystem#rename wrongly using src in the place of dst. |  Major | . | Uma Maheswara Rao G | Uma Maheswara Rao G |
+| [HADOOP-17156](https://issues.apache.org/jira/browse/HADOOP-17156) | Clear abfs readahead requests on stream close |  Major | fs/azure | Rajesh Balamohan | Mukund Thakur |
+| [HADOOP-17618](https://issues.apache.org/jira/browse/HADOOP-17618) | ABFS: Partially obfuscate SAS object IDs in Logs |  Major | fs/azure | Sumangala Patki | Sumangala Patki |
+| [HADOOP-17894](https://issues.apache.org/jira/browse/HADOOP-17894) | CredentialProviderFactory.getProviders() recursion loading JCEKS file from s3a |  Major | conf, fs/s3 | Steve Loughran | Steve Loughran |
+| [HADOOP-17126](https://issues.apache.org/jira/browse/HADOOP-17126) | implement non-guava Precondition checkNotNull |  Major | . | Ahmed Hussein | Ahmed Hussein |
+| [HADOOP-17195](https://issues.apache.org/jira/browse/HADOOP-17195) | Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs |  Major | fs/azure | Mehakmeet Singh | Mehakmeet Singh |
+| [HADOOP-17929](https://issues.apache.org/jira/browse/HADOOP-17929) | implement non-guava Precondition checkArgument |  Major | . | Ahmed Hussein | Ahmed Hussein |
+| [HADOOP-17198](https://issues.apache.org/jira/browse/HADOOP-17198) | Support S3 Access Points |  Major | fs/s3 | Steve Loughran | Bogdan Stolojan |
+| [HADOOP-17871](https://issues.apache.org/jira/browse/HADOOP-17871) | S3A CSE: minor tuning |  Minor | fs/s3 | Steve Loughran | Mehakmeet Singh |
+| [HADOOP-17947](https://issues.apache.org/jira/browse/HADOOP-17947) | Provide alternative to Guava VisibleForTesting |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-17930](https://issues.apache.org/jira/browse/HADOOP-17930) | implement non-guava Precondition checkState |  Major | . | Ahmed Hussein | Ahmed Hussein |
+| [HADOOP-17374](https://issues.apache.org/jira/browse/HADOOP-17374) | AliyunOSS: support ListObjectsV2 |  Major | fs/oss | wujinhu | wujinhu |
+| [HADOOP-17863](https://issues.apache.org/jira/browse/HADOOP-17863) | ABFS: Fix compiler deprecation warning in TextFileBasedIdentityHandler |  Minor | fs/azure | Sumangala Patki | Sumangala Patki |
+| [HADOOP-17928](https://issues.apache.org/jira/browse/HADOOP-17928) | s3a: set fs.s3a.downgrade.syncable.exceptions = true by default |  Major | fs/s3 | Steve Loughran | Steve Loughran |
+| [HDFS-16336](https://issues.apache.org/jira/browse/HDFS-16336) | De-flake TestRollingUpgrade#testRollback |  Minor | hdfs, test | Kevin Wikant | Viraj Jasani |
+| [HDFS-16171](https://issues.apache.org/jira/browse/HDFS-16171) | De-flake testDecommissionStatus |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-17226](https://issues.apache.org/jira/browse/HADOOP-17226) | Failure of ITestAssumeRole.testRestrictedCommitActions |  Minor | fs/s3, test | Steve Loughran | Steve Loughran |
+| [HADOOP-14334](https://issues.apache.org/jira/browse/HADOOP-14334) | S3 SSEC tests to downgrade when running against a mandatory encryption object store |  Minor | fs/s3, test | Steve Loughran | Monthon Klongklaew |
+| [HADOOP-16223](https://issues.apache.org/jira/browse/HADOOP-16223) | remove misleading fs.s3a.delegation.tokens.enabled prompt |  Minor | fs/s3 | Steve Loughran |  |
+
+
+### OTHER:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HDFS-16078](https://issues.apache.org/jira/browse/HDFS-16078) | Remove unused parameters for DatanodeManager.handleLifeline() |  Minor | . | tomscut | tomscut |
+| [HDFS-16079](https://issues.apache.org/jira/browse/HDFS-16079) | Improve the block state change log |  Minor | . | tomscut | tomscut |
+| [HDFS-16089](https://issues.apache.org/jira/browse/HDFS-16089) | EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor |  Minor | . | tomscut | tomscut |
+| [HDFS-16298](https://issues.apache.org/jira/browse/HDFS-16298) | Improve error msg for BlockMissingException |  Minor | . | tomscut | tomscut |
+| [HDFS-16312](https://issues.apache.org/jira/browse/HDFS-16312) | Fix typo for DataNodeVolumeMetrics and ProfilingFileIoEvents |  Minor | . | tomscut | tomscut |
+| [HADOOP-18005](https://issues.apache.org/jira/browse/HADOOP-18005) | Correct log format for LdapGroupsMapping |  Minor | . | tomscut | tomscut |
+| [HDFS-16319](https://issues.apache.org/jira/browse/HDFS-16319) | Add metrics doc for ReadLockLongHoldCount and WriteLockLongHoldCount |  Minor | . | tomscut | tomscut |
+| [HDFS-16326](https://issues.apache.org/jira/browse/HDFS-16326) | Simplify the code for DiskBalancer |  Minor | . | tomscut | tomscut |
+| [HDFS-16335](https://issues.apache.org/jira/browse/HDFS-16335) | Fix HDFSCommands.md |  Minor | . | tomscut | tomscut |
+| [HDFS-16339](https://issues.apache.org/jira/browse/HDFS-16339) | Show the threshold when mover threads quota is exceeded |  Minor | . | tomscut | tomscut |
+| [YARN-10820](https://issues.apache.org/jira/browse/YARN-10820) | Make GetClusterNodesRequestPBImpl thread safe |  Major | client | Prabhu Joseph | SwathiChandrashekar |
+| [HADOOP-17808](https://issues.apache.org/jira/browse/HADOOP-17808) | ipc.Client not setting interrupt flag after catching InterruptedException |  Minor | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-17834](https://issues.apache.org/jira/browse/HADOOP-17834) | Bump aliyun-sdk-oss to 3.13.0 |  Major | . | Siyao Meng | Siyao Meng |
+| [HADOOP-17950](https://issues.apache.org/jira/browse/HADOOP-17950) | Provide replacement for deprecated APIs of commons-io IOUtils |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-17955](https://issues.apache.org/jira/browse/HADOOP-17955) | Bump netty to the latest 4.1.68 |  Major | . | Takanobu Asanuma | Takanobu Asanuma |
+| [HADOOP-17946](https://issues.apache.org/jira/browse/HADOOP-17946) | Update commons-lang to latest 3.x |  Minor | . | Sean Busbey | Renukaprasad C |
+| [HDFS-16323](https://issues.apache.org/jira/browse/HDFS-16323) | DatanodeHttpServer doesn't require handler state map while retrieving filter handlers |  Minor | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-13464](https://issues.apache.org/jira/browse/HADOOP-13464) | update GSON to 2.7+ |  Minor | build | Sean Busbey | Igor Dvorzhak |
+| [HADOOP-18061](https://issues.apache.org/jira/browse/HADOOP-18061) | Update the year to 2022 |  Major | . | Ayush Saxena | Ayush Saxena |
+
+
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/3.3.2/RELEASENOTES.3.3.2.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.3.2/RELEASENOTES.3.3.2.md
new file mode 100644
index 00000000000..9948d8ff322
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.3.2/RELEASENOTES.3.3.2.md
@@ -0,0 +1,93 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop  3.3.2 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.
+
+
+---
+
+* [HDFS-15288](https://issues.apache.org/jira/browse/HDFS-15288) | *Major* | **Add Available Space Rack Fault Tolerant BPP**
+
+Added a new BlockPlacementPolicy, "AvailableSpaceRackFaultTolerantBlockPlacementPolicy", which uses the same optimization logic as AvailableSpaceBlockPlacementPolicy while also spreading the replicas across the maximum number of racks, similar to BlockPlacementPolicyRackFaultTolerant.
+The BPP can be configured by setting the block placement policy class to org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceRackFaultTolerantBlockPlacementPolicy.
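+
+For example, a minimal hdfs-site.xml sketch (assuming the block placement policy is selected via the "dfs.block.replicator.classname" key):
+
+```xml
+<!-- Sketch only: select the space-aware, rack-fault-tolerant placement policy. -->
+<property>
+  <name>dfs.block.replicator.classname</name>
+  <value>org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceRackFaultTolerantBlockPlacementPolicy</value>
+</property>
+```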
+
+
+---
+
+* [HADOOP-17424](https://issues.apache.org/jira/browse/HADOOP-17424) | *Major* | **Replace HTrace with No-Op tracer**
+
+The dependency on HTrace and the TraceAdmin protocol/utility were removed. Tracing functionality is a no-op until an alternative tracer implementation is added.
+
+
+---
+
+* [HDFS-15814](https://issues.apache.org/jira/browse/HDFS-15814) | *Major* | **Make some parameters configurable for DataNodeDiskMetrics**
+
+**WARNING: No release note provided for this change.**
+
+
+---
+
+* [YARN-10820](https://issues.apache.org/jira/browse/YARN-10820) | *Major* | **Make GetClusterNodesRequestPBImpl thread safe**
+
+Added synchronization so that the "yarn node list" command does not fail intermittently.
+
+
+---
+
+* [HADOOP-13887](https://issues.apache.org/jira/browse/HADOOP-13887) | *Minor* | **Encrypt S3A data client-side with AWS SDK (S3-CSE)**
+
+Adds support for client-side encryption in AWS S3,
+with keys managed by AWS KMS.
+
+Read the documentation in encryption.md very, very carefully before
+use and consider it unstable.
+
+S3-CSE is enabled in the existing configuration option
+"fs.s3a.server-side-encryption-algorithm":
+
+fs.s3a.server-side-encryption-algorithm=CSE-KMS
+fs.s3a.server-side-encryption.key=\<KMS\_KEY\_ID\>
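+
+As a core-site.xml sketch (the key ID below is a placeholder, not a real key):
+
+```xml
+<!-- Sketch only: enable S3 client-side encryption with a KMS-managed key. -->
+<property>
+  <name>fs.s3a.server-side-encryption-algorithm</name>
+  <value>CSE-KMS</value>
+</property>
+<property>
+  <name>fs.s3a.server-side-encryption.key</name>
+  <value>arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID</value>
+</property>
+```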
+
+You cannot enable CSE and SSE in the same client, although
+you can still enable a default SSE option in the S3 console.
+
+\* Not compatible with S3Guard.   
+\* Filesystem list/get status operations subtract 16 bytes from the length
+  of all files \>= 16 bytes long to compensate for the padding which CSE
+  adds.
+\* The SDK always warns that the specific algorithm chosen is
+  deprecated. That algorithm is required for ranged GET requests
+  (i.e. random IO) to work, so the warning can be ignored.
+\* Unencrypted files CANNOT BE READ.
+  The entire bucket SHOULD be encrypted with S3-CSE.
+\* Uploading files may be a bit slower as blocks are now
+  written sequentially.
+\* The Multipart Upload API is disabled when S3-CSE is active.
+
+
+---
+
+* [YARN-8234](https://issues.apache.org/jira/browse/YARN-8234) | *Critical* | **Improve RM system metrics publisher's performance by pushing events to timeline server in batch**
+
+When Timeline Service V1 or V1.5 is used, if "yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.enable-batch" is set to true, ResourceManager sends timeline events in batch. The default value is false. If this functionality is enabled, the maximum number that events published in batch is configured by "yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.batch-size". The default value is 1000. The interval of publishing events can be configured by "yarn.resourc [...]
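+
+A minimal yarn-site.xml sketch using only the two options named above (1000 is the stated batch-size default):
+
+```xml
+<!-- Sketch only: enable batched publishing of RM timeline events (Timeline Service V1/V1.5). -->
+<property>
+  <name>yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.enable-batch</name>
+  <value>true</value>
+</property>
+<property>
+  <name>yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.batch-size</name>
+  <value>1000</value>
+</property>
+```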
+
+
+
diff --git a/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.3.2.xml b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.3.2.xml
new file mode 100644
index 00000000000..b4d954cb53e
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.3.2.xml
@@ -0,0 +1,835 @@
+<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
+<!-- Generated by the JDiff Javadoc doclet -->
+<!-- (http://www.jdiff.org) -->
+<!-- on Mon Feb 21 21:15:43 GMT 2022 -->
+
+<api
+  xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
+  xsi:noNamespaceSchemaLocation='api.xsd'
+  name="Apache Hadoop HDFS 3.3.2"
+  jdversion="1.0.9">
+
+<!--  Command line arguments =  -doclet org.apache.hadoop.classification.tools.IncludePublicAnnotationsJDiffDoclet -docletpath /build/source/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar:/build/source/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar -verbose -classpath /build/source/hadoop-hdfs-project/hadoop-hdfs/target/classes:/build/source/hadoop-common-project/hadoop-auth/target/hadoop-auth-3.3.2.jar:/maven/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/maven/org/ap [...]
+<package name="org.apache.hadoop.hdfs">
+  <doc>
+  <![CDATA[<p>A distributed implementation of {@link
+org.apache.hadoop.fs.FileSystem}.  This is loosely modelled after
+Google's <a href="http://research.google.com/archive/gfs.html">GFS</a>.</p>
+
+<p>The most important difference is that unlike GFS, Hadoop DFS files 
+have strictly one writer at any one time.  Bytes are always appended 
+to the end of the writer's stream.  There is no notion of "record appends"
+or "mutations" that are then checked or reordered.  Writers simply emit 
+a byte stream.  That byte stream is guaranteed to be stored in the 
+order written.</p>]]>
+  </doc>
+</package>
+<package name="org.apache.hadoop.hdfs.net">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol.datatransfer">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol.datatransfer.sasl">
+</package>
+<package name="org.apache.hadoop.hdfs.protocolPB">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.client">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.protocolPB">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.server">
+  <!-- start interface org.apache.hadoop.hdfs.qjournal.server.JournalNodeMXBean -->
+  <interface name="JournalNodeMXBean"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getJournalsStatus" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get status information (e.g., whether formatted) of JournalNode's journals.
+ 
+ @return A string presenting status for each journal]]>
+      </doc>
+    </method>
+    <method name="getHostAndPort" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get host and port of JournalNode.
+
+ @return colon separated host and port.]]>
+      </doc>
+    </method>
+    <method name="getClusterIds" return="java.util.List"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get list of the clusters of JournalNode's journals
+ as one JournalNode may support multiple clusters.
+
+ @return list of clusters.]]>
+      </doc>
+    </method>
+    <method name="getVersion" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Gets the version of Hadoop.
+
+ @return the version of Hadoop.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[This is the JMX management interface for JournalNode information]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.qjournal.server.JournalNodeMXBean -->
+</package>
+<package name="org.apache.hadoop.hdfs.security.token.block">
+</package>
+<package name="org.apache.hadoop.hdfs.security.token.delegation">
+</package>
+<package name="org.apache.hadoop.hdfs.server.aliasmap">
+  <!-- start class org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap -->
+  <class name="InMemoryAliasMap" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMapProtocol"/>
+    <implements name="org.apache.hadoop.conf.Configurable"/>
+    <method name="setConf"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+    </method>
+    <method name="getConf" return="org.apache.hadoop.conf.Configuration"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="init" return="org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="list" return="org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMapProtocol.IterationResult"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="marker" type="java.util.Optional"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="read" return="java.util.Optional"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="block" type="org.apache.hadoop.hdfs.protocol.Block"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="write"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="block" type="org.apache.hadoop.hdfs.protocol.Block"/>
+      <param name="providedStorageLocation" type="org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getBlockPoolId" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="close"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="fromProvidedStorageLocationBytes" return="org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="providedStorageLocationDbFormat" type="byte[]"/>
+      <exception name="InvalidProtocolBufferException" type="org.apache.hadoop.thirdparty.protobuf.InvalidProtocolBufferException"/>
+    </method>
+    <method name="fromBlockBytes" return="org.apache.hadoop.hdfs.protocol.Block"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="blockDbFormat" type="byte[]"/>
+      <exception name="InvalidProtocolBufferException" type="org.apache.hadoop.thirdparty.protobuf.InvalidProtocolBufferException"/>
+    </method>
+    <method name="toProtoBufBytes" return="byte[]"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="providedStorageLocation" type="org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="toProtoBufBytes" return="byte[]"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="block" type="org.apache.hadoop.hdfs.protocol.Block"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="transferForBootstrap"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="response" type="javax.servlet.http.HttpServletResponse"/>
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="aliasMap" type="org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Transfer this aliasmap for bootstrapping standby Namenodes. The map is
+ transferred as a tar.gz archive. This archive needs to be extracted on the
+ standby Namenode.
+
+ @param response http response.
+ @param conf configuration to use.
+ @param aliasMap aliasmap to transfer.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="completeBootstrapTransfer"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="aliasMap" type="java.io.File"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Extract the aliasmap archive to complete the bootstrap process. This method
+ has to be called after the aliasmap archive is transferred from the primary
+ Namenode.
+
+ @param aliasMap location of the aliasmap.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[InMemoryAliasMap is an implementation of the InMemoryAliasMapProtocol for
+ use with LevelDB.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap -->
+</package>
+<package name="org.apache.hadoop.hdfs.server.balancer">
+</package>
+<package name="org.apache.hadoop.hdfs.server.blockmanagement">
+</package>
+<package name="org.apache.hadoop.hdfs.server.common">
+  <!-- start interface org.apache.hadoop.hdfs.server.common.BlockAlias -->
+  <interface name="BlockAlias"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getBlock" return="org.apache.hadoop.hdfs.protocol.Block"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <doc>
+    <![CDATA[Interface used to load provided blocks.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.server.common.BlockAlias -->
+  <!-- start class org.apache.hadoop.hdfs.server.common.FileRegion -->
+  <class name="FileRegion" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.hdfs.server.common.BlockAlias"/>
+    <constructor name="FileRegion" type="long, org.apache.hadoop.fs.Path, long, long, long"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="FileRegion" type="long, org.apache.hadoop.fs.Path, long, long, long, byte[]"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="FileRegion" type="long, org.apache.hadoop.fs.Path, long, long"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="FileRegion" type="org.apache.hadoop.hdfs.protocol.Block, org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getBlock" return="org.apache.hadoop.hdfs.protocol.Block"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getProvidedStorageLocation" return="org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="equals" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="o" type="java.lang.Object"/>
+    </method>
+    <method name="hashCode" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <doc>
+    <![CDATA[This class is used to represent provided blocks that are file regions,
+ i.e., can be described using (path, offset, length).]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.common.FileRegion -->
+</package>
+<package name="org.apache.hadoop.hdfs.server.common.blockaliasmap">
+  <!-- start class org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap -->
+  <class name="BlockAliasMap" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="BlockAliasMap"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getReader" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns a reader to the alias map.
+ @param opts reader options
+ @param blockPoolID block pool id to use
+ @return {@link Reader} to the alias map. If a Reader for the blockPoolID
+ cannot be created, this will return null.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getWriter" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns the writer for the alias map.
+ @param opts writer options.
+ @param blockPoolID block pool id to use
+ @return {@link Writer} to the alias map.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="refresh"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Refresh the alias map.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="close"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[An abstract class used to read and write block maps for provided blocks.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap -->
+</package>
+<package name="org.apache.hadoop.hdfs.server.common.blockaliasmap.impl">
+  <!-- start class org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap -->
+  <class name="LevelDBFileRegionAliasMap" extends="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.conf.Configurable"/>
+    <constructor name="LevelDBFileRegionAliasMap"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="setConf"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+    </method>
+    <method name="getConf" return="org.apache.hadoop.conf.Configuration"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getReader" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getWriter" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="refresh"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="close"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <field name="LOG" type="org.slf4j.Logger"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[A LevelDB based implementation of {@link BlockAliasMap}.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap -->
+  <!-- start class org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap -->
+  <class name="TextFileRegionAliasMap" extends="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.conf.Configurable"/>
+    <constructor name="TextFileRegionAliasMap"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="setConf"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+    </method>
+    <method name="getConf" return="org.apache.hadoop.conf.Configuration"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getReader" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getWriter" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="refresh"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="close"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="blockPoolIDFromFileName" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="file" type="org.apache.hadoop.fs.Path"/>
+    </method>
+    <method name="fileNameFromBlockPoolID" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="blockPoolID" type="java.lang.String"/>
+    </method>
+    <field name="LOG" type="org.slf4j.Logger"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[This class is used for block maps stored as text files,
+ with a specified delimiter.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap -->
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.fsdataset">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.fsdataset.impl">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web.webhdfs">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.command">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.connectors">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.datamodel">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.planner">
+</package>
+<package name="org.apache.hadoop.hdfs.server.mover">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode">
+  <!-- start interface org.apache.hadoop.hdfs.server.namenode.AuditLogger -->
+  <interface name="AuditLogger"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="initialize"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Called during initialization of the logger.
+
+ @param conf The configuration object.]]>
+      </doc>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <doc>
+      <![CDATA[Called to log an audit event.
+ <p>
+ This method must return as quickly as possible, since it's called
+ in a critical section of the NameNode's operation.
+
+ @param succeeded Whether authorization succeeded.
+ @param userName Name of the user executing the request.
+ @param addr Remote address of the request.
+ @param cmd The requested command.
+ @param src Path of affected source file.
+ @param dst Path of affected destination file (if any).
+ @param stat File information for operations that change the file's
+             metadata (permissions, owner, times, etc).]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Interface defining an audit logger.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.server.namenode.AuditLogger -->
+  <!-- start class org.apache.hadoop.hdfs.server.namenode.DefaultAuditLogger -->
+  <class name="DefaultAuditLogger" extends="org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="DefaultAuditLogger"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="initialize"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+    </method>
+    <method name="logAuditMessage"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="message" type="java.lang.String"/>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="status" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="ugi" type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="status" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="callerContext" type="org.apache.hadoop.ipc.CallerContext"/>
+      <param name="ugi" type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+    </method>
+    <field name="STRING_BUILDER" type="java.lang.ThreadLocal"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="protected"
+      deprecated="not deprecated">
+    </field>
+    <field name="isCallerContextEnabled" type="boolean"
+      transient="false" volatile="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+    </field>
+    <field name="callerContextMaxLen" type="int"
+      transient="false" volatile="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The maximum bytes a caller context string can have.]]>
+      </doc>
+    </field>
+    <field name="callerSignatureMaxLen" type="int"
+      transient="false" volatile="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+    </field>
+    <field name="logTokenTrackingId" type="boolean"
+      transient="false" volatile="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Adds a tracking ID for all audit log events.]]>
+      </doc>
+    </field>
+    <field name="debugCmdSet" type="java.util.Set"
+      transient="false" volatile="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[List of commands to provide debug messages.]]>
+      </doc>
+    </field>
+    <doc>
+    <![CDATA[This class provides an interface for the NameNode and Router to
+ audit event information. It can be extended and used when no access logger
+ is defined in the config file.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.DefaultAuditLogger -->
+  <!-- start class org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger -->
+  <class name="HdfsAuditLogger" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.hdfs.server.namenode.AuditLogger"/>
+    <constructor name="HdfsAuditLogger"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="logAuditEvent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="status" type="org.apache.hadoop.fs.FileStatus"/>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="callerContext" type="org.apache.hadoop.ipc.CallerContext"/>
+      <param name="ugi" type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+      <doc>
+      <![CDATA[Same as
+ {@link #logAuditEvent(boolean, String, InetAddress, String, String, String,
+ FileStatus)} with additional parameters related to logging delegation token
+ tracking IDs.
+ 
+ @param succeeded Whether authorization succeeded.
+ @param userName Name of the user executing the request.
+ @param addr Remote address of the request.
+ @param cmd The requested command.
+ @param src Path of affected source file.
+ @param dst Path of affected destination file (if any).
+ @param stat File information for operations that change the file's metadata
+          (permissions, owner, times, etc).
+ @param callerContext Context information of the caller
+ @param ugi UserGroupInformation of the current user, or null if not logging
+          token tracking information
+ @param dtSecretManager The token secret manager, or null if not logging
+          token tracking information]]>
+      </doc>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="ugi" type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+      <doc>
+      <![CDATA[Same as
+ {@link #logAuditEvent(boolean, String, InetAddress, String, String,
+ String, FileStatus, CallerContext, UserGroupInformation,
+ DelegationTokenSecretManager)} without {@link CallerContext} information.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Extension of {@link AuditLogger}.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger -->
+  <!-- start class org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider -->
+  <class name="INodeAttributeProvider" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="INodeAttributeProvider"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="start"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Initialize the provider. This method is called at NameNode startup
+ time.]]>
+      </doc>
+    </method>
+    <method name="stop"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Shutdown the provider. This method is called at NameNode shutdown time.]]>
+      </doc>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="fullPath" type="java.lang.String"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="pathElements" type="java.lang.String[]"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="components" type="byte[][]"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getExternalAccessControlEnforcer" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="defaultEnforcer" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer"/>
+      <doc>
+      <![CDATA[Can be overridden by implementations to supply a custom
+ AccessControlEnforcer with an alternate implementation of the default
+ permission-checking logic.
+ @param defaultEnforcer The Default AccessControlEnforcer
+ @return The AccessControlEnforcer to use]]>
+      </doc>
+    </method>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider -->
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.ha">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.snapshot">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top.window">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.web.resources">
+</package>
+<package name="org.apache.hadoop.hdfs.tools">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.offlineEditsViewer">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.offlineImageViewer">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.snapshot">
+</package>
+<package name="org.apache.hadoop.hdfs.util">
+</package>
+<package name="org.apache.hadoop.hdfs.web">
+</package>
+<package name="org.apache.hadoop.hdfs.web.resources">
+</package>
+
+</api>
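
As an aside for readers of the API surface above: the AuditLogger contract
amounts to two callbacks, initialize(Configuration) and the seven-argument
logAuditEvent. A minimal conforming sketch follows; the class name and the
SLF4J wiring are illustrative assumptions, not part of this patch.

import java.net.InetAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.hdfs.server.namenode.AuditLogger;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Hypothetical audit logger matching the interface documented above. */
public class Slf4jAuditLogger implements AuditLogger {
  private static final Logger LOG =
      LoggerFactory.getLogger(Slf4jAuditLogger.class);

  @Override
  public void initialize(Configuration conf) {
    // Called once at NameNode startup; read any custom keys here.
  }

  @Override
  public void logAuditEvent(boolean succeeded, String userName,
      InetAddress addr, String cmd, String src, String dst, FileStatus stat) {
    // Must return quickly: this runs in a critical section of the NameNode.
    LOG.info("allowed={} ugi={} ip={} cmd={} src={} dst={}",
        succeeded, userName, addr, cmd, src, dst);
  }
}

A logger like this would typically be registered through the
dfs.namenode.audit.loggers configuration key.
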
diff --git a/hadoop-project-dist/pom.xml b/hadoop-project-dist/pom.xml
index 67995cb40b2..7707c8921f4 100644
--- a/hadoop-project-dist/pom.xml
+++ b/hadoop-project-dist/pom.xml
@@ -134,7 +134,7 @@
         <activeByDefault>false</activeByDefault>
       </activation>
       <properties>
-        <jdiff.stable.api>3.3.1</jdiff.stable.api>
+        <jdiff.stable.api>3.3.2</jdiff.stable.api>
         <jdiff.stability>-unstable</jdiff.stability>
         <!-- Commented out for HADOOP-11776 -->
         <!-- Uncomment param name="${jdiff.compatibility}" in javadoc doclet if compatibility is not empty -->


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[hadoop] 14/16: MAPREDUCE-7373. Building MapReduce NativeTask fails on Fedora 34+ (#4120)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 581ca342e512a6ac654195551db84976473b9fa8
Author: Kengo Seki <se...@apache.org>
AuthorDate: Wed Mar 30 22:47:45 2022 +0900

    MAPREDUCE-7373. Building MapReduce NativeTask fails on Fedora 34+ (#4120)
    
    (cherry picked from commit dc4a680da8bcacf152cc8638d86dd171a7901245)
    
    Change-Id: Ia9ad34b5c3c0f767169fc48a1866c04ff73b1093
---
 .../hadoop-mapreduce-client-nativetask/src/CMakeLists.txt                | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
index ae3b9c6029e..4c32838afb0 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
@@ -27,6 +27,7 @@ set(GTEST_SRC_DIR ${CMAKE_SOURCE_DIR}/../../../../hadoop-common-project/hadoop-c
 # Add extra compiler and linker flags.
 # -Wno-sign-compare
 hadoop_add_compiler_flags("-DNDEBUG -DSIMPLE_MEMCPY -fno-strict-aliasing -fsigned-char")
+set(CMAKE_CXX_STANDARD 11)
 
 # Source location.
 set(SRC main/native)


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[hadoop] 16/16: HADOOP-18088. Replace log4j 1.x with reload4j. (#4052)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d709000fb2b58b518c08247b99f4225100439251
Author: Masatake Iwasaki <iw...@apache.org>
AuthorDate: Thu Apr 7 08:33:13 2022 +0900

    HADOOP-18088. Replace log4j 1.x with reload4j. (#4052)
    
    Co-authored-by: Wei-Chiu Chuang <we...@apache.org>
---
 LICENSE-binary                                     |   9 +-
 .../resources/assemblies/hadoop-dynamometer.xml    |   2 +-
 .../resources/assemblies/hadoop-hdfs-nfs-dist.xml  |   2 +-
 .../resources/assemblies/hadoop-httpfs-dist.xml    |   2 +-
 .../main/resources/assemblies/hadoop-kms-dist.xml  |   2 +-
 .../resources/assemblies/hadoop-mapreduce-dist.xml |   2 +-
 .../main/resources/assemblies/hadoop-nfs-dist.xml  |   2 +-
 .../src/main/resources/assemblies/hadoop-tools.xml |   2 +-
 .../main/resources/assemblies/hadoop-yarn-dist.xml |   2 +-
 .../hadoop-client-check-invariants/pom.xml         |   4 +-
 .../hadoop-client-check-test-invariants/pom.xml    |   4 +-
 .../hadoop-client-integration-tests/pom.xml        |   9 +-
 .../hadoop-client-minicluster/pom.xml              |  10 +-
 .../hadoop-client-runtime/pom.xml                  |   8 +-
 hadoop-client-modules/hadoop-client/pom.xml        |  14 +--
 hadoop-common-project/hadoop-auth-examples/pom.xml |   6 +-
 hadoop-common-project/hadoop-auth/pom.xml          |  12 ++-
 hadoop-common-project/hadoop-common/pom.xml        |   6 +-
 .../java/org/apache/hadoop/util/GenericsUtil.java  |   2 +-
 .../java/org/apache/hadoop/util/TestClassUtil.java |   2 +-
 hadoop-common-project/hadoop-kms/pom.xml           |   6 +-
 hadoop-common-project/hadoop-minikdc/pom.xml       |   2 +-
 hadoop-common-project/hadoop-nfs/pom.xml           |   6 +-
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml     |   4 +-
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml     |   6 +-
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml        |   6 +-
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml        |   6 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml            |   6 +-
 .../hadoop-mapreduce-client/pom.xml                |   2 +-
 hadoop-mapreduce-project/pom.xml                   |   2 +-
 hadoop-project/pom.xml                             | 117 +++++++++++++++++++--
 hadoop-tools/hadoop-azure/pom.xml                  |   4 +-
 .../pom.xml                                        |   4 +-
 .../hadoop-yarn-services-core/pom.xml              |   4 +-
 .../hadoop-yarn/hadoop-yarn-client/pom.xml         |   4 +-
 .../hadoop-yarn/hadoop-yarn-common/pom.xml         |   4 +-
 .../hadoop-yarn-server-resourcemanager/pom.xml     |   4 +-
 37 files changed, 195 insertions(+), 94 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 7a712a5ac98..0e93a3aba9f 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -208,6 +208,7 @@ License Version 2.0:
 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AbstractFuture.java
 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/TimeoutFuture.java
 
+ch.qos.reload4j:reload4j:1.2.18.3
 com.aliyun:aliyun-java-sdk-core:3.4.0
 com.aliyun:aliyun-java-sdk-ecs:4.2.0
 com.aliyun:aliyun-java-sdk-ram:3.0.0
@@ -273,7 +274,6 @@ io.reactivex:rxjava-string:1.1.1
 io.reactivex:rxnetty:0.4.20
 io.swagger:swagger-annotations:1.5.4
 javax.inject:javax.inject:1
-log4j:log4j:1.2.17
 net.java.dev.jna:jna:5.2.0
 net.minidev:accessors-smart:2.4.7
 net.minidev:json-smart:2.4.7
@@ -436,9 +436,10 @@ org.codehaus.mojo:animal-sniffer-annotations:1.17
 org.jruby.jcodings:jcodings:1.0.13
 org.jruby.joni:joni:2.1.2
 org.ojalgo:ojalgo:43.0
-org.slf4j:jul-to-slf4j:1.7.30
-org.slf4j:slf4j-api:1.7.30
-org.slf4j:slf4j-log4j12:1.7.30
+org.slf4j:jcl-over-slf4j:1.7.35
+org.slf4j:jul-to-slf4j:1.7.35
+org.slf4j:slf4j-api:1.7.35
+org.slf4j:slf4j-reload4j:1.7.35
 
 
 CDDL 1.1 + GPLv2 with classpath exception
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-dynamometer.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-dynamometer.xml
index 448035262e1..b2ce562231c 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-dynamometer.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-dynamometer.xml
@@ -66,7 +66,7 @@
       <excludes>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
       </excludes>
     </dependencySet>
   </dependencySets>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-hdfs-nfs-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-hdfs-nfs-dist.xml
index 0edfdeb7b0d..af5d89d7efe 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-hdfs-nfs-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-hdfs-nfs-dist.xml
@@ -40,7 +40,7 @@
         <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
       </excludes>
     </dependencySet>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml
index d698a3005d4..bec2f94b95e 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml
@@ -69,7 +69,7 @@
         <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
       </excludes>
     </dependencySet>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-kms-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-kms-dist.xml
index ff6f99080ca..e5e6834b042 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-kms-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-kms-dist.xml
@@ -69,7 +69,7 @@
         <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
       </excludes>
     </dependencySet>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
index 06a55d6d06a..28d5ebe9f60 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
@@ -179,7 +179,7 @@
         <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
         <exclude>jdiff:jdiff:jar</exclude>
       </excludes>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-nfs-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-nfs-dist.xml
index cb3d9cdf249..59000c07113 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-nfs-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-nfs-dist.xml
@@ -40,7 +40,7 @@
         <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
       </excludes>
     </dependencySet>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
index 054d8c0ace2..1b9140f419b 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
@@ -214,7 +214,7 @@
         <exclude>org.apache.hadoop:hadoop-pipes</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
       </excludes>
     </dependencySet>
   </dependencySets>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml
index 4da4ac5acb9..cd86ce4e417 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml
@@ -309,7 +309,7 @@
         <exclude>org.apache.hadoop:*</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
       </excludes>
     </dependencySet>
diff --git a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
index 9d1deb63642..c58353c3ddd 100644
--- a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
@@ -84,8 +84,8 @@
                     <exclude>org.slf4j:slf4j-api</exclude>
                     <!-- Leave commons-logging unshaded so downstream users can configure logging. -->
                     <exclude>commons-logging:commons-logging</exclude>
-                    <!-- Leave log4j unshaded so downstream users can configure logging. -->
-                    <exclude>log4j:log4j</exclude>
+                    <!-- Leave reload4j unshaded so downstream users can configure logging. -->
+                    <exclude>ch.qos.reload4j:reload4j</exclude>
                     <!-- Leave javax annotations we need exposed -->
                     <exclude>com.google.code.findbugs:jsr305</exclude>
                     <!-- Leave bouncycastle unshaded because it's signed with a special Oracle certificate so it can be a custom JCE security provider -->
diff --git a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
index b96210dde7d..c7d7a8ee749 100644
--- a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
@@ -88,8 +88,8 @@
                     <exclude>org.slf4j:slf4j-api</exclude>
                     <!-- Leave commons-logging unshaded so downstream users can configure logging. -->
                     <exclude>commons-logging:commons-logging</exclude>
-                    <!-- Leave log4j unshaded so downstream users can configure logging. -->
-                    <exclude>log4j:log4j</exclude>
+                    <!-- Leave reload4j unshaded so downstream users can configure logging. -->
+                    <exclude>ch.qos.reload4j:reload4j</exclude>
                     <!-- Leave JUnit unshaded so downstream can use our test helper classes -->
                     <exclude>junit:junit</exclude>
                     <!-- JUnit brings in hamcrest -->
diff --git a/hadoop-client-modules/hadoop-client-integration-tests/pom.xml b/hadoop-client-modules/hadoop-client-integration-tests/pom.xml
index 51210210204..d74c9c19ceb 100644
--- a/hadoop-client-modules/hadoop-client-integration-tests/pom.xml
+++ b/hadoop-client-modules/hadoop-client-integration-tests/pom.xml
@@ -33,8 +33,8 @@
 
   <dependencies>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
@@ -42,11 +42,6 @@
       <artifactId>slf4j-api</artifactId>
       <scope>test</scope>
     </dependency>
-    <dependency>
-      <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
-      <scope>test</scope>
-    </dependency>
     <dependency>
       <groupId>junit</groupId>
       <artifactId>junit</artifactId>
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index d5ca75cbb4f..aa64544e7d1 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -193,8 +193,12 @@
           <artifactId>slf4j-log4j12</artifactId>
         </exclusion>
         <exclusion>
-          <groupId>log4j</groupId>
-          <artifactId>log4j</artifactId>
+          <groupId>org.slf4j</groupId>
+          <artifactId>slf4j-reload4j</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>ch.qos.reload4j</groupId>
+          <artifactId>reload4j</artifactId>
         </exclusion>
         <exclusion>
           <groupId>com.fasterxml.jackson.core</groupId>
@@ -682,7 +686,7 @@
                       <exclude>commons-logging:commons-logging</exclude>
                       <exclude>junit:junit</exclude>
                       <exclude>com.google.code.findbugs:jsr305</exclude>
-                      <exclude>log4j:log4j</exclude>
+                      <exclude>ch.qos.reload4j:reload4j</exclude>
                       <exclude>org.eclipse.jetty.websocket:websocket-common</exclude>
                       <exclude>org.eclipse.jetty.websocket:websocket-api</exclude>
                      <!-- We need a filter that matches just those things that are included in the above artifacts -->
diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml b/hadoop-client-modules/hadoop-client-runtime/pom.xml
index cf9b95286eb..d4f636de712 100644
--- a/hadoop-client-modules/hadoop-client-runtime/pom.xml
+++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml
@@ -103,8 +103,8 @@
          * one of the three custom log4j appenders we have
       -->
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>runtime</scope>
       <optional>true</optional>
     </dependency>
@@ -150,8 +150,8 @@
                       <exclude>org.slf4j:slf4j-api</exclude>
                       <!-- Leave commons-logging unshaded so downstream users can configure logging. -->
                       <exclude>commons-logging:commons-logging</exclude>
-                      <!-- Leave log4j unshaded so downstream users can configure logging. -->
-                      <exclude>log4j:log4j</exclude>
+                      <!-- Leave reload4j unshaded so downstream users can configure logging. -->
+                      <exclude>ch.qos.reload4j:reload4j</exclude>
                       <!-- Leave javax APIs that are stable -->
                       <!-- the jdk ships part of the javax.annotation namespace, so if we want to relocate this we'll have to care it out by class :( -->
                       <exclude>com.google.code.findbugs:jsr305</exclude>
diff --git a/hadoop-client-modules/hadoop-client/pom.xml b/hadoop-client-modules/hadoop-client/pom.xml
index 9670a8a39a6..17411217240 100644
--- a/hadoop-client-modules/hadoop-client/pom.xml
+++ b/hadoop-client-modules/hadoop-client/pom.xml
@@ -206,8 +206,8 @@
           <artifactId>commons-cli</artifactId>
         </exclusion>
         <exclusion>
-          <groupId>log4j</groupId>
-          <artifactId>log4j</artifactId>
+          <groupId>ch.qos.reload4j</groupId>
+          <artifactId>reload4j</artifactId>
         </exclusion>
         <exclusion>
           <groupId>com.sun.jersey</groupId>
@@ -282,11 +282,6 @@
           <groupId>io.netty</groupId>
           <artifactId>netty</artifactId>
         </exclusion>
-        <!-- No slf4j backends for downstream clients -->
-        <exclusion>
-          <groupId>org.slf4j</groupId>
-          <artifactId>slf4j-log4j12</artifactId>
-        </exclusion>
       </exclusions>
     </dependency>
 
@@ -315,11 +310,6 @@
           <groupId>io.netty</groupId>
           <artifactId>netty</artifactId>
         </exclusion>
-        <!-- No slf4j backends for downstream clients -->
-        <exclusion>
-          <groupId>org.slf4j</groupId>
-          <artifactId>slf4j-log4j12</artifactId>
-        </exclusion>
       </exclusions>
     </dependency>
 
diff --git a/hadoop-common-project/hadoop-auth-examples/pom.xml b/hadoop-common-project/hadoop-auth-examples/pom.xml
index 27580e50c8a..ce5130d49a0 100644
--- a/hadoop-common-project/hadoop-auth-examples/pom.xml
+++ b/hadoop-common-project/hadoop-auth-examples/pom.xml
@@ -47,13 +47,13 @@
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
   </dependencies>
diff --git a/hadoop-common-project/hadoop-auth/pom.xml b/hadoop-common-project/hadoop-auth/pom.xml
index 923be91e903..c2812869638 100644
--- a/hadoop-common-project/hadoop-auth/pom.xml
+++ b/hadoop-common-project/hadoop-auth/pom.xml
@@ -82,13 +82,13 @@
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
@@ -176,6 +176,12 @@
       <artifactId>apacheds-server-integ</artifactId>
       <version>${apacheds.version}</version>
       <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>log4j</groupId>
+          <artifactId>log4j</artifactId>
+        </exclusion>
+      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.directory.server</groupId>
diff --git a/hadoop-common-project/hadoop-common/pom.xml b/hadoop-common-project/hadoop-common/pom.xml
index 086a77f26d9..791429c8fff 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -159,8 +159,8 @@
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
@@ -205,7 +205,7 @@
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java
index 0aba34845a6..334e370214e 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java
@@ -85,7 +85,7 @@ public class GenericsUtil {
     }
     Logger log = LoggerFactory.getLogger(clazz);
     try {
-      Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter");
+      Class log4jClass = Class.forName("org.slf4j.impl.Reload4jLoggerAdapter");
       return log4jClass.isInstance(log);
     } catch (ClassNotFoundException e) {
       return false;
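
The one-line change above retargets GenericsUtil.isLog4jLogger() at the
reload4j binding. The reflective pattern it uses stands alone; a sketch of the
same check, assuming only the SLF4J API is compiled against:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class BindingProbe {
  private BindingProbe() {}

  /**
   * True when the active SLF4J binding for the given class is the
   * reload4j adapter. The adapter class name comes straight from the
   * patch above; the surrounding harness is illustrative.
   */
  public static boolean isReload4jBinding(Class<?> clazz) {
    Logger log = LoggerFactory.getLogger(clazz);
    try {
      Class<?> adapter = Class.forName("org.slf4j.impl.Reload4jLoggerAdapter");
      return adapter.isInstance(log);
    } catch (ClassNotFoundException e) {
      return false; // slf4j-reload4j is not on the classpath
    }
  }

  public static void main(String[] args) {
    System.out.println(isReload4jBinding(BindingProbe.class));
  }
}
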
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
index 98e182236c9..04337929abd 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
@@ -35,6 +35,6 @@ public class TestClassUtil {
     Assert.assertTrue("Containing jar does not exist on file system ",
         jarFile.exists());
     Assert.assertTrue("Incorrect jar file " + containingJar,
-        jarFile.getName().matches("log4j.*[.]jar"));
+        jarFile.getName().matches("reload4j.*[.]jar"));
   }
 }
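
The adjusted assertion leans on org.apache.hadoop.util.ClassUtil to locate the
jar that serves a class. A small usage sketch, assuming hadoop-common and a
reload4j jar on the classpath (the demo class and printed file name are
illustrative):

import java.io.File;

import org.apache.hadoop.util.ClassUtil;
import org.apache.log4j.Logger;

public class FindJarDemo {
  public static void main(String[] args) {
    // After HADOOP-18088, org.apache.log4j.Logger is expected to be
    // served from a reload4j-*.jar rather than a log4j-*.jar.
    String jar = ClassUtil.findContainingJar(Logger.class);
    System.out.println(new File(jar).getName()); // e.g. reload4j-1.2.18.3.jar
  }
}
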
diff --git a/hadoop-common-project/hadoop-kms/pom.xml b/hadoop-common-project/hadoop-kms/pom.xml
index 71be87347a9..986cfe4a00d 100644
--- a/hadoop-common-project/hadoop-kms/pom.xml
+++ b/hadoop-common-project/hadoop-kms/pom.xml
@@ -134,8 +134,8 @@
       <type>test-jar</type>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
@@ -145,7 +145,7 @@
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-common-project/hadoop-minikdc/pom.xml b/hadoop-common-project/hadoop-minikdc/pom.xml
index 746d72c429c..441ac244f39 100644
--- a/hadoop-common-project/hadoop-minikdc/pom.xml
+++ b/hadoop-common-project/hadoop-minikdc/pom.xml
@@ -40,7 +40,7 @@
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-common-project/hadoop-nfs/pom.xml b/hadoop-common-project/hadoop-nfs/pom.xml
index baddec82727..06af6768118 100644
--- a/hadoop-common-project/hadoop-nfs/pom.xml
+++ b/hadoop-common-project/hadoop-nfs/pom.xml
@@ -79,13 +79,13 @@
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
index f85db539eba..e468a2e1547 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
@@ -48,8 +48,8 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
           <artifactId>commons-logging</artifactId>
         </exclusion>
         <exclusion>
-          <groupId>log4j</groupId>
-          <artifactId>log4j</artifactId>
+          <groupId>ch.qos.reload4j</groupId>
+          <artifactId>reload4j</artifactId>
         </exclusion>
       </exclusions>
     </dependency>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index e571d744e54..6470e3aa757 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -179,8 +179,8 @@
       <type>test-jar</type>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
@@ -190,7 +190,7 @@
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <!-- 'mvn dependency:analyze' fails to detect use of this dependency -->
diff --git a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
index 0d8ef6c4c0d..442a3601295 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
@@ -134,8 +134,8 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
@@ -160,7 +160,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>provided</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
index b37a1de11e1..02d5bfae3b6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
@@ -54,8 +54,8 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
           <artifactId>commons-logging</artifactId>
         </exclusion>
         <exclusion>
-          <groupId>log4j</groupId>
-          <artifactId>log4j</artifactId>
+          <groupId>ch.qos.reload4j</groupId>
+          <artifactId>reload4j</artifactId>
         </exclusion>
       </exclusions>
     </dependency>
@@ -71,7 +71,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>provided</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index df5d2cce9a6..8aa86dd3b0e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -118,8 +118,8 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
@@ -162,7 +162,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>provided</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
index df6f081a8da..f862ecd6831 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
@@ -86,7 +86,7 @@
     </dependency>
     <dependency>
      <groupId>org.slf4j</groupId>
-       <artifactId>slf4j-log4j12</artifactId>
+       <artifactId>slf4j-reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
diff --git a/hadoop-mapreduce-project/pom.xml b/hadoop-mapreduce-project/pom.xml
index cba6031809b..b0951176442 100644
--- a/hadoop-mapreduce-project/pom.xml
+++ b/hadoop-mapreduce-project/pom.xml
@@ -88,7 +88,7 @@
     </dependency>
     <dependency>
      <groupId>org.slf4j</groupId>
-       <artifactId>slf4j-log4j12</artifactId>
+       <artifactId>slf4j-reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 66dd3fe6ac6..93ec9dcd84d 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -81,8 +81,8 @@
     <httpcore.version>4.4.13</httpcore.version>
 
     <!-- SLF4J/LOG4J version -->
-    <slf4j.version>1.7.30</slf4j.version>
-    <log4j.version>1.2.17</log4j.version>
+    <slf4j.version>1.7.36</slf4j.version>
+    <reload4j.version>1.2.18.3</reload4j.version>
 
     <!-- com.google.re2j version -->
     <re2j.version>1.1</re2j.version>
@@ -298,12 +298,28 @@
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-common</artifactId>
         <version>${hadoop.version}</version>
+        <exclusions>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-reload4j</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-common</artifactId>
         <version>${hadoop.version}</version>
         <type>test-jar</type>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.hadoop</groupId>
@@ -374,12 +390,24 @@
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-mapreduce-client-core</artifactId>
         <version>${hadoop.version}</version>
+        <exclusions>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-reload4j</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
 
       <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
         <version>${hadoop.version}</version>
+        <exclusions>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-reload4j</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
 
       <dependency>
@@ -953,9 +981,9 @@
         <version>${commons-logging-api.version}</version>
       </dependency>
       <dependency>
-        <groupId>log4j</groupId>
-        <artifactId>log4j</artifactId>
-        <version>${log4j.version}</version>
+        <groupId>ch.qos.reload4j</groupId>
+        <artifactId>reload4j</artifactId>
+        <version>${reload4j.version}</version>
         <exclusions>
           <exclusion>
             <groupId>com.sun.jdmk</groupId>
@@ -1099,7 +1127,7 @@
       </dependency>
       <dependency>
         <groupId>org.slf4j</groupId>
-        <artifactId>slf4j-log4j12</artifactId>
+        <artifactId>slf4j-reload4j</artifactId>
         <version>${slf4j.version}</version>
       </dependency>
       <dependency>
@@ -1305,6 +1333,10 @@
             <groupId>org.apache.kerby</groupId>
             <artifactId>kerby-config</artifactId>
           </exclusion>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
           <exclusion>
             <groupId>org.slf4j</groupId>
             <artifactId>slf4j-api</artifactId>
@@ -1313,6 +1345,10 @@
             <groupId>org.slf4j</groupId>
             <artifactId>slf4j-log4j12</artifactId>
           </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-reload4j</artifactId>
+          </exclusion>
         </exclusions>
       </dependency>
       <dependency>
@@ -1341,6 +1377,14 @@
             <groupId>io.netty</groupId>
             <artifactId>netty-transport-native-epoll</artifactId>
           </exclusion>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
         </exclusions>
       </dependency>
       <dependency>
@@ -1480,6 +1524,10 @@
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
          </exclusion>
+         <exclusion>
+           <groupId>log4j</groupId>
+           <artifactId>log4j</artifactId>
+         </exclusion>
        </exclusions>
      </dependency>
      <dependency>
@@ -1594,6 +1642,10 @@
             <artifactId>jdk.tools</artifactId>
             <groupId>jdk.tools</groupId>
           </exclusion>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
         </exclusions>
       </dependency>
       <dependency>
@@ -1602,6 +1654,16 @@
         <version>${hbase.version}</version>
         <scope>test</scope>
         <classifier>tests</classifier>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.hbase</groupId>
@@ -1619,6 +1681,28 @@
         <groupId>org.apache.hbase</groupId>
         <artifactId>hbase-server</artifactId>
         <version>${hbase.version}</version>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+        </exclusions>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.hbase</groupId>
+        <artifactId>hbase-server</artifactId>
+        <version>${hbase.version}</version>
+        <scope>test</scope>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.hbase</groupId>
@@ -1626,6 +1710,16 @@
         <version>${hbase.version}</version>
         <scope>test</scope>
         <classifier>tests</classifier>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.hbase</groupId>
@@ -1650,6 +1744,14 @@
             <artifactId>jdk.tools</artifactId>
             <groupId>jdk.tools</groupId>
           </exclusion>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
         </exclusions>
         </dependency>
         <dependency>
@@ -2160,6 +2262,9 @@
                     <exclude>com.sun.jersey.jersey-test-framework:*</exclude>
                     <exclude>com.google.inject:guice</exclude>
                     <exclude>org.ow2.asm:asm</exclude>
+
+                    <exclude>org.slf4j:slf4j-log4j12</exclude>
+                    <exclude>log4j:log4j</exclude>
                   </excludes>
                   <includes>
                     <!-- for JDK 8 support -->
diff --git a/hadoop-tools/hadoop-azure/pom.xml b/hadoop-tools/hadoop-azure/pom.xml
index c8c5cc37742..6eb7f98c4d6 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -245,8 +245,8 @@
     </dependency>
 
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>test</scope>
     </dependency>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
index 387d4a97417..cb2a32d70bf 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
@@ -46,8 +46,8 @@
     </dependency>
 
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop.thirdparty</groupId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml
index 02b6b7124dc..fb8cc764f98 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml
@@ -118,8 +118,8 @@
     </dependency>
 
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
index 368a0251aed..6977afde460 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
@@ -47,8 +47,8 @@
       <artifactId>commons-cli</artifactId>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>org.eclipse.jetty.websocket</groupId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
index 63ed238ed27..77c493d3ca6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
@@ -164,8 +164,8 @@
       <artifactId>jersey-guice</artifactId>
     </dependency>
     <dependency>
-     <groupId>log4j</groupId>
-     <artifactId>log4j</artifactId>
+     <groupId>ch.qos.reload4j</groupId>
+     <artifactId>reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>com.fasterxml.jackson.core</groupId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
index 68574b45703..40e5b7a0f04 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
@@ -160,8 +160,8 @@
       <artifactId>hadoop-shaded-guava</artifactId>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
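
For downstream consumers, the dependency swap above is source-compatible: reload4j is a drop-in fork of log4j 1.2 that keeps the org.apache.log4j package, so code written against the log4j 1.x API needs no changes once the Maven coordinates move from log4j:log4j to ch.qos.reload4j:reload4j. A minimal sketch (not part of the patch; the class name is illustrative) of code that compiles and runs unchanged against either artifact:

```java
// Compiles against both log4j:log4j 1.2.x and ch.qos.reload4j:reload4j,
// because reload4j preserves the org.apache.log4j API and package names.
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Logger;

public class Log4jApiCheck {
    private static final Logger LOG = Logger.getLogger(Log4jApiCheck.class);

    public static void main(String[] args) {
        BasicConfigurator.configure(); // console appender, so the line below prints
        LOG.info("logging via the org.apache.log4j API");
    }
}
```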




[hadoop] 03/16: Fix thread safety of EC decoding during concurrent preads (#3881)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 089a754fec648f08c24bbd7bd407488970abcd2b
Author: daimin <da...@outlook.com>
AuthorDate: Fri Feb 11 10:20:00 2022 +0800

    Fix thread safety of EC decoding during concurrent preads (#3881)
    
    (cherry picked from commit 0e74f1e467fde9622af4eb8f18312583d2354c0f)
---
 .../apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java    | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
index 249930ebe3f..2ebe94b0385 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
@@ -81,7 +81,7 @@ public abstract class RawErasureDecoder {
    * @param outputs output buffers to put decoded data into according to
    *                erasedIndexes, ready for read after the call
    */
-  public void decode(ByteBuffer[] inputs, int[] erasedIndexes,
+  public synchronized void decode(ByteBuffer[] inputs, int[] erasedIndexes,
                      ByteBuffer[] outputs) throws IOException {
     ByteBufferDecodingState decodingState = new ByteBufferDecodingState(this,
         inputs, erasedIndexes, outputs);
@@ -130,7 +130,7 @@ public abstract class RawErasureDecoder {
    *                erasedIndexes, ready for read after the call
    * @throws IOException if the decoder is closed.
    */
-  public void decode(byte[][] inputs, int[] erasedIndexes, byte[][] outputs)
+  public synchronized void decode(byte[][] inputs, int[] erasedIndexes, byte[][] outputs)
       throws IOException {
     ByteArrayDecodingState decodingState = new ByteArrayDecodingState(this,
         inputs, erasedIndexes, outputs);
@@ -163,7 +163,7 @@ public abstract class RawErasureDecoder {
    *                erasedIndexes, ready for read after the call
    * @throws IOException if the decoder is closed
    */
-  public void decode(ECChunk[] inputs, int[] erasedIndexes,
+  public synchronized void decode(ECChunk[] inputs, int[] erasedIndexes,
                      ECChunk[] outputs) throws IOException {
     ByteBuffer[] newInputs = CoderUtil.toBuffers(inputs);
     ByteBuffer[] newOutputs = CoderUtil.toBuffers(outputs);
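
The three decode() overloads above operate on mutable per-instance state in the concrete decoder implementations, so unsynchronized concurrent preads could interleave inside a single decoder and corrupt each other's buffers; marking the methods synchronized serializes those calls. A standalone sketch of the pattern (SharedDecoder is a hypothetical stand-in, not Hadoop's RawErasureDecoder):

```java
import java.util.concurrent.CountDownLatch;

public class DecoderRaceSketch {
    // Stand-in for RawErasureDecoder: the decoder keeps per-call scratch
    // state, so unsynchronized concurrent decode() calls would corrupt it.
    static class SharedDecoder {
        private final byte[] scratch = new byte[64]; // shared mutable state

        synchronized void decode(byte[] input, byte[] output) {
            System.arraycopy(input, 0, scratch, 0, input.length);
            System.arraycopy(scratch, 0, output, 0, output.length);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SharedDecoder decoder = new SharedDecoder();
        CountDownLatch start = new CountDownLatch(1);
        Runnable pread = () -> {
            try {
                start.await();
                decoder.decode(new byte[64], new byte[64]);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread t1 = new Thread(pread);
        Thread t2 = new Thread(pread);
        t1.start();
        t2.start();
        start.countDown(); // both preads hit decode() together; the lock serializes them
        t1.join();
        t2.join();
        System.out.println("both decodes completed without interleaving");
    }
}
```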




[hadoop] 05/16: HADOOP-18125. Utility to identify git commit / Jira fixVersion discrepancies for RC preparation (#3991)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 53ea32dd076f54fa7af421ae99422500737dca3c
Author: Viraj Jasani <vj...@apache.org>
AuthorDate: Tue Feb 22 08:30:38 2022 +0530

    HADOOP-18125. Utility to identify git commit / Jira fixVersion discrepancies for RC preparation (#3991)
    
    Signed-off-by: Wei-Chiu Chuang <we...@apache.org>
    (cherry picked from commit 697e5d463640a7107a622262eb2d333d0458fd8b)
---
 dev-support/git-jira-validation/README.md          | 134 +++++++++++++++++++++
 .../git_jira_fix_version_check.py                  | 118 ++++++++++++++++++
 dev-support/git-jira-validation/requirements.txt   |  18 +++
 3 files changed, 270 insertions(+)

diff --git a/dev-support/git-jira-validation/README.md b/dev-support/git-jira-validation/README.md
new file mode 100644
index 00000000000..308c54228d1
--- /dev/null
+++ b/dev-support/git-jira-validation/README.md
@@ -0,0 +1,134 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+Apache Hadoop Git/Jira FixVersion validation
+============================================================
+
+Git commits in Apache Hadoop contain a Jira number of the format
+HADOOP-XXXX, HDFS-XXXX, YARN-XXXX or MAPREDUCE-XXXX.
+While creating a release candidate, we also include a changelist,
+which can be identified from the Fixed/Closed Jiras that carry the
+correct fix versions. However, we sometimes find inconsistencies
+between the fixed Jiras and the Git commit messages.
+
+The git_jira_fix_version_check.py script identifies all git
+commits whose commit messages have any of these issues:
+
+1. The commit is reverted, according to its commit message.
+2. The commit message does not contain a Jira number in the expected format.
+3. The Jira does not have the expected fixVersion.
+4. The Jira has the expected fixVersion but is not yet resolved.
+
+Moreover, the script also finds any resolved Jira with the expected
+fixVersion but no corresponding commit present.
+
+This should be useful as part of RC preparation.
+
+git_jira_fix_version_check.py supports Python 3 and requires
+the jira package to be installed:
+
+```
+$ python3 --version
+Python 3.9.7
+
+$ python3 -m venv ./venv
+
+$ ./venv/bin/pip install -r dev-support/git-jira-validation/requirements.txt
+
+$ ./venv/bin/python dev-support/git-jira-validation/git_jira_fix_version_check.py
+
+```
+
+The script also requires the following inputs:
+```
+1. First commit hash to start excluding commits from history:
+   Usually we can provide the latest commit hash from the last tagged
+   release, so that the script only loops through the commits in the git
+   history before this hash. e.g. for the 3.3.2 release, we can provide
+   git hash: fa4915fdbbbec434ab41786cb17b82938a613f16
+   because this commit bumps the hadoop pom versions to 3.3.2:
+   https://github.com/apache/hadoop/commit/fa4915fdbbbec434ab41786cb17b82938a613f16
+
+2. Fix Version:
+   The exact fixVersion to compare all Jiras' fixVersions against,
+   e.g. for the 3.3.2 release it should be 3.3.2.
+
+3. JIRA Project Name:
+   The exact, case-sensitive name of the project, e.g. HADOOP / OZONE.
+
+4. Path of the project's working dir with the release branch checked out:
+   The path of the project from which to compare git hashes. The local
+   fork should be up to date with upstream, and the expected release
+   branch should be checked out.
+
+5. Jira server url (default url: https://issues.apache.org/jira):
+   The default server value points to the ASF Jira, but the script can
+   be used with other Jira instances too.
+```
+
+
+Example of script execution:
+```
+JIRA Project Name (e.g HADOOP / OZONE etc): HADOOP
+First commit hash to start excluding commits from history: fa4915fdbbbec434ab41786cb17b82938a613f16
+Fix Version: 3.3.2
+Jira server url (default: https://issues.apache.org/jira):
+Path of project's working dir with release branch checked-in: /Users/vjasani/Documents/src/hadoop-3.3/hadoop
+
+Check git status output and verify expected branch
+
+On branch branch-3.3.2
+Your branch is up to date with 'origin/branch-3.3.2'.
+
+nothing to commit, working tree clean
+
+
+Jira/Git commit message diff starting: ##############################################
+Jira not present with version: 3.3.2. 	 Commit: 8cd8e435fb43a251467ca74fadcb14f21a3e8163 HADOOP-17198. Support S3 Access Points  (#3260) (branch-3.3.2) (#3955)
+WARN: Jira not found. 			 Commit: 8af28b7cca5c6020de94e739e5373afc69f399e5 Updated the index as per 3.3.2 release
+WARN: Jira not found. 			 Commit: e42e483d0085aa46543ebcb1196dd155ddb447d0 Make upstream aware of 3.3.1 release
+Commit seems reverted. 			 Commit: 6db1165380cd308fb74c9d17a35c1e57174d1e09 Revert "HDFS-14099. Unknown frame descriptor when decompressing multiple frames (#3836)"
+Commit seems reverted. 			 Commit: 1e3f94fa3c3d4a951d4f7438bc13e6f008f228f4 Revert "HDFS-16333. fix balancer bug when transfer an EC block (#3679)"
+Jira not present with version: 3.3.2. 	 Commit: ce0bc7b473a62a580c1227a4de6b10b64b045d3a HDFS-16344. Improve DirectoryScanner.Stats#toString (#3695)
+Jira not present with version: 3.3.2. 	 Commit: 30f0629d6e6f735c9f4808022f1a1827c5531f75 HDFS-16339. Show the threshold when mover threads quota is exceeded (#3689)
+Jira not present with version: 3.3.2. 	 Commit: e449daccf486219e3050254d667b74f92e8fc476 YARN-11007. Correct words in YARN documents (#3680)
+Commit seems reverted. 			 Commit: 5c189797828e60a3329fd920ecfb99bcbccfd82d Revert "HDFS-16336. Addendum: De-flake TestRollingUpgrade#testRollback (#3686)"
+Jira not present with version: 3.3.2. 	 Commit: 544dffd179ed756bc163e4899e899a05b93d9234 HDFS-16171. De-flake testDecommissionStatus (#3280)
+Jira not present with version: 3.3.2. 	 Commit: c6914b1cb6e4cab8263cd3ae5cc00bc7a8de25de HDFS-16350. Datanode start time should be set after RPC server starts successfully (#3711)
+Jira not present with version: 3.3.2. 	 Commit: 328d3b84dfda9399021ccd1e3b7afd707e98912d HDFS-16336. Addendum: De-flake TestRollingUpgrade#testRollback (#3686)
+Jira not present with version: 3.3.2. 	 Commit: 3ae8d4ccb911c9ababd871824a2fafbb0272c016 HDFS-16336. De-flake TestRollingUpgrade#testRollback (#3686)
+Jira not present with version: 3.3.2. 	 Commit: 15d3448e25c797b7d0d401afdec54683055d4bb5 HADOOP-17975. Fallback to simple auth does not work for a secondary DistributedFileSystem instance. (#3579)
+Jira not present with version: 3.3.2. 	 Commit: dd50261219de71eaa0a1ad28529953e12dfb92e0 YARN-10991. Fix to ignore the grouping "[]" for resourcesStr in parseResourcesString method (#3592)
+Jira not present with version: 3.3.2. 	 Commit: ef462b21bf03b10361d2f9ea7b47d0f7360e517f HDFS-16332. Handle invalid token exception in sasl handshake (#3677)
+WARN: Jira not found. 			 Commit: b55edde7071419410ea5bea4ce6462b980e48f5b Also update hadoop.version to 3.3.2
+...
+...
+...
+Found first commit hash after which git history is redundant. commit: fa4915fdbbbec434ab41786cb17b82938a613f16
+Exiting successfully
+Jira/Git commit message diff completed: ##############################################
+
+Any resolved Jira with fixVersion 3.3.2 but corresponding commit not present
+Starting diff: ##############################################
+HADOOP-18066 is marked resolved with fixVersion 3.3.2 but no corresponding commit found
+HADOOP-17936 is marked resolved with fixVersion 3.3.2 but no corresponding commit found
+Completed diff: ##############################################
+
+
+```
+
diff --git a/dev-support/git-jira-validation/git_jira_fix_version_check.py b/dev-support/git-jira-validation/git_jira_fix_version_check.py
new file mode 100644
index 00000000000..c2e12a13aae
--- /dev/null
+++ b/dev-support/git-jira-validation/git_jira_fix_version_check.py
@@ -0,0 +1,118 @@
+#!/usr/bin/env python3
+############################################################################
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+############################################################################
+"""An application to assist Release Managers with ensuring that histories in
+Git and fixVersions in JIRA are in agreement. See README.md for a detailed
+explanation.
+"""
+
+
+import os
+import re
+import subprocess
+
+from jira import JIRA
+
+jira_project_name = input("JIRA Project Name (e.g HADOOP / OZONE etc): ") \
+                    or "HADOOP"
+# Define project_jira_keys with - appended. e.g for HADOOP Jiras,
+# project_jira_keys should include HADOOP-, HDFS-, YARN-, MAPREDUCE-
+project_jira_keys = [jira_project_name + '-']
+if jira_project_name == 'HADOOP':
+    project_jira_keys.append('HDFS-')
+    project_jira_keys.append('YARN-')
+    project_jira_keys.append('MAPREDUCE-')
+
+first_exclude_commit_hash = input("First commit hash to start excluding commits from history: ")
+fix_version = input("Fix Version: ")
+
+jira_server_url = input(
+    "Jira server url (default: https://issues.apache.org/jira): ") \
+        or "https://issues.apache.org/jira"
+
+jira = JIRA(server=jira_server_url)
+
+local_project_dir = input("Path of project's working dir with release branch checked-in: ")
+os.chdir(local_project_dir)
+
+GIT_STATUS_MSG = subprocess.check_output(['git', 'status']).decode("utf-8")
+print('\nCheck git status output and verify expected branch\n')
+print(GIT_STATUS_MSG)
+
+print('\nJira/Git commit message diff starting: ##############################################')
+
+issue_set_from_commit_msg = set()
+
+for commit in subprocess.check_output(['git', 'log', '--pretty=oneline']).decode(
+        "utf-8").splitlines():
+    if commit.startswith(first_exclude_commit_hash):
+        print("Found first commit hash after which git history is redundant. commit: "
+              + first_exclude_commit_hash)
+        print("Exiting successfully")
+        break
+    if re.search('revert', commit, re.IGNORECASE):
+        print("Commit seems reverted. \t\t\t Commit: " + commit)
+        continue
+    ACTUAL_PROJECT_JIRA = None
+    for project_jira in project_jira_keys:
+        if project_jira in commit:
+            ACTUAL_PROJECT_JIRA = project_jira
+            break
+    if not ACTUAL_PROJECT_JIRA:
+        print("WARN: Jira not found. \t\t\t Commit: " + commit)
+        continue
+    JIRA_NUM = ''
+    for c in commit.split(ACTUAL_PROJECT_JIRA)[1]:
+        if c.isdigit():
+            JIRA_NUM = JIRA_NUM + c
+        else:
+            break
+    issue = jira.issue(ACTUAL_PROJECT_JIRA + JIRA_NUM)
+    EXPECTED_FIX_VERSION = False
+    for version in issue.fields.fixVersions:
+        if version.name == fix_version:
+            EXPECTED_FIX_VERSION = True
+            break
+    if not EXPECTED_FIX_VERSION:
+        print("Jira not present with version: " + fix_version + ". \t Commit: " + commit)
+        continue
+    if issue.fields.status is None or issue.fields.status.name not in ('Resolved', 'Closed'):
+        print("Jira is not resolved yet? \t\t Commit: " + commit)
+    else:
+        # This means Jira corresponding to current commit message is resolved with expected
+        # fixVersion.
+        # This is no-op by default, if needed, convert to print statement.
+        issue_set_from_commit_msg.add(ACTUAL_PROJECT_JIRA + JIRA_NUM)
+
+print('Jira/Git commit message diff completed: ##############################################')
+
+print('\nAny resolved Jira with fixVersion ' + fix_version
+      + ' but corresponding commit not present')
+print('Starting diff: ##############################################')
+all_issues_with_fix_version = jira.search_issues(
+    'project=' + jira_project_name + ' and status in (Resolved,Closed) and fixVersion='
+    + fix_version)
+
+for issue in all_issues_with_fix_version:
+    if issue.key not in issue_set_from_commit_msg:
+        print(issue.key + ' is marked resolved with fixVersion ' + fix_version
+            + ' but no corresponding commit found')
+
+print('Completed diff: ##############################################')
diff --git a/dev-support/git-jira-validation/requirements.txt b/dev-support/git-jira-validation/requirements.txt
new file mode 100644
index 00000000000..ae7535a119f
--- /dev/null
+++ b/dev-support/git-jira-validation/requirements.txt
@@ -0,0 +1,18 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+jira==3.1.1




[hadoop] 02/16: HDFS-16437 ReverseXML processor doesn't accept XML files without the … (#3926)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 6534f0d4fdea48a3c8dd7df12b0048fdf3b3c233
Author: singer-bin <m1...@163.com>
AuthorDate: Sun Feb 6 13:05:57 2022 +0800

    HDFS-16437 ReverseXML processor doesn't accept XML files without the … (#3926)
    
    (cherry picked from commit 125e3b616040b4f98956aa946cc51e99f7d596c2)
    
    Change-Id: I03e4f2af17f0e4a8245c9c2c8ea1cb2cb41f777a
---
 .../OfflineImageReconstructor.java                 |  4 +++
 .../offlineImageViewer/TestOfflineImageViewer.java | 42 +++++++++++++++++++---
 2 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java
index 9ad4b090649..203bcc13284 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java
@@ -1761,6 +1761,10 @@ class OfflineImageReconstructor {
       XMLEvent ev = expectTag("[section header]", true);
       if (ev.getEventType() == XMLStreamConstants.END_ELEMENT) {
         if (ev.asEndElement().getName().getLocalPart().equals("fsimage")) {
+          if (unprocessedSections.size() == 1 &&
+              unprocessedSections.contains(SnapshotDiffSectionProcessor.NAME)) {
+            break;
+          }
           throw new IOException("FSImage XML ended prematurely, without " +
               "including section(s) " + StringUtils.join(", ",
               unprocessedSections));
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
index 7bf3bfc1f8e..8980e18b68e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
@@ -1122,17 +1122,17 @@ public class TestOfflineImageViewer {
     LOG.info("Creating reverseImage.xml=" + reverseImageXml.getAbsolutePath() +
         ", reverseImage=" + reverseImage.getAbsolutePath() +
         ", reverseImage2Xml=" + reverseImage2Xml.getAbsolutePath());
-    if (OfflineImageViewerPB.run(new String[] { "-p", "XML",
+    if (OfflineImageViewerPB.run(new String[] {"-p", "XML",
          "-i", originalFsimage.getAbsolutePath(),
          "-o", reverseImageXml.getAbsolutePath() }) != 0) {
       throw new IOException("oiv returned failure creating first XML file.");
     }
-    if (OfflineImageViewerPB.run(new String[] { "-p", "ReverseXML",
+    if (OfflineImageViewerPB.run(new String[] {"-p", "ReverseXML",
           "-i", reverseImageXml.getAbsolutePath(),
           "-o", reverseImage.getAbsolutePath() }) != 0) {
       throw new IOException("oiv returned failure recreating fsimage file.");
     }
-    if (OfflineImageViewerPB.run(new String[] { "-p", "XML",
+    if (OfflineImageViewerPB.run(new String[] {"-p", "XML",
         "-i", reverseImage.getAbsolutePath(),
         "-o", reverseImage2Xml.getAbsolutePath() }) != 0) {
       throw new IOException("oiv returned failure creating second " +
@@ -1141,7 +1141,7 @@ public class TestOfflineImageViewer {
     // The XML file we wrote based on the re-created fsimage should be the
     // same as the one we dumped from the original fsimage.
     Assert.assertEquals("",
-      GenericTestUtils.getFilesDiff(reverseImageXml, reverseImage2Xml));
+        GenericTestUtils.getFilesDiff(reverseImageXml, reverseImage2Xml));
   }
 
   /**
@@ -1176,6 +1176,40 @@ public class TestOfflineImageViewer {
     }
   }
 
+  /**
+   * Tests that the ReverseXML processor doesn't accept XML files without the SnapshotDiffSection.
+   */
+  @Test
+  public void testReverseXmlWithoutSnapshotDiffSection() throws Throwable {
+    File imageWSDS = new File(tempDir, "imageWithoutSnapshotDiffSection.xml");
+    try (PrintWriter writer = new PrintWriter(imageWSDS, "UTF-8")) {
+      writer.println("<?xml version=\"1.0\"?>");
+      writer.println("<fsimage>");
+      writer.println("<version>");
+      writer.println("<layoutVersion>-66</layoutVersion>");
+      writer.println("<onDiskVersion>1</onDiskVersion>");
+      writer.println("<oivRevision>545bbef596c06af1c3c8dca1ce29096a64608478</oivRevision>");
+      writer.println("</version>");
+      writer.println("<FileUnderConstructionSection></FileUnderConstructionSection>");
+      writer.println("<ErasureCodingSection></ErasureCodingSection>");
+      writer.println("<INodeSection><lastInodeId>91488</lastInodeId><numInodes>0</numInodes>" +
+              "</INodeSection>");
+      writer.println("<SecretManagerSection><currentId>90</currentId><tokenSequenceNumber>35" +
+              "</tokenSequenceNumber><numDelegationKeys>0</numDelegationKeys><numTokens>0" +
+              "</numTokens></SecretManagerSection>");
+      writer.println("<INodeReferenceSection></INodeReferenceSection>");
+      writer.println("<SnapshotSection><snapshotCounter>0</snapshotCounter><numSnapshots>0" +
+              "</numSnapshots></SnapshotSection>");
+      writer.println("<NameSection><namespaceId>326384987</namespaceId></NameSection>");
+      writer.println("<CacheManagerSection><nextDirectiveId>1</nextDirectiveId><numPools>0" +
+              "</numPools><numDirectives>0</numDirectives></CacheManagerSection>");
+      writer.println("<INodeDirectorySection></INodeDirectorySection>");
+      writer.println("</fsimage>");
+    }
+    OfflineImageReconstructor.run(imageWSDS.getAbsolutePath(),
+        imageWSDS.getAbsolutePath() + ".out");
+  }
+
   @Test
   public void testFileDistributionCalculatorForException() throws Exception {
     File fsimageFile = null;




[hadoop] 15/16: HDFS-16507. [SBN read] Avoid purging edit log which is in progress (#4082)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b43333f7070386122091638cacd1e2335830c1b7
Author: litao <to...@gmail.com>
AuthorDate: Thu Mar 31 14:01:48 2022 +0800

    HDFS-16507. [SBN read] Avoid purging edit log which is in progress (#4082)
---
 .../org/apache/hadoop/hdfs/server/namenode/FSEditLog.java     | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
index 8b34dfea954..c3e31bcba69 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
@@ -1512,11 +1512,12 @@ public class FSEditLog implements LogsPurgeable {
     if (!isOpenForWrite()) {
       return;
     }
-    
-    assert curSegmentTxId == HdfsServerConstants.INVALID_TXID || // on format this is no-op
-      minTxIdToKeep <= curSegmentTxId :
-      "cannot purge logs older than txid " + minTxIdToKeep +
-      " when current segment starts at " + curSegmentTxId;
+
+    Preconditions.checkArgument(
+        curSegmentTxId == HdfsServerConstants.INVALID_TXID || // on format this is no-op
+        minTxIdToKeep <= curSegmentTxId,
+        "cannot purge logs older than txid " + minTxIdToKeep +
+        " when current segment starts at " + curSegmentTxId);
     if (minTxIdToKeep == 0) {
       return;
     }
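
The replacement matters operationally: a Java assert is stripped unless the JVM runs with -ea, so the old guard was a no-op on typical production NameNodes, while Preconditions.checkArgument always enforces the invariant. A plain-Java sketch of the equivalent check (illustrative only; it avoids the Guava dependency and omits the INVALID_TXID special case):

```java
// What Preconditions.checkArgument does, spelled out: the guard always
// runs, unlike an assert, which the JVM skips unless started with -ea.
public class GuardExample {
    static void purgeLogsOlderThan(long minTxIdToKeep, long curSegmentTxId) {
        if (minTxIdToKeep > curSegmentTxId) {
            throw new IllegalArgumentException(
                "cannot purge logs older than txid " + minTxIdToKeep
                + " when current segment starts at " + curSegmentTxId);
        }
        // purging would proceed here
    }

    public static void main(String[] args) {
        purgeLogsOlderThan(5, 10);  // ok: keeps the in-progress segment
        purgeLogsOlderThan(10, 5);  // always throws, even without -ea
    }
}
```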




[hadoop] 07/16: YARN-11075. Explicitly declare serialVersionUID in LogMutation class. Contributed by Benjamin Teke

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 8399b25ff8cd0b9750ab5bccae7c1cd528c27e7a
Author: Szilard Nemeth <sn...@apache.org>
AuthorDate: Tue Mar 1 18:05:04 2022 +0100

    YARN-11075. Explicitly declare serialVersionUID in LogMutation class. Contributed by Benjamin Teke
---
 .../resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java  | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java
index 4480bc34dcc..425d63f6a66 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java
@@ -53,6 +53,7 @@ public abstract class YarnConfigurationStore {
    * audit logging and recovery.
    */
   public static class LogMutation implements Serializable {
+    private static final long serialVersionUID = 7754046036718906356L;
     private Map<String, String> updates;
     private String user;
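
Pinning serialVersionUID explicitly means the JVM no longer derives the id from the class's shape, so mutations serialized before a compatible class change (say, a new field) remain readable afterwards. A self-contained sketch of the round trip (class and field names here are illustrative, not Hadoop's):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerialVersionDemo {
    // Illustrative stand-in for a mutation record; not Hadoop's LogMutation.
    static class Mutation implements Serializable {
        private static final long serialVersionUID = 1L; // pinned explicitly
        String user = "alice";
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new Mutation());
        }
        // With the id pinned, these bytes stay readable even if Mutation
        // later gains a compatible field; an auto-derived id would change
        // and deserialization would fail with InvalidClassException.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            Mutation m = (Mutation) in.readObject();
            System.out.println("round-tripped user=" + m.user);
        }
    }
}
```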
 




[hadoop] 11/16: HDFS-16501. Print the exception when reporting a bad block (#4062)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 6606da9500b8f8eab5d74d2aa0daa10b2f8660e8
Author: qinyuren <14...@qq.com>
AuthorDate: Wed Mar 23 14:03:17 2022 +0800

    HDFS-16501. Print the exception when reporting a bad block (#4062)
    
    Reviewed-by: tomscut <li...@bigo.sg>
    (cherry picked from commit 45ce1cce50c3ff65676d946e96bbc7846ad3131a)
---
 .../main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
index 0367b4a7aa3..2c666a38317 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
@@ -293,7 +293,7 @@ public class VolumeScanner extends Thread {
             volume, block);
         return;
       }
-      LOG.warn("Reporting bad {} on {}", block, volume);
+      LOG.warn("Reporting bad {} on {}", block, volume, e);
       scanner.datanode.handleBadBlock(block, e, true);
     }
   }
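
The one-line change leans on an SLF4J convention: when the last argument is a Throwable and no placeholder is left for it, the logger appends its stack trace after the formatted message. A runnable sketch (the values are illustrative; an SLF4J binding such as slf4j-simple must be on the classpath for output to appear):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jThrowableDemo {
    private static final Logger LOG =
        LoggerFactory.getLogger(Slf4jThrowableDemo.class);

    public static void main(String[] args) {
        Exception e = new IllegalStateException("checksum mismatch");
        // Two placeholders, three arguments: the trailing Throwable is not
        // formatted into the message; its stack trace is printed instead.
        LOG.warn("Reporting bad {} on {}", "blk_123", "/data/vol1", e);
    }
}
```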

