Posted to common-commits@hadoop.apache.org by st...@apache.org on 2022/04/14 16:19:43 UTC

[hadoop] branch branch-3.3.3 updated (2fc14c3a48f -> 877ef944f96)

This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


 discard 2fc14c3a48f HADOOP-18088. Replace log4j 1.x with reload4j. (#4052)
 discard 171e6450957 HDFS-16507. [SBN read] Avoid purging edit log which is in progress (#4082)
 discard 422e76cb092 MAPREDUCE-7373. Building MapReduce NativeTask fails on Fedora 34+ (#4120)
 discard 47452d86f5c HDFS-16355. Improve the description of dfs.block.scanner.volume.bytes.per.second (#3724)
 discard d4282e1322b YARN-10720. YARN WebAppProxyServlet should support connection timeout to prevent proxy server from hanging. Contributed by Qi Zhu.
 discard b61c8eb7c06 HDFS-16501. Print the exception when reporting a bad block (#4062)
 discard 2a63322c1da HADOOP-18155. Refactor tests in TestFileUtil (#4063)
 discard 722823a68d1 HDFS-16428. Source path with storagePolicy cause wrong typeConsumed while rename (#3898). Contributed by lei w.
 discard 2c5cb7bd0db YARN-11014. YARN incorrectly validates maximum capacity resources on the validation API. Contributed by Benjamin Teke
 discard dbbab88f383 YARN-11075. Explicitly declare serialVersionUID in LogMutation class. Contributed by Benjamin Teke
 discard fe3a4d564ce HDFS-11041. Unable to unregister FsDatasetState MBean if DataNode is shutdown twice. Contributed by Wei-Chiu Chuang.
 discard c6df4bdc655 HADOOP-18125. Utility to identify git commit / Jira fixVersion discrepancies for RC preparation (#3991)
 discard a22da94623e HADOOP-18109. Ensure that default permissions of directories under internal ViewFS directories are the same as directories on target filesystems. Contributed by Chentao Yu. (3953)
 discard d4e0f3448cb HDFS-16422. Fix thread safety of EC decoding during concurrent preads (#3881)
 discard 0d261c5e686 HDFS-16437 ReverseXML processor doesn't accept XML files without the … (#3926)
     new a1c06735265 HADOOP-18198. Preparing for 3.3.3 release
     new 980fab91686 HDFS-16437 ReverseXML processor doesn't accept XML files without the … (#3926)
     new 686a934a5ec HDFS-16422. Fix thread safety of EC decoding during concurrent preads (#3881)
     new 38d448e40b2 HADOOP-18109. Ensure that default permissions of directories under internal ViewFS directories are the same as directories on target filesystems. Contributed by Chentao Yu. (3953)
     new c6b9fcfd6c5 HADOOP-18125. Utility to identify git commit / Jira fixVersion discrepancies for RC preparation (#3991)
     new 51b3a5b22c6 HDFS-11041. Unable to unregister FsDatasetState MBean if DataNode is shutdown twice. Contributed by Wei-Chiu Chuang.
     new a981df3aecf YARN-11075. Explicitly declare serialVersionUID in LogMutation class. Contributed by Benjamin Teke
     new cb40b7d7414 YARN-11014. YARN incorrectly validates maximum capacity resources on the validation API. Contributed by Benjamin Teke
     new 5c3fdf0f4ba HDFS-16428. Source path with storagePolicy cause wrong typeConsumed while rename (#3898). Contributed by lei w.
     new fd96d5c2d52 HADOOP-18155. Refactor tests in TestFileUtil (#4063)
     new 376904e4229 HDFS-16501. Print the exception when reporting a bad block (#4062)
     new 52aba525c3a YARN-10720. YARN WebAppProxyServlet should support connection timeout to prevent proxy server from hanging. Contributed by Qi Zhu.
     new 63c07519de1 HDFS-16355. Improve the description of dfs.block.scanner.volume.bytes.per.second (#3724)
     new 73c459db0cf MAPREDUCE-7373. Building MapReduce NativeTask fails on Fedora 34+ (#4120)
     new d5845474e25 HDFS-16507. [SBN read] Avoid purging edit log which is in progress (#4082)
     new 877ef944f96 HADOOP-18088. Replace log4j 1.x with reload4j. (#4052)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (2fc14c3a48f)
            \
             N -- N -- N   refs/heads/branch-3.3.3 (877ef944f96)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 16 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
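
For readers less familiar with the mechanics: a non-fast-forward (force)
push is what replaces the O revisions with the N revisions in the graph
above. The sketch below, written against the JGit library, is purely
illustrative of that operation -- the branch was presumably rewritten with
the ordinary git CLI, and the repository path and the ref passed to
setRef() are placeholders (the ref stands in for the common base B).

    import java.io.File;
    import org.eclipse.jgit.api.Git;
    import org.eclipse.jgit.api.ResetCommand.ResetType;
    import org.eclipse.jgit.transport.RefSpec;

    public class ForcePushSketch {
      public static void main(String[] args) throws Exception {
        try (Git git = Git.open(new File("/path/to/hadoop"))) {
          // Move branch-3.3.3 back to the common base B, abandoning the O commits.
          git.checkout().setName("branch-3.3.3").call();
          git.reset().setMode(ResetType.HARD).setRef("<common-base-B>").call();
          // ... create the N commits here (e.g. cherry-picks) ...
          // The leading '+' in the refspec permits the non-fast-forward update.
          git.push()
             .setRemote("origin")
             .setRefSpecs(new RefSpec("+refs/heads/branch-3.3.3:refs/heads/branch-3.3.3"))
             .call();
        }
      }
    }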


Summary of changes:
 hadoop-assemblies/pom.xml                                             | 4 ++--
 hadoop-build-tools/pom.xml                                            | 2 +-
 hadoop-client-modules/hadoop-client-api/pom.xml                       | 4 ++--
 hadoop-client-modules/hadoop-client-check-invariants/pom.xml          | 4 ++--
 hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml     | 4 ++--
 hadoop-client-modules/hadoop-client-integration-tests/pom.xml         | 4 ++--
 hadoop-client-modules/hadoop-client-minicluster/pom.xml               | 4 ++--
 hadoop-client-modules/hadoop-client-runtime/pom.xml                   | 4 ++--
 hadoop-client-modules/hadoop-client/pom.xml                           | 4 ++--
 hadoop-client-modules/pom.xml                                         | 2 +-
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml             | 4 ++--
 hadoop-cloud-storage-project/hadoop-cos/pom.xml                       | 2 +-
 hadoop-cloud-storage-project/pom.xml                                  | 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml                      | 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml                    | 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml                             | 4 ++--
 hadoop-common-project/hadoop-common/pom.xml                           | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml                              | 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml                          | 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml                              | 4 ++--
 hadoop-common-project/hadoop-registry/pom.xml                         | 4 ++--
 hadoop-common-project/pom.xml                                         | 4 ++--
 hadoop-dist/pom.xml                                                   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml                        | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml                        | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml                 | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml                           | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml                           | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml                               | 4 ++--
 hadoop-hdfs-project/pom.xml                                           | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml       | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml    | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml      | 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml                        | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml        | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml | 4 ++--
 .../hadoop-mapreduce-client-nativetask/pom.xml                        | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml              | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml            | 4 ++--
 hadoop-mapreduce-project/pom.xml                                      | 4 ++--
 hadoop-maven-plugins/pom.xml                                          | 2 +-
 hadoop-minicluster/pom.xml                                            | 4 ++--
 hadoop-project-dist/pom.xml                                           | 4 ++--
 hadoop-project/pom.xml                                                | 4 ++--
 hadoop-tools/hadoop-aliyun/pom.xml                                    | 2 +-
 hadoop-tools/hadoop-archive-logs/pom.xml                              | 4 ++--
 hadoop-tools/hadoop-archives/pom.xml                                  | 4 ++--
 hadoop-tools/hadoop-aws/pom.xml                                       | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml                            | 2 +-
 hadoop-tools/hadoop-azure/pom.xml                                     | 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml                                  | 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml                                    | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen/pom.xml   | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-dist/pom.xml       | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/pom.xml      | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/pom.xml   | 4 ++--
 hadoop-tools/hadoop-dynamometer/pom.xml                               | 4 ++--
 hadoop-tools/hadoop-extras/pom.xml                                    | 4 ++--
 hadoop-tools/hadoop-fs2img/pom.xml                                    | 4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml                                   | 4 ++--
 hadoop-tools/hadoop-kafka/pom.xml                                     | 4 ++--
 hadoop-tools/hadoop-openstack/pom.xml                                 | 4 ++--
 hadoop-tools/hadoop-pipes/pom.xml                                     | 4 ++--
 hadoop-tools/hadoop-resourceestimator/pom.xml                         | 2 +-
 hadoop-tools/hadoop-rumen/pom.xml                                     | 4 ++--
 hadoop-tools/hadoop-sls/pom.xml                                       | 4 ++--
 hadoop-tools/hadoop-streaming/pom.xml                                 | 4 ++--
 hadoop-tools/hadoop-tools-dist/pom.xml                                | 4 ++--
 hadoop-tools/pom.xml                                                  | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml               | 4 ++--
 .../hadoop-yarn-applications-catalog-docker/pom.xml                   | 2 +-
 .../hadoop-yarn-applications-catalog-webapp/pom.xml                   | 2 +-
 .../hadoop-yarn-applications/hadoop-yarn-applications-catalog/pom.xml | 2 +-
 .../hadoop-yarn-applications-distributedshell/pom.xml                 | 4 ++--
 .../hadoop-yarn-applications-mawo-core/pom.xml                        | 2 +-
 .../hadoop-yarn-applications/hadoop-yarn-applications-mawo/pom.xml    | 2 +-
 .../hadoop-yarn-applications-unmanaged-am-launcher/pom.xml            | 4 ++--
 .../hadoop-yarn-services/hadoop-yarn-services-api/pom.xml             | 2 +-
 .../hadoop-yarn-services/hadoop-yarn-services-core/pom.xml            | 2 +-
 .../hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/pom.xml | 2 +-
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/pom.xml      | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml            | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml            | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml               | 2 +-
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml          | 4 ++--
 .../hadoop-yarn-server-applicationhistoryservice/pom.xml              | 4 ++--
 .../hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml  | 4 ++--
 .../hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml         | 4 ++--
 .../hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml     | 4 ++--
 .../hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/pom.xml  | 4 ++--
 .../hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/pom.xml  | 4 ++--
 .../hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml   | 4 ++--
 .../hadoop-yarn-server-timeline-pluginstorage/pom.xml                 | 4 ++--
 .../hadoop-yarn-server-timelineservice-documentstore/pom.xml          | 2 +-
 .../hadoop-yarn-server-timelineservice-hbase-tests/pom.xml            | 4 ++--
 .../hadoop-yarn-server-timelineservice-hbase-client/pom.xml           | 2 +-
 .../hadoop-yarn-server-timelineservice-hbase-common/pom.xml           | 4 ++--
 .../hadoop-yarn-server-timelineservice-hbase-server-1/pom.xml         | 4 ++--
 .../hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml         | 4 ++--
 .../hadoop-yarn-server-timelineservice-hbase-server/pom.xml           | 4 ++--
 .../hadoop-yarn-server-timelineservice-hbase/pom.xml                  | 4 ++--
 .../hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml     | 4 ++--
 .../hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml           | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml            | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/pom.xml              | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml                | 4 ++--
 hadoop-yarn-project/hadoop-yarn/pom.xml                               | 4 ++--
 hadoop-yarn-project/pom.xml                                           | 4 ++--
 pom.xml                                                               | 4 ++--
 111 files changed, 203 insertions(+), 203 deletions(-)


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[hadoop] 14/16: MAPREDUCE-7373. Building MapReduce NativeTask fails on Fedora 34+ (#4120)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 73c459db0cfd21e697376c72b6ae5c4bb49eab1f
Author: Kengo Seki <se...@apache.org>
AuthorDate: Wed Mar 30 22:47:45 2022 +0900

    MAPREDUCE-7373. Building MapReduce NativeTask fails on Fedora 34+ (#4120)
    
    (cherry picked from commit dc4a680da8bcacf152cc8638d86dd171a7901245)
    
    Change-Id: Ia9ad34b5c3c0f767169fc48a1866c04ff73b1093
---
 .../hadoop-mapreduce-client-nativetask/src/CMakeLists.txt                | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
index ae3b9c6029e..4c32838afb0 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
@@ -27,6 +27,7 @@ set(GTEST_SRC_DIR ${CMAKE_SOURCE_DIR}/../../../../hadoop-common-project/hadoop-c
 # Add extra compiler and linker flags.
 # -Wno-sign-compare
 hadoop_add_compiler_flags("-DNDEBUG -DSIMPLE_MEMCPY -fno-strict-aliasing -fsigned-char")
+set(CMAKE_CXX_STANDARD 11)
 
 # Source location.
 set(SRC main/native)



[hadoop] 02/16: HDFS-16437 ReverseXML processor doesn't accept XML files without the … (#3926)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 980fab9168670f4c9f2fb596ea28e7613bb39d7b
Author: singer-bin <m1...@163.com>
AuthorDate: Sun Feb 6 13:05:57 2022 +0800

    HDFS-16437 ReverseXML processor doesn't accept XML files without the … (#3926)
    
    (cherry picked from commit 125e3b616040b4f98956aa946cc51e99f7d596c2)
    
    Change-Id: I03e4f2af17f0e4a8245c9c2c8ea1cb2cb41f777a
---
 .../OfflineImageReconstructor.java                 |  4 +++
 .../offlineImageViewer/TestOfflineImageViewer.java | 42 +++++++++++++++++++---
 2 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java
index 9ad4b090649..203bcc13284 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java
@@ -1761,6 +1761,10 @@ class OfflineImageReconstructor {
       XMLEvent ev = expectTag("[section header]", true);
       if (ev.getEventType() == XMLStreamConstants.END_ELEMENT) {
         if (ev.asEndElement().getName().getLocalPart().equals("fsimage")) {
+          if(unprocessedSections.size() == 1 && unprocessedSections.contains
+                  (SnapshotDiffSectionProcessor.NAME)){
+            break;
+          }
           throw new IOException("FSImage XML ended prematurely, without " +
               "including section(s) " + StringUtils.join(", ",
               unprocessedSections));
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
index 7bf3bfc1f8e..8980e18b68e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
@@ -1122,17 +1122,17 @@ public class TestOfflineImageViewer {
     LOG.info("Creating reverseImage.xml=" + reverseImageXml.getAbsolutePath() +
         ", reverseImage=" + reverseImage.getAbsolutePath() +
         ", reverseImage2Xml=" + reverseImage2Xml.getAbsolutePath());
-    if (OfflineImageViewerPB.run(new String[] { "-p", "XML",
+    if (OfflineImageViewerPB.run(new String[] {"-p", "XML",
          "-i", originalFsimage.getAbsolutePath(),
          "-o", reverseImageXml.getAbsolutePath() }) != 0) {
       throw new IOException("oiv returned failure creating first XML file.");
     }
-    if (OfflineImageViewerPB.run(new String[] { "-p", "ReverseXML",
+    if (OfflineImageViewerPB.run(new String[] {"-p", "ReverseXML",
           "-i", reverseImageXml.getAbsolutePath(),
           "-o", reverseImage.getAbsolutePath() }) != 0) {
       throw new IOException("oiv returned failure recreating fsimage file.");
     }
-    if (OfflineImageViewerPB.run(new String[] { "-p", "XML",
+    if (OfflineImageViewerPB.run(new String[] {"-p", "XML",
         "-i", reverseImage.getAbsolutePath(),
         "-o", reverseImage2Xml.getAbsolutePath() }) != 0) {
       throw new IOException("oiv returned failure creating second " +
@@ -1141,7 +1141,7 @@ public class TestOfflineImageViewer {
     // The XML file we wrote based on the re-created fsimage should be the
     // same as the one we dumped from the original fsimage.
     Assert.assertEquals("",
-      GenericTestUtils.getFilesDiff(reverseImageXml, reverseImage2Xml));
+        GenericTestUtils.getFilesDiff(reverseImageXml, reverseImage2Xml));
   }
 
   /**
@@ -1176,6 +1176,40 @@ public class TestOfflineImageViewer {
     }
   }
 
+  /**
+   * Tests that the ReverseXML processor doesn't accept XML files without the SnapshotDiffSection.
+   */
+  @Test
+  public void testReverseXmlWithoutSnapshotDiffSection() throws Throwable {
+    File imageWSDS = new File(tempDir, "imageWithoutSnapshotDiffSection.xml");
+    try(PrintWriter writer = new PrintWriter(imageWSDS, "UTF-8")) {
+      writer.println("<?xml version=\"1.0\"?>");
+      writer.println("<fsimage>");
+      writer.println("<version>");
+      writer.println("<layoutVersion>-66</layoutVersion>");
+      writer.println("<onDiskVersion>1</onDiskVersion>");
+      writer.println("<oivRevision>545bbef596c06af1c3c8dca1ce29096a64608478</oivRevision>");
+      writer.println("</version>");
+      writer.println("<FileUnderConstructionSection></FileUnderConstructionSection>");
+      writer.println("<ErasureCodingSection></ErasureCodingSection>");
+      writer.println("<INodeSection><lastInodeId>91488</lastInodeId><numInodes>0</numInodes>" +
+              "</INodeSection>");
+      writer.println("<SecretManagerSection><currentId>90</currentId><tokenSequenceNumber>35" +
+              "</tokenSequenceNumber><numDelegationKeys>0</numDelegationKeys><numTokens>0" +
+              "</numTokens></SecretManagerSection>");
+      writer.println("<INodeReferenceSection></INodeReferenceSection>");
+      writer.println("<SnapshotSection><snapshotCounter>0</snapshotCounter><numSnapshots>0" +
+              "</numSnapshots></SnapshotSection>");
+      writer.println("<NameSection><namespaceId>326384987</namespaceId></NameSection>");
+      writer.println("<CacheManagerSection><nextDirectiveId>1</nextDirectiveId><numPools>0" +
+              "</numPools><numDirectives>0</numDirectives></CacheManagerSection>");
+      writer.println("<INodeDirectorySection></INodeDirectorySection>");
+      writer.println("</fsimage>");
+    }
+      OfflineImageReconstructor.run(imageWSDS.getAbsolutePath(),
+              imageWSDS.getAbsolutePath() + ".out");
+  }
+
   @Test
   public void testFileDistributionCalculatorForException() throws Exception {
     File fsimageFile = null;
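
Restated outside the StAX parsing loop, the rule the patch adds is small:
when the root fsimage element closes, the image is now accepted if the only
section still unprocessed is the optional SnapshotDiffSection; anything else
missing still raises the "ended prematurely" IOException. A standalone
sketch of that check, with illustrative names ("SnapshotDiffSection" stands
in for SnapshotDiffSectionProcessor.NAME):

    import java.util.Set;

    public class OptionalSectionCheck {
      // Mirrors the condition added in OfflineImageReconstructor above.
      static boolean acceptable(Set<String> unprocessedSections) {
        return unprocessedSections.size() == 1
            && unprocessedSections.contains("SnapshotDiffSection");
      }

      public static void main(String[] args) {
        System.out.println(acceptable(Set.of("SnapshotDiffSection")));  // true: accept
        System.out.println(acceptable(Set.of("SnapshotDiffSection",
            "INodeSection")));                                          // false: throw
      }
    }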



[hadoop] 04/16: HADOOP-18109. Ensure that default permissions of directories under internal ViewFS directories are the same as directories on target filesystems. Contributed by Chentao Yu. (3953)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 38d448e40b27a0c3a904b3fa4d9ba66044155b25
Author: Chentao Yu <ch...@linkedin.com>
AuthorDate: Thu Apr 15 17:46:40 2021 -0700

    HADOOP-18109. Ensure that default permissions of directories under internal ViewFS directories are the same as directories on target filesystems. Contributed by Chentao Yu. (3953)
    
    (cherry picked from commit 19d90e62fb28539f8c79bbb24f703301489825a6)
---
 .../org/apache/hadoop/fs/viewfs/ViewFileSystem.java   |  5 -----
 .../hadoop/fs/viewfs/TestViewFileSystemHdfs.java      | 19 +++++++++++++++++++
 2 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 7503edd45f4..8f333d1506b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1579,11 +1579,6 @@ public class ViewFileSystem extends FileSystem {
       throw readOnlyMountTable("mkdirs",  dir);
     }
 
-    @Override
-    public boolean mkdirs(Path dir) throws IOException {
-      return mkdirs(dir, null);
-    }
-
     @Override
     public FSDataInputStream open(Path f, int bufferSize)
         throws AccessControlException, FileNotFoundException, IOException {
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
index fcb52577d99..fdc746464f4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
@@ -479,4 +479,23 @@ public class TestViewFileSystemHdfs extends ViewFileSystemBaseTest {
     assertEquals("The owner did not match ", owner, userUgi.getShortUserName());
     otherfs.delete(user1Path, false);
   }
+
+  @Test
+  public void testInternalDirectoryPermissions() throws IOException {
+    LOG.info("Starting testInternalDirectoryPermissions!");
+    Configuration localConf = new Configuration(conf);
+    ConfigUtil.addLinkFallback(
+        localConf, new Path(targetTestRoot, "fallbackDir").toUri());
+    FileSystem fs = FileSystem.get(FsConstants.VIEWFS_URI, localConf);
+    // check that the default permissions on a sub-folder of an internal
+    // directory are the same as those created on non-internal directories.
+    Path subDirOfInternalDir = new Path("/internalDir/dir1");
+    fs.mkdirs(subDirOfInternalDir);
+
+    Path subDirOfRealDir = new Path("/internalDir/linkToDir2/dir1");
+    fs.mkdirs(subDirOfRealDir);
+
+    assertEquals(fs.getFileStatus(subDirOfInternalDir).getPermission(),
+        fs.getFileStatus(subDirOfRealDir).getPermission());
+  }
 }
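
The fix above is a deletion: the internal-directory view's override of the
one-argument mkdirs forced a null permission through to the target
filesystem, bypassing the default that the FileSystem base class would
otherwise supply. A minimal standalone sketch of that override-shadowing
pattern -- illustrative names, not the Hadoop sources:

    /** Stand-in for the FileSystem base class. */
    abstract class BaseFs {
      // The one-argument overload fills in a default permission.
      boolean mkdirs(String dir) { return mkdirs(dir, "rwxr-xr-x (default)"); }
      abstract boolean mkdirs(String dir, String permission);
    }

    /** Stand-in for the ViewFS internal-directory filesystem. */
    class InternalDirFs extends BaseFs {
      // The override this patch removes effectively did the following,
      // replacing the default with null:
      //   @Override boolean mkdirs(String dir) { return mkdirs(dir, null); }

      @Override
      boolean mkdirs(String dir, String permission) {
        System.out.println("mkdir " + dir + " with permission " + permission);
        return true;
      }
    }

    public class ViewFsMkdirsSketch {
      public static void main(String[] args) {
        // With the override gone, the base-class default applies again.
        new InternalDirFs().mkdirs("/internalDir/dir1");
      }
    }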



[hadoop] 03/16: HDFS-16422. Fix thread safety of EC decoding during concurrent preads (#3881)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 686a934a5ec32eb8af6a7a79bae72889fc5bb485
Author: daimin <da...@outlook.com>
AuthorDate: Fri Feb 11 10:20:00 2022 +0800

    HDFS-16422. Fix thread safety of EC decoding during concurrent preads (#3881)
    
    (cherry picked from commit 0e74f1e467fde9622af4eb8f18312583d2354c0f)
    
    Change-Id: If28915934ed6f4ad7a68d280cadc8c563e2daaba
---
 .../apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java    | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
index 249930ebe3f..2ebe94b0385 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
@@ -81,7 +81,7 @@ public abstract class RawErasureDecoder {
    * @param outputs output buffers to put decoded data into according to
    *                erasedIndexes, ready for read after the call
    */
-  public void decode(ByteBuffer[] inputs, int[] erasedIndexes,
+  public synchronized void decode(ByteBuffer[] inputs, int[] erasedIndexes,
                      ByteBuffer[] outputs) throws IOException {
     ByteBufferDecodingState decodingState = new ByteBufferDecodingState(this,
         inputs, erasedIndexes, outputs);
@@ -130,7 +130,7 @@ public abstract class RawErasureDecoder {
    *                erasedIndexes, ready for read after the call
    * @throws IOException if the decoder is closed.
    */
-  public void decode(byte[][] inputs, int[] erasedIndexes, byte[][] outputs)
+  public synchronized void decode(byte[][] inputs, int[] erasedIndexes, byte[][] outputs)
       throws IOException {
     ByteArrayDecodingState decodingState = new ByteArrayDecodingState(this,
         inputs, erasedIndexes, outputs);
@@ -163,7 +163,7 @@ public abstract class RawErasureDecoder {
    *                erasedIndexes, ready for read after the call
    * @throws IOException if the decoder is closed
    */
-  public void decode(ECChunk[] inputs, int[] erasedIndexes,
+  public synchronized void decode(ECChunk[] inputs, int[] erasedIndexes,
                      ECChunk[] outputs) throws IOException {
     ByteBuffer[] newInputs = CoderUtil.toBuffers(inputs);
     ByteBuffer[] newOutputs = CoderUtil.toBuffers(outputs);
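
Why adding synchronized helps: a raw erasure decoder instance keeps
reusable internal buffers, so two positional reads decoding through one
shared decoder can interleave and read each other's scratch state. A
self-contained sketch of that failure mode follows -- an illustrative
class, not Hadoop code; the actual fix is the synchronized keyword shown
in the diff above:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class SharedDecoderRace {
      static class ScratchDecoder {
        private final byte[] scratch = new byte[4];   // shared mutable state

        // Adding 'synchronized' here, as the patch does for decode(), makes
        // the copy-then-sum sequence atomic and removes the race.
        int decode(byte[] input) throws InterruptedException {
          System.arraycopy(input, 0, scratch, 0, input.length);
          Thread.sleep(1);                   // widen the race window
          int sum = 0;
          for (byte b : scratch) {           // may observe the other call's bytes
            sum += b;
          }
          return sum;
        }
      }

      public static void main(String[] args) throws Exception {
        ScratchDecoder decoder = new ScratchDecoder();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> a = pool.submit(() -> decoder.decode(new byte[]{1, 1, 1, 1}));
        Future<Integer> b = pool.submit(() -> decoder.decode(new byte[]{9, 9, 9, 9}));
        // Expected 4 and 36; unsynchronized, both calls often print 36 because
        // one sums the bytes the other copied in.
        System.out.println(a.get() + " " + b.get());
        pool.shutdown();
      }
    }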



[hadoop] 01/16: HADOOP-18198. Preparing for 3.3.3 release

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a1c067352651871f2552f7b744daf7117a314c81
Author: Steve Loughran <st...@cloudera.com>
AuthorDate: Tue Apr 12 14:41:48 2022 +0100

    HADOOP-18198. Preparing for 3.3.3 release
    
    Change-Id: Idebf79191dc91dad52073f2c63ee9ab3a99464d9
---
 hadoop-assemblies/pom.xml                                             | 4 ++--
 hadoop-build-tools/pom.xml                                            | 2 +-
 hadoop-client-modules/hadoop-client-api/pom.xml                       | 4 ++--
 hadoop-client-modules/hadoop-client-check-invariants/pom.xml          | 4 ++--
 hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml     | 4 ++--
 hadoop-client-modules/hadoop-client-integration-tests/pom.xml         | 4 ++--
 hadoop-client-modules/hadoop-client-minicluster/pom.xml               | 4 ++--
 hadoop-client-modules/hadoop-client-runtime/pom.xml                   | 4 ++--
 hadoop-client-modules/hadoop-client/pom.xml                           | 4 ++--
 hadoop-client-modules/pom.xml                                         | 2 +-
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml             | 4 ++--
 hadoop-cloud-storage-project/hadoop-cos/pom.xml                       | 2 +-
 hadoop-cloud-storage-project/pom.xml                                  | 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml                      | 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml                    | 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml                             | 4 ++--
 hadoop-common-project/hadoop-common/pom.xml                           | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml                              | 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml                          | 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml                              | 4 ++--
 hadoop-common-project/hadoop-registry/pom.xml                         | 4 ++--
 hadoop-common-project/pom.xml                                         | 4 ++--
 hadoop-dist/pom.xml                                                   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml                        | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml                        | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml                 | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml                           | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml                           | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml                               | 4 ++--
 hadoop-hdfs-project/pom.xml                                           | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml       | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml    | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml      | 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml                        | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml        | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml | 4 ++--
 .../hadoop-mapreduce-client-nativetask/pom.xml                        | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml              | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml            | 4 ++--
 hadoop-mapreduce-project/pom.xml                                      | 4 ++--
 hadoop-maven-plugins/pom.xml                                          | 2 +-
 hadoop-minicluster/pom.xml                                            | 4 ++--
 hadoop-project-dist/pom.xml                                           | 4 ++--
 hadoop-project/pom.xml                                                | 4 ++--
 hadoop-tools/hadoop-aliyun/pom.xml                                    | 2 +-
 hadoop-tools/hadoop-archive-logs/pom.xml                              | 4 ++--
 hadoop-tools/hadoop-archives/pom.xml                                  | 4 ++--
 hadoop-tools/hadoop-aws/pom.xml                                       | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml                            | 2 +-
 hadoop-tools/hadoop-azure/pom.xml                                     | 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml                                  | 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml                                    | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen/pom.xml   | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-dist/pom.xml       | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/pom.xml      | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/pom.xml   | 4 ++--
 hadoop-tools/hadoop-dynamometer/pom.xml                               | 4 ++--
 hadoop-tools/hadoop-extras/pom.xml                                    | 4 ++--
 hadoop-tools/hadoop-fs2img/pom.xml                                    | 4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml                                   | 4 ++--
 hadoop-tools/hadoop-kafka/pom.xml                                     | 4 ++--
 hadoop-tools/hadoop-openstack/pom.xml                                 | 4 ++--
 hadoop-tools/hadoop-pipes/pom.xml                                     | 4 ++--
 hadoop-tools/hadoop-resourceestimator/pom.xml                         | 2 +-
 hadoop-tools/hadoop-rumen/pom.xml                                     | 4 ++--
 hadoop-tools/hadoop-sls/pom.xml                                       | 4 ++--
 hadoop-tools/hadoop-streaming/pom.xml                                 | 4 ++--
 hadoop-tools/hadoop-tools-dist/pom.xml                                | 4 ++--
 hadoop-tools/pom.xml                                                  | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml               | 4 ++--
 .../hadoop-yarn-applications-catalog-docker/pom.xml                   | 2 +-
 .../hadoop-yarn-applications-catalog-webapp/pom.xml                   | 2 +-
 .../hadoop-yarn-applications/hadoop-yarn-applications-catalog/pom.xml | 2 +-
 .../hadoop-yarn-applications-distributedshell/pom.xml                 | 4 ++--
 .../hadoop-yarn-applications-mawo-core/pom.xml                        | 2 +-
 .../hadoop-yarn-applications/hadoop-yarn-applications-mawo/pom.xml    | 2 +-
 .../hadoop-yarn-applications-unmanaged-am-launcher/pom.xml            | 4 ++--
 .../hadoop-yarn-services/hadoop-yarn-services-api/pom.xml             | 2 +-
 .../hadoop-yarn-services/hadoop-yarn-services-core/pom.xml            | 2 +-
 .../hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/pom.xml | 2 +-
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/pom.xml      | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml            | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml            | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml               | 2 +-
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml          | 4 ++--
 .../hadoop-yarn-server-applicationhistoryservice/pom.xml              | 4 ++--
 .../hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml  | 4 ++--
 .../hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml         | 4 ++--
 .../hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml     | 4 ++--
 .../hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/pom.xml  | 4 ++--
 .../hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/pom.xml  | 4 ++--
 .../hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml   | 4 ++--
 .../hadoop-yarn-server-timeline-pluginstorage/pom.xml                 | 4 ++--
 .../hadoop-yarn-server-timelineservice-documentstore/pom.xml          | 2 +-
 .../hadoop-yarn-server-timelineservice-hbase-tests/pom.xml            | 4 ++--
 .../hadoop-yarn-server-timelineservice-hbase-client/pom.xml           | 2 +-
 .../hadoop-yarn-server-timelineservice-hbase-common/pom.xml           | 4 ++--
 .../hadoop-yarn-server-timelineservice-hbase-server-1/pom.xml         | 4 ++--
 .../hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml         | 4 ++--
 .../hadoop-yarn-server-timelineservice-hbase-server/pom.xml           | 4 ++--
 .../hadoop-yarn-server-timelineservice-hbase/pom.xml                  | 4 ++--
 .../hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml     | 4 ++--
 .../hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml           | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml            | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/pom.xml              | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml                | 4 ++--
 hadoop-yarn-project/hadoop-yarn/pom.xml                               | 4 ++--
 hadoop-yarn-project/pom.xml                                           | 4 ++--
 pom.xml                                                               | 4 ++--
 111 files changed, 203 insertions(+), 203 deletions(-)

diff --git a/hadoop-assemblies/pom.xml b/hadoop-assemblies/pom.xml
index 7f313ef1f9e..390d9c16d2d 100644
--- a/hadoop-assemblies/pom.xml
+++ b/hadoop-assemblies/pom.xml
@@ -23,11 +23,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-assemblies</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop Assemblies</name>
   <description>Apache Hadoop Assemblies</description>
 
diff --git a/hadoop-build-tools/pom.xml b/hadoop-build-tools/pom.xml
index dffc9692c23..ac36b48198e 100644
--- a/hadoop-build-tools/pom.xml
+++ b/hadoop-build-tools/pom.xml
@@ -18,7 +18,7 @@
   <parent>
     <artifactId>hadoop-main</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-build-tools</artifactId>
diff --git a/hadoop-client-modules/hadoop-client-api/pom.xml b/hadoop-client-modules/hadoop-client-api/pom.xml
index 217a4ad98ec..ac4a5ac23aa 100644
--- a/hadoop-client-modules/hadoop-client-api/pom.xml
+++ b/hadoop-client-modules/hadoop-client-api/pom.xml
@@ -18,11 +18,11 @@
 <parent>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-project</artifactId>
-   <version>3.3.2</version>
+   <version>3.3.3-SNAPSHOT</version>
    <relativePath>../../hadoop-project</relativePath>
 </parent>
   <artifactId>hadoop-client-api</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>jar</packaging>
 
   <description>Apache Hadoop Client</description>
diff --git a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
index f99ac01aed3..9d1deb63642 100644
--- a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
@@ -18,11 +18,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-client-check-invariants</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>pom</packaging>
 
   <description>
diff --git a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
index 3f3564f9fac..b96210dde7d 100644
--- a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
@@ -18,11 +18,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-client-check-test-invariants</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>pom</packaging>
 
   <description>
diff --git a/hadoop-client-modules/hadoop-client-integration-tests/pom.xml b/hadoop-client-modules/hadoop-client-integration-tests/pom.xml
index 21f8c99bfe4..51210210204 100644
--- a/hadoop-client-modules/hadoop-client-integration-tests/pom.xml
+++ b/hadoop-client-modules/hadoop-client-integration-tests/pom.xml
@@ -18,11 +18,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-client-integration-tests</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
 
   <description>Checks that we can use the generated artifacts</description>
   <name>Apache Hadoop Client Packaging Integration Tests</name>
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index 680c2504d8e..d5ca75cbb4f 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -18,11 +18,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-client-minicluster</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>jar</packaging>
 
   <description>Apache Hadoop Minicluster for Clients</description>
diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml b/hadoop-client-modules/hadoop-client-runtime/pom.xml
index e2b7e1f671d..cf9b95286eb 100644
--- a/hadoop-client-modules/hadoop-client-runtime/pom.xml
+++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml
@@ -18,11 +18,11 @@
 <parent>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-project</artifactId>
-   <version>3.3.2</version>
+   <version>3.3.3-SNAPSHOT</version>
    <relativePath>../../hadoop-project</relativePath>
 </parent>
   <artifactId>hadoop-client-runtime</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>jar</packaging>
 
   <description>Apache Hadoop Client</description>
diff --git a/hadoop-client-modules/hadoop-client/pom.xml b/hadoop-client-modules/hadoop-client/pom.xml
index 7674812016d..9670a8a39a6 100644
--- a/hadoop-client-modules/hadoop-client/pom.xml
+++ b/hadoop-client-modules/hadoop-client/pom.xml
@@ -18,11 +18,11 @@
 <parent>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-project-dist</artifactId>
-   <version>3.3.2</version>
+   <version>3.3.3-SNAPSHOT</version>
    <relativePath>../../hadoop-project-dist</relativePath>
 </parent>
   <artifactId>hadoop-client</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
 
   <description>Apache Hadoop Client aggregation pom with dependencies exposed</description>
   <name>Apache Hadoop Client Aggregator</name>
diff --git a/hadoop-client-modules/pom.xml b/hadoop-client-modules/pom.xml
index e53c7bfc286..7a2340bece8 100644
--- a/hadoop-client-modules/pom.xml
+++ b/hadoop-client-modules/pom.xml
@@ -18,7 +18,7 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-client-modules</artifactId>
diff --git a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
index 8b2f667b4ee..270798e5ba5 100644
--- a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
+++ b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
@@ -18,11 +18,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-cloud-storage</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>jar</packaging>
 
   <description>Apache Hadoop Cloud Storage</description>
diff --git a/hadoop-cloud-storage-project/hadoop-cos/pom.xml b/hadoop-cloud-storage-project/hadoop-cos/pom.xml
index 3b19dab6f93..bb904be34e5 100644
--- a/hadoop-cloud-storage-project/hadoop-cos/pom.xml
+++ b/hadoop-cloud-storage-project/hadoop-cos/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-cos</artifactId>
diff --git a/hadoop-cloud-storage-project/pom.xml b/hadoop-cloud-storage-project/pom.xml
index ef303624bb6..ad97719c934 100644
--- a/hadoop-cloud-storage-project/pom.xml
+++ b/hadoop-cloud-storage-project/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-cloud-storage-project</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Cloud Storage Project</description>
   <name>Apache Hadoop Cloud Storage Project</name>
   <packaging>pom</packaging>
diff --git a/hadoop-common-project/hadoop-annotations/pom.xml b/hadoop-common-project/hadoop-annotations/pom.xml
index d059350b365..5ce04846f2a 100644
--- a/hadoop-common-project/hadoop-annotations/pom.xml
+++ b/hadoop-common-project/hadoop-annotations/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-annotations</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Annotations</description>
   <name>Apache Hadoop Annotations</name>
   <packaging>jar</packaging>
diff --git a/hadoop-common-project/hadoop-auth-examples/pom.xml b/hadoop-common-project/hadoop-auth-examples/pom.xml
index f4af8183c0f..27580e50c8a 100644
--- a/hadoop-common-project/hadoop-auth-examples/pom.xml
+++ b/hadoop-common-project/hadoop-auth-examples/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-auth-examples</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>war</packaging>
 
   <name>Apache Hadoop Auth Examples</name>
diff --git a/hadoop-common-project/hadoop-auth/pom.xml b/hadoop-common-project/hadoop-auth/pom.xml
index 7b814b0d6a1..923be91e903 100644
--- a/hadoop-common-project/hadoop-auth/pom.xml
+++ b/hadoop-common-project/hadoop-auth/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-auth</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>jar</packaging>
 
   <name>Apache Hadoop Auth</name>
diff --git a/hadoop-common-project/hadoop-common/pom.xml b/hadoop-common-project/hadoop-common/pom.xml
index b3d52b9fa7c..086a77f26d9 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project-dist</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project-dist</relativePath>
   </parent>
   <artifactId>hadoop-common</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Common</description>
   <name>Apache Hadoop Common</name>
   <packaging>jar</packaging>
diff --git a/hadoop-common-project/hadoop-kms/pom.xml b/hadoop-common-project/hadoop-kms/pom.xml
index b73b8b688c0..71be87347a9 100644
--- a/hadoop-common-project/hadoop-kms/pom.xml
+++ b/hadoop-common-project/hadoop-kms/pom.xml
@@ -22,11 +22,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-kms</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>jar</packaging>
 
   <name>Apache Hadoop KMS</name>
diff --git a/hadoop-common-project/hadoop-minikdc/pom.xml b/hadoop-common-project/hadoop-minikdc/pom.xml
index 98296a6bbbe..746d72c429c 100644
--- a/hadoop-common-project/hadoop-minikdc/pom.xml
+++ b/hadoop-common-project/hadoop-minikdc/pom.xml
@@ -18,12 +18,12 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-minikdc</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop MiniKDC</description>
   <name>Apache Hadoop MiniKDC</name>
   <packaging>jar</packaging>
diff --git a/hadoop-common-project/hadoop-nfs/pom.xml b/hadoop-common-project/hadoop-nfs/pom.xml
index c74e7cd638a..baddec82727 100644
--- a/hadoop-common-project/hadoop-nfs/pom.xml
+++ b/hadoop-common-project/hadoop-nfs/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-nfs</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>jar</packaging>
 
   <name>Apache Hadoop NFS</name>
diff --git a/hadoop-common-project/hadoop-registry/pom.xml b/hadoop-common-project/hadoop-registry/pom.xml
index 85687450479..5cfc2fe6735 100644
--- a/hadoop-common-project/hadoop-registry/pom.xml
+++ b/hadoop-common-project/hadoop-registry/pom.xml
@@ -19,12 +19,12 @@
   <parent>
     <artifactId>hadoop-project</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-registry</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop Registry</name>
 
   <dependencies>
diff --git a/hadoop-common-project/pom.xml b/hadoop-common-project/pom.xml
index ce331dd43ff..31f8021c654 100644
--- a/hadoop-common-project/pom.xml
+++ b/hadoop-common-project/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-common-project</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Common Project</description>
   <name>Apache Hadoop Common Project</name>
   <packaging>pom</packaging>
diff --git a/hadoop-dist/pom.xml b/hadoop-dist/pom.xml
index ba5a9621981..9a6d8ab4af0 100644
--- a/hadoop-dist/pom.xml
+++ b/hadoop-dist/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-dist</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Distribution</description>
   <name>Apache Hadoop Distribution</name>
   <packaging>jar</packaging>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
index 7172f276ce1..f85db539eba 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
@@ -20,11 +20,11 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project-dist</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project-dist</relativePath>
   </parent>
   <artifactId>hadoop-hdfs-client</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop HDFS Client</description>
   <name>Apache Hadoop HDFS Client</name>
   <packaging>jar</packaging>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index 192674e3843..e571d744e54 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -22,11 +22,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-hdfs-httpfs</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>jar</packaging>
 
   <name>Apache Hadoop HttpFS</name>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml
index 87be3c939f6..a53a99d51c0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml
@@ -20,11 +20,11 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project-dist</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project-dist</relativePath>
   </parent>
   <artifactId>hadoop-hdfs-native-client</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop HDFS Native Client</description>
   <name>Apache Hadoop HDFS Native Client</name>
   <packaging>jar</packaging>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
index 77e8cab19cd..0d8ef6c4c0d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
@@ -20,11 +20,11 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-hdfs-nfs</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop HDFS-NFS</description>
   <name>Apache Hadoop HDFS-NFS</name>
   <packaging>jar</packaging>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
index 7d0a77d670a..b37a1de11e1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
@@ -20,11 +20,11 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project-dist</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project-dist</relativePath>
   </parent>
   <artifactId>hadoop-hdfs-rbf</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop HDFS-RBF</description>
   <name>Apache Hadoop HDFS-RBF</name>
   <packaging>jar</packaging>
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index e38875c69f9..df5d2cce9a6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -20,11 +20,11 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project-dist</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project-dist</relativePath>
   </parent>
   <artifactId>hadoop-hdfs</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop HDFS</description>
   <name>Apache Hadoop HDFS</name>
   <packaging>jar</packaging>
diff --git a/hadoop-hdfs-project/pom.xml b/hadoop-hdfs-project/pom.xml
index 5779dc0c5ea..491df2986c1 100644
--- a/hadoop-hdfs-project/pom.xml
+++ b/hadoop-hdfs-project/pom.xml
@@ -20,11 +20,11 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-hdfs-project</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop HDFS Project</description>
   <name>Apache Hadoop HDFS Project</name>
   <packaging>pom</packaging>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml
index 25046f93010..05434a62861 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-mapreduce-client</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-mapreduce-client-app</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop MapReduce App</name>
 
   <properties>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
index 846f5bbd2ef..e88878a896e 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-mapreduce-client</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-mapreduce-client-common</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop MapReduce Common</name>
 
   <properties>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
index 0ecb1fe5ee6..95c8c665fb3 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-mapreduce-client</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-mapreduce-client-core</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop MapReduce Core</name>
 
   <properties>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/pom.xml
index c4d8fcf0f21..672f7437d13 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-mapreduce-client</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-mapreduce-client-hs-plugins</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop MapReduce HistoryServer Plugins</name>
 
   <properties>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
index 09e3af52873..d7d17b939c1 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-mapreduce-client</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-mapreduce-client-hs</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop MapReduce HistoryServer</name>
 
   <properties>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml
index c769a8f4d3d..b1b2cadf47d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-mapreduce-client</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop MapReduce JobClient</name>
 
   <properties>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
index efa999c8bb6..4f34773eee0 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-mapreduce-client</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-mapreduce-client-nativetask</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop MapReduce NativeTask</name>
 
   <properties>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml
index 272b68d7b93..137a341398a 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-mapreduce-client</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-mapreduce-client-shuffle</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop MapReduce Shuffle</name>
 
   <properties>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml
index 093f9090df3..34dad48cb8d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml
@@ -18,11 +18,11 @@
     <parent>
         <artifactId>hadoop-mapreduce-client</artifactId>
         <groupId>org.apache.hadoop</groupId>
-        <version>3.3.2</version>
+        <version>3.3.3-SNAPSHOT</version>
     </parent>
     <modelVersion>4.0.0</modelVersion>
     <artifactId>hadoop-mapreduce-client-uploader</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <name>Apache Hadoop MapReduce Uploader</name>
 
     <dependencies>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
index 5d85ef70160..df6f081a8da 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-mapreduce-client</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop MapReduce Client</name>
   <packaging>pom</packaging>
 
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml
index 4cf7371e8a8..686de8d45a7 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-mapreduce-examples</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop MapReduce Examples</description>
   <name>Apache Hadoop MapReduce Examples</name>
   <packaging>jar</packaging>
diff --git a/hadoop-mapreduce-project/pom.xml b/hadoop-mapreduce-project/pom.xml
index 1163ee307a0..cba6031809b 100644
--- a/hadoop-mapreduce-project/pom.xml
+++ b/hadoop-mapreduce-project/pom.xml
@@ -18,11 +18,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-mapreduce</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>pom</packaging>
   <name>Apache Hadoop MapReduce</name>
   <url>https://hadoop.apache.org/</url>
diff --git a/hadoop-maven-plugins/pom.xml b/hadoop-maven-plugins/pom.xml
index 000b339c6de..3034133ac4b 100644
--- a/hadoop-maven-plugins/pom.xml
+++ b/hadoop-maven-plugins/pom.xml
@@ -19,7 +19,7 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-maven-plugins</artifactId>
diff --git a/hadoop-minicluster/pom.xml b/hadoop-minicluster/pom.xml
index 59bdf8181b1..e8da9c870e0 100644
--- a/hadoop-minicluster/pom.xml
+++ b/hadoop-minicluster/pom.xml
@@ -18,11 +18,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-minicluster</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>jar</packaging>
 
   <description>Apache Hadoop Mini-Cluster</description>
diff --git a/hadoop-project-dist/pom.xml b/hadoop-project-dist/pom.xml
index af04baa5924..7707c8921f4 100644
--- a/hadoop-project-dist/pom.xml
+++ b/hadoop-project-dist/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-project-dist</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Project Dist POM</description>
   <name>Apache Hadoop Project Dist POM</name>
   <packaging>pom</packaging>
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index aa54020df2f..66dd3fe6ac6 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -20,10 +20,10 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-main</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <artifactId>hadoop-project</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Project POM</description>
   <name>Apache Hadoop Project POM</name>
   <packaging>pom</packaging>
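
(For context: a release-line bump like the one in this patch, with every module moving
from 3.3.2 to 3.3.3-SNAPSHOT in lock-step, is normally generated rather than hand-edited.
A minimal sketch using the standard Maven Versions Plugin, assuming it is run from the
repository root; the plugin goal and flags are real, but the exact invocation used to
produce this particular commit is an assumption:

  # rewrite every reactor module (parent refs included) to the new development version
  mvn versions:set -DnewVersion=3.3.3-SNAPSHOT -DgenerateBackupPoms=false
  # sanity check: no pom should still reference the old release version
  git grep -l '<version>3.3.2</version>' -- '*pom.xml'

The resulting diff matches the pattern shown throughout this patch: each <parent>
<version> and each module <version> element rewritten in place, nothing else touched.)
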
diff --git a/hadoop-tools/hadoop-aliyun/pom.xml b/hadoop-tools/hadoop-aliyun/pom.xml
index ead87c36fff..0575245934a 100644
--- a/hadoop-tools/hadoop-aliyun/pom.xml
+++ b/hadoop-tools/hadoop-aliyun/pom.xml
@@ -18,7 +18,7 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-aliyun</artifactId>
diff --git a/hadoop-tools/hadoop-archive-logs/pom.xml b/hadoop-tools/hadoop-archive-logs/pom.xml
index afdd768fcc7..ecc5d0efb8a 100644
--- a/hadoop-tools/hadoop-archive-logs/pom.xml
+++ b/hadoop-tools/hadoop-archive-logs/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-archive-logs</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Archive Logs</description>
   <name>Apache Hadoop Archive Logs</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-archives/pom.xml b/hadoop-tools/hadoop-archives/pom.xml
index c4b1e46dca6..0bef641cb5c 100644
--- a/hadoop-tools/hadoop-archives/pom.xml
+++ b/hadoop-tools/hadoop-archives/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-archives</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Archives</description>
   <name>Apache Hadoop Archives</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-aws/pom.xml b/hadoop-tools/hadoop-aws/pom.xml
index 3d293bdce55..8ef1152579c 100644
--- a/hadoop-tools/hadoop-aws/pom.xml
+++ b/hadoop-tools/hadoop-aws/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-aws</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop Amazon Web Services support</name>
   <description>
     This module contains code to support integration with Amazon Web Services.
diff --git a/hadoop-tools/hadoop-azure-datalake/pom.xml b/hadoop-tools/hadoop-azure-datalake/pom.xml
index 98e13d43c40..a9c6ba29ae9 100644
--- a/hadoop-tools/hadoop-azure-datalake/pom.xml
+++ b/hadoop-tools/hadoop-azure-datalake/pom.xml
@@ -19,7 +19,7 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-azure-datalake</artifactId>
diff --git a/hadoop-tools/hadoop-azure/pom.xml b/hadoop-tools/hadoop-azure/pom.xml
index 546d28d5e8e..c8c5cc37742 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -19,7 +19,7 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-azure</artifactId>
diff --git a/hadoop-tools/hadoop-datajoin/pom.xml b/hadoop-tools/hadoop-datajoin/pom.xml
index 033079345a0..e6586152880 100644
--- a/hadoop-tools/hadoop-datajoin/pom.xml
+++ b/hadoop-tools/hadoop-datajoin/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-datajoin</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Data Join</description>
   <name>Apache Hadoop Data Join</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-distcp/pom.xml b/hadoop-tools/hadoop-distcp/pom.xml
index c0d6f246007..5e306ea938d 100644
--- a/hadoop-tools/hadoop-distcp/pom.xml
+++ b/hadoop-tools/hadoop-distcp/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-distcp</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Distributed Copy</description>
   <name>Apache Hadoop Distributed Copy</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen/pom.xml b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen/pom.xml
index feaca5bb8fc..685dedbb7d9 100644
--- a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen/pom.xml
+++ b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-dynamometer-blockgen</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Dynamometer Block Listing Generator</description>
   <name>Apache Hadoop Dynamometer Block Listing Generator</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-dist/pom.xml b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-dist/pom.xml
index 6f73fd2c3e5..bf9641af0cf 100644
--- a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-dist/pom.xml
+++ b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-dist/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project-dist</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../../hadoop-project-dist</relativePath>
   </parent>
   <artifactId>hadoop-dynamometer-dist</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Dynamometer Dist</description>
   <name>Apache Hadoop Dynamometer Dist</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/pom.xml b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/pom.xml
index b390f628755..399e2662c10 100644
--- a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/pom.xml
+++ b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-dynamometer-infra</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Dynamometer Cluster Simulator</description>
   <name>Apache Hadoop Dynamometer Cluster Simulator</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/pom.xml b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/pom.xml
index 2b8915f4e13..23f7589cd80 100644
--- a/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/pom.xml
+++ b/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-dynamometer-workload</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Dynamometer Workload Simulator</description>
   <name>Apache Hadoop Dynamometer Workload Simulator</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-dynamometer/pom.xml b/hadoop-tools/hadoop-dynamometer/pom.xml
index 0478dfd95e3..f7cd668a488 100644
--- a/hadoop-tools/hadoop-dynamometer/pom.xml
+++ b/hadoop-tools/hadoop-dynamometer/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-dynamometer</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Dynamometer</description>
   <name>Apache Hadoop Dynamometer</name>
   <packaging>pom</packaging>
diff --git a/hadoop-tools/hadoop-extras/pom.xml b/hadoop-tools/hadoop-extras/pom.xml
index d04b2662e32..461a14a4721 100644
--- a/hadoop-tools/hadoop-extras/pom.xml
+++ b/hadoop-tools/hadoop-extras/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-extras</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Extras</description>
   <name>Apache Hadoop Extras</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-fs2img/pom.xml b/hadoop-tools/hadoop-fs2img/pom.xml
index 272fbcdd661..bc4974e6b48 100644
--- a/hadoop-tools/hadoop-fs2img/pom.xml
+++ b/hadoop-tools/hadoop-fs2img/pom.xml
@@ -17,12 +17,12 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <groupId>org.apache.hadoop</groupId>
   <artifactId>hadoop-fs2img</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Image Generation Tool</description>
   <name>Apache Hadoop Image Generation Tool</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-gridmix/pom.xml b/hadoop-tools/hadoop-gridmix/pom.xml
index 7e221f376f4..69e0c79c7d4 100644
--- a/hadoop-tools/hadoop-gridmix/pom.xml
+++ b/hadoop-tools/hadoop-gridmix/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-gridmix</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Gridmix</description>
   <name>Apache Hadoop Gridmix</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-kafka/pom.xml b/hadoop-tools/hadoop-kafka/pom.xml
index 3cfef2c1ccc..395ead4bbed 100644
--- a/hadoop-tools/hadoop-kafka/pom.xml
+++ b/hadoop-tools/hadoop-kafka/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-kafka</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop Kafka Library support</name>
   <description>
     This module contains code to support integration with Kafka.
diff --git a/hadoop-tools/hadoop-openstack/pom.xml b/hadoop-tools/hadoop-openstack/pom.xml
index f44f491fd91..1f45250f86d 100644
--- a/hadoop-tools/hadoop-openstack/pom.xml
+++ b/hadoop-tools/hadoop-openstack/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-openstack</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop OpenStack support</name>
   <description>
     This module contains code to support integration with OpenStack.
diff --git a/hadoop-tools/hadoop-pipes/pom.xml b/hadoop-tools/hadoop-pipes/pom.xml
index 3b81254b358..b3eb23d1762 100644
--- a/hadoop-tools/hadoop-pipes/pom.xml
+++ b/hadoop-tools/hadoop-pipes/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-pipes</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Pipes</description>
   <name>Apache Hadoop Pipes</name>
   <packaging>pom</packaging>
diff --git a/hadoop-tools/hadoop-resourceestimator/pom.xml b/hadoop-tools/hadoop-resourceestimator/pom.xml
index 4106ca75531..bf8a4ecca07 100644
--- a/hadoop-tools/hadoop-resourceestimator/pom.xml
+++ b/hadoop-tools/hadoop-resourceestimator/pom.xml
@@ -25,7 +25,7 @@
     <parent>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-project</artifactId>
-        <version>3.3.2</version>
+        <version>3.3.3-SNAPSHOT</version>
         <relativePath>../../hadoop-project</relativePath>
     </parent>
     <artifactId>hadoop-resourceestimator</artifactId>
diff --git a/hadoop-tools/hadoop-rumen/pom.xml b/hadoop-tools/hadoop-rumen/pom.xml
index e2c3da82d37..4a4a6d12061 100644
--- a/hadoop-tools/hadoop-rumen/pom.xml
+++ b/hadoop-tools/hadoop-rumen/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-rumen</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Rumen</description>
   <name>Apache Hadoop Rumen</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-sls/pom.xml b/hadoop-tools/hadoop-sls/pom.xml
index 168a963f7f3..e972b1f0598 100644
--- a/hadoop-tools/hadoop-sls/pom.xml
+++ b/hadoop-tools/hadoop-sls/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-sls</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Scheduler Load Simulator</description>
   <name>Apache Hadoop Scheduler Load Simulator</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-streaming/pom.xml b/hadoop-tools/hadoop-streaming/pom.xml
index a0bb1adceb9..0546e49fbdf 100644
--- a/hadoop-tools/hadoop-streaming/pom.xml
+++ b/hadoop-tools/hadoop-streaming/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-streaming</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop MapReduce Streaming</description>
   <name>Apache Hadoop MapReduce Streaming</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/hadoop-tools-dist/pom.xml b/hadoop-tools/hadoop-tools-dist/pom.xml
index a9f8afe5410..f1a0631d7c7 100644
--- a/hadoop-tools/hadoop-tools-dist/pom.xml
+++ b/hadoop-tools/hadoop-tools-dist/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project-dist</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project-dist</relativePath>
   </parent>
   <artifactId>hadoop-tools-dist</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Tools Dist</description>
   <name>Apache Hadoop Tools Dist</name>
   <packaging>jar</packaging>
diff --git a/hadoop-tools/pom.xml b/hadoop-tools/pom.xml
index 2bacc957b95..d7306106952 100644
--- a/hadoop-tools/pom.xml
+++ b/hadoop-tools/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-tools</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Tools</description>
   <name>Apache Hadoop Tools</name>
   <packaging>pom</packaging>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
index 54341c949a9..b2af522797b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-api</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN API</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker/pom.xml
index ab7f23a7788..626dedd86cc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hadoop-yarn-applications-catalog</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
 
   <name>Apache Hadoop YARN Application Catalog Docker Image</name>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml
index 2227d0da559..7c22f09650c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hadoop-yarn-applications-catalog</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
 
   <name>Apache Hadoop YARN Application Catalog Webapp</name>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/pom.xml
index face755f08f..c875a6485f8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/pom.xml
@@ -19,7 +19,7 @@
     <parent>
         <artifactId>hadoop-yarn-applications</artifactId>
         <groupId>org.apache.hadoop</groupId>
-        <version>3.3.2</version>
+        <version>3.3.3-SNAPSHOT</version>
     </parent>
 
     <groupId>org.apache.hadoop</groupId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
index 10d466c27e9..387d4a97417 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn-applications</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-applications-distributedshell</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN DistributedShell</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/pom.xml
index 81a13a5296a..570727cfcaf 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/pom.xml
@@ -15,7 +15,7 @@
     <parent>
         <artifactId>hadoop-yarn-applications-mawo</artifactId>
         <groupId>org.apache.hadoop.applications.mawo</groupId>
-        <version>3.3.2</version>
+        <version>3.3.3-SNAPSHOT</version>
     </parent>
   <modelVersion>4.0.0</modelVersion>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/pom.xml
index f32da7fcbce..7806265b4d5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/pom.xml
@@ -15,7 +15,7 @@
     <parent>
         <artifactId>hadoop-yarn-applications</artifactId>
         <groupId>org.apache.hadoop</groupId>
-        <version>3.3.2</version>
+        <version>3.3.3-SNAPSHOT</version>
     </parent>
     <modelVersion>4.0.0</modelVersion>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml
index 82be7a63035..f71ddb745c0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn-applications</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-applications-unmanaged-am-launcher</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Unmanaged Am Launcher</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
index 628cff166b5..1b391feb1d8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
@@ -19,7 +19,7 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-yarn-services</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <artifactId>hadoop-yarn-services-api</artifactId>
   <name>Apache Hadoop YARN Services API</name>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml
index 922fd295ee1..02b6b7124dc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml
@@ -19,7 +19,7 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-yarn-services</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <artifactId>hadoop-yarn-services-core</artifactId>
   <packaging>jar</packaging>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/pom.xml
index fa7eac44cc0..bddf157d50b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/pom.xml
@@ -19,7 +19,7 @@
     <parent>
         <artifactId>hadoop-yarn-applications</artifactId>
         <groupId>org.apache.hadoop</groupId>
-        <version>3.3.2</version>
+        <version>3.3.3-SNAPSHOT</version>
     </parent>
     <modelVersion>4.0.0</modelVersion>
     <artifactId>hadoop-yarn-services</artifactId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/pom.xml
index d7cd489e744..da588956b47 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-applications</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Applications</name>
   <packaging>pom</packaging>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
index 64f252a1ebe..368a0251aed 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
@@ -17,10 +17,10 @@
   <parent>
     <artifactId>hadoop-yarn</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <artifactId>hadoop-yarn-client</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Client</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
index 419348e868a..63ed238ed27 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-common</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Common</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
index 49ff3917581..e0af8fbf114 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
@@ -18,7 +18,7 @@
     <parent>
         <artifactId>hadoop-yarn</artifactId>
         <groupId>org.apache.hadoop</groupId>
-        <version>3.3.2</version>
+        <version>3.3.3-SNAPSHOT</version>
     </parent>
     <modelVersion>4.0.0</modelVersion>
     <artifactId>hadoop-yarn-csi</artifactId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
index fb5d28f9346..10d67b1312a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-registry</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Registry</name>
 
   <dependencies>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
index 3d72dcd6ab9..b0214691647 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
@@ -22,11 +22,11 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-applicationhistoryservice</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN ApplicationHistoryService</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
index 553e2495062..46d3d084074 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-common</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Server Common</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
index e421ed49ee1..0acb46494cc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-nodemanager</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN NodeManager</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
index 8ab1e711038..68574b45703 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN ResourceManager</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/pom.xml
index c813a545375..c4578a6ab9b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/pom.xml
@@ -19,12 +19,12 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.hadoop</groupId>
   <artifactId>hadoop-yarn-server-router</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Router</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/pom.xml
index 46a00d623fd..8d0db155313 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/pom.xml
@@ -17,10 +17,10 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <artifactId>hadoop-yarn-server-sharedcachemanager</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN SharedCacheManager</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
index 908182f8fcd..f95d7ff5eea 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
@@ -19,10 +19,10 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <artifactId>hadoop-yarn-server-tests</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Server Tests</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/pom.xml
index 7668d78f55f..466df5116b5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/pom.xml
@@ -22,11 +22,11 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-timeline-pluginstorage</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Timeline Plugin Storage</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/pom.xml
index b8cb4f26b1b..06e6a2d3dd0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/pom.xml
@@ -19,7 +19,7 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-timelineservice-documentstore</artifactId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml
index 1a9e008040a..ea415601b9e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml
@@ -22,11 +22,11 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-timelineservice-hbase-tests</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN TimelineService HBase tests</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/pom.xml
index cfbc46d0bdd..e34c4091cbe 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <artifactId>hadoop-yarn-server-timelineservice-hbase</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-timelineservice-hbase-client</artifactId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/pom.xml
index 40114d08b96..22a3cf17253 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/pom.xml
@@ -22,13 +22,13 @@
   <parent>
     <artifactId>hadoop-yarn-server-timelineservice-hbase</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
   <artifactId>hadoop-yarn-server-timelineservice-hbase-common</artifactId>
   <name>Apache Hadoop YARN TimelineService HBase Common</name>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
 
   <properties>
     <!-- Needed for generating FindBugs warnings using parent pom -->
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-1/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-1/pom.xml
index 5f10bfc200d..d4f8c185876 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-1/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-1/pom.xml
@@ -22,13 +22,13 @@
   <parent>
     <artifactId>hadoop-yarn-server-timelineservice-hbase-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
 
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-timelineservice-hbase-server-1</artifactId>
   <name>Apache Hadoop YARN TimelineService HBase Server 1.2</name>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
 
   <properties>
     <!-- Needed for generating FindBugs warnings using parent pom -->
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml
index d3401bd021e..991ff40998d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml
@@ -22,13 +22,13 @@
   <parent>
     <artifactId>hadoop-yarn-server-timelineservice-hbase-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
   <artifactId>hadoop-yarn-server-timelineservice-hbase-server-2</artifactId>
   <name>Apache Hadoop YARN TimelineService HBase Server 2.0</name>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
 
   <properties>
     <!-- Needed for generating FindBugs warnings using parent pom -->
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/pom.xml
index 03e746d9214..40d5972d2aa 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/pom.xml
@@ -22,12 +22,12 @@
   <parent>
     <artifactId>hadoop-yarn-server-timelineservice-hbase</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
   <artifactId>hadoop-yarn-server-timelineservice-hbase-server</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN TimelineService HBase Servers</name>
   <packaging>pom</packaging>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/pom.xml
index aee60bb80bc..749f36048d5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/pom.xml
@@ -22,12 +22,12 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
 
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-timelineservice-hbase</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN TimelineService HBase Backend</name>
   <packaging>pom</packaging>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
index 163d94115be..2486c6d74d6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
@@ -22,11 +22,11 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-timelineservice</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Timeline Service</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
index 77f6d79074c..060a0d0ccbc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn-server</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server-web-proxy</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Web Proxy</name>
 
   <properties>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
index 62609cf99d9..80ef56df581 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-server</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Server</name>
   <packaging>pom</packaging>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/pom.xml
index 1d224b79109..608c525840f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/pom.xml
@@ -19,11 +19,11 @@
   <parent>
     <artifactId>hadoop-yarn</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-site</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN Site</name>
   <packaging>pom</packaging>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index 175d2946dd1..589bdf9dc05 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -20,11 +20,11 @@
   <parent>
     <artifactId>hadoop-yarn</artifactId>
     <groupId>org.apache.hadoop</groupId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>hadoop-yarn-ui</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <name>Apache Hadoop YARN UI</name>
   <packaging>${packagingType}</packaging>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/pom.xml b/hadoop-yarn-project/hadoop-yarn/pom.xml
index 8d0d379b411..9cdcbd90ea7 100644
--- a/hadoop-yarn-project/hadoop-yarn/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/pom.xml
@@ -17,11 +17,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-yarn</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>pom</packaging>
   <name>Apache Hadoop YARN</name>
 
diff --git a/hadoop-yarn-project/pom.xml b/hadoop-yarn-project/pom.xml
index d85744d7ca5..468a9fca511 100644
--- a/hadoop-yarn-project/pom.xml
+++ b/hadoop-yarn-project/pom.xml
@@ -18,11 +18,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.3.2</version>
+    <version>3.3.3-SNAPSHOT</version>
     <relativePath>../hadoop-project</relativePath>
   </parent>
   <artifactId>hadoop-yarn-project</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <packaging>pom</packaging>
   <name>Apache Hadoop YARN Project</name>
   <url>https://hadoop.apache.org/yarn/</url>
diff --git a/pom.xml b/pom.xml
index 7dabc07810f..660649ed6d1 100644
--- a/pom.xml
+++ b/pom.xml
@@ -18,7 +18,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/x
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.hadoop</groupId>
   <artifactId>hadoop-main</artifactId>
-  <version>3.3.2</version>
+  <version>3.3.3-SNAPSHOT</version>
   <description>Apache Hadoop Main</description>
   <name>Apache Hadoop Main</name>
   <packaging>pom</packaging>
@@ -80,7 +80,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/x
 
   <properties>
     <!-- required as child projects with different version can't use ${project.version} -->
-    <hadoop.version>3.3.2</hadoop.version>
+    <hadoop.version>3.3.3-SNAPSHOT</hadoop.version>
 
     <distMgmtSnapshotsId>apache.snapshots.https</distMgmtSnapshotsId>
     <distMgmtSnapshotsName>Apache Development Snapshot Repository</distMgmtSnapshotsName>
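
The release-preparation commit above rewrites the <version> and hadoop.version fields in every pom in the reactor. For reference, a bulk bump like this can be scripted with the Maven Versions Plugin rather than edited by hand; a sketch, assuming the plugin is available to the build:

    mvn versions:set -DnewVersion=3.3.3-SNAPSHOT -DgenerateBackupPoms=false

Modules that pin their own <version> because they cannot inherit ${project.version}, as the comment in the root pom notes, should be rewritten in the same pass.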




[hadoop] 07/16: YARN-11075. Explicitly declare serialVersionUID in LogMutation class. Contributed by Benjamin Teke


stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a981df3aecf84d615a8b25740808fca9aa3bb870
Author: Szilard Nemeth <sn...@apache.org>
AuthorDate: Tue Mar 1 18:05:04 2022 +0100

    YARN-11075. Explicitly declare serialVersionUID in LogMutation class. Contributed by Benjamin Teke
---
 .../resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java  | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java
index 4480bc34dcc..425d63f6a66 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java
@@ -53,6 +53,7 @@ public abstract class YarnConfigurationStore {
    * audit logging and recovery.
    */
   public static class LogMutation implements Serializable {
+    private static final long serialVersionUID = 7754046036718906356L;
     private Map<String, String> updates;
     private String user;
 

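A minimal standalone sketch (illustrative names, not Hadoop code) of what the explicit serialVersionUID buys: without it, the JVM derives the ID from the class structure, so any later change to LogMutation's fields or methods would make previously serialized mutations unreadable (InvalidClassException on deserialization).

    import java.io.Serializable;

    public class PinnedMutation implements Serializable {
      // Pin the stream ID explicitly: deserialization matches on this value
      // rather than on a structure-derived hash that changes as the class
      // evolves.
      private static final long serialVersionUID = 1L;

      private String user;
    }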



[hadoop] 12/16: YARN-10720. YARN WebAppProxyServlet should support connection timeout to prevent proxy server from hanging. Contributed by Qi Zhu.


stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 52aba525c3a3213966826fd551698d9c6b765ae4
Author: Peter Bacsko <pb...@cloudera.com>
AuthorDate: Thu Apr 1 09:21:15 2021 +0200

    YARN-10720. YARN WebAppProxyServlet should support connection timeout to prevent proxy server from hanging. Contributed by Qi Zhu.
    
    (cherry picked from commit a0deda1a777d8967fb8c08ac976543cda895773d)
    
    Change-Id: I935725ba094d2c35fdc91dd42883bf5b0d506d56
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java | 14 ++++
 .../src/main/resources/yarn-default.xml            | 12 ++++
 .../yarn/server/webproxy/WebAppProxyServlet.java   | 28 ++++++--
 .../server/webproxy/TestWebAppProxyServlet.java    | 79 +++++++++++++++++++++-
 4 files changed, 126 insertions(+), 7 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index df482c18598..c1bb6aa68d2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2672,6 +2672,20 @@ public class YarnConfiguration extends Configuration {
 
   public static final String DEFAULT_RM_APPLICATION_HTTPS_POLICY = "NONE";
 
+
+  // Whether the proxy connection timeout is enabled.
+  public static final String RM_PROXY_TIMEOUT_ENABLED =
+      RM_PREFIX + "proxy.timeout.enabled";
+
+  public static final boolean DEFALUT_RM_PROXY_TIMEOUT_ENABLED =
+      true;
+
+  public static final String RM_PROXY_CONNECTION_TIMEOUT =
+      RM_PREFIX + "proxy.connection.timeout";
+
+  public static final int DEFAULT_RM_PROXY_CONNECTION_TIMEOUT =
+      60000;
+
   /**
    * Interval of time the linux container executor should try cleaning up
    * cgroups entry when cleaning up a container. This is required due to what 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index ff3a8179132..4be357b78a6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -2601,6 +2601,18 @@
     <value/>
   </property>
 
+  <property>
+    <description>Whether the web proxy connection timeout is enabled; enabled by default.</description>
+    <name>yarn.resourcemanager.proxy.timeout.enabled</name>
+    <value>true</value>
+  </property>
+
+  <property>
+    <description>The web proxy connection timeout, in milliseconds.</description>
+    <name>yarn.resourcemanager.proxy.connection.timeout</name>
+    <value>60000</value>
+  </property>
+
   <!-- Applications' Configuration -->
 
   <property>
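
Given the defaults above, an operator who wants a tighter limit could override both properties in yarn-site.xml; a sketch with an illustrative 10-second value:

    <property>
      <name>yarn.resourcemanager.proxy.timeout.enabled</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.resourcemanager.proxy.connection.timeout</name>
      <value>10000</value>
    </property>
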
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java
index 0b6bb65d8db..03b7077bc16 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java
@@ -122,6 +122,9 @@ public class WebAppProxyServlet extends HttpServlet {
     }
   }
 
+  protected void setConf(YarnConfiguration conf){
+    this.conf = conf;
+  }
   /**
    * Default constructor
    */
@@ -230,6 +233,14 @@ public class WebAppProxyServlet extends HttpServlet {
 
     String httpsPolicy = conf.get(YarnConfiguration.RM_APPLICATION_HTTPS_POLICY,
         YarnConfiguration.DEFAULT_RM_APPLICATION_HTTPS_POLICY);
+
+    boolean connectionTimeoutEnabled =
+        conf.getBoolean(YarnConfiguration.RM_PROXY_TIMEOUT_ENABLED,
+        YarnConfiguration.DEFALUT_RM_PROXY_TIMEOUT_ENABLED);
+    int connectionTimeout =
+        conf.getInt(YarnConfiguration.RM_PROXY_CONNECTION_TIMEOUT,
+            YarnConfiguration.DEFAULT_RM_PROXY_CONNECTION_TIMEOUT);
+
     if (httpsPolicy.equals("LENIENT") || httpsPolicy.equals("STRICT")) {
       ProxyCA proxyCA = getProxyCA();
       // ProxyCA could be null when the Proxy is run outside the RM
@@ -250,10 +261,18 @@ public class WebAppProxyServlet extends HttpServlet {
     InetAddress localAddress = InetAddress.getByName(proxyHost);
     LOG.debug("local InetAddress for proxy host: {}", localAddress);
     httpClientBuilder.setDefaultRequestConfig(
-        RequestConfig.custom()
-        .setCircularRedirectsAllowed(true)
-        .setLocalAddress(localAddress)
-        .build());
+        connectionTimeoutEnabled ?
+            RequestConfig.custom()
+                .setCircularRedirectsAllowed(true)
+                .setLocalAddress(localAddress)
+                .setConnectionRequestTimeout(connectionTimeout)
+                .setSocketTimeout(connectionTimeout)
+                .setConnectTimeout(connectionTimeout)
+                .build() :
+            RequestConfig.custom()
+                .setCircularRedirectsAllowed(true)
+                .setLocalAddress(localAddress)
+                .build());
 
     HttpRequestBase base = null;
     if (method.equals(HTTP.GET)) {
@@ -621,7 +640,6 @@ public class WebAppProxyServlet extends HttpServlet {
    * again... If this method returns true, there was a redirect, and
    * it was handled by redirecting the current request to an error page.
    *
-   * @param path the part of the request path after the app id
    * @param id the app id
    * @param req the request object
    * @param resp the response object
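
For context on the three timeouts the servlet now sets, here is a standalone HttpClient 4.x sketch (class name and values are illustrative, not the servlet's own code): the connect timeout bounds TCP connection establishment, the socket timeout bounds the wait between data packets, and the connection request timeout bounds the wait for a connection from the pool.

    import org.apache.http.client.config.RequestConfig;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;

    public class TimeoutClientSketch {
      public static CloseableHttpClient build(int timeoutMs) {
        RequestConfig cfg = RequestConfig.custom()
            .setConnectTimeout(timeoutMs)             // TCP connect
            .setSocketTimeout(timeoutMs)              // wait between packets
            .setConnectionRequestTimeout(timeoutMs)   // wait for pooled connection
            .build();
        return HttpClients.custom().setDefaultRequestConfig(cfg).build();
      }
    }
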
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
index f05e05a2d63..6c8993f6e80 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
@@ -23,6 +23,8 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
 
 import java.io.ByteArrayOutputStream;
 import java.io.IOException;
@@ -35,10 +37,14 @@ import java.net.HttpCookie;
 import java.net.HttpURLConnection;
 import java.net.URI;
 import java.net.URL;
+import java.net.SocketTimeoutException;
+import java.util.Collections;
 import java.util.Enumeration;
 import java.util.List;
 import java.util.Map;
 
+import javax.servlet.ServletConfig;
+import javax.servlet.ServletContext;
 import javax.servlet.ServletException;
 import javax.servlet.http.HttpServlet;
 import javax.servlet.http.HttpServletRequest;
@@ -98,6 +104,7 @@ public class TestWebAppProxyServlet {
     context.setContextPath("/foo");
     server.setHandler(context);
     context.addServlet(new ServletHolder(TestServlet.class), "/bar");
+    context.addServlet(new ServletHolder(TimeOutTestServlet.class), "/timeout");
     ((ServerConnector)server.getConnectors()[0]).setHost("localhost");
     server.start();
     originalPort = ((ServerConnector)server.getConnectors()[0]).getLocalPort();
@@ -145,6 +152,29 @@ public class TestWebAppProxyServlet {
     }
   }
 
+  @SuppressWarnings("serial")
+  public static class TimeOutTestServlet extends HttpServlet {
+
+    @Override
+    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
+        throws ServletException, IOException {
+      try {
+        Thread.sleep(10 * 1000);
+      } catch (InterruptedException e) {
+        LOG.warn("doGet() interrupted", e);
+        resp.setStatus(HttpServletResponse.SC_BAD_REQUEST);
+        return;
+      }
+      resp.setStatus(HttpServletResponse.SC_OK);
+    }
+
+    @Override
+    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
+        throws ServletException, IOException {
+      resp.setStatus(HttpServletResponse.SC_OK);
+    }
+  }
+
   @Test(timeout=5000)
   public void testWebAppProxyServlet() throws Exception {
     configuration.set(YarnConfiguration.PROXY_ADDRESS, "localhost:9090");
@@ -256,6 +286,45 @@ public class TestWebAppProxyServlet {
     }
   }
 
+  @Test(expected = SocketTimeoutException.class)
+  public void testWebAppProxyConnectionTimeout()
+      throws IOException, ServletException{
+    HttpServletRequest request = mock(HttpServletRequest.class);
+    when(request.getMethod()).thenReturn("GET");
+    when(request.getRemoteUser()).thenReturn("dr.who");
+    when(request.getPathInfo()).thenReturn("/application_00_0");
+    when(request.getHeaderNames()).thenReturn(Collections.emptyEnumeration());
+
+    HttpServletResponse response = mock(HttpServletResponse.class);
+    when(response.getOutputStream()).thenReturn(null);
+
+    WebAppProxyServlet servlet = new WebAppProxyServlet();
+    YarnConfiguration conf = new YarnConfiguration();
+    conf.setBoolean(YarnConfiguration.RM_PROXY_TIMEOUT_ENABLED,
+        true);
+    conf.setInt(YarnConfiguration.RM_PROXY_CONNECTION_TIMEOUT,
+        1000);
+
+    servlet.setConf(conf);
+
+    ServletConfig config = mock(ServletConfig.class);
+    ServletContext context = mock(ServletContext.class);
+    when(config.getServletContext()).thenReturn(context);
+
+    AppReportFetcherForTest appReportFetcher =
+        new AppReportFetcherForTest(new YarnConfiguration());
+
+    when(config.getServletContext()
+        .getAttribute(WebAppProxy.FETCHER_ATTRIBUTE))
+        .thenReturn(appReportFetcher);
+
+    appReportFetcher.answer = 7;
+
+    servlet.init(config);
+    servlet.doGet(request, response);
+
+  }
+
   @Test(timeout=5000)
   public void testAppReportForEmptyTrackingUrl() throws Exception {
     configuration.set(YarnConfiguration.PROXY_ADDRESS, "localhost:9090");
@@ -391,9 +460,9 @@ public class TestWebAppProxyServlet {
 
   @Test(timeout=5000)
   public void testCheckHttpsStrictAndNotProvided() throws Exception {
-    HttpServletResponse resp = Mockito.mock(HttpServletResponse.class);
+    HttpServletResponse resp = mock(HttpServletResponse.class);
     StringWriter sw = new StringWriter();
-    Mockito.when(resp.getWriter()).thenReturn(new PrintWriter(sw));
+    when(resp.getWriter()).thenReturn(new PrintWriter(sw));
     YarnConfiguration conf = new YarnConfiguration();
     final URI httpLink = new URI("http://foo.com");
     final URI httpsLink = new URI("https://foo.com");
@@ -566,6 +635,12 @@ public class TestWebAppProxyServlet {
         return result;
       } else if (answer == 6) {
         return getDefaultApplicationReport(appId, false);
+      } else if (answer == 7) {
+        // test connection timeout
+        FetchedAppReport result = getDefaultApplicationReport(appId);
+        result.getApplicationReport().setOriginalTrackingUrl("localhost:"
+            + originalPort + "/foo/timeout?a=b#main");
+        return result;
       }
       return null;
     }




[hadoop] 06/16: HDFS-11041. Unable to unregister FsDatasetState MBean if DataNode is shutdown twice. Contributed by Wei-Chiu Chuang.


stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 51b3a5b22c6b7994da5db18f960bb78d5bb41ff6
Author: Ayush Saxena <ay...@apache.org>
AuthorDate: Wed Jun 3 12:47:15 2020 +0530

    HDFS-11041. Unable to unregister FsDatasetState MBean if DataNode is shutdown twice. Contributed by Wei-Chiu Chuang.
    
    (cherry picked from commit e8cb2ae409bc1d62f23efef485d1c6f1ff21e86c)
    
    Change-Id: I9f04082d650628bc1b8b62dacaaf472f8a578742
---
 .../hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java    | 1 +
 .../org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java   | 5 ++++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 2ab4b83a3d2..d263d7dfd35 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -2353,6 +2353,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 
     if (mbeanName != null) {
       MBeans.unregister(mbeanName);
+      mbeanName = null;
     }
     
     if (asyncDiskService != null) {
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
index 113da585c9e..417ad3ce74c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
@@ -1367,7 +1367,10 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
 
   @Override
   public void shutdown() {
-    if (mbeanName != null) MBeans.unregister(mbeanName);
+    if (mbeanName != null) {
+      MBeans.unregister(mbeanName);
+      mbeanName = null;
+    }
   }
 
   @Override
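
Both hunks in this commit apply the same null-after-unregister pattern; a minimal sketch of the idea in isolation (class and field names assumed): clearing the field turns a repeated shutdown() into a harmless no-op instead of a double unregister.

    import javax.management.ObjectName;
    import org.apache.hadoop.metrics2.util.MBeans;

    public class MBeanHolderSketch {
      private ObjectName mbeanName; // assigned when the bean is registered

      public synchronized void shutdown() {
        if (mbeanName != null) {
          MBeans.unregister(mbeanName);
          mbeanName = null; // subsequent shutdown() calls fall through
        }
      }
    }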




[hadoop] 10/16: HADOOP-18155. Refactor tests in TestFileUtil (#4063)


stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit fd96d5c2d5278aa6e7d527efa80761384c87bc26
Author: Wei-Chiu Chuang <we...@apache.org>
AuthorDate: Mon Mar 14 08:40:17 2022 +0800

    HADOOP-18155. Refactor tests in TestFileUtil (#4063)
    
    (cherry picked from commit d0fa9b5775185bd83e4a767a7dfc13ef89c5154a)
    
     Conflicts:
            hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
            hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
    
    Change-Id: I2bba28c56dd08da315856066b58b1778b67bfb45
    Co-authored-by: Gautham B A <ga...@gmail.com>
---
 .../main/java/org/apache/hadoop/fs/FileUtil.java   |  36 +-
 .../java/org/apache/hadoop/fs/TestFileUtil.java    | 394 +++++++++++++--------
 2 files changed, 271 insertions(+), 159 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
index 5e2d6c5badb..13c9d857379 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
@@ -38,6 +38,7 @@ import java.nio.charset.StandardCharsets;
 import java.nio.file.AccessDeniedException;
 import java.nio.file.FileSystems;
 import java.nio.file.Files;
+import java.nio.file.Paths;
 import java.util.ArrayList;
 import java.util.Enumeration;
 import java.util.List;
@@ -970,6 +971,14 @@ public class FileUtil {
           + " would create entry outside of " + outputDir);
     }
 
+    if (entry.isSymbolicLink() || entry.isLink()) {
+      String canonicalTargetPath = getCanonicalPath(entry.getLinkName(), outputDir);
+      if (!canonicalTargetPath.startsWith(targetDirPath)) {
+        throw new IOException(
+            "expanding " + entry.getName() + " would create entry outside of " + outputDir);
+      }
+    }
+
     if (entry.isDirectory()) {
       File subDir = new File(outputDir, entry.getName());
       if (!subDir.mkdirs() && !subDir.isDirectory()) {
@@ -985,10 +994,12 @@ public class FileUtil {
     }
 
     if (entry.isSymbolicLink()) {
-      // Create symbolic link relative to tar parent dir
-      Files.createSymbolicLink(FileSystems.getDefault()
-              .getPath(outputDir.getPath(), entry.getName()),
-          FileSystems.getDefault().getPath(entry.getLinkName()));
+      // Create symlink with canonical target path to ensure that we don't extract
+      // outside targetDirPath
+      String canonicalTargetPath = getCanonicalPath(entry.getLinkName(), outputDir);
+      Files.createSymbolicLink(
+          FileSystems.getDefault().getPath(outputDir.getPath(), entry.getName()),
+          FileSystems.getDefault().getPath(canonicalTargetPath));
       return;
     }
 
@@ -1000,7 +1011,8 @@ public class FileUtil {
     }
 
     if (entry.isLink()) {
-      File src = new File(outputDir, entry.getLinkName());
+      String canonicalTargetPath = getCanonicalPath(entry.getLinkName(), outputDir);
+      File src = new File(canonicalTargetPath);
       HardLink.createHardLink(src, outputFile);
       return;
     }
@@ -1008,6 +1020,20 @@ public class FileUtil {
     org.apache.commons.io.FileUtils.copyToFile(tis, outputFile);
   }
 
+  /**
+   * Gets the canonical path for the given path.
+   *
+   * @param path      The path for which the canonical path needs to be computed.
+   * @param parentDir The parent directory to use if the path is a relative path.
+   * @return The canonical path of the given path.
+   */
+  private static String getCanonicalPath(String path, File parentDir) throws IOException {
+    java.nio.file.Path targetPath = Paths.get(path);
+    return (targetPath.isAbsolute() ?
+        new File(path) :
+        new File(parentDir, path)).getCanonicalPath();
+  }
+
   /**
    * Class for creating hardlinks.
    * Supports Unix, WindXP.
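
The new link handling guards against "tar-slip": a crafted archive whose symlink or hard-link target escapes the extraction directory. A minimal standalone sketch of the check (method and message are assumptions, not the patch itself):

    import java.io.File;
    import java.io.IOException;

    public final class LinkTargetCheck {
      // Resolve a tar entry's link target, then reject it unless its
      // canonical form still lives under the extraction directory.
      static void check(String linkName, File outputDir) throws IOException {
        File target = new File(linkName).isAbsolute()
            ? new File(linkName) : new File(outputDir, linkName);
        String canonicalTarget = target.getCanonicalPath();
        String base = outputDir.getCanonicalPath();
        if (!canonicalTarget.equals(base)
            && !canonicalTarget.startsWith(base + File.separator)) {
          throw new IOException("expanding " + linkName
              + " would create entry outside of " + outputDir);
        }
      }
    }
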
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
index e84d23c058a..03b9d22b98d 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
@@ -42,13 +42,14 @@ import java.net.URISyntaxException;
 import java.net.URL;
 import java.net.UnknownHostException;
 import java.nio.charset.StandardCharsets;
-import java.nio.file.FileSystems;
 import java.nio.file.Files;
+import java.nio.file.Paths;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.List;
+import java.util.Objects;
 import java.util.jar.Attributes;
 import java.util.jar.JarFile;
 import java.util.jar.Manifest;
@@ -60,9 +61,12 @@ import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.tools.tar.TarEntry;
 import org.apache.tools.tar.TarOutputStream;
+
+import org.assertj.core.api.Assertions;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -158,13 +162,12 @@ public class TestFileUtil {
     FileUtils.forceMkdir(dir1);
     FileUtils.forceMkdir(dir2);
 
-    new File(del, FILE).createNewFile();
-    File tmpFile = new File(tmp, FILE);
-    tmpFile.createNewFile();
+    Verify.createNewFile(new File(del, FILE));
+    File tmpFile = Verify.createNewFile(new File(tmp, FILE));
 
     // create files
-    new File(dir1, FILE).createNewFile();
-    new File(dir2, FILE).createNewFile();
+    Verify.createNewFile(new File(dir1, FILE));
+    Verify.createNewFile(new File(dir2, FILE));
 
     // create a symlink to file
     File link = new File(del, LINK);
@@ -173,7 +176,7 @@ public class TestFileUtil {
     // create a symlink to dir
     File linkDir = new File(del, "tmpDir");
     FileUtil.symLink(tmp.toString(), linkDir.toString());
-    Assert.assertEquals(5, del.listFiles().length);
+    Assert.assertEquals(5, Objects.requireNonNull(del.listFiles()).length);
 
     // create files in partitioned directories
     createFile(partitioned, "part-r-00000", "foo");
@@ -200,13 +203,9 @@ public class TestFileUtil {
   private File createFile(File directory, String name, String contents)
       throws IOException {
     File newFile = new File(directory, name);
-    PrintWriter pw = new PrintWriter(newFile);
-    try {
+    try (PrintWriter pw = new PrintWriter(newFile)) {
       pw.println(contents);
     }
-    finally {
-      pw.close();
-    }
     return newFile;
   }
 
@@ -218,11 +217,11 @@ public class TestFileUtil {
 
     //Test existing directory with no files case 
     File newDir = new File(tmp.getPath(),"test");
-    newDir.mkdir();
+    Verify.mkdir(newDir);
     Assert.assertTrue("Failed to create test dir", newDir.exists());
     files = FileUtil.listFiles(newDir);
     Assert.assertEquals(0, files.length);
-    newDir.delete();
+    assertTrue(newDir.delete());
     Assert.assertFalse("Failed to delete test dir", newDir.exists());
     
     //Test non-existing directory case, this throws 
@@ -244,11 +243,11 @@ public class TestFileUtil {
 
     //Test existing directory with no files case 
     File newDir = new File(tmp.getPath(),"test");
-    newDir.mkdir();
+    Verify.mkdir(newDir);
     Assert.assertTrue("Failed to create test dir", newDir.exists());
     files = FileUtil.list(newDir);
     Assert.assertEquals("New directory unexpectedly contains files", 0, files.length);
-    newDir.delete();
+    assertTrue(newDir.delete());
     Assert.assertFalse("Failed to delete test dir", newDir.exists());
     
     //Test non-existing directory case, this throws 
@@ -266,7 +265,7 @@ public class TestFileUtil {
   public void testFullyDelete() throws IOException {
     boolean ret = FileUtil.fullyDelete(del);
     Assert.assertTrue(ret);
-    Assert.assertFalse(del.exists());
+    Verify.notExists(del);
     validateTmpDir();
   }
 
@@ -279,13 +278,13 @@ public class TestFileUtil {
   @Test (timeout = 30000)
   public void testFullyDeleteSymlinks() throws IOException {
     File link = new File(del, LINK);
-    Assert.assertEquals(5, del.list().length);
+    assertDelListLength(5);
     // Since tmpDir is symlink to tmp, fullyDelete(tmpDir) should not
     // delete contents of tmp. See setupDirs for details.
     boolean ret = FileUtil.fullyDelete(link);
     Assert.assertTrue(ret);
-    Assert.assertFalse(link.exists());
-    Assert.assertEquals(4, del.list().length);
+    Verify.notExists(link);
+    assertDelListLength(4);
     validateTmpDir();
 
     File linkDir = new File(del, "tmpDir");
@@ -293,8 +292,8 @@ public class TestFileUtil {
     // delete contents of tmp. See setupDirs for details.
     ret = FileUtil.fullyDelete(linkDir);
     Assert.assertTrue(ret);
-    Assert.assertFalse(linkDir.exists());
-    Assert.assertEquals(3, del.list().length);
+    Verify.notExists(linkDir);
+    assertDelListLength(3);
     validateTmpDir();
   }
 
@@ -310,16 +309,16 @@ public class TestFileUtil {
     // to make y as a dangling link to file tmp/x
     boolean ret = FileUtil.fullyDelete(tmp);
     Assert.assertTrue(ret);
-    Assert.assertFalse(tmp.exists());
+    Verify.notExists(tmp);
 
     // dangling symlink to file
     File link = new File(del, LINK);
-    Assert.assertEquals(5, del.list().length);
+    assertDelListLength(5);
     // Even though 'y' is dangling symlink to file tmp/x, fullyDelete(y)
     // should delete 'y' properly.
     ret = FileUtil.fullyDelete(link);
     Assert.assertTrue(ret);
-    Assert.assertEquals(4, del.list().length);
+    assertDelListLength(4);
 
     // dangling symlink to directory
     File linkDir = new File(del, "tmpDir");
@@ -327,22 +326,22 @@ public class TestFileUtil {
     // delete tmpDir properly.
     ret = FileUtil.fullyDelete(linkDir);
     Assert.assertTrue(ret);
-    Assert.assertEquals(3, del.list().length);
+    assertDelListLength(3);
   }
 
   @Test (timeout = 30000)
   public void testFullyDeleteContents() throws IOException {
     boolean ret = FileUtil.fullyDeleteContents(del);
     Assert.assertTrue(ret);
-    Assert.assertTrue(del.exists());
-    Assert.assertEquals(0, del.listFiles().length);
+    Verify.exists(del);
+    Assert.assertEquals(0, Objects.requireNonNull(del.listFiles()).length);
     validateTmpDir();
   }
 
   private void validateTmpDir() {
-    Assert.assertTrue(tmp.exists());
-    Assert.assertEquals(1, tmp.listFiles().length);
-    Assert.assertTrue(new File(tmp, FILE).exists());
+    Verify.exists(tmp);
+    Assert.assertEquals(1, Objects.requireNonNull(tmp.listFiles()).length);
+    Verify.exists(new File(tmp, FILE));
   }
 
   /**
@@ -366,15 +365,15 @@ public class TestFileUtil {
    * @throws IOException
    */
   private void setupDirsAndNonWritablePermissions() throws IOException {
-    new MyFile(del, FILE_1_NAME).createNewFile();
+    Verify.createNewFile(new MyFile(del, FILE_1_NAME));
 
     // "file1" is non-deletable by default, see MyFile.delete().
 
-    xSubDir.mkdirs();
-    file2.createNewFile();
+    Verify.mkdirs(xSubDir);
+    Verify.createNewFile(file2);
 
-    xSubSubDir.mkdirs();
-    file22.createNewFile();
+    Verify.mkdirs(xSubSubDir);
+    Verify.createNewFile(file22);
 
     revokePermissions(file22);
     revokePermissions(xSubSubDir);
@@ -382,8 +381,8 @@ public class TestFileUtil {
     revokePermissions(file2);
     revokePermissions(xSubDir);
 
-    ySubDir.mkdirs();
-    file3.createNewFile();
+    Verify.mkdirs(ySubDir);
+    Verify.createNewFile(file3);
 
     File tmpFile = new File(tmp, FILE);
     tmpFile.createNewFile();
@@ -448,6 +447,88 @@ public class TestFileUtil {
     validateAndSetWritablePermissions(false, ret);
   }
 
+  /**
+   * Asserts if the {@link TestFileUtil#del} meets the given expected length.
+   *
+   * @param expectedLength The expected length of the {@link TestFileUtil#del}.
+   */
+  private void assertDelListLength(int expectedLength) {
+    Assertions.assertThat(del.list()).describedAs("del list").isNotNull().hasSize(expectedLength);
+  }
+
+  /**
+   * Helper class to perform {@link File} operation and also verify them.
+   */
+  public static class Verify {
+    /**
+     * Invokes {@link File#createNewFile()} on the given {@link File} instance.
+     *
+     * @param file The file to call {@link File#createNewFile()} on.
+     * @return The result of {@link File#createNewFile()}.
+     * @throws IOException As per {@link File#createNewFile()}.
+     */
+    public static File createNewFile(File file) throws IOException {
+      assertTrue("Unable to create new file " + file, file.createNewFile());
+      return file;
+    }
+
+    /**
+     * Invokes {@link File#mkdir()} on the given {@link File} instance.
+     *
+     * @param file The file to call {@link File#mkdir()} on.
+     * @return The result of {@link File#mkdir()}.
+     */
+    public static File mkdir(File file) {
+      assertTrue("Unable to mkdir for " + file, file.mkdir());
+      return file;
+    }
+
+    /**
+     * Invokes {@link File#mkdirs()} on the given {@link File} instance.
+     *
+     * @param file The file to call {@link File#mkdirs()} on.
+     * @return The result of {@link File#mkdirs()}.
+     */
+    public static File mkdirs(File file) {
+      assertTrue("Unable to mkdirs for " + file, file.mkdirs());
+      return file;
+    }
+
+    /**
+     * Invokes {@link File#delete()} on the given {@link File} instance.
+     *
+     * @param file The file to call {@link File#delete()} on.
+     * @return The result of {@link File#delete()}.
+     */
+    public static File delete(File file) {
+      assertTrue("Unable to delete " + file, file.delete());
+      return file;
+    }
+
+    /**
+     * Invokes {@link File#exists()} on the given {@link File} instance.
+     *
+     * @param file The file to call {@link File#exists()} on.
+     * @return The result of {@link File#exists()}.
+     */
+    public static File exists(File file) {
+      assertTrue("Expected file " + file + " doesn't exist", file.exists());
+      return file;
+    }
+
+    /**
+     * Invokes {@link File#exists()} on the given {@link File} instance to check if the
+     * {@link File} doesn't exists.
+     *
+     * @param file The file to call {@link File#exists()} on.
+     * @return The negation of the result of {@link File#exists()}.
+     */
+    public static File notExists(File file) {
+      assertFalse("Expected file " + file + " must not exist", file.exists());
+      return file;
+    }
+  }
+
   /**
    * Extend {@link File}. Same as {@link File} except for two things: (1) This
    * treats file1Name as a very special file which is not delete-able
@@ -580,14 +661,13 @@ public class TestFileUtil {
       FileUtil.chmod(partitioned.getAbsolutePath(), "0777", true/*recursive*/);
     }
   }
-  
+
   @Test (timeout = 30000)
-  public void testUnTar() throws IOException {
+  public void testUnTar() throws Exception {
     // make a simple tar:
     final File simpleTar = new File(del, FILE);
-    OutputStream os = new FileOutputStream(simpleTar); 
-    TarOutputStream tos = new TarOutputStream(os);
-    try {
+    OutputStream os = new FileOutputStream(simpleTar);
+    try (TarOutputStream tos = new TarOutputStream(os)) {
       TarEntry te = new TarEntry("/bar/foo");
       byte[] data = "some-content".getBytes("UTF-8");
       te.setSize(data.length);
@@ -596,55 +676,42 @@ public class TestFileUtil {
       tos.closeEntry();
       tos.flush();
       tos.finish();
-    } finally {
-      tos.close();
     }
 
     // successfully untar it into an existing dir:
     FileUtil.unTar(simpleTar, tmp);
     // check result:
-    assertTrue(new File(tmp, "/bar/foo").exists());
+    Verify.exists(new File(tmp, "/bar/foo"));
     assertEquals(12, new File(tmp, "/bar/foo").length());
-    
-    final File regularFile = new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog");
-    regularFile.createNewFile();
-    assertTrue(regularFile.exists());
-    try {
-      FileUtil.unTar(simpleTar, regularFile);
-      assertTrue("An IOException expected.", false);
-    } catch (IOException ioe) {
-      // okay
-    }
+
+    final File regularFile =
+        Verify.createNewFile(new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog"));
+    LambdaTestUtils.intercept(IOException.class, () -> FileUtil.unTar(simpleTar, regularFile));
   }
   
   @Test (timeout = 30000)
   public void testReplaceFile() throws IOException {
-    final File srcFile = new File(tmp, "src");
-    
     // src exists, and target does not exist:
-    srcFile.createNewFile();
-    assertTrue(srcFile.exists());
+    final File srcFile = Verify.createNewFile(new File(tmp, "src"));
     final File targetFile = new File(tmp, "target");
-    assertTrue(!targetFile.exists());
+    Verify.notExists(targetFile);
     FileUtil.replaceFile(srcFile, targetFile);
-    assertTrue(!srcFile.exists());
-    assertTrue(targetFile.exists());
+    Verify.notExists(srcFile);
+    Verify.exists(targetFile);
 
     // src exists and target is a regular file: 
-    srcFile.createNewFile();
-    assertTrue(srcFile.exists());
+    Verify.createNewFile(srcFile);
+    Verify.exists(srcFile);
     FileUtil.replaceFile(srcFile, targetFile);
-    assertTrue(!srcFile.exists());
-    assertTrue(targetFile.exists());
+    Verify.notExists(srcFile);
+    Verify.exists(targetFile);
     
     // src exists, and target is a non-empty directory: 
-    srcFile.createNewFile();
-    assertTrue(srcFile.exists());
-    targetFile.delete();
-    targetFile.mkdirs();
-    File obstacle = new File(targetFile, "obstacle");
-    obstacle.createNewFile();
-    assertTrue(obstacle.exists());
+    Verify.createNewFile(srcFile);
+    Verify.exists(srcFile);
+    Verify.delete(targetFile);
+    Verify.mkdirs(targetFile);
+    File obstacle = Verify.createNewFile(new File(targetFile, "obstacle"));
     assertTrue(targetFile.exists() && targetFile.isDirectory());
     try {
       FileUtil.replaceFile(srcFile, targetFile);
@@ -653,9 +720,9 @@ public class TestFileUtil {
       // okay
     }
     // check up the post-condition: nothing is deleted:
-    assertTrue(srcFile.exists());
+    Verify.exists(srcFile);
     assertTrue(targetFile.exists() && targetFile.isDirectory());
-    assertTrue(obstacle.exists());
+    Verify.exists(obstacle);
   }
   
   @Test (timeout = 30000)
@@ -668,13 +735,13 @@ public class TestFileUtil {
     assertTrue(tmp1.exists() && tmp2.exists());
     assertTrue(tmp1.canWrite() && tmp2.canWrite());
     assertTrue(tmp1.canRead() && tmp2.canRead());
-    tmp1.delete();
-    tmp2.delete();
+    Verify.delete(tmp1);
+    Verify.delete(tmp2);
     assertTrue(!tmp1.exists() && !tmp2.exists());
   }
   
   @Test (timeout = 30000)
-  public void testUnZip() throws IOException {
+  public void testUnZip() throws Exception {
     // make sa simple zip
     final File simpleZip = new File(del, FILE);
     OutputStream os = new FileOutputStream(simpleZip); 
@@ -695,18 +762,12 @@ public class TestFileUtil {
     // successfully unzip it into an existing dir:
     FileUtil.unZip(simpleZip, tmp);
     // check result:
-    assertTrue(new File(tmp, "foo").exists());
+    Verify.exists(new File(tmp, "foo"));
     assertEquals(12, new File(tmp, "foo").length());
-    
-    final File regularFile = new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog");
-    regularFile.createNewFile();
-    assertTrue(regularFile.exists());
-    try {
-      FileUtil.unZip(simpleZip, regularFile);
-      assertTrue("An IOException expected.", false);
-    } catch (IOException ioe) {
-      // okay
-    }
+
+    final File regularFile =
+        Verify.createNewFile(new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog"));
+    LambdaTestUtils.intercept(IOException.class, () -> FileUtil.unZip(simpleZip, regularFile));
   }
 
   @Test (timeout = 30000)
@@ -752,24 +813,24 @@ public class TestFileUtil {
     final File dest = new File(del, "dest");
     boolean result = FileUtil.copy(fs, srcPath, dest, false, conf);
     assertTrue(result);
-    assertTrue(dest.exists());
+    Verify.exists(dest);
     assertEquals(content.getBytes().length 
         + System.getProperty("line.separator").getBytes().length, dest.length());
-    assertTrue(srcFile.exists()); // should not be deleted
+    Verify.exists(srcFile); // should not be deleted
     
     // copy regular file, delete src:
-    dest.delete();
-    assertTrue(!dest.exists());
+    Verify.delete(dest);
+    Verify.notExists(dest);
     result = FileUtil.copy(fs, srcPath, dest, true, conf);
     assertTrue(result);
-    assertTrue(dest.exists());
+    Verify.exists(dest);
     assertEquals(content.getBytes().length 
         + System.getProperty("line.separator").getBytes().length, dest.length());
-    assertTrue(!srcFile.exists()); // should be deleted
+    Verify.notExists(srcFile); // should be deleted
     
     // copy a dir:
-    dest.delete();
-    assertTrue(!dest.exists());
+    Verify.delete(dest);
+    Verify.notExists(dest);
     srcPath = new Path(partitioned.toURI());
     result = FileUtil.copy(fs, srcPath, dest, true, conf);
     assertTrue(result);
@@ -781,7 +842,7 @@ public class TestFileUtil {
       assertEquals(3 
           + System.getProperty("line.separator").getBytes().length, f.length());
     }
-    assertTrue(!partitioned.exists()); // should be deleted
+    Verify.notExists(partitioned); // should be deleted
   }  
 
   @Test (timeout = 30000)
@@ -869,8 +930,8 @@ public class TestFileUtil {
     // create the symlink
     FileUtil.symLink(file.getAbsolutePath(), link.getAbsolutePath());
 
-    Assert.assertTrue(file.exists());
-    Assert.assertTrue(link.exists());
+    Verify.exists(file);
+    Verify.exists(link);
 
     File link2 = new File(del, "_link2");
 
@@ -880,10 +941,10 @@ public class TestFileUtil {
     // Make sure the file still exists
     // (NOTE: this would fail on Java6 on Windows if we didn't
     // copy the file in FileUtil#symlink)
-    Assert.assertTrue(file.exists());
+    Verify.exists(file);
 
-    Assert.assertTrue(link2.exists());
-    Assert.assertFalse(link.exists());
+    Verify.exists(link2);
+    Verify.notExists(link);
   }
 
   /**
@@ -898,13 +959,13 @@ public class TestFileUtil {
     // create the symlink
     FileUtil.symLink(file.getAbsolutePath(), link.getAbsolutePath());
 
-    Assert.assertTrue(file.exists());
-    Assert.assertTrue(link.exists());
+    Verify.exists(file);
+    Verify.exists(link);
 
     // make sure that deleting a symlink works properly
-    Assert.assertTrue(link.delete());
-    Assert.assertFalse(link.exists());
-    Assert.assertTrue(file.exists());
+    Verify.delete(link);
+    Verify.notExists(link);
+    Verify.exists(file);
   }
 
   /**
@@ -931,13 +992,13 @@ public class TestFileUtil {
     Assert.assertEquals(data.length, file.length());
     Assert.assertEquals(data.length, link.length());
 
-    file.delete();
-    Assert.assertFalse(file.exists());
+    Verify.delete(file);
+    Verify.notExists(file);
 
     Assert.assertEquals(0, link.length());
 
-    link.delete();
-    Assert.assertFalse(link.exists());
+    Verify.delete(link);
+    Verify.notExists(link);
   }
 
   /**
@@ -1003,7 +1064,7 @@ public class TestFileUtil {
   public void testSymlinkSameFile() throws IOException {
     File file = new File(del, FILE);
 
-    file.delete();
+    Verify.delete(file);
 
     // Create a symbolic link
     // The operation should succeed
@@ -1076,21 +1137,21 @@ public class TestFileUtil {
 
     String parentDir = untarDir.getCanonicalPath() + Path.SEPARATOR + "name";
     File testFile = new File(parentDir + Path.SEPARATOR + "version");
-    Assert.assertTrue(testFile.exists());
+    Verify.exists(testFile);
     Assert.assertTrue(testFile.length() == 0);
     String imageDir = parentDir + Path.SEPARATOR + "image";
     testFile = new File(imageDir + Path.SEPARATOR + "fsimage");
-    Assert.assertTrue(testFile.exists());
+    Verify.exists(testFile);
     Assert.assertTrue(testFile.length() == 157);
     String currentDir = parentDir + Path.SEPARATOR + "current";
     testFile = new File(currentDir + Path.SEPARATOR + "fsimage");
-    Assert.assertTrue(testFile.exists());
+    Verify.exists(testFile);
     Assert.assertTrue(testFile.length() == 4331);
     testFile = new File(currentDir + Path.SEPARATOR + "edits");
-    Assert.assertTrue(testFile.exists());
+    Verify.exists(testFile);
     Assert.assertTrue(testFile.length() == 1033);
     testFile = new File(currentDir + Path.SEPARATOR + "fstime");
-    Assert.assertTrue(testFile.exists());
+    Verify.exists(testFile);
     Assert.assertTrue(testFile.length() == 8);
   }
 
@@ -1151,9 +1212,9 @@ public class TestFileUtil {
     }
 
     // create non-jar files, which we expect to not be included in the classpath
-    Assert.assertTrue(new File(tmp, "text.txt").createNewFile());
-    Assert.assertTrue(new File(tmp, "executable.exe").createNewFile());
-    Assert.assertTrue(new File(tmp, "README").createNewFile());
+    Verify.createNewFile(new File(tmp, "text.txt"));
+    Verify.createNewFile(new File(tmp, "executable.exe"));
+    Verify.createNewFile(new File(tmp, "README"));
 
     // create classpath jar
     String wildcardPath = tmp.getCanonicalPath() + File.separator + "*";
@@ -1239,9 +1300,9 @@ public class TestFileUtil {
     }
 
     // create non-jar files, which we expect to not be included in the result
-    assertTrue(new File(tmp, "text.txt").createNewFile());
-    assertTrue(new File(tmp, "executable.exe").createNewFile());
-    assertTrue(new File(tmp, "README").createNewFile());
+    Verify.createNewFile(new File(tmp, "text.txt"));
+    Verify.createNewFile(new File(tmp, "executable.exe"));
+    Verify.createNewFile(new File(tmp, "README"));
 
     // pass in the directory
     String directory = tmp.getCanonicalPath();
@@ -1275,7 +1336,7 @@ public class TestFileUtil {
       uri4 = new URI(uris4);
       uri5 = new URI(uris5);
       uri6 = new URI(uris6);
-    } catch (URISyntaxException use) {
+    } catch (URISyntaxException ignored) {
     }
     // Set up InetAddress
     inet1 = mock(InetAddress.class);
@@ -1298,7 +1359,7 @@ public class TestFileUtil {
       when(InetAddress.getByName(uris3)).thenReturn(inet3);
       when(InetAddress.getByName(uris4)).thenReturn(inet4);
       when(InetAddress.getByName(uris5)).thenReturn(inet5);
-    } catch (UnknownHostException ue) {
+    } catch (UnknownHostException ignored) {
     }
 
     fs1 = mock(FileSystem.class);
@@ -1318,62 +1379,87 @@ public class TestFileUtil {
   @Test
   public void testCompareFsNull() throws Exception {
     setupCompareFs();
-    assertEquals(FileUtil.compareFs(null,fs1),false);
-    assertEquals(FileUtil.compareFs(fs1,null),false);
+    assertFalse(FileUtil.compareFs(null, fs1));
+    assertFalse(FileUtil.compareFs(fs1, null));
   }
 
   @Test
   public void testCompareFsDirectories() throws Exception {
     setupCompareFs();
-    assertEquals(FileUtil.compareFs(fs1,fs1),true);
-    assertEquals(FileUtil.compareFs(fs1,fs2),false);
-    assertEquals(FileUtil.compareFs(fs1,fs5),false);
-    assertEquals(FileUtil.compareFs(fs3,fs4),true);
-    assertEquals(FileUtil.compareFs(fs1,fs6),false);
+    assertTrue(FileUtil.compareFs(fs1, fs1));
+    assertFalse(FileUtil.compareFs(fs1, fs2));
+    assertFalse(FileUtil.compareFs(fs1, fs5));
+    assertTrue(FileUtil.compareFs(fs3, fs4));
+    assertFalse(FileUtil.compareFs(fs1, fs6));
   }
 
   @Test(timeout = 8000)
   public void testCreateSymbolicLinkUsingJava() throws IOException {
     final File simpleTar = new File(del, FILE);
     OutputStream os = new FileOutputStream(simpleTar);
-    TarArchiveOutputStream tos = new TarArchiveOutputStream(os);
-    File untarFile = null;
-    try {
+    try (TarArchiveOutputStream tos = new TarArchiveOutputStream(os)) {
       // Files to tar
       final String tmpDir = "tmp/test";
       File tmpDir1 = new File(tmpDir, "dir1/");
       File tmpDir2 = new File(tmpDir, "dir2/");
-      // Delete the directories if they already exist
-      tmpDir1.mkdirs();
-      tmpDir2.mkdirs();
+      Verify.mkdirs(tmpDir1);
+      Verify.mkdirs(tmpDir2);
 
-      java.nio.file.Path symLink = FileSystems
-          .getDefault().getPath(tmpDir1.getPath() + "/sl");
+      java.nio.file.Path symLink = Paths.get(tmpDir1.getPath(), "sl");
 
       // Create Symbolic Link
-      Files.createSymbolicLink(symLink,
-          FileSystems.getDefault().getPath(tmpDir2.getPath())).toString();
+      Files.createSymbolicLink(symLink, Paths.get(tmpDir2.getPath()));
       assertTrue(Files.isSymbolicLink(symLink.toAbsolutePath()));
-      // put entries in tar file
+      // Put entries in tar file
       putEntriesInTar(tos, tmpDir1.getParentFile());
       tos.close();
 
-      untarFile = new File(tmpDir, "2");
-      // Untar using java
+      File untarFile = new File(tmpDir, "2");
+      // Untar using Java
       FileUtil.unTarUsingJava(simpleTar, untarFile, false);
 
       // Check symbolic link and other directories are there in untar file
       assertTrue(Files.exists(untarFile.toPath()));
-      assertTrue(Files.exists(FileSystems.getDefault().getPath(untarFile
-          .getPath(), tmpDir)));
-      assertTrue(Files.isSymbolicLink(FileSystems.getDefault().getPath(untarFile
-          .getPath().toString(), symLink.toString())));
-
+      assertTrue(Files.exists(Paths.get(untarFile.getPath(), tmpDir)));
+      assertTrue(Files.isSymbolicLink(Paths.get(untarFile.getPath(), symLink.toString())));
     } finally {
       FileUtils.deleteDirectory(new File("tmp"));
-      tos.close();
     }
+  }
+
+  @Test(expected = IOException.class)
+  public void testCreateArbitrarySymlinkUsingJava() throws IOException {
+    final File simpleTar = new File(del, FILE);
+    OutputStream os = new FileOutputStream(simpleTar);
 
+    File rootDir = new File("tmp");
+    try (TarArchiveOutputStream tos = new TarArchiveOutputStream(os)) {
+      tos.setLongFileMode(TarArchiveOutputStream.LONGFILE_GNU);
+
+      // Create arbitrary dir
+      File arbitraryDir = new File(rootDir, "arbitrary-dir/");
+      Verify.mkdirs(arbitraryDir);
+
+      // We will tar from the tar-root lineage
+      File tarRoot = new File(rootDir, "tar-root/");
+      File symlinkRoot = new File(tarRoot, "dir1/");
+      Verify.mkdirs(symlinkRoot);
+
+      // Create Symbolic Link to an arbitrary dir
+      java.nio.file.Path symLink = Paths.get(symlinkRoot.getPath(), "sl");
+      Files.createSymbolicLink(symLink, arbitraryDir.toPath().toAbsolutePath());
+
+      // Put entries in tar file
+      putEntriesInTar(tos, tarRoot);
+      putEntriesInTar(tos, new File(symLink.toFile(), "dir-outside-tar-root/"));
+      tos.close();
+
+      // Untar using Java
+      File untarFile = new File(rootDir, "extracted");
+      FileUtil.unTarUsingJava(simpleTar, untarFile, false);
+    } finally {
+      FileUtils.deleteDirectory(rootDir);
+    }
   }
 
   private void putEntriesInTar(TarArchiveOutputStream tos, File f)
@@ -1450,7 +1536,7 @@ public class TestFileUtil {
     String result = FileUtil.readLink(file);
     Assert.assertEquals("", result);
 
-    file.delete();
+    Verify.delete(file);
   }
 
   /**


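Throughout the refactored tests, bare java.io.File calls wrapped in
assertTrue are replaced by a small Verify helper that fails fast with a
descriptive error. The helper's definition is outside this excerpt; a
minimal sketch of what it plausibly looks like (method names from the
diff, bodies hypothetical):

    import java.io.File;
    import java.io.IOException;

    // Hypothetical sketch only; the real helper lives in TestFileUtil
    // and may differ in detail.
    final class Verify {
      private Verify() {
      }

      // Create the file, failing loudly if creation did not happen.
      static File createNewFile(File file) throws IOException {
        if (!file.createNewFile()) {
          throw new AssertionError("Unable to create file " + file);
        }
        return file;
      }

      // Create the directory and any missing parents, failing otherwise.
      static File mkdirs(File dir) {
        if (!dir.mkdirs()) {
          throw new AssertionError("Unable to create directory " + dir);
        }
        return dir;
      }

      // Delete the file, failing loudly if nothing was deleted.
      static File delete(File file) {
        if (!file.delete()) {
          throw new AssertionError("Unable to delete " + file);
        }
        return file;
      }
    }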

[hadoop] 09/16: HDFS-16428. Source path with storagePolicy cause wrong typeConsumed while rename (#3898). Contributed by lei w.


commit 5c3fdf0f4baf4ac6100f16f136f34002139effc4
Author: Thinker313 <47...@users.noreply.github.com>
AuthorDate: Tue Jan 25 15:26:18 2022 +0800

    HDFS-16428. Source path with storagePolicy cause wrong typeConsumed while rename (#3898). Contributed by lei w.
    
    Signed-off-by: Ayush Saxena <ay...@apache.org>
    Signed-off-by: He Xiaoqiao <he...@apache.org>
---
 .../hadoop/hdfs/server/namenode/FSDirRenameOp.java |  6 +++-
 .../hadoop/hdfs/server/namenode/FSDirectory.java   |  6 +++-
 .../apache/hadoop/hdfs/server/namenode/INode.java  | 10 ++++++
 .../java/org/apache/hadoop/hdfs/TestQuota.java     | 39 ++++++++++++++++++++++
 4 files changed, 59 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
index c60acaa0031..ee0bf8a5fb1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
@@ -80,8 +80,12 @@ class FSDirRenameOp {
     // Assume dstParent existence check done by callers.
     INode dstParent = dst.getINode(-2);
     // Use the destination parent's storage policy for quota delta verify.
+    final boolean isSrcSetSp = src.getLastINode().isSetStoragePolicy();
+    final byte storagePolicyID = isSrcSetSp ?
+        src.getLastINode().getLocalStoragePolicyID() :
+        dstParent.getStoragePolicyID();
     final QuotaCounts delta = src.getLastINode()
-        .computeQuotaUsage(bsps, dstParent.getStoragePolicyID(), false,
+        .computeQuotaUsage(bsps, storagePolicyID, false,
             Snapshot.CURRENT_STATE_ID);
 
     // Reduce the required quota by dst that is being removed
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 7b902d5ff1b..fc17eaebf7f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -1363,9 +1363,13 @@ public class FSDirectory implements Closeable {
     // always verify inode name
     verifyINodeName(inode.getLocalNameBytes());
 
+    final boolean isSrcSetSp = inode.isSetStoragePolicy();
+    final byte storagePolicyID = isSrcSetSp ?
+        inode.getLocalStoragePolicyID() :
+        parent.getStoragePolicyID();
     final QuotaCounts counts = inode
         .computeQuotaUsage(getBlockStoragePolicySuite(),
-            parent.getStoragePolicyID(), false, Snapshot.CURRENT_STATE_ID);
+            storagePolicyID, false, Snapshot.CURRENT_STATE_ID);
     updateCount(existing, pos, counts, checkQuota);
 
     boolean isRename = (inode.getParent() != null);
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
index 03f01eb32ee..8e417fe43aa 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
@@ -340,6 +340,16 @@ public abstract class INode implements INodeAttributes, Diff.Element<byte[]> {
     return false;
   }
 
+  /**
+   * Check if this inode itself has a storage policy set.
+   */
+  public boolean isSetStoragePolicy() {
+    if (isSymlink()) {
+      return false;
+    }
+    return getLocalStoragePolicyID() != HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED;
+  }
+
   /** Cast this inode to an {@link INodeFile}.  */
   public INodeFile asFile() {
     throw new IllegalStateException("Current inode is not a file: "
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
index 79088d3be85..e14ea4dc265 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
@@ -44,6 +44,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.QuotaUsage;
 import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.client.impl.LeaseRenewer;
 import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
@@ -958,6 +959,44 @@ public class TestQuota {
         6 * fileSpace);
   }
 
+  @Test
+  public void testRenameInodeWithStorageType() throws IOException {
+    final int size = 64;
+    final short repl = 1;
+    final Path foo = new Path("/foo");
+    final Path bs1 = new Path(foo, "bs1");
+    final Path wow = new Path(bs1, "wow");
+    final Path bs2 = new Path(foo, "bs2");
+    final Path wow2 = new Path(bs2, "wow2");
+    final Path wow3 = new Path(bs2, "wow3");
+
+    dfs.mkdirs(bs1, FsPermission.getDirDefault());
+    dfs.mkdirs(bs2, FsPermission.getDirDefault());
+    dfs.setQuota(bs1, 1000, 434217728);
+    dfs.setQuota(bs2, 1000, 434217728);
+    // file wow3 without storage policy
+    DFSTestUtil.createFile(dfs, wow3, size, repl, 0);
+
+    dfs.setStoragePolicy(bs2, HdfsConstants.ONESSD_STORAGE_POLICY_NAME);
+
+    DFSTestUtil.createFile(dfs, wow, size, repl, 0);
+    DFSTestUtil.createFile(dfs, wow2, size, repl, 0);
+    assertTrue("Without storage policy, typeConsumed should be 0.",
+        dfs.getQuotaUsage(bs1).getTypeConsumed(StorageType.SSD) == 0);
+    assertTrue("With storage policy, typeConsumed should not be 0.",
+        dfs.getQuotaUsage(bs2).getTypeConsumed(StorageType.SSD) != 0);
+    // wow3 without storage policy , rename will not change typeConsumed
+    dfs.rename(wow3, bs1);
+    assertTrue("Rename src without storagePolicy, dst typeConsumed should not be changed.",
+        dfs.getQuotaUsage(bs2).getTypeConsumed(StorageType.SSD) == 0);
+
+    long srcTypeQuota = dfs.getQuotaUsage(bs2).getTypeQuota(StorageType.SSD);
+    dfs.rename(bs2, bs1);
+    long dstTypeQuota = dfs.getQuotaUsage(bs1).getTypeConsumed(StorageType.SSD);
+    assertTrue("Rename with storage policy, typeConsumed should not be 0.",
+        dstTypeQuota != srcTypeQuota);
+  }
+
   private static void checkContentSummary(final ContentSummary expected,
       final ContentSummary computed) {
     assertEquals(expected.toString(), computed.toString());

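The substance of the fix is the policy-selection rule repeated in
FSDirRenameOp and FSDirectory above: prefer a storage policy set on the
source inode itself, and fall back to the destination parent's policy
only when the source has none. A self-contained sketch of that rule,
with a hypothetical interface standing in for the INode accessors:

    final class QuotaPolicySketch {
      // Stand-in for the INode methods used in the diff.
      interface StoragePolicySource {
        boolean isSetStoragePolicy();
        byte getLocalStoragePolicyID();
        byte getStoragePolicyID();
      }

      // Hypothetical name for the rule both call sites now apply.
      static byte policyForQuotaDelta(StoragePolicySource src,
          StoragePolicySource dstParent) {
        return src.isSetStoragePolicy()
            ? src.getLocalStoragePolicyID()
            : dstParent.getStoragePolicyID();
      }
    }

The new TestQuota case exercises both branches: a file created without
its own policy and a directory tree with ONESSD set.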


[hadoop] 16/16: HADOOP-18088. Replace log4j 1.x with reload4j. (#4052)


commit 877ef944f96eb986d0816f8a0491981f31858289
Author: Masatake Iwasaki <iw...@apache.org>
AuthorDate: Thu Apr 7 08:33:13 2022 +0900

    HADOOP-18088. Replace log4j 1.x with reload4j. (#4052)
    
    Co-authored-by: Wei-Chiu Chuang <we...@apache.org>
---
 LICENSE-binary                                     |   9 +-
 .../resources/assemblies/hadoop-dynamometer.xml    |   2 +-
 .../resources/assemblies/hadoop-hdfs-nfs-dist.xml  |   2 +-
 .../resources/assemblies/hadoop-httpfs-dist.xml    |   2 +-
 .../main/resources/assemblies/hadoop-kms-dist.xml  |   2 +-
 .../resources/assemblies/hadoop-mapreduce-dist.xml |   2 +-
 .../main/resources/assemblies/hadoop-nfs-dist.xml  |   2 +-
 .../src/main/resources/assemblies/hadoop-tools.xml |   2 +-
 .../main/resources/assemblies/hadoop-yarn-dist.xml |   2 +-
 .../hadoop-client-check-invariants/pom.xml         |   4 +-
 .../hadoop-client-check-test-invariants/pom.xml    |   4 +-
 .../hadoop-client-integration-tests/pom.xml        |   9 +-
 .../hadoop-client-minicluster/pom.xml              |  10 +-
 .../hadoop-client-runtime/pom.xml                  |   8 +-
 hadoop-client-modules/hadoop-client/pom.xml        |  14 +--
 hadoop-common-project/hadoop-auth-examples/pom.xml |   6 +-
 hadoop-common-project/hadoop-auth/pom.xml          |  12 ++-
 hadoop-common-project/hadoop-common/pom.xml        |   6 +-
 .../java/org/apache/hadoop/util/GenericsUtil.java  |   2 +-
 .../java/org/apache/hadoop/util/TestClassUtil.java |   2 +-
 hadoop-common-project/hadoop-kms/pom.xml           |   6 +-
 hadoop-common-project/hadoop-minikdc/pom.xml       |   2 +-
 hadoop-common-project/hadoop-nfs/pom.xml           |   6 +-
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml     |   4 +-
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml     |   6 +-
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml        |   6 +-
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml        |   6 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml            |   6 +-
 .../hadoop-mapreduce-client/pom.xml                |   2 +-
 hadoop-mapreduce-project/pom.xml                   |   2 +-
 hadoop-project/pom.xml                             | 117 +++++++++++++++++++--
 hadoop-tools/hadoop-azure/pom.xml                  |   4 +-
 .../pom.xml                                        |   4 +-
 .../hadoop-yarn-services-core/pom.xml              |   4 +-
 .../hadoop-yarn/hadoop-yarn-client/pom.xml         |   4 +-
 .../hadoop-yarn/hadoop-yarn-common/pom.xml         |   4 +-
 .../hadoop-yarn-server-resourcemanager/pom.xml     |   4 +-
 37 files changed, 195 insertions(+), 94 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 7a712a5ac98..0e93a3aba9f 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -208,6 +208,7 @@ License Version 2.0:
 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AbstractFuture.java
 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/TimeoutFuture.java
 
+ch.qos.reload4j:reload4j:1.2.18.3
 com.aliyun:aliyun-java-sdk-core:3.4.0
 com.aliyun:aliyun-java-sdk-ecs:4.2.0
 com.aliyun:aliyun-java-sdk-ram:3.0.0
@@ -273,7 +274,6 @@ io.reactivex:rxjava-string:1.1.1
 io.reactivex:rxnetty:0.4.20
 io.swagger:swagger-annotations:1.5.4
 javax.inject:javax.inject:1
-log4j:log4j:1.2.17
 net.java.dev.jna:jna:5.2.0
 net.minidev:accessors-smart:2.4.7
 net.minidev:json-smart:2.4.7
@@ -436,9 +436,10 @@ org.codehaus.mojo:animal-sniffer-annotations:1.17
 org.jruby.jcodings:jcodings:1.0.13
 org.jruby.joni:joni:2.1.2
 org.ojalgo:ojalgo:43.0
-org.slf4j:jul-to-slf4j:1.7.30
-org.slf4j:slf4j-api:1.7.30
-org.slf4j:slf4j-log4j12:1.7.30
+org.slf4j:jcl-over-slf4j:1.7.35
+org.slf4j:jul-to-slf4j:1.7.35
+org.slf4j:slf4j-api:1.7.35
+org.slf4j:slf4j-reload4j:1.7.35
 
 
 CDDL 1.1 + GPLv2 with classpath exception
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-dynamometer.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-dynamometer.xml
index 448035262e1..b2ce562231c 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-dynamometer.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-dynamometer.xml
@@ -66,7 +66,7 @@
       <excludes>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
       </excludes>
     </dependencySet>
   </dependencySets>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-hdfs-nfs-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-hdfs-nfs-dist.xml
index 0edfdeb7b0d..af5d89d7efe 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-hdfs-nfs-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-hdfs-nfs-dist.xml
@@ -40,7 +40,7 @@
         <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
       </excludes>
     </dependencySet>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml
index d698a3005d4..bec2f94b95e 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-httpfs-dist.xml
@@ -69,7 +69,7 @@
         <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
       </excludes>
     </dependencySet>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-kms-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-kms-dist.xml
index ff6f99080ca..e5e6834b042 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-kms-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-kms-dist.xml
@@ -69,7 +69,7 @@
         <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
       </excludes>
     </dependencySet>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
index 06a55d6d06a..28d5ebe9f60 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
@@ -179,7 +179,7 @@
         <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
         <exclude>jdiff:jdiff:jar</exclude>
       </excludes>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-nfs-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-nfs-dist.xml
index cb3d9cdf249..59000c07113 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-nfs-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-nfs-dist.xml
@@ -40,7 +40,7 @@
         <exclude>org.apache.hadoop:hadoop-hdfs</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
       </excludes>
     </dependencySet>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
index 054d8c0ace2..1b9140f419b 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-tools.xml
@@ -214,7 +214,7 @@
         <exclude>org.apache.hadoop:hadoop-pipes</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
       </excludes>
     </dependencySet>
   </dependencySets>
diff --git a/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml b/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml
index 4da4ac5acb9..cd86ce4e417 100644
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml
@@ -309,7 +309,7 @@
         <exclude>org.apache.hadoop:*</exclude>
         <!-- use slf4j from common to avoid multiple binding warnings -->
         <exclude>org.slf4j:slf4j-api</exclude>
-        <exclude>org.slf4j:slf4j-log4j12</exclude>
+        <exclude>org.slf4j:slf4j-reload4j</exclude>
         <exclude>org.hsqldb:hsqldb</exclude>
       </excludes>
     </dependencySet>
diff --git a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
index 9d1deb63642..c58353c3ddd 100644
--- a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
@@ -84,8 +84,8 @@
                     <exclude>org.slf4j:slf4j-api</exclude>
                     <!-- Leave commons-logging unshaded so downstream users can configure logging. -->
                     <exclude>commons-logging:commons-logging</exclude>
-                    <!-- Leave log4j unshaded so downstream users can configure logging. -->
-                    <exclude>log4j:log4j</exclude>
+                    <!-- Leave reload4j unshaded so downstream users can configure logging. -->
+                    <exclude>ch.qos.reload4j:reload4j</exclude>
                     <!-- Leave javax annotations we need exposed -->
                     <exclude>com.google.code.findbugs:jsr305</exclude>
                     <!-- Leave bouncycastle unshaded because it's signed with a special Oracle certificate so it can be a custom JCE security provider -->
diff --git a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
index b96210dde7d..c7d7a8ee749 100644
--- a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
@@ -88,8 +88,8 @@
                     <exclude>org.slf4j:slf4j-api</exclude>
                     <!-- Leave commons-logging unshaded so downstream users can configure logging. -->
                     <exclude>commons-logging:commons-logging</exclude>
-                    <!-- Leave log4j unshaded so downstream users can configure logging. -->
-                    <exclude>log4j:log4j</exclude>
+                    <!-- Leave reload4j unshaded so downstream users can configure logging. -->
+                    <exclude>ch.qos.reload4j:reload4j</exclude>
                     <!-- Leave JUnit unshaded so downstream can use our test helper classes -->
                     <exclude>junit:junit</exclude>
                     <!-- JUnit brings in hamcrest -->
diff --git a/hadoop-client-modules/hadoop-client-integration-tests/pom.xml b/hadoop-client-modules/hadoop-client-integration-tests/pom.xml
index 51210210204..d74c9c19ceb 100644
--- a/hadoop-client-modules/hadoop-client-integration-tests/pom.xml
+++ b/hadoop-client-modules/hadoop-client-integration-tests/pom.xml
@@ -33,8 +33,8 @@
 
   <dependencies>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
@@ -42,11 +42,6 @@
       <artifactId>slf4j-api</artifactId>
       <scope>test</scope>
     </dependency>
-    <dependency>
-      <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
-      <scope>test</scope>
-    </dependency>
     <dependency>
       <groupId>junit</groupId>
       <artifactId>junit</artifactId>
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index d5ca75cbb4f..aa64544e7d1 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -193,8 +193,12 @@
           <artifactId>slf4j-log4j12</artifactId>
         </exclusion>
         <exclusion>
-          <groupId>log4j</groupId>
-          <artifactId>log4j</artifactId>
+          <groupId>org.slf4j</groupId>
+          <artifactId>slf4j-reload4j</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>ch.qos.reload4j</groupId>
+          <artifactId>reload4j</artifactId>
         </exclusion>
         <exclusion>
           <groupId>com.fasterxml.jackson.core</groupId>
@@ -682,7 +686,7 @@
                       <exclude>commons-logging:commons-logging</exclude>
                       <exclude>junit:junit</exclude>
                       <exclude>com.google.code.findbugs:jsr305</exclude>
-                      <exclude>log4j:log4j</exclude>
+                      <exclude>ch.qos.reload4j:reload4j</exclude>
                       <exclude>org.eclipse.jetty.websocket:websocket-common</exclude>
                       <exclude>org.eclipse.jetty.websocket:websocket-api</exclude>
                       <!-- We need a filter that matches just those things that are included in the above artiacts -->
diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml b/hadoop-client-modules/hadoop-client-runtime/pom.xml
index cf9b95286eb..d4f636de712 100644
--- a/hadoop-client-modules/hadoop-client-runtime/pom.xml
+++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml
@@ -103,8 +103,8 @@
          * one of the three custom log4j appenders we have
       -->
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>runtime</scope>
       <optional>true</optional>
     </dependency>
@@ -150,8 +150,8 @@
                       <exclude>org.slf4j:slf4j-api</exclude>
                       <!-- Leave commons-logging unshaded so downstream users can configure logging. -->
                       <exclude>commons-logging:commons-logging</exclude>
-                      <!-- Leave log4j unshaded so downstream users can configure logging. -->
-                      <exclude>log4j:log4j</exclude>
+                      <!-- Leave reload4j unshaded so downstream users can configure logging. -->
+                      <exclude>ch.qos.reload4j:reload4j</exclude>
                       <!-- Leave javax APIs that are stable -->
                       <!-- the jdk ships part of the javax.annotation namespace, so if we want to relocate this we'll have to care it out by class :( -->
                       <exclude>com.google.code.findbugs:jsr305</exclude>
diff --git a/hadoop-client-modules/hadoop-client/pom.xml b/hadoop-client-modules/hadoop-client/pom.xml
index 9670a8a39a6..17411217240 100644
--- a/hadoop-client-modules/hadoop-client/pom.xml
+++ b/hadoop-client-modules/hadoop-client/pom.xml
@@ -206,8 +206,8 @@
           <artifactId>commons-cli</artifactId>
         </exclusion>
         <exclusion>
-          <groupId>log4j</groupId>
-          <artifactId>log4j</artifactId>
+          <groupId>ch.qos.reload4j</groupId>
+          <artifactId>reload4j</artifactId>
         </exclusion>
         <exclusion>
           <groupId>com.sun.jersey</groupId>
@@ -282,11 +282,6 @@
           <groupId>io.netty</groupId>
           <artifactId>netty</artifactId>
         </exclusion>
-        <!-- No slf4j backends for downstream clients -->
-        <exclusion>
-          <groupId>org.slf4j</groupId>
-          <artifactId>slf4j-log4j12</artifactId>
-        </exclusion>
       </exclusions>
     </dependency>
 
@@ -315,11 +310,6 @@
           <groupId>io.netty</groupId>
           <artifactId>netty</artifactId>
         </exclusion>
-        <!-- No slf4j backends for downstream clients -->
-        <exclusion>
-          <groupId>org.slf4j</groupId>
-          <artifactId>slf4j-log4j12</artifactId>
-        </exclusion>
       </exclusions>
     </dependency>
 
diff --git a/hadoop-common-project/hadoop-auth-examples/pom.xml b/hadoop-common-project/hadoop-auth-examples/pom.xml
index 27580e50c8a..ce5130d49a0 100644
--- a/hadoop-common-project/hadoop-auth-examples/pom.xml
+++ b/hadoop-common-project/hadoop-auth-examples/pom.xml
@@ -47,13 +47,13 @@
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
   </dependencies>
diff --git a/hadoop-common-project/hadoop-auth/pom.xml b/hadoop-common-project/hadoop-auth/pom.xml
index 923be91e903..c2812869638 100644
--- a/hadoop-common-project/hadoop-auth/pom.xml
+++ b/hadoop-common-project/hadoop-auth/pom.xml
@@ -82,13 +82,13 @@
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
@@ -176,6 +176,12 @@
       <artifactId>apacheds-server-integ</artifactId>
       <version>${apacheds.version}</version>
       <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>log4j</groupId>
+          <artifactId>log4j</artifactId>
+        </exclusion>
+      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.directory.server</groupId>
diff --git a/hadoop-common-project/hadoop-common/pom.xml b/hadoop-common-project/hadoop-common/pom.xml
index 086a77f26d9..791429c8fff 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -159,8 +159,8 @@
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
@@ -205,7 +205,7 @@
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java
index 0aba34845a6..334e370214e 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java
@@ -85,7 +85,7 @@ public class GenericsUtil {
     }
     Logger log = LoggerFactory.getLogger(clazz);
     try {
-      Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter");
+      Class log4jClass = Class.forName("org.slf4j.impl.Reload4jLoggerAdapter");
       return log4jClass.isInstance(log);
     } catch (ClassNotFoundException e) {
       return false;
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
index 98e182236c9..04337929abd 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
@@ -35,6 +35,6 @@ public class TestClassUtil {
     Assert.assertTrue("Containing jar does not exist on file system ",
         jarFile.exists());
     Assert.assertTrue("Incorrect jar file " + containingJar,
-        jarFile.getName().matches("log4j.*[.]jar"));
+        jarFile.getName().matches("reload4j.*[.]jar"));
   }
 }
diff --git a/hadoop-common-project/hadoop-kms/pom.xml b/hadoop-common-project/hadoop-kms/pom.xml
index 71be87347a9..986cfe4a00d 100644
--- a/hadoop-common-project/hadoop-kms/pom.xml
+++ b/hadoop-common-project/hadoop-kms/pom.xml
@@ -134,8 +134,8 @@
       <type>test-jar</type>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
@@ -145,7 +145,7 @@
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-common-project/hadoop-minikdc/pom.xml b/hadoop-common-project/hadoop-minikdc/pom.xml
index 746d72c429c..441ac244f39 100644
--- a/hadoop-common-project/hadoop-minikdc/pom.xml
+++ b/hadoop-common-project/hadoop-minikdc/pom.xml
@@ -40,7 +40,7 @@
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-common-project/hadoop-nfs/pom.xml b/hadoop-common-project/hadoop-nfs/pom.xml
index baddec82727..06af6768118 100644
--- a/hadoop-common-project/hadoop-nfs/pom.xml
+++ b/hadoop-common-project/hadoop-nfs/pom.xml
@@ -79,13 +79,13 @@
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
index f85db539eba..e468a2e1547 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
@@ -48,8 +48,8 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
           <artifactId>commons-logging</artifactId>
         </exclusion>
         <exclusion>
-          <groupId>log4j</groupId>
-          <artifactId>log4j</artifactId>
+          <groupId>ch.qos.reload4j</groupId>
+          <artifactId>reload4j</artifactId>
         </exclusion>
       </exclusions>
     </dependency>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index e571d744e54..6470e3aa757 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -179,8 +179,8 @@
       <type>test-jar</type>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
@@ -190,7 +190,7 @@
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
     <!-- 'mvn dependency:analyze' fails to detect use of this dependency -->
diff --git a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
index 0d8ef6c4c0d..442a3601295 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
@@ -134,8 +134,8 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
@@ -160,7 +160,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>provided</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
index b37a1de11e1..02d5bfae3b6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
@@ -54,8 +54,8 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
           <artifactId>commons-logging</artifactId>
         </exclusion>
         <exclusion>
-          <groupId>log4j</groupId>
-          <artifactId>log4j</artifactId>
+          <groupId>ch.qos.reload4j</groupId>
+          <artifactId>reload4j</artifactId>
         </exclusion>
       </exclusions>
     </dependency>
@@ -71,7 +71,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>provided</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index df5d2cce9a6..8aa86dd3b0e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -118,8 +118,8 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
       <scope>compile</scope>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>compile</scope>
     </dependency>
     <dependency>
@@ -162,7 +162,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
+      <artifactId>slf4j-reload4j</artifactId>
       <scope>provided</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
index df6f081a8da..f862ecd6831 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
@@ -86,7 +86,7 @@
     </dependency>
     <dependency>
      <groupId>org.slf4j</groupId>
-       <artifactId>slf4j-log4j12</artifactId>
+       <artifactId>slf4j-reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
diff --git a/hadoop-mapreduce-project/pom.xml b/hadoop-mapreduce-project/pom.xml
index cba6031809b..b0951176442 100644
--- a/hadoop-mapreduce-project/pom.xml
+++ b/hadoop-mapreduce-project/pom.xml
@@ -88,7 +88,7 @@
     </dependency>
     <dependency>
      <groupId>org.slf4j</groupId>
-       <artifactId>slf4j-log4j12</artifactId>
+       <artifactId>slf4j-reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 66dd3fe6ac6..93ec9dcd84d 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -81,8 +81,8 @@
     <httpcore.version>4.4.13</httpcore.version>
 
     <!-- SLF4J/LOG4J version -->
-    <slf4j.version>1.7.30</slf4j.version>
-    <log4j.version>1.2.17</log4j.version>
+    <slf4j.version>1.7.36</slf4j.version>
+    <reload4j.version>1.2.18.3</reload4j.version>
 
     <!-- com.google.re2j version -->
     <re2j.version>1.1</re2j.version>
@@ -298,12 +298,28 @@
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-common</artifactId>
         <version>${hadoop.version}</version>
+        <exclusions>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-reload4j</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-common</artifactId>
         <version>${hadoop.version}</version>
         <type>test-jar</type>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.hadoop</groupId>
@@ -374,12 +390,24 @@
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-mapreduce-client-core</artifactId>
         <version>${hadoop.version}</version>
+        <exclusions>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-reload4j</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
 
       <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
         <version>${hadoop.version}</version>
+        <exclusions>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-reload4j</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
 
       <dependency>
@@ -953,9 +981,9 @@
         <version>${commons-logging-api.version}</version>
       </dependency>
       <dependency>
-        <groupId>log4j</groupId>
-        <artifactId>log4j</artifactId>
-        <version>${log4j.version}</version>
+        <groupId>ch.qos.reload4j</groupId>
+        <artifactId>reload4j</artifactId>
+        <version>${reload4j.version}</version>
         <exclusions>
           <exclusion>
             <groupId>com.sun.jdmk</groupId>
@@ -1099,7 +1127,7 @@
       </dependency>
       <dependency>
         <groupId>org.slf4j</groupId>
-        <artifactId>slf4j-log4j12</artifactId>
+        <artifactId>slf4j-reload4j</artifactId>
         <version>${slf4j.version}</version>
       </dependency>
       <dependency>
@@ -1305,6 +1333,10 @@
             <groupId>org.apache.kerby</groupId>
             <artifactId>kerby-config</artifactId>
           </exclusion>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
           <exclusion>
             <groupId>org.slf4j</groupId>
             <artifactId>slf4j-api</artifactId>
@@ -1313,6 +1345,10 @@
             <groupId>org.slf4j</groupId>
             <artifactId>slf4j-log4j12</artifactId>
           </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-reload4j</artifactId>
+          </exclusion>
         </exclusions>
       </dependency>
       <dependency>
@@ -1341,6 +1377,14 @@
             <groupId>io.netty</groupId>
             <artifactId>netty-transport-native-epoll</artifactId>
           </exclusion>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
         </exclusions>
       </dependency>
       <dependency>
@@ -1480,6 +1524,10 @@
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
          </exclusion>
+         <exclusion>
+           <groupId>log4j</groupId>
+           <artifactId>log4j</artifactId>
+         </exclusion>
        </exclusions>
      </dependency>
      <dependency>
@@ -1594,6 +1642,10 @@
             <artifactId>jdk.tools</artifactId>
             <groupId>jdk.tools</groupId>
           </exclusion>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
         </exclusions>
       </dependency>
       <dependency>
@@ -1602,6 +1654,16 @@
         <version>${hbase.version}</version>
         <scope>test</scope>
         <classifier>tests</classifier>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.hbase</groupId>
@@ -1619,6 +1681,28 @@
         <groupId>org.apache.hbase</groupId>
         <artifactId>hbase-server</artifactId>
         <version>${hbase.version}</version>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+        </exclusions>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.hbase</groupId>
+        <artifactId>hbase-server</artifactId>
+        <version>${hbase.version}</version>
+        <scope>test</scope>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.hbase</groupId>
@@ -1626,6 +1710,16 @@
         <version>${hbase.version}</version>
         <scope>test</scope>
         <classifier>tests</classifier>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.hbase</groupId>
@@ -1650,6 +1744,14 @@
             <artifactId>jdk.tools</artifactId>
             <groupId>jdk.tools</groupId>
           </exclusion>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+          </exclusion>
         </exclusions>
         </dependency>
         <dependency>
@@ -2160,6 +2262,9 @@
                     <exclude>com.sun.jersey.jersey-test-framework:*</exclude>
                     <exclude>com.google.inject:guice</exclude>
                     <exclude>org.ow2.asm:asm</exclude>
+
+                    <exclude>org.slf4j:slf4j-log4j12</exclude>
+                    <exclude>log4j:log4j</exclude>
                   </excludes>
                   <includes>
                     <!-- for JDK 8 support -->
diff --git a/hadoop-tools/hadoop-azure/pom.xml b/hadoop-tools/hadoop-azure/pom.xml
index c8c5cc37742..6eb7f98c4d6 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -245,8 +245,8 @@
     </dependency>
 
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>test</scope>
     </dependency>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
index 387d4a97417..cb2a32d70bf 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
@@ -46,8 +46,8 @@
     </dependency>
 
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop.thirdparty</groupId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml
index 02b6b7124dc..fb8cc764f98 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/pom.xml
@@ -118,8 +118,8 @@
     </dependency>
 
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
       <scope>runtime</scope>
     </dependency>
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
index 368a0251aed..6977afde460 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
@@ -47,8 +47,8 @@
       <artifactId>commons-cli</artifactId>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>org.eclipse.jetty.websocket</groupId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
index 63ed238ed27..77c493d3ca6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
@@ -164,8 +164,8 @@
       <artifactId>jersey-guice</artifactId>
     </dependency>
     <dependency>
-     <groupId>log4j</groupId>
-     <artifactId>log4j</artifactId>
+     <groupId>ch.qos.reload4j</groupId>
+     <artifactId>reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>com.fasterxml.jackson.core</groupId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
index 68574b45703..40e5b7a0f04 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
@@ -160,8 +160,8 @@
       <artifactId>hadoop-shaded-guava</artifactId>
     </dependency>
     <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
+      <groupId>ch.qos.reload4j</groupId>
+      <artifactId>reload4j</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>


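Since reload4j is a drop-in fork that keeps the org.apache.log4j
package, a build assembled from these poms can sanity-check which jar
now provides the log4j 1.x API, much as the TestClassUtil assertion
above does. A minimal, hypothetical check:

    import org.apache.log4j.Logger;

    public class WhichLog4j {
      public static void main(String[] args) {
        // On a correctly assembled classpath this should name a
        // reload4j-*.jar rather than a log4j-*.jar.
        String jar = Logger.class.getProtectionDomain()
            .getCodeSource().getLocation().toString();
        System.out.println("org.apache.log4j.Logger loaded from: " + jar);
      }
    }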

[hadoop] 13/16: HDFS-16355. Improve the description of dfs.block.scanner.volume.bytes.per.second (#3724)


commit 63c07519de10234a481f4298049806f35c95a06e
Author: GuoPhilipse <46...@users.noreply.github.com>
AuthorDate: Sun Mar 27 21:23:48 2022 +0800

    HDFS-16355. Improve the description of dfs.block.scanner.volume.bytes.per.second (#3724)
    
    Co-authored-by: gf13871 <gf...@ly.com>
    Signed-off-by: Akira Ajisaka <aa...@apache.org>
    (cherry picked from commit 046a6204b4a895b98ccd41dde1c9524a6bb0ea31)
    
    Change-Id: I2cae5d1c27a492d896da5338a92c7a86f88a8b43
---
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml      |  2 +-
 .../hadoop/hdfs/server/datanode/TestBlockScanner.java    | 16 +++++++++++-----
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 80c481886d7..78af86b0a3c 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -1591,7 +1591,7 @@
   <name>dfs.block.scanner.volume.bytes.per.second</name>
   <value>1048576</value>
   <description>
-        If this is 0, the DataNode's block scanner will be disabled.  If this
+        If this is configured less than or equal to zero, the DataNode's block scanner will be disabled.  If this
         is positive, this is the number of bytes per second that the DataNode's
         block scanner will try to scan from each volume.
   </description>
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
index c74785923a7..2086e15348e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
@@ -282,11 +282,17 @@ public class TestBlockScanner {
   public void testDisableVolumeScanner() throws Exception {
     Configuration conf = new Configuration();
     disableBlockScanner(conf);
-    TestContext ctx = new TestContext(conf, 1);
-    try {
-      Assert.assertFalse(ctx.datanode.getBlockScanner().isEnabled());
-    } finally {
-      ctx.close();
+    try(TestContext ctx = new TestContext(conf, 1)) {
+      assertFalse(ctx.datanode.getBlockScanner().isEnabled());
+    }
+  }
+
+  @Test(timeout=60000)
+  public void testDisableVolumeScanner2() throws Exception {
+    Configuration conf = new Configuration();
+    conf.setLong(DFS_BLOCK_SCANNER_VOLUME_BYTES_PER_SECOND, -1L);
+    try(TestContext ctx = new TestContext(conf, 1)) {
+      assertFalse(ctx.datanode.getBlockScanner().isEnabled());
     }
   }
 

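The reworded description matches what the scanner code already does,
and the new testDisableVolumeScanner2 pins that behaviour down for -1:
any non-positive value disables scanning. A hypothetical helper
expressing the implied check (key and default taken from the
hdfs-default.xml entry above):

    import org.apache.hadoop.conf.Configuration;

    final class ScannerConfigSketch {
      // The scanner runs only for a strictly positive
      // bytes-per-second value.
      static boolean volumeScannerEnabled(Configuration conf) {
        return conf.getLong("dfs.block.scanner.volume.bytes.per.second",
            1048576L) > 0;
      }
    }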


[hadoop] 11/16: HDFS-16501. Print the exception when reporting a bad block (#4062)


commit 376904e4229b6838c14915468f0dbff5725d25f3
Author: qinyuren <14...@qq.com>
AuthorDate: Wed Mar 23 14:03:17 2022 +0800

    HDFS-16501. Print the exception when reporting a bad block (#4062)
    
    Reviewed-by: tomscut <li...@bigo.sg>
    (cherry picked from commit 45ce1cce50c3ff65676d946e96bbc7846ad3131a)
---
 .../main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
index 0367b4a7aa3..2c666a38317 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
@@ -293,7 +293,7 @@ public class VolumeScanner extends Thread {
             volume, block);
         return;
       }
-      LOG.warn("Reporting bad {} on {}", block, volume);
+      LOG.warn("Reporting bad {} on {}", block, volume, e);
       scanner.datanode.handleBadBlock(block, e, true);
     }
   }

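The one-line change leans on standard SLF4J behaviour: a Throwable
passed as the last argument is logged with its full stack trace, while
the preceding arguments still fill the {} placeholders. A small
self-contained demonstration (the block and volume strings are made
up):

    import java.io.IOException;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ThrowableLastArg {
      private static final Logger LOG =
          LoggerFactory.getLogger(ThrowableLastArg.class);

      public static void main(String[] args) {
        IOException e = new IOException("checksum mismatch");
        // Two placeholders, three arguments: the trailing exception
        // is printed with its stack trace instead of being swallowed.
        LOG.warn("Reporting bad {} on {}", "blk_1073741825", "/data/vol1", e);
      }
    }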


[hadoop] 05/16: HADOOP-18125. Utility to identify git commit / Jira fixVersion discrepancies for RC preparation (#3991)


commit c6b9fcfd6c5f81d020cc8951aae3e1cc0ae376cf
Author: Viraj Jasani <vj...@apache.org>
AuthorDate: Tue Feb 22 08:30:38 2022 +0530

    HADOOP-18125. Utility to identify git commit / Jira fixVersion discrepancies for RC preparation (#3991)
    
    Signed-off-by: Wei-Chiu Chuang <we...@apache.org>
    (cherry picked from commit 697e5d463640a7107a622262eb2d333d0458fd8b)
---
 dev-support/git-jira-validation/README.md          | 134 +++++++++++++++++++++
 .../git_jira_fix_version_check.py                  | 118 ++++++++++++++++++
 dev-support/git-jira-validation/requirements.txt   |  18 +++
 3 files changed, 270 insertions(+)

diff --git a/dev-support/git-jira-validation/README.md b/dev-support/git-jira-validation/README.md
new file mode 100644
index 00000000000..308c54228d1
--- /dev/null
+++ b/dev-support/git-jira-validation/README.md
@@ -0,0 +1,134 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+Apache Hadoop Git/Jira FixVersion validation
+============================================================
+
+Git commits in Apache Hadoop contain a Jira number in the format
+HADOOP-XXXX, HDFS-XXXX, YARN-XXXX, or MAPREDUCE-XXXX.
+While creating a release candidate, we also include a changelist,
+which can be identified from the Fixed/Closed Jiras that carry the
+correct fix versions. However, we sometimes find a few
+inconsistencies between a fixed Jira and its Git commit message.
+
+The git_jira_fix_version_check.py script takes care of
+identifying all git commits whose commit
+messages have any of these issues:
+
+1. the commit is reverted, as per its commit message
+2. the commit message does not contain a Jira number
+3. the Jira does not have the expected fixVersion
+4. the Jira has the expected fixVersion, but it is not yet resolved
+
+Moreover, the script also finds any resolved Jira with the expected
+fixVersion but no corresponding commit present.
+
+This should be useful as part of RC preparation.
+
+git_jira_fix_version_check.py supports python3 and requires the
+jira package to be installed:
+
+```
+$ python3 --version
+Python 3.9.7
+
+$ python3 -m venv ./venv
+
+$ ./venv/bin/pip install -r dev-support/git-jira-validation/requirements.txt
+
+$ ./venv/bin/python dev-support/git-jira-validation/git_jira_fix_version_check.py
+
+```
+
+The script also requires the inputs below:
+```
+1. First commit hash to start excluding commits from history:
+   Usually we can provide the latest commit hash from the last tagged
+   release, so that the script only loops through commits in the git
+   history before this hash. e.g. for the 3.3.2 release, we can provide
+   git hash: fa4915fdbbbec434ab41786cb17b82938a613f16
+   because this commit bumps up the hadoop pom versions to 3.3.2:
+   https://github.com/apache/hadoop/commit/fa4915fdbbbec434ab41786cb17b82938a613f16
+
+2. Fix Version:
+   The exact fixVersion that we would like to compare all Jiras'
+   fixVersions with. e.g. for the 3.3.2 release, it should be 3.3.2.
+
+3. JIRA Project Name:
+   The exact, case-sensitive name of the project, e.g. HADOOP / OZONE
+
+4. Path of project's working dir with release branch checked-in:
+   Path of the project from which we want to compare git hashes. The
+   local fork of the project should be up to date with upstream, and
+   the expected release branch should be checked out.
+
+5. Jira server url (default url: https://issues.apache.org/jira):
+   The default server value points to the ASF Jira, but this script
+   can be used outside of the ASF Jira too.
+```
+
+
+Example of script execution:
+```
+JIRA Project Name (e.g HADOOP / OZONE etc): HADOOP
+First commit hash to start excluding commits from history: fa4915fdbbbec434ab41786cb17b82938a613f16
+Fix Version: 3.3.2
+Jira server url (default: https://issues.apache.org/jira):
+Path of project's working dir with release branch checked-in: /Users/vjasani/Documents/src/hadoop-3.3/hadoop
+
+Check git status output and verify expected branch
+
+On branch branch-3.3.2
+Your branch is up to date with 'origin/branch-3.3.2'.
+
+nothing to commit, working tree clean
+
+
+Jira/Git commit message diff starting: ##############################################
+Jira not present with version: 3.3.2. 	 Commit: 8cd8e435fb43a251467ca74fadcb14f21a3e8163 HADOOP-17198. Support S3 Access Points  (#3260) (branch-3.3.2) (#3955)
+WARN: Jira not found. 			 Commit: 8af28b7cca5c6020de94e739e5373afc69f399e5 Updated the index as per 3.3.2 release
+WARN: Jira not found. 			 Commit: e42e483d0085aa46543ebcb1196dd155ddb447d0 Make upstream aware of 3.3.1 release
+Commit seems reverted. 			 Commit: 6db1165380cd308fb74c9d17a35c1e57174d1e09 Revert "HDFS-14099. Unknown frame descriptor when decompressing multiple frames (#3836)"
+Commit seems reverted. 			 Commit: 1e3f94fa3c3d4a951d4f7438bc13e6f008f228f4 Revert "HDFS-16333. fix balancer bug when transfer an EC block (#3679)"
+Jira not present with version: 3.3.2. 	 Commit: ce0bc7b473a62a580c1227a4de6b10b64b045d3a HDFS-16344. Improve DirectoryScanner.Stats#toString (#3695)
+Jira not present with version: 3.3.2. 	 Commit: 30f0629d6e6f735c9f4808022f1a1827c5531f75 HDFS-16339. Show the threshold when mover threads quota is exceeded (#3689)
+Jira not present with version: 3.3.2. 	 Commit: e449daccf486219e3050254d667b74f92e8fc476 YARN-11007. Correct words in YARN documents (#3680)
+Commit seems reverted. 			 Commit: 5c189797828e60a3329fd920ecfb99bcbccfd82d Revert "HDFS-16336. Addendum: De-flake TestRollingUpgrade#testRollback (#3686)"
+Jira not present with version: 3.3.2. 	 Commit: 544dffd179ed756bc163e4899e899a05b93d9234 HDFS-16171. De-flake testDecommissionStatus (#3280)
+Jira not present with version: 3.3.2. 	 Commit: c6914b1cb6e4cab8263cd3ae5cc00bc7a8de25de HDFS-16350. Datanode start time should be set after RPC server starts successfully (#3711)
+Jira not present with version: 3.3.2. 	 Commit: 328d3b84dfda9399021ccd1e3b7afd707e98912d HDFS-16336. Addendum: De-flake TestRollingUpgrade#testRollback (#3686)
+Jira not present with version: 3.3.2. 	 Commit: 3ae8d4ccb911c9ababd871824a2fafbb0272c016 HDFS-16336. De-flake TestRollingUpgrade#testRollback (#3686)
+Jira not present with version: 3.3.2. 	 Commit: 15d3448e25c797b7d0d401afdec54683055d4bb5 HADOOP-17975. Fallback to simple auth does not work for a secondary DistributedFileSystem instance. (#3579)
+Jira not present with version: 3.3.2. 	 Commit: dd50261219de71eaa0a1ad28529953e12dfb92e0 YARN-10991. Fix to ignore the grouping "[]" for resourcesStr in parseResourcesString method (#3592)
+Jira not present with version: 3.3.2. 	 Commit: ef462b21bf03b10361d2f9ea7b47d0f7360e517f HDFS-16332. Handle invalid token exception in sasl handshake (#3677)
+WARN: Jira not found. 			 Commit: b55edde7071419410ea5bea4ce6462b980e48f5b Also update hadoop.version to 3.3.2
+...
+...
+...
+Found first commit hash after which git history is redundant. commit: fa4915fdbbbec434ab41786cb17b82938a613f16
+Exiting successfully
+Jira/Git commit message diff completed: ##############################################
+
+Any resolved Jira with fixVersion 3.3.2 but corresponding commit not present
+Starting diff: ##############################################
+HADOOP-18066 is marked resolved with fixVersion 3.3.2 but no corresponding commit found
+HADOOP-17936 is marked resolved with fixVersion 3.3.2 but no corresponding commit found
+Completed diff: ##############################################
+
+
+```
+
diff --git a/dev-support/git-jira-validation/git_jira_fix_version_check.py b/dev-support/git-jira-validation/git_jira_fix_version_check.py
new file mode 100644
index 00000000000..c2e12a13aae
--- /dev/null
+++ b/dev-support/git-jira-validation/git_jira_fix_version_check.py
@@ -0,0 +1,118 @@
+#!/usr/bin/env python3
+############################################################################
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+############################################################################
+"""An application to assist Release Managers with ensuring that histories in
+Git and fixVersions in JIRA are in agreement. See README.md for a detailed
+explanation.
+"""
+
+
+import os
+import re
+import subprocess
+
+from jira import JIRA
+
+jira_project_name = input("JIRA Project Name (e.g HADOOP / OZONE etc): ") \
+                    or "HADOOP"
+# Define project_jira_keys with - appended. e.g for HADOOP Jiras,
+# project_jira_keys should include HADOOP-, HDFS-, YARN-, MAPREDUCE-
+project_jira_keys = [jira_project_name + '-']
+if jira_project_name == 'HADOOP':
+    project_jira_keys.append('HDFS-')
+    project_jira_keys.append('YARN-')
+    project_jira_keys.append('MAPREDUCE-')
+
+first_exclude_commit_hash = input("First commit hash to start excluding commits from history: ")
+fix_version = input("Fix Version: ")
+
+jira_server_url = input(
+    "Jira server url (default: https://issues.apache.org/jira): ") \
+        or "https://issues.apache.org/jira"
+
+jira = JIRA(server=jira_server_url)
+
+local_project_dir = input("Path of project's working dir with release branch checked-in: ")
+os.chdir(local_project_dir)
+
+GIT_STATUS_MSG = subprocess.check_output(['git', 'status']).decode("utf-8")
+print('\nCheck git status output and verify expected branch\n')
+print(GIT_STATUS_MSG)
+
+print('\nJira/Git commit message diff starting: ##############################################')
+
+issue_set_from_commit_msg = set()
+
+for commit in subprocess.check_output(['git', 'log', '--pretty=oneline']).decode(
+        "utf-8").splitlines():
+    if commit.startswith(first_exclude_commit_hash):
+        print("Found first commit hash after which git history is redundant. commit: "
+              + first_exclude_commit_hash)
+        print("Exiting successfully")
+        break
+    if re.search('revert', commit, re.IGNORECASE):
+        print("Commit seems reverted. \t\t\t Commit: " + commit)
+        continue
+    ACTUAL_PROJECT_JIRA = None
+    for project_jira in project_jira_keys:
+        if project_jira in commit:
+            ACTUAL_PROJECT_JIRA = project_jira
+            break
+    if not ACTUAL_PROJECT_JIRA:
+        print("WARN: Jira not found. \t\t\t Commit: " + commit)
+        continue
+    JIRA_NUM = ''
+    for c in commit.split(ACTUAL_PROJECT_JIRA)[1]:
+        if c.isdigit():
+            JIRA_NUM = JIRA_NUM + c
+        else:
+            break
+    issue = jira.issue(ACTUAL_PROJECT_JIRA + JIRA_NUM)
+    EXPECTED_FIX_VERSION = False
+    for version in issue.fields.fixVersions:
+        if version.name == fix_version:
+            EXPECTED_FIX_VERSION = True
+            break
+    if not EXPECTED_FIX_VERSION:
+        print("Jira not present with version: " + fix_version + ". \t Commit: " + commit)
+        continue
+    if issue.fields.status is None or issue.fields.status.name not in ('Resolved', 'Closed'):
+        print("Jira is not resolved yet? \t\t Commit: " + commit)
+    else:
+        # This means Jira corresponding to current commit message is resolved with expected
+        # fixVersion.
+        # This is no-op by default, if needed, convert to print statement.
+        issue_set_from_commit_msg.add(ACTUAL_PROJECT_JIRA + JIRA_NUM)
+
+print('Jira/Git commit message diff completed: ##############################################')
+
+print('\nAny resolved Jira with fixVersion ' + fix_version
+      + ' but corresponding commit not present')
+print('Starting diff: ##############################################')
+all_issues_with_fix_version = jira.search_issues(
+    'project=' + jira_project_name + ' and status in (Resolved,Closed) and fixVersion='
+    + fix_version)
+
+for issue in all_issues_with_fix_version:
+    if issue.key not in issue_set_from_commit_msg:
+        print(issue.key + ' is marked resolved with fixVersion ' + fix_version
+            + ' but no corresponding commit found')
+
+print('Completed diff: ##############################################')
diff --git a/dev-support/git-jira-validation/requirements.txt b/dev-support/git-jira-validation/requirements.txt
new file mode 100644
index 00000000000..ae7535a119f
--- /dev/null
+++ b/dev-support/git-jira-validation/requirements.txt
@@ -0,0 +1,18 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+jira==3.1.1

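The Jira side of the report can be spot-checked by hand with the same pinned client. A minimal sketch of the script's closing query (assumes network access to the ASF Jira; project and version here are examples):

```python
from jira import JIRA

# Sketch of the script's final check with jira==3.1.1: list Resolved/Closed
# issues of a project that carry a given fixVersion.
jira = JIRA(server="https://issues.apache.org/jira")
for issue in jira.search_issues(
        "project=HADOOP and status in (Resolved,Closed) and fixVersion=3.3.2"):
    print(issue.key, issue.fields.summary)
```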



[hadoop] 08/16: YARN-11014. YARN incorrectly validates maximum capacity resources on the validation API. Contributed by Benjamin Teke

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit cb40b7d74142a0094eb23d3709e543b75d0baa85
Author: Szilard Nemeth <sn...@apache.org>
AuthorDate: Wed Mar 2 14:23:00 2022 +0100

    YARN-11014. YARN incorrectly validates maximum capacity resources on the validation API. Contributed by Benjamin Teke
    
    Change-Id: I5505e1b8aaa394dfac31dade7aed6013e0279adc
---
 .../scheduler/capacity/CapacityScheduler.java      |  16 ++
 .../capacity/CapacitySchedulerConfigValidator.java |   2 +
 .../TestCapacitySchedulerConfigValidator.java      | 270 ++++++++++++++++++++-
 3 files changed, 284 insertions(+), 4 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 69e775f84e5..d0d95c388a6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -2056,6 +2056,22 @@ public class CapacityScheduler extends
     }
   }
 
+  /**
+   * Add nodes to the nodeTracker. Used when validating a CS configuration
+   * by instantiating a new CS instance.
+   * @param nodesToAdd nodes to be added
+   */
+  public void addNodes(List<FiCaSchedulerNode> nodesToAdd) {
+    writeLock.lock();
+    try {
+      for (FiCaSchedulerNode node : nodesToAdd) {
+        nodeTracker.addNode(node);
+      }
+    } finally {
+      writeLock.unlock();
+    }
+  }
+
   private void addNode(RMNode nodeManager) {
     writeLock.lock();
     try {
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java
index c3b4df4efdf..d180ffb64ba 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java
@@ -42,6 +42,7 @@ public final class CapacitySchedulerConfigValidator {
   public static boolean validateCSConfiguration(
           final Configuration oldConf, final Configuration newConf,
           final RMContext rmContext) throws IOException {
+    CapacityScheduler liveScheduler = (CapacityScheduler) rmContext.getScheduler();
     CapacityScheduler newCs = new CapacityScheduler();
     try {
       //TODO: extract all the validation steps and replace reinitialize with
@@ -49,6 +50,7 @@ public final class CapacitySchedulerConfigValidator {
       newCs.setConf(oldConf);
       newCs.setRMContext(rmContext);
       newCs.init(oldConf);
+      newCs.addNodes(liveScheduler.getAllNodes());
       newCs.reinitialize(newConf, rmContext, true);
       return true;
     } finally {
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerConfigValidator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerConfigValidator.java
index 04f4349db1d..ad114d901cf 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerConfigValidator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerConfigValidator.java
@@ -19,13 +19,23 @@
 package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap;
 import org.apache.hadoop.yarn.LocalConfigurationProvider;
+import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.ResourceInformation;
 import org.apache.hadoop.yarn.api.records.impl.LightWeightResource;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
+import org.apache.hadoop.yarn.server.resourcemanager.MockNM;
+import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
 import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.util.YarnVersionInfo;
+import org.apache.hadoop.yarn.util.resource.DominantResourceCalculator;
+import org.apache.hadoop.yarn.util.resource.ResourceUtils;
 import org.junit.Assert;
 import org.junit.Test;
 import org.mockito.Mockito;
@@ -34,9 +44,71 @@ import java.io.IOException;
 import java.util.HashMap;
 import java.util.Map;
 
+import static org.apache.hadoop.yarn.api.records.ResourceInformation.GPU_URI;
 import static org.junit.Assert.fail;
 
 public class TestCapacitySchedulerConfigValidator {
+  public static final int NODE_MEMORY = 16;
+  public static final int NODE1_VCORES = 8;
+  public static final int NODE2_VCORES = 10;
+  public static final int NODE3_VCORES = 12;
+  public static final Map<String, Long> NODE_GPU = ImmutableMap.of(GPU_URI, 2L);
+  public static final int GB = 1024;
+
+  private static final String PARENT_A = "parentA";
+  private static final String PARENT_B = "parentB";
+  private static final String LEAF_A = "leafA";
+  private static final String LEAF_B = "leafB";
+
+  private static final String PARENT_A_FULL_PATH = CapacitySchedulerConfiguration.ROOT
+      + "." + PARENT_A;
+  private static final String LEAF_A_FULL_PATH = PARENT_A_FULL_PATH
+      + "." + LEAF_A;
+  private static final String PARENT_B_FULL_PATH = CapacitySchedulerConfiguration.ROOT
+      + "." + PARENT_B;
+  private static final String LEAF_B_FULL_PATH = PARENT_B_FULL_PATH
+      + "." + LEAF_B;
+
+  private final Resource A_MINRES = Resource.newInstance(16 * GB, 10);
+  private final Resource B_MINRES = Resource.newInstance(32 * GB, 5);
+  private final Resource FULL_MAXRES = Resource.newInstance(48 * GB, 30);
+  private final Resource PARTIAL_MAXRES = Resource.newInstance(16 * GB, 10);
+  private final Resource VCORE_EXCEEDED_MAXRES = Resource.newInstance(16 * GB, 50);
+  private Resource A_MINRES_GPU;
+  private Resource B_MINRES_GPU;
+  private Resource FULL_MAXRES_GPU;
+  private Resource PARTIAL_MAXRES_GPU;
+  private Resource GPU_EXCEEDED_MAXRES_GPU;
+
+  protected MockRM mockRM = null;
+  protected MockNM nm1 = null;
+  protected MockNM nm2 = null;
+  protected MockNM nm3 = null;
+  protected CapacityScheduler cs;
+
+  public static void setupResources(boolean useGpu) {
+    Map<String, ResourceInformation> riMap = new HashMap<>();
+
+    ResourceInformation memory = ResourceInformation.newInstance(
+        ResourceInformation.MEMORY_MB.getName(),
+        ResourceInformation.MEMORY_MB.getUnits(),
+        YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB,
+        YarnConfiguration.DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_MB);
+    ResourceInformation vcores = ResourceInformation.newInstance(
+        ResourceInformation.VCORES.getName(),
+        ResourceInformation.VCORES.getUnits(),
+        YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES,
+        YarnConfiguration.DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES);
+    riMap.put(ResourceInformation.MEMORY_URI, memory);
+    riMap.put(ResourceInformation.VCORES_URI, vcores);
+    if (useGpu) {
+      riMap.put(ResourceInformation.GPU_URI,
+          ResourceInformation.newInstance(ResourceInformation.GPU_URI, "", 0,
+              ResourceTypes.COUNTABLE, 0, 10L));
+    }
+
+    ResourceUtils.initializeResourcesFromResourceInformationMap(riMap);
+  }
 
   /**
    * Test for the case when the scheduler.minimum-allocation-mb == 0.
@@ -69,7 +141,6 @@ public class TestCapacitySchedulerConfigValidator {
 
   }
 
-
   @Test
   public void testValidateMemoryAllocation() {
     Map<String, String> configs = new HashMap();
@@ -115,7 +186,6 @@ public class TestCapacitySchedulerConfigValidator {
 
   }
 
-
   @Test
   public void testValidateVCores() {
     Map<String, String> configs = new HashMap();
@@ -147,6 +217,106 @@ public class TestCapacitySchedulerConfigValidator {
     }
   }
 
+  @Test
+  public void testValidateCSConfigDefaultRCAbsoluteModeParentMaxMemoryExceeded()
+      throws Exception {
+    setUpMockRM(false);
+    RMContext rmContext = mockRM.getRMContext();
+    CapacitySchedulerConfiguration oldConfiguration = cs.getConfiguration();
+    CapacitySchedulerConfiguration newConfiguration =
+        new CapacitySchedulerConfiguration(cs.getConfiguration());
+    newConfiguration.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, FULL_MAXRES);
+    try {
+      CapacitySchedulerConfigValidator
+          .validateCSConfiguration(oldConfiguration, newConfiguration, rmContext);
+      fail("Parent maximum capacity exceeded");
+    } catch (IOException e) {
+      Assert.assertTrue(e.getCause().getMessage()
+          .startsWith("Max resource configuration"));
+    } finally {
+      mockRM.stop();
+    }
+  }
+
+  @Test
+  public void testValidateCSConfigDefaultRCAbsoluteModeParentMaxVcoreExceeded() throws Exception {
+    setUpMockRM(false);
+    RMContext rmContext = mockRM.getRMContext();
+    CapacitySchedulerConfiguration oldConfiguration = cs.getConfiguration();
+    CapacitySchedulerConfiguration newConfiguration =
+        new CapacitySchedulerConfiguration(cs.getConfiguration());
+    newConfiguration.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, VCORE_EXCEEDED_MAXRES);
+    try {
+      CapacitySchedulerConfigValidator
+          .validateCSConfiguration(oldConfiguration, newConfiguration, rmContext);
+    } catch (IOException e) {
+      fail("In DefaultResourceCalculator vcore limits are not enforced");
+    } finally {
+      mockRM.stop();
+    }
+  }
+
+  @Test
+  public void testValidateCSConfigDominantRCAbsoluteModeParentMaxMemoryExceeded()
+      throws Exception {
+    setUpMockRM(true);
+    RMContext rmContext = mockRM.getRMContext();
+    CapacitySchedulerConfiguration oldConfiguration = cs.getConfiguration();
+    CapacitySchedulerConfiguration newConfiguration =
+        new CapacitySchedulerConfiguration(cs.getConfiguration());
+    newConfiguration.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, FULL_MAXRES);
+    try {
+      CapacitySchedulerConfigValidator
+          .validateCSConfiguration(oldConfiguration, newConfiguration, rmContext);
+      fail("Parent maximum capacity exceeded");
+    } catch (IOException e) {
+      Assert.assertTrue(e.getCause().getMessage()
+          .startsWith("Max resource configuration"));
+    } finally {
+      mockRM.stop();
+    }
+  }
+
+  @Test
+  public void testValidateCSConfigDominantRCAbsoluteModeParentMaxVcoreExceeded() throws Exception {
+    setUpMockRM(true);
+    RMContext rmContext = mockRM.getRMContext();
+    CapacitySchedulerConfiguration oldConfiguration = cs.getConfiguration();
+    CapacitySchedulerConfiguration newConfiguration =
+        new CapacitySchedulerConfiguration(cs.getConfiguration());
+    newConfiguration.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, VCORE_EXCEEDED_MAXRES);
+    try {
+      CapacitySchedulerConfigValidator
+          .validateCSConfiguration(oldConfiguration, newConfiguration, rmContext);
+      fail("Parent maximum capacity exceeded");
+    } catch (IOException e) {
+      Assert.assertTrue(e.getCause().getMessage()
+          .startsWith("Max resource configuration"));
+    } finally {
+      mockRM.stop();
+    }
+  }
+
+  @Test
+  public void testValidateCSConfigDominantRCAbsoluteModeParentMaxGPUExceeded() throws Exception {
+    setUpMockRM(true);
+    RMContext rmContext = mockRM.getRMContext();
+    CapacitySchedulerConfiguration oldConfiguration = cs.getConfiguration();
+    CapacitySchedulerConfiguration newConfiguration =
+        new CapacitySchedulerConfiguration(cs.getConfiguration());
+    newConfiguration.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, GPU_EXCEEDED_MAXRES_GPU);
+    try {
+      CapacitySchedulerConfigValidator
+          .validateCSConfiguration(oldConfiguration, newConfiguration, rmContext);
+      fail("Parent maximum capacity exceeded");
+    } catch (IOException e) {
+      Assert.assertTrue(e.getCause().getMessage()
+          .startsWith("Max resource configuration"));
+    } finally {
+      mockRM.stop();
+    }
+  }
+
   @Test
   public void testValidateCSConfigStopALeafQueue() throws IOException {
     Configuration oldConfig = CapacitySchedulerConfigGeneratorForTest
@@ -155,7 +325,7 @@ public class TestCapacitySchedulerConfigValidator {
     newConfig
             .set("yarn.scheduler.capacity.root.test1.state", "STOPPED");
     RMContext rmContext = prepareRMContext();
-    Boolean isValidConfig = CapacitySchedulerConfigValidator
+    boolean isValidConfig = CapacitySchedulerConfigValidator
             .validateCSConfiguration(oldConfig, newConfig, rmContext);
     Assert.assertTrue(isValidConfig);
   }
@@ -340,9 +510,11 @@ public class TestCapacitySchedulerConfigValidator {
     Assert.assertTrue(isValidConfig);
   }
 
-
   public static RMContext prepareRMContext() {
+    setupResources(false);
     RMContext rmContext = Mockito.mock(RMContext.class);
+    CapacityScheduler mockCs = Mockito.mock(CapacityScheduler.class);
+    Mockito.when(rmContext.getScheduler()).thenReturn(mockCs);
     LocalConfigurationProvider configProvider = Mockito
             .mock(LocalConfigurationProvider.class);
     Mockito.when(rmContext.getConfigurationProvider())
@@ -361,4 +533,94 @@ public class TestCapacitySchedulerConfigValidator {
             .thenReturn(queuePlacementManager);
     return rmContext;
   }
+
+  private void setUpMockRM(boolean useDominantRC) throws Exception {
+    YarnConfiguration conf = new YarnConfiguration();
+    conf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
+        ResourceScheduler.class);
+    setupResources(useDominantRC);
+    CapacitySchedulerConfiguration csConf = setupCSConfiguration(conf, useDominantRC);
+
+    mockRM = new MockRM(csConf);
+
+    cs = (CapacityScheduler) mockRM.getResourceScheduler();
+    mockRM.start();
+    cs.start();
+
+    setupNodes(mockRM);
+  }
+
+  private void setupNodes(MockRM newMockRM) throws Exception {
+      nm1 = new MockNM("h1:1234",
+          Resource.newInstance(NODE_MEMORY * GB, NODE1_VCORES, NODE_GPU),
+          newMockRM.getResourceTrackerService(),
+          YarnVersionInfo.getVersion());
+
+      nm1.registerNode();
+
+      nm2 = new MockNM("h2:1234",
+          Resource.newInstance(NODE_MEMORY * GB, NODE2_VCORES, NODE_GPU),
+          newMockRM.getResourceTrackerService(),
+          YarnVersionInfo.getVersion());
+      nm2.registerNode();
+
+      nm3 = new MockNM("h3:1234",
+          Resource.newInstance(NODE_MEMORY * GB, NODE3_VCORES, NODE_GPU),
+          newMockRM.getResourceTrackerService(),
+          YarnVersionInfo.getVersion());
+      nm3.registerNode();
+  }
+
+  private void setupGpuResourceValues() {
+    A_MINRES_GPU = Resource.newInstance(A_MINRES.getMemorySize(), A_MINRES.getVirtualCores(),
+        ImmutableMap.of(GPU_URI, 2L));
+    B_MINRES_GPU =  Resource.newInstance(B_MINRES.getMemorySize(), B_MINRES.getVirtualCores(),
+        ImmutableMap.of(GPU_URI, 2L));
+    FULL_MAXRES_GPU = Resource.newInstance(FULL_MAXRES.getMemorySize(),
+        FULL_MAXRES.getVirtualCores(), ImmutableMap.of(GPU_URI, 6L));
+    PARTIAL_MAXRES_GPU = Resource.newInstance(PARTIAL_MAXRES.getMemorySize(),
+        PARTIAL_MAXRES.getVirtualCores(), ImmutableMap.of(GPU_URI, 4L));
+    GPU_EXCEEDED_MAXRES_GPU = Resource.newInstance(PARTIAL_MAXRES.getMemorySize(),
+        PARTIAL_MAXRES.getVirtualCores(), ImmutableMap.of(GPU_URI, 50L));
+  }
+
+  private CapacitySchedulerConfiguration setupCSConfiguration(YarnConfiguration configuration,
+                                                              boolean useDominantRC) {
+    CapacitySchedulerConfiguration csConf = new CapacitySchedulerConfiguration(configuration);
+    if (useDominantRC) {
+      csConf.set(CapacitySchedulerConfiguration.RESOURCE_CALCULATOR_CLASS,
+          DominantResourceCalculator.class.getName());
+      csConf.set(YarnConfiguration.RESOURCE_TYPES, ResourceInformation.GPU_URI);
+    }
+
+    csConf.setQueues(CapacitySchedulerConfiguration.ROOT,
+        new String[]{PARENT_A, PARENT_B});
+    csConf.setQueues(PARENT_A_FULL_PATH, new String[]{LEAF_A});
+    csConf.setQueues(PARENT_B_FULL_PATH, new String[]{LEAF_B});
+
+    if (useDominantRC) {
+      setupGpuResourceValues();
+      csConf.setMinimumResourceRequirement("", PARENT_A_FULL_PATH, A_MINRES_GPU);
+      csConf.setMinimumResourceRequirement("", PARENT_B_FULL_PATH, B_MINRES_GPU);
+      csConf.setMinimumResourceRequirement("", LEAF_A_FULL_PATH, A_MINRES_GPU);
+      csConf.setMinimumResourceRequirement("", LEAF_B_FULL_PATH, B_MINRES_GPU);
+
+      csConf.setMaximumResourceRequirement("", PARENT_A_FULL_PATH, PARTIAL_MAXRES_GPU);
+      csConf.setMaximumResourceRequirement("", PARENT_B_FULL_PATH, FULL_MAXRES_GPU);
+      csConf.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, PARTIAL_MAXRES_GPU);
+      csConf.setMaximumResourceRequirement("", LEAF_B_FULL_PATH, FULL_MAXRES_GPU);
+    } else {
+      csConf.setMinimumResourceRequirement("", PARENT_A_FULL_PATH, A_MINRES);
+      csConf.setMinimumResourceRequirement("", PARENT_B_FULL_PATH, B_MINRES);
+      csConf.setMinimumResourceRequirement("", LEAF_A_FULL_PATH, A_MINRES);
+      csConf.setMinimumResourceRequirement("", LEAF_B_FULL_PATH, B_MINRES);
+
+      csConf.setMaximumResourceRequirement("", PARENT_A_FULL_PATH, PARTIAL_MAXRES);
+      csConf.setMaximumResourceRequirement("", PARENT_B_FULL_PATH, FULL_MAXRES);
+      csConf.setMaximumResourceRequirement("", LEAF_A_FULL_PATH, PARTIAL_MAXRES);
+      csConf.setMaximumResourceRequirement("", LEAF_B_FULL_PATH, FULL_MAXRES);
+    }
+
+    return csConf;
+  }
 }

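The fix matters because the throwaway scheduler built for validation previously had an empty nodeTracker, so absolute maximum-capacity checks compared against zero cluster resources. Seeding it with the live scheduler's nodes makes the checks meaningful. A minimal sketch of the entry point, using only names from the diff (obtaining the configs and RMContext from a running ResourceManager is assumed):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfigValidator;

public class CsConfigValidationSketch {
  // Sketch only: with the fix, validateCSConfiguration seeds its internal
  // scheduler via addNodes() with the live nodes before reinitialize()
  // runs the capacity checks.
  static boolean isValid(Configuration oldConf, Configuration newConf,
      RMContext rmContext) throws IOException {
    return CapacitySchedulerConfigValidator
        .validateCSConfiguration(oldConf, newConf, rmContext);
  }
}
```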



[hadoop] 15/16: HDFS-16507. [SBN read] Avoid purging edit log which is in progress (#4082)

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d5845474e2512ccd0d837823b1d17dec6c36c730
Author: litao <to...@gmail.com>
AuthorDate: Thu Mar 31 14:01:48 2022 +0800

    HDFS-16507. [SBN read] Avoid purging edit log which is in progress (#4082)
---
 .../org/apache/hadoop/hdfs/server/namenode/FSEditLog.java     | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
index 8b34dfea954..c3e31bcba69 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
@@ -1512,11 +1512,12 @@ public class FSEditLog implements LogsPurgeable {
     if (!isOpenForWrite()) {
       return;
     }
-    
-    assert curSegmentTxId == HdfsServerConstants.INVALID_TXID || // on format this is no-op
-      minTxIdToKeep <= curSegmentTxId :
-      "cannot purge logs older than txid " + minTxIdToKeep +
-      " when current segment starts at " + curSegmentTxId;
+
+    Preconditions.checkArgument(
+        curSegmentTxId == HdfsServerConstants.INVALID_TXID || // on format this is no-op
+        minTxIdToKeep <= curSegmentTxId,
+        "cannot purge logs older than txid " + minTxIdToKeep +
+        " when current segment starts at " + curSegmentTxId);
     if (minTxIdToKeep == 0) {
       return;
     }

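The substance of the change: a bare assert is a no-op unless the JVM runs with -ea, so this guard was silently skipped in production, while Preconditions.checkArgument is always enforced. A minimal sketch of the difference with hypothetical txids (Guava package shown unshaded; Hadoop may shade it under org.apache.hadoop.thirdparty):

```java
import com.google.common.base.Preconditions;

public class PurgeGuardSketch {
  public static void main(String[] args) {
    long minTxIdToKeep = 200L, curSegmentTxId = 100L; // hypothetical txids

    // Skipped entirely unless the JVM was started with -ea:
    assert minTxIdToKeep <= curSegmentTxId : "silently ignored by default";

    // Always enforced; throws IllegalArgumentException with this message:
    Preconditions.checkArgument(minTxIdToKeep <= curSegmentTxId,
        "cannot purge logs older than txid " + minTxIdToKeep
            + " when current segment starts at " + curSegmentTxId);
  }
}
```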
