Posted to common-commits@hadoop.apache.org by we...@apache.org on 2019/10/04 00:38:48 UTC

[hadoop] branch branch-3.2 updated (4e223d9 -> 6facb3f)

This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


    from 4e223d9  HDFS-14113. EC : Add Configuration to restrict UserDefined Policies. Contributed by Ayush Saxena.
     new f14fb90  HDFS-14499. Misleading REM_QUOTA value with snapshot and trash feature enabled for a directory. Contributed by Shashikant Banerjee.
     new 21a89d5  HDFS-14624. When decommissioning a node, log remaining blocks to replicate periodically. Contributed by Stephen O'Donnell.
     new 6facb3f  HADOOP-12282. Connection thread's name should be updated after address changing is detected. Contributed by Lisheng Sun.

The 3 revisions listed above as "new" are entirely new to this
repository and are described in the separate emails below.  Revisions
listed as "add" would already have been present in the repository and
merely added to this reference; none appear in this push.


Summary of changes:
 .../main/java/org/apache/hadoop/ipc/Client.java    |  4 +++
 .../blockmanagement/DatanodeAdminManager.java      | 12 ++++----
 .../hdfs/server/namenode/INodeReference.java       | 17 +++++------
 .../TestGetContentSummaryWithSnapshot.java         | 33 ++++++++++++++++------
 4 files changed, 45 insertions(+), 21 deletions(-)


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[hadoop] 02/03: HDFS-14624. When decommissioning a node, log remaining blocks to replicate periodically. Contributed by Stephen O'Donnell.

Posted by we...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 21a89d544fd2b12505f3d1d65d37c37966665871
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Thu Jul 11 08:55:44 2019 -0700

    HDFS-14624. When decommissioning a node, log remaining blocks to replicate periodically. Contributed by Stephen O'Donnell.
    
    (cherry picked from commit 5747f6cff54f79de0e6439d6c77c2ed437989f10)
---
 .../hdfs/server/blockmanagement/DatanodeAdminManager.java    | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java
index 6710c39..f30066a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java
@@ -507,8 +507,10 @@ public class DatanodeAdminManager {
         namesystem.writeUnlock();
       }
       if (numBlocksChecked + numNodesChecked > 0) {
-        LOG.info("Checked {} blocks and {} nodes this tick", numBlocksChecked,
-            numNodesChecked);
+        LOG.info("Checked {} blocks and {} nodes this tick. {} nodes are now " +
+            "in maintenance or transitioning state. {} nodes pending.",
+            numBlocksChecked, numNodesChecked, outOfServiceNodeBlocks.size(),
+            pendingNodes.size());
       }
     }
 
@@ -599,14 +601,14 @@ public class DatanodeAdminManager {
               LOG.debug("Node {} is sufficiently replicated and healthy, "
                   + "marked as {}.", dn, dn.getAdminState());
             } else {
-              LOG.debug("Node {} {} healthy."
+              LOG.info("Node {} {} healthy."
                   + " It needs to replicate {} more blocks."
                   + " {} is still in progress.", dn,
                   isHealthy ? "is": "isn't", blocks.size(), dn.getAdminState());
             }
           } else {
-            LOG.debug("Node {} still has {} blocks to replicate "
-                + "before it is a candidate to finish {}.",
+            LOG.info("Node {} still has {} blocks to replicate "
+                    + "before it is a candidate to finish {}.",
                 dn, blocks.size(), dn.getAdminState());
           }
         } catch (Exception e) {
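
For readers without the full class at hand, here is a minimal, self-contained sketch of the logging pattern the patch adopts: a parameterized SLF4J INFO message emitted once per monitor tick, plus the per-node "blocks remaining" message promoted from DEBUG to INFO. The class, method, and parameter names below are hypothetical stand-ins, not the real DatanodeAdminManager.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DecommissionProgressLogger {
  private static final Logger LOG =
      LoggerFactory.getLogger(DecommissionProgressLogger.class);

  /** Per-tick summary, mirroring the INFO message added in the hunk above. */
  void logTick(int numBlocksChecked, int numNodesChecked,
      int nodesInMaintenanceOrTransition, int nodesPending) {
    if (numBlocksChecked + numNodesChecked > 0) {
      LOG.info("Checked {} blocks and {} nodes this tick. {} nodes are now "
          + "in maintenance or transitioning state. {} nodes pending.",
          numBlocksChecked, numNodesChecked, nodesInMaintenanceOrTransition,
          nodesPending);
    }
  }

  /** Remaining replication work for one node, now logged at INFO level. */
  void logRemainingBlocks(String node, int remainingBlocks, String adminState) {
    LOG.info("Node {} still has {} blocks to replicate "
        + "before it is a candidate to finish {}.",
        node, remainingBlocks, adminState);
  }
}

The practical effect is that operators can follow decommission progress at the default INFO level instead of enabling DEBUG logging for the whole block management package.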



[hadoop] 01/03: HDFS-14499. Misleading REM_QUOTA value with snapshot and trash feature enabled for a directory. Contributed by Shashikant Banerjee.

Posted by we...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f14fb9081ff2f3350902fb06dc91dea924353df7
Author: Shashikant Banerjee <sh...@apache.org>
AuthorDate: Fri Jul 12 15:41:34 2019 +0530

    HDFS-14499. Misleading REM_QUOTA value with snapshot and trash feature enabled for a directory. Contributed by Shashikant Banerjee.
    
    (cherry picked from commit f9fab9f22a53757f8081e8224e0d4b557fe6a0e2)
---
 .../hdfs/server/namenode/INodeReference.java       | 17 +++++------
 .../TestGetContentSummaryWithSnapshot.java         | 33 ++++++++++++++++------
 2 files changed, 34 insertions(+), 16 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
index e4e14f7..bc8dccf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
@@ -500,14 +500,15 @@ public abstract class INodeReference extends INode {
     
     @Override
     public final ContentSummaryComputationContext computeContentSummary(
-        int snapshotId, ContentSummaryComputationContext summary) {
-      final int s = snapshotId < lastSnapshotId ? snapshotId : lastSnapshotId;
-      // only count storagespace for WithName
-      final QuotaCounts q = computeQuotaUsage(
-          summary.getBlockStoragePolicySuite(), getStoragePolicyID(), false, s);
-      summary.getCounts().addContent(Content.DISKSPACE, q.getStorageSpace());
-      summary.getCounts().addTypeSpaces(q.getTypeSpaces());
-      return summary;
+        int snapshotId, ContentSummaryComputationContext summary)
+        throws AccessControlException {
+      Preconditions.checkState(snapshotId == Snapshot.CURRENT_STATE_ID
+          || this.lastSnapshotId >= snapshotId);
+      final INode referred =
+          this.getReferredINode().asReference().getReferredINode();
+      int id = snapshotId != Snapshot.CURRENT_STATE_ID ? snapshotId :
+          this.lastSnapshotId;
+      return referred.computeContentSummary(id, summary);
     }
 
     @Override
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestGetContentSummaryWithSnapshot.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestGetContentSummaryWithSnapshot.java
index 1c16818..9aadeb2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestGetContentSummaryWithSnapshot.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestGetContentSummaryWithSnapshot.java
@@ -90,18 +90,22 @@ public class TestGetContentSummaryWithSnapshot {
     final Path foo = new Path("/foo");
     final Path bar = new Path(foo, "bar");
     final Path baz = new Path(bar, "baz");
+    final Path qux = new Path(bar, "qux");
+    final Path temp = new Path("/temp");
 
     dfs.mkdirs(bar);
+    dfs.mkdirs(temp);
     dfs.allowSnapshot(foo);
     dfs.createSnapshot(foo, "s1");
 
     DFSTestUtil.createFile(dfs, baz, 10, REPLICATION, 0L);
+    DFSTestUtil.createFile(dfs, qux, 10, REPLICATION, 0L);
 
     ContentSummary summary = cluster.getNameNodeRpc().getContentSummary(
         bar.toString());
     Assert.assertEquals(1, summary.getDirectoryCount());
-    Assert.assertEquals(1, summary.getFileCount());
-    Assert.assertEquals(10, summary.getLength());
+    Assert.assertEquals(2, summary.getFileCount());
+    Assert.assertEquals(20, summary.getLength());
 
     final Path barS1 = SnapshotTestHelper.getSnapshotPath(foo, "s1", "bar");
     summary = cluster.getNameNodeRpc().getContentSummary(barS1.toString());
@@ -112,8 +116,8 @@ public class TestGetContentSummaryWithSnapshot {
     // also check /foo and /foo/.snapshot/s1
     summary = cluster.getNameNodeRpc().getContentSummary(foo.toString());
     Assert.assertEquals(2, summary.getDirectoryCount());
-    Assert.assertEquals(1, summary.getFileCount());
-    Assert.assertEquals(10, summary.getLength());
+    Assert.assertEquals(2, summary.getFileCount());
+    Assert.assertEquals(20, summary.getLength());
 
     final Path fooS1 = SnapshotTestHelper.getSnapshotRoot(foo, "s1");
     summary = cluster.getNameNodeRpc().getContentSummary(fooS1.toString());
@@ -127,14 +131,14 @@ public class TestGetContentSummaryWithSnapshot {
     summary = cluster.getNameNodeRpc().getContentSummary(
         bar.toString());
     Assert.assertEquals(1, summary.getDirectoryCount());
-    Assert.assertEquals(1, summary.getFileCount());
-    Assert.assertEquals(20, summary.getLength());
+    Assert.assertEquals(2, summary.getFileCount());
+    Assert.assertEquals(30, summary.getLength());
 
     final Path fooS2 = SnapshotTestHelper.getSnapshotRoot(foo, "s2");
     summary = cluster.getNameNodeRpc().getContentSummary(fooS2.toString());
     Assert.assertEquals(2, summary.getDirectoryCount());
-    Assert.assertEquals(1, summary.getFileCount());
-    Assert.assertEquals(10, summary.getLength());
+    Assert.assertEquals(2, summary.getFileCount());
+    Assert.assertEquals(20, summary.getLength());
 
     cluster.getNameNodeRpc().delete(baz.toString(), false);
 
@@ -143,11 +147,24 @@ public class TestGetContentSummaryWithSnapshot {
     Assert.assertEquals(0, summary.getSnapshotDirectoryCount());
     Assert.assertEquals(1, summary.getSnapshotFileCount());
     Assert.assertEquals(20, summary.getSnapshotLength());
+    Assert.assertEquals(2, summary.getDirectoryCount());
+    Assert.assertEquals(2, summary.getFileCount());
+    Assert.assertEquals(30, summary.getLength());
 
     final Path bazS1 = SnapshotTestHelper.getSnapshotPath(foo, "s1", "bar/baz");
     try {
       cluster.getNameNodeRpc().getContentSummary(bazS1.toString());
       Assert.fail("should get FileNotFoundException");
     } catch (FileNotFoundException ignored) {}
+    cluster.getNameNodeRpc().rename(qux.toString(), "/temp/qux");
+    summary = cluster.getNameNodeRpc().getContentSummary(
+        foo.toString());
+    Assert.assertEquals(0, summary.getSnapshotDirectoryCount());
+    Assert.assertEquals(2, summary.getSnapshotFileCount());
+    Assert.assertEquals(30, summary.getSnapshotLength());
+    Assert.assertEquals(2, summary.getDirectoryCount());
+    Assert.assertEquals(2, summary.getFileCount());
+    Assert.assertEquals(30, summary.getLength());
+
   }
 }
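
The heart of the change is how WithName.computeContentSummary now picks an effective snapshot id and delegates to the inode it refers to, instead of only adding up storage space locally. The sketch below is a self-contained approximation of that selection step; the class name and the CURRENT_STATE_ID constant are stand-ins for the HDFS originals (CURRENT_STATE_ID is assumed to be a sentinel meaning "query the current, non-snapshot state").

public final class EffectiveSnapshotId {
  /** Stand-in for HDFS's Snapshot.CURRENT_STATE_ID sentinel. */
  static final int CURRENT_STATE_ID = Integer.MAX_VALUE - 1;

  /**
   * Pick the snapshot id to delegate with: a concrete requested snapshot
   * wins; otherwise fall back to the reference's last snapshot id, which
   * bounds what this WithName reference is allowed to see.
   */
  static int choose(int requestedId, int lastSnapshotId) {
    // Mirrors the Preconditions.checkState added in the patch.
    if (requestedId != CURRENT_STATE_ID && lastSnapshotId < requestedId) {
      throw new IllegalStateException(
          "requested snapshot " + requestedId
              + " is newer than lastSnapshotId " + lastSnapshotId);
    }
    return requestedId != CURRENT_STATE_ID ? requestedId : lastSnapshotId;
  }

  public static void main(String[] args) {
    System.out.println(choose(3, 5));                 // concrete snapshot: 3
    System.out.println(choose(CURRENT_STATE_ID, 5));  // current state: falls back to 5
  }
}

Delegating to the referred inode with that id is what keeps the directory, file, and length counts consistent with the expanded assertions in the test above, including after qux is renamed out of the snapshottable directory.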



[hadoop] 03/03: HADOOP-12282. Connection thread's name should be updated after address changing is detected. Contributed by Lisheng Sun.

Posted by we...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 6facb3f7c6d0017141c8e5d82c9e1187a81a1dd2
Author: Wei-Chiu Chuang <we...@apache.org>
AuthorDate: Thu Aug 1 15:50:43 2019 -0700

    HADOOP-12282. Connection thread's name should be updated after address changing is detected. Contributed by Lisheng Sun.
    
    (cherry picked from commit b94eba9f11af66b10638dd255c224e946d842b8c)
---
 .../hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java     | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index 4ea1f419..d013f76 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -644,6 +644,10 @@ public class Client implements AutoCloseable {
         LOG.warn("Address change detected. Old: " + server.toString() +
                                  " New: " + currentAddr.toString());
         server = currentAddr;
+        UserGroupInformation ticket = remoteId.getTicket();
+        this.setName("IPC Client (" + socketFactory.hashCode()
+            + ") connection to " + server.toString() + " from "
+            + ((ticket == null) ? "an unknown user" : ticket.getUserName()));
         return true;
       }
       return false;
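
Since the IPC Connection object is itself a Thread, the fix amounts to recomputing the thread name once the new address is known and reapplying it with setName. The fragment below is a hypothetical, stripped-down illustration of that pattern, not org.apache.hadoop.ipc.Client; the class and field names are invented.

import java.net.InetSocketAddress;

public class RenamingConnection extends Thread {
  private InetSocketAddress server;
  private final String user;   // null if the caller's identity is unknown

  RenamingConnection(InetSocketAddress server, String user) {
    this.server = server;
    this.user = user;
    setName(connectionThreadName());
  }

  private String connectionThreadName() {
    return "IPC Client connection to " + server + " from "
        + (user == null ? "an unknown user" : user);
  }

  /** Called when re-resolving the peer reveals a changed address. */
  synchronized boolean updateAddress(InetSocketAddress currentAddr) {
    if (currentAddr != null && !currentAddr.equals(server)) {
      server = currentAddr;
      setName(connectionThreadName());  // keep thread dumps and logs in sync
      return true;
    }
    return false;
  }
}

With the rename in place, thread dumps and subsequent log lines point at the address the connection is actually using rather than the stale one it was created with.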


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org